Mar 14, 2025
Symbolic uses multiple frontier AI models to deliver outstanding performance and to diversify across platforms. How do we decide which model to use, and when? What process does Symbolic follow to evaluate and deploy models, and what can our customers count on?
Priority 1: Security and Privacy
Our model selection process starts with a set of essential criteria that a provider must meet before we will consider deploying its models. First among these are security and privacy.
Symbolic is used for highly sensitive and confidential research and writing. Users must trust that their content and data are secure and that their intellectual property is fully protected. Symbolic will not use any model unless: a) no customer data or content is used to train models, b) any storage of customer data or content is done solely for that customer's own benefit, and c) stringent security measures and protocols are in place to ensure maximum protection.
Of course, this commitment extends beyond the AI models we deploy to the entire Symbolic platform; the same stringent criteria apply to Symbolic's own operations. We are actively pursuing SOC 2 certification and expect to share further updates in the near future.
Performance and Benchmarks
Symbolic is used for high-stakes professional communication, where the performance benchmarks that matter are distinct from those of other generative AI applications. To evaluate models, we have developed our own suite of more than 20 benchmark tasks that assess capabilities such as synthesizing insights from extensive research documents, interpreting financial data, mirroring multiple exemplar voices and formats, and correlating research with writing, as well as response speed.
To facilitate our model selection process, Symbolic uses a proprietary internal tool that runs side-by-side model comparisons by executing a common task across multiple models. We have learned that no single model excels across all of these benchmarks. Surprisingly, some of the highest-profile model releases that supposedly "outperformed" their competitors were entirely ineffective for the tasks that matter to our customers. As a result, Symbolic employs “smart routing,” matching each customer request or action to the model best able to execute it.
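To make the idea concrete, here is a minimal sketch of what smart routing can look like. The task categories, model identifiers, and routing table below are illustrative placeholders rather than Symbolic's actual configuration; in practice, the table would be derived from benchmark results.

```python
from dataclasses import dataclass


@dataclass
class Request:
    task: str      # e.g. "research_synthesis", "financial_analysis"
    prompt: str


# Hypothetical routing table: each task category maps to the model that
# performed best on the corresponding benchmark tasks.
ROUTING_TABLE = {
    "research_synthesis": "provider_a/model-large",
    "financial_analysis": "provider_b/model-reasoning",
    "voice_matching": "provider_c/model-fast",
}
DEFAULT_MODEL = "provider_a/model-large"


def route(request: Request) -> str:
    """Pick the model best suited to the request's task category."""
    return ROUTING_TABLE.get(request.task, DEFAULT_MODEL)


print(route(Request(task="financial_analysis", prompt="Summarize Q3 margins.")))
# -> provider_b/model-reasoning
```

The point is not the specific table but the design choice: routing decisions live in data that can be updated as new benchmark results arrive, without changing application code.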
Cost
In evaluating models, cost is not our primary focus. We believe that the quality and efficiency our customers gain from using Symbolic outweigh small differences in token and model-related expenses. Nevertheless, there are instances where new models or versions prove financially impractical. Our aim is to ensure economic stability by incorporating token fees within our licensing agreements and by avoiding the deployment of any model that could lead to significant unforeseen costs. Should we ever conclude that the advantages of a costly model justify its expense, we would notify customers beforehand and give them the option to opt out.
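As a rough illustration of the kind of cost check this implies, the sketch below estimates per-request token cost against a budget threshold. The per-token prices, model names, and threshold are made-up placeholders, not Symbolic's actual rates.

```python
# Hypothetical per-million-token prices (USD); real rates vary by provider.
PRICE_PER_MILLION = {
    "provider_a/model-large": {"input": 3.00, "output": 15.00},
    "provider_c/model-fast": {"input": 0.25, "output": 1.25},
}

BUDGET_PER_REQUEST = 0.05  # illustrative ceiling in USD


def estimated_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one request for the given model."""
    rates = PRICE_PER_MILLION[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000


# Flag any model whose typical request would exceed the budget.
for model in PRICE_PER_MILLION:
    cost = estimated_cost(model, input_tokens=12_000, output_tokens=2_000)
    flag = " (over budget)" if cost > BUDGET_PER_REQUEST else ""
    print(f"{model}: ${cost:.4f}{flag}")
```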
Diversification
In addition to these considerations, it is crucial for enterprises building AI-driven critical processes to diversify their model dependencies. This strategy not only provides basic redundancy (underscored by recent high-profile outages at OpenAI and Anthropic) but also offers flexibility and choice amid potential misalignments between model providers and publishers. Symbolic's allegiance lies firmly with our users; we will continually assess models, support our customers, and keep them informed, empowering them with greater control over their AI outcomes.
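Below is a minimal sketch of what provider diversification can look like in code: if the primary model's provider is unavailable, fall back to a comparable model from another provider. The client stub, model names, and fallback chain are hypothetical assumptions; a real implementation would call each provider's SDK.

```python
class ProviderError(Exception):
    """Raised when a provider is unreachable or returns an error."""


def call_model(model: str, prompt: str) -> str:
    # Stand-in for a real provider API call. Here the primary provider is
    # simulated as "down" to exercise the fallback path.
    if model.startswith("provider_a/"):
        raise ProviderError(f"{model} is unavailable")
    return f"[{model}] response to: {prompt}"


# Comparable models from different providers, in order of preference.
FALLBACK_CHAIN = [
    "provider_a/model-large",
    "provider_b/model-large",
    "provider_c/model-fast",
]


def complete_with_fallback(prompt: str) -> str:
    """Try each provider in turn until one succeeds."""
    last_error = None
    for model in FALLBACK_CHAIN:
        try:
            return call_model(model, prompt)
        except ProviderError as err:
            last_error = err  # note the failure and try the next provider
    raise RuntimeError("All providers failed") from last_error


print(complete_with_fallback("Draft an executive summary."))
# -> [provider_b/model-large] response to: Draft an executive summary.
```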
