Many companies are competing in the AI market, trying to make their products better than the rest of the industry. And since almost every company builds on similar foundations, every difference in results matters. That is where YottaAnswers comes to the rescue, with double-digit improvements.
Because every company that has integrated LLMs into its products uses some combination of OpenAI, Meta, or Mistral models, there is not much room to differentiate their LLM products from those of their competitors.
If you want to make your product better, use YottaAnswers Sources to improve results, especially if you use LLMs as fact generators. The most precise way to show the improvement that adding our Sources brings to your LLMs is exact question answering.
At YottaAnswers, we conducted an experiment in which we asked LLMs a multitude of questions that have short, exact answers, first using only the innate knowledge of the model, and then supplying YottaAnswers Sources as additional input.
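The two experimental conditions above can be sketched as a difference in prompt construction. This is a hypothetical illustration, not our actual harness: the prompt wording and the `build_prompt` helper are stand-ins for how a retrieved Source might be prepended as grounding context.

```python
from typing import Optional

def build_prompt(question: str, source: Optional[str] = None) -> str:
    """Compose the prompt for one experiment condition (illustrative only)."""
    if source is None:
        # Baseline condition: the model answers from its innate knowledge alone.
        return f"Answer with the short exact answer only.\nQuestion: {question}"
    # Augmented condition: the retrieved Source is prepended as context.
    return (
        "Use the context below to answer with the short exact answer only.\n"
        f"Context: {source}\n"
        f"Question: {question}"
    )

question = "In what year was the Eiffel Tower completed?"
print(build_prompt(question))  # baseline prompt, no Source
print(build_prompt(question, source="The Eiffel Tower was completed in 1889."))
```

Everything else in the pipeline (the model call, answer extraction, scoring) stays identical between the two runs, so any accuracy difference is attributable to the added Source.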
For the experiment, we used a range of the most popular models (open source and proprietary), as they represent the majority of the industry. The results can be seen in the table below:
Model | Base Accuracy | With Source Accuracy | Difference |
--- | --- | --- | --- |
Phi 2 | 34% | 62% | +81% |
Mistral 7b | 40% | 62% | +56% |
Llama 2 7b | 43% | 62% | +44% |
Llama 2 13b | 43% | 64% | +50% |
GPT-3.5 | 60% | 69% | +15% |
GPT-4 | 60% | 68% | +12% |
As the table shows, the additional information provided by YottaAnswers Sources dramatically improved the accuracy of all models (every improvement is double digit), even for state-of-the-art models (GPT-4's results improved by 12%). Additionally, every open source model in the experiment, when given YottaAnswers Sources, achieves better results than base GPT-4 (a 3% to 7% relative improvement), which can save a lot of money if you move from OpenAI's proprietary models to smaller open source models augmented with YottaAnswers Sources.
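For clarity on how to read the Difference column: it reports relative gain over the base accuracy, not a raw percentage-point change (small discrepancies in the table likely come from rounding the underlying unrounded accuracies). A minimal sketch:

```python
def relative_gain(base: float, with_source: float) -> float:
    """Relative accuracy improvement, as a percentage of the base score."""
    return (with_source - base) / base * 100

# e.g. GPT-3.5 went from 60% to 69% accuracy:
print(round(relative_gain(60, 69)))  # → 15
```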
Our Sources bring additional advantages that were not part of the experiment: faster informational updates (we update our system with new information every 3-4 months), and transparency, since our Sources are discrete and you can see exactly what information will be used to augment your LLM's results.
We are happy to share these results, and we hope you find them insightful and that they help you make your product better. Stay tuned for more great updates.