The second annual Index, which ranks 22 leading language models, lists Anthropic's Claude 3.5 Sonnet as the best-performing model across all tasks.

SAN FRANCISCO, July 29, 2024 /PRNewswire/ -- Galileo, a leader in developing generative AI for the enterprise, today announced the launch of its latest Hallucination Index, a Retrieval Augmented Generation (RAG)-focused evaluation framework, which ranks the performance of 22 leading Generative AI (Gen AI) large language models (LLMs) from brands like OpenAI, Anthropic, Google, and Meta.


This year's Index added 11 models to the framework, reflecting the rapid growth of both open- and closed-source LLMs over just the past eight months. As brands race to create bigger, faster, and more accurate models, hallucinations remain the main hurdle to deploying production-ready Gen AI products.

Which LLMs Performed Best
The Index tests open- and closed-source models using Galileo's proprietary evaluation metric, context adherence, designed to check outputs for inaccuracies and to help enterprises make informed decisions about balancing price and performance. Models were tested with inputs ranging from 1,000 to 100,000 tokens to gauge performance across short (less than 5k tokens), medium (5k to 25k tokens), and long (40k to 100k tokens) context lengths.
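
For illustration, the context-length buckets above can be reproduced in a few lines of Python. This is a minimal sketch, not Galileo's test harness: the bucket boundaries come straight from the Index description, while the tokenizer choice (tiktoken's cl100k_base encoding) is an assumption.

  # Minimal sketch of the Index's context-length buckets (illustrative only).
  # Boundaries come from the Index description; the tokenizer is an assumption.
  import tiktoken

  enc = tiktoken.get_encoding("cl100k_base")

  def context_bucket(text: str) -> str:
      """Classify an input into the short/medium/long context buckets."""
      n = len(enc.encode(text))
      if n < 5_000:
          return "short"
      if n <= 25_000:
          return "medium"
      if 40_000 <= n <= 100_000:
          return "long"
      return "unbucketed"  # 25k-40k falls between the reported ranges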

  • Best Overall Performing Model: Anthropic's Claude 3.5 Sonnet. The closed-source model outpaced competitors across short, medium, and long context scenarios. Claude 3.5 Sonnet and Claude 3 Opus consistently posted near-perfect scores across categories, beating out last year's winners, GPT-4o and GPT-3.5, especially in shorter context scenarios.
  • Best Performing Model on Cost: Google's Gemini 1.5 Flash. The Google model delivered the best performance for the price, with strong results across all tasks.
  • Best Open-Source Model: Alibaba's Qwen2-72B-Instruct. The open-source model led its category, posting top scores in the short and medium context scenarios.

"In today's rapidly evolving AI landscape, developers and enterprises face a critical challenge: how to harness the power of generative AI while balancing cost, accuracy, and reliability. Current benchmarks are often based on academic use-cases, rather than real-world applications. Our new Index seeks to address this by testing models in real-world use cases that require the LLMs to retrieve data, a common practice in enterprise AI implementations," says Vikram Chatterji, CEO and Co-founder of Galileo. "As hallucinations continue to be a major hurdle, our goal wasn't to just rank models, but rather give AI teams and leaders the real-world data they need to adopt the right model, for the right task, at the right price."

Key Findings and Trends:

  • Open-Source Closing the Gap: Closed-source models like Claude 3.5 Sonnet and Gemini 1.5 Flash remain the top performers thanks to proprietary training data, but open-source models such as Qwen1.5-32B-Chat and Llama-3-70b-chat are rapidly closing the gap, with improved hallucination performance and lower cost barriers than their closed-source counterparts.
  • Overall Improvements with Long Context Lengths: Current RAG LLMs like Claude 3.5 Sonnet, Claude 3 Opus, and Gemini 1.5 Pro 001 perform particularly well with extended context lengths, without losing quality or accuracy, reflecting the progress being made in both model training and architecture.
  • Large Models Are Not Always Better: In certain cases, smaller models outperform larger ones. For example, Gemini 1.5 Flash 001 outperformed larger models, suggesting that efficiency in model design can sometimes outweigh scale.
  • From National to Global Focus: LLMs from outside the U.S., such as Mistral's Mistral-large and Alibaba's Qwen2-72B-Instruct, are emerging players in the space and continue to grow in popularity, representing the global push to create effective language models.
  • Room for Improvement: While Google's open-source Gemma-7b performed the worst, its closed-source Gemini 1.5 Flash consistently landed near the top.

See a complete breakdown of Galileo's Hallucination Index results here.

About Galileo's Context Adherence Evaluation Model

Context Adherence uses ChainPoll, a proprietary method created by Galileo Labs, to measure how closely a model's output adheres to the information it is given, helping spot when a model invents information that is not in the source text.
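
Based only on the public description of ChainPoll (poll an LLM judge several times with a chain-of-thought prompt and aggregate the verdicts), the core idea can be sketched as follows. The prompt wording, judge model, and OpenAI client usage here are illustrative assumptions, not Galileo's implementation.

  # Rough sketch of a ChainPoll-style adherence score (illustrative only).
  # Polls an LLM judge several times and averages its YES/NO verdicts.
  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  JUDGE_PROMPT = (
      "Context:\n{context}\n\nAnswer:\n{answer}\n\n"
      "Think step by step: is every claim in the answer supported by the "
      "context? End your reply with a single word: YES or NO."
  )

  def chainpoll_adherence(context: str, answer: str, polls: int = 5) -> float:
      """Return the fraction of judge runs that find the answer supported."""
      votes = 0
      for _ in range(polls):
          reply = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative judge model
              messages=[{"role": "user",
                         "content": JUDGE_PROMPT.format(context=context,
                                                        answer=answer)}],
              temperature=1.0,  # sampling variation makes polling informative
          )
          text = reply.choices[0].message.content or ""
          votes += text.strip().upper().endswith("YES")
      return votes / polls

Averaging several sampled chain-of-thought judgments tends to be more stable than a single yes/no call, which is the intuition behind polling-style evaluators.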

About Galileo 

San Francisco-based Galileo is the leading platform for enterprise Gen AI evaluation and observability. The Galileo platform, powered by Luna Evaluation Foundation Models (EFMs), supports AI teams across the development lifecycle, from building and iterating to monitoring and protecting. Galileo is used by AI teams from startups to Fortune 100 companies. Visit rungalileo.io to learn more about the Galileo suite of products.

View original content to download multimedia: https://www.prnewswire.com/news-releases/galileo-releases-new-hallucination-index-revealing-growing-intensity-in-llm-arms-race-302208202.html

SOURCE Galileo
