<div align="center"> <h1>Awesome Foundation Model Leaderboard</h1> <a href="https://awesome.re"> <img src="https://awesome.re/badge.svg" height="20"/> </a> <a href="https://github.com/SAILResearch/awesome-foundation-model-leaderboards/fork"> <img src="https://img.shields.io/badge/PRs-Welcome-red" height="20"/> </a> <a href="https://arxiv.org/abs/2407.04065"> <img src="https://img.shields.io/badge/📃-Arxiv-b31b1b" height="20"/> </a> </div>

Awesome Foundation Model Leaderboard is a curated list of awesome foundation model leaderboards (for an explanation of what a leaderboard is, please refer to this post), along with various development tools and evaluation organizations according to our survey:

<p align="center"><strong>On the Workflows and Smells of Leaderboard Operations (LBOps):<br>An Exploratory Study of Foundation Model Leaderboards</strong></p> <p align="center"><a href="https://github.com/zhimin-z">Zhimin (Jimmy) Zhao</a>, <a href="https://abdulali.github.io">Abdul Ali Bangash</a>, <a href="https://www.filipecogo.pro">Filipe Roseiro Côgo</a>, <a href="https://mcis.cs.queensu.ca/bram.html">Bram Adams</a>, <a href="https://research.cs.queensu.ca/home/ahmed">Ahmed E. Hassan</a></p> <p align="center"><a href="https://sail.cs.queensu.ca">Software Analysis and Intelligence Lab (SAIL)</a></p>

If you find this repository useful, please consider giving us a star :star: and citation:

@article{zhao2024workflows,
  title={On the Workflows and Smells of Leaderboard Operations (LBOps): An Exploratory Study of Foundation Model Leaderboards},
  author={Zhao, Zhimin and Bangash, Abdul Ali and C{\^o}go, Filipe Roseiro and Adams, Bram and Hassan, Ahmed E},
  journal={arXiv preprint arXiv:2407.04065},
  year={2024}
}

Additionally, we provide a search toolkit that helps you quickly navigate through the leaderboards.

If you want to contribute to this list (please do), you are welcome to propose a pull request.

If you have any suggestions, critiques, or questions regarding this list, you are welcome to raise an issue.

Also, a leaderboard should be included only if:

  • It is actively maintained.
  • It is related to foundation models.

Table of Contents

Tools

| Name | Description |
| --- | --- |
| gradio_leaderboard | gradio_leaderboard helps users build fully functional and performant leaderboard demos with Gradio (see the sketch after this table). |
| Demo leaderboard | Demo leaderboard helps users easily deploy their leaderboards with a standardized template. |
| Leaderboard Explorer | Leaderboard Explorer helps users navigate the diverse range of leaderboards available on Hugging Face Spaces. |
| open_llm_leaderboard | open_llm_leaderboard helps users access Open LLM Leaderboard data easily (see the data-access sketch after this table). |
| open-llm-leaderboard-renamer | open-llm-leaderboard-renamer helps users rename their models in Open LLM Leaderboard easily. |
| Open LLM Leaderboard Results PR Opener | Open LLM Leaderboard Results PR Opener helps users showcase Open LLM Leaderboard results in their model cards. |
| Open LLM Leaderboard Scraper | Open LLM Leaderboard Scraper helps users scrape and export data from Open LLM Leaderboard. |
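
Below is a minimal sketch of what a gradio_leaderboard demo could look like. The scores and column names are illustrative, and the `search_columns` argument is assumed from the component's documented options; check the gradio_leaderboard project for the full constructor signature.

```python
# Hedged sketch of a leaderboard demo built with gradio_leaderboard.
# The results below are made up; a real leaderboard would load evaluation
# results from a dataset or results repository.
import gradio as gr
import pandas as pd
from gradio_leaderboard import Leaderboard

results = pd.DataFrame(
    {
        "Model": ["model-a", "model-b", "model-c"],
        "Average": [71.2, 68.5, 64.9],
        "MMLU": [70.1, 66.3, 62.0],
    }
)

with gr.Blocks() as demo:
    # Leaderboard renders the DataFrame as a sortable, searchable table;
    # search_columns (assumed option) restricts which columns are searched.
    Leaderboard(value=results, search_columns=["Model"])

if __name__ == "__main__":
    demo.launch()
```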

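For programmatic access to Open LLM Leaderboard results (the use case behind open_llm_leaderboard and Open LLM Leaderboard Scraper above), one option is to pull the published results from the Hugging Face Hub with the `datasets` library. The dataset id and column name below are assumptions and should be replaced with whatever the leaderboard currently publishes.

```python
# Hedged sketch: load Open LLM Leaderboard results from the Hugging Face Hub.
# "open-llm-leaderboard/contents" and the "Average ⬆️" column are assumptions;
# substitute the results dataset and score column the leaderboard publishes.
from datasets import load_dataset

results = load_dataset("open-llm-leaderboard/contents", split="train")
df = results.to_pandas()

# Sort by the aggregate score column if it is present.
if "Average ⬆️" in df.columns:
    df = df.sort_values("Average ⬆️", ascending=False)

print(df.head())
```
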
Organizations

| Name | Description |
| --- | --- |
| Allen Institute for AI | Allen Institute for AI is a non-profit research institute with the mission of conducting high-impact AI research and engineering in service of the common good. |
| Papers With Code | Papers With Code is a community-driven platform for learning about state-of-the-art research papers on machine learning. |

Evaluations

Model-oriented

Comprehensive

| Name | Description |
| --- | --- |
| CompassRank | CompassRank is a platform that offers a comprehensive, objective, and neutral evaluation reference of foundation models for industry and research. |
| FlagEval | FlagEval is a comprehensive platform for evaluating foundation models. |
| GenAI-Arena | GenAI-Arena hosts the visual generation arena, where various vision models compete based on their performance in image generation, image editing, and video generation. |
| Holistic Evaluation of Language Models | Holistic Evaluation of Language Models (HELM) is a reproducible and transparent framework for evaluating foundation models. |
| nuScenes | nuScenes enables researchers to study challenging urban driving situations using the full sensor suite of a real self-driving car. |
| SuperCLUE | SuperCLUE is a series of benchmarks for evaluating Chinese foundation models. |

Text

| Name | Description |
| --- | --- |
| ACLUE | ACLUE is an evaluation benchmark for ancient Chinese language comprehension. |
| AIR-Bench | AIR-Bench is a benchmark to evaluate the heterogeneous information retrieval capabilities of language models. |
| AlignBench | AlignBench is a multi-dimensional benchmark for evaluating LLMs' alignment in Chinese. |
| AlpacaEval | AlpacaEval is an automatic evaluator designed for instruction-following LLMs. |
| ANGO | ANGO is a generation-oriented Chinese language model evaluation benchmark. |
| Arabic Tokenizers Leaderboard | Arabic Tokenizers Leaderboard compares the efficiency of LLMs in parsing Arabic in its different dialects and forms. |
| Arena-Hard-Auto | Arena-Hard-Auto is a benchmark for instruction-tuned LLMs. |
| Auto-Arena | Auto-Arena is a benchmark in which various language model agents engage in peer battles to evaluate their performance. |
| BeHonest | BeHonest is a benchmark to evaluate honesty in LLMs: awareness of knowledge boundaries (self-knowledge), avoidance of deceit (non-deceptiveness), and consistency in responses (consistency). |
| BenBench | BenBench is a benchmark to evaluate the extent to which LLMs conduct verbatim training on the training set of a benchmark over the test set to enhance capabilities. |
| BiGGen-Bench | BiGGen-Bench is a comprehensive benchmark to evaluate LLMs across a wide variety of tasks. |
| Biomedical Knowledge Probing Leaderboard | Biomedical Knowledge Probing Leaderboard aims to track, rank, and evaluate biomedical factual knowledge probing results in LLMs. |
| BotChat | BotChat assesses the multi-round chatting capabilities of LLMs through a proxy task, evaluating whether two chatbot instances can engage in smooth and fluent conversation with each other. |
| C-Eval | C-Eval is a Chinese evaluation suite for LLMs. |
| C-Eval Hard | C-Eval Hard is a more challenging version of C-Eval, which involves complex LaTeX equations and requires non-trivial reasoning abilities to solve. |
| Capability leaderboard | Capability leaderboard is a platform to evaluate the long-context understanding capabilities of LLMs. |
| Chain-of-Thought Hub | Chain-of-Thought Hub is a benchmark to evaluate the reasoning capabilities of LLMs. |
| ChineseFactEval | ChineseFactEval is a factuality benchmark for Chinese LLMs. |
| CLEM | CLEM is a framework designed for the systematic evaluation of chat-optimized LLMs as conversational agents. |
| CLiB | CLiB is a benchmark to evaluate Chinese LLMs. |
| CMB | CMB is a multi-level medical benchmark in Chinese. |
| CMMLU | CMMLU is a benchmark to evaluate LLMs' knowledge and reasoning capabilities across various subjects within the Chinese cultural context. |
| CMMMU | CMMMU is a benchmark to test the capabilities of multimodal models in understanding and reasoning across multiple disciplines in the Chinese context. |
| CompMix | CompMix is a benchmark for heterogeneous question answering. |
| Compression Leaderboard | Compression Leaderboard is a platform to evaluate the compression performance of LLMs. |
| CoTaEval | CoTaEval is a benchmark to evaluate the feasibility and side effects of copyright takedown methods for LLMs. |
| ConvRe | ConvRe is a benchmark to evaluate LLMs' ability to comprehend converse relations. |
| CriticBench | CriticBench is a benchmark to evaluate LLMs' ability to make critique responses. |
| CRM LLM Leaderboard | CRM LLM Leaderboard is a platform to evaluate the efficacy of LLMs for business applications. |
| DecodingTrust | DecodingTrust is an assessment platform to evaluate the trustworthiness of LLMs. |
| Domain LLM Leaderboard | Domain LLM Leaderboard is a platform to evaluate the popularity of domain-specific LLMs. |
| DyVal | DyVal is a dynamic evaluation protocol for LLMs. |
| Enterprise Scenarios leaderboard | Enterprise Scenarios Leaderboard aims to assess the performance of LLMs on real-world enterprise use cases. |
| EQ-Bench | EQ-Bench is a benchmark to evaluate aspects of emotional intelligence in LLMs. |
| Factuality Leaderboard | Factuality Leaderboard compares the factual capabilities of LLMs. |
| FuseReviews | FuseReviews aims to advance grounded text-generation tasks, including long-form question answering and summarization. |
| FELM | FELM is a meta-benchmark to evaluate factuality evaluation for LLMs. |
| GAIA | GAIA aims to test fundamental abilities that an AI assistant should possess. |
| GPT-Fathom | GPT-Fathom is an LLM evaluation suite, benchmarking 10+ leading LLMs as well as OpenAI's legacy models on 20+ curated benchmarks across 7 capability categories, all under aligned settings. |
| Guerra LLM AI Leaderboard | Guerra LLM AI Leaderboard compares and ranks the performance of LLMs across quality, price, performance, context window, and others. |
| Hallucinations Leaderboard | Hallucinations Leaderboard aims to track, rank, and evaluate hallucinations in LLMs. |
| HalluQA | HalluQA is a benchmark to evaluate the phenomenon of hallucinations in Chinese LLMs. |
| HellaSwag | HellaSwag is a benchmark to evaluate common-sense reasoning in LLMs. |
| HHEM Leaderboard | HHEM Leaderboard evaluates how often a language model introduces hallucinations when summarizing a document. |
| IFEval | IFEval is a benchmark to evaluate LLMs' instruction-following capabilities with verifiable instructions. |
| Indic LLM Leaderboard | Indic LLM Leaderboard is a benchmark to track progress and rank the performance of Indic LLMs. |
| InstructEval | InstructEval is an evaluation suite to assess instruction selection methods in the context of LLMs. |
| Japanese Chatbot Arena | Japanese Chatbot Arena hosts the chatbot arena, where various LLMs compete based on their performance in Japanese. |
| JustEval | JustEval is a powerful tool designed for fine-grained evaluation of LLMs. |
| Ko Chatbot Arena | Ko Chatbot Arena hosts the chatbot arena, where various LLMs compete based on their performance in Korean. |
| KoLA | KoLA is a benchmark to evaluate the world knowledge of LLMs. |
| L-Eval | L-Eval is a Long Context Language Model (LCLM) evaluation benchmark to evaluate the performance of handling extensive context. |
| Language Model Council | Language Model Council (LMC) is a benchmark to evaluate tasks that are highly subjective and often lack majoritarian human agreement. |
| LawBench | LawBench is a benchmark to evaluate the legal capabilities of LLMs. |
