A Comprehensive Collection of Foundation Model Evaluation Leaderboards and Tools
This project collects a diverse range of foundation model evaluation leaderboards, development tools, and evaluation organizations. It covers model evaluation across text, image, code, math, and other domains, and includes solution-oriented and data-oriented evaluations as well. A leaderboard search feature is provided for quick lookup. This resource helps researchers and developers compare and analyze the performance of different foundation models.
Awesome Foundation Model Leaderboard is a curated list of awesome foundation model leaderboards (for an explanation of what a leaderboard is, please refer to this post), along with various development tools and evaluation organizations according to our survey:
<p align="center"><strong>On the Workflows and Smells of Leaderboard Operations (LBOps):<br>An Exploratory Study of Foundation Model Leaderboards</strong></p> <p align="center"><a href="https://github.com/zhimin-z">Zhimin (Jimmy) Zhao</a>, <a href="https://abdulali.github.io">Abdul Ali Bangash</a>, <a href="https://www.filipecogo.pro">Filipe Roseiro Côgo</a>, <a href="https://mcis.cs.queensu.ca/bram.html">Bram Adams</a>, <a href="https://research.cs.queensu.ca/home/ahmed">Ahmed E. Hassan</a></p> <p align="center"><a href="https://sail.cs.queensu.ca">Software Analysis and Intelligence Lab (SAIL)</a></p>

If you find this repository useful, please consider giving us a star :star: and citation:
@article{zhao2024workflows,
title={On the Workflows and Smells of Leaderboard Operations (LBOps): An Exploratory Study of Foundation Model Leaderboards},
author={Zhao, Zhimin and Bangash, Abdul Ali and C{\^o}go, Filipe Roseiro and Adams, Bram and Hassan, Ahmed E},
journal={arXiv preprint arXiv:2407.04065},
year={2024}
}
Additionally, we provide a search toolkit that helps you quickly navigate through the leaderboards.
If you want to contribute to this list (please do), feel free to open a pull request.
If you have any suggestions, critiques, or questions regarding this list, feel free to raise an issue.
Also, a leaderboard should be included only if:
Tools

Name | Description |
---|---|
gradio_leaderboard | gradio_leaderboard helps users build fully functional and performant leaderboard demos with gradio (see the sketch after this table). |
Demo leaderboard | Demo leaderboard helps users easily deploy their leaderboards with a standardized template. |
Leaderboard Explorer | Leaderboard Explorer helps users navigate the diverse range of leaderboards available on Hugging Face Spaces. |
open_llm_leaderboard | open_llm_leaderboard helps users access Open LLM Leaderboard data easily. |
open-llm-leaderboard-renamer | open-llm-leaderboard-renamer helps users rename their models in Open LLM Leaderboard easily. |
Open LLM Leaderboard Results PR Opener | Open LLM Leaderboard Results PR Opener helps users showcase Open LLM Leaderboard results in their model cards. |
Open LLM Leaderboard Scraper | Open LLM Leaderboard Scraper helps users scrape and export data from Open LLM Leaderboard. |
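As a quick illustration of the first tool above, here is a minimal sketch of a leaderboard demo built with gradio_leaderboard. The `Leaderboard` component, its `value` argument, and the `search_columns` parameter follow the package's documented usage, but the API may differ across versions; treat this as a sketch rather than a definitive example.

```python
# Minimal sketch of a leaderboard demo with gradio_leaderboard.
# Assumption: the package exposes a `Leaderboard` component that accepts a
# pandas DataFrame via `value` and a `search_columns` parameter; verify
# against the version you install.
import pandas as pd
import gradio as gr
from gradio_leaderboard import Leaderboard

# Toy results table; a real leaderboard would load actual evaluation results.
df = pd.DataFrame({
    "Model": ["model-a", "model-b", "model-c"],
    "Average": [71.3, 68.9, 65.2],
    "MMLU": [70.1, 66.4, 61.8],
})

with gr.Blocks() as demo:
    Leaderboard(value=df, search_columns=["Model"])

demo.launch()
```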
Organizations

Name | Description |
---|---|
Allen Institute for AI | Allen Institute for AI is a non-profit research institute with the mission of conducting high-impact AI research and engineering in service of the common good. |
Papers With Code | Papers With Code is a community-driven platform for learning about state-of-the-art research papers on machine learning. |
Comprehensive evaluations

Name | Description |
---|---|
CompassRank | CompassRank is a platform offering a comprehensive, objective, and neutral evaluation reference of foundation models for industry and research. |
FlagEval | FlagEval is a comprehensive platform for evaluating foundation models. |
GenAI-Arena | GenAI-Arena hosts the visual generation arena, where various vision models compete based on their performance in image generation, image editing, and video generation. |
Holistic Evaluation of Language Models | Holistic Evaluation of Language Models (HELM) is a reproducible and transparent framework for evaluating foundation models. |
nuScenes | nuScenes enables researchers to study challenging urban driving situations using the full sensor suite of a real self-driving car. |
SuperCLUE | SuperCLUE is a series of benchmarks for evaluating Chinese foundation models. |
Text evaluations

Name | Description |
---|---|
ACLUE | ACLUE is an evaluation benchmark for ancient Chinese language comprehension. |
AIR-Bench | AIR-Bench is a benchmark to evaluate heterogeneous information retrieval capabilities of language models. |
AlignBench | AlignBench is a multi-dimensional benchmark for evaluating LLMs' alignment in Chinese. |
AlpacaEval | AlpacaEval is an automatic evaluator designed for instruction-following LLMs. |
ANGO | ANGO is a generation-oriented Chinese language model evaluation benchmark. |
Arabic Tokenizers Leaderboard | Arabic Tokenizers Leaderboard compares the efficiency of LLMs in parsing Arabic in its different dialects and forms. |
Arena-Hard-Auto | Arena-Hard-Auto is a benchmark for instruction-tuned LLMs. |
Auto-Arena | Auto-Arena is a benchmark in which various language model agents engage in peer-battles to evaluate their performance. |
BeHonest | BeHonest is a benchmark to evaluate honesty in LLMs: awareness of knowledge boundaries (self-knowledge), avoidance of deceit (non-deceptiveness), and consistency in responses (consistency). |
BenBench | BenBench is a benchmark to evaluate the extent to which LLMs have been trained verbatim on a benchmark's training set, or even its test set, to inflate their capabilities. |
BiGGen-Bench | BiGGen-Bench is a comprehensive benchmark to evaluate LLMs across a wide variety of tasks. |
Biomedical Knowledge Probing Leaderboard | Biomedical Knowledge Probing Leaderboard aims to track, rank, and evaluate biomedical factual knowledge probing results in LLMs. |
BotChat | BotChat assesses the multi-round chatting capabilities of LLMs through a proxy task, evaluating whether two ChatBot instances can engage in smooth and fluent conversation with each other. |
C-Eval | C-Eval is a Chinese evaluation suite for LLMs. |
C-Eval Hard | C-Eval Hard is a more challenging version of C-Eval, which involves complex LaTeX equations and requires non-trivial reasoning abilities to solve. |
Capability leaderboard | Capability leaderboard is a platform to evaluate the long-context understanding capabilities of LLMs. |
Chain-of-Thought Hub | Chain-of-Thought Hub is a benchmark to evaluate the reasoning capabilities of LLMs. |
ChineseFactEval | ChineseFactEval is a factuality benchmark for Chinese LLMs. |
CLEM | CLEM is a framework designed for the systematic evaluation of chat-optimized LLMs as conversational agents. |
CLiB | CLiB is a benchmark to evaluate Chinese LLMs. |
CMB | CMB is a multi-level medical benchmark in Chinese. |
CMMLU | CMMLU is a benchmark to evaluate LLMs' knowledge and reasoning capabilities across various subjects within the Chinese cultural context. |
CMMMU | CMMMU is a benchmark to test the capabilities of multimodal models in understanding and reasoning across multiple disciplines in the Chinese context. |
CompMix | CompMix is a benchmark for heterogeneous question answering. |
Compression Leaderboard | Compression Leaderboard is a platform to evaluate the compression performance of LLMs. |
ConvRe | ConvRe is a benchmark to evaluate LLMs' ability to comprehend converse relations. |
CoTaEval | CoTaEval is a benchmark to evaluate the feasibility and side effects of copyright takedown methods for LLMs. |
CriticBench | CriticBench is a benchmark to evaluate LLMs' ability to make critique responses. |
CRM LLM Leaderboard | CRM LLM Leaderboard is a platform to evaluate the efficacy of LLMs for business applications. |
DecodingTrust | DecodingTrust is an assessment platform to evaluate the trustworthiness of LLMs. |
Domain LLM Leaderboard | Domain LLM Leaderboard is a platform to evaluate the popularity of domain-specific LLMs. |
DyVal | DyVal is a dynamic evaluation protocol for LLMs. |
Enterprise Scenarios Leaderboard | Enterprise Scenarios Leaderboard aims to assess the performance of LLMs on real-world enterprise use cases. |
EQ-Bench | EQ-Bench is a benchmark to evaluate aspects of emotional intelligence in LLMs. |
Factuality Leaderboard | Factuality Leaderboard compares the factual capabilities of LLMs. |
FELM | FELM is a meta-benchmark to evaluate factuality evaluation benchmarks for LLMs. |
FuseReviews | FuseReviews aims to advance grounded text generation tasks, including long-form question-answering and summarization. |
GAIA | GAIA aims to test fundamental abilities that an AI assistant should possess. |
GPT-Fathom | GPT-Fathom is an LLM evaluation suite, benchmarking 10+ leading LLMs as well as OpenAI's legacy models on 20+ curated benchmarks across 7 capability categories, all under aligned settings. |
Guerra LLM AI Leaderboard | Guerra LLM AI Leaderboard compares and ranks the performance of LLMs across quality, price, performance, context window, and others. |
Hallucinations Leaderboard | Hallucinations Leaderboard aims to track, rank and evaluate hallucinations in LLMs. |
HalluQA | HalluQA is a benchmark to evaluate the phenomenon of hallucinations in Chinese LLMs. |
HellaSwag | HellaSwag is a benchmark to evaluate common-sense reasoning in LLMs (see the loading sketch after this table). |
HHEM Leaderboard | HHEM Leaderboard evaluates how often a language model introduces hallucinations when summarizing a document. |
IFEval | IFEval is a benchmark to evaluate LLMs' instruction following capabilities with verifiable instructions. |
Indic LLM Leaderboard | Indic LLM Leaderboard is a benchmark to track progress and rank the performance of Indic LLMs. |
InstructEval | InstructEval is an evaluation suite to assess instruction selection methods in the context of LLMs. |
Japanese Chatbot Arena | Japanese Chatbot Arena hosts the chatbot arena, where various LLMs compete based on their performance in Japanese. |
JustEval | JustEval is a powerful tool designed for fine-grained evaluation of LLMs. |
Ko Chatbot Arena | Ko Chatbot Arena hosts the chatbot arena, where various LLMs compete based on their performance in Korean. |
KoLA | KoLA is a benchmark to evaluate the world knowledge of LLMs. |
L-Eval | L-Eval is a benchmark for long-context language models (LCLMs), evaluating how well they handle extensive context. |
Language Model Council | Language Model Council (LMC) is a benchmark to evaluate tasks that are highly subjective and often lack majoritarian human agreement. |
LawBench | LawBench is a benchmark to evaluate the legal capabilities of LLMs. |
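Many of the text benchmarks above are distributed as Hugging Face datasets, so evaluating a model against one usually starts with loading it. Below is a minimal sketch for HellaSwag, assuming the standard `hellaswag` dataset ID and its `ctx`/`endings`/`label` fields; other benchmarks in the table use their own IDs and schemas.

```python
# Minimal sketch: load HellaSwag (one of the benchmarks above) for evaluation.
# Assumption: the "hellaswag" dataset ID on the Hugging Face Hub and its
# ctx/endings/label fields; verify the ID and schema for other benchmarks.
from datasets import load_dataset

ds = load_dataset("hellaswag", split="validation")
example = ds[0]
print(example["ctx"])      # context the model must complete
print(example["endings"])  # four candidate endings
print(example["label"])    # index of the correct ending
```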