Yi Series: Leading Open-Source Bilingual Large Language Models
The Yi project aims to develop a new generation of open-source bilingual large language models. Trained on a 3T multilingual corpus, the Yi series models excel at language understanding, commonsense reasoning, and reading comprehension. The Yi-34B-Chat model ranks second on the AlpacaEval Leaderboard, behind only GPT-4 Turbo. Yi builds on the Transformer and Llama architectures, and achieves its performance through its own training data, pipelines, and infrastructure.
🤖 The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI.
🙌 Targeted as a bilingual language model and trained on a 3T multilingual corpus, the Yi series models have become some of the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example,
Yi-34B-Chat model landed in second place (following GPT-4 Turbo), outperforming other LLMs (such as GPT-4, Mixtral, Claude) on the AlpacaEval Leaderboard (based on data available up to January 2024).
Yi-34B model ranked first among all existing open-source models (such as Falcon-180B, Llama-70B, Claude) in both English and Chinese on various benchmarks, including Hugging Face Open LLM Leaderboard (pre-trained) and C-Eval (based on data available up to November 2023).
🙏 (Credits to Llama) Thanks to the Transformer and Llama open-source communities, as they reduce the effort required to build from scratch and enable the use of the same tools within the AI ecosystem.
💡 TL;DR
The Yi series models adopt the same model architecture as Llama but are NOT derivatives of Llama.
Both Yi and Llama are based on the Transformer structure, which has been the standard architecture for large language models since 2018.
Grounded in the Transformer architecture, Llama has become a new cornerstone for the majority of state-of-the-art open-source models due to its excellent stability, reliable convergence, and robust compatibility. This positions Llama as the recognized foundational framework for models including Yi.
Thanks to the Transformer and Llama architectures, other models can leverage their power, reducing the effort required to build from scratch and enabling the utilization of the same tools within their ecosystems.
However, the Yi series models are NOT derivatives of Llama, as they do not use Llama's weights.
As Llama's structure is employed by the majority of open-source models, the key factors determining model performance are training datasets, training pipelines, and training infrastructure.
Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort led to the Yi series models ranking just behind GPT-4 and surpassing Llama on the AlpacaEval Leaderboard in December 2023.
Yi-34B-Chat
Yi-34B-Chat-4bits
Yi-34B-Chat-8bits
Yi-6B-Chat
Yi-6B-Chat-4bits
Yi-6B-Chat-8bits
You can try some of them interactively at:
</details>
<details>
<summary>🔔 <b>2023-11-23</b>: The Yi Series Models Community License Agreement is updated to <a href="https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt">v2.1</a>.</summary>
</details>
<details>
<summary>🔥 <b>2023-11-08</b>: Invited test of the Yi-34B chat model.</summary>
<br>Application form:
</details>
<details>
<summary>🎯 <b>2023-11-05</b>: <a href="#base-models">The base models, </a><code>Yi-6B-200K</code> and <code>Yi-34B-200K</code>, are open-sourced and available to the public.</summary>
<br>This release contains two base models with the same parameter sizes as the previous release, except that the context window is extended to 200K.
</details>
<details>
<summary>🎯 <b>2023-11-02</b>: <a href="#base-models">The base models, </a><code>Yi-6B</code> and <code>Yi-34B</code>, are open-sourced and available to the public.</summary>
<br>The first public release contains two bilingual (English/Chinese) base models with parameter sizes of 6B and 34B. Both are trained with a 4K sequence length, which can be extended to 32K at inference time.
</details>
<p align="right"> [ <a href="#top">Back to top ⬆️ </a> ] </p>

Yi models come in multiple sizes and cater to different use cases. You can also fine-tune Yi models to meet your specific requirements.
If you want to deploy Yi models, make sure you meet the software and hardware requirements.
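If you build your own serving layer, the Yi chat models expect a ChatML-style prompt. The sketch below is a minimal, illustrative prompt builder; the exact template string is an assumption based on standard ChatML, so prefer the tokenizer's built-in `apply_chat_template` from the `transformers` library as the authoritative source.

```python
# Minimal ChatML-style prompt builder for a Yi chat model.
# NOTE: the template below is an assumption (standard ChatML);
# tokenizer.apply_chat_template is the authoritative formatter.
def build_chatml_prompt(messages):
    """messages: list of {"role": ..., "content": ...} dicts."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([{"role": "user", "content": "hi"}])
print(prompt)
```

The open `<|im_start|>assistant` turn at the end is what cues the model to generate the reply rather than another user message.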
Model | Download |
---|---|
Yi-34B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
Yi-34B-Chat-4bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
Yi-34B-Chat-8bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
Yi-6B-Chat | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
Yi-6B-Chat-4bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
Yi-6B-Chat-8bits | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
<sub><sup> - 4-bit series models are quantized with AWQ. <br> - 8-bit series models are quantized with GPTQ. <br> - All quantized models have a low barrier to use since they can be deployed on consumer-grade GPUs (e.g., RTX 3090, RTX 4090). </sup></sub>
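A rough back-of-envelope shows why the 4-bit 34B model fits on a 24 GB consumer GPU (illustrative arithmetic only; it counts weights and ignores the KV cache and activations, which need additional headroom):

```python
# Approximate weight memory for a quantized model.
# Illustrative estimate only: ignores KV cache, activations,
# and per-group quantization metadata overhead.
def approx_weight_gib(n_params_billion, bits_per_weight):
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# Yi-34B at 4 bits: ~34e9 params * 0.5 bytes ≈ 15.8 GiB of weights,
# which leaves headroom on a 24 GB RTX 3090/4090.
print(round(approx_weight_gib(34, 4), 1))  # → 15.8
```

By the same arithmetic, the 8-bit variant needs roughly twice that (~31.7 GiB), which is why it targets larger or multi-GPU setups.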
Model | Download |
---|---|
Yi-34B | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |
Yi-34B-200K | • 🤗 Hugging Face • 🤖 ModelScope • 🟣 wisemodel |