Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model (a low-resource Chinese LLaMA + LoRA solution)


| English | 中文 | NOTE&FAQ(Please take a look before using)


This is the repo for the Chinese-Vicuna project, which aims to build and share instruction-following Chinese LLaMA tuning methods that can be trained on a single Nvidia RTX 2080Ti, and a multi-round chatbot that can be trained on a single Nvidia RTX 3090 with a context length of 2048.

Why is it called Vicuna: In view of the successful development of camelid-named models such as LLaMA, Alpaca, and Guanaco, we want to train a small Chinese "alpaca" like Vicuna: small, but strong enough!

The advantages of our solution are high parameter efficiency, GPU friendliness, and easy deployment:

  • Llama-7B instruction tuning is possible on a 2080Ti (11G) (7b-instruct)
  • Llama-13B instruction tuning is possible on a 3090 (24G) (13b-instruct)
  • Llama-7B can be fine-tuned on a 3090 even for conversations of length 2048; 50,000 samples are enough to get good results (chatv1)
  • Llama-7B fine-tuning examples for the medical and legal domains
  • Support for QLoRA 4-bit training, which can train Llama-13B on a 2080Ti (see the sketch after this list)
  • Easily deployable on a 2080Ti/3090; supports multi-GPU inference, which further reduces per-card VRAM usage
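
To make the 4-bit option above concrete, here is a minimal, hedged sketch of a QLoRA-style setup using Hugging Face transformers, bitsandbytes, and peft. It is not the repo's finetune script; the model path, LoRA rank, and target modules are illustrative assumptions.

```python
# Minimal QLoRA-style sketch (not the project's finetune.py):
# load an HF-format LLaMA in 4-bit and attach a LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "path/to/llama-13b-hf"  # placeholder: any Hugging Face format LLaMA checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights are what keep 13B within ~11 GB
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # illustrative; the repo may target more modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()          # only the small LoRA matrices are trainable
```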

The repo contains:

  • code for finetuning the model
  • code for generation based on the trained model
  • code for running on CPU (fp16 or int4 supported, in pure C++)
  • tools to download/convert/quantize the original Facebook LLaMA checkpoint

This is our instruction demo (with beam size = 4, so you will see 4 outputs generated at the same time):

https://user-images.githubusercontent.com/72137647/228496412-60043912-f491-430b-848a-599e6edfa5ef.mp4

This is our multi-turn instruction demo (with beam size = 4, so you will see 4 outputs generated at the same time):

https://user-images.githubusercontent.com/72137647/229739363-1b48f3a9-02a1-46ab-81ee-8c62dc1399b2.mp4

NOTICE!

Before asking questions, please read this FAQ first! It explains how to solve problems you may encounter when installing and using this project.

What's New

  • June 12, 2023: Released Chinese-Vicuna-4bit and Chinese-Vicuna-4bit-11600, which can be further fine-tuned
  • June 1, 2023: Added support for 4-bit training + inference and a multi-GPU inference interface (NOTE: the environment differs from the original 8-bit one! Also provides test_tokenizers.py to further check the EOS token)
  • May 17, 2023: Released a Llama-7B fine-tuning example for the legal domain; the performance is shown here
  • May 10, 2023: Released chatv1, which has better conversational ability; the performance is shown here
  • May 10, 2023: Released instruct_chat_50k.jsonl, composed of 30k samples from the Chinese ShareGPT dataset and 20k from the Chinese Alpaca-instruction dataset
  • April 11, 2023: Released Chinese-Vicuna-medical, continued fine-tuning on a vertical corpus of Chinese medical Q&A; provides an example of vertical-corpus training
  • April 4, 2023: Added performance for 13B, trained on a single 3090
  • April 1, 2023: Added better support for multi-turn chat in chat.py (now supports 4 generation modes in streaming/typewriter style: beam search, greedy, sample, beam sample; we also added a cancel button for regeneration)
  • March 29, 2023: Added more detailed test samples: performance
  • March 29, 2023: Added a resume-training interface to support continued training on other datasets from our checkpoints
  • March 29, 2023: Released our new 13B-based LoRA model
  • March 28, 2023: Released our model on Hugging Face
  • March 27, 2023: Released checkpoint-final, trained for 3 epochs on BELLE + Guanaco
  • March 27, 2023: Added a multi-round interactive dialogue script based on the alpaca-lora-serve service
  • March 29, 2023: Added Gradio typewriter-style output with beam search, for better user interaction
  • March 26, 2023: Provided a quantization approach
  • March 24, 2023: Released checkpoint-8000, trained for about 1.5 epochs on BELLE + Guanaco (1M samples)
  • March 23, 2023: Released checkpoint-4000, trained on 500k samples
  • March 23, 2023: Deployed the fine-tuning and inference code on Colab
  • March 23, 2023: Provided code that can be used for inference in pure C++

Table of Contents

Vicuna

  • what's new
  • what is the meaning
  • try on colab
  • performance
    • Checkpoint-4000 (Facico/Chinese-Vicuna-lora-7b-0.75epoch-belle-and-guanaco)
    • Checkpoint-8000 (Facico/Chinese-Vicuna-lora-7b-1.5epoch-belle-and-guanaco)
    • Checkpoint-final (Facico/Chinese-Vicuna-lora-7b-3epoch-belle-and-guanaco), which is used for multiple rounds of dialogue
  • What do we need?
    • code, data, large language model, LoRA model, device
  • How to use
    • Installation, multi-GPU training, single-GPU training, inference with a gradio web page (streaming mode + beam search), multi-round interaction with a gradio web page (streaming mode + beam search), streaming mode based on alpaca-lora-serve
  • inference on CPU with pure C++
  • More tools; for more details, see the tool readme
    • a way to download the weights faster (8 MB/s): download_llama.sh
    • tools to convert between the original Facebook checkpoint and the Hugging Face format: convert_llama.py
    • a quantization approach that requires less than 4 GB of GPU memory for inference
  • Possible problems encountered
  • todo
  • citation

Overview

We currently use the combination of BELLE and Guanaco data as our main training dataset, and we will also train on multi-turn instruction data.
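
For a rough idea of what merged instruction data can look like, here is a small sketch that combines BELLE- and Guanaco-style samples into one file. The instruction/input/output fields follow the common Alpaca-style format and the file names are hypothetical; the project's own preprocessing may differ.

```python
# Hypothetical merge of BELLE- and Guanaco-style instruction data into one
# Alpaca-style JSON file; field names and file paths are illustrative assumptions.
import json

def load(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)

merged = []
for sample in load("belle.json") + load("guanaco.json"):
    merged.append({
        "instruction": sample.get("instruction", ""),
        "input": sample.get("input", ""),
        "output": sample.get("output", ""),
    })

with open("merge.json", "w", encoding="utf-8") as f:
    json.dump(merged, f, ensure_ascii=False, indent=2)
```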

What is the meaning?

Similar to the explosion of the Stable Diffusion model, platforms like civitai have emerged, built around a base model plus various LoRA models, in an open-source community.

This repo hopes to help you train these LoRA models.

What is LoRA? Simply put, it is a plugin that helps adapt a large model to your dataset; technical details can be found in LoRA: Low-Rank Adaptation of Large Language Models. Its advantage is that fine-tuning is very fast and produces a small model (about 30 MB), and, crucially, it is plug-and-play. As you would expect, this is a very suitable architecture for an open-source ecosystem.

Here, we help you train in a very low-resource environment: a single 2080 (11G) is enough to achieve decent results.
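
As a minimal illustration of this plug-and-play property, the sketch below attaches one of the LoRA checkpoints listed in this README to an HF-format LLaMA-7B for inference with peft. The base model path is a placeholder assumption; the adapter id is taken from the checkpoint names above.

```python
# "Plug-and-play" sketch: attach a trained LoRA adapter (~tens of MB)
# to a frozen base LLaMA for inference, without touching the base weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "path/to/llama-7b-hf"  # placeholder: any Hugging Face format LLaMA-7B
lora_id = "Facico/Chinese-Vicuna-lora-7b-3epoch-belle-and-guanaco"  # from the checkpoints above

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, lora_id)  # adapters can be swapped freely

prompt = "自然语言处理是什么"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```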

Try on colab

| Colab link | Descriptions |
|---|---|
| Open In Colab | Use the specified LoRA model for inference and generate a web UI |
| Open In Colab | Finetune with the data we collected |
| Open In Colab | Provide a web UI for multi-round dialogue interaction |

Performance

7B performance can be seen here, and 13B here (trained on a single 3090). Some people also report that our method works on 33B and obtains good results.

Historical checkpoint performance can be seen here

<details> <summary><b>checkpoint-4000(7B)</b></summary>

| Human | Chinese-Vicuna |
|---|---|
| 生成一份python的排序代码 | A: def sort_list(lst):<br/> lst.sort()<br/> return lst |
| 自然语言处理是什么 | 自然语言处理是一种计算机科学领域,旨在使计算机能够理解和处理自然语言。 |
| 能给我讲一段笑话吗 | 为什么猫吃鱼?因为它们喜欢游泳。 |
| 将下面文字翻译成英文。猫喜欢吃鱼,所以它们会游泳 | Cats like to eat fish, so they will swim. |
| 233和122的和等于多少 | 233和122的和等于355。 |
| 华为是一家怎样的公司 | 华为是一家全球范围内最大的移动通信设备制造商之一,以及一家全球范围内最大的移动通信设备制造商之一。华为公司的业务涵盖包括移动通信设备、互联网设备、智能家居设备等。 |
| 我想和女朋友在北京约会,能推荐几个约会的地方吗 | 北京有很多约会的地方,可以推荐以下几个约会的地方: <br/><br/>1. 北京艺术博物馆<br/>2. 颐和园<br/>3. 北京故宫博物馆<br/>4. 北京天安门广场<br/>5. 颐和园风景区 |

</details>
