Speculative decoding techniques for faster large language model inference

This open source project focuses on the research and implementation of speculative decoding, aiming to speed up text generation in large language models. It covers several speculative decoding strategies, including early exit, speculative sampling, and the prophet transformer, and also works toward optimizing batched speculative decoding for better overall performance. The research plan further includes comparing the effectiveness of the different strategies and exploring micro-optimizations. Together, this work offers new technical directions for accelerating AI model inference.


<img src="./speculative-decoding.png" width="500px"></img>

Speculative Decoding

Explorations into some recent techniques surrounding <a href="https://arxiv.org/abs/2211.17192">speculative decoding</a>

Also have a few ideas of my own that I will try and share in this repository, if they work. The goal is to initially use it to speed up the text-to-semantic decoder in <a href="https://github.com/lucidrains/spear-tts-pytorch">Spear-TTS</a>
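
To make the technique concrete, here is a minimal sketch of the draft-then-verify loop with the accept / reject rule from the Leviathan et al. paper linked above. The unbatched interface, where `target_model(seq)` and `draft_model(seq)` return a `(seq_len, vocab)` tensor of next-token probabilities from an assumed causal model, is a simplification for illustration, not this repository's actual API.

```python
import torch

@torch.no_grad()
def speculative_step(target_model, draft_model, seq, gamma = 4):
    # `seq` is a 1d tensor of token ids. both models are assumed causal and
    # to return one next-token probability distribution per position (illustration only)
    prefix_len = seq.shape[-1]

    # 1. the small draft model proposes `gamma` tokens, one at a time
    draft_dists = []
    for _ in range(gamma):
        q = draft_model(seq)[-1]
        token = torch.multinomial(q, 1)
        draft_dists.append(q)
        seq = torch.cat((seq, token), dim = -1)

    # 2. the large target model scores all proposals in a single forward pass
    p = target_model(seq)

    # 3. accept draft token x with probability min(1, p(x) / q(x))
    for i, q in enumerate(draft_dists):
        pos = prefix_len + i
        token = seq[pos]
        p_dist = p[pos - 1]
        if torch.rand(()) >= (p_dist[token] / q[token]).clamp(max = 1.):
            # rejected - resample from the residual max(0, p - q), discard the rest
            residual = (p_dist - q).clamp(min = 0.)
            token = torch.multinomial(residual / residual.sum(), 1)
            return torch.cat((seq[:pos], token), dim = -1)

    # 4. everything accepted - sample one extra token from the target for free
    bonus = torch.multinomial(p[-1], 1)
    return torch.cat((seq, bonus), dim = -1)

# toy uniform 'models', just to show the interface
vocab = 32
toy = lambda seq: torch.full((seq.shape[-1], vocab), 1. / vocab)
out = speculative_step(toy, toy, torch.randint(0, vocab, (8,)))
```

On rejection the loop keeps the accepted prefix plus one corrected token, so each round emits between 1 and gamma + 1 tokens while matching the target model's distribution exactly.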

Appreciation

  • <a href="https://stability.ai/">StabilityAI</a> and <a href="https://huggingface.co/">🤗 Huggingface</a> for the generous sponsorship, as well as my other sponsors, for affording me the independence to open source current artificial intelligence techniques.

Todo

  • in early exit scheme, cache the hidden layer during spec decoding, as small and large models share the same first few layers (sketched after this list)

  • for early exit, allow an extra transformer block head (separate from main transformer stem)

  • figure out batched spec decoding - different rows may advance at different rates (sketched after this list)

  • further optimize batched spec decoding, as some performance is lost to all the indexing - it seems this technique will take more work before it is actually usable

  • make batched spec decoding work with early exit strategy

  • complete speculative sampling with prophet transformer idea - seems to work well! 🙌 (sketched after this list)

  • get some wandb charts and see how prophet compares with early exit strategy, share on repository

  • also run experiments to see if prophet transformer brings any benefit to main model loss. original prophet paper only did a simple linear projection

  • for early exit strategy, try randomly summing the last cached embedding back into the same model (a la alphafold2 recycling), randomly cropped along the sequence length, and train the early exit loss this way - see if gamma can be improved (sketched after this list)

  • dedicate a morning to micro-optimizations
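
A minimal sketch of the early-exit scheme from the first two items above: the draft 'model' is just the first few layers of the main network plus an extra transformer block head, so the hidden states computed while drafting can be cached and verification resumes from the cache. Module names and the use of `nn.TransformerEncoderLayer` (causal masking omitted for brevity) are illustrative assumptions, not this repository's architecture.

```python
import torch
from torch import nn

class EarlyExitTransformer(nn.Module):
    def __init__(self, *, num_tokens, dim, depth, early_exit_layer, heads = 8):
        super().__init__()
        self.token_emb = nn.Embedding(num_tokens, dim)
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(dim, heads, batch_first = True)
            for _ in range(depth)
        ])
        self.early_exit_layer = early_exit_layer
        # extra transformer block head for the early exit, kept separate from the main stem
        self.early_exit_head = nn.TransformerEncoderLayer(dim, heads, batch_first = True)
        self.to_early_logits = nn.Linear(dim, num_tokens)
        self.to_logits = nn.Linear(dim, num_tokens)

    def draft(self, ids):
        # the 'small model' is the first few layers of the large one -
        # cache the hidden states it computes ...
        x = self.token_emb(ids)
        for layer in self.layers[:self.early_exit_layer]:
            x = layer(x)
        return self.to_early_logits(self.early_exit_head(x)), x

    def verify(self, cached_hidden):
        # ... so verification resumes from the cache instead of recomputing
        # the shared early layers
        x = cached_hidden
        for layer in self.layers[self.early_exit_layer:]:
            x = layer(x)
        return self.to_logits(x)

model = EarlyExitTransformer(num_tokens = 256, dim = 512, depth = 8, early_exit_layer = 2)
draft_logits, hidden = model.draft(torch.randint(0, 256, (1, 16)))
full_logits = model.verify(hidden)
```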
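
For the batched items, the sketch below shows one way to vectorize the accept / reject step across a batch; the shapes of `p`, `q` and `draft_tokens` are assumptions for illustration. The per-row counts it returns are exactly why rows advance at different rates, and the gather / realignment bookkeeping that has to follow is where the indexing overhead noted above comes from.

```python
import torch

def batched_num_accepted(p, q, draft_tokens):
    # p, q: target / draft next-token distributions, (batch, gamma, vocab)
    # draft_tokens: the proposed tokens, (batch, gamma)
    prob_p = p.gather(-1, draft_tokens.unsqueeze(-1)).squeeze(-1)  # p(x) per position
    prob_q = q.gather(-1, draft_tokens.unsqueeze(-1)).squeeze(-1)  # q(x) per position

    accepted = torch.rand_like(prob_p) < (prob_p / prob_q).clamp(max = 1.)

    # each row keeps tokens only up to its first rejection:
    # cumprod turns e.g. [1, 1, 0, 1] into [1, 1, 0, 0]
    return accepted.long().cumprod(dim = -1).sum(dim = -1)         # (batch,)

batch, gamma, vocab = 4, 5, 32
p = torch.softmax(torch.randn(batch, gamma, vocab), dim = -1)
q = torch.softmax(torch.randn(batch, gamma, vocab), dim = -1)
tokens = torch.randint(0, vocab, (batch, gamma))
print(batched_num_accepted(p, q, tokens))  # e.g. tensor([2, 5, 0, 3])
```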
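
A hedged sketch of the prophet transformer idea: an extra transformer block sits on top of the main model's hidden states and predicts a token further ahead (here just t + 2), which can then serve as the draft for speculative sampling without any separate small model. The original ProphetNet paper used only a linear projection for this; all names and shapes below are assumptions.

```python
import torch
from torch import nn

class ProphetHead(nn.Module):
    def __init__(self, dim, num_tokens, heads = 8):
        super().__init__()
        # one extra block, rather than ProphetNet's simple linear projection
        self.block = nn.TransformerEncoderLayer(dim, heads, batch_first = True)
        self.to_logits = nn.Linear(dim, num_tokens)

    def forward(self, hidden):
        # `hidden`: final hidden states from the main model, (batch, seq, dim)
        return self.to_logits(self.block(hidden))

def prophet_loss(prophet, hidden, ids):
    # supervise the prophet head on tokens two steps ahead
    logits = prophet(hidden[:, :-2])  # position t predicts the token at t + 2
    return nn.functional.cross_entropy(logits.transpose(1, 2), ids[:, 2:])

prophet = ProphetHead(dim = 512, num_tokens = 256)
hidden = torch.randn(2, 16, 512)
ids = torch.randint(0, 256, (2, 16))
loss = prophet_loss(prophet, hidden, ids)
```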
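
Lastly, a sketch of the recycling experiment, with hypothetical `token_emb`, `trunk` and `to_early_logits` modules standing in for the real ones: with some probability the trunk runs once without gradients, its output embeddings are summed back into the inputs over a randomly cropped span of the sequence, and the early exit loss is then trained on the second pass.

```python
import torch
import torch.nn.functional as F

def recycled_early_exit_loss(token_emb, trunk, to_early_logits, ids, recycle_prob = 0.5):
    x = token_emb(ids[:, :-1])

    if torch.rand(()) < recycle_prob:
        with torch.no_grad():
            recycled = trunk(x)                            # first pass, detached
        start = int(torch.randint(0, x.shape[1], ()))      # random crop along sequence
        mask = (torch.arange(x.shape[1]) >= start)[None, :, None]
        x = x + recycled * mask                            # recycle only the cropped span

    logits = to_early_logits(trunk(x))                     # train early exit on second pass
    return F.cross_entropy(logits.transpose(1, 2), ids[:, 1:])
```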

Citations

```bibtex
@inproceedings{Leviathan2022FastIF,
    title     = {Fast Inference from Transformers via Speculative Decoding},
    author    = {Yaniv Leviathan and Matan Kalman and Y. Matias},
    booktitle = {International Conference on Machine Learning},
    year      = {2022},
    url       = {https://api.semanticscholar.org/CorpusID:254096365}
}
```

```bibtex
@inproceedings{sun2023spectr,
    title     = {SpecTr: Fast Speculative Decoding via Optimal Transport},
    author    = {Ziteng Sun and Ananda Theertha Suresh and Jae Hun Ro and Ahmad Beirami and Himanshu Jain and Felix Yu and Michael Riley and Sanjiv Kumar},
    booktitle = {Workshop on Efficient Systems for Foundation Models @ ICML2023},
    year      = {2023},
    url       = {https://openreview.net/forum?id=d0mGsaheuT}
}
```

```bibtex
@article{Chen2023AcceleratingLL,
    title   = {Accelerating Large Language Model Decoding with Speculative Sampling},
    author  = {Charlie Chen and Sebastian Borgeaud and Geoffrey Irving and Jean-Baptiste Lespiau and L. Sifre and John M. Jumper},
    journal = {ArXiv},
    year    = {2023},
    volume  = {abs/2302.01318},
    url     = {https://api.semanticscholar.org/CorpusID:256503945}
}
```

```bibtex
@article{Yan2020ProphetNetPF,
    title   = {ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training},
    author  = {Yu Yan and Weizhen Qi and Yeyun Gong and Dayiheng Liu and Nan Duan and Jiusheng Chen and Ruofei Zhang and Ming Zhou},
    journal = {ArXiv},
    year    = {2020},
    volume  = {abs/2001.04063},
    url     = {https://api.semanticscholar.org/CorpusID:210164665}
}
```

```bibtex
@article{Zhang2023DraftV,
    title   = {Draft \& Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding},
    author  = {Jinchao Zhang and Jue Wang and Huan Li and Lidan Shou and Ke Chen and Gang Chen and Sharad Mehrotra},
    journal = {ArXiv},
    year    = {2023},
    volume  = {abs/2309.08168},
    url     = {https://api.semanticscholar.org/CorpusID:262013673}
}
```

```bibtex
@misc{medusa,
    author       = {Tianle Cai and Yuhong Li and Zhengyang Geng and Hongwu Peng and Tri Dao},
    title        = {Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads},
    year         = {2023},
    publisher    = {GitHub},
    journal      = {GitHub repository},
    howpublished = {\url{https://github.com/FasterDecoding/Medusa}}
}
```
