This repository collects important tools and papers related to adapter methods for recent large pre-trained neural networks.
Adapters (aka Parameter-Efficient Transfer Learning (PETL) or Parameter-Efficient Fine-Tuning (PEFT) methods) encompass a range of parameter-efficient approaches for adapting large pre-trained models to new tasks.
Large pre-trained (Transformer-based) models have become the foundation of various ML domains in recent years. While the most prevalent method of adapting these models to new tasks involves costly full fine-tuning of all model parameters, a series of parameter-efficient and lightweight alternatives, collectively referred to as adapters, has recently emerged.
Using adapters provides multiple benefits. They are ...
AdapterHub: A Framework for Adapting Transformers
Conference on Empirical Methods in Natural Language Processing
Jonas Pfeiffer, Andreas Rücklé, Clifton A. Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, Iryna Gurevych (2020)
<details> <summary>TLDR</summary> AdapterHub is proposed, a framework that allows dynamic “stitching-in” of pre-trained adapters for different tasks and languages, enabling scalable and easy sharing of task-specific models, particularly in low-resource scenarios. </details>
Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
Conference on Empirical Methods in Natural Language Processing
Clifton A. Poth, Hannah Sterz, Indraneil Paul, Sukannya Purkayastha, Leon Arne Engländer, Timo Imhof, Ivan Vulić, Sebastian Ruder, Iryna Gurevych, Jonas Pfeiffer (2023)
<details> <summary>TLDR</summary> Adapters is introduced, an open-source library that unifies parameter-efficient and modular transfer learning in large language models and lets researchers and practitioners leverage adapter modularity through composition blocks, enabling the design of complex adapter setups. </details>
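As a quick orientation, below is a minimal usage sketch of the Adapters library, assuming the `adapters` and `transformers` packages are installed; the model checkpoint and adapter name are illustrative placeholders.

```python
from transformers import AutoModel
import adapters

# Load any pre-trained Hugging Face model (checkpoint name is an example).
model = AutoModel.from_pretrained("bert-base-uncased")

# Retrofit the model with adapter support, then add a bottleneck adapter
# ("seq_bn" is the sequential bottleneck configuration).
adapters.init(model)
model.add_adapter("my_task", config="seq_bn")

# Freeze the base model and train only the adapter's parameters.
model.train_adapter("my_task")
model.set_active_adapters("my_task")
```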
OpenDelta
PEFT: State-of-the-art Parameter-Efficient Fine-Tuning
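Similarly, a minimal sketch of wrapping a model with Hugging Face PEFT's LoRA support (base model and target module names are examples; GPT-2's fused attention projection is called `c_attn`):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # example base model

# LoRA configuration: rank, scaling factor, target modules, and dropout.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], lora_dropout=0.05)

# Wrap the model; only the injected LoRA matrices remain trainable.
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all parameters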
LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models
Conference on Empirical Methods in Natural Language Processing
Zhiqiang Hu, Yihuai Lan, Lei Wang, Wanyu Xu, Ee-Peng Lim, R. Lee, Lidong Bing, Soujanya Poria (2023)
<details> <summary>TLDR</summary> LLM-Adapters is presented, an easy-to-use framework that integrates various adapters into LLMs and can execute adapter-based PEFT methods for different tasks, demonstrating that adapter-based PEFT in smaller-scale LLMs with few extra trainable parameters yields comparable, and in some cases superior, performance to powerful LLMs in zero-shot inference on reasoning tasks. </details>
Alpaca-LoRA
Modular Deep Learning
arXiv.org
Jonas Pfeiffer, Sebastian Ruder, Ivan Vulić, E. Ponti (2023)
<details> <summary>TLDR</summary> A survey of modular architectures is offered, providing a unified view over several threads of research that evolved independently in the scientific literature, and various additional purposes of modularity are explored, including scaling language models, causal inference, programme induction, and planning in reinforcement learning. </details>
Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning
arXiv.org
Vladislav Lialin, Vijeta Deshpande, Anna Rumshisky (2023)
<details> <summary>TLDR</summary> A taxonomy covering a broad range of methods is provided, along with a detailed method comparison with a specific focus on real-life efficiency and fine-tuning of multibillion-scale language models. </details>
PEFT-Ref: A Modular Reference Architecture and Typology for Parameter-Efficient Finetuning Techniques
arXiv.org
Mohammed Sabry, Anya Belz (2023)
<details> <summary>TLDR</summary> A reference architecture is presented which standardises aspects shared by different PEFT techniques, while isolating differences to specific locations and interactions with the standard components, supporting not only direct comparison of different techniques and their efficiency and task performance, but also systematic exploration of reusability and composability of the different types of finetuned modules. </details>
Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
arXiv.org
Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang (2024)
<details> <summary>TLDR</summary> This survey presents comprehensive studies of various PEFT algorithms, examining their performance and computational overhead, gives an overview of applications developed using different PEFT algorithms, and discusses common techniques employed to mitigate the computation costs of PEFT. </details>
Parameter-Efficient Transfer Learning for NLP
International Conference on Machine Learning
N. Houlsby, A. Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, S. Gelly (2019)
<details> <summary>TLDR</summary> To demonstrate the adapters' effectiveness, the recently proposed BERT Transformer model is transferred to 26 diverse text classification tasks, including the GLUE benchmark, and adapters attain near state-of-the-art performance whilst adding only a few parameters per task. </details>
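To make the recipe concrete, here is a minimal PyTorch sketch of the bottleneck adapter this paper introduces (inserted after the attention and feed-forward sublayers of each Transformer block); the class name, bottleneck width, and initialization are illustrative choices, not the paper's exact hyperparameters:

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Houlsby-style adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()
        # Near-identity initialization so training starts from the
        # pre-trained model's behavior.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))
```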
K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters
Findings of the Association for Computational Linguistics
Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, Ming Zhou (2020)
<details> <summary>TLDR</summary> K-Adapter is proposed, which keeps the original parameters of the pre-trained model fixed, supports continual knowledge infusion, and captures richer factual and commonsense knowledge than RoBERTa. </details>
Parameter-Efficient Transfer Learning with Diff Pruning
Annual Meeting of the Association for Computational Linguistics
Demi Guo, Alexander M. Rush, Yoon Kim (2020)
<details> <summary>TLDR</summary> Diff pruning can match the performance of finetuned baselines on the GLUE benchmark while only modifying 0.5% of the pretrained model’s parameters per task and scales favorably in comparison to popular pruning approaches. </details>
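A simplified PyTorch sketch of the idea, with a plain L1 penalty standing in for the paper's relaxed-L0 objective (the real method learns hard-concrete gates over the diff vector); names and the bias assumption are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffPrunedLinear(nn.Module):
    """Diff pruning sketch: the pre-trained weight stays frozen and a
    task-specific additive 'diff' is learned; a sparsity penalty keeps
    the diff mostly zero. Assumes the wrapped linear layer has a bias."""
    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.register_buffer("weight0", pretrained.weight.detach().clone())
        self.register_buffer("bias0", pretrained.bias.detach().clone())
        self.diff = nn.Parameter(torch.zeros_like(self.weight0))  # trainable diff

    def forward(self, x):
        return F.linear(x, self.weight0 + self.diff, self.bias0)

    def sparsity_penalty(self):
        # L1 stand-in for the paper's relaxed-L0 regularizer; add this
        # (times a coefficient) to the task loss during training.
        return self.diff.abs().sum()
```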
Prefix-Tuning: Optimizing Continuous Prompts for Generation
Annual Meeting of the Association for Computational Linguistics
Xiang Lisa Li, Percy Liang (2021)
<details> <summary>TLDR</summary> Prefix-tuning is proposed, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, called the prefix. </details>
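A minimal PyTorch sketch of the mechanism; note one simplification: the prefixes here are fed through the frozen key/value projections, whereas the paper injects them after projection, as per-layer past key/values. Class and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class PrefixedAttention(nn.Module):
    """Prefix-tuning sketch: trainable key/value prefix vectors are prepended
    at an attention layer while all pre-trained parameters stay frozen."""
    def __init__(self, attn: nn.MultiheadAttention, prefix_len: int, embed_dim: int):
        super().__init__()
        self.attn = attn
        for p in self.attn.parameters():
            p.requires_grad = False                      # freeze the pre-trained layer
        self.prefix_k = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)

    def forward(self, x):                                # x: (seq, batch, embed)
        batch = x.size(1)
        # Prepend the prefix to keys and values; queries come from x only.
        k = torch.cat([self.prefix_k.unsqueeze(1).expand(-1, batch, -1), x], dim=0)
        v = torch.cat([self.prefix_v.unsqueeze(1).expand(-1, batch, -1), x], dim=0)
        out, _ = self.attn(x, k, v)
        return out
```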
The Power of Scale for Parameter-Efficient Prompt Tuning
Conference on Empirical Methods in Natural Language Processing
Brian Lester, Rami Al-Rfou, Noah Constant (2021)
<details> <summary>TLDR</summary> This work explores “prompt tuning,” a simple yet effective mechanism for learning “soft prompts” to condition frozen language models to perform specific downstream tasks, and shows that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer and enables efficient “prompt ensembling.” </details>
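A minimal PyTorch sketch of soft prompts, assuming initialization from randomly sampled real-token embeddings (one common choice); names and the prompt length are illustrative:

```python
import torch
import torch.nn as nn

class SoftPromptEmbedding(nn.Module):
    """Prompt-tuning sketch: n_tokens trainable 'soft prompt' vectors are
    prepended to the frozen model's input embeddings."""
    def __init__(self, embed: nn.Embedding, n_tokens: int = 20):
        super().__init__()
        self.embed = embed
        self.embed.weight.requires_grad = False          # freeze the vocabulary table
        # Initialize the soft prompt from random real-token embeddings.
        init_ids = torch.randint(0, embed.num_embeddings, (n_tokens,))
        self.soft_prompt = nn.Parameter(embed.weight[init_ids].detach().clone())

    def forward(self, input_ids):                        # input_ids: (batch, seq)
        tok = self.embed(input_ids)                      # (batch, seq, dim)
        prompt = self.soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
        return torch.cat([prompt, tok], dim=1)           # (batch, n_tokens + seq, dim)
```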
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
Neural Information Processing Systems
Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder (2021)
<details> <summary>TLDR</summary> Compacter is proposed, a method for fine-tuning large-scale language models with a better trade-off between task performance and the number of trainable parameters than prior work, and accomplishes this by building on top of ideas from adapters, low-rank optimization, and parameterized hypercomplex multiplication layers. </details>
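A minimal PyTorch sketch of the parameterized hypercomplex multiplication (PHM) layer that Compacter builds its adapters from; Compacter additionally factorizes each B_i into rank-one terms and shares the A_i across layers, which is omitted here. Names and initialization scale are illustrative.

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """PHM layer sketch: the weight is a sum of Kronecker products,
    W = sum_i A_i ⊗ B_i, cutting the parameter count by roughly a factor of n."""
    def __init__(self, in_dim: int, out_dim: int, n: int = 4):
        super().__init__()
        assert in_dim % n == 0 and out_dim % n == 0
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.02)   # n "rule" matrices, n x n
        self.B = nn.Parameter(torch.randn(n, in_dim // n, out_dim // n) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x):                                    # x: (batch, in_dim)
        # Assemble W (in_dim, out_dim) as a sum of Kronecker products.
        W = sum(torch.kron(self.A[i], self.B[i]) for i in range(self.A.size(0)))
        return x @ W + self.bias
```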
LoRA: Low-Rank Adaptation of Large Language Models
International Conference on Learning Representations
J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Weizhu Chen (2021)
<details> <summary>TLDR</summary> Low-Rank Adaptation, or LoRA, is proposed, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. </details>
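A minimal PyTorch sketch of the reparameterization, h = W0 x + (alpha / r) * B A x; the initialization follows the paper's recipe (Gaussian A, zero B, so the update starts at zero), while the class name and default rank are illustrative:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """LoRA sketch: the frozen pre-trained weight is augmented with a
    trainable low-rank update, scaled by alpha / r."""
    def __init__(self, pretrained: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = pretrained
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze W0 (and its bias)
        self.A = nn.Parameter(torch.randn(r, pretrained.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(pretrained.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):                                # x: (batch, in_features)
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```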