
🚀🚀 Welcome to the repo of SALMONN!
SALMONN is a large language model (LLM) enabling speech, audio-event, and music inputs, developed by the Department of Electronic Engineering at Tsinghua University and ByteDance. Instead of speech-only or audio-event-only input, SALMONN can perceive and understand all kinds of audio inputs, and therefore gains emerging capabilities such as multilingual speech recognition, speech translation, and audio-speech co-reasoning. This can be regarded as giving the LLM "ears" and cognitive hearing abilities, which makes SALMONN a step towards hearing-enabled artificial general intelligence.
<div style='display:flex; gap: 0.25rem; '> <a href='https://bytedance.github.io/SALMONN/'><img src='https://img.shields.io/badge/SALMONN_13B-Demo-blue'></a> <a href='https://huggingface.co/spaces/tsinghua-ee/SALMONN-7B-gradio'><img src='https://img.shields.io/badge/SALMONN_7B-Demo-orange'></a> <a href='https://openreview.net/pdf?id=14rn7HpKVk'><img src='https://img.shields.io/badge/paper-PDF-green'></a> <a href='https://huggingface.co/tsinghua-ee/SALMONN'><img src='https://img.shields.io/badge/huggingface-checkpoint-yellow'></a> </div>

The model architecture of SALMONN is shown below. A window-level Q-Former is used as the connection module to fuse the outputs from a Whisper speech encoder and a BEATs audio encoder into augmented audio tokens, which are aligned with the LLM input space. The LoRA adaptor aligns the augmented LLM input space with its output space. The text prompt instructs SALMONN to answer open-ended questions about the general audio inputs, and the answers are given in the LLM's text responses.
<div align=center><img src="resource/structure.png" height="100%" width="75%"/></div>

Compared with traditional speech and audio processing tasks such as speech recognition and audio captioning, SALMONN leverages the general knowledge and cognitive abilities of the LLM to achieve cognitively oriented audio perception, which dramatically improves the versatility of the model and the richness of its tasks. In addition, SALMONN is able to follow textual commands, and even spoken commands, with a relatively high degree of accuracy. Since SALMONN is trained only with textual commands, the ability to listen to spoken commands is a cross-modal emergent ability.
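The window-level fusion described above can be sketched numerically. This is a minimal illustration only, not the actual SALMONN implementation (which uses a trained, BERT-based Q-Former with learnable queries and multi-head cross-attention); the function name, toy dimensions, and single-head dot-product attention here are assumptions for demonstration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def window_level_qformer(frames, queries, w_size):
    """Fuse encoder frames into a fixed number of query tokens per window.

    frames:  (T, d)  audio features, e.g. fused Whisper + BEATs outputs
    queries: (Q, d)  query embeddings (random here; learnable in SALMONN)
    returns: (ceil(T / w_size) * Q, d) augmented audio tokens
    """
    T, d = frames.shape
    out = []
    for start in range(0, T, w_size):
        win = frames[start:start + w_size]            # one window of frames
        attn = softmax(queries @ win.T / np.sqrt(d))  # cross-attention weights
        out.append(attn @ win)                        # Q fused tokens per window
    return np.concatenate(out, axis=0)

rng = np.random.default_rng(0)
frames = rng.standard_normal((100, 8))   # toy example: 100 frames, dim 8
queries = rng.standard_normal((1, 8))    # one query token per window
tokens = window_level_qformer(frames, queries, w_size=17)
print(tokens.shape)  # (ceil(100/17) * 1, 8) = (6, 8)
```

The point of the windowing is that each short window is compressed to a small fixed number of query tokens, so the audio token rate seen by the LLM grows with audio duration rather than with the raw encoder frame rate.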
Here are some examples of SALMONN.
| Audio | Response |
|---|---|
| gunshots.wav | *(response screenshot)* |
| duck.wav | *(response screenshot)* |
| music.wav | *(response screenshot)* |
For SALMONN-13B v1, you need the following dependencies and steps.

**How to train**
1. Install the dependencies: `pip install -r requirements.txt`.
2. Set `whisper_path`, `beats_path`, and `llama_path` in `configs/config.yaml` to your downloaded checkpoints.
3. Run `python3 train.py --cfg-path configs/config.yaml` (tested on A100-SXM-80GB).

**How to run CLI inference**
1. Set `ckpt` in `configs/decode_config.yaml`.
2. Run `python3 cli_inference.py --cfg-path configs/decode_config.yaml` (tested on A100-SXM-80GB). Now you can input `wav_path` and `prompt`. Enjoy yourself!

**How to launch the web demo**
1. Set `ckpt` in `configs/decode_config.yaml`.
2. Run `python3 web_demo.py --cfg-path configs/decode_config.yaml` (tested on A100-SXM-80GB).

**Team**

Team Tsinghua: Wenyi Yu, Changli Tang, Guangzhi Sun, Chao Zhang
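For orientation, a config fragment with the fields named above might look like the sketch below. The key names come from the instructions in this README; all paths are placeholders, and the actual layout of the repo's config files may differ:

```yaml
# Illustrative fragment only — check the repo's configs/ directory for the real schema.
model:
  whisper_path: /path/to/whisper_encoder      # Whisper speech encoder checkpoint
  beats_path: /path/to/beats_encoder.pt       # BEATs audio encoder checkpoint
  llama_path: /path/to/llm_checkpoint         # backbone LLM checkpoint
  ckpt: /path/to/salmonn_v1.pth               # trained SALMONN checkpoint (decoding only)
```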
Team ByteDance: Xianzhao Chen, Wei Li, Tian Tan, Lu Lu, Zejun Ma
If you find SALMONN useful, please cite our paper:
@inproceedings{
tang2024salmonn,
title={{SALMONN}: Towards Generic Hearing Abilities for Large Language Models},
author={Changli Tang and Wenyi Yu and Guangzhi Sun and Xianzhao Chen and Tian Tan and Wei Li and Lu Lu and Zejun MA and Chao Zhang},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=14rn7HpKVk}
}

