Awesome-Multimodal-Large-Language-Models


A curated collection of research resources and the latest advances in Multimodal Large Language Models

This project collects the latest research in the field of Multimodal Large Language Models (MLLMs), including papers, datasets, and evaluation benchmarks. It covers directions such as multimodal instruction tuning, hallucination, and in-context learning, with links to code and demos where available. The project also includes an MLLM survey and the MME and Video-MME evaluation benchmarks, offering researchers a comprehensive reference.



Our MLLM works

🔥🔥🔥 A Survey on Multimodal Large Language Models
Project Page [This Page] | Paper

The first comprehensive survey for Multimodal Large Language Models (MLLMs). :sparkles:

Welcome to add WeChat ID (wmd_ustc) to join our MLLM communication group! :star2:


🔥🔥🔥 VITA: Towards Open-Source Interactive Omni Multimodal LLM

<p align="center"> <img src="./images/vita.png" width="80%" height="80%"> </p>

<font size=7><div align='center' > [🍎 Project Page] [📖 arXiv Paper] [🌼 GitHub] </div></font>

[2024.08.12] We are announcing VITA, the first-ever open-source Multimodal LLM that can process Video, Image, Text, and Audio, while also offering an advanced multimodal interactive experience. 🌟

<b>Omni Multimodal Understanding</b>. VITA demonstrates robust foundational capabilities in multilingual, vision, and audio understanding, as evidenced by its strong performance across a range of both unimodal and multimodal benchmarks. ✨

<b>Non-awakening Interaction</b>. VITA can be activated and respond to user audio questions in the environment without the need for a wake-up word or button. ✨

<b>Audio Interrupt Interaction</b>. VITA simultaneously tracks and filters external queries in real time. This allows users to interrupt the model's generation at any time with a new question, and VITA responds to the new query accordingly. ✨
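An interaction pattern like this can be approximated with a cancellation flag that the decoding loop checks between tokens. The sketch below is purely illustrative and is not VITA's actual implementation; all names are hypothetical:

```python
import threading
import time

def generate(tokens, interrupt: threading.Event):
    """Emit tokens one at a time, stopping early if an interrupt arrives.

    `tokens` stands in for an autoregressive decoder's output stream;
    `interrupt` would be set by a separate listener thread when it
    detects a new user query (hypothetical setup, not VITA's design).
    """
    out = []
    for tok in tokens:
        if interrupt.is_set():  # a new query arrived: stop generating
            break
        out.append(tok)
        time.sleep(0.01)  # stand-in for per-token decode latency
    return out

interrupt = threading.Event()
# Simulate a new audio query arriving shortly after generation starts.
threading.Timer(0.03, interrupt.set).start()
result = generate(["The", "answer", "is", "a", "long", "story"], interrupt)
print(result)  # a timing-dependent prefix of the token list
```

A real system would additionally need to filter background audio and then respond to the new query; the flag-checking loop only captures the cancellation half of the behavior.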


🔥🔥🔥 Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis

<p align="center"> <img src="./images/videomme.jpg" width="80%" height="80%"> </p>

<font size=7><div align='center' > [🍎 Project Page] [📖 arXiv Paper] [📊 Dataset][🏆 Leaderboard] </div></font>

[2024.06.03] We are very proud to launch Video-MME, the first-ever comprehensive evaluation benchmark of MLLMs in Video Analysis! 🌟

It applies to both <b>image MLLMs</b>, i.e., generalizing to multiple images, and <b>video MLLMs</b>. Our leaderboard involves SOTA models like Gemini 1.5 Pro, GPT-4o, GPT-4V, LLaVA-NeXT-Video, InternVL-Chat-V1.5, and Qwen-VL-Max. 🌟

It includes <b>short- (< 2min)</b>, <b>medium- (4min~15min)</b>, and <b>long-term (30min~60min)</b> videos, ranging from <b>11 seconds to 1 hour</b>. ✨

<b>All data are newly collected and annotated by humans, not from any existing video dataset</b>. ✨
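Benchmarks of this kind are typically scored as multiple-choice accuracy, broken down by duration category. A minimal sketch of such a tally (field names are illustrative, not the actual Video-MME schema):

```python
from collections import defaultdict

def score_by_duration(samples):
    """Compute multiple-choice accuracy per video-duration category.

    Each sample is a dict with 'duration' ('short' | 'medium' | 'long'),
    'answer' (ground-truth option letter), and 'prediction' (the model's
    chosen option letter). Field names are illustrative only.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for s in samples:
        total[s["duration"]] += 1
        if s["prediction"].strip().upper() == s["answer"].strip().upper():
            correct[s["duration"]] += 1
    return {d: correct[d] / total[d] for d in total}

samples = [
    {"duration": "short", "answer": "A", "prediction": "A"},
    {"duration": "short", "answer": "B", "prediction": "C"},
    {"duration": "long", "answer": "D", "prediction": "d"},
]
print(score_by_duration(samples))  # {'short': 0.5, 'long': 1.0}
```

Normalizing case before comparison makes the tally robust to models that answer with lowercase option letters.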


🔥🔥🔥 MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
Project Page [Leaderboards] | Paper | :black_nib: Citation

A comprehensive evaluation benchmark for MLLMs. Now the leaderboards include 50+ advanced models, such as Qwen-VL-Max, Gemini Pro, and GPT-4V. :sparkles:

If you want to add your model to our leaderboards, please feel free to email bradyfu24@gmail.com. We will update the leaderboards promptly. :sparkles:

<details><summary>Download MME :star2::star2: </summary>

The benchmark dataset is collected by Xiamen University for academic research only. You can email yongdongluo@stu.xmu.edu.cn to obtain the dataset, subject to the following requirement.

Requirement: A real-name system is encouraged for better academic communication. Your email suffix needs to match your affiliation, such as xx@stu.xmu.edu.cn and Xiamen University. Otherwise, you need to explain why. Please include the information below when sending your application email.

Name: (tell us who you are.)
Affiliation: (the name/url of your university or company)
Job Title: (e.g., professor, PhD, and researcher)
Email: (your email address)
How to use: (only for non-commercial use)
</details>

📑 If you find our projects helpful to your research, please consider citing:

@article{fu2023mme,
  title={MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models},
  author={Fu, Chaoyou and Chen, Peixian and Shen, Yunhang and Qin, Yulei and Zhang, Mengdan and Lin, Xu and Yang, Jinrui and Zheng, Xiawu and Li, Ke and Sun, Xing and others},
  journal={arXiv preprint arXiv:2306.13394},
  year={2023}
}

@article{fu2024vita,
  title={VITA: Towards Open-Source Interactive Omni Multimodal LLM},
  author={Fu, Chaoyou and Lin, Haojia and Long, Zuwei and Shen, Yunhang and Zhao, Meng and Zhang, Yifan and Wang, Xiong and Yin, Di and Ma, Long and Zheng, Xiawu and He, Ran and Ji, Rongrong and Wu, Yunsheng and Shan, Caifeng and Sun, Xing},
  journal={arXiv preprint arXiv:2408.05211},
  year={2024}
}

@article{fu2024video,
  title={Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis},
  author={Fu, Chaoyou and Dai, Yuhan and Luo, Yongdong and Li, Lei and Ren, Shuhuai and Zhang, Renrui and Wang, Zihan and Zhou, Chenyu and Shen, Yunhang and Zhang, Mengdan and others},
  journal={arXiv preprint arXiv:2405.21075},
  year={2024}
}

@article{yin2023survey,
  title={A survey on multimodal large language models},
  author={Yin, Shukang and Fu, Chaoyou and Zhao, Sirui and Li, Ke and Sun, Xing and Xu, Tong and Chen, Enhong},
  journal={arXiv preprint arXiv:2306.13549},
  year={2023}
}
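Assuming the entries above are saved to a `references.bib` file, they can be cited in a LaTeX document as follows:

```latex
\documentclass{article}
\begin{document}
Multimodal LLMs are surveyed in~\cite{yin2023survey}; MME~\cite{fu2023mme}
and Video-MME~\cite{fu2024video} provide evaluation benchmarks, and
VITA~\cite{fu2024vita} is an open-source interactive omni-modal model.
\bibliographystyle{plain}
\bibliography{references}
\end{document}
```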


<font size=5><center><b> Table of Contents </b> </center></font>


Awesome Papers

Multimodal Instruction Tuning

| Title | Venue | Date | Code | Demo |
|:------|:-----:|:----:|:----:|:----:|
| mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models | arXiv | 2024-08-09 | Github | - |
| VITA: Towards Open-Source Interactive Omni Multimodal LLM | arXiv | 2024-08-09 | Github | - |
| LLaVA-OneVision: Easy Visual Task Transfer | arXiv | 2024-08-06 | Github | Demo |
| MiniCPM-V: A GPT-4V Level MLLM on Your Phone | arXiv | 2024-08-03 | Github | Demo |
| VILA^2: VILA Augmented VILA | arXiv | 2024-07-24 | - | - |
| SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models | arXiv | 2024-07-22 | - | - |
| EVLM: An Efficient Vision-Language Model for Visual Understanding | arXiv | 2024-07-19 | - | - |
| InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output | arXiv | 2024-07-03 | Github | Demo |
| OMG-LLaVA: Bridging Image-level, Object-level, Pixel-level Reasoning and Understanding | arXiv | 2024-06-27 | Github | Local Demo |
| Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs | arXiv | 2024-06-24 | Github | Local Demo |
| Long Context Transfer from Language to Vision | arXiv | 2024-06-24 | Github | Local Demo |
| Unveiling Encoder-Free Vision-Language Models | arXiv | 2024-06-17 | Github | Local Demo |
| Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models | arXiv | 2024-06-12 | Github | - |
| VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs | arXiv | 2024-06-11 | Github | Local Demo |
| Parrot: Multilingual Visual Instruction Tuning | arXiv | 2024-06-04 | Github | - |
| Ovis: Structural Embedding Alignment for Multimodal Large Language Model | arXiv | 2024-05-31 | Github | - |
| Matryoshka Query Transformer for Large Vision-Language Models | arXiv | 2024-05-29 | Github | Demo |
| ConvLLaVA: Hierarchical Backbones as Visual Encoder for Large Multimodal Models | arXiv | 2024-05-24 | Github | - |
| Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models | arXiv | 2024-05-24 | Github | Demo |
| Libra: Building Decoupled | | | | |
