MMSA

MMSA is a unified framework for Multimodal Sentiment Analysis.

Features

  • Train, test and compare multiple MSA models in a unified framework.
  • Supports 15 MSA models, including recent works.
  • Supports 3 MSA datasets: MOSI, MOSEI, and CH-SIMS.
  • Easy to use: provides Python APIs and command-line tools.
  • Experiment with fully customized multimodal features extracted by the MMSA-FET toolkit.

1. Get Started

Note: From version 2.0, we packaged the project and uploaded it to PyPI in the hope of making it easier to use. If you don't like the new structure, you can always switch back to the v_1.0 branch.

1.1 Use Python API

  • Run pip install MMSA in your python virtual environment.

  • Import and use in any python file:

    from MMSA import MMSA_run, get_config_regression

    # run LMF on MOSI with default hyper parameters
    MMSA_run('lmf', 'mosi', seeds=[1111, 1112, 1113], gpu_ids=[0])

    # tune Self_mm on MOSEI with default hyper parameter range
    MMSA_run('self_mm', 'mosei', seeds=[1111], gpu_ids=[1])

    # run TFN on MOSI with an altered config
    config = get_config_regression('tfn', 'mosi')
    config['post_fusion_dim'] = 32
    config['featurePath'] = '~/feature.pkl'
    MMSA_run('tfn', 'mosi', config=config, seeds=[1111])

    # run MTFN on SIMS with a custom config file
    MMSA_run('mtfn', 'sims', config_file='./config.json')
  • For more detailed usage, please refer to APIs.

1.2 Use Commandline Tool

  • Run pip install MMSA in your python virtual environment.

  • Use from command line:

    # show usage
    $ python -m MMSA -h

    # train & test LMF on MOSI with default parameters
    $ python -m MMSA -d mosi -m lmf -s 1111 -s 1112

    # tune TFN on MOSEI 30 times with custom save directories
    $ python -m MMSA -d mosei -m tfn -t -tt 30 --model-save-dir ./models --res-save-dir ./results

    # train & test self_mm on SIMS with custom audio features, using GPU 2
    $ python -m MMSA -d sims -m self_mm -Fa ./Features/Feature-A.pkl --gpu-ids 2
  • For more detailed usage, please refer to Commandline Arguments.

1.3 Clone & Edit the Code

  • Clone this repo and install requirements.
    $ git clone https://github.com/thuiar/MMSA
  • Edit the code to your needs. See Code Structure for a basic overview of how the code is organized.
  • After editing, run the following commands:
    $ cd MMSA-master  # make sure you're in the top directory
    $ pip install .
  • Then run the code as described in the sections above.
  • To further change the code, you need to re-install the package:
    $ pip uninstall MMSA
    $ pip install .
  • If you'd rather run the code without installation (as in v_1.0), please refer to Run Code without Installation.

2. Datasets

MMSA currently supports the MOSI, MOSEI, and CH-SIMS datasets. Use the following links to download raw videos, feature files and label files. You don't need to download raw videos if you're not planning to run end-to-end tasks.

SHA-256 for feature files:

`MOSI/Processed/unaligned_50.pkl`: `78e0f8b5ef8ff71558e7307848fc1fa929ecb078203f565ab22b9daab2e02524`
`MOSI/Processed/aligned_50.pkl`: `d3994fd25681f9c7ad6e9c6596a6fe9b4beb85ff7d478ba978b124139002e5f9`
`MOSEI/Processed/unaligned_50.pkl`: `ad8b23d50557045e7d47959ce6c5b955d8d983f2979c7d9b7b9226f6dd6fec1f`
`MOSEI/Processed/aligned_50.pkl`: `45eccfb748a87c80ecab9bfac29582e7b1466bf6605ff29d3b338a75120bf791`
`SIMS/Processed/unaligned_39.pkl`: `c9e20c13ec0454d98bb9c1e520e490c75146bfa2dfeeea78d84de047dbdd442f`
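
To guard against corrupted downloads, you can verify the files against the digests above before training. Below is a minimal sketch using Python's standard `hashlib`; the local `Datasets/...` paths are assumptions, so adjust them to wherever you stored the files and add the remaining entries as needed.

    import hashlib

    # expected SHA-256 digests, copied from the list above (extend with the other files)
    EXPECTED = {
        "Datasets/MOSI/Processed/unaligned_50.pkl":
            "78e0f8b5ef8ff71558e7307848fc1fa929ecb078203f565ab22b9daab2e02524",
        "Datasets/SIMS/Processed/unaligned_39.pkl":
            "c9e20c13ec0454d98bb9c1e520e490c75146bfa2dfeeea78d84de047dbdd442f",
    }

    for path, expected in EXPECTED.items():
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # read in 1 MB chunks so large feature files aren't loaded fully into memory
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        status = "OK" if h.hexdigest() == expected else "MISMATCH"
        print(f"{path}: {status}")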

MMSA uses feature files that are organized as follows:

{ "train": { "raw_text": [], # raw text "audio": [], # audio feature "vision": [], # video feature "id": [], # [video_id$_$clip_id, ..., ...] "text": [], # bert feature "text_bert": [], # word ids for bert "audio_lengths": [], # audio feature lenth(over time) for every sample "vision_lengths": [], # same as audio_lengths "annotations": [], # strings "classification_labels": [], # Negative(0), Neutral(1), Positive(2). Deprecated in v_2.0 "regression_labels": [] # Negative(<0), Neutral(0), Positive(>0) }, "valid": {***}, # same as "train" "test": {***}, # same as "train" }

Note: For MOSI and MOSEI, the pre-extracted text features come from BERT, unlike the original GloVe features provided in the CMU-Multimodal-SDK.

Note: If you wish to extract customized multimodal features, please try out our MMSA-FET toolkit.
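
For reference, a rough sketch of driving MMSA-FET from Python is shown below. The `FeatureExtractionTool` class, the `"librosa"` default config, and the `run_single` / `run_dataset` calls are assumptions based on the MMSA-FET README; please check that repository's documentation for the exact API before relying on it.

    from MSA_FET import FeatureExtractionTool

    # assumed API: initialize with the default librosa config (audio features only)
    fet = FeatureExtractionTool("librosa")

    # extract features for a single video clip
    feature = fet.run_single("input.mp4")
    print(feature.keys())

    # extract features for a whole dataset and save a pkl file that MMSA can
    # consume via config['featurePath'] (paths here are placeholders)
    fet.run_dataset(dataset_dir="~/MOSI", out_file="output/feature.pkl")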

3. Supported MSA Models

| Type        | Model Name              | From                        | Published         |
|-------------|--------------------------|-----------------------------|-------------------|
| Single-Task | TFN                      | Tensor-Fusion-Network       | EMNLP 2017        |
| Single-Task | EF_LSTM                  | MultimodalDNN               | ACL 2018 Workshop |
| Single-Task | LF_DNN                   | MultimodalDNN               | ACL 2018 Workshop |
| Single-Task | LMF                      | Low-rank-Multimodal-Fusion  | ACL 2018          |
| Single-Task | MFN                      | Memory-Fusion-Network       | AAAI 2018         |
| Single-Task | Graph-MFN                | Graph-Memory-Fusion-Network | ACL 2018          |
| Single-Task | MulT (without CTC)       | Multimodal-Transformer      | ACL 2019          |
| Single-Task | MFM                      | MFM                         | ICLR 2019         |
| Multi-Task  | MLF_DNN                  | MMSA                        | ACL 2020          |
| Multi-Task  | MTFN                     | MMSA                        | ACL 2020          |
| Multi-Task  | MLMF                     | MMSA                        | ACL 2020          |
| Multi-Task  | SELF_MM                  | Self-MM                     | AAAI 2021         |
| Single-Task | BERT-MAG                 | MAG-BERT                    | ACL 2020          |
| Single-Task | MISA                     | MISA                        | ACMMM 2020        |
| Single-Task | MMIM                     | MMIM                        | EMNLP 2021        |
| Single-Task | BBFN (Work in Progress)  | BBFN                        | ICMI 2021         |
| Single-Task | CENET                    | CENET                       | TMM 2022          |
| Multi-Task  | TETFN                    | TETFN                       | PR 2023           |

4. Results

Baseline results are reported in results/result-stat.md.

5. Citation

Please cite our papers if you find our work useful for your research:

@inproceedings{yu2020ch,
  title={CH-SIMS: A Chinese Multimodal Sentiment Analysis Dataset with Fine-grained Annotation of Modality},
  author={Yu, Wenmeng and Xu, Hua and Meng, Fanyang and Zhu, Yilin and Ma, Yixiao and Wu, Jiele and Zou, Jiyun and Yang, Kaicheng},
  booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
  pages={3718--3727},
  year={2020}
}
@inproceedings{yu2021learning,
  title={Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis},
  author={Yu, Wenmeng and Xu, Hua and Yuan, Ziqi and Wu, Jiele},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={35},
  number={12},
  pages={10790--10797},
  year={2021}
}
