Awesome-Learning-with-Label-Noise


A roundup of research resources on learning with noisy labels

The Awesome-Learning-with-Label-Noise project brings together key resources in the field of learning with noisy labels. It collects relevant papers, code, and tools published since 2008, covering a wide range of approaches to handling label noise. The list serves as a comprehensive reference for researchers and developers, helping to address label-noise problems and to advance machine learning on imperfect data.

A curated list of resources for Learning with Noisy Labels


Papers & Code

  • 2008-NIPS - Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. [Paper] [Code]

  • 2009-ICML - Supervised learning from multiple experts: whom to trust when everyone lies a bit. [Paper]

  • 2011-NIPS - Bayesian Bias Mitigation for Crowdsourcing. [Paper]

  • 2012-ICML - Learning to Label Aerial Images from Noisy Data. [Paper]

  • 2013-NIPS - Learning with Multiple Labels. [Paper]

  • 2013-NIPS - Learning with Noisy Labels. [Paper] [Code]

  • 2014-ML - Learning from multiple annotators with varying expertise. [Paper]

  • 2014 - A Comprehensive Introduction to Label Noise. [Paper]

  • 2014 - Learning from Noisy Labels with Deep Neural Networks. [Paper]

  • 2015-ICLR_W - Training Convolutional Networks with Noisy Labels. [Paper] [Code]

  • 2015-CVPR - Learning from Massive Noisy Labeled Data for Image Classification. [Paper] [Code]

  • 2015-CVPR - Visual recognition by learning from web data: A weakly supervised domain generalization approach. [Paper] [Code]

  • 2015-ICLR_W - Training Deep Neural Networks on Noisy Labels with Bootstrapping. [Paper] [Loss-Code-Unofficial-1] [Loss-Code-Unofficial-2] [Code-Keras] (see the loss sketch after this list)

  • 2015-ICCV - Webly supervised learning of convolutional networks. [Paper] [Project Page]

  • 2015-TPAMI - Classification with noisy labels by importance reweighting. [Paper] [Code]

  • 2015-NIPS - Learning with Symmetric Label Noise: The Importance of Being Unhinged. [Paper] [Loss-Code-Unofficial]

  • 2015-Arxiv - Making Risk Minimization Tolerant to Label Noise. [Paper]

  • 2015 - Learning Discriminative Reconstructions for Unsupervised Outlier Removal. [Paper] [Code]

  • 2015-TNLS - Rboost: label noise-robust boosting algorithm based on a nonconvex loss function and the numerically stable base learners. [Paper]

  • 2016-AAAI - Robust semi-supervised learning through label aggregation. [Paper]

  • 2016-ICLR - Auxiliary Image Regularization for Deep CNNs with Noisy Labels. [Paper] [Code]

  • 2016-CVPR - Seeing through the Human Reporting Bias: Visual Classifiers from Noisy Human-Centric Labels. [Paper] [Code]

  • 2016-ICML - Loss factorization, weakly supervised learning and label noise robustness. [Paper]

  • 2016-RL - On the convergence of a family of robust losses for stochastic gradient descent. [Paper]

  • 2016-NC - Noise detection in the Meta-Learning Level. [Paper] [Additional information]

  • 2016-ECCV - The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition. [Paper] [Project Page]

  • 2016-ICASSP - Training deep neural-networks based on unreliable labels. [Paper] [Poster] [Code-Unofficial]

  • 2016-ICDM - Learning deep networks from noisy labels with dropout regularization. [Paper] [Code]

  • 2016-KBS - A robust multi-class AdaBoost algorithm for mislabeled noisy data. [Paper]

  • 2017-AAAI - Robust Loss Functions under Label Noise for Deep Neural Networks. [Paper]

  • 2017-PAKDD - On the Robustness of Decision Tree Learning under Label Noise. [Paper]

  • 2017-ICLR - Training deep neural-networks using a noise adaptation layer. [Paper] [Code]

  • 2017-ICLR - Who Said What: Modeling Individual Labelers Improves Classification. [Paper] [Code]

  • 2017-CVPR - Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach. [Paper] [Code] (see the forward-correction sketch after this list)

  • 2017-CVPR - Learning From Noisy Large-Scale Datasets With Minimal Supervision. [Paper]

  • 2017-CVPR - Lean crowdsourcing: Combining humans and machines in an online system. [Paper] [Code]

  • 2017-CVPR - Attend in groups: a weakly-supervised deep learning framework for learning from web data. [Paper] [Code]

  • 2017-ICML - Robust Probabilistic Modeling with Bayesian Data Reweighting. [Paper] [Code]

  • 2017-ICCV - Learning From Noisy Labels With Distillation. [Paper] [Code]

  • 2017-NIPS - Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks. [Paper]

  • 2017-NIPS - Active bias: Training more accurate neural networks by emphasizing high variance samples. [Paper] [Code]

  • 2017-NIPS - Decoupling "when to update" from "how to update". [Paper] [Code]

  • 2017-IEEE-TIFS - A Light CNN for Deep Face Representation with Noisy Labels. [Paper] [Code-Pytorch] [Code-Keras] [Code-Tensorflow]

  • 2017-TNLS - Improving Crowdsourced Label Quality Using Noise Correction. [Paper]

  • 2017-ML - Learning to Learn from Weak Supervision by Full Supervision. [Paper] [Code]

  • 2017-ML - Avoiding your teacher's mistakes: Training neural networks with controlled weak supervision. [Paper]

  • 2017-Arxiv - Deep Learning is Robust to Massive Label Noise. [Paper]

  • 2017-Arxiv - Fidelity-weighted learning. [Paper]

  • 2017 - Self-Error-Correcting Convolutional Neural Network for Learning with Noisy Labels. [Paper]

  • 2017-Arxiv - Learning with confident examples: Rank pruning for robust classification with noisy labels. [Paper] [Code]

  • 2017-Arxiv - Regularizing neural networks by penalizing confident output distributions. [Paper] (see the sketch after this list)

  • 2017 - Learning with Auxiliary Less-Noisy Labels. [Paper]

  • 2018-AAAI - Deep learning from crowds. [Paper]

  • 2018-ICLR - mixup: Beyond Empirical Risk Minimization. [Paper] [Code] (see the sketch after this list)

  • 2018-ICLR - Learning From Noisy Singly-labeled Data. [Paper] [Code]

  • 2018-ICLR_W - How Do Neural Networks Overcome Label Noise? [Paper]

  • 2018-CVPR - CleanNet: Transfer Learning for Scalable Image Classifier Training with Label Noise. [Paper] [Code]

  • 2018-CVPR - Joint Optimization Framework for Learning with Noisy Labels. [Paper] [Code] [Code-Unofficial-Pytorch]

  • 2018-CVPR - Iterative Learning with Open-set Noisy Labels. [Paper] [Code]

  • 2018-ICML - MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels. [Paper] [Code]

  • 2018-ICML - Learning to Reweight Examples for Robust Deep Learning. [Paper] [Code] [Code-Unofficial-PyTorch]

  • 2018-ICML - Dimensionality-Driven Learning with Noisy Labels. [Paper] [Code]

  • 2018-ECCV - CurriculumNet: Weakly Supervised Learning from Large-Scale Web Images. [Paper] [Code]

  • 2018-ECCV - Deep Bilevel Learning. [Paper] [Code]

  • 2018-ECCV - Learning with Biased Complementary Labels. [Paper] [Code]

  • 2018-ISBI - Training a neural network based on unreliable human annotation of medical images. [Paper]

  • 2018-WACV - Iterative Cross Learning on Noisy Labels. [Paper]

  • 2018-WACV - A semi-supervised two-stage approach to learning from noisy labels.
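
Unofficial loss sketches

A few of the entries above name loss formulations compact enough to write down directly. The snippets below are minimal, unofficial PyTorch sketches of those formulations, not the authors' released code.

For the bootstrapping entry (Reed et al.), the soft variant blends the observed label with the model's own prediction. Detaching the prediction when forming the target and the default beta=0.95 are assumptions taken from common reimplementations.

```python
import torch
import torch.nn.functional as F

def soft_bootstrapping_loss(logits, noisy_targets, beta=0.95):
    """Soft bootstrapping loss in the spirit of Reed et al. (2015).

    The observed (possibly noisy) one-hot label t is blended with the model's
    own prediction q, and cross-entropy is taken against q:
        L = -sum_k [beta * t_k + (1 - beta) * q_k] * log q_k
    logits: (N, C) raw scores; noisy_targets: (N,) integer labels.
    """
    log_q = F.log_softmax(logits, dim=1)
    q = log_q.exp().detach()                     # model prediction (detached: an assumption)
    t = F.one_hot(noisy_targets, num_classes=logits.size(1)).float()
    mixed = beta * t + (1.0 - beta) * q          # bootstrapped target
    return -(mixed * log_q).sum(dim=1).mean()
```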
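
For the loss-correction entry (Patrini et al., CVPR 2017), the forward correction is short once a noise transition matrix is available. A minimal sketch, assuming a known row-stochastic matrix T with T[i, j] ≈ P(observed label j | true label i); in the paper T is estimated from data rather than given.

```python
import torch
import torch.nn.functional as F

def forward_corrected_cross_entropy(logits, noisy_targets, T):
    """Forward loss correction: push the predicted clean-class posterior
    through the transition matrix T and train against the noisy labels.
    logits: (N, C); noisy_targets: (N,); T: (C, C) row-stochastic tensor.
    """
    p_clean = F.softmax(logits, dim=1)     # predicted clean-label posterior
    p_noisy = p_clean @ T                  # implied distribution over noisy labels
    p_noisy = p_noisy.clamp_min(1e-12)     # numerical safety before the log
    return F.nll_loss(p_noisy.log(), noisy_targets)
```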
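
For the confidence-penalty entry (Pereyra et al.), the regularizer subtracts a scaled entropy bonus from the usual cross-entropy so that over-confident output distributions are penalized. A minimal sketch; beta=0.1 is an illustrative weight, not a value prescribed by the paper for any particular dataset.

```python
import torch
import torch.nn.functional as F

def confidence_penalized_cross_entropy(logits, targets, beta=0.1):
    """Cross-entropy minus a scaled entropy bonus: L = CE(q, y) - beta * H(q)."""
    log_q = F.log_softmax(logits, dim=1)
    entropy = -(log_q.exp() * log_q).sum(dim=1).mean()  # H(q), averaged over the batch
    return F.nll_loss(log_q, targets) - beta * entropy
```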
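
For the mixup entry, the method is a data-level augmentation rather than a loss: pairs of examples are blended with a Beta-distributed coefficient and the two cross-entropy terms are combined with the same coefficient. A minimal sketch of one batch step; alpha=0.2 is an illustrative value, and the random in-batch pairing follows the pattern used in common implementations.

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_step(model, x, y, alpha=0.2):
    """One mixup training step on a batch (x, y) of inputs and integer labels."""
    lam = float(np.random.beta(alpha, alpha))   # mixing coefficient ~ Beta(alpha, alpha)
    perm = torch.randperm(x.size(0))            # random pairing within the batch
    mixed_x = lam * x + (1.0 - lam) * x[perm]
    logits = model(mixed_x)
    return lam * F.cross_entropy(logits, y) + (1.0 - lam) * F.cross_entropy(logits, y[perm])
```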
