cleanlab

An open-source tool that automatically detects and fixes issues in machine learning datasets

cleanlab is an open-source, data-centric AI toolkit that automatically detects issues such as label errors, outliers, and duplicates in machine learning datasets. It works with image, text, tabular, and other kinds of data, and supports any machine learning model. Beyond surfacing data problems, cleanlab can also train more robust models and assess overall data quality. Built on solid theoretical foundations, it runs efficiently and is easy to use, making it a practical tool for improving data quality and boosting model performance.

<p align="center"> <img src="https://raw.githubusercontent.com/cleanlab/assets/master/cleanlab/cleanlab_logo_open_source.png" width=60% height=60%> </p> <div align="center"> <a href="https://pypi.org/pypi/cleanlab/" target="_blank"><img src="https://img.shields.io/pypi/v/cleanlab.svg" alt="pypi_versions"></a> <a href="https://pypi.org/pypi/cleanlab/" target="_blank"><img src="https://img.shields.io/badge/python-3.8%2B-blue" alt="py_versions"></a> <a href="https://app.codecov.io/gh/cleanlab/cleanlab" target="_blank"><img src="https://codecov.io/gh/cleanlab/cleanlab/branch/master/graph/badge.svg" alt="coverage"></a> <a href="https://github.com/cleanlab/cleanlab/stargazers/" target="_blank"><img src="https://img.shields.io/github/stars/cleanlab/cleanlab?style=social&maxAge=2592000" alt="Github Stars"></a> <a href="https://cleanlab.ai/slack" target="_blank"><img src="https://img.shields.io/static/v1?logo=slack&style=flat&color=white&label=slack&message=join" alt="Slack Community"></a> <a href="https://twitter.com/CleanlabAI" target="_blank"><img src="https://img.shields.io/twitter/follow/CleanlabAI?style=social" alt="Twitter"></a> </div> <h4 align="center"> <p> <a href="https://docs.cleanlab.ai/">Documentation</a> | <a href="https://github.com/cleanlab/examples">Examples</a> | <a href="https://cleanlab.ai/blog/">Blog</a> | <a href="#citation-and-related-publications">Research</a> | <a href="#try-easy-mode-with-cleanlab-studio">Cleanlab Studio</a> | <a href="#join-our-community">Community</a> <p> </h4>

cleanlab helps you clean data and labels by automatically detecting issues in an ML dataset. To facilitate machine learning with messy, real-world data, this data-centric AI package uses your existing models to estimate dataset problems that can be fixed to train even better models.

<p align="center"> <img src="https://raw.githubusercontent.com/cleanlab/assets/master/cleanlab/datalab_issues.png" width=74% height=74%> </p> <p align="center"> Examples of various issues in Cat/Dog dataset <b>automatically detected</b> by cleanlab via this code: </p>
```python
lab = cleanlab.Datalab(data=dataset, label_name="column_name_for_labels")

# Fit any ML model, get its feature_embeddings & pred_probs for your data
lab.find_issues(features=feature_embeddings, pred_probs=pred_probs)

lab.report()
```

Try easy mode with Cleanlab Studio

While this open-source package finds data issues, its utility depends on you having: a good existing ML model + an interface to efficiently fix these issues in your dataset. Providing all these pieces, Cleanlab Studio is a Data Curation platform to find and fix problems in any {image, text, tabular} dataset. Cleanlab Studio automatically runs optimized algorithms from this package on top of AutoML & Foundation models fit to your data, and presents detected issues (+ AI-suggested fixes) in an intelligent data correction interface.

Try it for free! Adopting Cleanlab Studio enables users of this package to:

  • Work 100x faster (1 min to analyze your raw data with zero code or ML work; optionally use Python API)
  • Produce better-quality data (10x more types of issues auto detected & corrected via built-in AI)
  • Accomplish more (auto-label data, deploy ML instantly, audit LLM inputs/outputs, moderate content, ...)
  • Monitor incoming data and detect issues in real-time (integrate your data pipeline on an Enterprise plan)
<p align="center"> <img src="https://raw.githubusercontent.com/cleanlab/assets/master/cleanlab/ml-with-cleanlab-studio.png" alt="The modern AI pipeline automated with Cleanlab Studio"> </p>

Run cleanlab open-source

This cleanlab package runs on Python 3.8+ and supports Linux, macOS, as well as Windows.
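
The latest release can be installed from PyPI (linked in the badges above) with `pip install cleanlab`.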

Practicing data-centric AI can look like this:

  1. Train initial ML model on original dataset.
  2. Utilize this model to diagnose data issues (via cleanlab methods) and improve the dataset.
  3. Train the same model on the improved dataset.
  4. Try various modeling techniques to further improve performance.

Most folks jump from Step 1 → 4, but you may achieve big gains without any change to your modeling code by using cleanlab! Continuously boost performance by iterating Steps 2 → 4 (and try to evaluate with cleaned data).
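
As a minimal sketch of this loop (Steps 1–3), the snippet below pairs cleanlab's `Datalab` with scikit-learn; the digits dataset, the logistic-regression model, and the "drop flagged examples" cleanup strategy are illustrative assumptions rather than requirements.

```python
import pandas as pd
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from cleanlab import Datalab

# Step 1: train an initial model and obtain out-of-sample predicted probabilities.
X, y = load_digits(return_X_y=True)
model = LogisticRegression(max_iter=1000)
pred_probs = cross_val_predict(model, X, y, cv=5, method="predict_proba")

# Step 2: use those model outputs to diagnose issues in the dataset.
lab = Datalab(data=pd.DataFrame({"label": y}), label_name="label")
lab.find_issues(features=X, pred_probs=pred_probs)
lab.report()

# Step 3: clean the data (here: drop examples flagged as likely mislabeled)
# and retrain the same model on the improved dataset.
label_issues = lab.get_issues("label")
keep = ~label_issues["is_label_issue"].to_numpy()
model.fit(X[keep], y[keep])
```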

Use cleanlab with any model and in most ML tasks

All features of cleanlab work with any dataset and any model. Yes, any model: PyTorch, TensorFlow, Keras, JAX, HuggingFace, OpenAI, XGBoost, scikit-learn, etc.

cleanlab is useful across a wide variety of Machine Learning tasks. Specific tasks this data-centric AI package offers dedicated functionality for include:

  1. Binary and multi-class classification
  2. Multi-label classification (e.g. image/document tagging)
  3. Token classification (e.g. entity recognition in text)
  4. Regression (predicting numerical column in a dataset)
  5. Image segmentation (images with per-pixel annotations)
  6. Object detection (images with bounding box annotations)
  7. Classification with data labeled by multiple annotators
  8. Active learning with multiple annotators (suggest which data to label or re-label to improve model most)
  9. Outlier detection (identify atypical data that appears out of distribution)

For other ML tasks, cleanlab can still help you improve your dataset if appropriately applied. See our Example Notebooks and Blog.
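
As one example of the dedicated functionality listed above (item 9, outlier detection), here is a minimal sketch using `cleanlab.outlier.OutOfDistribution`; the random feature arrays are purely illustrative stand-ins for real embeddings from your model.

```python
import numpy as np
from cleanlab.outlier import OutOfDistribution

# Illustrative stand-ins for feature embeddings produced by a trained model.
rng = np.random.default_rng(0)
train_features = rng.normal(size=(500, 32))
new_features = rng.normal(size=(100, 32))

ood = OutOfDistribution()

# Fit on training features and score them; lower scores indicate more atypical examples.
train_scores = ood.fit_score(features=train_features)

# Score new data against the fitted training distribution.
new_scores = ood.score(features=new_features)

# Indices of the 10 most atypical new examples.
print(np.argsort(new_scores)[:10])
```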

So fresh, so cleanlab

Beyond automatically catching all sorts of issues lurking in your data, this data-centric AI package helps you deal with noisy labels and train more robust ML models. Here's an example:

```python
# cleanlab works with **any classifier**. Yup, you can use PyTorch/TensorFlow/OpenAI/XGBoost/etc.
cl = cleanlab.classification.CleanLearning(sklearn.YourFavoriteClassifier())

# cleanlab finds data and label issues in **any dataset**... in ONE line of code!
label_issues = cl.find_label_issues(data, labels)

# cleanlab trains a robust version of your model that works more reliably with noisy data.
cl.fit(data, labels)

# cleanlab estimates the predictions you would have gotten if you had trained with *no* label issues.
cl.predict(test_data)

# A universal data-centric AI tool, cleanlab quantifies class-level issues and overall data quality, for any dataset.
cleanlab.dataset.health_summary(labels, confident_joint=cl.confident_joint)
```
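
To make the snippet above concrete, here is a runnable sketch with scikit-learn; the synthetic dataset, the injected label noise, and the `LogisticRegression` classifier are illustrative assumptions, and any scikit-learn-compatible classifier could be substituted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from cleanlab.classification import CleanLearning

# Synthetic binary-classification data with some deliberately flipped labels.
X, y = make_classification(n_samples=1000, random_state=0)
rng = np.random.default_rng(0)
noisy_idx = rng.choice(len(y), size=50, replace=False)
y[noisy_idx] = 1 - y[noisy_idx]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

cl = CleanLearning(LogisticRegression(max_iter=1000))

# DataFrame flagging likely label issues, with per-example label-quality scores.
label_issues = cl.find_label_issues(X_train, y_train)
print(label_issues[label_issues["is_label_issue"]].head())

# Train a noise-robust model, then predict as if the training labels had been clean.
cl.fit(X_train, y_train)
test_preds = cl.predict(X_test)
```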

cleanlab cleans your data's labels via state-of-the-art confident learning algorithms, published in this paper and blog. See some of the datasets cleaned with cleanlab at labelerrors.com.
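
At its lowest level, confident learning needs just two inputs: the given labels and out-of-sample predicted probabilities from any model. Below is a minimal sketch of that model-agnostic entry point via `cleanlab.filter.find_label_issues`; the tiny hand-written arrays are illustrative placeholders for your own data.

```python
import numpy as np
from cleanlab.filter import find_label_issues

labels = np.array([0, 0, 1, 1, 2, 2])  # given (possibly noisy) labels
pred_probs = np.array([                # out-of-sample predicted probabilities
    [0.90, 0.05, 0.05],
    [0.80, 0.10, 0.10],
    [0.10, 0.85, 0.05],
    [0.75, 0.20, 0.05],                # labeled 1, but the model favors class 0
    [0.05, 0.05, 0.90],
    [0.10, 0.10, 0.80],
])

# Indices of likely label errors, ranked from most to least severe.
issue_indices = find_label_issues(
    labels, pred_probs, return_indices_ranked_by="self_confidence"
)
print(issue_indices)
```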

cleanlab is:

  1. backed by theory -- with provable guarantees of exact label noise estimation, even with imperfect models.
  2. fast -- code is parallelized and scalable.
  3. easy to use -- one line of code to find mislabeled data, bad annotators, outliers, or train noise-robust models.
  4. general -- works with any dataset (text, image, tabular, audio,...) + any model (PyTorch, OpenAI, XGBoost,...) <br/>

<p align="center"> Examples of incorrect given labels in various image datasets <a href="https://l7.curtisnorthcutt.com/label-errors">found and corrected</a> using cleanlab. While these examples are from image datasets, this also works for text, audio, tabular data. </p>

Citation and related publications

cleanlab is based on peer-reviewed research. Here are relevant papers to cite if you use this package:

<details><summary><a href="https://arxiv.org/abs/1911.00068">Confident Learning (JAIR '21)</a> (<b>click to show bibtex</b>) </summary>
@article{northcutt2021confidentlearning,
    title={Confident Learning: Estimating Uncertainty in Dataset Labels},
    author={Curtis G. Northcutt and Lu Jiang and Isaac L. Chuang},
    journal={Journal of Artificial Intelligence Research (JAIR)},
    volume={70},
    pages={1373--1411},
    year={2021}
}
</details> <details><summary><a href="https://arxiv.org/abs/1705.01936">Rank Pruning (UAI '17)</a> (<b>click to show bibtex</b>) </summary>
@inproceedings{northcutt2017rankpruning,
    author={Northcutt, Curtis G. and Wu, Tailin and Chuang, Isaac L.},
    title={Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels},
    booktitle = {Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence},
    series = {UAI'17},
    year = {2017},
    location = {Sydney, Australia},
    numpages = {10},
    url = {http://auai.org/uai2017/proceedings/papers/35.pdf},
    publisher = {AUAI Press},
}
</details> <details><summary><a href="https://people.csail.mit.edu/jonasmueller/info/LabelQuality_icml.pdf"> Label Quality Scoring (ICML '22)</a> (<b>click to show bibtex</b>) </summary>
@inproceedings{kuan2022labelquality,
    title={Model-agnostic label quality scoring to detect real-world label errors},
    author={Kuan, Johnson and Mueller, Jonas},
    booktitle={ICML DataPerf Workshop},
    year={2022}
}
</details> <details><summary><a href="https://arxiv.org/abs/2207.03061"> Out-of-Distribution Detection (ICML '22)</a> (<b>click to show bibtex</b>) </summary>
@inproceedings{kuan2022ood,
    title={Back to the Basics: Revisiting Out-of-Distribution Detection Baselines},
    author={Kuan, Johnson and Mueller, Jonas},
    booktitle={ICML Workshop on Principles of Distribution Shift},
    year={2022}
}
</details> <details><summary><a href="https://arxiv.org/abs/2210.03920"> Token Classification Label Errors (NeurIPS '22)</a> (<b>click to show bibtex</b>) </summary>
@inproceedings{wang2022tokenerrors,
    title={Detecting label errors in token classification data},
    author={Wang, Wei-Chen and Mueller, Jonas},
    booktitle={NeurIPS Workshop on Interactive Learning for Natural Language Processing (InterNLP)},
    year={2022}
}
</details> <details><summary><a href="https://arxiv.org/abs/2210.06812"> CROWDLAB for Data with Multiple Annotators (NeurIPS '22)</a> (<b>click to show bibtex</b>) </summary>
@inproceedings{goh2022crowdlab,
    title={CROWDLAB: Supervised learning to infer consensus labels and quality scores for data with multiple annotators},
    author={Goh, Hui Wen and Tkachenko, Ulyana and Mueller, Jonas},
    booktitle={NeurIPS Human in the Loop Learning Workshop},
    year={2022}
}
</details> <details><summary><a href="https://arxiv.org/abs/2301.11856"> ActiveLab: Active learning with data re-labeling (ICLR '23)</a> (<b>click to show bibtex</b>) </summary>
@inproceedings{goh2023activelab,
    title={ActiveLab: Active Learning with Re-Labeling by Multiple Annotators},
    author={Goh, Hui Wen and Mueller, Jonas},
    booktitle={ICLR Workshop on Trustworthy ML},
    year={2023}
}
</details> <details><summary><a href="https://arxiv.org/abs/2211.13895"> Incorrect Annotations in Multi-Label Classification (ICLR '23)</a> (<b>click to show bibtex</b>) </summary>
@inproceedings{thyagarajan2023multilabel,
    title={Identifying Incorrect Annotations in Multi-Label Classification Data},
    author={Thyagarajan, Aditya and Snorrason, Elías and Northcutt, Curtis and Mueller, Jonas},
    booktitle={ICLR Workshop on Trustworthy ML},
    year={2023}
}
</details> <details><summary><a href="https://arxiv.org/abs/2305.15696"> Detecting Dataset Drift and Non-IID Sampling (ICML '23)</a> (<b>click to show bibtex</b>) </summary>
@inproceedings{cummings2023drift,
    title={Detecting Dataset Drift and Non-IID Sampling via k-Nearest Neighbors},
    author={Cummings, Jesse and Snorrason, Elías and Mueller, Jonas},
    booktitle={ICML Workshop on Data-centric Machine Learning Research},
    year={2023}
}
</details> <details><summary><a href="https://arxiv.org/abs/2305.16583"> Detecting Errors in Numerical Data (ICML '23)</a> (<b>click to show bibtex</b>) </summary>
@inproceedings{zhou2023errors,
    title={Detecting Errors in Numerical Data via any Regression Model},
    author={Zhou, Hang and Mueller, Jonas and Kumar, Mayank and Wang, Jane-Ling and Lei, Jing},
    booktitle={ICML Workshop on Data-centric Machine Learning Research},
    year={2023}
}
</details> <details><summary><a href="https://arxiv.org/abs/2309.00832"> ObjectLab: Mislabeled Images in Object Detection Data (ICML '23)</a> (<b>click to show bibtex</b>) </summary>
@inproceedings{tkachenko2023objectlab,
    title={ObjectLab: Automated Diagnosis of Mislabeled Images in Object Detection Data},
    author={Tkachenko, Ulyana and Thyagarajan, Aditya and Mueller, Jonas},
    booktitle={ICML Workshop on Data-centric Machine Learning Research},
    year={2023}
}
</details>
