HistoSSLscaling


A new self-supervised learning approach for histopathology images

The HistoSSLscaling project develops self-supervised learning methods based on masked image modeling for histopathology image analysis. Its Phikon model is pre-trained on 40 million pan-cancer histology tiles and performs strongly on a wide range of downstream tasks. The project provides pre-trained models, code and dataset features to support computational pathology research.

<div align="center">
<h1>Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling</h1>
</div>

<details>
<summary>
<b>Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling</b>, MedRxiv, July 2023.

[MedRxiv] [Project page] [Paper]

</summary>

Filiot, A., Ghermi, R., Olivier, A., Jacob, P., Fidon, L., Kain, A. M., Saillard, C., & Schiratti, J.-B. (2023). Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling. MedRxiv.

```bibtex
@article{Filiot2023scalingwithMIM,
  author       = {Alexandre Filiot and Ridouane Ghermi and Antoine Olivier and Paul Jacob and Lucas Fidon and Alice Mac Kain and Charlie Saillard and Jean-Baptiste Schiratti},
  title        = {Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling},
  elocation-id = {2023.07.21.23292757},
  year         = {2023},
  doi          = {10.1101/2023.07.21.23292757},
  publisher    = {Cold Spring Harbor Laboratory Press},
  url          = {https://www.medrxiv.org/content/early/2023/07/26/2023.07.21.23292757v2},
  eprint       = {https://www.medrxiv.org/content/early/2023/07/26/2023.07.21.23292757v2.full.pdf},
  journal      = {medRxiv}
}
```
</details>

## Update :tada: Phikon release on Hugging Face :tada:

We released our Phikon model on Hugging Face. Check out our community blog post! We also provide a Colab notebook to perform weakly-supervised learning on Camelyon16 and fine-tuning with LoRA on NCT-CRC-HE using Phikon.

Here is a code snippet to perform feature extraction using Phikon.

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, ViTModel

# load an image
image = Image.open("assets/example.tif")

# load phikon
image_processor = AutoImageProcessor.from_pretrained("owkin/phikon")
model = ViTModel.from_pretrained("owkin/phikon", add_pooling_layer=False)

# process the image
inputs = image_processor(image, return_tensors="pt")

# get the features
with torch.no_grad():
    outputs = model(**inputs)

features = outputs.last_hidden_state[:, 0, :]  # (1, 768) shape
```
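The Colab notebook linked above walks through LoRA fine-tuning in full; the sketch below only illustrates what such a setup could look like with the `peft` library. The `num_labels=9` head (the nine NCT-CRC-HE tissue classes) and the LoRA hyperparameters are illustrative assumptions, not the notebook's exact recipe.

```python
from peft import LoraConfig, get_peft_model
from transformers import ViTForImageClassification

# load Phikon with a (randomly initialised) classification head;
# num_labels=9 assumes the nine NCT-CRC-HE tissue classes
model = ViTForImageClassification.from_pretrained("owkin/phikon", num_labels=9)

# inject low-rank adapters into the attention projections; everything else stays frozen
lora_config = LoraConfig(
    r=16,                               # illustrative rank
    lora_alpha=16,
    target_modules=["query", "value"],  # ViT attention projection layers
    lora_dropout=0.1,
    bias="none",
    modules_to_save=["classifier"],     # also train the new classification head
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()      # only a small fraction of weights are trainable

# `model` can then be fine-tuned on NCT-CRC-HE patches, e.g. with transformers.Trainer
```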

Official PyTorch Implementation and pre-trained models for Scaling Self-Supervised Learning for Histopathology with Masked Image Modeling. This minimalist repository aims to:

  • Publicly release the weights of our Vision Transformer Base (ViT-B) model Phikon pre-trained with iBOT on 40M pan-cancer histology tiles from TCGA. Phikon achieves state-of-the-art performance on a large variety of downstream tasks compared to other SSL frameworks available in the literature.

⚠️ Addendum :warning:

From 09.01.2023 to 10.30.2023, this repository instructed users to load the student backbone; please use the teacher backbone instead, as it produces better results.

```python
# feature extraction snippet with `rl_benchmarks` repository
from PIL import Image

from rl_benchmarks.models import iBOTViT

# instantiate iBOT ViT-B Pancancer model, aka Phikon
# /!\ please use the "teacher" encoder which produces better results!
weights_path = "/<your_root_dir>/weights/ibot_vit_base_pancan.pth"
ibot_base_pancancer = iBOTViT(architecture="vit_base_pancan", encoder="teacher", weights_path=weights_path)

# load an image and transform it into a normalized tensor
image = Image.open("assets/example.tif")       # (224, 224, 3), uint8
tensor = ibot_base_pancancer.transform(image)  # (3, 224, 224), torch.float32
batch = tensor.unsqueeze(0)                    # (1, 3, 224, 224), torch.float32

# compute the 768-d features
features = ibot_base_pancancer(batch).detach().cpu().numpy()
assert features.shape == (1, 768)
```
  • Publicly release the histology features of our ViT-based iBOT models (iBOT[ViT-S]COAD, iBOT[ViT-B]COAD, iBOT[ViT-B]PanCancer, iBOT[ViT-L]COAD) for i) 11 TCGA cohorts and the Camelyon16 slide dataset; and ii) the NCT-CRC and Camelyon17-WILDS patch datasets.
  • Reproduce the results from our publication, including feature extraction and clinical data processing, cross-validation experiments, and results generation.
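Complementing the single-tile `rl_benchmarks` snippet in the addendum above, here is a minimal sketch of how batched feature extraction over a folder of tiles could look. The `TileFolder` dataset, the `tiles/` path and the batch size are illustrative assumptions, not part of the repository.

```python
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset

from rl_benchmarks.models import iBOTViT


class TileFolder(Dataset):
    """Minimal dataset over a flat folder of tile images (illustrative)."""

    def __init__(self, folder, transform):
        self.paths = sorted(Path(folder).glob("*.tif"))
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        return self.transform(Image.open(self.paths[idx]))  # (3, 224, 224) float tensor


# instantiate Phikon exactly as in the addendum snippet above (teacher encoder)
weights_path = "/<your_root_dir>/weights/ibot_vit_base_pancan.pth"
model = iBOTViT(architecture="vit_base_pancan", encoder="teacher", weights_path=weights_path)

# "tiles/" and the batch size are illustrative
loader = DataLoader(TileFolder("tiles/", model.transform), batch_size=32)

with torch.no_grad():
    features = np.concatenate([model(batch).cpu().numpy() for batch in loader])  # (N_tiles, 768)
```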

## Abstract

<details>
<summary>
Read full abstract from MedRxiv.

main_figure

</summary>

Computational pathology is revolutionizing the field of pathology by integrating advanced computer vision and machine learning technologies into diagnostic workflows. Recently, Self-Supervised Learning (SSL) has emerged as a promising solution to learn representations from histology patches, leveraging large volumes of unannotated whole slide images (WSI). In particular, Masked Image Modeling (MIM) showed remarkable results and robustness over purely contrastive learning methods. In this work, we explore the application of MIM to histology using iBOT, a self-supervised transformer-based framework. Through a wide range of downstream tasks over seven cancer indications, we provide recommendations on the pre-training of large models for histology data using MIM. First, we demonstrate that in-domain pre-training with iBOT outperforms both ImageNet pre-training and a model pre-trained with a purely contrastive learning objective, MoCo V2. Second, we show that Vision Transformers (ViT), when scaled appropriately, have the capability to learn pan-cancer representations that benefit a large variety of downstream tasks. Finally, our iBOT ViT-Base model, pre-trained on more than 40 million histology images from 16 different cancer types, achieves state-of-the-art performance in most weakly-supervised WSI classification tasks compared to other SSL frameworks. Our code, models and features are publicly available at https://github.com/owkin/HistoSSLscaling.

</details>

## Data structure

### Download

You can download the data necessary to use the present code and reproduce our results here:

Please create weights, raw and preprocessed folders containing the contents of the different downloads. This step may take time depending on your network bandwidth (the full folder is 1.2 TB). You can use rclone to download the folder from a remote machine (preferably inside a tmux session).

### Description

The bucket contains three main folders: weights, raw and preprocessed. The weights folder contains the weights for iBOT[ViT-B]PanCancer (our best ViT-B iBOT model). Other models from the literature can be retrieved from the corresponding GitHub repositories:

```
weights/
└── ibot_vit_base_pancan.pth          # Ours
```

The raw folder contains two subfolders, for slide-level and tile-level downstream tasks.

  • Slide-level: each cohort contains 2 folders, clinical and slides. We provide clinical data but not raw slides. The folder structure and file names of raw slides and patches are unchanged from the original sources (i.e. TCGA, Camelyon16, NCT-CRC and Camelyon17-WILDS).
  • Tile-level: each cohort contains 2 folders, clinical and patches. We only provide clinical data (i.e. labels), not the patch datasets.

> [!WARNING]
> We don't provide raw slides or patches (the slides and patches folders are empty). You can download raw slides and patches here:

Once you have downloaded the data, please follow the same folder structure as indicated below (without modifying folder or file names relative to the original download).

```
raw/
├── slides_classification               # slides classification tasks
===============================================================================
│   ├── CAMELYON16_FULL                 # cohort
│   │   ├── clinical                    # clinical data (for labels)
│   │   │   ├── test_clinical_data.csv
│   │   │   └── train_clinical_data.csv
│   │   └── slides                      # raw slides (not provided)
│   │        ├── Normal_001.tif
│   │        ├── Normal_002.tif...
│   └── TCGA
│       ├── tcga_statistics.pk          # For each cohort and label, list (n_patients, n_slides, labels_distribution)
│       ├── clinical                    # for TCGA, clinical data is divided into subfolders
│       │   ├── hrd
│       │   │   ├── hrd_labels_tcga_brca.csv
│       │   │   └── hrd_labels_tcga_ov.csv
│       │   ├── msi
│       │   │   ├── msi_labels_tcga_coad.csv
│       │   │   ├── msi_labels_tcga_read.csv...
│       │   ├── subtypes
│       │   │   ├── brca_tcga_pan_can_atlas_2018_clinical_data.tsv.gz
│       │   │   ├── coad_tcga_pan_can_atlas_2018_clinical_data.tsv.gz...
│       │   └── survival
│       │       ├── survival_labels_tcga_brca.csv
│       │       ├── survival_labels_tcga_coad.csv...
│       └── slides
│           └── parafine
│               ├── TCGA_BRCA
│               │   ├── 03627311-e413-4218-b836-177abdfc3911
│               │   │   └── TCGA-XF-AAN7-01Z-00-DX1.B8EDF045-604C-48CB-8E54-A60564CAE2AD.svs
...

└── tiles_classification                # tiles classification tasks
===============================================================================
    ├── CAMELYON17-WILDS_FULL           # cohort
    │   ├── clinical                    # clinical data (for labels)
    │   │    └── metadata.csv
    │   └── patches                     # patches (not provided)
    │        ├── patient_004_node_4...
    │        │   ├── patch_patient_004_node_4_x_10016_y_16704.png...
    └── NCT-CRC_FULL
        ├── labels                      # here the labels are set using the folders architecture
        │   └── dict_labels.pkl
        └── patches
            ├── NCT-CRC-VAL-HE-7K
            │    ├── ADI...
            │    │    ├── ADI-TCGA-AAICEQFN.tif...
            └── NCT-CRC-HE-100K-NONORM
                 ├── ADI...
                 │    ├── ADI-AAAFLCLY.tif...
```

The preprocessed folder contains two subfolders for slide-level and tile-level downstream tasks.

  • Slide-level: for each feature extractor and dataset, we provide coordinates and features. Coordinates are provided as (N_tiles_slide, 3) numpy arrays where the first 3 columns correspond to (tile_level, x_coordinate, y_coordinate). Features are provided as (N_tiles_slide, 3+d) numpy arrays, the d last columns being the model's features (the first 3 are the previous coordinates); see the loading sketch after this list. Coordinates are meant to extract the same tiles as in our publication but are not needed for downstream experiments (only features are needed). Note that coordinates are divided into coords_224, coords_256 and coords_4096, corresponding to 224 x 224 tiles (iBOT, CTransPath and ResNet models), 256 x 256 tiles (DINO models) and 4096 x 4096 tiles (HIPT), respectively.

> [!NOTE]
> We provide all matter (tissue) tiles for each slide. All tiles were extracted at 0.5 micrometers/pixel (20x magnification), except for CTransPath (mpp = 1.0, following the authors' recommendation).

> [!WARNING]
> The tile_level is computed with openslide.deepzoom.DeepZoomGenerator through the following schematic syntax:

```python
from openslide import open_slide
from openslide.deepzoom import DeepZoomGenerator

slide = open_slide("<slide_path>")
dzg = DeepZoomGenerator(slide, tile_size=224, overlap=0)
tile = dzg.get_tile(level=17, address=(8, 10))
# this corresponds to coordinates (17, 8, 10) in the coordinates we provide for the given slide
```
  • Tile-level: for each feature extractor and dataset, we provide patch ids and features. Features are (N_patches_dataset, d) numpy arrays and ids take the form of (N_patches_dataset, 1) string numpy arrays.
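As a minimal sketch of how these arrays can be consumed, the snippet below loads a slide-level features file and splits coordinates from features; the path is illustrative and follows the preprocessed/ layout described next, and tile-level features and ids load the same way with np.load.

```python
import numpy as np

# Illustrative path, following the `preprocessed/` layout described below.
features_path = (
    "preprocessed/slides_classification/features/iBOTViTBasePANCAN/"
    "CAMELYON16_FULL/Normal_001.tif/features.npy"
)

# Slide-level arrays have shape (N_tiles_slide, 3 + d): coordinates first, then features.
array = np.load(features_path)
coords = array[:, :3]     # (tile_level, x_coordinate, y_coordinate)
features = array[:, 3:]   # d-dimensional tile features (d = 768 for iBOT ViT-B)
assert coords.shape[0] == features.shape[0]
```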

Here is a description of the different features and coordinates we provide in the preprocessed folder.

```
preprocessed/                         # preprocessed data (coords, features)
===============================================================================
├── slides_classification             # slides classification tasks
│   ├── coords
│   │   ├── coords_224                # coordinates for 224 x 224 tiles
│   │   │   ├── CAMELYON16_FULL       # cohort 
│   │   │   │   ├── Normal_001.tif    # slide_id
│   │   │   │       └── coords.npy    # coordinates array (N_tiles_slide, 3)
...
│   │   │   ├── TCGA
│   │   │   │   ├── TCGA_BRCA
│   │   │   │   │   ├── TCGA-3C-AALI-01Z-00-DX1.F6E9A5DF-D8FB-45CF-B4BD-C6B76294C291.svs
│   │   │   │   │       └── coords.npy
...
│   │   ├── coords_256                # coordinates for 256 x 256 tiles
│   │   └── coords_4096               # coordinates for 4096 x 4096 tiles

...
│   └── features                      # features
│       ├── iBOTViTBasePANCAN         # feature extractor
│       │   ├── CAMELYON16_FULL       # cohort
│       │   │   ├── Normal_001.tif    # slide_id
│       │   │       └── features.npy  # features array (N_tiles_slide, 3+d)
...
│       │   ├── TCGA
│       │   │   ├── TCGA_BRCA
│       │   │   │   ├── TCGA-3C-AALI-01Z-00-DX1.F6E9A5DF-D8FB-45CF-B4BD-C6B76294C291.svs
│       │   │   │       └── features.npy
...
│       ├── MoCoWideResNetCOAD        # same structure applies for all extractors
│       ├── ResNet50
│       ├── iBOTViTBaseCOAD
│       ├── iBOTViTBasePANCAN
│       ├── iBOTViTLargeCOAD
│       ├── iBOTViTSmallCOAD
...
```
/!\ If you wish to extract features for
