CellViT

A Vision Transformer-based model for cell nucleus segmentation and classification

CellViT is a Vision Transformer-based deep learning method for automated instance segmentation of cell nuclei in digitized tissue samples. The project combines pre-trained Vision Transformer encoders with a U-Net architecture and achieves leading performance on the PanNuke dataset. A weighted sampling strategy improves its recognition of difficult cell instances. CellViT processes gigapixel whole-slide images quickly, integrates with software such as QuPath, and provides localizable deep features for downstream analysis.




<p align="center"> <img src="./docs/figures/banner.png"/> </p>

CellViT: Vision Transformers for Precise Cell Segmentation and Classification

<div align="center">

Key Features · Installation · Usage · Training · Inference · Examples · Roadmap · Citation

</div>

Update 08.08.2023:

:bangbang: We fixed a severe training bug and uploaded new checkpoints. Please make sure to pull all changes and redownload your CellViT checkpoints to get the best results :bangbang:

:ballot_box_with_check: Improved reproducibility by providing config and log files for the best models (CellViT-SAM-H and CellViT-256) and adapted the PanNuke inference script for easier evaluation

:ballot_box_with_check: Inference speed improved by 100× for postprocessing; added new preprocessing with CuCIM speedup

:ballot_box_with_check: Fixed a bug in postprocessing that could insert duplicated cells during cell detection

:ballot_box_with_check: Added batch-size and mixed-precision options to the inference CLI to support RAM-limited GPUs

:ballot_box_with_check: Extended configuration and added sweep configuration


Hörst, F., Rempe, M., Heine, L., Seibold, C., Keyl, J., Baldini, G., Ugurel, S., Siveke, J., Grünwald, B., Egger, J., & Kleesiek, J. (2023). CellViT: Vision Transformers for precise cell segmentation and classification. https://doi.org/10.48550/ARXIV.2306.15350

This repository contains the code implementation of CellViT, a deep learning-based method for automated instance segmentation of cell nuclei in digitized tissue samples. CellViT utilizes a Vision Transformer architecture and achieves state-of-the-art performance on the PanNuke dataset, a challenging nuclei instance segmentation benchmark.

If you use anything from this repository, please cite the original publication given above.

<p align="center"> <img src="./docs/figures/network_large.png"/> </p>

Key Features

  • State-of-the-Art Performance: CellViT outperforms existing methods for nuclei instance segmentation by a substantial margin, delivering superior results on the PanNuke dataset:
    • Mean panoptic quality: 0.51
    • F1-detection score: 0.83
  • Vision Transformer Encoder: The project incorporates pre-trained Vision Transformer (ViT) encoders, which are known for their effectiveness in various computer vision tasks. This choice enhances the segmentation performance of CellViT.
  • U-Net Architecture: CellViT adopts a U-Net-shaped encoder-decoder network structure, allowing for efficient and accurate nuclei instance segmentation. The network architecture facilitates both high-level and low-level feature extraction for improved segmentation results (see the sketch after this list).
  • Weighted Sampling Strategy: To enhance the performance of CellViT, a novel weighted sampling strategy is introduced. This strategy improves the representation of challenging nuclei instances, leading to more accurate segmentation results.
  • Fast Inference on Gigapixel WSI: The framework provides fast inference results by utilizing a large inference patch size of $1024 \times 1024$ pixels, in contrast to the conventional $256$-pixel-sized patches. This approach enables efficient analysis of gigapixel Whole Slide Images (WSI) and generates localizable deep features that hold potential value for downstream tasks. We provide a fast inference pipeline that connects to current viewing software such as QuPath.
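
As a rough illustration of the encoder-decoder idea mentioned above, here is a minimal, self-contained PyTorch sketch: a toy ViT encoder produces patch tokens that are reshaped into a 2D feature map and decoded back to input resolution. This is not the authors' implementation; the real CellViT uses pre-trained encoders, learned skip fusion, and separate decoder branches (e.g., for nuclei and type maps).

```python
# Toy sketch (not the repository's model): ViT tokens -> 2D map -> U-Net-style decoding.
import torch
import torch.nn as nn

class ToyViTUNet(nn.Module):
    def __init__(self, img_size=256, patch=16, dim=192, depth=4, n_classes=6):
        super().__init__()
        self.grid = img_size // patch                         # token grid side, here 16
        self.embed = nn.Conv2d(3, dim, patch, stride=patch)   # patchify + linear projection
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True) for _ in range(depth)]
        )
        # one 2x upsampling stage per encoder block: 16 -> 256
        self.up = nn.ModuleList(
            [nn.ConvTranspose2d(dim, dim, kernel_size=2, stride=2) for _ in range(depth)]
        )
        self.head = nn.Conv2d(dim, n_classes, kernel_size=1)  # per-pixel class logits

    def forward(self, x):
        tok = self.embed(x).flatten(2).transpose(1, 2)        # (B, N, dim) patch tokens
        skips = []
        for blk in self.blocks:
            tok = blk(tok)
            skips.append(tok)                                 # collect intermediate tokens
        fused = torch.stack(skips).sum(0)                     # crude stand-in for learned skip fusion
        feat = fused.transpose(1, 2).reshape(x.shape[0], -1, self.grid, self.grid)
        for up in self.up:
            feat = torch.relu(up(feat))                       # decode back to input resolution
        return self.head(feat)                                # (B, n_classes, H, W)

logits = ToyViTUNet()(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 6, 256, 256])
```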

Visualization

<div align="center">

Example

</div>

Installation

  1. Clone the repository: git clone https://github.com/TIO-IKIM/CellViT.git

  2. Create a conda environment with Python version 3.9.7 and install the conda requirements: conda env create -f environment.yml. You can change the environment name by editing the name tag in the environment.yml file. This step is necessary, as we need to install OpenSlide with binary files, which is easier with conda. Otherwise, installation from source needs to be performed and packages installed with pip.

  3. Activate environment: conda activate cellvit_env

  4. Install torch (>=2.0) for your system, as described here. The preferred version is 2.0; see optional_dependencies for help. You can find all versions here: https://pytorch.org/get-started/previous-versions/

  5. Install optional dependencies with pip install -r optional_dependencies.txt to get a speedup using NVIDIA Clara and CuCIM for preprocessing during inference. Please select your CUDA version. Help for installing cucim can be found online. Note: if you get the error cannot import name 'CuImage' from 'cucim', install cucim from conda to get all binary files. First remove the previous dependency with pip uninstall cupy-cuda117, then reinstall with conda install -c rapidsai cucim inside your conda environment. This process is time-consuming, so be patient, and follow the official guideline.
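
A quick way to verify the optional CuCIM install succeeded is the import that the error message above refers to (CuCIM is only a speedup here; preprocessing otherwise runs on OpenSlide):

```python
# Sanity check for the optional CuCIM dependency.
try:
    from cucim import CuImage  # the import the error note above refers to
    print("CuCIM available")
except ImportError as err:
    print("CuCIM not available, preprocessing will rely on OpenSlide:", err)
```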

FAQ: Environment problems

ResolvePackageNotFound: -gcc

  • Fix: Comment out the gcc package in the environment.yml file

ResolvePackageNotFound: -libtiff==4.5.0=h6adf6a1_2, -openslide==3.4.1=h7773abc_6

  • Fix: Remove the version hash from the environment.yml file, such that:

    ```yaml
    ...
    dependencies:
      ...
      - libtiff=4.5.0
      - openslide=3.4.1
      - pip:
        ...
    ```

PyDantic Validation Errors for the CLI

Please install the pydantic version specified (pydantic==1.10.4), otherwise validation errors could occur for the CLI.
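
The pin matters because the CLI relies on the pydantic v1 API. A hypothetical stand-in model (not the repository's actual schema) shows the v1 style that breaks under v2:

```python
# Illustrative pydantic v1 usage; pydantic==1.10.4 is the pinned version.
from pydantic import BaseModel, validator  # v1-style import; deprecated in pydantic v2

class RunConfig(BaseModel):
    config: str   # path to the YAML configuration file
    gpu: int = 0  # CUDA GPU id

    @validator("gpu")  # v1 decorator; v2 renamed this to @field_validator
    def gpu_non_negative(cls, v):
        if v < 0:
            raise ValueError("gpu id must be >= 0")
        return v

print(RunConfig(config="configs/examples/cell_segmentation/train_cellvit.yaml"))
```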

Usage

Project Structure

We are currently using the following folder structure:

```
├── base_ml               # Basic Machine Learning Code: CLI, Trainer, Experiment, ...
├── cell_segmentation     # Cell Segmentation training and inference files
│   ├── datasets          # Datasets (PyTorch)
│   ├── experiments       # Specific Experiment Code for different experiments
│   ├── inference         # Inference code for experiment statistics and plots
│   ├── trainer           # Trainer functions to train networks
│   ├── utils             # Utils code
│   └── run_xxx.py        # Run file to start an experiment
├── configs               # Config files
│   ├── examples          # Example config files with explanations
│   └── python            # Python configuration file for global Python settings
├── datamodel             # Datamodels of WSI, Patients etc. (not ML specific)
├── docs                  # Documentation files (in addition to this main README.md)
├── models                # Machine Learning Models (PyTorch implementations)
│   ├── encoders          # Encoder networks (see ML structure below)
│   ├── pretrained        # Checkpoints of important pretrained models (needs to be downloaded from Google Drive)
│   └── segmentation      # CellViT Code
├── preprocessing         # Preprocessing code
│   └── patch_extraction  # Code to extract patches from WSI
```

Training

The CLI for a ML-experiment to train the CellViT-Network is as follows (here the run_cellvit.py script is used):

```
usage: run_cellvit.py [-h] --config CONFIG [--gpu GPU] [--sweep | --agent AGENT | --checkpoint CHECKPOINT]

Start an experiment with given configuration file.

optional arguments:
  -h, --help            show this help message and exit
  --gpu GPU             Cuda-GPU ID (default: None)
  --sweep               Starting a sweep. For this the configuration file must be structured according to WandB sweeping.
                        Compare https://docs.wandb.ai/guides/sweeps and
                        https://community.wandb.ai/t/nested-sweep-configuration/3369/3 for further information. This parameter
                        cannot be set in the config file! (default: False)
  --agent AGENT         Add a new agent to the sweep. Please pass the sweep ID as argument in the way
                        entity/project/sweep_id, e.g., user1/test_project/v4hwbijh. The agent configuration can be found in
                        the WandB dashboard for the running sweep in the sweep overview tab under launch agent. Just paste
                        the entity/project/sweep_id given there. The provided config file must be a sweep config file. This
                        parameter cannot be set in the config file! (default: None)
  --checkpoint CHECKPOINT
                        Path to a PyTorch checkpoint file. The file is loaded and continued to train with the provided
                        settings. If this is passed, no sweeps are possible. This parameter cannot be set in the config file!
                        (default: None)

required named arguments:
  --config CONFIG       Path to a config file (default: None)
```

The important file is the configuration file, in which all paths are set, the model configuration is given, and the hyperparameters or sweeps are defined. For each specific run file, there exists an example file in the ./configs/examples/cell_segmentation folder with the same naming, as well as a configuration file that explains how to run WandB sweeps for hyperparameter search. All metrics defined in your trainer are logged to WandB. The WandB configuration needs to be set up in the configuration file, but logging can also be turned off by the user.

An example config file with explanations is given in the ./configs/examples/cell_segmentation folder. For sweeps, we provide the sweep example file train_cellvit_sweep.yaml.
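
A hypothetical sketch for inspecting a config before launching a run; the file name and the idea of top-level sections are assumptions based on the example folder, not a documented schema:

```python
# Peek at a training configuration (path and keys are assumptions).
import yaml

with open("configs/examples/cell_segmentation/train_cellvit.yaml") as f:
    config = yaml.safe_load(f)

print(list(config))  # top-level sections as defined by the example files
```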

Pre-trained ViT models for training initialization can be downloaded from Google Drive: ViT-Models. Please check out the corresponding licenses before distribution and further usage! Note: we only used the teacher models for ViT-256.

:exclamation: If your training crashes at some point, you can continue from a checkpoint.

Dataset preparation

We use a customized dataset structure for the PanNuke and MoNuSeg datasets. The dataset structures are explained in the pannuke.md and monuseg.md documentation files. We also provide preparation scripts in the cell_segmentation/datasets/ folder.

Evaluation

In our paper, we did not (!) use early stopping; instead, we trained all models for 130 epochs to eliminate selection bias while retaining the largest possible database for training. Therefore, evaluation needs to be performed with the latest_checkpoint.pth model and not the best early-stopping model. We provide two scripts to create evaluation results: inference_cellvit_experiment.py for PanNuke and inference_cellvit_monuseg.py for MoNuSeg.
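
A minimal sketch of loading the last checkpoint for evaluation; the state-dict key and the placeholder model are assumptions, so inspect the checkpoint's actual layout:

```python
# Evaluate with latest_checkpoint.pth, not an early-stopping checkpoint.
import torch
from torch import nn

model = nn.Identity()  # placeholder; the repo builds CellViT from the run config
ckpt = torch.load("latest_checkpoint.pth", map_location="cpu")  # path from your training run
state = ckpt.get("model_state_dict", ckpt)  # assumed key; falls back to a raw state dict
model.load_state_dict(state, strict=False)
model.eval()
```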

:exclamation: We recently adapted the evaluation code and added a tag to the config files to select which checkpoint needs to be used. Please make sure to use the right checkpoint and select the appropriate dataset magnification.

Inference

Model checkpoints can be downloaded here:

License: Apache 2.0 with Commons Clause

The provided checkpoints have been trained on 90% of the data from all folds with the settings described in the publication.

Steps

The following steps are necessary to run CellViT on a WSI:

  1. Prepare WSI with our preprocessing pipeline
  2. Run inference with the inference/cell_detection.py script

Results are stored at the preprocessing locations.

1. Preprocessing

In our preprocessing pipeline, we are able to extract quadratic patches from detected tissue areas, load annotation files (.json), and apply color normalizations. We make use of the popular OpenSlide library, but extended it with the RAPIDS cuCIM framework for an 8× speedup in patch extraction. The documentation for the preprocessing can be found here.

Preprocessing is necessary to extract patches for our inference pipeline. We use square patches of size 1024 × 1024 pixels with an overlap of 64 px.
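
For intuition on what these settings imply, here is a back-of-the-envelope patch count; the WSI dimensions below are purely illustrative:

```python
# 1024 px patches with 64 px overlap give a 960 px stride between patch origins.
import math

patch, overlap = 1024, 64
stride = patch - overlap                          # 960 px
width, height = 80_000, 60_000                    # hypothetical WSI size in pixels
cols = math.ceil((width - patch) / stride) + 1
rows = math.ceil((height - patch) / stride) + 1
print(rows * cols)                                # patch count before tissue filtering
```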

Please make sure that you select the following properties for our CellViT inference:

| Parameter     | Value |
| ------------- | ----- |
| patch_size    | 1024  |
| patch_overlap | 6.25  |
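
The patch_overlap value is given in percent: 6.25 % of 1024 px corresponds to the 64 px overlap mentioned above.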

Resulting Dataset Structure

The aim of preprocessing is to create one dataset per WSI. In general, the folder structure for a preprocessed dataset looks like this:

```
WSI_Name
├── annotation_masks      # thumbnails of extracted annotation masks
│   ├── all_overlaid.png  # all with same dimension as the thumbnail
│   ├── tumor.png
│   └── ...
├── context               # context patches, if extracted
│   ├── 2                 # subfolder for each scale
│   │   ├── WSI_Name_row1_col1_context_2.png
│   │   ├── WSI_Name_row2_col1_context_2.png
│   │   └── ...
│   └── 4
│       ├── WSI_Name_row1_col1_context_4.png
│       ├── WSI_Name_row2_col1_context_4.png
│       └── ...
├── masks                 # Mask (numpy) files for each patch -> optional folder for segmentation
│   ├── WSI_Name_row1_col1.npy
│   ├── WSI_Name_row2_col1.npy
│   └── ...
├── metadata              # Metadata files for each patch
│   ├── WSI_Name_row1_col1.yaml
│   ├── WSI_Name_row2_col1.yaml
│   └── ...
├── patches               # Patches as .png files
│   ├── WSI_Name_row1_col1.png
│   ├── WSI_Name_row2_col1.png
│   └── ...
├── thumbnails            # Different kinds of thumbnails
│   ├── thumbnail_mpp_5.png
│   ├── thumbnail_downsample_32.png
│   └── ...
├── tissue_masks          # Tissue mask images for checking
│   ├── mask.png          # all with same dimension as the thumbnail
│   ├── mask_nogrid.png
│   └── tissue_grid.png
├── mask.png              # tissue mask with green grid
├── metadata.yaml         # WSI metadata for patch extraction
├── patch_metadata.json   # Patch metadata of WSI merged in one file
└── thumbnail.png         # WSI thumbnail
```
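
A hypothetical sketch of consuming such a dataset, pairing each patch .png with its .yaml metadata file; the paths follow the structure above, and Pillow and PyYAML are assumed to be installed:

```python
# Iterate over the extracted patches of one preprocessed WSI.
from pathlib import Path
import yaml
from PIL import Image

wsi_dir = Path("WSI_Name")  # one preprocessed dataset per WSI
for patch_path in sorted((wsi_dir / "patches").glob("*.png")):
    meta_path = wsi_dir / "metadata" / f"{patch_path.stem}.yaml"
    patch = Image.open(patch_path)                # e.g. a 1024 x 1024 RGB patch
    meta = yaml.safe_load(meta_path.read_text())  # per-patch metadata
    print(patch_path.name, patch.size, sorted(meta)[:3])
```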

The cell detection and segmentation results are stored in a newly created cell_detection folder.
