Efficient workflow and reproducibility are essential components of every machine learning project.
PyTorch Lightning and Hydra serve as the foundation of this template. This technology stack for deep learning prototyping offers a comprehensive and seamless solution, allowing you to effortlessly explore different tasks across a variety of hardware accelerators such as CPUs, multi-GPU setups, and TPUs. Furthermore, it includes a curated collection of best practices and extensive documentation for greater clarity and comprehension.
This template can be used as-is for some basic tasks like Classification, Segmentation, or Metric Learning, or easily extended to any other task thanks to its high-level modularity and scalable structure.
As a baseline, I used the excellent Lightning Hydra Template, reshaped and polished it, and implemented additional features that improve the overall efficiency of the workflow and reproducibility.
```shell
# clone template
git clone https://github.com/gorodnitskiy/yet-another-lightning-hydra-template
cd yet-another-lightning-hydra-template

# install requirements
pip install -r requirements.txt
```
Or run the project in docker. See more in Docker section.
PyTorch Lightning - a lightweight deep learning framework / PyTorch wrapper for professional AI researchers and machine learning engineers who need maximal flexibility without sacrificing performance at scale.
Hydra - a framework that simplifies configuring complex applications. The key feature is the ability to dynamically create a hierarchical configuration by composition and override it through config files and the command line.
The structure of a machine learning project can vary depending on the specific requirements and goals of the project, as well as the tools and frameworks being used. However, here is a general outline of a common directory structure for a machine learning project:
- src/
- data/
- logs/
- tests/
- notebooks/
- docs/
- etc.

In this particular case, the directory structure looks like:
```
├── configs                  <- Hydra configuration files
│   ├── callbacks            <- Callbacks configs
│   ├── datamodule           <- Datamodule configs
│   ├── debug                <- Debugging configs
│   ├── experiment           <- Experiment configs
│   ├── extras               <- Extra utilities configs
│   ├── hparams_search       <- Hyperparameter search configs
│   ├── hydra                <- Hydra settings configs
│   ├── local                <- Local configs
│   ├── logger               <- Logger configs
│   ├── module               <- Module configs
│   ├── paths                <- Project paths configs
│   ├── trainer              <- Trainer configs
│   │
│   ├── eval.yaml            <- Main config for evaluation
│   └── train.yaml           <- Main config for training
│
├── data                     <- Project data
├── logs                     <- Logs generated by hydra, lightning loggers, etc.
├── notebooks                <- Jupyter notebooks
├── scripts                  <- Shell scripts
│
├── src                      <- Source code
│   ├── callbacks            <- Additional callbacks
│   ├── datamodules          <- Lightning datamodules
│   ├── modules              <- Lightning modules
│   ├── utils                <- Utility scripts
│   │
│   ├── eval.py              <- Run evaluation
│   └── train.py             <- Run training
│
├── tests                    <- Tests of any kind
│
├── .dockerignore            <- List of files ignored by docker
├── .gitattributes           <- List of additional attributes to pathnames
├── .gitignore               <- List of files ignored by git
├── .pre-commit-config.yaml  <- Configuration of pre-commit hooks for code formatting
├── Dockerfile               <- Dockerfile
├── Makefile                 <- Makefile with commands like `make train` or `make test`
├── pyproject.toml           <- Configuration options for testing and linting
├── requirements.txt         <- File for installing python dependencies
├── setup.py                 <- File for installing project as a package
└── README.md
```
Before starting a project, you need to think about the following things to ensure reproducibility of results:
This template can be used as-is for some basic tasks like Classification, Segmentation, or Metric Learning, but if you need to do something more complex, here is a general workflow:
```shell
python src/train.py experiment=experiment_name.yaml
```

```shell
# using Hydra multirun mode
python src/train.py -m hparams_search=mnist_optuna
```

```shell
python src/train.py -m logger=csv module.optimizer.weight_decay=0.,0.00001,0.0001
```
The template contains an example with MNIST classification, which, by the way, is also used for tests. If you run `python src/train.py`, you will get something like this:
At the start, you need to create a PyTorch Dataset for your task. It has to implement the `__getitem__` and `__len__` methods. You may be able to use the Datasets already implemented in the template as-is, or easily modify them. See more details in the PyTorch documentation. Also, it could be useful to see the section about how data can be saved for training and evaluation.
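As a minimal illustration of that protocol, here is a dependency-free sketch (the class and field names are hypothetical, not part of the template; in practice you would subclass `torch.utils.data.Dataset`):

```python
from typing import Any, List, Tuple


class PairsDataset:
    """Minimal map-style dataset sketch: only `__getitem__` and
    `__len__` are required by the PyTorch Dataset protocol."""

    def __init__(self, samples: List[Any], labels: List[Any]) -> None:
        assert len(samples) == len(labels)
        self.samples = samples
        self.labels = labels

    def __getitem__(self, index: int) -> Tuple[Any, Any]:
        # Return one (sample, label) pair; transforms would be applied here
        return self.samples[index], self.labels[index]

    def __len__(self) -> int:
        return len(self.labels)
```

A DataLoader can then batch and shuffle any object implementing these two methods.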
Then, you need to create a DataModule using the PyTorch Lightning DataModule API. By default, the API has the following methods:

- `prepare_data` (optional): perform data operations on CPU via a single process, like loading and preprocessing data, etc.
- `setup` (optional): perform data operations on every GPU, like train/val/test splits, creating datasets, etc.
- `train_dataloader`: used to generate the training dataloader(s)
- `val_dataloader`: used to generate the validation dataloader(s)
- `test_dataloader`: used to generate the test dataloader(s)
- `predict_dataloader` (optional): used to generate the prediction dataloader(s)

```python
from typing import Any, Dict, List, Optional, Union

from pytorch_lightning import LightningDataModule
from torch.utils.data import DataLoader, Dataset


class YourDataModule(LightningDataModule):
    def __init__(self, *args: Any, **kwargs: Any) -> None:
        super().__init__()
        self.train_set: Optional[Dataset] = None
        self.valid_set: Optional[Dataset] = None
        self.test_set: Optional[Dataset] = None
        self.predict_set: Optional[Dataset] = None
        ...

    def prepare_data(self) -> None:
        # (Optional) Perform data operations on CPU via a single process
        # - load data
        # - preprocess data
        # - etc.
        ...

    def setup(self, stage: str) -> None:
        # (Optional) Perform data operations on every GPU:
        # - count number of classes
        # - build vocabulary
        # - perform train/val/test splits
        # - create datasets
        # - apply transforms (which are defined explicitly in your datamodule)
        # - etc.
        if not self.train_set and not self.valid_set and not self.test_set:
            self.train_set = ...
            self.valid_set = ...
            self.test_set = ...
        if (stage == "predict") and not self.predict_set:
            self.predict_set = ...

    def train_dataloader(self) -> Union[DataLoader, List[DataLoader], Dict[str, DataLoader]]:
        # Used to generate the training dataloader(s)
        # This is the dataloader that the Trainer `fit()` method uses
        return DataLoader(self.train_set, ...)

    def val_dataloader(self) -> Union[DataLoader, List[DataLoader]]:
        # Used to generate the validation dataloader(s)
        # This is the dataloader that the Trainer `fit()` and `validate()` methods use
        return DataLoader(self.valid_set, ...)

    def test_dataloader(self) -> Union[DataLoader, List[DataLoader]]:
        # Used to generate the test dataloader(s)
        # This is the dataloader that the Trainer `test()` method uses
        return DataLoader(self.test_set, ...)

    def predict_dataloader(self) -> Union[DataLoader, List[DataLoader]]:
        # Used to generate the prediction dataloader(s)
        # This is the dataloader that the Trainer `predict()` method uses
        return DataLoader(self.predict_set, ...)

    def teardown(self, stage: str) -> None:
        # Used to clean up when the run is finished
        ...
```
See examples of datamodule configs in the `configs/datamodule` folder.
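For reference, a datamodule config in this style binds `_target_` to the datamodule class and passes constructor arguments. The class path and parameter names below are illustrative assumptions, not copied from the template:

```yaml
# configs/datamodule/your_datamodule.yaml (illustrative sketch)
_target_: src.datamodules.YourDataModule  # hypothetical class path
data_dir: ${paths.data_dir}
batch_size: 64
num_workers: 4
pin_memory: true
```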
By default, the template contains the following DataModules:

- one in which `train_dataloader`, `val_dataloader` and `test_dataloader` return a single DataLoader, and `predict_dataloader` returns a list of DataLoaders
- one in which `train_dataloader` returns a dict of DataLoaders, and `val_dataloader`, `test_dataloader` and `predict_dataloader` return lists of DataLoaders

In the template, DataModules have a `_get_dataset_` method to simplify Dataset instantiation.
Next, you need to create a LightningModule using the PyTorch Lightning LightningModule API. The minimum API has the following methods:

- `forward`: used for inference only (separate from `training_step`)
- `training_step`: the complete training loop
- `validation_step`: the complete validation loop
- `test_step`: the complete test loop
- `predict_step`: the complete prediction loop
- `configure_optimizers`: defines optimizers and LR schedulers

Also, you can override optional methods for each step to perform additional logic:

- `training_step_end`: training step end operations
- `training_epoch_end`: training epoch end operations
- `validation_step_end`: validation step end operations
- `validation_epoch_end`: validation epoch end operations
- `test_step_end`: test step end operations
- `test_epoch_end`: test epoch end operations

```python
from typing import Any

from pytorch_lightning import LightningModule


class LitModel(LightningModule):
    def __init__(self, *args: Any, **kwargs: Any):
        super().__init__()
        ...

    def forward(self, *args: Any, **kwargs: Any):
        ...

    def training_step(self, *args: Any, **kwargs: Any):
        ...

    def training_step_end(self, step_output: Any):
        ...

    def training_epoch_end(self, outputs: Any):
        ...

    def validation_step(self, *args: Any, **kwargs: Any):
        ...

    def validation_step_end(self, step_output: Any):
        ...

    def validation_epoch_end(self, outputs: Any):
        ...

    def test_step(self, *args: Any, **kwargs: Any):
        ...

    def test_step_end(self, step_output: Any):
        ...

    def test_epoch_end(self, outputs: Any):
        ...

    def configure_optimizers(self):
        ...

    def any_extra_hook(self, *args: Any, **kwargs: Any):
        ...
```
In the template, the LightningModule has a `model_step` method that factors out repeated operations, like the `forward` call or loss calculation, which are required in `training_step`, `validation_step` and `test_step`.
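That shared-step pattern can be sketched as follows. This is a dependency-free illustration, not the template's actual code: the `forward` and `criterion` bodies are hypothetical stand-ins for a real network and loss.

```python
from typing import Any, Dict, List, Tuple


class LitModelSketch:
    """Sketch of the shared-step pattern: `model_step` holds the
    forward/loss logic, and each step method delegates to it."""

    def forward(self, x: List[float]) -> List[float]:
        return x  # stand-in for the real network

    def criterion(self, logits: List[float], targets: List[float]) -> float:
        # stand-in loss: sum of absolute errors
        return float(sum(abs(p - t) for p, t in zip(logits, targets)))

    def model_step(self, batch: Tuple[List[float], List[float]]) -> Dict[str, Any]:
        x, y = batch
        logits = self.forward(x)          # shared forward pass
        loss = self.criterion(logits, y)  # shared loss computation
        return {"loss": loss, "logits": logits, "targets": y}

    def training_step(self, batch, batch_idx: int) -> float:
        outputs = self.model_step(batch)  # reuse shared logic
        # log train metrics here
        return outputs["loss"]

    def validation_step(self, batch, batch_idx: int) -> float:
        outputs = self.model_step(batch)
        # log valid metrics here
        return outputs["loss"]

    def test_step(self, batch, batch_idx: int) -> float:
        outputs = self.model_step(batch)
        # log test metrics here
        return outputs["loss"]
```

Any change to the forward/loss logic then only needs to be made once, in `model_step`.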
The template offers the following Metrics API:

- `main` metric: the main metric, which is also used by callbacks and trackers like `model_checkpoint`, `early_stopping` or `scheduler.monitor`.
- `valid_best` metric: used for tracking the best validation metric. Usually it can be `MaxMetric` or `MinMetric`.
- `additional` metrics: any additional metrics.

Each metric config should contain a `_target_` key with the metric class name, plus any other parameters required by the metric. The template allows you to use any metrics, for example from torchmetrics or implemented by yourself (see examples in `modules/metrics/components/` or the torchmetrics API).

See more details about the implemented Metrics API and the `metrics` config as a part of the `network` configs in the `configs/module/network` folder.
Metric config example:
```yaml
metrics:
  main:
    _target_: "torchmetrics.Accuracy"
    task: "binary"
  valid_best:
    _target_: "torchmetrics.MaxMetric"
  additional:
    AUROC:
      _target_: "torchmetrics.AUROC"
      task: "binary"
```
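Hydra resolves each `_target_` by importing the class and calling it with the remaining keys. A simplified, dependency-free version of that resolution is sketched below; real code should use `hydra.utils.instantiate` instead of this hypothetical helper:

```python
import importlib
from typing import Any, Dict


def instantiate(config: Dict[str, Any]) -> Any:
    """Simplified stand-in for hydra.utils.instantiate: import the
    `_target_` class and call it with the remaining config keys."""
    module_path, _, class_name = config["_target_"].rpartition(".")
    cls = getattr(importlib.import_module(module_path), class_name)
    kwargs = {k: v for k, v in config.items() if k != "_target_"}
    return cls(**kwargs)


# Example with a stdlib target; a torchmetrics class resolves the same way:
counter = instantiate({"_target_": "collections.Counter", "a": 2, "b": 1})
```

This is why a metric config only needs the fully qualified class name and its constructor arguments.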
Also, the template includes a few manually implemented metrics.
The template offers the following Losses API:

- Each loss config should contain a `_target_` key with the loss class name, plus any other parameters required by the loss.
- Parameters with `weight` in their name will be wrapped by `torch.tensor` and cast to `torch.float` before being passed to the loss, as most losses require.

The template allows you to use any losses, for example from PyTorch or implemented by yourself (see examples in `modules/losses/components/`).

See more details about the implemented Losses API and the `loss` config as a part of the `network` configs in the `configs/module/network` folder.
Loss config examples:
```yaml
loss:
  _target_: "torch.nn.CrossEntropyLoss"
```

```yaml
loss:
  _target_: "torch.nn.BCEWithLogitsLoss"
  pos_weight: [0.25]
```
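The weight-wrapping rule described above can be sketched dependency-free. In the template this produces a float `torch.tensor`; the helper name here is hypothetical and plain float lists stand in for tensors:

```python
from typing import Any, Dict


def wrap_weight_params(loss_cfg: Dict[str, Any]) -> Dict[str, Any]:
    """Sketch: cast any parameter whose name contains "weight"
    (e.g. `pos_weight`) to a list of floats, mimicking the cast to
    a float torch.tensor before the loss class is instantiated."""
    wrapped: Dict[str, Any] = {}
    for key, value in loss_cfg.items():
        if key != "_target_" and "weight" in key:
            values = value if isinstance(value, (list, tuple)) else [value]
            # torch.tensor(values, dtype=torch.float) in the real template
            wrapped[key] = [float(v) for v in values]
        else:
            wrapped[key] = value
    return wrapped
```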