SmartSim is made up of two parts:

1. The SmartSim Infrastructure Library
2. SmartRedis

The two library components are designed to work together, but can also be used independently.
SmartSim is a workflow library that makes it easier to use common Machine Learning (ML) libraries, like PyTorch and TensorFlow, in High Performance Computing (HPC) simulations and applications. SmartSim launches ML infrastructure on HPC systems alongside user workloads.
SmartRedis provides an API to connect HPC workloads, particularly (MPI + X) simulations, to the ML infrastructure, namely the Orchestrator database, launched by SmartSim.
Applications integrated with the SmartRedis clients, written in Fortran, C, C++, and Python, can send data to the Orchestrator and remotely request SmartSim infrastructure to execute ML models and scripts on GPU or CPU. The distributed client-server paradigm allows data to be exchanged seamlessly between applications at runtime without relying on MPI.
The documentation has a number of tutorials that make it easy to get used to SmartSim locally before using it on your system. Each tutorial is a Jupyter notebook that can be run through the SmartSim Tutorials Docker image, which runs a Jupyter Lab server with the tutorials, SmartSim, and SmartRedis installed.
docker pull ghcr.io/craylabs/smartsim-tutorials:latest
docker run -p 8888:8888 ghcr.io/craylabs/smartsim-tutorials:latest
# click on link to open Jupyter Lab
The Infrastructure Library (IL), the smartsim Python package, facilitates the launch of Machine Learning and simulation workflows. The Python interface of the IL creates, configures, launches, and monitors applications.

The Experiment object is the main interface of SmartSim. Through the Experiment, users can create references to user applications called Models.
Below is a simple example of a workflow that uses the IL to launch a hello world program with the local launcher, which is designed for laptops and single nodes.
from smartsim import Experiment

exp = Experiment("simple", launcher="local")
settings = exp.create_run_settings("echo", exe_args="Hello World")
model = exp.create_model("hello_world", settings)
exp.start(model, block=True)
print(exp.get_status(model))
The Experiment.create_run_settings method returns a RunSettings object which defines how a model is launched. There are many types of RunSettings supported by SmartSim:
RunSettings
MpirunSettings
SrunSettings
AprunSettings
JsrunSettings
The following example launches a hello world MPI program using the local launcher, which is designed for single compute nodes, workstations, and laptops.
from smartsim import Experiment

exp = Experiment("hello_world", launcher="local")
mpi_settings = exp.create_run_settings(exe="echo",
                                       exe_args="Hello World!",
                                       run_command="mpirun")
mpi_settings.set_tasks(4)
mpi_model = exp.create_model("hello_world", mpi_settings)
exp.start(mpi_model, block=True)
print(exp.get_status(mpi_model))
If an argument of run_command="auto" (the default) is passed to Experiment.create_run_settings, SmartSim will attempt to find a run command on the system for which it has a corresponding RunSettings class. If one is found, Experiment.create_run_settings will instantiate and return an object of that type.
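SmartSim's detection logic is internal, but the general idea can be sketched with Python's shutil.which. The candidate list and the mapping to settings-class names below are illustrative assumptions, not SmartSim's actual implementation:

```python
import shutil

# Illustrative mapping from run command to the settings class name
# SmartSim would use; the real mapping lives inside SmartSim.
RUN_COMMANDS = {
    "srun": "SrunSettings",
    "aprun": "AprunSettings",
    "jsrun": "JsrunSettings",
    "mpirun": "MpirunSettings",
}

def detect_run_command():
    """Return the first run command found on PATH, or None."""
    for cmd in RUN_COMMANDS:
        if shutil.which(cmd):  # check PATH for the executable
            return cmd
    return None

print(detect_run_command())  # e.g. "srun" on a Slurm system, None on a laptop
```

On a machine with none of these launchers installed, the sketch returns None, which corresponds to SmartSim falling back to a plain RunSettings launch.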
SmartSim integrates with common HPC schedulers providing batch and interactive launch capabilities for all applications:
In addition, on Slurm and PBS systems, Dragon can be used as a launcher. Please refer to the documentation for instructions on how to install it on your system and use it in SmartSim.
The following launches the same hello_world model in an interactive allocation.
# get interactive allocation (Slurm)
salloc -N 3 --ntasks-per-node=20 --ntasks 60 --exclusive -t 00:10:00

# get interactive allocation (PBS)
qsub -l select=3:ncpus=20 -l walltime=00:10:00 -l place=scatter -I -q <queue>

# get interactive allocation (LSF)
bsub -Is -W 00:10 -nnodes 3 -P <project> $SHELL
This same script will run on a Slurm, PBS, or LSF system as the launcher is set to auto in the Experiment initialization. The run command, such as mpirun, aprun, or srun, will be automatically detected from what is available on the system.
# hello_world.py
from smartsim import Experiment

exp = Experiment("hello_world_exp", launcher="auto")
run = exp.create_run_settings(exe="echo", exe_args="Hello World!")
run.set_tasks(60)
run.set_tasks_per_node(20)

model = exp.create_model("hello_world", run)
exp.start(model, block=True, summary=True)
print(exp.get_status(model))
# in interactive terminal
python hello_world.py
This script could also be launched in a batch file instead of an interactive terminal. For example, for Slurm:
#!/bin/bash
#SBATCH --exclusive
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=20
#SBATCH --time=00:10:00

python /path/to/hello_world.py
# on Slurm system
sbatch run_hello_world.sh
SmartSim can also launch workloads in a batch directly from Python, without the need for a batch script. Users can launch groups of Model instances in an Ensemble.

The following launches 4 replicas of the same hello_world model.
# hello_ensemble.py
from smartsim import Experiment

exp = Experiment("hello_world_batch", launcher="auto")

# define resources for all ensemble members
batch = exp.create_batch_settings(nodes=4, time="00:10:00", account="12345-Cray")
batch.set_queue("premium")

# define how each member should run
run = exp.create_run_settings(exe="echo", exe_args="Hello World!")
run.set_tasks(60)
run.set_tasks_per_node(20)

ensemble = exp.create_ensemble("hello_world",
                               batch_settings=batch,
                               run_settings=run,
                               replicas=4)
exp.start(ensemble, block=True, summary=True)
print(exp.get_status(ensemble))
python hello_ensemble.py
Similar to the interactive example, this same script will run on a Slurm, PBS, or LSF system as the launcher is set to auto in the Experiment initialization. Local launching does not support batch workloads.
The Orchestrator is an in-memory database that utilizes Redis and RedisAI to provide a distributed database and access to ML runtimes from Fortran, C, C++, and Python.
SmartSim provides classes that make it simple to launch the database in many configurations and optionally form a distributed database cluster. The examples below will show how to launch the database. Later in this document we will show how to use the database to perform ML inference and processing.
The following script launches a single database using the local launcher.
Experiment.create_database will initialize an Orchestrator instance corresponding to the specified launcher.
# run_db_local.py
from smartsim import Experiment

exp = Experiment("local-db", launcher="local")
db = exp.create_database(port=6780,       # database port
                         interface="lo")  # network interface to use

# by default, SmartSim never blocks execution after the database is launched.
exp.start(db)

# launch models, analysis, training, inference sessions, etc
# that communicate with the database using the SmartRedis clients

# stop the database
exp.stop(db)
The Orchestrator, like Ensemble instances, can be launched locally, in interactive allocations, or in a batch.
The following example launches a distributed (3 node) database cluster in an interactive allocation.
# get interactive allocation (Slurm)
salloc -N 3 --ntasks-per-node=1 --exclusive -t 00:10:00

# get interactive allocation (PBS)
qsub -l select=3:ncpus=1 -l walltime=00:10:00 -l place=scatter -I -q queue

# get interactive allocation (LSF)
bsub -Is -W 00:10 -nnodes 3 -P project $SHELL
# run_db.py
from smartsim import Experiment

# auto specified to work across launcher types
exp = Experiment("db-on-slurm", launcher="auto")
db_cluster = exp.create_database(db_nodes=3,
                                 db_port=6780,
                                 batch=False,
                                 interface="ipogif0")
exp.start(db_cluster)

print(f"Orchestrator launched on nodes: {db_cluster.hosts}")
# launch models, analysis, training, inference sessions, etc
# that communicate with the database using the SmartRedis clients

exp.stop(db_cluster)
# in interactive terminal
python run_db.py
The Orchestrator can also be launched in a batch without the need for an interactive allocation. SmartSim will create the batch file, submit it to the batch system, and then wait for the database to be launched. Users can hit CTRL-C to cancel the launch if needed.
# run_db_batch.py
from smartsim import Experiment

exp = Experiment("batch-db-on-pbs", launcher="auto")
db_cluster = exp.create_database(db_nodes=3,
                                 db_port=6780,
                                 batch=True,
                                 time="00:10:00",
                                 interface="ib0",
                                 account="12345-Cray",
                                 queue="cl40")
exp.start(db_cluster)

print(f"Orchestrator launched on nodes: {db_cluster.hosts}")
# launch models, analysis, training, inference sessions, etc
# that communicate with the database using the SmartRedis clients

exp.stop(db_cluster)
python run_db_batch.py
The SmartSim IL clients (SmartRedis) are Redis clients that implement the RedisAI API with additions specific to scientific workflows.
SmartRedis clients are available in Fortran, C, C++, and Python. Users can seamlessly pull and push data from the Orchestrator from different languages.
Tensors are the fundamental data structure for the SmartRedis clients. The Clients use the native array format of the language. For example, in Python, a tensor is a NumPy array while the C/C++ clients accept nested and contiguous arrays.
When stored in the database, all tensors are stored in the same format. Hence, any language can receive a tensor from the database no matter what supported language the array was sent from. This enables applications in different languages to communicate numerical data with each other at runtime.
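The exact wire format is handled by the SmartRedis clients, but the principle, a flat language-agnostic binary layout plus shape metadata, can be sketched with Python's struct module. This is an illustration of the idea, not the actual SmartRedis encoding:

```python
import struct

def pack_tensor(values, shape):
    """Pack a flat list of float64 values into raw little-endian bytes."""
    return struct.pack(f"<{len(values)}d", *values), shape

def unpack_tensor(raw, shape):
    """Recover the flat float64 values; any language that can read
    little-endian doubles (C, C++, Fortran, Python) can do the same."""
    n = len(raw) // 8  # 8 bytes per float64
    return list(struct.unpack(f"<{n}d", raw)), shape

# a 2x2 tensor stored in row-major order
raw, shape = pack_tensor([1.0, 2.0, 3.0, 4.0], (2, 2))
values, shape = unpack_tensor(raw, shape)
```

Because the stored bytes carry no language-specific structure, the reader only needs the element type and shape to reinterpret the buffer as a native array in its own language.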
For more information on the tensor data structure, see the documentation
Datasets are collections of Tensors and associated metadata. The Dataset class is a user-space object that can be created, added to, sent to, and retrieved from the Orchestrator.
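Conceptually, a Dataset is just a set of named tensors plus metadata grouped under one key. A minimal stdlib sketch of that grouping (a toy stand-in, not the SmartRedis Dataset API) might look like:

```python
from dataclasses import dataclass, field

@dataclass
class SketchDataset:
    """Toy stand-in for a Dataset: named tensors plus metadata."""
    name: str
    tensors: dict = field(default_factory=dict)   # tensor name -> nested lists
    metadata: dict = field(default_factory=dict)  # field name -> list of values

    def add_tensor(self, key, values):
        self.tensors[key] = values

    def add_meta_string(self, key, value):
        self.metadata.setdefault(key, []).append(value)

# group one timestep's output under a single name
ds = SketchDataset("timestep_100")
ds.add_tensor("velocity", [[0.1, 0.2], [0.3, 0.4]])
ds.add_meta_string("units", "m/s")
```

The real Dataset is serialized and shipped to the Orchestrator as one unit, so a consumer in another language can retrieve the tensors and metadata together by name.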
For an example of how to use the Dataset class, see the Online Analysis example. For more information on the API, see the API documentation.
SmartSim and SmartRedis were designed to work together. When launched through SmartSim, applications using the SmartRedis clients are directly connected to any Orchestrator launched in the same Experiment.
In this way, a SmartSim Experiment becomes a driver for coupled ML and Simulation workflows. The following are simple examples of how to use SmartSim and SmartRedis together.