TensorRT-LLM Backend: A Large Language Model Inference Engine for Triton
The TensorRT-LLM Backend is a dedicated Triton Inference Server backend for deploying and serving TensorRT-LLM models. It integrates advanced features such as in-flight batching and paged attention, significantly improving inference efficiency for large language models. With a clean interface design, the backend lets TensorRT-LLM models plug seamlessly into Triton serving, giving users a high-performance, scalable AI inference solution.
The Triton backend for TensorRT-LLM. You can learn more about Triton backends in the backend repo. The goal of TensorRT-LLM Backend is to let you serve TensorRT-LLM models with Triton Inference Server. The inflight_batcher_llm directory contains the C++ implementation of the backend supporting inflight batching, paged attention and more.
Where can I ask general questions about Triton and Triton backends? Be sure to read all the information below as well as the general Triton documentation available in the main server repo. If you don't find your answer there you can ask questions on the issues page.
There are several ways to access the TensorRT-LLM Backend.
Starting with the Triton 23.10 release, Triton includes a container with the TensorRT-LLM Backend and Python Backend. This container has everything needed to run a TensorRT-LLM model. You can find this container on the Triton NGC page.
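As a minimal sketch of using that container (the `24.05` tag is only illustrative; pick the release you need, and note that the `-trtllm-python-py3` variant is the one that ships the TRT-LLM backend):

```bash
# Pull the pre-built Triton container that includes the TensorRT-LLM and Python backends.
docker pull nvcr.io/nvidia/tritonserver:24.05-trtllm-python-py3

# Start it with all GPUs visible and the backend repo mounted at /tensorrtllm_backend,
# matching the path used in the configuration examples later in this guide.
docker run --rm -it --net host --shm-size=2g --ulimit memlock=-1 --gpus all \
    -v $(pwd)/tensorrtllm_backend:/tensorrtllm_backend \
    nvcr.io/nvidia/tritonserver:24.05-trtllm-python-py3 bash
```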
Build via the build.py script in the server repo
Starting with the Triton 23.10 release, you can follow the steps described in the Building With Docker guide and use the build.py script to build the TRT-LLM backend.
The commands below build the same Triton TRT-LLM container as the one published on NGC.
```bash
# Prepare the TRT-LLM base image using the dockerfile from tensorrtllm_backend.
cd tensorrtllm_backend
git lfs install
git submodule update --init --recursive

# Specify the build args for the dockerfile.
BASE_IMAGE=nvcr.io/nvidia/tritonserver:24.05-py3-min
# Use the PyTorch package shipped with the PyTorch NGC container.
PYTORCH_IMAGE=nvcr.io/nvidia/pytorch:24.05-py3
TRT_VERSION=10.1.0.27
TRT_URL_x86=https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/tars/TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-12.4.tar.gz
TRT_URL_ARM=https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/tars/TensorRT-10.1.0.27.ubuntu-22.04.aarch64-gnu.cuda-12.4.tar.gz

docker build -t trtllm_base \
    --build-arg BASE_IMAGE="${BASE_IMAGE}" \
    --build-arg PYTORCH_IMAGE="${PYTORCH_IMAGE}" \
    --build-arg TRT_VER="${TRT_VERSION}" \
    --build-arg RELEASE_URL_TRT_x86="${TRT_URL_x86}" \
    --build-arg RELEASE_URL_TRT_ARM="${TRT_URL_ARM}" \
    -f dockerfile/Dockerfile.triton.trt_llm_backend .

# Run the build script from the Triton Server repo. The flags for some features or
# endpoints can be removed if not needed. Please refer to the support matrix to
# see the aligned versions: https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html
TRTLLM_BASE_IMAGE=trtllm_base
TENSORRTLLM_BACKEND_REPO_TAG=rel
PYTHON_BACKEND_REPO_TAG=r24.07

cd server
./build.py -v --no-container-interactive --enable-logging --enable-stats --enable-tracing \
    --enable-metrics --enable-gpu-metrics --enable-cpu-metrics \
    --filesystem=gcs --filesystem=s3 --filesystem=azure_storage \
    --endpoint=http --endpoint=grpc --endpoint=sagemaker --endpoint=vertex-ai \
    --backend=ensemble --enable-gpu --endpoint=http --endpoint=grpc \
    --no-container-pull \
    --image=base,${TRTLLM_BASE_IMAGE} \
    --backend=tensorrtllm:${TENSORRTLLM_BACKEND_REPO_TAG} \
    --backend=python:${PYTHON_BACKEND_REPO_TAG}
```
The TRTLLM_BASE_IMAGE is the base image that will be used to build the container. The TENSORRTLLM_BACKEND_REPO_TAG and PYTHON_BACKEND_REPO_TAG are the tags of the TensorRT-LLM backend and Python backend repositories that will be used to build the container. You can also remove the features or endpoints that you don't need by removing the corresponding flags.
Build via Docker
The version of Triton Server used in this build option can be found in the Dockerfile.
```bash
# Update the submodules
cd tensorrtllm_backend
git lfs install
git submodule update --init --recursive

# Use the Dockerfile to build the backend in a container
# For x86_64
DOCKER_BUILDKIT=1 docker build -t triton_trt_llm -f dockerfile/Dockerfile.trt_llm_backend .
# For aarch64
DOCKER_BUILDKIT=1 docker build -t triton_trt_llm --build-arg TORCH_INSTALL_TYPE="src_non_cxx11_abi" -f dockerfile/Dockerfile.trt_llm_backend .
```
Below is an example of how to serve a TensorRT-LLM model with the Triton TensorRT-LLM Backend on a 4-GPU environment. The example uses the GPT model from the TensorRT-LLM repository.
You can skip this step if you already have the engines ready. Follow the guide in the TensorRT-LLM repository for more details on how to prepare the engines for deployment.
```bash
# Update the submodule TensorRT-LLM repository
git submodule update --init --recursive
git lfs install
git lfs pull

# TensorRT-LLM is required for generating engines. You can skip this step if
# you already have the package installed. If you are generating engines within
# the Triton container, you have to install the TRT-LLM package.
(cd tensorrt_llm &&
    bash docker/common/install_cmake.sh &&
    export PATH=/usr/local/cmake/bin:$PATH &&
    python3 ./scripts/build_wheel.py --trt_root="/usr/local/tensorrt" &&
    pip3 install ./build/tensorrt_llm*.whl)

# Go to the tensorrt_llm/examples/gpt directory
cd tensorrt_llm/examples/gpt

# Download weights from HuggingFace Transformers
rm -rf gpt2 && git clone https://huggingface.co/gpt2-medium gpt2
pushd gpt2 && rm pytorch_model.bin model.safetensors && wget -q https://huggingface.co/gpt2-medium/resolve/main/pytorch_model.bin && popd

# Convert weights from HF Transformers to TensorRT-LLM checkpoint
python3 convert_checkpoint.py --model_dir gpt2 \
    --dtype float16 \
    --tp_size 4 \
    --output_dir ./c-model/gpt2/fp16/4-gpu

# Build TensorRT engines
trtllm-build --checkpoint_dir ./c-model/gpt2/fp16/4-gpu \
    --gpt_attention_plugin float16 \
    --remove_input_padding enable \
    --paged_kv_cache enable \
    --gemm_plugin float16 \
    --output_dir engines/fp16/4-gpu
```
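If the build succeeds, the output directory should contain one engine per tensor-parallel rank plus a config.json; a quick sanity check (the file names below are what trtllm-build typically produces and may vary slightly by version):

```bash
# With --tp_size 4 there should be one engine file per rank.
ls engines/fp16/4-gpu
# Expected output, approximately:
#   config.json  rank0.engine  rank1.engine  rank2.engine  rank3.engine
```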
There are five models in the all_models/inflight_batcher_llm directory that will be used in this example:
- preprocessing: This model is used for tokenizing, meaning the conversion from prompts (string) to input_ids (list of ints).
- tensorrt_llm: This model is a wrapper of your TensorRT-LLM model and is used for inferencing. Input specification can be found here.
- postprocessing: This model is used for de-tokenizing, meaning the conversion from output_ids (list of ints) to outputs (string).
- ensemble: This model can be used to chain the preprocessing, tensorrt_llm and postprocessing models together.
- tensorrt_llm_bls: This model can also be used to chain the preprocessing, tensorrt_llm and postprocessing models together, using Triton's Business Logic Scripting (BLS) instead of an ensemble.
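For orientation, the layout of that directory looks roughly like this (each model holds a config.pbtxt and a version directory; exact contents depend on the repo version):

```bash
ls all_models/inflight_batcher_llm
# ensemble  postprocessing  preprocessing  tensorrt_llm  tensorrt_llm_bls
# Each entry contains a config.pbtxt plus a "1/" version directory,
# e.g. preprocessing/config.pbtxt and preprocessing/1/model.py.
```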
When using the BLS model instead of the ensemble, you should set the number of model instances to the maximum batch size supported by the TRT engine to allow concurrent request execution. This can be done by modifying the count value in the instance_group section of the BLS model config.pbtxt.
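A minimal sketch of doing this with the repo's tools/fill_template.py helper (the template parameter names, such as bls_instance_count, come from the shipped config templates and may differ between versions); editing the instance_group stanza directly works just as well:

```bash
# Assuming the engine was built with a max batch size of 64, give the BLS model
# 64 instances so that requests can execute concurrently.
python3 tools/fill_template.py -i triton_model_repo/tensorrt_llm_bls/config.pbtxt \
    triton_max_batch_size:64,bls_instance_count:64

# The resulting section of config.pbtxt should look roughly like:
#   instance_group [ { count: 64, kind: KIND_CPU } ]
```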
The BLS model has an optional parameter accumulate_tokens which can be used in streaming mode to call the postprocessing model with all accumulated tokens instead of only one token. This might be necessary for certain tokenizers.
The BLS model supports speculative decoding. Target and draft Triton models are set with the parameters tensorrt_llm_model_name and tensorrt_llm_draft_model_name. Speculative decoding is performed by setting num_draft_tokens in the request. use_draft_logits may be set to use logits-comparison speculative decoding. Note that return_generation_logits and return_context_logits are not supported when using speculative decoding. Also note that requests with a batch size greater than 1 are not currently supported with speculative decoding.
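A hedged sketch of wiring this up (tensorrt_llm and tensorrt_llm_draft below are placeholders for whatever your target and draft models are named in the repository, and the template parameter names may vary by version):

```bash
# Point the BLS model at the target and draft TRT-LLM models.
python3 tools/fill_template.py -i triton_model_repo/tensorrt_llm_bls/config.pbtxt \
    tensorrt_llm_model_name:tensorrt_llm,tensorrt_llm_draft_model_name:tensorrt_llm_draft
# Per request, speculative decoding is then enabled by passing num_draft_tokens
# (and optionally use_draft_logits) as inputs.
```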
BLS Inputs
Name | Shape | Type | Description |
---|---|---|---|
text_input | [ -1 ] | string | Prompt text |
max_tokens | [ -1 ] | int32 | number of tokens to generate |
bad_words | [2, num_bad_words] | int32 | Bad words list |
stop_words | [2, num_stop_words] | int32 | Stop words list |
end_id | [1] | int32 | End token Id. If not specified, defaults to -1 |
pad_id | [1] | int32 | Pad token Id |
temperature | [1] | float32 | Sampling Config param: temperature |
top_k | [1] | int32 | Sampling Config param: topK |
top_p | [1] | float32 | Sampling Config param: topP |
len_penalty | [1] | float32 | Sampling Config param: lengthPenalty |
repetition_penalty | [1] | float | Sampling Config param: repetitionPenalty |
min_length | [1] | int32_t | Sampling Config param: minLength |
presence_penalty | [1] | float | Sampling Config param: presencePenalty |
frequency_penalty | [1] | float | Sampling Config param: frequencyPenalty |
random_seed | [1] | uint64_t | Sampling Config param: randomSeed |
return_log_probs | [1] | bool | When true , include log probs in the output |
return_context_logits | [1] | bool | When true , include context logits in the output |
return_generation_logits | [1] | bool | When true , include generation logits in the output |
beam_width | [1] | int32_t | (Default=1) Beam width for this request; set to 1 for greedy sampling |
stream | [1] | bool | (Default=false ). When true , stream out tokens as they are generated. When false return only when the full generation has completed. |
prompt_embedding_table | [1] | float16 (model data type) | P-tuning prompt embedding table |
prompt_vocab_size | [1] | int32 | P-tuning prompt vocab size |
lora_task_id | [1] | uint64 | Task ID for the given lora_weights. This ID is expected to be globally unique. To perform inference with a specific LoRA for the first time, lora_task_id, lora_weights and lora_config must all be given. The LoRA will be cached, so that subsequent requests for the same task only require lora_task_id. If the cache is full, the oldest LoRA will be evicted to make space for new ones. An error is returned if lora_task_id is not cached |
lora_weights | [ num_lora_modules_layers, D x Hi + Ho x D ] | float (model data type) | Weights for a LoRA adapter. See the LoRA docs for more details. |
lora_config | [ num_lora_modules_layers, 3 ] | int32_t | LoRA configuration tensor. [ module_id, layer_idx, adapter_size (D aka R value) ] See the LoRA docs for more details. |
embedding_bias_words | [-1] | string | Embedding bias words |
embedding_bias_weights | [-1] | float32 | Embedding bias weights |
num_draft_tokens | [1] | int32 | number of tokens to get from draft model during speculative decoding |
use_draft_logits | [1] | bool | use logit comparison during speculative decoding |
BLS Outputs
Name | Shape | Type | Description |
---|---|---|---|
text_output | [-1] | string | text output |
cum_log_probs | [-1] | float | cumulative log probabilities for each output |
output_log_probs | [beam_width, -1] | float | log probabilities for each output |
context_logits | [-1, vocab_size] | float | context logits for input |
generation_logits | [beam_width, seq_len, vocab_size] | float | generation logits for each output |
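As an illustration of how these map onto an actual request, a minimal call against Triton's HTTP generate endpoint might look like the following (the port and model name assume a default launch with the tensorrt_llm_bls model deployed):

```bash
# Send one prompt to the BLS model; the JSON fields correspond to the BLS input
# tensors listed above, and the response contains text_output (plus log probs /
# logits if the corresponding return_* flags are set).
curl -s -X POST localhost:8000/v2/models/tensorrt_llm_bls/generate -d '{
  "text_input": "What is machine learning?",
  "max_tokens": 64,
  "stream": false,
  "temperature": 0.7
}'
```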
To learn more about ensemble and BLS models, please see the Ensemble Models and Business Logic Scripting sections of the Triton Inference Server documentation.
```bash
# Create the model repository that will be used by the Triton server
cd tensorrtllm_backend
mkdir triton_model_repo

# Copy the example models to the model repository
cp -r all_models/inflight_batcher_llm/* triton_model_repo/

# Copy the TRT engine to triton_model_repo/tensorrt_llm/1/
cp tensorrt_llm/examples/gpt/engines/fp16/4-gpu/* triton_model_repo/tensorrt_llm/1
```
The following table shows the fields that may need to be modified before deployment:
triton_model_repo/preprocessing/config.pbtxt
Name | Description |
---|---|
tokenizer_dir | The path to the tokenizer for the model. In this example, the path should be set to /tensorrtllm_backend/tensorrt_llm/examples/gpt/gpt2 as the tensorrtllm_backend directory will be mounted to /tensorrtllm_backend within the container |
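A small sketch of filling that field with the repo's tools/fill_template.py helper (assuming the mount path described above; other template parameters in the preprocessing config can be filled the same way):

```bash
# Point the preprocessing model at the GPT-2 tokenizer downloaded earlier.
python3 tools/fill_template.py -i triton_model_repo/preprocessing/config.pbtxt \
    tokenizer_dir:/tensorrtllm_backend/tensorrt_llm/examples/gpt/gpt2,triton_max_batch_size:64
```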
triton_model_repo/tensorrt_llm/config.pbtxt
Name | Description |
---|---|
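Once the remaining fields in these configs are filled in (engine directory, batch size, and so on), the server can be started. A hedged sketch for the 4-GPU example above, using the repo's launch script:

```bash
# Launch Triton with one MPI rank per GPU; the engines were built with --tp_size 4.
cd /tensorrtllm_backend
python3 scripts/launch_triton_server.py --world_size=4 --model_repo=/tensorrtllm_backend/triton_model_repo
```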