
TensorRT-LLM Backend: a large language model inference backend for Triton
The TensorRT-LLM Backend is a dedicated Triton Inference Server backend for deploying and serving TensorRT-LLM models. It integrates advanced features such as in-flight batching and paged attention, significantly improving inference efficiency for large language models. With a clean interface design, this backend lets TensorRT-LLM models plug seamlessly into Triton serving, providing a high-performance, scalable AI inference solution.
The Triton backend for TensorRT-LLM. You can learn more about Triton backends in the backend repo. The goal of TensorRT-LLM Backend is to let you serve TensorRT-LLM models with Triton Inference Server. The inflight_batcher_llm directory contains the C++ implementation of the backend supporting inflight batching, paged attention and more.
Where can I ask general questions about Triton and Triton backends? Be sure to read all the information below as well as the general Triton documentation available in the main server repo. If you don't find your answer there you can ask questions on the issues page.
There are several ways to access the TensorRT-LLM Backend.
Starting with Triton 23.10 release, Triton includes a container with the TensorRT-LLM Backend and Python Backend. This container should have everything to run a TensorRT-LLM model. You can find this container on the Triton NGC page.
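If you want to try the pre-built container directly, a typical pull-and-run sequence is sketched below. The exact image tag (24.05-trtllm-python-py3 here) and the mount path are assumptions; check the Triton NGC page for the tag that matches your release.

```bash
# Pull the pre-built Triton container that bundles the TensorRT-LLM and Python backends.
# NOTE: the tag below is an assumption; use the tag that matches your Triton release on NGC.
docker pull nvcr.io/nvidia/tritonserver:24.05-trtllm-python-py3

# Start the container with GPU access and mount this repository (the host path is an assumption).
docker run --rm -it --net host --shm-size=2g --gpus all \
  -v $(pwd)/tensorrtllm_backend:/tensorrtllm_backend \
  nvcr.io/nvidia/tritonserver:24.05-trtllm-python-py3 bash
```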
build.py Script in the Server Repo
Starting with the Triton 23.10 release, you can follow the steps described in the Building With Docker guide and use the build.py script to build the TRT-LLM backend.
The commands below build the same Triton TRT-LLM container as the one published on NGC.
```bash
# Prepare the TRT-LLM base image using the dockerfile from tensorrtllm_backend.
cd tensorrtllm_backend
git lfs install
git submodule update --init --recursive

# Specify the build args for the dockerfile.
BASE_IMAGE=nvcr.io/nvidia/tritonserver:24.05-py3-min
# Use the PyTorch package shipped with the PyTorch NGC container.
PYTORCH_IMAGE=nvcr.io/nvidia/pytorch:24.05-py3
TRT_VERSION=10.1.0.27
TRT_URL_x86=https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/tars/TensorRT-10.1.0.27.Linux.x86_64-gnu.cuda-12.4.tar.gz
TRT_URL_ARM=https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.1.0/tars/TensorRT-10.1.0.27.ubuntu-22.04.aarch64-gnu.cuda-12.4.tar.gz

docker build -t trtllm_base \
  --build-arg BASE_IMAGE="${BASE_IMAGE}" \
  --build-arg PYTORCH_IMAGE="${PYTORCH_IMAGE}" \
  --build-arg TRT_VER="${TRT_VERSION}" \
  --build-arg RELEASE_URL_TRT_x86="${TRT_URL_x86}" \
  --build-arg RELEASE_URL_TRT_ARM="${TRT_URL_ARM}" \
  -f dockerfile/Dockerfile.triton.trt_llm_backend .

# Run the build script from the Triton Server repo. The flags for some features or
# endpoints can be removed if not needed. Please refer to the support matrix to
# see the aligned versions: https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html
TRTLLM_BASE_IMAGE=trtllm_base
TENSORRTLLM_BACKEND_REPO_TAG=rel
PYTHON_BACKEND_REPO_TAG=r24.07

cd server
./build.py -v --no-container-interactive --enable-logging --enable-stats --enable-tracing \
  --enable-metrics --enable-gpu-metrics --enable-cpu-metrics \
  --filesystem=gcs --filesystem=s3 --filesystem=azure_storage \
  --endpoint=http --endpoint=grpc --endpoint=sagemaker --endpoint=vertex-ai \
  --backend=ensemble --enable-gpu --endpoint=http --endpoint=grpc \
  --no-container-pull \
  --image=base,${TRTLLM_BASE_IMAGE} \
  --backend=tensorrtllm:${TENSORRTLLM_BACKEND_REPO_TAG} \
  --backend=python:${PYTHON_BACKEND_REPO_TAG}
```
The TRTLLM_BASE_IMAGE is the base image that will be used to build the
container. The TENSORRTLLM_BACKEND_REPO_TAG and PYTHON_BACKEND_REPO_TAG are
the tags of the TensorRT-LLM backend and Python backend repositories that will
be used to build the container. You can also remove the features or endpoints
that you don't need by removing the corresponding flags.
The version of Triton Server used in this build option can be found in the Dockerfile.
```bash
# Update the submodules
cd tensorrtllm_backend
git lfs install
git submodule update --init --recursive

# Use the Dockerfile to build the backend in a container
# For x86_64
DOCKER_BUILDKIT=1 docker build -t triton_trt_llm -f dockerfile/Dockerfile.trt_llm_backend .
# For aarch64
DOCKER_BUILDKIT=1 docker build -t triton_trt_llm --build-arg TORCH_INSTALL_TYPE="src_non_cxx11_abi" -f dockerfile/Dockerfile.trt_llm_backend .
```
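Once the triton_trt_llm image has been built, one way to enter it for the following steps is sketched below; the host path used for the repository mount is an assumption and should be adjusted to your checkout location.

```bash
# Run the freshly built backend image with GPUs and the repo mounted at /tensorrtllm_backend.
# The host path below is an assumption; point it at your tensorrtllm_backend clone.
docker run --rm -it --net host --shm-size=2g \
  --ulimit memlock=-1 --ulimit stack=67108864 --gpus all \
  -v /path/to/tensorrtllm_backend:/tensorrtllm_backend \
  triton_trt_llm bash
```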
Below is an example of how to serve a TensorRT-LLM model with the Triton TensorRT-LLM Backend in a 4-GPU environment. The example uses the GPT model from the TensorRT-LLM repository.
You can skip this step if you already have the engines ready. Follow the guide in the TensorRT-LLM repository for more details on how to prepare the engines for deployment.
```bash
# Update the submodule TensorRT-LLM repository
git submodule update --init --recursive
git lfs install
git lfs pull

# TensorRT-LLM is required for generating engines. You can skip this step if
# you already have the package installed. If you are generating engines within
# the Triton container, you have to install the TRT-LLM package.
(cd tensorrt_llm &&
  bash docker/common/install_cmake.sh &&
  export PATH=/usr/local/cmake/bin:$PATH &&
  python3 ./scripts/build_wheel.py --trt_root="/usr/local/tensorrt" &&
  pip3 install ./build/tensorrt_llm*.whl)

# Go to the tensorrt_llm/examples/gpt directory
cd tensorrt_llm/examples/gpt

# Download weights from HuggingFace Transformers
rm -rf gpt2 && git clone https://huggingface.co/gpt2-medium gpt2
pushd gpt2 && rm pytorch_model.bin model.safetensors && wget -q https://huggingface.co/gpt2-medium/resolve/main/pytorch_model.bin && popd

# Convert weights from HF Transformers to TensorRT-LLM checkpoint
python3 convert_checkpoint.py --model_dir gpt2 \
    --dtype float16 \
    --tp_size 4 \
    --output_dir ./c-model/gpt2/fp16/4-gpu

# Build TensorRT engines
trtllm-build --checkpoint_dir ./c-model/gpt2/fp16/4-gpu \
    --gpt_attention_plugin float16 \
    --remove_input_padding enable \
    --paged_kv_cache enable \
    --gemm_plugin float16 \
    --output_dir engines/fp16/4-gpu
```
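As a quick sanity check before wiring the engines into Triton, the output directory should contain one engine per tensor-parallel rank plus a config file; the exact file names below are assumptions based on typical trtllm-build output.

```bash
# Sanity check: one engine per GPU rank plus a config (file names are assumptions).
ls engines/fp16/4-gpu
# config.json  rank0.engine  rank1.engine  rank2.engine  rank3.engine
```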
There are five models in the all_models/inflight_batcher_llm directory that will be used in this example:

| Model | Description |
|---|---|
| preprocessing | This model is used for tokenizing, meaning the conversion from prompts (string) to input_ids (list of ints). |
| tensorrt_llm | This model is a wrapper around your TensorRT-LLM model and is used for inferencing. The input specification can be found here. |
| postprocessing | This model is used for de-tokenizing, meaning the conversion from output_ids (list of ints) to outputs (string). |
| ensemble | This model can be used to chain the preprocessing, tensorrt_llm and postprocessing models together. |
| tensorrt_llm_bls | This model can also be used to chain the preprocessing, tensorrt_llm and postprocessing models together. |
When using the BLS model instead of the ensemble, you should set the number of model instances to
the maximum batch size supported by the TRT engine to allow concurrent request execution. This
can be done by modifying the count value in the instance_group section of the BLS model config.pbtxt.
The BLS model has an optional parameter accumulate_tokens which can be used in streaming mode to call the
postprocessing model with all accumulated tokens, instead of only one token.
This might be necessary for certain tokenizers.
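As a rough illustration of the two settings above, the BLS model's config.pbtxt might be adjusted as in the excerpt below; the count of 8 is just an assumed engine max batch size, and the parameter block mirrors how the example configs expose accumulate_tokens.

```
# triton_model_repo/tensorrt_llm_bls/config.pbtxt (excerpt; values are assumptions)
instance_group [
  {
    count: 8          # set this to the max batch size supported by the TRT engine
    kind: KIND_CPU    # the BLS model is a Python model and typically runs on CPU
  }
]

parameters: {
  key: "accumulate_tokens"
  value: {
    string_value: "true"   # call postprocessing with all accumulated tokens in streaming mode
  }
}
```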
The BLS model supports speculative decoding. The target and draft Triton models are selected with the parameters tensorrt_llm_model_name and tensorrt_llm_draft_model_name. Speculative decoding is performed by setting num_draft_tokens in the request. use_draft_logits may be set to use logits-comparison speculative decoding. Note that return_generation_logits and return_context_logits are not supported when using speculative decoding. Also note that requests with batch size greater than 1 are not supported with speculative decoding right now.
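For illustration, a speculative-decoding request to the BLS model through Triton's HTTP generate endpoint could look like the sketch below; the model name tensorrt_llm_bls, the port, and the parameter values are assumptions based on the example model repository.

```bash
# Hypothetical request to the BLS model with speculative decoding enabled.
# Model name, port and values are assumptions; adjust them to your deployment.
curl -X POST localhost:8000/v2/models/tensorrt_llm_bls/generate -d '{
  "text_input": "What is machine learning?",
  "max_tokens": 64,
  "num_draft_tokens": 4,
  "use_draft_logits": false
}'
```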
BLS Inputs
| Name | Shape | Type | Description |
|---|---|---|---|
| text_input | [-1] | string | Prompt text |
| max_tokens | [-1] | int32 | Number of tokens to generate |
| bad_words | [2, num_bad_words] | int32 | Bad words list |
| stop_words | [2, num_stop_words] | int32 | Stop words list |
| end_id | [1] | int32 | End token ID. If not specified, defaults to -1 |
| pad_id | [1] | int32 | Pad token ID |
| temperature | [1] | float32 | Sampling Config param: temperature |
| top_k | [1] | int32 | Sampling Config param: topK |
| top_p | [1] | float32 | Sampling Config param: topP |
| len_penalty | [1] | float32 | Sampling Config param: lengthPenalty |
| repetition_penalty | [1] | float32 | Sampling Config param: repetitionPenalty |
| min_length | [1] | int32 | Sampling Config param: minLength |
| presence_penalty | [1] | float32 | Sampling Config param: presencePenalty |
| frequency_penalty | [1] | float32 | Sampling Config param: frequencyPenalty |
| random_seed | [1] | uint64 | Sampling Config param: randomSeed |
| return_log_probs | [1] | bool | When true, include log probs in the output |
| return_context_logits | [1] | bool | When true, include context logits in the output |
| return_generation_logits | [1] | bool | When true, include generation logits in the output |
| beam_width | [1] | int32 | (Default=1) Beam width for this request; set to 1 for greedy sampling |
| stream | [1] | bool | (Default=false) When true, stream out tokens as they are generated. When false, return only when the full generation has completed |
| prompt_embedding_table | [1] | float16 (model data type) | P-tuning prompt embedding table |
| prompt_vocab_size | [1] | int32 | P-tuning prompt vocab size |
| lora_task_id | [1] | uint64 | Task ID for the given lora_weights. This ID is expected to be globally unique. To perform inference with a specific LoRA for the first time, lora_task_id, lora_weights and lora_config must all be given. The LoRA will be cached, so that subsequent requests for the same task only require lora_task_id. If the cache is full, the oldest LoRA will be evicted to make space for new ones. An error is returned if lora_task_id is not cached |
| lora_weights | [num_lora_modules_layers, D x Hi + Ho x D] | float (model data type) | Weights for a LoRA adapter. See the LoRA docs for more details |
| lora_config | [num_lora_modules_layers, 3] | int32 | LoRA configuration tensor: [module_id, layer_idx, adapter_size (D, a.k.a. R value)]. See the LoRA docs for more details |
| embedding_bias_words | [-1] | string | Embedding bias words |
| embedding_bias_weights | [-1] | float32 | Embedding bias weights |
| num_draft_tokens | [1] | int32 | Number of tokens to get from the draft model during speculative decoding |
| use_draft_logits | [1] | bool | Use logits comparison during speculative decoding |
BLS Outputs
| Name | Shape | Type | Description |
|---|---|---|---|
| text_output | [-1] | string | Text output |
| cum_log_probs | [-1] | float | Cumulative log probabilities for each output |
| output_log_probs | [beam_width, -1] | float | Log probabilities for each output |
| context_logits | [-1, vocab_size] | float | Context logits for the input |
| generation_logits | [beam_width, seq_len, vocab_size] | float | Generation logits for each output |
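To tie the input and output tensors above together, a minimal non-streaming request against the ensemble model and the general shape of its response might look like the following; the model name, port and prompt are assumptions.

```bash
# Minimal generate request against the ensemble model (names and values are assumptions).
curl -X POST localhost:8000/v2/models/ensemble/generate -d '{
  "text_input": "The capital of France is",
  "max_tokens": 16,
  "bad_words": "",
  "stop_words": ""
}'
# Expected shape of the response (contents will differ):
# {"model_name":"ensemble","model_version":"1","text_output":"The capital of France is Paris ..."}
```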
To learn more about ensemble and BLS models, please see the Ensemble Models and Business Logic Scripting sections of the Triton Inference Server documentation.
```bash
# Create the model repository that will be used by the Triton server
cd tensorrtllm_backend
mkdir triton_model_repo

# Copy the example models to the model repository
cp -r all_models/inflight_batcher_llm/* triton_model_repo/

# Copy the TRT engine to triton_model_repo/tensorrt_llm/1/
cp tensorrt_llm/examples/gpt/engines/fp16/4-gpu/* triton_model_repo/tensorrt_llm/1
```
The following tables show the fields that may need to be modified before deployment:
triton_model_repo/preprocessing/config.pbtxt
| Name | Description |
|---|---|
| tokenizer_dir | The path to the tokenizer for the model. In this example, the path should be set to /tensorrtllm_backend/tensorrt_llm/examples/gpt/gpt2, as the tensorrtllm_backend directory will be mounted to /tensorrtllm_backend within the container |
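One way to set such values without hand-editing each file is the fill_template.py helper in this repo's tools directory; the sketch below only covers the preprocessing model, and the tokenizer path assumes the repo is mounted at /tensorrtllm_backend.

```bash
# Hypothetical use of the fill_template.py helper to set tokenizer_dir for the
# preprocessing model; the path assumes the repo is mounted at /tensorrtllm_backend.
python3 tools/fill_template.py -i triton_model_repo/preprocessing/config.pbtxt \
    tokenizer_dir:/tensorrtllm_backend/tensorrt_llm/examples/gpt/gpt2
```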
triton_model_repo/tensorrt_llm/config.pbtxt
| Name | Description |
|---|---|