GPU-Benchmarks-on-LLM-Inference

A benchmark comparison of LLaMA 3 inference performance on NVIDIA GPUs and Apple Silicon

This project benchmarks the inference performance of NVIDIA GPUs and Apple Silicon on the LLaMA 3 models, covering hardware from consumer to data-center class. The tests use llama.cpp and show inference speed for the 8B and 70B models at different quantization levels. Results are presented as tables of generation speed and prompt-evaluation speed. The project also provides build instructions, usage examples, VRAM requirement estimates, and a model perplexity comparison, offering a practical reference for LLM hardware selection and deployment.

Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference? 🧐

Description

Use llama.cpp to test the inference speed of the LLaMA 3 models on different GPUs rented on RunPod, as well as on a 13-inch M1 MacBook Air, a 14-inch M1 Max MacBook Pro, an M2 Ultra Mac Studio, and a 16-inch M3 Max MacBook Pro.

Overview

Average speed (tokens/s) for generating 1024 tokens with LLaMA 3. Higher is better.

| GPU | 8B Q4_K_M | 8B F16 | 70B Q4_K_M | 70B F16 |
| --- | --- | --- | --- | --- |
| 3070 8GB | 70.94 | OOM | OOM | OOM |
| 3080 10GB | 106.40 | OOM | OOM | OOM |
| 3080 Ti 12GB | 106.71 | OOM | OOM | OOM |
| 4070 Ti 12GB | 82.21 | OOM | OOM | OOM |
| 4080 16GB | 106.22 | 40.29 | OOM | OOM |
| RTX 4000 Ada 20GB | 58.59 | 20.85 | OOM | OOM |
| 3090 24GB | 111.74 | 46.51 | OOM | OOM |
| 4090 24GB | 127.74 | 54.34 | OOM | OOM |
| RTX 5000 Ada 32GB | 89.87 | 32.67 | OOM | OOM |
| 3090 24GB * 2 | 108.07 | 47.15 | 16.29 | OOM |
| 4090 24GB * 2 | 122.56 | 53.27 | 19.06 | OOM |
| RTX A6000 48GB | 102.22 | 40.25 | 14.58 | OOM |
| RTX 6000 Ada 48GB | 130.99 | 51.97 | 18.36 | OOM |
| A40 48GB | 88.95 | 33.95 | 12.08 | OOM |
| L40S 48GB | 113.60 | 43.42 | 15.31 | OOM |
| RTX 4000 Ada 20GB * 4 | 56.14 | 20.58 | 7.33 | OOM |
| A100 PCIe 80GB | 138.31 | 54.56 | 22.11 | OOM |
| A100 SXM 80GB | 133.38 | 53.18 | 24.33 | OOM |
| H100 PCIe 80GB | 144.49 | 67.79 | 25.01 | OOM |
| 3090 24GB * 4 | 104.94 | 46.40 | 16.89 | OOM |
| 4090 24GB * 4 | 117.61 | 52.69 | 18.83 | OOM |
| RTX 5000 Ada 32GB * 4 | 82.73 | 31.94 | 11.45 | OOM |
| 3090 24GB * 6 | 101.07 | 45.55 | 16.93 | 5.82 |
| 4090 24GB * 8 | 116.13 | 52.12 | 18.76 | 6.45 |
| RTX A6000 48GB * 4 | 93.73 | 38.87 | 14.32 | 4.74 |
| RTX 6000 Ada 48GB * 4 | 118.99 | 50.25 | 17.96 | 6.06 |
| A40 48GB * 4 | 83.79 | 33.28 | 11.91 | 3.98 |
| L40S 48GB * 4 | 105.72 | 42.48 | 14.99 | 5.03 |
| A100 PCIe 80GB * 4 | 117.30 | 51.54 | 22.68 | 7.38 |
| A100 SXM 80GB * 4 | 97.70 | 45.45 | 19.60 | 6.92 |
| H100 PCIe 80GB * 4 | 118.14 | 62.90 | 26.20 | 9.63 |
| M1 7-Core GPU 8GB | 9.72 | OOM | OOM | OOM |
| M1 Max 32-Core GPU 64GB | 34.49 | 18.43 | 4.09 | OOM |
| M2 Ultra 76-Core GPU 192GB | 76.28 | 36.25 | 12.13 | 4.71 |
| M3 Max 40-Core GPU 64GB | 50.74 | 22.39 | 7.53 | OOM |

Average prompt-evaluation speed (tokens/s) for a 1024-token prompt with LLaMA 3. Higher is better.

| GPU | 8B Q4_K_M | 8B F16 | 70B Q4_K_M | 70B F16 |
| --- | --- | --- | --- | --- |
| 3070 8GB | 2283.62 | OOM | OOM | OOM |
| 3080 10GB | 3557.02 | OOM | OOM | OOM |
| 3080 Ti 12GB | 3556.67 | OOM | OOM | OOM |
| 4070 Ti 12GB | 3653.07 | OOM | OOM | OOM |
| 4080 16GB | 5064.99 | 6758.90 | OOM | OOM |
| RTX 4000 Ada 20GB | 2310.53 | 2951.87 | OOM | OOM |
| 3090 24GB | 3865.39 | 4239.64 | OOM | OOM |
| 4090 24GB | 6898.71 | 9056.26 | OOM | OOM |
| RTX 5000 Ada 32GB | 4467.46 | 5835.41 | OOM | OOM |
| 3090 24GB * 2 | 4004.14 | 4690.50 | 393.89 | OOM |
| 4090 24GB * 2 | 8545.00 | 11094.51 | 905.38 | OOM |
| RTX A6000 48GB | 3621.81 | 4315.18 | 466.82 | OOM |
| RTX 6000 Ada 48GB | 5560.94 | 6205.44 | 547.03 | OOM |
| A40 48GB | 3240.95 | 4043.05 | 239.92 | OOM |
| L40S 48GB | 5908.52 | 2491.65 | 649.08 | OOM |
| RTX 4000 Ada 20GB * 4 | 3369.24 | 4366.64 | 306.44 | OOM |
| A100 PCIe 80GB | 5800.48 | 7504.24 | 726.65 | OOM |
| A100 SXM 80GB | 5863.92 | 681.47 | 796.81 | OOM |
| H100 PCIe 80GB | 7760.16 | 10342.63 | 984.06 | OOM |
| 3090 24GB * 4 | 4653.93 | 5713.41 | 350.06 | OOM |
| 4090 24GB * 4 | 9609.29 | 12304.19 | 898.17 | OOM |
| RTX 5000 Ada 32GB * 4 | 6530.78 | 2877.66 | 541.54 | OOM |
| 3090 24GB * 6 | 5153.05 | 5952.55 | 739.40 | 927.23 |
| 4090 24GB * 8 | 9706.82 | 11818.92 | 1336.26 | 1890.48 |
| RTX A6000 48GB * 4 | 5340.10 | 6448.85 | 539.20 | 792.23 |
| RTX 6000 Ada 48GB * 4 | 9679.55 | 12637.94 | 714.93 | 1270.39 |
| A40 48GB * 4 | 4841.98 | 5931.06 | 263.36 | 900.79 |
| L40S 48GB * 4 | 9008.27 | 2541.61 | 634.05 | 1478.83 |
| A100 PCIe 80GB * 4 | 8889.35 | 11670.74 | 978.06 | 1733.41 |
| A100 SXM 80GB * 4 | 7782.25 | 674.11 | 539.08 | 1834.16 |
| H100 PCIe 80GB * 4 | 11560.23 | 15612.81 | 1133.23 | 2420.10 |
| M1 7-Core GPU 8GB | 87.26 | OOM | OOM | OOM |
| M1 Max 32-Core GPU 64GB | 355.45 | 418.77 | 33.01 | OOM |
| M2 Ultra 76-Core GPU 192GB | 1023.89 | 1202.74 | 117.76 | 145.82 |
| M3 Max 40-Core GPU 64GB | 678.04 | 751.49 | 62.88 | OOM |

Model

Thanks to shawwn for LLaMA model weights (7B, 13B, 30B, 65B): llama-dl. Access LLaMA 2 from Meta AI. Access LLaMA 3 from Meta Llama 3 on Hugging Face or my Hugging Face repos: Xiongjie Dai.

Usage

Build

  • For NVIDIA GPUs, this provides BLAS acceleration using the GPU's CUDA cores:

    !make clean && LLAMA_CUBLAS=1 make -j
  • For Apple Silicon, Metal is enabled by default:

    !make clean && make -j
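
Note: newer llama.cpp releases have replaced the Makefile build with CMake and renamed the CUDA flag (LLAMA_CUBLAS became GGML_CUDA). If the commands above fail on a recent checkout, a build along these lines should work, with the binaries landing in build/bin:

    !cmake -B build -DGGML_CUDA=ON
    !cmake --build build --config Release -j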

Text Completion

Use the argument -ngl 0 to run inference on the CPU only, and -ngl 10000 to ensure all layers are offloaded to the GPU.

!./main -ngl 10000 -m ./models/8B-v3/ggml-model-Q4_K_M.gguf --color --temp 1.1 --repeat_penalty 1.1 -c 0 -n 1024 -e -s 0 -p """\
First Citizen:\n\n\
Before we proceed any further, hear me speak.\n\n\
\n\n\
All:\n\n\
Speak, speak.\n\n\
\n\n\
First Citizen:\n\n\
You are all resolved rather to die than to famish?\n\n\
\n\n\
All:\n\n\
Resolved. resolved.\n\n\
\n\n\
First Citizen:\n\n\
First, you know Caius Marcius is chief enemy to the people.\n\n\
\n\n\
All:\n\n\
We know't, we know't.\n\n\
\n\n\
First Citizen:\n\n\
Let us kill him, and we'll have corn at our own price. Is't a verdict?\n\n\
\n\n\
All:\n\n\
No more talking on't; let it be done: away, away!\n\n\
\n\n\
Second Citizen:\n\n\
One word, good citizens.\n\n\
\n\n\
First Citizen:\n\n\
We are accounted poor citizens, the patricians good. What authority surfeits on would relieve us: if they would yield us but the superfluity, \
while it were wholesome, we might guess they relieved us humanely; but they think we are too dear: the leanness that afflicts us, the object of \
our misery, is as an inventory to particularise their abundance; our sufferance is a gain to them Let us revenge this with our pikes, \
ere we become rakes: for the gods know I speak this in hunger for bread, not in thirst for revenge.\n\n\
\n\n\
"""

Note: For Apple Silicon, check recommendedMaxWorkingSetSize in the output to see how much memory can be allocated on the GPU while maintaining performance. Only about 70% of unified memory can be allocated to the GPU on a 32GB M1 Max right now, and machines with larger memory can expect around 78% to be usable by the GPU. (Source: https://developer.apple.com/videos/play/tech-talks/10580/?time=346) To utilize the whole memory, use -ngl 0 to run inference on the CPU only. (Thanks to: https://github.com/ggerganov/llama.cpp/pull/1826)
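
On recent macOS versions, this GPU allocation limit can reportedly be raised through a sysctl knob (an assumption to verify on your own machine; the value resets on reboot, and setting it too high can destabilize the system):

    sudo sysctl iogpu.wired_limit_mb=57344  # e.g. allow ~56 GB of a 64 GB machine for the GPU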

Chat template for LLaMA 3 🦙🦙🦙

!./main -ngl 10000 -m ./models/8B-v3-instruct/ggml-model-Q4_K_M.gguf --color -c 0 -n -2 -e -s 0 --mirostat 2 -i --no-display-prompt --keep -1 \
-r '<|eot_id|>' -p '<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nHi!<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n' \
--in-prefix '<|start_header_id|>user<|end_header_id|>\n\n' --in-suffix '<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n'
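
On newer builds, where main has been renamed llama-cli, conversation mode can reportedly apply the LLaMA 3 template automatically, so the special tokens above do not have to be spelled out by hand (flag names assume a recent release):

    !./llama-cli -ngl 10000 -m ./models/8B-v3-instruct/ggml-model-Q4_K_M.gguf -cnv --chat-template llama3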

Benchmark

!./llama-bench -p 512,1024,4096,8192 -n 512,1024,4096,8192 -m ./models/8B-v3/ggml-model-Q4_K_M.gguf
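
llama-bench accepts comma-separated value lists for most parameters, so, for example, a CPU-only baseline and full GPU offload can be compared in a single run (a sketch; sweeping -ngl this way assumes a reasonably recent build):

    !./llama-bench -m ./models/8B-v3/ggml-model-Q4_K_M.gguf -p 1024 -n 1024 -ngl 0,10000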

Total VRAM Requirements

| Model | Quantized size (Q4_K_M) | Original size (f16) |
| --- | --- | --- |
| 8B | 4.58 GB | 14.96 GB |
| 70B | 39.59 GB | 131.42 GB |

You may estimate the VRAM requirement using this tool: LLM RAM Calculator
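
These figures are consistent with a back-of-envelope estimate (an approximation for the weights alone, ignoring the KV cache and runtime overhead): weight memory ≈ parameter count × bits per weight / 8. For the roughly 8.0B-parameter model at f16 (16 bits per weight) that gives about 16 GB ≈ 14.9 GiB, and Q4_K_M at roughly 4.8 bits per weight gives about 4.8 GB ≈ 4.5 GiB. The KV cache grows with context length, so budget additional VRAM on top.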

Perplexity table on LLaMA 3 70B

Lower perplexity is better. (Credit to: dranger003)

| Quantization | Size (GiB) | Perplexity (wiki.test) | Delta (FP16) |
| --- | --- | --- | --- |
| IQ1_S | 14.29 | 9.8655 +/- 0.0625 | 248.51% |
| IQ1_M | 15.60 | 8.5193 +/- 0.0530 | 201.94% |
| IQ2_XXS | 17.79 | 6.6705 +/- 0.0405 | |
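
For reference, numbers like these can be reproduced with llama.cpp's perplexity tool over the wikitext-2 test set (a sketch: the binary is called perplexity in older builds and llama-perplexity in newer ones, and the model and dataset paths here are illustrative):

    !./perplexity -m ./models/70B-v3/ggml-model-Q4_K_M.gguf -f wikitext-2-raw/wiki.test.raw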
