<img align="right" width="25%" src="logo.png">

wgpu


wgpu is a cross-platform, safe, pure-rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL; and on top of WebGL2 and WebGPU on wasm.

The API is based on the WebGPU standard. It serves as the core of the WebGPU integration in Firefox and Deno.

Repo Overview

The repository hosts the following libraries:

  • wgpu - User-facing Rust API.
  • wgpu-core - Internal safe implementation.
  • wgpu-hal - Internal unsafe GPU API abstraction layer.
  • wgpu-types - Rust types shared between all crates.
  • naga - Stand-alone shader translation library.
  • d3d12 - Collection of thin abstractions over D3D12.
  • deno_webgpu - WebGPU implementation for the Deno JavaScript/TypeScript runtime.

The following binaries:

  • naga-cli - Tool for translating shaders between different languages using naga.
  • wgpu-info - Tool for getting information on GPUs in the system.
  • cts_runner - WebGPU Conformance Test Suite runner using deno_webgpu.
  • player - Standalone application for replaying API traces.

For an overview of all the components in the gfx-rs ecosystem, see the big picture.

Getting Started

Rust

Rust examples can be found at wgpu/examples. You can run the examples on native with cargo run --bin wgpu-examples <example>. See the list of examples.

To run the examples in a browser, run cargo xtask run-wasm. Then open http://localhost:8000 in your browser, and you can choose an example to run. Naturally, in order to display any of the WebGPU based examples, you need to make sure your browser supports it.

If you are looking for a wgpu tutorial, look at the following:

C/C++

To use wgpu in C/C++, you need wgpu-native.

If you are looking for a wgpu C++ tutorial, look at the following:

Others

If you want to use wgpu in other languages, there are many bindings to wgpu-native from languages such as Python, D, Julia, Kotlin, and more. See the list.

Community

We have a Matrix space with a few different rooms that form the wgpu community:

  • Wgpu - discussion of wgpu's development.
  • Naga - discussion of naga's development.
  • User - discussion of using the library and the surrounding ecosystem.
  • Random - discussion of everything else.

Wiki

We have a wiki that serves as a knowledge base.

Supported Platforms

| API    | Windows      | Linux/Android   | macOS/iOS | Web (wasm)  |
| ------ | ------------ | --------------- | --------- | ----------- |
| Vulkan | ✅           | ✅              | 🌋        |             |
| Metal  |              |                 | ✅        |             |
| DX12   | ✅           |                 |           |             |
| OpenGL | 🆗 (GL 3.3+) | 🆗 (GL ES 3.0+) | 📐        | 🆗 (WebGL2) |
| WebGPU |              |                 |           | ✅          |

✅ = First Class Support
🆗 = Downlevel/Best Effort Support
📐 = Requires the ANGLE translation layer (GL ES 3.0 only)
🌋 = Requires the MoltenVK translation layer
🛠️ = Unsupported, though open to contributions

Shader Support

wgpu supports shaders in WGSL, SPIR-V, and GLSL. Both HLSL and GLSL have compilers to target SPIR-V. All of these shader languages can be used with any backend as we handle all of the conversions. Additionally, support for these shader inputs is not going away.

While WebGPU does not support any shading language other than WGSL, we will automatically convert your non-WGSL shaders if you're running on WebGPU.

WGSL is always supported by default, but GLSL and SPIR-V need features enabled to compile in support.

Note that the WGSL specification is still under development, so the draft specification does not exactly describe what wgpu supports. See below for details.

To enable SPIR-V shaders, enable the spirv feature of wgpu. To enable GLSL shaders, enable the glsl feature of wgpu.
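As a sketch, enabling both optional front ends in Cargo.toml might look like the following (the feature names come from this README; the version is a placeholder):

```toml
# Hypothetical Cargo.toml fragment; substitute a real wgpu version.
[dependencies]
wgpu = { version = "*", features = ["spirv", "glsl"] }
```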

Angle

Angle is a translation layer from GLES to other backends, developed by Google. We support running our GLES3 backend over it in order to reach platforms with D3D11 support, which aren't accessible otherwise. In order to run with Angle, the "angle" feature has to be enabled, and the Angle libraries must be placed in a location visible to the application. These binaries can be downloaded from gfbuild-angle artifacts; manual compilation may be required on Macs with Apple silicon.

On Windows, you generally need to copy the libraries into the working directory, into the same directory as the executable, or somewhere on your PATH. On Linux, you can point to them using the LD_LIBRARY_PATH environment variable.
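For instance, a Linux launch might look like the following sketch (the directory and application name are hypothetical; ANGLE's libraries are libEGL.so and libGLESv2.so):

```shell
# Prepend a local ANGLE build to the dynamic loader's search path.
# The directory here is a made-up example.
export LD_LIBRARY_PATH="$HOME/angle/out:${LD_LIBRARY_PATH}"
# Then launch the application built with the "angle" feature, e.g.:
# ./my_wgpu_app
```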

MSRV policy

Due to complex dependents, we have two MSRV policies:

  • d3d12, naga, wgpu-core, wgpu-hal, and wgpu-types have an MSRV of 1.76, which may in the future be lower than that of the rest of the workspace.
  • The rest of the workspace currently also has an MSRV of 1.76, which may rise above that of the crates listed above.

It is enforced on CI (in "/.github/workflows/ci.yml") with the CORE_MSRV and REPO_MSRV variables. The MSRV can only be upgraded in a breaking release, though we publish a breaking release every three months.

The naga, wgpu-core, wgpu-hal, and wgpu-types crates should never require an MSRV ahead of Firefox's MSRV for nightly builds, as determined by the value of MINIMUM_RUST_VERSION in python/mozboot/mozboot/util.py.

Environment Variables

All testing and example infrastructure shares the same set of environment variables, which determine the backend/GPU it runs on.

  • WGPU_ADAPTER_NAME with a substring of the name of the adapter you want to use (e.g. 1080 will match NVIDIA GeForce 1080 Ti).
  • WGPU_BACKEND with a comma-separated list of the backends you want to use (vulkan, metal, dx12, or gl).
  • WGPU_POWER_PREF with the power preference to choose when a specific adapter name isn't specified (high, low, or none).
  • WGPU_DX12_COMPILER with the DX12 shader compiler you wish to use (dxc or fxc; note that dxc requires dxil.dll and dxcompiler.dll to be in the working directory, otherwise it will fall back to fxc).
  • WGPU_GLES_MINOR_VERSION with the minor OpenGL ES 3 version number to request (0, 1, 2, or automatic).
  • WGPU_ALLOW_UNDERLYING_NONCOMPLIANT_ADAPTER with a boolean indicating whether non-compliant drivers are enumerated (0 for false, 1 for true).
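As an illustration of how such values are interpreted, here is a small stdlib-only Rust sketch. The parsing logic is ours, not wgpu's actual implementation; it only mirrors the descriptions above, and the case-insensitive adapter match is an assumption:

```rust
use std::env;

// Split a WGPU_BACKEND-style value into individual backend names.
fn backend_list(raw: &str) -> Vec<String> {
    raw.split(',')
        .map(|s| s.trim().to_ascii_lowercase())
        .filter(|s| !s.is_empty())
        .collect()
}

// WGPU_ADAPTER_NAME is described as a substring match against the
// adapter's reported name (case handling here is an assumption).
fn adapter_matches(wanted: &str, adapter_name: &str) -> bool {
    adapter_name.to_lowercase().contains(&wanted.to_lowercase())
}

fn main() {
    // Fall back to a fixed value so the sketch runs without the variable set.
    let raw = env::var("WGPU_BACKEND").unwrap_or_else(|_| "vulkan, dx12".to_string());
    println!("{:?}", backend_list(&raw));
    println!("{}", adapter_matches("1080", "NVIDIA GeForce GTX 1080 Ti"));
}
```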

When running the CTS, use the variables DENO_WEBGPU_ADAPTER_NAME, DENO_WEBGPU_BACKEND, and DENO_WEBGPU_POWER_PREFERENCE instead.

Testing

We have multiple methods of testing, each of which tests different qualities about wgpu. We automatically run our tests on CI. The current state of CI testing:

| Platform/Backend | Tests | Notes                 |
| ---------------- | ----- | --------------------- |
| Windows/DX12     | ✅    | using WARP            |
| Windows/OpenGL   | ✅    | using llvmpipe        |
| MacOS/Metal      | ✅    | using hardware runner |
| Linux/Vulkan     | ✅    | using lavapipe        |
| Linux/OpenGL ES  | ✅    | using llvmpipe        |
| Chrome/WebGL     | ✅    | using swiftshader     |
| Chrome/WebGPU    | ❌    | not set up            |

Core Test Infrastructure

We use a tool called cargo nextest to run our tests. To install it, run cargo install cargo-nextest.

To run the test suite:

cargo xtask test

To run the test suite on WebGL (currently incomplete):

cd wgpu
wasm-pack test --headless --chrome --no-default-features --features webgl --workspace

This will automatically run the tests using a packaged browser. Remove --headless to run the tests with whatever browser you wish at http://localhost:8000.

If you are a user and want a way to help contribute to wgpu, we always need more help writing test cases.

WebGPU Conformance Test Suite

WebGPU includes a Conformance Test Suite to validate that implementations are working correctly. We can run this CTS against wgpu.

To run the CTS, first, you need to check it out:

git clone https://github.com/gpuweb/cts.git
cd cts
# works in bash and powershell
git checkout $(cat ../cts_runner/revision.txt)

To run a given set of tests:

# Must be inside the `cts` folder we just checked out, else this will fail
cargo run --manifest-path ../Cargo.toml -p cts_runner --bin cts_runner -- ./tools/run_deno --verbose "<test string>"

To find the full list of tests, go to the online cts viewer.

The list of currently enabled CTS tests can be found here.

Tracking the WebGPU and WGSL draft specifications

The wgpu crate is meant to be an idiomatic Rust translation of the WebGPU API. That specification, along with its shading language, WGSL, are both still in the "Working Draft" phase, and while the general outlines are stable, details change frequently. Until the specification is stabilized, the wgpu crate and the version of WGSL it implements will likely differ from what is specified, as the implementation catches up.

Exactly which WGSL features wgpu supports depends on how you are using it:

  • When running as native code, wgpu uses the Naga crate to translate WGSL code into the shading language of your platform's native GPU API. Naga has a milestone for catching up to the WGSL specification, but in general, there is no up-to-date summary of the differences between Naga and the WGSL spec.

  • When running in a web browser (by compilation to WebAssembly) without the "webgl" feature enabled, wgpu relies on the browser's own WebGPU implementation. WGSL shaders are simply passed through to the browser, so that determines which WGSL features you can use.

  • When running in a web browser with wgpu's "webgl" feature enabled, wgpu uses Naga to translate WGSL programs into GLSL. This uses the same version of Naga as if you were running wgpu as native code.

Coordinate Systems

wgpu uses the coordinate systems of D3D and Metal:

Render coordinates: +Y is up in normalized device coordinates. Texture coordinates: the origin is at the top-left corner, and +Y points down.
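A minimal sketch of what those conventions imply, assuming the usual D3D/Metal mapping (+Y up in NDC, texture origin at the top-left):

```rust
// Map a point from normalized device coordinates (x, y in [-1, 1], +Y up)
// to texture coordinates (u, v in [0, 1], origin at the top-left, +V down).
// Illustrative only; this is not an API provided by wgpu.
fn ndc_to_texcoord(x: f32, y: f32) -> (f32, f32) {
    ((x + 1.0) / 2.0, (1.0 - y) / 2.0)
}

fn main() {
    // The NDC top-left corner (-1, 1) lands on the texture origin (0, 0).
    println!("{:?}", ndc_to_texcoord(-1.0, 1.0));
}
```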
