A cross-platform Rust graphics API based on the WebGPU standard
The wgpu project is a Rust implementation of the WebGPU standard that provides a cross-platform graphics programming interface. It supports multiple graphics APIs, including Vulkan, Metal, D3D12, and OpenGL, and can also run in WebAssembly environments. wgpu accepts WGSL, SPIR-V, and GLSL shaders and can convert between them automatically. The project comprises several core libraries and tools, and is suited to use cases such as game engines, 3D rendering, and scientific-computing visualization.
wgpu is a cross-platform, safe, pure-Rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL; and on top of WebGL2 and WebGPU on wasm.
The API is based on the WebGPU standard. It serves as the core of the WebGPU integration in Firefox and Deno.
The repository hosts the libraries that make up wgpu, as well as the following binaries:
- naga - standalone shader translation binary
- cts_runner - WebGPU Conformance Test Suite runner using deno_webgpu
- player - standalone application for replaying the API traces

For an overview of all the components in the gfx-rs ecosystem, see the big picture.
Rust examples can be found at wgpu/examples. You can run the examples on native with cargo run --bin wgpu-examples <example>. See the list of examples.
To run the examples in a browser, run cargo xtask run-wasm, then open http://localhost:8000 in your browser and choose an example to run.
Naturally, in order to display any of the WebGPU-based examples, you need to make sure your browser supports WebGPU.
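For instance, to run one of the shipped examples natively and then try the browser build (the example name below is illustrative; pick any from the examples list):

```sh
# Run a single example natively (replace hello_triangle with any example name):
cargo run --bin wgpu-examples hello_triangle

# Build the examples for the web and serve them at http://localhost:8000:
cargo xtask run-wasm
```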
If you are looking for a wgpu tutorial, look at the following:
To use wgpu in C/C++, you need wgpu-native.
If you are looking for a wgpu C++ tutorial, look at the following:
If you want to use wgpu in other languages, there are many bindings to wgpu-native from languages such as Python, D, Julia, Kotlin, and more. See the list.
We have the Matrix space with a few different rooms that form the wgpu community:
We have a wiki that serves as a knowledge base.
API | Windows | Linux/Android | macOS/iOS | Web (wasm) |
---|---|---|---|---|
Vulkan | ✅ | ✅ | 🌋 | |
Metal | | | ✅ | |
DX12 | ✅ | | | |
OpenGL | 🆗 (GL 3.3+) | 🆗 (GL ES 3.0+) | 📐 | 🆗 (WebGL2) |
WebGPU | | | | ✅ |
✅ = First Class Support
🆗 = Downlevel/Best Effort Support
📐 = Requires the ANGLE translation layer (GL ES 3.0 only)
🌋 = Requires the MoltenVK translation layer
🛠️ = Unsupported, though open to contributions
wgpu supports shaders in WGSL, SPIR-V, and GLSL. Both HLSL and GLSL have compilers to target SPIR-V. All of these shader languages can be used with any backend as we handle all of the conversions. Additionally, support for these shader inputs is not going away.
While WebGPU does not support any shading language other than WGSL, we will automatically convert your non-WGSL shaders if you're running on WebGPU.
WGSL is always supported by default, but GLSL and SPIR-V need features enabled to compile in support.
Note that the WGSL specification is still under development,
so the draft specification does not exactly describe what wgpu
supports.
See below for details.
To enable SPIR-V shaders, enable the spirv feature of wgpu.
To enable GLSL shaders, enable the glsl feature of wgpu.
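As a sketch, one way to enable these optional shader frontends is through Cargo when adding the dependency; the feature names come from the text above, and the rest is standard Cargo usage:

```sh
# Pull in wgpu with the optional SPIR-V and GLSL shader frontends enabled:
cargo add wgpu --features spirv,glsl
```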
ANGLE is a translation layer from GLES to other backends, developed by Google. We support running our GLES3 backend over it in order to reach platforms that only offer DX11 support, which aren't accessible otherwise. In order to run with ANGLE, the "angle" feature has to be enabled and the ANGLE libraries placed in a location visible to the application. These binaries can be downloaded from gfbuild-angle artifacts; manual compilation may be required on Macs with Apple Silicon.
On Windows, you generally need to copy them into the working directory, in the same directory as the executable, or somewhere in your path.
On Linux, you can point to them using the LD_LIBRARY_PATH environment variable.
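A minimal sketch of an ANGLE run on Linux, assuming the downloaded ANGLE libraries (libEGL.so, libGLESv2.so) were unpacked into a local ./angle directory and the "angle" feature is enabled in your build:

```sh
# Point the dynamic linker at the ANGLE libraries for this run (paths are illustrative),
# and request the GL backend so wgpu actually goes through GLES/ANGLE:
LD_LIBRARY_PATH=./angle WGPU_BACKEND=gl cargo run --bin wgpu-examples <example>
```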
Due to complex dependants, we have two MSRV policies:
The MSRV of d3d12, naga, wgpu-core, wgpu-hal, and wgpu-types is 1.76, but may be lower than the rest of the workspace in the future. It is enforced on CI (in "/.github/workflows/ci.yml") with the CORE_MSRV and REPO_MSRV variables. This version can only be upgraded in breaking releases, though we release a breaking version every three months.
The naga, wgpu-core, wgpu-hal, and wgpu-types crates should never require an MSRV ahead of Firefox's MSRV for nightly builds, as determined by the value of MINIMUM_RUST_VERSION in python/mozboot/mozboot/util.py.
All testing and example infrastructure share the same set of environment variables that determine which Backend/GPU it will run on.
- WGPU_ADAPTER_NAME with a substring of the name of the adapter you want to use (ex. 1080 will match NVIDIA GeForce 1080ti).
- WGPU_BACKEND with a comma-separated list of the backends you want to use (vulkan, metal, dx12, or gl).
- WGPU_POWER_PREF with the power preference to choose when a specific adapter name isn't specified (high, low, or none).
- WGPU_DX12_COMPILER with the DX12 shader compiler you wish to use (dxc or fxc; note that dxc requires dxil.dll and dxcompiler.dll to be in the working directory, otherwise it will fall back to fxc).
- WGPU_GLES_MINOR_VERSION with the minor OpenGL ES 3 version number to request (0, 1, 2, or automatic).
- WGPU_ALLOW_UNDERLYING_NONCOMPLIANT_ADAPTER with a boolean whether non-compliant drivers are enumerated (0 for false, 1 for true).

When running the CTS, use the variables DENO_WEBGPU_ADAPTER_NAME, DENO_WEBGPU_BACKEND, DENO_WEBGPU_POWER_PREFERENCE.
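For example, a test run pinned to one backend and adapter class might look like this (values are illustrative):

```sh
# Run the test suite on the Vulkan backend, preferring the high-power adapter:
WGPU_BACKEND=vulkan WGPU_POWER_PREF=high cargo xtask test
```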
We have multiple methods of testing, each of which tests different qualities about wgpu. We automatically run our tests on CI. The current state of CI testing:
Platform/Backend | Tests | Notes |
---|---|---|
Windows/DX12 | ✅ | using WARP |
Windows/OpenGL | ✅ | using llvmpipe |
macOS/Metal | ✅ | using hardware runner |
Linux/Vulkan | ✅ | using lavapipe |
Linux/OpenGL ES | ✅ | using llvmpipe |
Chrome/WebGL | ✅ | using swiftshader |
Chrome/WebGPU | ❌ | not set up |
We use a tool called cargo nextest to run our tests. To install it, run cargo install cargo-nextest.
To run the test suite:
cargo xtask test
To run the test suite on WebGL (currently incomplete):
cd wgpu
wasm-pack test --headless --chrome --no-default-features --features webgl --workspace
This will automatically run the tests using a packaged browser. Remove --headless to run the tests with whatever browser you wish at http://localhost:8000.
If you are a user and want a way to help contribute to wgpu, we always need more help writing test cases.
WebGPU includes a Conformance Test Suite to validate that implementations are working correctly. We can run this CTS against wgpu.
To run the CTS, first, you need to check it out:
git clone https://github.com/gpuweb/cts.git
cd cts
# works in bash and powershell
git checkout $(cat ../cts_runner/revision.txt)
To run a given set of tests:
# Must be inside the `cts` folder we just checked out, else this will fail
cargo run --manifest-path ../Cargo.toml -p cts_runner --bin cts_runner -- ./tools/run_deno --verbose "<test string>"
To find the full list of tests, go to the online cts viewer.
The list of currently enabled CTS tests can be found here.
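As an illustration of what a test string looks like, it is a CTS query path; the query below shows only the shape such a path takes, so browse the CTS viewer for the exact test you want:

```sh
# Run a subtree of the CTS (the query string is illustrative):
cargo run --manifest-path ../Cargo.toml -p cts_runner --bin cts_runner -- ./tools/run_deno --verbose "webgpu:api,operation,command_buffer,basic:*"
```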
The wgpu crate is meant to be an idiomatic Rust translation of the WebGPU API. That specification, along with its shading language, WGSL, are both still in the "Working Draft" phase, and while the general outlines are stable, details change frequently. Until the specification is stabilized, the wgpu crate and the version of WGSL it implements will likely differ from what is specified, as the implementation catches up.
Exactly which WGSL features wgpu supports depends on how you are using it:
- When running as native code, wgpu uses the Naga crate to translate WGSL code into the shading language of your platform's native GPU API. Naga has a milestone for catching up to the WGSL specification, but in general, there is no up-to-date summary of the differences between Naga and the WGSL spec.
- When running in a web browser (by compilation to WebAssembly) without the "webgl" feature enabled, wgpu relies on the browser's own WebGPU implementation. WGSL shaders are simply passed through to the browser, so that determines which WGSL features you can use.
- When running in a web browser with wgpu's "webgl" feature enabled, wgpu uses Naga to translate WGSL programs into GLSL. This uses the same version of Naga as if you were running wgpu as native code.
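Because native targets go through Naga, a quick way to check whether a particular WGSL shader is accepted by the Naga version in this workspace is to feed it to the naga binary listed above; a sketch, assuming a local shader.wgsl (the file name and package invocation are illustrative):

```sh
# Validate a WGSL shader without emitting anything:
cargo run -p naga-cli -- shader.wgsl
# Translate it to SPIR-V (the output format is inferred from the file extension):
cargo run -p naga-cli -- shader.wgsl shader.spv
```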
wgpu uses the coordinate systems of D3D and Metal:
(Diagrams of the render and texture coordinate systems; images not reproduced here.)