
State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
Transformers.js is designed to be functionally equivalent to Hugging Face's transformers Python library, meaning you can run the same pretrained models using a very similar API. These models support common tasks in different modalities, such as natural language processing, computer vision, audio processing, and multimodal tasks.
Transformers.js uses ONNX Runtime to run models in the browser. Best of all, you can easily convert your pretrained PyTorch, TensorFlow, or JAX models to ONNX using 🤗 Optimum.
For more information, check out the full documentation.
It's super simple to translate from existing code! Just like the Python library, we support the pipeline API. Pipelines group together a pretrained model with preprocessing of inputs and postprocessing of outputs, making them the easiest way to run models with the library.
Python:

```python
from transformers import pipeline

# Allocate a pipeline for sentiment-analysis
pipe = pipeline('sentiment-analysis')
out = pipe('I love transformers!')
# [{'label': 'POSITIVE', 'score': 0.999806941}]
```

JavaScript:

```javascript
import { pipeline } from '@xenova/transformers';

// Allocate a pipeline for sentiment-analysis
let pipe = await pipeline('sentiment-analysis');
let out = await pipe('I love transformers!');
// [{'label': 'POSITIVE', 'score': 0.999817686}]
```
You can also use a different model by specifying the model id or path as the second argument to the pipeline function. For example:
```javascript
// Use a different model for sentiment-analysis
let pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment');
```
To install via NPM, run:
```bash
npm i @xenova/transformers
```
Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using ES Modules, you can import the library with:
```html
<script type="module">
  import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';
</script>
```
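As a minimal sketch of this approach (the page structure and sample text are illustrative, not part of the library), a complete page could look like:

```html
<!DOCTYPE html>
<html>
  <body>
    <p id="output">Loading model…</p>
    <script type="module">
      import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';

      // Download (and cache) the default sentiment-analysis model, then run it.
      const pipe = await pipeline('sentiment-analysis');
      const result = await pipe('Transformers.js runs entirely in the browser!');

      // Renders something like: [{"label":"POSITIVE","score":0.99...}]
      document.getElementById('output').textContent = JSON.stringify(result);
    </script>
  </body>
</html>
```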
Want to jump straight in? Get started with one of our sample applications/templates:
| Name | Description | Links |
|---|---|---|
| Whisper Web | Speech recognition w/ Whisper | code, demo |
| Doodle Dash | Real-time sketch-recognition game | blog, code, demo |
| Code Playground | In-browser code completion website | code, demo |
| Semantic Image Search (client-side) | Search for images with text | code, demo |
| Semantic Image Search (server-side) | Search for images with text (Supabase) | code, demo |
| Vanilla JavaScript | In-browser object detection | video, code, demo |
| React | Multilingual translation website | code, demo |
| Text to speech (client-side) | In-browser speech synthesis | code, demo |
| Browser extension | Text classification extension | code |
| Electron | Text classification application | code |
| Next.js (client-side) | Sentiment analysis (in-browser inference) | code, demo |
| Next.js (server-side) | Sentiment analysis (Node.js inference) | code, demo |
| Node.js | Sentiment analysis API | code |
| Demo site | A collection of demos | code, demo |
Check out the Transformers.js template on Hugging Face to get started in one click!
By default, Transformers.js uses hosted pretrained models and precompiled WASM binaries, which should work out-of-the-box. You can customize this as follows:
```javascript
import { env } from '@xenova/transformers';

// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';

// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;

// Set location of .wasm files. Defaults to use a CDN.
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';
```
For a full list of available settings, check out the API Reference.
We recommend using our conversion script to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses 🤗 Optimum to perform conversion and quantization of your model.
```bash
python -m scripts.convert --quantize --model_id <model_name_or_path>
```
For example, convert and quantize bert-base-uncased using:
```bash
python -m scripts.convert --quantize --model_id bert-base-uncased
```
This will save the following files to ./models/:
```
bert-base-uncased/
├── config.json
├── tokenizer.json
├── tokenizer_config.json
└── onnx/
    ├── model.onnx
    └── model_quantized.onnx
```
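Once converted, you can point Transformers.js at the output directory and load the model by its id. The snippet below is a sketch under that assumption; by default the library loads the quantized weights (onnx/model_quantized.onnx):

```javascript
import { env, pipeline } from '@xenova/transformers';

// Assumption: the converted 'bert-base-uncased' folder from above now lives here.
env.localModelPath = '/path/to/models/';
env.allowRemoteModels = false;

// Loads onnx/model_quantized.onnx by default; pass { quantized: false }
// as a third argument to use the full-precision onnx/model.onnx instead.
let unmasker = await pipeline('fill-mask', 'bert-base-uncased');
let output = await unmasker('The goal of life is [MASK].');
```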
For the full list of supported architectures, see the Optimum documentation.
Here is the list of all tasks and architectures currently supported by Transformers.js. If you don't see your task/model listed here or it is not yet supported, feel free to open up a feature request here.
To find compatible models on the Hub, select the "transformers.js" library tag in the filter menu (or visit this link). You can refine your search by selecting the task you're interested in (e.g., text-classification).
| Task | ID | Description | Supported? |
|---|---|---|---|
| Fill-Mask | fill-mask | Masking some of the words in a sentence and predicting which words should replace those masks. | ✅ (docs)<br>(models) |
| Question Answering | question-answering | Retrieving the answer to a question from a given text. | ✅ (docs)<br>(models) |
| Sentence Similarity | sentence-similarity | Determining how similar two texts are. | ✅ (docs)<br>(models) |
| Summarization | summarization | Producing a shorter version of a document while preserving its important information. | ✅ (docs)<br>(models) |
| Table Question Answering | table-question-answering | Answering a question about information from a given table. | ❌ |
| Text Classification | text-classification or sentiment-analysis | Assigning a label or class to a given text. | ✅ (docs)<br>(models) |
| Text Generation | text-generation | Producing new text by predicting the next word in a sequence. | ✅ (docs)<br>(models) |
| Text-to-text Generation | text2text-generation | Converting one text sequence into another text sequence. | ✅ (docs)<br>(models) |
| Token Classification | token-classification or ner | Assigning a label to each token in a text. | ✅ (docs)<br>(models) |
| Translation | translation | Converting text from one language to another. | ✅ (docs)<br>(models) |
| Zero-Shot Classification | zero-shot-classification | Classifying text into classes that are unseen during training. | ✅ (docs)<br>(models) |
| Feature Extraction | feature-extraction | Transforming raw data into numerical features that can be processed while preserving the information in the original dataset. | ✅ (docs)<br>(models) |
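To make the task IDs above concrete, here is a sketch of zero-shot classification; the library falls back to a default model when none is specified, and the exact scores will vary:

```javascript
import { pipeline } from '@xenova/transformers';

// Classify text against labels the model never saw during training.
let classifier = await pipeline('zero-shot-classification');
let output = await classifier(
  'I just bought a new laptop.',
  ['technology', 'sports', 'cooking'],
);
// output.labels is sorted by descending score, e.g. starting with 'technology'
```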
The library also covers computer-vision tasks:

| Task | ID | Description | Supported? |
|---|---|---|---|
| Depth Estimation | depth-estimation | Predicting the depth of objects present in an image. | ✅ (docs)<br>(models) |
| Image Classification | image-classification | Assigning a label or class to an entire image. | ✅ |
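As a sketch of a vision task, image pipelines accept image URLs among other input types; the URL below is a placeholder:

```javascript
import { pipeline } from '@xenova/transformers';

// Assign a label to an entire image using the default image-classification model.
let classifier = await pipeline('image-classification');
let output = await classifier('https://example.com/cat.jpg');
// e.g. [{ label: 'tabby, tabby cat', score: 0.9... }]
```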

