The [DeepL API][api-docs] is a language translation API that allows other computer programs to send texts and documents to DeepL's servers and receive high-quality translations. This opens a whole universe of opportunities for developers: any translation product you can imagine can now be built on top of DeepL's best-in-class translation technology.
The DeepL Python library offers a convenient way for applications written in Python to interact with the DeepL API. We intend to support all API functions with the library, though support for new features may be added to the library after they’re added to the API.
To use the DeepL Python Library, you'll need an API authentication key. To get a key, [please create an account here][create-account]. With a DeepL API Free account you can translate up to 500,000 characters/month for free.
The library can be installed from [PyPI][pypi-project] using pip:
```shell
pip install --upgrade deepl
```
If you need to modify this source code, install the dependencies using poetry:
```shell
poetry install
```
On Ubuntu 22.04 an error might occur: `ModuleNotFoundError: No module named 'cachecontrol'`. Use the workaround `sudo apt install python3-cachecontrol` as explained in this [bug report][bug-report-ubuntu-2204].
The library is tested with Python versions 3.6 to 3.11. The `requests` module is used to perform HTTP requests; the minimum supported version is 2.0.
Starting in 2024, we will drop support for older Python versions that have reached official end-of-life. You can find the Python versions and support timelines [here][python-version-list]. To continue using this library, you should update to Python 3.8+.
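If your code runs on mixed deployment targets, a quick interpreter check at startup can surface an unsupported Python early. This is a minimal sketch using only the standard library; the 3.8 floor matches the deprecation note above:

```python
import sys

MIN_PYTHON = (3, 8)  # floor announced in the deprecation note above

def check_python_version(version_info=sys.version_info):
    """Return True if the running interpreter meets the minimum version."""
    return tuple(version_info[:2]) >= MIN_PYTHON

if not check_python_version():
    raise RuntimeError(
        f"Python {MIN_PYTHON[0]}.{MIN_PYTHON[1]}+ is required, "
        f"found {sys.version_info[0]}.{sys.version_info[1]}"
    )
```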
Import the package and construct a `Translator`. The first argument is a string containing your API authentication key as found in your [DeepL Pro Account][pro-account].
Be careful not to expose your key, for example when sharing source code.
```python
import deepl

auth_key = "f63c02c5-f056-..."  # Replace with your key
translator = deepl.Translator(auth_key)

result = translator.translate_text("Hello, world!", target_lang="FR")
print(result.text)  # "Bonjour, le monde !"
```
This example is for demonstration purposes only. In production code, the authentication key should not be hard-coded, but instead fetched from a configuration file or environment variable.
`Translator` accepts additional options; see Configuration for more information.
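One way to avoid hard-coding the key is to read it from an environment variable, as suggested above. The sketch below assumes a variable named `DEEPL_AUTH_KEY`; the name is an arbitrary choice for this example, not something the library reads automatically:

```python
import os

def get_auth_key(var_name="DEEPL_AUTH_KEY"):
    """Fetch the DeepL auth key from the environment, failing loudly if absent."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set the {var_name} environment variable first")
    return key

# Construct the Translator as shown above, without a key in the source:
# translator = deepl.Translator(get_auth_key())
```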
To translate text, call `translate_text()`. The first argument is a string containing the text you want to translate, or a list of strings if you want to translate multiple texts.
`source_lang` and `target_lang` specify the source and target language codes respectively. `source_lang` is optional; if unspecified, the source language will be auto-detected.
Language codes are case-insensitive strings according to ISO 639-1, for example `'DE'`, `'FR'`, `'JA'`. Some target languages also include the regional variant according to ISO 3166-1, for example `'EN-US'`, or `'PT-BR'`. The full list of supported languages is in the [API documentation][api-docs-lang-list].
There are additional optional arguments to control translation; see Text translation options below.
`translate_text()` returns a `TextResult`, or a list of `TextResult`s corresponding to your input text(s). `TextResult` has two properties: `text` is the translated text, and `detected_source_lang` is the detected source language code.
```python
# Translate text into a target language, in this case, French:
result = translator.translate_text("Hello, world!", target_lang="FR")
print(result.text)  # "Bonjour, le monde !"

# Translate multiple texts into British English
result = translator.translate_text(
    ["お元気ですか?", "¿Cómo estás?"], target_lang="EN-GB"
)
print(result[0].text)  # "How are you?"
print(result[0].detected_source_lang)  # "JA" the language code for Japanese
print(result[1].text)  # "How are you?"
print(result[1].detected_source_lang)  # "ES" the language code for Spanish

# Translate into German with less and more Formality:
print(
    translator.translate_text(
        "How are you?", target_lang="DE", formality="less"
    )
)  # 'Wie geht es dir?'
print(
    translator.translate_text(
        "How are you?", target_lang="DE", formality="more"
    )
)  # 'Wie geht es Ihnen?'
```
In addition to the input text(s) argument, the available translate_text()
arguments are:
- `source_lang`: Specifies the source language code, but may be omitted to auto-detect the source language.
- `target_lang`: Required. Specifies the target language code.
- `split_sentences`: specify how input text should be split into sentences, default: `'on'`.
    - `'on'` (`SplitSentences.ON`): input text will be split into sentences using both newlines and punctuation.
    - `'off'` (`SplitSentences.OFF`): input text will not be split into sentences. Use this for applications where each input text contains only one sentence.
    - `'nonewlines'` (`SplitSentences.NO_NEWLINES`): input text will be split into sentences using punctuation but not newlines.
- `preserve_formatting`: controls automatic formatting correction. Set to `True` to prevent automatic correction of formatting, default: `False`.
- `formality`: controls whether translations should lean toward informal or formal language. This option is only available for some target languages, see Listing available languages.
    - `'less'` (`Formality.LESS`): use informal language.
    - `'more'` (`Formality.MORE`): use formal, more polite language.
- `glossary`: specifies a glossary to use with translation, either as a string containing the glossary ID, or a `GlossaryInfo` as returned by `get_glossary()`.
- `context`: specifies additional context to influence translations, that is not translated itself. Characters in the `context` parameter are not counted toward billing. See the [API documentation][api-docs-context-param] for more information and example usage.
- `tag_handling`: type of tags to parse before translation, options are `'html'` and `'xml'`.

The following options are only used if `tag_handling` is `'xml'`:

- `outline_detection`: specify `False` to disable automatic tag detection, default is `True`.
- `splitting_tags`: list of XML tags that should be used to split text into sentences. Tags may be specified as an array of strings (`['tag1', 'tag2']`), or a comma-separated list of strings (`'tag1,tag2'`). The default is an empty list.
- `non_splitting_tags`: list of XML tags that should not be used to split text into sentences. Format and default are the same as for `splitting_tags`.
- `ignore_tags`: list of XML tags containing content that should not be translated. Format and default are the same as for `splitting_tags`.

For a detailed explanation of the XML handling options, see the [API documentation][api-docs-xml-handling].
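Since `splitting_tags` and its siblings accept either a list of strings or a comma-separated string, a small helper can keep call sites uniform. The `normalize_tags()` function below is purely illustrative, not part of the library:

```python
def normalize_tags(tags):
    """Accept a list of tag names or a comma-separated string; return a list.

    Hypothetical helper mirroring the two formats accepted by
    splitting_tags / non_splitting_tags / ignore_tags; not part of deepl.
    """
    if isinstance(tags, str):
        return [tag.strip() for tag in tags.split(",") if tag.strip()]
    return list(tags)

# Both spellings describe the same tags, e.g.:
# translator.translate_text(..., tag_handling="xml",
#                           splitting_tags=normalize_tags("par,title"))
```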
To translate documents, you may call either translate_document() using file IO
objects, or translate_document_from_filepath() using file paths. For both
functions, the first and second arguments correspond to the input and output
files respectively.
Just as for the translate_text() function, the source_lang and
target_lang arguments specify the source and target language codes.
There are additional optional arguments to control translation; see Document translation options below.
```python
# Translate a formal document from English to German
input_path = "/path/to/Instruction Manual.docx"
output_path = "/path/to/Bedienungsanleitung.docx"
try:
    # Using translate_document_from_filepath() with file paths
    translator.translate_document_from_filepath(
        input_path,
        output_path,
        target_lang="DE",
        formality="more"
    )

    # Alternatively you can use translate_document() with file IO objects
    with open(input_path, "rb") as in_file, open(output_path, "wb") as out_file:
        translator.translate_document(
            in_file,
            out_file,
            target_lang="DE",
            formality="more"
        )

except deepl.DocumentTranslationException as error:
    # If an error occurs during document translation after the document was
    # already uploaded, a DocumentTranslationException is raised. The
    # document_handle property contains the document handle that may be used to
    # later retrieve the document from the server, or contact DeepL support.
    doc_id = error.document_handle.id
    doc_key = error.document_handle.key
    print(f"Error after uploading {error}, id: {doc_id} key: {doc_key}")
except deepl.DeepLException as error:
    # Errors during upload raise a DeepLException
    print(error)
```
translate_document() and translate_document_from_filepath() are convenience
functions that wrap multiple API calls: uploading, polling status until the
translation is complete, and downloading. If your application needs to execute
these steps individually, you can instead use the following functions directly:
- `translate_document_upload()`,
- `translate_document_get_status()` (or `translate_document_wait_until_done()`), and
- `translate_document_download()`

In addition to the input file, output file, `source_lang` and `target_lang` arguments, the available `translate_document()` and `translate_document_from_filepath()` arguments are:
- `formality`: same as in Text translation options.
- `glossary`: same as in Text translation options.
- `output_format`: (`translate_document()` only) file extension of the desired format of the translated file, for example: `'pdf'`. If unspecified, by default the translated file will be in the same format as the input file.

Glossaries allow you to customize your translations using user-defined terms. Multiple glossaries can be stored with your account, each with a user-specified name and a uniquely-assigned ID.
You can create a glossary with your desired terms and name using
create_glossary(). Each glossary applies to a single source-target language
pair. Note: Glossaries are only supported for some language pairs, see
Listing available glossary languages
for more information. The entries should be specified as a dictionary.
If successful, the glossary is created and stored with your DeepL account, and
a GlossaryInfo object is returned including the ID, name, languages and entry
count.
```python
# Create an English to German glossary with two terms:
entries = {"artist": "Maler", "prize": "Gewinn"}
my_glossary = translator.create_glossary(
    "My glossary",
    source_lang="EN",
    target_lang="DE",
    entries=entries,
)
print(
    f"Created '{my_glossary.name}' ({my_glossary.glossary_id}) "
    f"{my_glossary.source_lang}->{my_glossary.target_lang} "
    f"containing {my_glossary.entry_count} entries"
)
# Example: Created 'My glossary' (559192ed-8e23-...) EN->DE containing 2 entries
```
You can also upload a glossary downloaded from the DeepL website using `create_glossary_from_csv()`. Instead of supplying the entries as a dictionary, specify the CSV data as `csv_data`, either as a file-like object, or as a string or bytes containing the file content:
```python
# Open the CSV file assuming UTF-8 encoding. If your file contains a BOM,
# consider using encoding='utf-8-sig' instead.
with open('/path/to/glossary_file.csv', 'r', encoding='utf-8') as csv_file:
    csv_data = csv_file.read()  # Read the file contents as a string
    my_csv_glossary = translator.create_glossary_from_csv(
        "CSV glossary",
        source_lang="EN",
        target_lang="DE",
        csv_data=csv_data,
    )
```
The [API documentation][api-docs-csv-format] explains the expected CSV format in detail.
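If your terms start out as a Python dictionary, you can assemble `csv_data` yourself with the standard `csv` module. This sketch assumes the simple two-column `source,target` row layout; consult the API documentation linked above for the complete format, including optional language columns:

```python
import csv
import io

def entries_to_csv(entries):
    """Serialize {source: target} pairs into CSV text for create_glossary_from_csv().

    Assumes two-column source,target rows; see the API documentation
    for the full CSV specification.
    """
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    for source, target in entries.items():
        writer.writerow([source, target])
    return buffer.getvalue()

csv_data = entries_to_csv({"artist": "Maler", "prize": "Gewinn"})
# my_csv_glossary = translator.create_glossary_from_csv(
#     "CSV glossary", source_lang="EN", target_lang="DE", csv_data=csv_data,
# )
```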
Functions to get, list, and delete stored glossaries are also provided:
- `get_glossary()` takes a glossary ID and returns a `GlossaryInfo` object for a stored glossary, or raises an exception if no such glossary is found.
- `list_glossaries()` returns a list of `GlossaryInfo` objects corresponding to all of your stored glossaries.
- `delete_glossary()` takes a glossary ID or `GlossaryInfo` object and deletes the stored glossary from the server, or raises an exception if no such glossary is found.

```python
# Retrieve a stored glossary using the ID
glossary_id = "559192ed-8e23-..."
my_glossary = translator.get_glossary(glossary_id)

# Find and delete glossaries named 'Old glossary'
glossaries = translator.list_glossaries()
for glossary in glossaries:
    if glossary.name == "Old glossary":
        translator.delete_glossary(glossary)
```
The GlossaryInfo object does not contain the glossary entries, but instead
only the number of entries in the entry_count property.
To list the entries contained within a stored glossary, use
get_glossary_entries() providing either the GlossaryInfo object or glossary
ID:
```python
entries = translator.get_glossary_entries(my_glossary)
print(entries)  # "{'artist': 'Maler', 'prize': 'Gewinn'}"
```
You can use a stored glossary for text translation by setting the glossary
argument to either the glossary ID or GlossaryInfo object. You must also
specify the source_lang argument (it is required when using a glossary):
```python
text = "The artist was awarded a prize."
with_glossary = translator.translate_text(
    text, source_lang="EN", target_lang="DE", glossary=my_glossary,
)
print(with_glossary)  # "Der Maler wurde mit einem Gewinn ausgezeichnet."

# For comparison, the result without a glossary:
without_glossary = translator.translate_text(text, target_lang="DE")
print(without_glossary)  # "Der Künstler wurde mit einem Preis ausgezeichnet."
```
Using a stored glossary for document translation is the same: set the glossary
argument and specify the source_lang argument:
```python
translator.translate_document(
    in_file,
    out_file,
    source_lang="EN",
    target_lang="DE",
    glossary=my_glossary,
)
```
The translate_document(), translate_document_from_filepath() and
translate_document_upload() functions all support the glossary argument.
To check account usage, use the get_usage() function.
The returned Usage object contains three usage subtypes: character,
document and team_document. Depending on your account type, some usage
subtypes may be invalid; this can be checked using the valid property. For API
accounts:
- `usage.character.valid` is `True`,
- `usage.document.valid` and `usage.team_document.valid` are `False`.

Each usage subtype (if valid) has `count` and `limit` properties giving the amount used and maximum amount respectively, and the `limit_reached` property that checks if the usage has reached the limit. The top level `Usage` object has the `any_limit_reached` property to check all usage subtypes.
```python
usage = translator.get_usage()
if usage.any_limit_reached:
    print('Translation limit reached.')
if usage.character.valid:
    print(
        f"Character usage: {usage.character.count} of {usage.character.limit}")
if usage.document.valid:
    print(f"Document usage: {usage.document.count} of {usage.document.limit}")
```
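The limit checks reduce to simple arithmetic on `count` and `limit`. The sketch below models one subtype with a plain namedtuple to illustrate the logic; the real objects come from `get_usage()`, and this class is only a stand-in:

```python
from collections import namedtuple

# Illustrative stand-in for one usage subtype; the real objects
# are returned by translator.get_usage().
UsageDetail = namedtuple("UsageDetail", ["count", "limit"])

def limit_reached(detail):
    """Mirror the limit_reached check: usage has hit or passed the cap."""
    return detail.count >= detail.limit

character = UsageDetail(count=425_890, limit=500_000)
print(f"{character.count / character.limit:.0%} of the character quota used")
```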
You can request the list of languages supported by DeepL for text and documents
using the get_source_languages() and get_target_languages() functions. They
both return a list of Language objects.
The name property gives the name of the language in English, and the code
property gives the language code. The supports_formality property only appears
for target languages, and indicates whether the target language supports the
optional formality parameter.
```python
print("Source languages:")
for language in translator.get_source_languages():
    print(f"{language.name} ({language.code})")  # Example: "German (DE)"

print("Target languages:")
for language in translator.get_target_languages():
    if language.supports_formality:
        print(f"{language.name} ({language.code}) supports formality")
        # Example: "Italian (IT) supports formality"
    else:
        print(f"{language.name} ({language.code})")
```