stable-ts

A tool for optimizing Whisper speech transcription timestamps and extending its functionality

stable-ts is an open-source optimization tool for Whisper speech transcription. By improving the timestamp generation algorithm, it makes the timing of transcription results more precise. The tool extends Whisper's functionality with features such as voice isolation, denoising, and timestamp adjustment. stable-ts supports multiple output formats and provides both an API and a command-line interface, making speech transcription more stable and efficient.


Stabilizing Timestamps for Whisper

This library modifies Whisper to produce more reliable timestamps and extends its functionality.

https://github.com/jianfch/stable-ts/assets/28970749/7adf0540-3620-4b2b-b2d4-e316906d6dfa

Setup

<details> <summary>Prerequisites: FFmpeg & PyTorch</summary> <details> <summary>FFmpeg</summary>

Requires FFmpeg in PATH

# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg

# on Arch Linux
sudo pacman -S ffmpeg

# on MacOS using Homebrew (https://brew.sh/)
brew install ffmpeg

# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg

# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg
</details> <details> <summary>PyTorch</summary>

If PyTorch is not installed when installing Stable-ts, the default version will be installed, which may not have GPU support. To avoid this issue, install your preferred version first by following the instructions at https://pytorch.org/get-started/locally/.

</details> </details>
pip install -U stable-ts

To install the latest commit:

pip install -U git+https://github.com/jianfch/stable-ts.git
<details> <summary>Whisperless Version</summary>

To install Stable-ts without Whisper as a dependency:

pip install -U stable-ts-whisperless

To install the latest Whisperless commit:

pip install -U git+https://github.com/jianfch/stable-ts.git@whisperless
</details>

Usage

Transcribe

import stable_whisper
model = stable_whisper.load_model('base')
result = model.transcribe('audio.mp3')
result.to_srt_vtt('audio.srt')
<details> <summary>CLI</summary>
stable-ts audio.mp3 -o audio.srt
</details>
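
Beyond SRT/VTT, the result can be saved in other formats and reloaded later. A minimal sketch, assuming the `to_ass`, `save_as_json`, and `stable_whisper.WhisperResult` helpers of the stable-ts result API are available in your installed version:

import stable_whisper

model = stable_whisper.load_model('base')
result = model.transcribe('audio.mp3')

# save in other formats (result methods assumed from the stable-ts result API)
result.to_ass('audio.ass')
result.save_as_json('audio.json')

# reload a saved result later without re-transcribing
result = stable_whisper.WhisperResult('audio.json')
result.to_srt_vtt('audio.srt')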

Docstrings:

<details> <summary>load_model()</summary>
Load an instance of :class:`whisper.model.Whisper`.

Parameters
----------
name : {'tiny', 'tiny.en', 'base', 'base.en', 'small', 'small.en', 'medium', 'medium.en', 'large-v1',
    'large-v2', 'large-v3', or 'large'}
    One of the official model names listed by :func:`whisper.available_models`, or
    path to a model checkpoint containing the model dimensions and the model state_dict.
device : str or torch.device, optional
    PyTorch device to put the model into.
download_root : str, optional
    Path to download the model files; by default, it uses "~/.cache/whisper".
in_memory : bool, default False
    Whether to preload the model weights into host memory.
cpu_preload : bool, default True
    Load the model into CPU memory first, then move it to the specified device
    to reduce GPU memory usage when loading the model.
dq : bool, default False
    Whether to apply Dynamic Quantization to the model to reduce memory usage and increase inference speed,
    at the cost of a slight decrease in accuracy. Only for CPU.
engine : str, optional
    Engine for Dynamic Quantization.

Returns
-------
model : "Whisper"
    The Whisper ASR model instance.

Notes
-----
The overhead from ``dq = True`` might make inference slower for models smaller than 'large'.
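
Examples
--------
A minimal sketch using the parameters documented above; ``dq`` only applies when the model runs on CPU:

>>> import stable_whisper
>>> model = stable_whisper.load_model('base', device='cpu', dq=True)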
</details> <details> <summary>transcribe()</summary>
Transcribe audio using Whisper.

This is a modified version of :func:`whisper.transcribe.transcribe` with slightly different decoding logic while
allowing additional preprocessing and postprocessing. The preprocessing performed on the audio includes:
voice isolation / noise removal and low/high-pass filter. The postprocessing performed on the transcription
result includes: adjusting timestamps with VAD and custom regrouping of segments based on punctuation and speech gaps.

Parameters
----------
model : whisper.model.Whisper
    An instance of Whisper ASR model.
audio : str or numpy.ndarray or torch.Tensor or bytes or AudioLoader
    Path/URL to the audio file, the audio waveform, bytes of the audio file, or an
    instance of :class:`stable_whisper.audio.AudioLoader`.
    If audio is :class:`numpy.ndarray` or :class:`torch.Tensor`, the audio must already be sampled at 16kHz.
verbose : bool or None, default False
    Whether to display the text being decoded to the console.
    Displays all the details if ``True``. Displays progressbar if ``False``. Displays nothing if ``None``.
temperature : float or iterable of float, default (0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
    Temperature for sampling. It can be a tuple of temperatures, which will be successively used
    upon failures according to either ``compression_ratio_threshold`` or ``logprob_threshold``.
compression_ratio_threshold : float, default 2.4
    If the gzip compression ratio is above this value, treat as failed.
logprob_threshold : float, default -1
    If the average log probability over sampled tokens is below this value, treat as failed
no_speech_threshold : float, default 0.6
    If the no_speech probability is higher than this value AND the average log probability
    over sampled tokens is below ``logprob_threshold``, consider the segment as silent
condition_on_previous_text : bool, default True
    If ``True``, the previous output of the model is provided as a prompt for the next window;
    disabling may make the text inconsistent across windows, but the model becomes less prone to
    getting stuck in a failure loop, such as repetition looping or timestamps going out of sync.
initial_prompt : str, optional
    Text to provide as a prompt for the first window. This can be used to provide, or
    "prompt-engineer" a context for transcription, e.g. custom vocabularies or proper nouns
    to make it more likely to predict those words correctly.
word_timestamps : bool, default True
    Extract word-level timestamps using the cross-attention pattern and dynamic time warping,
    and include the timestamps for each word in each segment.
    Disabling this will prevent segments from splitting/merging properly.
regroup : bool or str, default True, meaning the default regroup algorithm
    String for customizing the regrouping algorithm. False disables regrouping.
    Ignored if ``word_timestamps = False``.
suppress_silence : bool, default True
    Whether to enable timestamp adjustments based on the detected silence.
suppress_word_ts : bool, default True
    Whether to adjust word timestamps based on the detected silence. Only enabled if ``suppress_silence = True``.
use_word_position : bool, default True
    Whether to use position of the word in its segment to determine whether to keep end or start timestamps if
    adjustments are required. If it is the first word, keep end. Else if it is the last word, keep the start.
q_levels : int, default 20
    Quantization levels for generating timestamp suppression mask; ignored if ``vad = True``.
    Acts as a threshold for marking sound as silent.
    Fewer levels will increase the volume threshold at which a sound is marked as silent.
k_size : int, default 5
    Kernel size for avg-pooling waveform to generate timestamp suppression mask; ignored if ``vad = True``.
    Recommend 5 or 3; higher sizes will reduce detection of silence.
denoiser : str, optional
    String of the denoiser to use for preprocessing ``audio``.
    See ``stable_whisper.audio.SUPPORTED_DENOISERS`` for supported denoisers.
denoiser_options : dict, optional
    Options to use for ``denoiser``.
vad : bool or dict, default False
    Whether to use Silero VAD to generate timestamp suppression mask.
    Instead of ``True``, using a dict of keyword arguments will load the VAD with the arguments.
    Silero VAD requires PyTorch 1.12.0+. Official repo, https://github.com/snakers4/silero-vad.
vad_threshold : float, default 0.35
    Threshold for detecting speech with Silero VAD. Low threshold reduces false positives for silence detection.
min_word_dur : float or None, default None meaning use ``stable_whisper.default.DEFAULT_VALUES``
    Shortest duration each word is allowed to reach for silence suppression.
min_silence_dur : float, optional
    Shortest duration of silence allowed for silence suppression.
nonspeech_error : float, default 0.1
    Relative error of non-speech sections that appear in between a word for silence suppression.
only_voice_freq : bool, default False
    Whether to only use sound between 200 - 5000 Hz, where the majority of human speech is.
prepend_punctuations : str or None, default None meaning use ``stable_whisper.default.DEFAULT_VALUES``
    Punctuations to prepend to next word.
append_punctuations : str or None, default None meaning use ``stable_whisper.default.DEFAULT_VALUES``
    Punctuations to append to previous word.
stream : bool or None, default None
    Whether to load ``audio`` in chunks of 30 seconds until the end of the file/stream.
    If ``None`` and ``audio`` is a string, it is set to ``True``, else ``False``.
mel_first : bool, optional
    Process the entire audio track into a log-Mel spectrogram first instead of in chunks.
    Use this if odd behavior is seen in stable-ts but not in whisper; it uses significantly more memory for long audio.
split_callback : Callable, optional
    Custom callback for grouping tokens up with their corresponding words.
    The callback must take two arguments, list of tokens and tokenizer.
    The callback returns a tuple with a list of words and a corresponding nested list of tokens.
suppress_ts_tokens : bool, default False
    Whether to suppress timestamp tokens during inference for timestamps that are detected as silent.
    Reduces hallucinations in some cases, but is also prone to ignoring disfluencies and repetitions.
    This option is ignored if ``suppress_silence = False``.
gap_padding : str, default ' ...'
    Padding prepended to each segment for word timing alignment.
    Used to reduce the probability of the model predicting timestamps earlier than the first utterance.
only_ffmpeg : bool, default False
    Whether to use only FFmpeg (instead of yt-dlp) for URLs.
max_instant_words : float, default 0.5
    If the percentage of instantaneous words in a segment exceeds this amount, the segment is removed.
avg_prob_threshold: float or None, default None
    Transcribe the gap after the previous word, and if the average word probability of a segment falls below this
    value, discard the segment. If ``None``, skip transcribing the gap to reduce the chance of timestamps starting
    before the next utterance.
progress_callback : Callable, optional
    A function that will be called when transcription progress is updated.
    The callback needs two parameters.
    The first parameter is a float for seconds of the audio that has been transcribed.
    The second parameter is a float for total duration of audio in seconds.
ignore_compatibility : bool, default False
    Whether to ignore warnings for compatibility issues with the detected Whisper version.
extra_models : list of whisper.model.Whisper, optional
    List of additional Whisper model instances to use for computing word-timestamps along with ``model``.
decode_options
    Keyword arguments to construct :class:`whisper.decode.DecodingOptions` instances.

Returns
-------
stable_whisper.result.WhisperResult
    All timestamps, words, probabilities, and other data from the transcription of ``audio``.

See Also
--------
stable_whisper.non_whisper.transcribe_any : Return :class:`stable_whisper.result.WhisperResult` containing all the
    data from transcribing audio with unmodified :func:`whisper.transcribe.transcribe` with preprocessing and
    postprocessing.
stable_whisper.whisper_word_level.faster_whisper.faster_transcribe : Return
    :class:`stable_whisper.result.WhisperResult` containing all the data from transcribing audio with
    :meth:`faster_whisper.WhisperModel.transcribe` with preprocessing and postprocessing.

Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe('audio.mp3', vad=True)
>>> result.to_srt_vtt('audio.srt')
Saved: audio.srt
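
A further sketch combining documented options; ``denoiser='demucs'`` assumes a denoiser listed in
``stable_whisper.audio.SUPPORTED_DENOISERS`` is installed, and ``word_level=False`` assumes the
word-level flag of ``to_srt_vtt``:

>>> result = model.transcribe('audio.mp3', denoiser='demucs', vad=True)
>>> result.to_srt_vtt('audio.srt', word_level=False)
Saved: audio.srt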
</details> <details> <summary>transcribe_minimal()</summary>
Transcribe audio using Whisper.

This uses the original whisper transcribe function, :func:`whisper.transcribe.transcribe`, while still allowing
additional preprocessing and postprocessing. The preprocessing performed on the audio includes: voice isolation /
noise removal and low/high-pass filter. The postprocessing performed on the transcription result includes:
adjusting timestamps with VAD and custom regrouping of segments based on punctuation and speech gaps.

Parameters
----------
model : whisper.model.Whisper
    An instance of Whisper ASR model.
audio : str or numpy.ndarray or torch.Tensor or bytes
    Path/URL to the audio file, the audio waveform, or bytes of the audio file.
    If audio is ``numpy.ndarray`` or ``torch.Tensor``, the audio must already be sampled at 16kHz.
verbose : bool or None, default False
    Whether to display the text being decoded to the console.
    Displays all the details if ``True``. Displays progressbar if ``False``. Displays nothing if ``None``.
word_timestamps : bool, default True
    Extract word-level timestamps using the cross-attention pattern and dynamic time warping,
    and include the timestamps for each word in each segment.
    Disabling this will prevent segments from splitting/merging properly.
regroup : bool or str, default True, meaning the default regroup algorithm
    String for customizing the regrouping algorithm. False disables regrouping.
    Ignored if ``word_timestamps = False``.
suppress_silence : bool, default True
    Whether to enable timestamp adjustments based on the detected silence.
suppress_word_ts : bool, default True
    Whether to adjust word timestamps based on the detected silence. Only enabled if ``suppress_silence = True``.
use_word_position : bool, default True
    Whether to use position of the word in its segment to determine whether to keep end or start timestamps if
    adjustments are required. If it is the first word, keep end. Else if it is the last word, keep the start.
q_levels : int, default 20
    Quantization levels for generating timestamp suppression mask; ignored if ``vad = True``.
    Acts as a threshold for marking sound as silent.
    Fewer levels will increase the volume threshold at which a sound is marked as silent.
k_size : int, default 5
    Kernel size for avg-pooling waveform to generate timestamp suppression mask; ignored if ``vad = True``.
    Recommend 5 or 3; higher sizes will reduce detection of silence.
denoiser : str, optional
    String of the denoiser to use for preprocessing ``audio``.
    See ``stable_whisper.audio.SUPPORTED_DENOISERS`` for supported denoisers.
denoiser_options : dict, optional
    Options to use for ``denoiser``.
vad : bool or dict, default False
    Whether to use Silero VAD to generate timestamp suppression mask.
    Instead of ``True``, using a dict of keyword arguments will load the VAD with the arguments.
    Silero VAD requires PyTorch 1.12.0+. Official repo, https://github.com/snakers4/silero-vad.
vad_threshold : float, default 0.35
    Threshold for detecting speech with Silero VAD. Low threshold reduces false positives for silence detection.
min_word_dur : float, default 0.1
    Shortest duration each word is allowed to reach for silence suppression.
min_silence_dur : float, optional
    Shortest duration of silence allowed for silence suppression.
nonspeech_error : float, default 0.1
    Relative error of non-speech sections that appear in between a word for silence suppression.
only_voice_freq : bool, default False
    Whether to only use sound between 200 - 5000 Hz, where the majority of human speech is.
only_ffmpeg : bool, default False
    Whether to use only FFmpeg (instead of yt-dlp) for URLs.
options
    Additional options used for :func:`whisper.transcribe.transcribe` and
    :func:`stable_whisper.non_whisper.transcribe_any`.
Returns
-------
stable_whisper.result.WhisperResult
    All timestamps, words, probabilities, and other data from the transcription of ``audio``.

Examples
--------
>>> import stable_whisper
>>> model = stable_whisper.load_model('base')
>>> result = model.transcribe_minimal('audio.mp3', vad=True)
>>> result.to_srt_vtt('audio.srt')
Saved: audio.srt
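
A further sketch; extra keyword arguments are forwarded through ``options`` to
:func:`whisper.transcribe.transcribe`, so original Whisper options such as ``initial_prompt`` are assumed to pass through:

>>> result = model.transcribe_minimal('audio.mp3', vad=True, initial_prompt='Stable-ts, Whisper')
>>> result.to_srt_vtt('audio.srt')
Saved: audio.srt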
</details> <br> <details> <summary>faster-whisper</summary>

Use with faster-whisper:

import stable_whisper

# load_faster_whisper() requires faster-whisper to be installed
model = stable_whisper.load_faster_whisper('base')
result = model.transcribe('audio.mp3')
result.to_srt_vtt('audio.srt')
</details>
