How to use InflexionLab/VibeVoice-ASR-Kazakh with the VibeVoice library:
```python
import torch
import soundfile as sf
import librosa
import numpy as np
from vibevoice.processor.vibevoice_processor import VibeVoiceProcessor
from vibevoice.modular.modeling_vibevoice_inference import VibeVoiceForConditionalGenerationInference

# Load voice sample (should be 24kHz mono)
voice, sr = sf.read("path/to/voice_sample.wav")
if voice.ndim > 1:
    voice = voice.mean(axis=1)
if sr != 24000:
    # librosa >= 0.10 requires keyword arguments here
    voice = librosa.resample(voice, orig_sr=sr, target_sr=24000)

processor = VibeVoiceProcessor.from_pretrained("InflexionLab/VibeVoice-ASR-Kazakh")
model = VibeVoiceForConditionalGenerationInference.from_pretrained(
    "InflexionLab/VibeVoice-ASR-Kazakh", torch_dtype=torch.bfloat16
).to("cuda").eval()
model.set_ddpm_inference_steps(5)

inputs = processor(
    text=["Speaker 0: Hello!\nSpeaker 1: Hi there!"],
    voice_samples=[[voice]],
    return_tensors="pt",
)
audio = model.generate(**inputs, cfg_scale=1.3, tokenizer=processor.tokenizer).speech_outputs[0]
sf.write("output.wav", audio.cpu().numpy().squeeze(), 24000)
```
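If librosa is unavailable, the 24 kHz mono requirement can be met with plain numpy. This is a rough sketch using linear interpolation (no anti-aliasing filter, so it is lower quality than librosa's resampler); the function name `to_24k_mono` is illustrative, not part of VibeVoice.

```python
import numpy as np

def to_24k_mono(audio: np.ndarray, sr: int, target_sr: int = 24000) -> np.ndarray:
    """Collapse to mono and resample via linear interpolation.

    A rough stand-in for librosa.resample; adequate for voice prompts,
    not for high-fidelity audio work (no anti-aliasing filter).
    """
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # average channels -> mono
    if sr != target_sr:
        n_out = int(round(len(audio) * target_sr / sr))
        old_t = np.linspace(0.0, 1.0, num=len(audio), endpoint=False)
        new_t = np.linspace(0.0, 1.0, num=n_out, endpoint=False)
        audio = np.interp(new_t, old_t, audio)
    return audio.astype(np.float32)

# 1 second of 16 kHz stereo noise -> 1 second of 24 kHz mono
stereo = np.random.randn(16000, 2)
mono = to_24k_mono(stereo, 16000)
print(mono.shape)  # (24000,)
```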
VibeVoice ASR — Kazakh
Model Description
This is VibeVoice ASR fine-tuned on the Kazakh language using the ISSAI KSC2 Structured dataset (~1,200 hours of diverse Kazakh speech). Fine-tuning was performed with LoRA (Low-Rank Adaptation), and the adapter weights were merged into the base model for efficient inference. The model achieves roughly 22% WER on the ISSAI KSC2 test set.
The base VibeVoice ASR model had no prior Kazakh knowledge. This fine-tuned version produces punctuated and capitalized Kazakh transcriptions.
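Because the model emits punctuated, capitalized text, comparing its output against unpunctuated references (e.g. for WER scoring) usually requires normalizing both sides first. A minimal sketch with a hypothetical helper (`normalize_kk` is not part of this repository):

```python
import re
import unicodedata

def normalize_kk(text: str) -> str:
    """Lowercase and strip punctuation before scoring (hypothetical helper).

    Kazakh Cyrillic letters such as Ә, Қ, Ң lowercase correctly via str.lower().
    """
    text = text.lower()
    # drop every Unicode punctuation character (category "P*")
    text = "".join(ch for ch in text if not unicodedata.category(ch).startswith("P"))
    # collapse runs of whitespace left behind by the removal
    return re.sub(r"\s+", " ", text).strip()

print(normalize_kk("Сәлем, Әлем!"))  # -> "сәлем әлем"
```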
Training Dataset
InflexionLab/ISSAI-KSC2-Structured — an enhanced version of the ISSAI KSC2 corpus with punctuation and capitalization restored using Gemma 27B. Covers 6 domains: TV News, Crowdsourced, Parliament, Talkshow, Podcasts, and Radio.
Evaluation Results
Evaluated on the ISSAI KSC2 test split (9,351 samples) and on 30K samples from farabi-lab/kazakh-stt. The farabi-lab dataset was not included in training, so it serves as an out-of-domain evaluation.
| Dataset | WER | CER |
|---|---|---|
| ISSAI_KSC2 | ~22% | ~9.6% |
| farabi-lab/kazakh-stt | 17.6% | 4.25% |
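The WER and CER figures above are standard edit-distance metrics: total word- (or character-) level substitutions, insertions, and deletions divided by the reference length. A minimal self-contained sketch (not the exact scoring script used for this evaluation):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (iterative DP)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution (0 if equal)
        prev = cur
    return prev[-1]

def wer(ref: str, hyp: str) -> float:
    """Word error rate: word-level edits / number of reference words."""
    r, h = ref.split(), hyp.split()
    return edit_distance(r, h) / len(r)

def cer(ref: str, hyp: str) -> float:
    """Character error rate over the raw character sequences."""
    return edit_distance(list(ref), list(hyp)) / len(ref)

print(wer("бұл тест сөйлемі", "бұл тес сөйлемі"))  # 1 substitution over 3 words
```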
Model tree for InflexionLab/VibeVoice-ASR-Kazakh
Base model: microsoft/VibeVoice-ASR
Dataset used to train InflexionLab/VibeVoice-ASR-Kazakh: InflexionLab/ISSAI-KSC2-Structured