Urdu-LjSpeech Dataset

Dataset Description

Urdu-LjSpeech is a high-quality Urdu speech dataset designed for Text-to-Speech (TTS) and Automatic Speech Recognition (ASR) tasks. The dataset contains Urdu audio recordings paired with their corresponding text transcriptions.

Dataset Summary

  • Language: Urdu (اردو)
  • Format: Audio files with text transcriptions
  • Audio Specifications:
    • Sampling Rate: 22,050 Hz
    • Format: PCM 16-bit
    • Channels: Mono
  • Use Cases: Text-to-Speech synthesis, Speech Recognition, Voice Cloning, Prosody Analysis

Supported Tasks

  • Text-to-Speech (TTS): Train models to synthesize natural-sounding Urdu speech
  • Automatic Speech Recognition (ASR): Develop speech-to-text systems for Urdu
  • Voice Conversion: Train voice cloning and conversion models
  • Linguistic Research: Study Urdu phonetics and prosody

Dataset Structure

Data Instances

Each instance in the dataset contains:

{
    'audio': {
        'array': array([...]), # Audio waveform
        'sampling_rate': 22050
    },
    'text': 'تم ہاتھ میں پتھر اٹھاتے ہو', # Urdu transcription
    'speaker': 'alloy', # Speaker identifier
    'id': 0 # Unique sample ID
}
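
As a quick illustration, a single instance can be pulled from the hub and written out as a playable WAV file. This is a minimal sketch that assumes the soundfile package is installed; the train split name matches the loading examples in the Usage section below.

from datasets import load_dataset
import soundfile as sf

# Load one sample and write it to disk so it can be played back in any audio player.
dataset = load_dataset("humairawan/Urdu-LjSpeech", split="train")
sample = dataset[0]

print(sample["text"])                    # Urdu transcription
print(sample["speaker"])                 # Speaker identifier
sf.write(
    f"sample_{sample['id']}.wav",        # e.g. sample_0.wav
    sample["audio"]["array"],            # waveform as a NumPy array
    sample["audio"]["sampling_rate"],    # 22050
)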

Data Fields

Field   | Type   | Description
--------|--------|--------------------------------------------
audio   | Audio  | Audio recording at 22,050 Hz sampling rate
text    | string | Urdu text transcription in UTF-8
speaker | string | Speaker identifier
id      | int    | Unique identifier for the sample

Data Splits

The dataset is organized in batches for efficient loading:

dataset/
├── batch_0/
├── batch_1/
├── batch_2/
└── ...

Each batch contains approximately 1.5-2 GB of audio data.
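
Because each batch is a multi-gigabyte download, streaming the dataset can be convenient; the sketch below uses the standard datasets streaming mode and the train split referenced in the Usage section.

from datasets import load_dataset

# Stream the dataset so batches are fetched lazily instead of
# downloading everything up front.
streamed = load_dataset("humairawan/Urdu-LjSpeech", split="train", streaming=True)

for i, sample in enumerate(streamed):
    print(sample["id"], sample["text"])
    if i == 4:                           # stop after the first five samples
        break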

Dataset Creation

Source Data

This dataset is a processed and validated collection of Urdu speech recordings; each file passed the quality-control checks described below.

Data Collection

  • Audio recordings were collected and validated for quality
  • Each audio file was paired with its corresponding Urdu text transcription
  • Quality validation includes:
    • Minimum audio duration check (>0.1 seconds)
    • PCM format validation
    • Corrupted audio removal
    • Text-audio alignment verification

Data Processing

The dataset underwent several processing steps:

  1. Audio Validation: Each audio sample was validated for the following (a sketch of these checks appears after this list):
    • Sufficient duration (minimum 0.1 seconds)
    • Valid PCM format (even byte length for 16-bit samples)
    • No corruption or empty data
  2. Batch Organization: Files were organized into batches of approximately 1.5-2 GB each for efficient streaming and downloading
  3. Format Standardization: All audio normalized to:
    • 22,050 Hz sampling rate
    • 16-bit PCM format
    • Mono channel
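
The validation rules in step 1 can be approximated in a few lines of Python. This is an illustrative sketch rather than the original processing script, using the 22,050 Hz, 16-bit PCM parameters listed above.

SAMPLING_RATE = 22050
BYTES_PER_SAMPLE = 2                     # 16-bit PCM
MIN_DURATION_SECONDS = 0.1

def is_valid_pcm(raw_bytes: bytes) -> bool:
    """Return True if a raw PCM buffer passes the basic quality checks."""
    if not raw_bytes:                                # corrupted or empty data
        return False
    if len(raw_bytes) % BYTES_PER_SAMPLE != 0:       # 16-bit samples need an even byte length
        return False
    duration = len(raw_bytes) / (BYTES_PER_SAMPLE * SAMPLING_RATE)
    return duration > MIN_DURATION_SECONDS           # minimum duration check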

Annotations

Text transcriptions are in standard Urdu script (UTF-8 encoded) with proper diacritical marks where applicable.

Usage

Loading the Dataset

from datasets import load_dataset
# Load the full dataset
dataset = load_dataset("humairawan/Urdu-LjSpeech")
# Load a specific batch
dataset = load_dataset("humairawan/Urdu-LjSpeech", data_dir="batch_0")
# Access samples
sample = dataset['train'][0]
print(sample['text']) # Print Urdu text
audio_array = sample['audio']['array'] # Access audio waveform
sampling_rate = sample['audio']['sampling_rate'] # Get sampling rate
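
For a quick sanity check, clip durations can be computed straight from the decoded waveforms. The sketch below continues from the dataset object loaded above and only touches the first 100 clips, since decoding the full corpus takes a while.

# Compute clip durations for a small sample of the train split.
subset = dataset["train"].select(range(100))         # first 100 clips
durations = [
    len(s["audio"]["array"]) / s["audio"]["sampling_rate"]
    for s in subset
]
print(f"Mean duration: {sum(durations) / len(durations):.2f} s")
print(f"Longest clip:  {max(durations):.2f} s")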

Training a TTS Model

from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5Processor
# Load dataset
dataset = load_dataset("humairawan/Urdu-LjSpeech")
# Initialize model and processor
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
# Your training code here...
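
A hedged preprocessing sketch for SpeechT5-style fine-tuning, continuing from the processor and dataset objects above. It follows the usual SpeechT5 recipe of resampling to 16 kHz and letting the processor build input_ids and spectrogram labels; note that the stock SpeechT5 tokenizer covers English characters, so Urdu text will generally need transliteration or a custom tokenizer, and per-example speaker embeddings (omitted here) are also required for training.

from datasets import Audio

# Resample to the 16 kHz rate SpeechT5 operates at.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))

def prepare_tts_example(example):
    audio = example["audio"]
    out = processor(
        text=example["text"],                        # Urdu transcription (see tokenizer caveat above)
        audio_target=audio["array"],                 # target waveform for the decoder
        sampling_rate=audio["sampling_rate"],
        return_attention_mask=False,
    )
    out["labels"] = out["labels"][0]                 # strip the batch dimension
    return out

processed = dataset["train"].map(
    prepare_tts_example, remove_columns=dataset["train"].column_names
)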

Training an ASR Model

from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2FeatureExtractor
# Load dataset
dataset = load_dataset("humairawan/Urdu-LjSpeech")
# Initialize model and feature extractor
# (the pretrained-only base checkpoint ships without a tokenizer; a CTC tokenizer
# is built from the Urdu transcriptions in the sketch below)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base")
# Your training code here...
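
Urdu CTC fine-tuning usually needs two more steps before training: resampling to the 16 kHz rate wav2vec2 expects, and building a character-level tokenizer from the transcriptions, since the pretrained base checkpoint has no Urdu vocabulary. The sketch below follows the standard Hugging Face wav2vec2 recipe; the vocab.json file name and the special tokens are illustrative choices, and the resulting tokenizer is combined with the feature extractor above into a Wav2Vec2Processor.

import json
from datasets import Audio
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2Processor

# Resample to the 16 kHz rate wav2vec2 expects.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))

# Build a character-level vocabulary from the Urdu transcriptions.
chars = set()
for text in dataset["train"]["text"]:
    chars.update(text)

vocab_dict = {char: idx for idx, char in enumerate(sorted(chars))}
if " " in vocab_dict:
    vocab_dict["|"] = vocab_dict.pop(" ")            # CTC convention: "|" as word delimiter
vocab_dict["[UNK]"] = len(vocab_dict)
vocab_dict["[PAD]"] = len(vocab_dict)

with open("vocab.json", "w", encoding="utf-8") as f:
    json.dump(vocab_dict, f, ensure_ascii=False)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)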

Considerations

Ethical Considerations

  • This dataset is intended for research and development of Urdu language technologies
  • Users should be aware of potential biases in speaker representation
  • Commercial use should respect speaker rights and consent

Citation

If you use this dataset in your research or applications, please cite it using the following BibTeX entry:

@dataset{awan2024urdu_ljspeech,
  author       = {Humair Munir},
  title        = {Urdu-LjSpeech: A High-Quality Urdu Speech Dataset for TTS and ASR},
  month        = dec,
  year         = 2024,
  publisher    = {Hugging Face},
  url          = {https://huggingface.co/datasets/humairawan/Urdu-LjSpeech}
}