Whisper-Tiny Portuguese - Mid-High Quality Filtered Synthetic Data

This model is a fine-tuned version of openai/whisper-tiny for Portuguese automatic speech recognition (ASR). It was trained on Common Voice 17.0 Portuguese combined with WAVe-filtered synthetic speech data using a balanced quality threshold (q ≥ 0.5), including both high-quality and medium-quality samples.

Purpose

This model tests whether the mid-high quality threshold (q ≥ 0.5) that works optimally for Large-v3 models can benefit Tiny architectures. The results reveal a critical architectural finding:

Key Finding: The balanced threshold that produces the best cross-domain results for Large-v3 actually hurts Tiny performance, demonstrating that optimal filtering thresholds are architecture-dependent.

| Metric | CV-Only Baseline | This Model (Mid-High) | Large-v3 (Same Threshold) |
|---|---|---|---|
| Test WER (CV) | 30.72% | 30.11% (+2.0%) | 8.33% (+29.3%) |
| Test WER (MLS) | 45.83% | 47.25% (-3.1%) | 10.27% (+32.9%) |

Parenthesized values are relative WER changes vs. each model's CV-only baseline; positive means improvement.

While Large-v3 achieves its best cross-domain performance with this threshold, Tiny shows degraded MLS performance despite marginal in-domain improvement.

Model Details

| Property | Value |
|---|---|
| Base Model | openai/whisper-tiny |
| Language | Portuguese (pt) |
| Task | Automatic Speech Recognition (transcribe) |
| Parameters | 39M |
| Training Data | Common Voice 17.0 + Mid-High Quality Synthetic (q ≥ 0.5) |
| Total Training Samples | 41,047 |
| Sampling Rate | 16 kHz |

Evaluation Results

This Model (whisper-tiny-mixed-pt)

| Metric | Value |
|---|---|
| Validation Loss | 0.4550 |
| Validation WER | 26.95% |
| Test WER (Common Voice) | 30.11% |
| Test WER (MLS) | 47.25% |
| Best Checkpoint | Step 450 |
| Max Training Steps | 805 |

Comparison with Other Training Configurations (Whisper-Tiny Portuguese)

| Training Data | Max Steps | Val Loss | Val WER | Test WER (CV) | Test WER (MLS) |
|---|---|---|---|---|---|
| Common Voice Only | 430 | 0.4463 | 27.05% | 30.72% | 45.83% |
| High-Quality (q ≥ 0.8) + CV | 575 | 0.4481 | 26.74% | 29.33% | 44.18% |
| Mid-High (q ≥ 0.5) + CV (this model) | 805 | 0.4550 | 26.95% | 30.11% | 47.25% |
| All Synthetic + CV | 860 | 0.4517 | 28.06% | 29.84% | 46.54% |
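
As a rough guide to reproducing the WER figures above, here is a minimal evaluation sketch using the evaluate library. The split and column names follow the public Common Voice 17.0 release; the 100-sample subset is only for illustration, and the reported numbers may additionally apply text normalization:

import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

wer_metric = evaluate.load("wer")
transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-tiny-mixed-pt",
    device="cuda",
)

# Common Voice 17.0 Portuguese test split, resampled to 16 kHz
dataset = load_dataset("mozilla-foundation/common_voice_17_0", "pt", split="test")
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

predictions, references = [], []
for sample in dataset.select(range(100)):  # subset for illustration; use the full split to reproduce
    predictions.append(transcriber(sample["audio"]["array"])["text"])
    references.append(sample["sentence"])

print(f"WER: {100 * wer_metric.compute(predictions=predictions, references=references):.2f}%")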

Key Performance Characteristics

  • Marginal in-domain gain: 30.11% vs 30.72% baseline (+2.0% relative)
  • Worse cross-domain: 47.25% MLS WER vs 45.83% baseline (-3.1% relative)
  • Largest filtered dataset: 19,181 synthetic samples used (87.3% of the synthetic pool)
  • Most training steps among filtered configurations: 805, yet suboptimal results
  • Demonstrates threshold dependency: what works for Large-v3 does not work for Tiny

Why Mid-High Filtering Hurts Tiny Models

The paper provides insight into this phenomenon:

"Compact models, with fewer parameters, struggle to disentangle the subtle acoustic differences between natural and synthetic speech. Unlike the Large-V3 model, which can exploit its deeper representational hierarchy to extract meaningful patterns, smaller models become overwhelmed by increased acoustic variability."

For Tiny models:

  • Adding medium-quality synthetic samples (0.5 ≤ q < 0.8) introduces noise the model cannot filter out
  • The larger dataset (≈41k vs ≈22k samples) adds confusion rather than benefit
  • Cross-domain performance actually degrades (47.25% vs 45.83% MLS WER)
  • Only strict high-quality filtering (q ≥ 0.8) yields improvement

Tiny vs Large: Same Threshold, Opposite Results

| Model | Config (q ≥ 0.5) | Test WER (CV) | Test WER (MLS) | vs. Own Baseline |
|---|---|---|---|---|
| Whisper-Tiny | 19,181 synthetic | 30.11% | 47.25% | CV marginally better, MLS worse |
| Whisper-Large-v3 | 19,181 synthetic | 8.33% | 10.27% | CV better, MLS best |

This stark contrast demonstrates that optimal data augmentation strategies are architecture-specific.

Training Data

Dataset Composition

| Source | Samples | Description |
|---|---|---|
| Common Voice 17.0 Portuguese | 21,866 | Real speech from Mozilla's crowdsourced dataset |
| Synthetic Transcript PT (q ≥ 0.5) | 19,181 | WAVe-filtered TTS audio (high + medium quality) |
| Total | 41,047 | |
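
A minimal sketch of how the two sources can be combined with the datasets library; the synthetic dataset identifier and its column names are assumptions, not the exact ones used in training:

from datasets import Audio, concatenate_datasets, load_dataset

cv = load_dataset("mozilla-foundation/common_voice_17_0", "pt", split="train")
synthetic = load_dataset("yuriyvnv/synthetic-transcript-pt", split="train")  # hypothetical id

# keep only the columns the trainer needs, at a common 16 kHz sampling rate
cv = cv.select_columns(["audio", "sentence"]).cast_column("audio", Audio(sampling_rate=16_000))
synthetic = synthetic.select_columns(["audio", "sentence"]).cast_column("audio", Audio(sampling_rate=16_000))

train_ds = concatenate_datasets([cv, synthetic]).shuffle(seed=42)
print(len(train_ds))  # 41,047 in the configuration described above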

WAVe Quality Distribution (Portuguese Synthetic Data)

| Quality Level | Samples | Percentage | Used in This Model |
|---|---|---|---|
| High (q ≥ 0.8) | 7,312 | 33.3% | ✓ |
| Medium (0.5 ≤ q < 0.8) | 11,869 | 54.0% | ✓ |
| Low (q < 0.5) | 2,787 | 12.7% | ✗ |

This threshold retains 87.3% of the synthetic dataset, introducing more acoustic variability than Tiny's limited capacity can absorb.
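
Applying the threshold itself is a one-line filter; a sketch assuming each synthetic sample carries its WAVe score in a wave_score column (the column name is an assumption):

Q_THRESHOLD = 0.5  # this model; the stricter variant uses 0.8

def keep(example):
    # retain high- and medium-quality samples, drop low-quality ones
    return example["wave_score"] >= Q_THRESHOLD

filtered = synthetic.filter(keep)  # `synthetic` from the sketch above
print(f"kept {len(filtered)}/{len(synthetic)}")  # ~87.3% at q >= 0.5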

Training Procedure

Hyperparameters

| Parameter | Value |
|---|---|
| Learning Rate | 5e-5 |
| Batch Size (Global) | 256 |
| Warmup Steps | 200 |
| Max Epochs | 5 |
| Precision | BF16 |
| Optimizer | AdamW (fused) |
| Eval Steps | 50 |
| Metric for Best Model | eval_loss |
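
For orientation, a hedged sketch of how these values might map onto Seq2SeqTrainingArguments; only the global batch size of 256 is documented, so the single-device, no-accumulation layout is an assumption:

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-mixed-pt",
    learning_rate=5e-5,
    per_device_train_batch_size=256,  # single H200; global batch = 256 (assumption)
    warmup_steps=200,
    num_train_epochs=5,
    bf16=True,
    optim="adamw_torch_fused",
    eval_strategy="steps",
    eval_steps=50,
    save_strategy="steps",
    save_steps=50,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)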

Training Infrastructure

  • GPU: NVIDIA H200 (140GB VRAM)
  • Operating System: Ubuntu 22.04
  • Framework: Hugging Face Transformers

Usage

Transcription Pipeline

from transformers import pipeline

# load the fine-tuned checkpoint as an ASR pipeline
transcriber = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-tiny-mixed-pt",
    device="cuda",  # use device="cpu" if no GPU is available
)

result = transcriber("path/to/portuguese_audio.wav")
print(result["text"])

Direct Model Usage

from transformers import WhisperProcessor, WhisperForConditionalGeneration
import librosa

processor = WhisperProcessor.from_pretrained("yuriyvnv/whisper-tiny-mixed-pt")
model = WhisperForConditionalGeneration.from_pretrained("yuriyvnv/whisper-tiny-mixed-pt")
model.to("cuda")

# load and resample the audio to the 16 kHz rate the model expects
audio, sr = librosa.load("path/to/portuguese_audio.wav", sr=16000)
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features.to("cuda")

predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
print(transcription)

Specifying Language

To pin decoding to Portuguese transcription, set the generation config on the model object from the example above:

# force Portuguese transcription instead of automatic language detection
model.generation_config.language = "pt"
model.generation_config.task = "transcribe"
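
Recent Transformers releases also accept these settings per call; a minimal variant using the same objects as above:

# equivalent per-call form (availability depends on the Transformers version)
predicted_ids = model.generate(input_features, language="pt", task="transcribe")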

When to Use This Model

Generally not recommended for production. This model is primarily useful for:

  • Research purposes: Understanding how filtering thresholds interact with model capacity
  • Ablation studies: Complete picture of threshold effects across architectures
  • Demonstrating architecture-dependency: Showing that optimal strategies differ by model size

For production use, prefer the high-quality-filtered variant (q ≥ 0.8), which improves over the baseline on both test sets, or a Whisper-Large-v3 model where compute allows.

Research Implications

This model provides evidence for a key finding:

Filtering thresholds that optimize large models may harm smaller ones.

For practitioners:

  1. Don't assume threshold transferability: Optimal q threshold depends on model size
  2. Tiny/Small need stricter filtering: Only q ≥ 0.8 helps; q ≥ 0.5 hurts
  3. Large models are more robust: Can leverage medium-quality data effectively
  4. Test before deploying: Validate augmentation strategies on the target architecture; the sketch below illustrates the selection step
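
The selection step can be as simple as tabulating per-configuration WERs and choosing on the cross-domain set; using the Tiny numbers from the comparison table above:

# test WERs copied from the comparison table above (Whisper-Tiny, Portuguese)
results = {
    "cv_only":            {"cv_wer": 30.72, "mls_wer": 45.83},
    "q>=0.8 + cv":        {"cv_wer": 29.33, "mls_wer": 44.18},
    "q>=0.5 + cv":        {"cv_wer": 30.11, "mls_wer": 47.25},
    "all_synthetic + cv": {"cv_wer": 29.84, "mls_wer": 46.54},
}

# select on the cross-domain (MLS) set to avoid overfitting the in-domain metric
best = min(results, key=lambda name: results[name]["mls_wer"])
print(best)  # "q>=0.8 + cv" for Tiny; the same procedure picks q >= 0.5 for Large-v3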

Limitations

  • Worse MLS than baseline: 47.25% vs 45.83% (degraded cross-domain generalization)
  • Marginal in-domain improvement: not worth the added data-pipeline complexity
  • Wasted compute: 87% more training steps than the CV-only baseline, for mixed results
  • Domain specificity: trained on general-domain Portuguese; specialized domains are untested

Citation

This model is part of research on WAVe (Word-Aligned Verification) for synthetic speech quality assessment. While the WAVe methodology paper is currently under review, please cite our previous work that motivated this research:

@article{perezhohin2024enhancing,
  title={Enhancing Automatic Speech Recognition: Effects of Semantic Audio Filtering on Models Performance},
  author={Perezhohin, Yuriy and Santos, Tiago and Costa, Victor and Peres, Fernando and Castelli, Mauro},
  journal={IEEE Access},
  year={2024},
  publisher={IEEE}
}

License

Apache 2.0
