Update README.md
Browse files
README.md
CHANGED
---
license: cc-by-4.0
task_categories:
- text-generation
- text-to-speech
- automatic-speech-recognition
tags:
- Urdu
language:
- ur
pretty_name: 'Munch-1 Hashed Index'
---

# Munch-1 Hashed Index - Lightweight Audio Reference Dataset

[humair025/munch-1](https://huggingface.co/datasets/humair025/munch-1)
[humair025/hashed_data_munch_1](https://huggingface.co/datasets/humair025/hashed_data_munch_1)

## Overview

**Munch-1 Hashed Index** is a lightweight reference dataset that provides SHA-256 hashes for all audio files in the [Munch-1 Urdu TTS Dataset](https://huggingface.co/datasets/humair025/munch-1). Instead of storing 3.28 TB of raw audio, this index stores only metadata and cryptographic hashes, enabling:

- **Fast duplicate detection** across 3.86M+ audio samples
- **Efficient dataset exploration** without downloading terabytes
- **Quick metadata queries** (voice distribution, text stats, etc.)
- **Selective audio retrieval** - download only what you need
- **Storage efficiency** - 99.97% space reduction (3.28 TB → ~1 GB)
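
Each fingerprint in the index is a standard SHA-256 digest of the stored audio bytes. The following is a minimal sketch of computing the same fingerprint locally; the `audio_hash` column name used in the commented lookup is only an illustrative guess at the index schema, not a confirmed field name.

```python
import hashlib

def sha256_hex(audio_bytes: bytes) -> str:
    """Return the SHA-256 digest of an audio payload as a 64-character hex string."""
    return hashlib.sha256(audio_bytes).hexdigest()

# Example: fingerprint a local file and look it up in the index DataFrame `df`
# (loaded as in the snippet further below); `audio_hash` is an assumed column name.
# local_hash = sha256_hex(open("sample.wav", "rb").read())
# matches = df[df["audio_hash"] == local_hash]
```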

### Related Datasets

- **Original Dataset**: [humair025/munch-1](https://huggingface.co/datasets/humair025/munch-1) - Full audio dataset (3.28 TB)
- **This Index**: [humair025/hashed_data_munch_1](https://huggingface.co/datasets/humair025/hashed_data_munch_1) - Hashed reference (~1 GB)

---

## What Problem Does This Solve?

### The Challenge

The original [Munch-1 dataset](https://huggingface.co/datasets/humair025/munch-1) contains:

- **3,856,500 audio-text pairs**
- **3.28 TB total size**
- **~7,714 separate parquet files** (~400 MB each)

This makes it difficult to:

- Quickly check if specific audio exists

from datasets import load_dataset
import pandas as pd

# Load the entire hashed index (fast - only ~1 GB!)
ds = load_dataset("humair025/hashed_data_munch_1", split="train")
df = pd.DataFrame(ds)

print(f"Total records: {len(df)}")

# Download only the specific parquet file containing this audio
ds = load_original(
    "humair025/munch-1",
    data_files=[row['parquet_file_name']],
    split="train"
)
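
The fragment above is the middle of the `get_audio_by_hash(audio_hash, index_df)` helper used elsewhere in this README. A self-contained sketch of such a helper follows; it assumes `load_original` is simply `datasets.load_dataset` aliased for the original repository, that the index stores its digest under an `audio_hash` column, and that rows in the original files carry the same `id` field - all of which are assumptions, not the confirmed schema.

```python
from datasets import load_dataset as load_original

def get_audio_by_hash(audio_hash, index_df):
    """Resolve a SHA-256 fingerprint to its source row in the original munch-1 files.

    Sketch only: `audio_hash` as the index column name is an assumption, and the
    original rows are assumed to carry the same `id` field as the index.
    """
    row = index_df[index_df["audio_hash"] == audio_hash].iloc[0]

    # Download only the specific parquet file containing this audio
    ds = load_original(
        "humair025/munch-1",
        data_files=[row["parquet_file_name"]],
        split="train",
    )

    # Pick the matching record out of that single file by its paragraph ID
    match = ds.filter(lambda r: r["id"] == row["id"])
    return match[0]
```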

| Field | Type | Description |
|-------|------|-------------|
| `id` | int | Original paragraph ID from source dataset |
| `parquet_file_name` | string | Source file in the [munch-1](https://huggingface.co/datasets/humair025/munch-1) dataset |
| `text` | string | Original Urdu text |
| `transcript` | string | TTS transcript (may differ from input) |
| `voice` | string | Voice used (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan) |
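
These fields are enough to trace any index row back to its source audio. A small illustration, assuming `df` is the index DataFrame loaded as shown earlier; the ID 12345 is an arbitrary placeholder:

```python
# Look up one paragraph by its original ID and see which parquet file holds its audio.
row = df[df["id"] == 12345].iloc[0]   # 12345 is an arbitrary example ID
print(row["voice"], row["parquet_file_name"])

# All index rows that came from the same source parquet file:
siblings = df[df["parquet_file_name"] == row["parquet_file_name"]]
print(f"{len(siblings)} samples share {row['parquet_file_name']}")
```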

```python
# Download only specific voices
ash_files = df[df['voice'] == 'ash']['parquet_file_name'].unique()
ds = load_dataset("humair025/munch-1", data_files=list(ash_files))

# Download only short audio samples
small_files = df[df['audio_size_bytes'] < 40000]['parquet_file_name'].unique()
ds = load_dataset("humair025/munch-1", data_files=list(small_files[:10]))
```

### 4. **Deduplication Pipeline**

| Metric | Original Dataset | Hashed Index | Reduction |
|--------|------------------|--------------|-----------|
| Total Size | 3.28 TB | ~1 GB | **99.97%** |
| Records | 3,856,500 | 3,856,500 | Same |
| Files | 7,714 parquet | Consolidated | **~7,700× fewer** |
| Download Time (100 Mbps) | ~73 hours | ~90 seconds | **~3,000×** |
| Load Time | Minutes-Hours | Seconds | **~100×** |
| Memory Usage | Cannot fit in RAM | ~2-3 GB RAM | **Fits easily** |

### Content Statistics

```
Dataset Overview:
  Total Records:  3,856,500
  Total Files:    7,714 parquet files (~400 MB each)
  Voices:         13 (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
  Language:       Urdu (primary)
  Avg Audio Size: ~50-60 KB per sample
  Avg Duration:   ~3-5 seconds per sample
  Total Duration: ~3,200-4,800 hours of audio
```

---

# Analyze all hash files
from datasets import load_dataset

ds = load_dataset("humair025/hashed_data_munch_1", split="train")
df = pd.DataFrame(ds)

# Group by voice
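
A minimal sketch of the grouping step, assuming the `df` built in the lines above; this is one way to compute the per-voice distribution, not necessarily the original code:

```python
# Per-voice distribution of the 3.86M indexed samples (assumes `df` from the lines above).
voice_counts = df["voice"].value_counts()
print(voice_counts)

# Example of a grouped statistic: average text length per voice.
avg_text_len = df.groupby("voice")["text"].apply(lambda s: s.str.len().mean())
print(avg_text_len.sort_values(ascending=False))
```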

import hashlib

ds = load_dataset(
    "humair025/munch-1",
    data_files=[parquet_file],
    split="train"
)
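
The lines above sit inside the `verify_hash_exists(audio_hash, parquet_file)` helper referenced in this README. A self-contained sketch of that helper follows; it assumes the original parquet rows expose their raw audio bytes under an `audio` column, which is an illustrative name that may not match the real munch-1 schema.

```python
from datasets import load_dataset
import hashlib

def verify_hash_exists(audio_hash, parquet_file):
    """Check whether one original parquet file contains audio matching a SHA-256 hex digest.

    Sketch only: the raw audio bytes are assumed to live in an 'audio' column of the
    original dataset, which may not match the real munch-1 schema.
    """
    ds = load_dataset(
        "humair025/munch-1",
        data_files=[parquet_file],
        split="train",
    )
    for row in ds:
        if hashlib.sha256(row["audio"]).hexdigest() == audio_hash:
            return True
    return False
```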

# Load index
start = time.time()
ds = load_dataset("humair025/hashed_data_munch_1", split="train")
df = pd.DataFrame(ds)
print(f"Load time: {time.time() - start:.2f}s")

# Query by hash
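
A sketch of the two timed queries behind the Expected Performance numbers below, assuming `time` and `df` from the benchmark lines above; `audio_hash` is an illustrative column name and the digest is a placeholder:

```python
# Query by hash (`audio_hash` is an assumed column name; the digest is a placeholder)
some_hash = "0" * 64
start = time.time()
hit = df[df["audio_hash"] == some_hash]
print(f"Hash lookup: {(time.time() - start)*1000:.2f}ms")

# Filter by voice
start = time.time()
ash = df[df["voice"] == "ash"]
print(f"Voice filter: {(time.time() - start)*1000:.2f}ms")
```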

**Expected Performance**:
- Load full dataset: 10-30 seconds
- Hash lookup: < 10 milliseconds
- Voice filter: < 50 milliseconds
- Full dataset scan: < 5 seconds

# 1. Query the index (fast)
df = pd.DataFrame(load_dataset("humair025/hashed_data_munch_1", split="train"))
target_rows = df[df['voice'] == 'ash'].head(100)

# 2. Get unique parquet files
files_needed = target_rows['parquet_file_name'].unique()

# 3. Download only needed files (selective)
from datasets import load_dataset
ds = load_dataset(
    "humair025/munch-1",
    data_files=list(files_needed),
    split="train"
)

If you use this dataset in your research, please cite both the original dataset and this index.

### BibTeX

```bibtex
@dataset{munch_hashed_index_2025,
  title={Munch-1 Hashed Index: Lightweight Reference Dataset for Urdu TTS},
  author={humair025},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/humair025/hashed_data_munch_1}},
  note={Index of humair025/munch-1 dataset with SHA-256 audio hashes}
}

@dataset{munch_urdu_tts_2025,
  title={Munch-1: Large-Scale Urdu Text-to-Speech Dataset},
  author={humair025},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/humair025/munch-1}}
}
```

### APA Format

```
humair025. (2025). Munch-1 Hashed Index: Lightweight Reference Dataset for Urdu TTS
[Dataset]. Hugging Face. https://huggingface.co/datasets/humair025/hashed_data_munch_1

humair025. (2025). Munch-1: Large-Scale Urdu Text-to-Speech Dataset [Dataset].
Hugging Face. https://huggingface.co/datasets/humair025/munch-1
```

### MLA Format

```
humair025. "Munch-1 Hashed Index: Lightweight Reference Dataset for Urdu TTS."
Hugging Face, 2025, https://huggingface.co/datasets/humair025/hashed_data_munch_1.

humair025. "Munch-1: Large-Scale Urdu Text-to-Speech Dataset." Hugging Face, 2025,
https://huggingface.co/datasets/humair025/munch-1.
```

---

## License

This index dataset inherits the license from the original [Munch-1 dataset](https://huggingface.co/datasets/humair025/munch-1):

**Creative Commons Attribution 4.0 International (CC-BY-4.0)**

## Important Links

- [**Original Audio Dataset**](https://huggingface.co/datasets/humair025/munch-1) - Full 3.28 TB audio
- [**This Hashed Index**](https://huggingface.co/datasets/humair025/hashed_data_munch_1) - Lightweight reference
- [**Discussions**](https://huggingface.co/datasets/humair025/hashed_data_munch_1/discussions) - Ask questions
- [**Report Issues**](https://huggingface.co/datasets/humair025/hashed_data_munch_1/discussions) - Bug reports

---

## FAQ

### Q: Why use hashes instead of audio?
**A:** Hashes provide unique fingerprints for audio files while taking only 64 bytes (as a hex string) vs ~50 KB per audio clip. This enables duplicate detection and fast queries without storing massive audio files.

### Q: Can I reconstruct audio from hashes?
**A:** No. SHA-256 is a one-way cryptographic hash. You must download the original audio from the [Munch-1 dataset](https://huggingface.co/datasets/humair025/munch-1) using the file reference provided.

### Q: How accurate are the hashes?
**A:** SHA-256 has virtually zero collision probability. If two hashes match, the audio is identical (byte-for-byte).
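
For intuition, a rough birthday-bound estimate of an accidental SHA-256 collision among the 3,856,500 indexed samples (a back-of-envelope check, not a formal proof):

```python
# Birthday-bound estimate: P(collision) ≈ n*(n-1) / (2 * 2**256) for n hashed samples.
n = 3_856_500
p = n * (n - 1) / (2 * 2**256)
print(f"~{p:.1e}")   # about 6e-65 -- negligible
```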

**A:** Use the `parquet_file_name` and `id` fields to locate and download the specific audio from the [original dataset](https://huggingface.co/datasets/humair025/munch-1). See examples above.

### Q: Is this dataset complete?
**A:** Yes, this index covers all 3,856,500 rows across all 7,714 parquet files from the original Munch-1 dataset.

### Q: Can I contribute?
**A:** Yes! Help verify hashes, report inconsistencies, or suggest improvements via discussions.

## Acknowledgments

- **Original Dataset**: [humair025/munch-1](https://huggingface.co/datasets/humair025/munch-1)
- **TTS Generation**: OpenAI-compatible models
- **Voices**: 13 high-quality voices (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
- **Infrastructure**: HuggingFace Datasets platform
- **Hashing**: SHA-256 cryptographic hash function

## Version History

- **v1.0.0** (December 2025): Initial release
  - Processed all 7,714 parquet files
  - 3,856,500 audio samples indexed
  - SHA-256 hashes computed for all audio
  - ~99.97% space reduction achieved

---

**Last Updated**: December 2025

**Status**: Complete

---