humair025 committed
Commit 6acaf08 · verified · 1 Parent(s): d720a50

Update README.md

Files changed (1):
  1. README.md (+64 -75)
README.md CHANGED
@@ -1,47 +1,35 @@
- ---
- license: cc-by-4.0
- task_categories:
- - text-generation
- - text-to-speech
- - automatic-speech-recognition
- tags:
- - Urdu
- language:
- - ur
- pretty_name: ' Munch-1 Hashed Index '
- ---
- # Munch Hashed Index - Lightweight Audio Reference Dataset

- [![Original Dataset](https://img.shields.io/badge/🤗%20Original-Munch-blue)](https://huggingface.co/datasets/humair025/munch-1)
  [![Hashed Index](https://img.shields.io/badge/🤗%20Index-hashed__data-green)](https://huggingface.co/datasets/humair025/hashed_data_munch_1)
- [![Size](https://img.shields.io/badge/Size-~1000MB-brightgreen)]()
- [![Original Size](https://img.shields.io/badge/Original-3.28 TB-orange)]()
- [![Space Saved](https://img.shields.io/badge/Space%20Saved-99.99%25-success)]()

  ## 📖 Overview

- **Munch Hashed Index** is a lightweight reference dataset that provides SHA-256 hashes for all audio files in the [Munch Urdu TTS Dataset](https://huggingface.co/datasets/humair025/munch-1). Instead of storing 3+ TB of raw audio, this index stores only metadata and cryptographic hashes, enabling:

  - ✅ **Fast duplicate detection** across 3.86M+ audio samples
  - ✅ **Efficient dataset exploration** without downloading terabytes
  - ✅ **Quick metadata queries** (voice distribution, text stats, etc.)
  - ✅ **Selective audio retrieval** - download only what you need
- - ✅ **Storage efficiency** - 99.99% space reduction (3.28 TB → ~1000 MB)

  ### 🔗 Related Datasets

- - **Original Dataset**: [humair025/Munch](https://huggingface.co/datasets/humair025/munch-1) - Full audio dataset (3.86 TB)
- - **This Index**: [humair025/hashed_data](https://huggingface.co/datasets/humair025/hashed_data_munch_1) - Hashed reference (~1000 MB)

  ---

  ## 🎯 What Problem Does This Solve?

  ### The Challenge
- The original [Munch dataset](https://huggingface.co/datasets/humair025/munch-1) contains:
- - 📊 **3.86M+ audio-text pairs**
- - 💾 **3.7+ TB total size**
- - 📦 **7,000+ separate parquet files**

  This makes it difficult to:
  - ❌ Quickly check if specific audio exists
@@ -73,8 +61,8 @@ pip install datasets pandas
  from datasets import load_dataset
  import pandas as pd

- # Load the entire hashed index (fast - only ~150 MB!)
- ds = load_dataset("humair025/hashed_data", split="train")
  df = pd.DataFrame(ds)

  print(f"Total records: {len(df)}")
@@ -133,7 +121,7 @@ def get_audio_by_hash(audio_hash, index_df):

  # Download only the specific parquet file containing this audio
  ds = load_original(
- "humair025/Munch",
  data_files=[row['parquet_file_name']],
  split="train"
  )
@@ -170,7 +158,7 @@ wav_io = pcm16_to_wav(audio_bytes)
  | Field | Type | Description |
  |-------|------|-------------|
  | `id` | int | Original paragraph ID from source dataset |
- | `parquet_file_name` | string | Source file in [Munch](https://huggingface.co/datasets/humair025/munch-1) dataset |
  | `text` | string | Original Urdu text |
  | `transcript` | string | TTS transcript (may differ from input) |
  | `voice` | string | Voice used (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan) |
@@ -228,11 +216,11 @@ long_text = df[df['text'].str.len() > 200]
  ```python
  # Download only specific voices
  ash_files = df[df['voice'] == 'ash']['parquet_file_name'].unique()
- ds = load_dataset("humair025/Munch", data_files=list(ash_files))

  # Download only short audio samples
  small_files = df[df['audio_size_bytes'] < 40000]['parquet_file_name'].unique()
- ds = load_dataset("humair025/Munch", data_files=list(small_files[:10]))
  ```

  ### 4. **Deduplication Pipeline**
@@ -265,21 +253,24 @@ print(f"Similar audio candidates: {len(similar)}")

  | Metric | Original Dataset | Hashed Index | Reduction |
  |--------|------------------|--------------|-----------|
- | Total Size | 2.17 TB | ~500 MB | **99%** |
- | Download Time (100 Mbps) | ~X hours | ~12 seconds | **Thousand Time×** |
- | Load Time | Minutes | Seconds | **~100×** |
- | Memory Usage | Cannot fit in RAM | Fit | **Thousands X×** |

  ### Content Statistics

  ```
  📊 Dataset Overview:
- Total Records: ~2,500,000
- Unique Audio: [Run analysis to determine]
  Voices: 13 (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
- Languages: Urdu (primary), Mixed (some samples)
  Avg Audio Size: ~50-60 KB per sample
  Avg Duration: ~3-5 seconds per sample
  ```

  ---
@@ -292,7 +283,7 @@ print(f"Similar audio candidates: {len(similar)}")
  # Analyze all hash files
  from datasets import load_dataset

- ds = load_dataset("humair025/hashed_data", split="train")
  df = pd.DataFrame(ds)

  # Group by voice
@@ -319,7 +310,7 @@ def verify_hash_exists(audio_hash, parquet_file):
  import hashlib

  ds = load_dataset(
- "humair025/Munch",
  data_files=[parquet_file],
  split="train"
  )
@@ -403,7 +394,8 @@ import time

  # Load index
  start = time.time()
- df = pd.read_parquet('hashed_0_39.parquet')
  print(f"Load time: {time.time() - start:.2f}s")

  # Query by hash
@@ -418,7 +410,7 @@ print(f"Voice filter: {(time.time() - start)*1000:.2f}ms")
  ```

  **Expected Performance**:
- - Load single file: < 1 second
  - Hash lookup: < 10 milliseconds
  - Voice filter: < 50 milliseconds
  - Full dataset scan: < 5 seconds
@@ -431,7 +423,7 @@ print(f"Voice filter: {(time.time() - start)*1000:.2f}ms")

  ```python
  # 1. Query the index (fast)
- df = pd.read_parquet('hashed_index.parquet')
  target_rows = df[df['voice'] == 'ash'].head(100)

  # 2. Get unique parquet files
@@ -440,7 +432,7 @@ files_needed = target_rows['parquet_file_name'].unique()
  # 3. Download only needed files (selective)
  from datasets import load_dataset
  ds = load_dataset(
- "humair025/Munch",
  data_files=list(files_needed),
  split="train"
  )
@@ -463,46 +455,42 @@ If you use this dataset in your research, please cite both the original dataset
  ### BibTeX

  ```bibtex
- @dataset{munch_hashed_index_V2_2025,
- title={Munch Hashed Index: Lightweight Reference Dataset for Urdu TTS},
  author={humair025},
  year={2025},
  publisher={Hugging Face},
- howpublished={
- \url{https://huggingface.co/datasets/humair025/hashed_data_munch_1}
- },
- note={Index of humair025/Munch dataset with SHA-256 audio hashes}
  }

- @dataset{munch_urdu_tts_V2_2025,
- title={Munch V2: Large-Scale Urdu Text-to-Speech Dataset},
  author={humair025},
  year={2025},
  publisher={Hugging Face},
- howpublished={
- \url{https://huggingface.co/datasets/humair025/munch-1}
- }
  }
  ```

  ### APA Format

  ```
- humair025. (2025). Munch Hashed Index V2: Lightweight Reference Dataset for Urdu TTS
  [Dataset]. Hugging Face. https://huggingface.co/datasets/humair025/hashed_data_munch_1

- humair025. (2025). Munch: Large-Scale Urdu Text-to-Speech Dataset [Dataset].
  Hugging Face. https://huggingface.co/datasets/humair025/munch-1
  ```

  ### MLA Format

  ```
- humair025. "Munch Hashed Index V2: Lightweight Reference Dataset for Urdu TTS."
- Hugging Face, 2024, https://huggingface.co/datasets/humair025/hashed_data.

- humair025. "Munch: Large-Scale Urdu Text-to-Speech Dataset." Hugging Face, 2024,
- https://huggingface.co/datasets/humair025/Munch.
  ```

  ---
@@ -527,7 +515,7 @@ We welcome suggestions for:

  ## 📄 License

- This index dataset inherits the license from the original [Munch dataset](https://huggingface.co/datasets/humair025/Munch):

  **Creative Commons Attribution 4.0 International (CC-BY-4.0)**

@@ -543,20 +531,20 @@ Under the terms:

  ## 🔗 Important Links

- - 🎧 [**Original Audio Dataset**](https://huggingface.co/datasets/humair025/munch) - Full 3.28 TB audio
  - 📊 [**This Hashed Index**](https://huggingface.co/datasets/humair025/hashed_data_munch_1) - Lightweight reference
- - 💬 [**Discussions**](https://huggingface.co/datasets/humair025/hashed_data/discussions) - Ask questions
- - 🐛 [**Report Issues**](https://huggingface.co/datasets/humair025/hashed_data/discussions) - Bug reports

  ---

  ## ❓ FAQ

  ### Q: Why use hashes instead of audio?
- **A:** Hashes provide unique fingerprints for audio files while taking only 64 bytes vs ~50kb-12MB per audio. This enables duplicate detection and fast queries without storing massive audio files.

  ### Q: Can I reconstruct audio from hashes?
- **A:** No. SHA-256 is a one-way cryptographic hash. You must download the original audio from the [Munch dataset](https://huggingface.co/datasets/humair025/munch-1) using the file reference provided.

  ### Q: How accurate are the hashes?
  **A:** SHA-256 has virtually zero collision probability. If two hashes match, the audio is identical (byte-for-byte).
@@ -565,7 +553,7 @@ Under the terms:
  **A:** Use the `parquet_file_name` and `id` fields to locate and download the specific audio from the [original dataset](https://huggingface.co/datasets/humair025/munch-1). See examples above.

  ### Q: Is this dataset complete?
- **A:** This index is continuously updated as new batches are processed. Check the file list to see coverage.

  ### Q: Can I contribute?
  **A:** Yes! Help verify hashes, report inconsistencies, or suggest improvements via discussions.
@@ -574,9 +562,9 @@ Under the terms:

  ## 🙏 Acknowledgments

- - **Original Dataset**: [humair025/Munch](https://huggingface.co/datasets/humair025/munch-1)
  - **TTS Generation**: OpenAI-compatible models
- - **Voices**: 13 high-quality voices
  - **Infrastructure**: HuggingFace Datasets platform
  - **Hashing**: SHA-256 cryptographic hash function

@@ -584,16 +572,17 @@ Under the terms:

  ## 📝 Version History

- - **v1.0.0** (December 2025): Initial release with hash index
-   - Processed [X] out of N parquet files
-   - [Y] unique audio hashes identified
-   - [Z]% deduplication achieved

  ---

  **Last Updated**: December 2025

- **Status**: 🔄 Actively Processing (check file count for latest progress)

  ---
 
+ # Munch-1 Hashed Index - Lightweight Audio Reference Dataset

+ [![Original Dataset](https://img.shields.io/badge/🤗%20Original-Munch--1-blue)](https://huggingface.co/datasets/humair025/munch-1)
  [![Hashed Index](https://img.shields.io/badge/🤗%20Index-hashed__data-green)](https://huggingface.co/datasets/humair025/hashed_data_munch_1)
+ [![Size](https://img.shields.io/badge/Size-~1GB-brightgreen)]()
+ [![Original Size](https://img.shields.io/badge/Original-3.28TB-orange)]()
+ [![Space Saved](https://img.shields.io/badge/Space%20Saved-99.97%25-success)]()

  ## 📖 Overview

+ **Munch-1 Hashed Index** is a lightweight reference dataset that provides SHA-256 hashes for all audio files in the [Munch-1 Urdu TTS Dataset](https://huggingface.co/datasets/humair025/munch-1). Instead of storing 3.28 TB of raw audio, this index stores only metadata and cryptographic hashes, enabling:

  - ✅ **Fast duplicate detection** across 3.86M+ audio samples
  - ✅ **Efficient dataset exploration** without downloading terabytes
  - ✅ **Quick metadata queries** (voice distribution, text stats, etc.)
  - ✅ **Selective audio retrieval** - download only what you need
+ - ✅ **Storage efficiency** - 99.97% space reduction (3.28 TB → ~1 GB)

  ### 🔗 Related Datasets

+ - **Original Dataset**: [humair025/munch-1](https://huggingface.co/datasets/humair025/munch-1) - Full audio dataset (3.28 TB)
+ - **This Index**: [humair025/hashed_data_munch_1](https://huggingface.co/datasets/humair025/hashed_data_munch_1) - Hashed reference (~1 GB)

  ---

  ## 🎯 What Problem Does This Solve?

  ### The Challenge
+ The original [Munch-1 dataset](https://huggingface.co/datasets/humair025/munch-1) contains:
+ - 📊 **3,856,500 audio-text pairs**
+ - 💾 **3.28 TB total size**
+ - 📦 **~7,714 separate parquet files** (~400 MB each)

  This makes it difficult to:
  - ❌ Quickly check if specific audio exists
 
  from datasets import load_dataset
  import pandas as pd

+ # Load the entire hashed index (fast - only ~1 GB!)
+ ds = load_dataset("humair025/hashed_data_munch_1", split="train")
  df = pd.DataFrame(ds)

  print(f"Total records: {len(df)}")
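A common first use of the loaded index is duplicate detection. A minimal sketch, assuming the hash column is named `audio_hash` (the helper functions elsewhere in this README use that name; adjust it if the schema differs):

```python
import pandas as pd
from datasets import load_dataset

# Load the index once and keep it as a DataFrame.
ds = load_dataset("humair025/hashed_data_munch_1", split="train")
df = pd.DataFrame(ds)

HASH_COL = "audio_hash"  # assumed column name

# Rows sharing a hash reference byte-identical audio.
dupes = df[df[HASH_COL].duplicated(keep=False)]
print(f"Duplicate rows: {len(dupes)}")
print(f"Unique audio files: {df[HASH_COL].nunique()}")

# Constant-time membership test for a known hash.
known_hashes = set(df[HASH_COL])
print(df[HASH_COL].iloc[0] in known_hashes)  # True for any hash taken from the index
```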
 

  # Download only the specific parquet file containing this audio
  ds = load_original(
+ "humair025/munch-1",
  data_files=[row['parquet_file_name']],
  split="train"
  )
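The fragment above is only the middle of the helper, and the `load_original` call appears to stand in for `datasets.load_dataset`. A self-contained sketch of the same retrieval idea, assuming the index column is named `audio_hash` and that rows in munch-1 carry their raw audio bytes in an `audio` field (both are assumptions to check against the schemas):

```python
from datasets import load_dataset
import pandas as pd

def fetch_audio_by_hash(audio_hash, index_df):
    """Sketch: fetch raw audio bytes for one indexed hash (not the README's exact helper)."""
    # Locate the index row; 'audio_hash' is an assumed column name.
    row = index_df[index_df["audio_hash"] == audio_hash].iloc[0]

    # Download only the parquet file that contains this sample.
    ds = load_dataset(
        "humair025/munch-1",
        data_files=[row["parquet_file_name"]],
        split="train",
    )

    # Match the record by its original id; 'audio' is an assumed field name
    # for the raw bytes in munch-1 - check the original dataset's schema.
    record = next(r for r in ds if r["id"] == row["id"])
    return record["audio"]
```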
 
  | Field | Type | Description |
  |-------|------|-------------|
  | `id` | int | Original paragraph ID from source dataset |
+ | `parquet_file_name` | string | Source file in [munch-1](https://huggingface.co/datasets/humair025/munch-1) dataset |
  | `text` | string | Original Urdu text |
  | `transcript` | string | TTS transcript (may differ from input) |
  | `voice` | string | Voice used (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan) |
 
  ```python
  # Download only specific voices
  ash_files = df[df['voice'] == 'ash']['parquet_file_name'].unique()
+ ds = load_dataset("humair025/munch-1", data_files=list(ash_files))

  # Download only short audio samples
  small_files = df[df['audio_size_bytes'] < 40000]['parquet_file_name'].unique()
+ ds = load_dataset("humair025/munch-1", data_files=list(small_files[:10]))
  ```
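Because whole parquet files are fetched, it can be worth estimating the transfer before downloading. A small sketch reusing the `df` index loaded earlier (column names as used above; the ~400 MB average file size is the figure quoted in this README):

```python
# Pre-flight estimate for a selective download.
target = df[df['voice'] == 'ash']

files_needed = target['parquet_file_name'].unique()
payload_gb = target['audio_size_bytes'].sum() / 1e9

print(f"Files to fetch: {len(files_needed)}")
print(f"Audio payload in matching rows: {payload_gb:.2f} GB")

# Whole files are downloaded, so the real transfer is roughly
# number of files multiplied by ~400 MB per file.
print(f"Approx. transfer: {len(files_needed) * 0.4:.0f} GB")
```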

  ### 4. **Deduplication Pipeline**


  | Metric | Original Dataset | Hashed Index | Reduction |
  |--------|------------------|--------------|-----------|
+ | Total Size | 3.28 TB | ~1 GB | **99.97%** |
+ | Records | 3,856,500 | 3,856,500 | Same |
+ | Files | 7,714 parquet | Consolidated | **~7,700× fewer** |
+ | Download Time (100 Mbps) | ~73 hours | ~90 seconds | **~3,000×** |
+ | Load Time | Minutes-Hours | Seconds | **~100×** |
+ | Memory Usage | Cannot fit in RAM | ~2-3 GB RAM | **Fits easily** |
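The download-time rows follow directly from the sizes; a quick back-of-the-envelope check, assuming the 100 Mbps link is fully utilised:

```python
# 3.28 TB over 100 Mbps, ignoring protocol overhead.
print(3.28e12 * 8 / 100e6 / 3600)  # ~72.9 hours

# ~1 GB index over the same link.
print(1e9 * 8 / 100e6)             # ~80 seconds
```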

  ### Content Statistics

  ```
  📊 Dataset Overview:
+ Total Records: 3,856,500
+ Total Files: 7,714 parquet files (~400 MB each)
  Voices: 13 (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
+ Language: Urdu (primary)
  Avg Audio Size: ~50-60 KB per sample
  Avg Duration: ~3-5 seconds per sample
+ Total Duration: ~3,200-4,800 hours of audio
  ```
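These headline numbers can be recomputed from the index itself. A minimal sketch using only columns already referenced in this README (`voice`, `audio_size_bytes`), with `df` loaded as in the earlier sections:

```python
# Recompute the summary statistics from the loaded index.
print(f"Total records: {len(df):,}")
print(f"Distinct voices: {df['voice'].nunique()}")
print(df['voice'].value_counts())

print(f"Avg audio size: {df['audio_size_bytes'].mean() / 1024:.1f} KB")
print(f"Total audio payload referenced: {df['audio_size_bytes'].sum() / 1e12:.2f} TB")
```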

  ---
  # Analyze all hash files
  from datasets import load_dataset

+ ds = load_dataset("humair025/hashed_data_munch_1", split="train")
  df = pd.DataFrame(ds)

  # Group by voice
 
  import hashlib

  ds = load_dataset(
+ "humair025/munch-1",
  data_files=[parquet_file],
  split="train"
  )
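For reference, a self-contained version of the verification idea (a sketch rather than the README's exact `verify_hash_exists`; it assumes an `audio_hash` index column and an `audio` bytes field in the original rows):

```python
import hashlib
from datasets import load_dataset

def verify_index_row(index_row):
    """Recompute SHA-256 for one indexed sample and compare it with the stored hash."""
    ds = load_dataset(
        "humair025/munch-1",
        data_files=[index_row["parquet_file_name"]],
        split="train",
    )
    record = next(r for r in ds if r["id"] == index_row["id"])

    # 'audio' is an assumed field name for the raw bytes in munch-1.
    recomputed = hashlib.sha256(record["audio"]).hexdigest()
    return recomputed == index_row["audio_hash"]

# Spot-check one row of the index (df as loaded in the earlier sections).
print(verify_index_row(df.iloc[0]))
```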
 

  # Load index
  start = time.time()
+ ds = load_dataset("humair025/hashed_data_munch_1", split="train")
+ df = pd.DataFrame(ds)
  print(f"Load time: {time.time() - start:.2f}s")

  # Query by hash
 
  ```

  **Expected Performance**:
+ - Load full dataset: 10-30 seconds
  - Hash lookup: < 10 milliseconds
  - Voice filter: < 50 milliseconds
  - Full dataset scan: < 5 seconds
 

  ```python
  # 1. Query the index (fast)
+ df = pd.DataFrame(load_dataset("humair025/hashed_data_munch_1", split="train"))
  target_rows = df[df['voice'] == 'ash'].head(100)

  # 2. Get unique parquet files

  # 3. Download only needed files (selective)
  from datasets import load_dataset
  ds = load_dataset(
+ "humair025/munch-1",
  data_files=list(files_needed),
  split="train"
  )
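Each parquet file holds many samples, so a final step that filters the download back to the rows chosen in step 1 is usually needed; a short sketch continuing the workflow above:

```python
# 4. Keep only the rows selected in step 1.
wanted_ids = set(target_rows['id'])
subset = ds.filter(lambda r: r['id'] in wanted_ids)
print(f"Downloaded rows: {len(ds)}, kept after filtering: {len(subset)}")
```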
 
  ### BibTeX

  ```bibtex
+ @dataset{munch_hashed_index_2025,
+ title={Munch-1 Hashed Index: Lightweight Reference Dataset for Urdu TTS},
  author={humair025},
  year={2025},
  publisher={Hugging Face},
+ howpublished={\url{https://huggingface.co/datasets/humair025/hashed_data_munch_1}},
+ note={Index of humair025/munch-1 dataset with SHA-256 audio hashes}
  }

+ @dataset{munch_urdu_tts_2025,
+ title={Munch-1: Large-Scale Urdu Text-to-Speech Dataset},
  author={humair025},
  year={2025},
  publisher={Hugging Face},
+ howpublished={\url{https://huggingface.co/datasets/humair025/munch-1}}
  }
  ```

  ### APA Format

  ```
+ humair025. (2025). Munch-1 Hashed Index: Lightweight Reference Dataset for Urdu TTS
  [Dataset]. Hugging Face. https://huggingface.co/datasets/humair025/hashed_data_munch_1

+ humair025. (2025). Munch-1: Large-Scale Urdu Text-to-Speech Dataset [Dataset].
  Hugging Face. https://huggingface.co/datasets/humair025/munch-1
  ```

  ### MLA Format

  ```
+ humair025. "Munch-1 Hashed Index: Lightweight Reference Dataset for Urdu TTS."
+ Hugging Face, 2025, https://huggingface.co/datasets/humair025/hashed_data_munch_1.

+ humair025. "Munch-1: Large-Scale Urdu Text-to-Speech Dataset." Hugging Face, 2025,
+ https://huggingface.co/datasets/humair025/munch-1.
  ```

  ---
 

  ## 📄 License

+ This index dataset inherits the license from the original [Munch-1 dataset](https://huggingface.co/datasets/humair025/munch-1):

  **Creative Commons Attribution 4.0 International (CC-BY-4.0)**


  ## 🔗 Important Links

+ - 🎧 [**Original Audio Dataset**](https://huggingface.co/datasets/humair025/munch-1) - Full 3.28 TB audio
  - 📊 [**This Hashed Index**](https://huggingface.co/datasets/humair025/hashed_data_munch_1) - Lightweight reference
+ - 💬 [**Discussions**](https://huggingface.co/datasets/humair025/hashed_data_munch_1/discussions) - Ask questions
+ - 🐛 [**Report Issues**](https://huggingface.co/datasets/humair025/hashed_data_munch_1/discussions) - Bug reports

  ---

  ## ❓ FAQ

  ### Q: Why use hashes instead of audio?
+ **A:** Hashes provide unique fingerprints for audio files while taking only 64 bytes vs ~50KB per audio. This enables duplicate detection and fast queries without storing massive audio files.

  ### Q: Can I reconstruct audio from hashes?
+ **A:** No. SHA-256 is a one-way cryptographic hash. You must download the original audio from the [Munch-1 dataset](https://huggingface.co/datasets/humair025/munch-1) using the file reference provided.

  ### Q: How accurate are the hashes?
  **A:** SHA-256 has virtually zero collision probability. If two hashes match, the audio is identical (byte-for-byte).
 
  **A:** Use the `parquet_file_name` and `id` fields to locate and download the specific audio from the [original dataset](https://huggingface.co/datasets/humair025/munch-1). See examples above.

  ### Q: Is this dataset complete?
+ **A:** Yes, this index covers all 3,856,500 rows across all 7,714 parquet files from the original Munch-1 dataset.

  ### Q: Can I contribute?
  **A:** Yes! Help verify hashes, report inconsistencies, or suggest improvements via discussions.


  ## 🙏 Acknowledgments

+ - **Original Dataset**: [humair025/munch-1](https://huggingface.co/datasets/humair025/munch-1)
  - **TTS Generation**: OpenAI-compatible models
+ - **Voices**: 13 high-quality voices (alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, amuch, dan)
  - **Infrastructure**: HuggingFace Datasets platform
  - **Hashing**: SHA-256 cryptographic hash function


  ## 📝 Version History

+ - **v1.0.0** (December 2025): Initial release
+   - Processed all 7,714 parquet files
+   - 3,856,500 audio samples indexed
+   - SHA-256 hashes computed for all audio
+   - ~99.97% space reduction achieved

  ---

  **Last Updated**: December 2025

+ **Status**: ✅ Complete

  ---