juancopi81 committed on
Commit 37a7972 · verified · 1 Parent(s): ebd49cb

Update README

README.md:
---

# Dataset Card for "lmd_clean_8bars_32th_resolution"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

Available at [Portex](https://marketplace.portexai.com/creator-profile)

## 🎵 Lakh MIDI to MMM-Style Text Dataset

This dataset converts the Lakh MIDI Dataset into a structured text format inspired by the [Multitrack Music Machine (MMM) paper](https://arxiv.org/abs/2008.01307). It includes **344,900 samples**, each representing an **8-bar symbolic music fragment**, tokenized into a language-model-friendly format.

Each line in the dataset is a music fragment composed of tokens like:

```text
PIECE_START COMPOSER=JOHN_FARNHAM PERIOD= GENRE= TIME_SIG=4/4 TRACK_START INST=122 DENSITY=0 BAR_START TIME_DELTA=48 BAR_END ...
```
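
Because the format marks structure with paired `*_START`/`*_END` tokens, a flat line can be parsed back into piece, track, and bar levels with a single scan. A minimal stdlib sketch — the `parse_fragment` helper and its output shape are illustrative, not an official parser shipped with this dataset:

```python
def parse_fragment(text: str) -> dict:
    """Parse an MMM-style token string into piece attributes and per-track bars.

    Illustrative helper: assumes the PIECE/TRACK/BAR START-END nesting
    shown above; KEY=VALUE tokens outside a bar are treated as attributes.
    """
    piece = {"attrs": {}, "tracks": []}
    track = None
    bar = None
    for tok in text.split():
        if tok in ("PIECE_START", "PIECE_END"):
            continue
        elif tok == "TRACK_START":
            track = {"attrs": {}, "bars": []}
        elif tok == "TRACK_END":
            piece["tracks"].append(track)
            track = None
        elif tok == "BAR_START":
            bar = []
        elif tok == "BAR_END":
            track["bars"].append(bar)
            bar = None
        elif bar is not None:
            bar.append(tok)  # note/timing events stay inside the current bar
        elif "=" in tok:
            key, _, value = tok.partition("=")
            (track["attrs"] if track is not None else piece["attrs"])[key] = value
    return piece

example = ("PIECE_START COMPOSER=JOHN_FARNHAM TIME_SIG=4/4 "
           "TRACK_START INST=122 DENSITY=0 "
           "BAR_START TIME_DELTA=48 BAR_END TRACK_END")
parsed = parse_fragment(example)
```

Counting `parsed["tracks"][i]["bars"]` is also a quick sanity check that each fragment really spans 8 bars.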

### 🔍 Metadata
- **Modality**: Text (converted from MIDI)
- **Format**: One tokenized sequence per line (plain text)
- **Size**: 344,900 rows
- **Source**: Derived from the Lakh MIDI Dataset
- **Structure**: Each row represents an 8-bar segment tokenized to match MMM syntax

### 🤖 Use Cases
- Pretraining or fine-tuning symbolic music models
- Sequence modeling research for music
- Input for generative transformer models
- Creative AI applications in music composition

### 🧠 Why this dataset?
Symbolic music datasets in tokenized, language-model-ready formats are rare. This dataset bridges audio-derived symbolic data and the world of NLP modeling, saving hours of preprocessing and formatting work for researchers and ML developers.