
FMA-Small: Pre-computed Log-Mel Spectrograms for Music Genre Classification

Pre-processed FMA-Small dataset containing 155,153 log-mel spectrogram segments, ready for training audio genre classifiers. No audio decoding is needed: load and train directly.

Dataset Details

| Property | Value |
|---|---|
| Source | FMA-Small (8,000 tracks × 30 s) |
| Representation | Log-mel spectrogram |
| Sample shape | (128, 300): 128 mel bins × 300 time frames |
| Sample rate | 32,000 Hz |
| Segment duration | 3 s (1.5 s overlap → ~19 segments/track) |
| Classes | 8 genres |
| Split strategy | StratifiedGroupKFold on artist_id (zero artist leakage) |
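The segment and frame counts in the table follow from the card's stated parameters; a quick sanity check of the arithmetic:

```python
# Arithmetic behind the table above (values from the card; no real audio needed).
track_dur, seg_dur, overlap = 30.0, 3.0, 1.5   # seconds
hop = seg_dur - overlap                        # segment hop: 1.5 s
n_segments = int((track_dur - seg_dur) // hop) + 1
print(n_segments)                              # 19 segments per 30 s track

sr, stft_hop = 32_000, 320                     # sample rate, STFT hop length
n_frames = (3 * sr) // stft_hop                # 300 time frames per 3 s segment
print(n_frames)
```

(A centered STFT actually yields one extra frame, which the pipeline below truncates away to reach exactly 300.)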

Features

| Column | Type | Description |
|---|---|---|
| mel | Array2D(float32) | (128, 300) log-mel spectrogram segment |
| label | ClassLabel | Genre label (0–7) |
| track_id | int64 | FMA track identifier |
| artist_id | int64 | FMA artist identifier |
| genre | string | Human-readable genre name |

Labels

0 Electronic · 1 Experimental · 2 Folk · 3 Hip-Hop · 4 Instrumental · 5 International · 6 Pop · 7 Rock
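The mapping above can be written out as plain dicts for quick lookups (illustrative; the loaded dataset's ClassLabel feature carries the same mapping, accessible via `int2str`/`str2int`):

```python
# Genre id ↔ name mapping, transcribed from the label list above.
GENRES = ["Electronic", "Experimental", "Folk", "Hip-Hop",
          "Instrumental", "International", "Pop", "Rock"]
label_to_genre = dict(enumerate(GENRES))
genre_to_label = {g: i for i, g in enumerate(GENRES)}
```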

Splits

| Split | Samples | Ratio |
|---|---|---|
| train | 99,140 | ~64% |
| validation | 24,807 | ~16% |
| test | 31,206 | ~20% |

No artist appears in more than one split; this is enforced via StratifiedGroupKFold on artist_id to prevent data leakage.
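The disjointness property is easy to verify yourself. A small helper (the function and the toy IDs below are illustrative, not part of the dataset):

```python
def splits_are_artist_disjoint(split_artists: dict) -> bool:
    """Return True if no artist id appears in more than one split."""
    seen = {}  # artist id -> split it was first seen in
    for split, artists in split_artists.items():
        for a in set(artists):
            if a in seen and seen[a] != split:
                return False
            seen[a] = split
    return True

# Toy example with made-up artist ids:
ok = splits_are_artist_disjoint({"train": [1, 2], "validation": [3], "test": [4, 5]})
leaky = splits_are_artist_disjoint({"train": [1, 2], "test": [2]})
```

On the real dataset, pass `{s: dd[s]["artist_id"] for s in dd}` after loading.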

Quick Start

```python
from datasets import load_dataset

dd = load_dataset("minhqng/fma-small")
dd.set_format("torch", columns=["mel", "label"])

sample = dd["train"][0]
mel = sample["mel"]      # (128, 300) float32
label = sample["label"]  # int, 0–7
```
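From there, the segments feed directly into any 2D model. A minimal shape check with an illustrative CNN (the architecture is a placeholder, not part of the dataset; note the mel segments need a channel dimension for Conv2d):

```python
import torch
import torch.nn as nn

# Tiny illustrative CNN: just demonstrates the tensor shapes involved.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 8),  # 8 genre classes
)

batch = torch.randn(4, 1, 128, 300)  # stand-in for a batch of 4 mel segments
logits = model(batch)
print(logits.shape)                  # torch.Size([4, 8])
```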

Audio Processing Pipeline

Each 30-second MP3 track was processed as follows:

```
MP3 → decode (PyAV) → mono → resample 32kHz → segment 3s (50% overlap)
    → MelSpectrogram (n_fft=1024, hop=320, 128 bins, Slaney norm)
    → log(mel + 1e-9) → truncate to 300 frames → (128, 300)
```

Silent/corrupt tracks and segments were removed before dataset creation.

Citation

If you use this dataset, please cite the original FMA paper:

```bibtex
@inproceedings{defferrard2017fma,
  title = {{FMA}: A Dataset for Music Analysis},
  author = {Defferrard, Micha\"el and Benzi, Kirell and Vandergheynst, Pierre and Bresson, Xavier},
  booktitle = {18th International Society for Music Information Retrieval Conference (ISMIR)},
  year = {2017},
}
```