---
license: cc-by-nc-4.0
language:
  - en
size_categories:
  - 10K<n<100K
tags:
  - music
  - music recommendation
  - video-to-music retrieval
  - grounding
modality:
  - video
  - audio
  - tabular
  - text
formats:
  - csv
  - zip
configs:
  - config_name: default
    data_files:
      - split: train
        path: dataset/MGSV-EC/train_data.csv
      - split: val
        path: dataset/MGSV-EC/val_data.csv
      - split: test
        path: dataset/MGSV-EC/test_data.csv
    zip_files:
      - path: MGSV_feature.zip
        description: >-
          Pre-extracted features for video and music clips. Use
          hf_hub_download() to retrieve.
---

# Music Grounding by Short Video E-commerce (MGSV-EC) Dataset

πŸ“„ [Paper]

πŸ“¦ [Feature File] (or Baidu drive (P:5cbq) / Google drive)

πŸ”§ [PyTorch Dataloader]

🧬 [Model Code]


πŸ“ Dataset Summary

MGSV-EC is a large-scale dataset for the new task of Music Grounding by Short Video (MGSV), which aims to localize a specific music segment that best serves as the background music (BGM) for a given query short video.
Unlike traditional video-to-music retrieval (V2MR), MGSV requires both identifying the relevant music track and pinpointing a precise moment within it.

The dataset contains 53,194 short e-commerce videos paired with 35,393 music moments, all derived from 4,050 unique music tracks. It supports evaluation in two modes:

- **Single-music Grounding (SmG)**: the relevant music track is known, and the task is to localize the correct segment within it.
- **Music-set Grounding (MsG)**: the model must retrieve the correct music track and localize its corresponding segment.

πŸ“ Evaluation Protocol

| Mode | Sub-task | Metric |
|------|----------|--------|
| Single-music | Grounding (SmG) | mIoU |
| Music-set | Video-to-Music Retrieval (V2MR) | R@k |
| Music-set | Grounding (MsG) | MoR@k |
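
For intuition, mIoU in SmG averages the temporal IoU between each predicted music segment and its ground-truth moment. Below is a minimal sketch of the standard temporal-IoU computation; it is an illustration only, not the official evaluation script (which ships with the model code):

```python
# Minimal sketch of temporal IoU between a predicted and a ground-truth
# music segment; illustration only, not the official metric implementation.
def temporal_iou(pred, gt):
    """pred, gt: (start_sec, end_sec) tuples."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

print(temporal_iou((10.0, 20.0), (12.0, 22.0)))  # 8 / 12 β‰ˆ 0.667
```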

## πŸ“Š Dataset Statistics

| Split | #Music Tracks | Avg. Music Duration (sec) | #Query Videos | Avg. Video Duration (sec) | #Moments |
|-------|---------------|---------------------------|---------------|---------------------------|----------|
| Total | 4,050 | 138.9 Β± 69.6 | 53,194 | 23.9 Β± 10.7 | 35,393 |
| Train | 3,496 | 138.3 Β± 69.4 | 49,194 | 24.0 Β± 10.7 | 31,660 |
| Val | 2,000 | 139.6 Β± 70.0 | 2,000 | 22.8 Β± 10.8 | 2,000 |
| Test | 2,000 | 139.9 Β± 70.1 | 2,000 | 22.6 Β± 10.7 | 2,000 |
  • 🎡 Music type ratio: ~60% songs, ~40% instrumental
  • πŸ“Ή Frame rate: 34 FPS; resolution: 1080Γ—1920

## πŸš€ Quick Setup

To prepare and organize the dataset for local use, run the following code:

```python
import os
import zipfile

import datasets
from huggingface_hub import hf_hub_download

# Load dataset splits (CSV)
dataset_map = datasets.load_dataset("xxayt/MGSV-EC")

# Save splits locally
csv_dir = "dataset/MGSV-EC"  # specify your local directory
os.makedirs(csv_dir, exist_ok=True)
dataset_map["train"].to_csv(os.path.join(csv_dir, "train_data.csv"))
dataset_map["val"].to_csv(os.path.join(csv_dir, "val_data.csv"))
dataset_map["test"].to_csv(os.path.join(csv_dir, "test_data.csv"))

# Inspect a sample: dataset_map["train"][0]

# Download and extract pre-extracted features (ZIP)
zip_path = hf_hub_download(  # the download may take ~13 minutes
    repo_id="xxayt/MGSV-EC",
    filename="MGSV_feature.zip",
    repo_type="dataset",
)
target_dir = "features"  # specify your local directory
os.makedirs(target_dir, exist_ok=True)
with zipfile.ZipFile(zip_path, "r") as zip_ref:
    zip_ref.extractall(target_dir)
```
  • Final Directory Structure
.
β”œβ”€β”€ dataset
β”‚   └── MGSV-EC
β”‚       β”œβ”€β”€ train_data.csv
β”‚       β”œβ”€β”€ val_data.csv
β”‚       └── test_data.csv
β”œβ”€β”€ features
β”‚   └── Kuai_feature
β”‚       β”œβ”€β”€ ast_feature2p5/
β”‚       └── vit_feature1/
└── README.md
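
A quick sanity check that this layout is in place (the paths assume the `csv_dir` and `target_dir` values chosen in the setup snippet above):

```python
import os

# Paths mirror csv_dir="dataset/MGSV-EC" and target_dir="features" above.
expected = [
    "dataset/MGSV-EC/train_data.csv",
    "dataset/MGSV-EC/val_data.csv",
    "dataset/MGSV-EC/test_data.csv",
    "features/Kuai_feature/ast_feature2p5",
    "features/Kuai_feature/vit_feature1",
]
for path in expected:
    print(("OK     " if os.path.exists(path) else "MISSING"), path)
```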

πŸ“ Data Format

Each row in the loaded CSV file represents a query video paired with a music track and a localized music moment. The meaning of each column is as follows:

| Column Name | Description |
|-------------|-------------|
| `video_id` | Unique identifier for the short query video. |
| `music_id` | Unique identifier for the associated music track. |
| `video_start` | Start time (sec) of the video segment within the full video. |
| `video_end` | End time (sec) of the video segment within the full video. |
| `music_start` | Start time (sec) of the music segment within the full track. |
| `music_end` | End time (sec) of the music segment within the full track. |
| `music_total_duration` | Total duration (sec) of the music track. |
| `video_segment_duration` | Duration (sec) of the video segment. |
| `music_segment_duration` | Duration (sec) of the music segment. |
| `music_path` | Relative path to the music track file. |
| `video_total_duration` | Total duration (sec) of the video. |
| `video_width` | Width of the video frame (pixels). |
| `video_height` | Height of the video frame (pixels). |
| `video_total_frames` | Total number of frames in the video. |
| `video_frame_rate` | Frame rate of the video (FPS). |
| `video_category` | Category label of the video content (e.g., "Beauty", "Food"). |
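
To get a feel for the columns, the CSVs load directly with pandas. The check below assumes the duration columns are redundant with the start/end columns, which is how they read but is not an explicit guarantee:

```python
import pandas as pd

df = pd.read_csv("dataset/MGSV-EC/train_data.csv")
print(df[["video_id", "music_id", "music_start", "music_end"]].head())

# Assumed invariant: music_segment_duration == music_end - music_start
# (up to floating-point tolerance).
diff = (df["music_end"] - df["music_start"]) - df["music_segment_duration"]
print("max |end - start - duration|:", diff.abs().max())
```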

## 🧩 Feature Directory Structure

For each video-music pair, we provide pre-extracted visual and audio features for efficient training, available via Baidu drive (P:5cbq), Google drive, or `MGSV_feature.zip` in this repository. The features are organized as follows:

```text
[Your data feature path]
β”œβ”€β”€ ast_feature2p5
β”‚   β”œβ”€β”€ ast_feature/      # Audio segment features extracted by AST (Audio Spectrogram Transformer)
β”‚   └── ast_mask/         # Segment-level masks indicating valid audio positions
└── vit_feature1
    β”œβ”€β”€ vit_feature/      # Frame-level visual features extracted by CLIP-ViT (ViT-B/32)
    └── vit_mask/         # Frame-level masks indicating valid visual positions
```

Each `.pt` file corresponds to a single sample and includes:

- `frame_feats`: shape `[B, max_v_frames, 512]`
- `frame_masks`: shape `[B, max_v_frames]`, where 1 indicates a valid frame and 0 indicates padding (used for padding control during batching)
- `segment_feats`: shape `[B, max_snippet_num, 768]`
- `segment_masks`: shape `[B, max_snippet_num]`, indicating valid audio segments

## πŸ”§ Demo Code for Sample Construction

```python
import os
import torch
import pandas as pd

def get_cw_propotion(gt_spans, max_m_duration):
    """
    Calculate the center and width proportions based on gt_spans and the maximum music duration.

    Parameters:
        gt_spans: torch.Tensor of shape [1, 2], representing the start and end times of a music segment.
        max_m_duration: float, the maximum duration of the music.

    Returns:
        torch.Tensor of shape [1, 2], where the first column is the center proportion and the second is the width proportion.
    """
    # Clamp the end time to the maximum music duration
    gt_spans[:, 1] = torch.clamp(gt_spans[:, 1], max=max_m_duration)
    center_propotion = (gt_spans[:, 0] + gt_spans[:, 1]) / 2.0 / max_m_duration
    width_propotion = (gt_spans[:, 1] - gt_spans[:, 0]) / max_m_duration
    return torch.stack([center_propotion, width_propotion], dim=-1)

def get_data(data_csv_path, max_m_duration=240, frame_frozen_feature_path=None, music_frozen_feature_path=None):
    """
    Load CSV data and extract sample information.

    Parameters:
        data_csv_path: str, path to the CSV file.
        max_m_duration: float, maximum duration of the music.
        frame_frozen_feature_path: str, root directory for video features.
        music_frozen_feature_path: str, root directory for music features.

    Returns:
        List of dictionaries, each containing:
            - data_map: dict with loaded video and music features.
            - meta_map: dict with metadata information.
            - spans_target: torch.Tensor with target span proportions.
    """
    csv_data = pd.read_csv(data_csv_path)
    data_samples = []

    for idx in range(len(csv_data)):
        video_id = csv_data.loc[idx, 'video_id']
        music_id = csv_data.loc[idx, 'music_id']
        m_duration = float(csv_data.loc[idx, 'music_total_duration'])
        video_start_time = csv_data.loc[idx, 'video_start']
        video_end_time = csv_data.loc[idx, 'video_end']
        music_start_time = csv_data.loc[idx, 'music_start']
        music_end_time = csv_data.loc[idx, 'music_end']

        # Construct gt_windows and convert to torch.Tensor
        gt_windows = torch.tensor([[music_start_time, music_end_time]], dtype=torch.float)

        # Construct meta_map information
        meta_map = {
            "video_id": str(video_id),
            "music_id": str(music_id),
            "v_duration": torch.tensor(video_end_time - video_start_time, dtype=torch.float),
            "m_duration": torch.tensor(m_duration, dtype=torch.float),
            "gt_moment": gt_windows,
        }

        # Compute target span proportions, ensuring the original gt_windows remains unchanged
        spans_target = get_cw_propotion(gt_windows.clone(), max_m_duration)

        # Load video features
        video_feature_path = os.path.join(frame_frozen_feature_path, 'vit_feature', f'{video_id}.pt')
        video_mask_path = os.path.join(frame_frozen_feature_path, 'vit_mask', f'{video_id}.pt')
        frame_feats = torch.load(video_feature_path, map_location='cpu')
        frame_mask = torch.load(video_mask_path, map_location='cpu')
        # Apply mask to zero out invalid regions
        frame_feats = frame_feats.masked_fill(frame_mask.unsqueeze(-1) == 0, 0)

        # Load music features
        music_feature_path = os.path.join(music_frozen_feature_path, 'ast_feature', f'{music_id}.pt')
        music_mask_path = os.path.join(music_frozen_feature_path, 'ast_mask', f'{music_id}.pt')
        segment_feats = torch.load(music_feature_path, map_location='cpu')
        segment_mask = torch.load(music_mask_path, map_location='cpu')
        segment_feats = segment_feats.masked_fill(segment_mask.unsqueeze(-1) == 0, 0)

        # Construct data_map information
        data_map = {
            "frame_feats": frame_feats,
            "frame_mask": frame_mask,
            "segment_feats": segment_feats,
            "segment_mask": segment_mask,
        }

        data_samples.append({
            "data_map": data_map,
            "meta_map": meta_map,
            "spans_target": spans_target
        })

    return data_samples
```

**Note:**

- These pre-extracted features are compatible with our released PyTorch dataloader; see `dataloader_MGSV_EC_feature.py` for details.
- Feature file paths are not stored in the CSV. Instead, specify the base directories via the following arguments (an illustrative invocation follows this list):
  - `frame_frozen_feature_path`: `[Your data feature path]/vit_feature1`
  - `music_frozen_feature_path`: `[Your data feature path]/ast_feature2p5`

## πŸ“– Citation

If you find this work useful, please cite the following paper:

```bibtex
@inproceedings{xin2025mgsv,
  title={Music Grounding by Short Video},
  author={Xin, Zijie and Wang, Minquan and Liu, Jingyu and Chen, Quan and Ma, Ye and Jiang, Peng and Li, Xirong},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2025}
}
```

## πŸ“œ License

This dataset is released under the CC BY-NC 4.0 license and is intended for non-commercial academic research and educational purposes only.
For commercial licensing or any use beyond research, please contact the authors.

## πŸ“₯ Raw Videos / Music Tracks Access

The raw video and music files are not publicly available due to copyright and privacy constraints.
Researchers interested in obtaining the full media content can contact Kuaishou Technology at wangminquan@kuaishou.com.

## πŸ“¬ Contact for Issues

For any questions about this project (e.g., corrupted files or loading errors), please reach out at xinzijie@ruc.edu.cn.