|
|
|
|
|
--- |
|
|
pretty_name: "LOOPerSet" |
|
|
license: "cc-by-4.0" |
|
|
tags: |
|
|
- compilers |
|
|
- code-optimization |
|
|
- polyhedral-model |
|
|
- performance-prediction |
|
|
task_categories: |
|
|
- other |
|
|
size_categories: |
|
|
- 10M<n<100M |
|
|
configs: |
|
|
- config_name: pact25_split |
|
|
data_files: |
|
|
- split: train |
|
|
path: "data/pact25_train.jsonl.gz" |
|
|
- split: validation |
|
|
path: "data/pact25_validation.jsonl.gz" |
|
|
|
|
|
- config_name: full |
|
|
data_files: |
|
|
- split: train |
|
|
path: "data/looperset_full_28m.jsonl.gz" |
|
|
--- |
|
|
|
|
|
# LOOPerSet: A Large-Scale Dataset for Data-Driven Polyhedral Optimization |
|
|
|
|
|
|
|
|
|
|
|
<div align="center"> |
|
|
|
|
|
[Dataset paper: arXiv:2510.10209](https://arxiv.org/abs/2510.10209)

[LOOPer paper: arXiv:2403.11522](https://arxiv.org/abs/2403.11522)

[License: CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
|
|
|
|
|
</div> |
|
|
|
|
|
|
|
|
|
|
|
## Dataset Description |
|
|
|
|
|
`LOOPerSet` is a large-scale public dataset for machine learning-based compiler optimization. It provides labeled performance data for training and evaluating models that predict the effects of code transformations. |
|
|
|
|
|
The dataset contains over **28 million labeled data points** derived from approximately **220,000 unique, synthetically generated loop nests**. Each data point consists of a program, a specific sequence of applied loop transformations (e.g., fusion, tiling, skewing, parallelization), and its resulting ground-truth performance measurement. |
|
|
|
|
|
Transformation sequences were generated using a polyhedral compilation framework to ensure they are legal and semantics-preserving. `LOOPerSet` was originally created to train the cost model of the [LOOPer autoscheduler](https://arxiv.org/abs/2403.11522) (PACT '25). For a full description of the generation process and a diversity analysis, please see our [companion paper on arXiv](https://arxiv.org/abs/2510.10209).
|
|
|
|
|
|
|
|
### Supported Tasks |
|
|
|
|
|
|
|
|
The dataset can be used for several research applications in machine learning and compilers: |
|
|
|
|
|
* **Performance Prediction**: The dataset's primary use case. Train a model to map a program's features and a candidate optimization schedule to a predicted performance value (e.g., execution time or speedup). This forms the core of a learned cost model for guiding compiler optimization. |
|
|
* **Schedule Ranking**: A learning-to-rank task where a model learns to order a set of candidate schedules for a given program based on their relative performance. |
|
|
* **Compiler Heuristic Discovery**: A data analysis task to discover new optimization heuristics by finding correlations between program features and the effectiveness of transformation sequences. |
|
|
* **Program Representation Learning**: Develop and evaluate novel methods for featurizing programs, computer code, and transformation schedules, such as learning dense vector embeddings. |
|
|
* **Transfer Learning for Hardware Portability**: A general-purpose cost model can be pre-trained on `LOOPerSet` and then fine-tuned on a much smaller, target-specific dataset, significantly reducing the data collection cost for new architectures. |
|
|
|
|
|
### Dataset Configurations |
|
|
|
|
|
The dataset is provided in two configurations: |
|
|
|
|
|
* **`full`**: The complete ~28 million point dataset (composed of ~220k programs), available as a single `train` split. |
|
|
* **`pact25_split`**: A 10-million-point subset of the full dataset, used to train the LOOPer cost model and pre-split into `train` (90%) and `validation` (10%) sets for reproducibility.
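
If you prefer the `datasets` library, the configuration names above can in principle be passed straight to `load_dataset`. The snippet below is a minimal sketch: whether the generic JSON builder can infer a schema for these deeply nested, per-program records is not guaranteed, so the manual download-and-stream approach described under "How to Use" remains the most robust path.

```python
from datasets import load_dataset

# Streaming avoids materializing the full multi-GB split on disk or in memory.
# If schema inference fails on these heterogeneous nested records, fall back
# to the hf_hub_download + gzip streaming approach shown below.
ds = load_dataset("Mascinissa/LOOPerSet", "pact25_split", split="train", streaming=True)
first = next(iter(ds))
print(first["program_name"])
```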
|
|
|
|
|
## How to Use |
|
|
|
|
|
The dataset files are stored in `.jsonl.gz` format (gzipped JSON Lines), where each line is a complete JSON object representing one program. |
|
|
|
|
|
Below, we provide a simple way to download the files and stream the data in Python.
|
|
|
|
|
### Installation |
|
|
|
|
|
|
|
|
You will need the `huggingface-hub` library to download the files from the repository. |
|
|
|
|
|
|
|
|
```bash |
|
|
pip install huggingface-hub |
|
|
``` |
|
|
|
|
|
|
|
|
### Step 1: Download the Data Files |
|
|
|
|
|
The dataset is available in two configurations, with the following approximate file sizes: |
|
|
|
|
|
| File | Compressed Size | Decompressed Size |
| --------------------------------- | --------------- | ----------------- |
| `data/looperset_full_28m.jsonl.gz` | ~3.7 GB | ~34 GB |
| `data/pact25_train.jsonl.gz` | ~1.2 GB | ~22 GB |
| `data/pact25_validation.jsonl.gz` | ~146 MB | ~5.3 GB |
|
|
|
|
|
First, use the `hf_hub_download` function to fetch the dataset files you need. |
|
|
|
|
|
```python |
|
|
from huggingface_hub import hf_hub_download |
|
|
import os |
|
|
|
|
|
REPO_ID = "Mascinissa/LOOPerSet" |
|
|
|
|
|
# --- Option 1: Download the full 28M dataset --- |
|
|
full_dataset_path = hf_hub_download( |
|
|
repo_id=REPO_ID, |
|
|
filename="data/looperset_full.jsonl.gz", |
|
|
repo_type="dataset", |
|
|
) |
|
|
print(f"Full dataset downloaded to: {full_dataset_path}") |
|
|
|
|
|
|
|
|
# --- Option 2: Download the PACT '25 splits --- |
|
|
pact25_train_path = hf_hub_download( |
|
|
repo_id=REPO_ID, |
|
|
filename="data/pact25/looperset_pact25_train.jsonl.gz", |
|
|
repo_type="dataset", |
|
|
) |
|
|
pact25_validation_path = hf_hub_download( |
|
|
repo_id=REPO_ID, |
|
|
filename="data/pact25/looperset_pact25_validation.jsonl.gz", |
|
|
repo_type="dataset", |
|
|
) |
|
|
print(f"PACT'25 train split downloaded to: {pact25_train_path}") |
|
|
print(f"PACT'25 validation split downloaded to: {pact25_validation_path}") |
|
|
``` |
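
Alternatively, `huggingface_hub.snapshot_download` can fetch all data files in a single call. This sketch assumes the same repository ID and simply filters by file extension:

```python
from huggingface_hub import snapshot_download

# Mirror the dataset repository's .jsonl.gz files into the local HF cache.
# Drop allow_patterns to download the entire repository.
local_dir = snapshot_download(
    repo_id="Mascinissa/LOOPerSet",
    repo_type="dataset",
    allow_patterns=["*.jsonl.gz"],
)
print(f"Repository snapshot stored at: {local_dir}")
```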
|
|
|
|
|
### Step 2: Stream and Parse the Data |
|
|
|
|
|
Due to the large size of the dataset, we recommend streaming the data using a generator function. |
|
|
|
|
|
The following function reads a `.jsonl.gz` file line-by-line. |
|
|
|
|
|
```python |
|
|
import gzip |
|
|
import json |
|
|
|
|
|
def stream_jsonl_gz(file_path): |
|
|
""" |
|
|
Generator function to stream and parse a .jsonl.gz file. |
|
|
Yields one JSON object (as a Python dict) at a time. |
|
|
""" |
|
|
with gzip.open(file_path, 'rt', encoding='utf-8') as f: |
|
|
for line in f: |
|
|
yield json.loads(line) |
|
|
|
|
|
# --- Example: Iterate through the pact25_split training set --- |
|
|
# (Assuming you have run the download code from Step 1) |
|
|
data_stream = stream_jsonl_gz(pact25_train_path) |
|
|
|
|
|
print("First 3 programs from the stream:") |
|
|
for i, program in enumerate(data_stream): |
|
|
if i >= 3: |
|
|
break |
|
|
print(f"\n--- Program {i+1}: {program['program_name']} ---") |
|
|
print(f" Initial time: {program['initial_execution_time']:.4f} ms") |
|
|
print(f" Number of schedules: {len(program['schedules_list'])}") |
|
|
``` |
|
|
|
|
|
|
|
|
### Example 1: Generating Training Examples |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Each record in `LOOPerSet` represents a single **program**. This program contains a list of all **schedules** (optimization sequences) that were evaluated for it. To create training examples, one must iterate through each program and then through its `schedules_list`. |
|
|
|
|
|
|
|
|
Here is how you can use the streamer to create `(program, schedule, performance)` tuples. |
|
|
|
|
|
|
|
|
```python |
|
|
import numpy as np |
|
|
|
|
|
# (pact25_train_path is defined in the download step) |
|
|
data_stream = stream_jsonl_gz(pact25_train_path) |
|
|
|
|
|
training_examples = [] |
|
|
|
|
|
for processed_count, program in enumerate(data_stream): |
|
|
# iterate over the first 100 programs only |
|
|
if processed_count >= 100: |
|
|
break |
|
|
|
|
|
program_features = program['program_annotation'] |
|
|
    initial_time = program['initial_execution_time']

    # Skip programs whose baseline measurement is missing
    if initial_time is None:
        continue
|
|
|
|
|
for schedule in program['schedules_list']: |
|
|
schedule_features = schedule # Or a subset of its fields |
|
|
|
|
|
        # Skip schedules with no recorded execution times
        if not schedule.get('execution_times'):
            continue

        # The label is the median of the recorded execution times;
        # here we compute the speedup over the un-optimized version
        median_time = np.median(schedule['execution_times'])
|
|
|
|
|
speedup = initial_time / median_time |
|
|
|
|
|
training_examples.append({ |
|
|
"program_features": program_features, |
|
|
"schedule_features": schedule_features, |
|
|
"speedup": speedup |
|
|
}) |
|
|
|
|
|
print(f"Created {len(training_examples)} tuples from {processed_count} programs.") |
|
|
``` |
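
The same per-program records also support the schedule-ranking task listed under Supported Tasks. The sketch below shows one possible construction of pairwise comparisons; the pairing strategy and label convention are illustrative choices, not part of the dataset (it reuses `stream_jsonl_gz` and `pact25_train_path` from the steps above).

```python
import itertools
import numpy as np

def make_ranking_pairs(program, max_pairs=50):
    """Build (schedule_a, schedule_b, label) pairs where label is 1 if
    schedule_a is faster than schedule_b. One possible construction."""
    timed = [
        (s, np.median(s['execution_times']))
        for s in program['schedules_list']
        if s.get('execution_times')
    ]
    pairs = []
    for (sa, ta), (sb, tb) in itertools.islice(itertools.combinations(timed, 2), max_pairs):
        if ta == tb:
            continue
        pairs.append({
            "schedule_a": sa['sched_str'],
            "schedule_b": sb['sched_str'],
            "label": int(ta < tb),
        })
    return pairs

# Example usage on the first program of the stream
first_program = next(stream_jsonl_gz(pact25_train_path))
print(f"{len(make_ranking_pairs(first_program))} ranking pairs from {first_program['program_name']}")
```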
|
|
|
|
|
|
|
|
### Example 2: Finding the Best Schedule per Program |
|
|
The following example shows how to find the best speedup achieved for each program: |
|
|
|
|
|
|
|
|
```python |
|
|
import numpy as np |
|
|
|
|
|
# (pact25_train_path is defined in the download step) |
|
|
data_stream = stream_jsonl_gz(pact25_train_path) |
|
|
|
|
|
# Iterate through a few programs and find the best schedule for each
num_programs_to_process = 5

for processed_count, program in enumerate(data_stream):
    if processed_count >= num_programs_to_process:
|
|
break |
|
|
|
|
|
program_name = program['program_name'] |
|
|
initial_time = program['initial_execution_time'] |
|
|
|
|
|
# Handle cases where the initial run might have failed |
|
|
if initial_time is None: |
|
|
print(f"\nProgram: {program_name} has no initial time. Skipping.") |
|
|
continue |
|
|
|
|
|
best_schedule_info = None |
|
|
min_time = initial_time |
|
|
|
|
|
for schedule in program['schedules_list']: |
|
|
# Ensure execution times are valid before calculating median |
|
|
if not schedule.get('execution_times'): |
|
|
continue |
|
|
|
|
|
current_time = np.median(schedule['execution_times']) |
|
|
|
|
|
if current_time < min_time: |
|
|
min_time = current_time |
|
|
best_schedule_info = schedule['sched_str'] |
|
|
|
|
|
speedup = initial_time / min_time if min_time > 0 else float('inf') |
|
|
|
|
|
print(f"\nProgram: {program_name}") |
|
|
print(f" - Initial Time: {initial_time:.4f} ms") |
|
|
if best_schedule_info: |
|
|
print(f" - Best Found Time: {min_time:.4f} ms (Speedup: {speedup:.2f}x)") |
|
|
print(f" - Best Schedule: {best_schedule_info}") |
|
|
else: |
|
|
print(" - No better schedule found in the dataset.") |
|
|
|
|
|
``` |
|
|
|
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
|
|
|
|
|
|
Each row in the dataset represents a single synthetic program and contains all optimization schedules explored for it. |
|
|
|
|
|
|
|
|
|
|
|
<details> |
|
|
|
|
|
<summary><b>Click to see a sample JSONL entry</b></summary> |
|
|
|
|
|
|
|
|
|
|
|
```json |
|
|
{ |
|
|
"program_name": "function12345", |
|
|
"program_annotation": { |
|
|
"memory_size": 4.19, |
|
|
"iterators": { "...": "..." }, |
|
|
"computations": { "...": "..." }, |
|
|
"buffers": { "...": "..." } |
|
|
}, |
|
|
"initial_execution_time": 1393.751, |
|
|
"schedules_list": [ |
|
|
{ |
|
|
"execution_times": [451.234, 465.112, 458.543, "..."], |
|
|
"sched_str": "F({CO,C1},1)T2({CO},L2,L3,32,32)...", |
|
|
"fusions": [["comp00", "comp01", 1]], |
|
|
"tree_structure": { "..." }, |
|
|
"comp00": { |
|
|
"tiling": {"tiling_depth": 2, "tiling_dims": ["i0", "i1"], "tiling_factors": [32, 32]}, |
|
|
"unrolling_factor": null, |
|
|
"parallelized_dim": null, |
|
|
"transformations_list": [ [1, 0, 1, 0, ...] ] |
|
|
}, |
|
|
"comp01": { |
|
|
"...": "..." |
|
|
} |
|
|
}, |
|
|
{ "...": "..." } |
|
|
] |
|
|
} |
|
|
|
|
|
``` |
|
|
</details> |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Top-Level Fields |
|
|
|
|
|
* `program_name` (string): A unique identifier for the synthetic program (e.g., "function684979"). |
|
|
* `program_annotation` (dict): A detailed, structured representation of the original, untransformed program. This serves as the primary source for program feature engineering. |
|
|
* `initial_execution_time` (float): The median execution time (in ms) of the program before any optimizations. |
|
|
* `schedules_list` (list of dicts): A list of all optimization sequences explored for this program. Each dictionary in the list details a unique schedule and its performance. |
|
|
|
|
|
--- |
|
|
|
|
|
### The `program_annotation` Dictionary |
|
|
|
|
|
This object contains all the static information about the source program. |
|
|
|
|
|
* `memory_size` (float): The total memory footprint of all buffers in megabytes. |
|
|
* `iterators` (dict): Contains the full loop nest hierarchy of the program. Each key is an iterator name (e.g., `i0`), and the value contains its `lower_bound`, `upper_bound`, `parent_iterator`, and `child_iterators`. |
|
|
* `computations` (dict): Contains all computational statements. Each key is a computation name (e.g., `comp00`), and the value contains its properties, including: |
|
|
* `iterators`: The list of loops this computation is nested in. |
|
|
* `write_access_relation`: A string representing the write access pattern. |
|
|
* `accesses`: A list of all read memory accesses. |
|
|
* `expression_representation`: A tree-based representation of the arithmetic expression. |
|
|
* `buffers` (dict): Contains metadata for all data arrays (buffers) used in the program, including their dimensions, data types, and whether they are inputs or outputs. |
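
As a concrete illustration, the sketch below derives a few scalar features from a `program_annotation` dictionary using only the fields listed above; it assumes top-level iterators have an empty or null `parent_iterator`.

```python
def basic_program_features(annotation):
    """Derive a few simple scalar features from a program_annotation dict.
    A minimal sketch; real featurization (as in LOOPer) is far richer."""
    iterators = annotation['iterators']

    def depth(it_name):
        # Walk up the parent chain to see how deeply an iterator is nested.
        d = 1
        parent = iterators[it_name].get('parent_iterator')
        while parent:
            d += 1
            parent = iterators[parent].get('parent_iterator')
        return d

    return {
        "memory_size_mb": annotation['memory_size'],
        "num_iterators": len(iterators),
        "num_computations": len(annotation['computations']),
        "num_buffers": len(annotation['buffers']),
        "max_loop_depth": max(depth(name) for name in iterators),
    }
```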
|
|
|
|
|
--- |
|
|
|
|
|
### The `schedules_list` Entries |
|
|
|
|
|
Each element in this list represents one complete optimization schedule applied to the program. |
|
|
|
|
|
* `execution_times` (list of float): A list of up to 30 raw execution time measurements (in ms) for this specific schedule. The ground-truth label for ML models is typically derived from this list (e.g., by taking the median).
|
|
* `sched_str` (string): A human-readable summary string of the transformations applied in this schedule (e.g., `I(L0,L1)P(L0)U(L3,8)`). |
|
|
* `fusions` (list): A list detailing any loop fusion transformations. Each entry is a list of `[comp_1, comp_2, fusion_level]`. |
|
|
* `tree_structure` (dict): Represents the program's loop nest structure *after* fusion has been applied. |
|
|
* **Computation-specific transformations** (dict): For each computation in the program (e.g., `comp00`, `comp01`), there is a key holding a dictionary of the transformations applied to it: |
|
|
* `tiling` (dict): Details on tiling, including `tiling_depth`, `tiling_dims`, and `tiling_factors`. |
|
|
* `unrolling_factor` (int): The factor used for loop unrolling (if applied). |
|
|
* `parallelized_dim` (string): The name of the loop that was parallelized (if applied). |
|
|
* `transformations_list` (list): Each element in the list is a vector representing one affine transformation (interchange, reversal, or skewing). The order of vectors defines the order of application. |
|
|
|
|
|
|
|
|
<details> |
|
|
<summary><b>`transformations_list` format</b></summary> |
|
|
Each element in the list is a fixed-length (16-element) integer vector representing one affine transformation. The order of vectors in the list determines the order of application. |
|
|
|
|
|
The first element of the vector (`vector[0]`) is a **`type`** tag that specifies the transformation: |
|
|
* `1`: Loop Interchange |
|
|
* `2`: Loop Reversal |
|
|
* `3`: Loop Skewing |
|
|
|
|
|
The meaning of the subsequent elements depends on the `type` tag: |
|
|
|
|
|
* **If `type` is 1 (Interchange):** |
|
|
* `vector[1]` and `vector[2]` specify the two loop levels (as integer indices) to be interchanged. Other elements are unused. |
|
|
|
|
|
* **If `type` is 2 (Reversal):** |
|
|
* `vector[3]` specifies the loop level (as an integer index) to be reversed. Other elements are unused. |
|
|
|
|
|
* **If `type` is 3 (Skewing):** |
|
|
* `vector[4]`, `vector[5]`, and `vector[6]` specify the three loop levels (as integer indices) involved in the skewing transformation. |
|
|
* `vector[7]` through `vector[15]` specify the nine integer parameters of the 3x3 skewing submatrix. |
|
|
</details> |
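
As an illustration of this encoding, the following sketch decodes one transformation vector into a readable description, based solely on the vector layout documented above:

```python
def describe_transformation(vec):
    """Decode one 16-element transformation vector into a readable string."""
    tag = vec[0]
    if tag == 1:  # loop interchange
        return f"Interchange(L{vec[1]}, L{vec[2]})"
    if tag == 2:  # loop reversal
        return f"Reversal(L{vec[3]})"
    if tag == 3:  # loop skewing
        loops = (vec[4], vec[5], vec[6])
        matrix = [vec[7:10], vec[10:13], vec[13:16]]  # 3x3 skewing submatrix
        return f"Skewing(loops=L{loops[0]},L{loops[1]},L{loops[2]}, matrix={matrix})"
    return f"Unknown(type={tag})"

# Example: the interchange vector shown in the sample entry above
print(describe_transformation([1, 0, 1] + [0] * 13))  # -> Interchange(L0, L1)
```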
|
|
|
|
|
|
|
|
## Dataset Creation |
|
|
|
|
|
### Generation Pipeline |
|
|
|
|
|
The data was generated using a three-stage pipeline: |
|
|
1. **Synthetic Program Generation**: A randomized generator created a diverse corpus of polyhedral programs with varied loop structures, memory access patterns, and computational complexities. |
|
|
2. **Transformation Space Sampling**: We used the beam search algorithm from the LOOPer autoscheduler to explore and sample meaningful optimization sequences for each program. This "relevance-guided" strategy ensures the dataset focuses on transformations a real-world compiler would consider. |
|
|
3. **Performance Label Generation**: Each `(program, schedule)` pair was compiled with Tiramisu and executed on a dual-socket **Intel Xeon E5-2695 v2** system. Each version was run up to 30 times to collect a stable distribution of execution times.
|
|
|
|
|
### Diversity Analysis |
|
|
|
|
|
|
|
|
A quantitative diversity analysis was performed to validate the dataset's quality. Using normalized Tree Edit Distance (nTED) to measure structural similarity between programs, the analysis showed that: |
|
|
1. `LOOPerSet` does not contain any accidental replications of PolyBench benchmarks. |
|
|
2. The dataset covers a broader and more varied structural space than existing benchmark suites. |
|
|
|
|
|
Full details are available in our [companion paper](https://arxiv.org/abs/2510.10209).
|
|
|
|
|
## Citation Information |
|
|
|
|
|
If you use this dataset, please cite the following paper: |
|
|
|
|
|
```bibtex |
|
|
@misc{merouani2025looperset, |
|
|
title={LOOPerSet: A Large-Scale Dataset for Data-Driven Polyhedral Compiler Optimization}, |
|
|
author={Massinissa Merouani and Afif Boudaoud and Riyadh Baghdadi}, |
|
|
year={2025}, |
|
|
eprint={2510.10209}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.PL}, |
|
|
url={https://arxiv.org/abs/2510.10209}, |
|
|
} |
|
|
``` |
|
|
|
|
|
If you are building upon or comparing against the `LOOPer` cost model, please cite our PACT '25 paper:
|
|
|
|
|
```bibtex |
|
|
@misc{merouani24looper, |
|
|
title={LOOPer: A Learned Automatic Code Optimizer For Polyhedral Compilers}, |
|
|
author={Massinissa Merouani and Khaled Afif Boudaoud and Iheb Nassim Aouadj and Nassim Tchoulak and Islem Kara Bernou and Hamza Benyamina and Fatima Benbouzid-Si Tayeb and Karima Benatchba and Hugh Leather and Riyadh Baghdadi}, |
|
|
year={2025}, |
|
|
eprint={2403.11522}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.PL}, |
|
|
url={https://arxiv.org/abs/2403.11522}, |
|
|
} |
|
|
``` |
|
|
|
|
|
|
|
|
|
|
|
## License |
|
|
|
|
|
This dataset is licensed under the [Creative Commons Attribution 4.0 International (CC-BY 4.0) License](https://creativecommons.org/licenses/by/4.0/). |
|
|
|
|
|
|