# KubriCount
KubriCount is a large-scale synthetic benchmark for multi-grained visual counting, built for the research project Count Anything at Any Granularity.
The dataset targets open-world counting settings where the intended counting granularity must be explicit. A query may ask for a specific identity, an attribute variant, a category, an instance type, or a broader concept. KubriCount is generated with controllable 3D synthesis, mask-conditioned image editing, and VLM-based filtering, and provides dense instance-level supervision for training and evaluation.
## Highlights
- Five counting granularities: identity, attribute, category, instance type, and concept.
- Controlled distractors for testing prompt following under fine-grained distinctions.
- Dense supervision including counts, center points, 2D boxes, negative categories, and scene-level metadata.
- Large scale: 110,507 released scenes/images, 157 categories, about 7.3M annotated objects, and up to 250 objects per image.
- Generalization splits for seen categories, unseen assets, and unseen categories.
## Splits
| Split | Released scenes | Purpose |
|---|---|---|
| train | 99,639 | Training split with seen categories. |
| testA | 5,462 | Evaluation split with unseen assets from training categories. |
| testB | 5,406 | Evaluation split with unseen categories. |
The released tar shards contain only scenes that passed the automatic quality filter.
## Counting Levels
| Level | Granularity | Description |
|---|---|---|
| L1 | Identity-level | Count all instances of a single object type. |
| L2 | Attribute-level | Count objects distinguished by size or color. |
| L3 | Category-level | Count one category while excluding another category. |
| L4 | Instance-level | Count one instance type within the same category. |
| L5 | Concept-level | Count a category or concept with multiple instance types and plausible distractors. |
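To work with a single granularity, you can filter annotation items by `metadata.level`. A minimal sketch, assuming the merged metadata file is a JSON list of items shaped like the example under Annotation Files below:

```python
import json

# Assumption: merged_train_metadata.json is a JSON list of annotation
# items shaped like the example under "Annotation Files" below.
with open("./KubriCount/merged_train_metadata.json") as f:
    items = json.load(f)

# Keep only attribute-level (L2) queries.
l2_items = [x for x in items if x["metadata"]["level"] == 2]
print(f"{len(l2_items)} attribute-level items out of {len(items)}")
```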
## Dataset Structure
```
.
├── README.md
├── merged_train_metadata.json
├── merged_test_metadata.json
├── metadata/
│   ├── all_pass_scenes.jsonl
│   ├── train_pass_scenes.jsonl
│   ├── testA_pass_scenes.jsonl
│   ├── testB_pass_scenes.jsonl
│   └── shards.jsonl
├── shards/
│   ├── train/
│   │   ├── train-000000.tar
│   │   ├── train-000001.tar
│   │   └── ...
│   ├── testA/
│   │   ├── testA-000000.tar
│   │   └── testA-000001.tar
│   └── testB/
│       ├── testB-000000.tar
│       └── testB-000001.tar
├── train/
│   └── extracted_metadata.json
├── testA/
│   └── extracted_metadata.json
└── testB/
    └── extracted_metadata.json
```
The image folders are stored inside the tar shards. Each tar preserves the `split/level/timestamp/scene` structure:

```
train/level5/20260205_135900/scene_0431/edited_00000.png
train/level5/20260205_135900/scene_0431/metadata.json
train/level5/20260205_135900/scene_0431/rgba_00000.png
train/level5/20260205_135900/scene_0431/segmentation_00000.png
```
The release intentionally does not include `metadata/dataset_stats.json` or per-split `vlm_filter_results.json` files.

## Path Convention
All KubriCount image paths in the released annotation files are relative paths. For example:

```
testA/level1/20260205_132725/scene_0213/edited_00000.png
```

After extracting the tar shards into a local directory, resolve an `image_id` with:

```python
from pathlib import Path

root = Path("./KubriCount_restored")
image_path = root / "testA/level1/20260205_132725/scene_0213/edited_00000.png"
```
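The same convention applies to every `image_id` in the annotation files, so all paths can be resolved and sanity-checked in one pass. A small sketch, assuming `extracted_metadata.json` is a JSON list of annotation items (see the example under Annotation Files below):

```python
import json
from pathlib import Path

root = Path("./KubriCount_restored")

# Assumption: extracted_metadata.json is a JSON list of annotation items.
with open(root / "testA" / "extracted_metadata.json") as f:
    items = json.load(f)

# Report any image_id that does not resolve to a file on disk.
missing = [x["image_id"] for x in items if not (root / x["image_id"]).exists()]
print(f"{len(missing)} of {len(items)} annotated images missing on disk")
```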
## Annotation Files

- `train/extracted_metadata.json`, `testA/extracted_metadata.json`, `testB/extracted_metadata.json`: split-level KubriCount annotations.
- `merged_train_metadata.json`: merged KubriCount training metadata.
- `merged_test_metadata.json`: combined test metadata for `testA` and `testB`.
- `metadata/*_pass_scenes.jsonl`: scene-to-shard manifests.
- `metadata/shards.jsonl`: one record per tar shard.
A typical annotation item is:
```json
{
  "image_id": "train/level1/20260205_132641/scene_0001/edited_00000.png",
  "count": 104,
  "box_examples_coordinates": [
    [[742, 933], [742, 1024], [850, 1024], [850, 933]],
    [[699, 782], [699, 888], [797, 888], [797, 782]]
  ],
  "points": [
    [796.0, 978.5],
    [748.0, 835.0]
  ],
  "H": 1024,
  "W": 1024,
  "category": "shoe",
  "metadata": {
    "level": 1,
    "split": "train",
    "config_file": "/kubric/config_gpt.json"
  },
  "negative_count": 0,
  "negative_category": "",
  "negative_box_examples_coordinates": [],
  "negative_points": []
}
```
Field meanings:
- `image_id`: relative path to the edited image after shard extraction.
- `count`: number of target-category objects.
- `category`: target category or target phrase.
- `box_examples_coordinates`: target-object 2D boxes represented by four corner points (see the conversion sketch below).
- `points`: target-object center points.
- `H`, `W`: image height and width.
- `metadata.level`: counting granularity level.
- `metadata.split`: dataset split.
- `negative_category`: distractor category or phrase, when applicable.
- `negative_count`: number of distractor objects.
- `negative_box_examples_coordinates`: distractor-object 2D boxes.
- `negative_points`: distractor-object center points.
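Each box is given as four corner points rather than as `(x1, y1, x2, y2)`. If a detector-style format is needed, a minimal conversion sketch, assuming each corner is an `(x, y)` pair and boxes are axis-aligned as in the example item above:

```python
def corners_to_xyxy(corners):
    """Convert a 4-corner box to (x1, y1, x2, y2).

    Assumption: each corner is an (x, y) pair and the box is
    axis-aligned, as in the example annotation item above.
    """
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    return min(xs), min(ys), max(xs), max(ys)

box = [[742, 933], [742, 1024], [850, 1024], [850, 933]]
print(corners_to_xyxy(box))  # (742, 933, 850, 1024)
```

Taking the min/max over all four corners keeps the conversion correct regardless of the order in which the corners are listed.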
## Manifest Format

Each line in `metadata/all_pass_scenes.jsonl` describes one released scene and where it is stored:
```json
{
  "split": "testA",
  "scene": "level1/20260205_132725/scene_0001",
  "path_in_dataset": "testA/level1/20260205_132725/scene_0001",
  "shard": "shards/testA/testA-000000.tar",
  "num_files": 4,
  "files": [
    {
      "path": "testA/level1/20260205_132725/scene_0001/edited_00000.png",
      "name": "edited_00000.png",
      "size_bytes": 1562567
    }
  ]
}
```
Important fields:
- `split`: dataset split.
- `scene`: scene path relative to the split folder.
- `path_in_dataset`: scene path after extraction.
- `shard`: tar shard containing this scene.
- `num_files`: number of files in this scene.
- `files`: files stored for this scene.
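Because every released scene has exactly one manifest line, the manifest supports quick sanity checks; for instance, counting scenes per split should reproduce the numbers in the Splits table. A minimal sketch:

```python
import json
from collections import Counter

# Count released scenes per split from the manifest.
counts = Counter()
with open("./KubriCount/metadata/all_pass_scenes.jsonl") as f:
    for line in f:
        counts[json.loads(line)["split"]] += 1

print(counts)  # expected: train=99,639, testA=5,462, testB=5,406
```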
## Download

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="liuchang666/KubriCount",
    repo_type="dataset",
    local_dir="./KubriCount",
)
```
Command line:
```bash
huggingface-cli download liuchang666/KubriCount \
  --repo-type dataset \
  --local-dir ./KubriCount
```
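The full dataset is large, so a single split can also be fetched on its own. A sketch using `snapshot_download`'s standard `allow_patterns` filter; the patterns simply mirror the repository layout shown above:

```python
from huggingface_hub import snapshot_download

# Download only the testA shards plus the annotation and manifest files.
snapshot_download(
    repo_id="liuchang666/KubriCount",
    repo_type="dataset",
    local_dir="./KubriCount",
    allow_patterns=[
        "shards/testA/*",
        "testA/*",
        "metadata/*",
        "*.json",
    ],
)
```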
## Restore the Folder Structure
Use the following script to extract the tar shards and copy the annotation JSON files to a restored directory:
```python
from pathlib import Path
import shutil
import tarfile

repo_dir = Path("./KubriCount")
restore_dir = Path("./KubriCount_restored")
splits = ["train", "testA", "testB"]

restore_dir.mkdir(parents=True, exist_ok=True)

def safe_extract(tar, path):
    # Refuse members whose resolved paths would escape the target directory.
    path = path.resolve()
    for member in tar.getmembers():
        target = (path / member.name).resolve()
        if path not in target.parents and target != path:
            raise RuntimeError(f"Unsafe path in tar: {member.name}")
    tar.extractall(path)

# Extract every shard; member paths already start with split/level/...
for tar_path in sorted((repo_dir / "shards").glob("*/*.tar")):
    print(f"Extracting {tar_path}")
    with tarfile.open(tar_path, "r") as tar:
        safe_extract(tar, restore_dir)

# Copy the merged annotation files.
for p in repo_dir.glob("*.json"):
    shutil.copy2(p, restore_dir / p.name)

# Copy the per-split annotation files.
for split in splits:
    src_split_dir = repo_dir / split
    dst_split_dir = restore_dir / split
    dst_split_dir.mkdir(parents=True, exist_ok=True)
    for p in src_split_dir.glob("*.json"):
        shutil.copy2(p, dst_split_dir / p.name)

print(f"Restored dataset to: {restore_dir}")
```
After extraction:
```
KubriCount_restored/
├── train/
│   ├── extracted_metadata.json
│   └── level1/
├── testA/
│   ├── extracted_metadata.json
│   └── level1/
├── testB/
│   ├── extracted_metadata.json
│   └── level1/
├── merged_train_metadata.json
└── merged_test_metadata.json
```
## Read Images Directly From Tar Shards
```python
from pathlib import Path
import tarfile

repo_dir = Path("./KubriCount")

for tar_path in sorted((repo_dir / "shards").glob("*/*.tar")):
    with tarfile.open(tar_path, "r") as tar:
        for member in tar:
            if member.isfile() and member.name.endswith(".png"):
                data = tar.extractfile(member).read()
                print(member.name, len(data))
    break  # demo: stop after the first shard
```
To find the shard for a specific scene, use `metadata/all_pass_scenes.jsonl`.
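A minimal lookup sketch: build a scene-to-shard index from the manifest, then pull just that scene's files from its shard. This relies on the tar members sharing the `path_in_dataset` prefix, as shown in the shard structure above:

```python
import json
import tarfile
from pathlib import Path

repo_dir = Path("./KubriCount")

# Build a scene -> shard index from the manifest.
scene_to_shard = {}
with open(repo_dir / "metadata" / "all_pass_scenes.jsonl") as f:
    for line in f:
        record = json.loads(line)
        scene_to_shard[record["path_in_dataset"]] = record["shard"]

# Extract only the files belonging to one scene.
scene = "testA/level1/20260205_132725/scene_0001"
with tarfile.open(repo_dir / scene_to_shard[scene], "r") as tar:
    members = [m for m in tar.getmembers() if m.name.startswith(scene + "/")]
    tar.extractall("./one_scene", members=members)
```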
## Related Project
Research project: Count Anything at Any Granularity.
## Contact
For questions, please contact liuchang666@sjtu.edu.cn.