# GNN Constraint-Aware World Model Dataset (v3)
Real robot episodes with per-frame constraint graphs, SAM2 segmentation masks + 256-D feature embeddings, full 3D depth bundles, and synchronized robot states across two manipulation domains. Both domains share the v3 on-disk layout (same JSON/NPZ schemas, same delta-encoded `frame_states`, same fully-connected PyG expansion at load time) but have different component vocabularies and therefore different node-feature dimensions — the PyG loader adapts automatically via `type_vocab`.
- Project: CoRL 2026 — GNN world model for constraint-aware video generation
- Author: Chang Liu (Texas A&M University)
- Hardware: UR5e + Robotiq 2F-85 gripper, OAK-D Pro (static side view)
- Format version: v3.0 (updated 2026-04-16)
## Domains at a glance

| Domain | Graph variants offered | Node vocab size | Node feature dim | Edge feature dim | Data root |
|---|---|---|---|---|---|
| Desktop disassembly | products-only and with-robot | 9 (8 products + robot) | 269 | 3 | `session_<date>_<time>/episode_XX/` |
| Tower of Hanoi | products-only (rings only) | 4 (`ring_1`..`ring_4`) | 264 | 3 | `hanoi/session_hanoi_<date>_<time>/episode_XX/` |
Node feature dim = 256 (SAM2 emb) + 3 (3D pos) + V (type one-hot) + 1 (visibility) where V = len(type_vocab).
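As a quick sanity check, the dimension arithmetic for both domains (vocabulary lists copied from the component tables in this card):

```python
SAM2_DIM, POS_DIM, VIS_DIM = 256, 3, 1

def node_dim(type_vocab):
    """256-D SAM2 embedding + 3-D position + type one-hot + visibility bit."""
    return SAM2_DIM + POS_DIM + len(type_vocab) + VIS_DIM

desktop_vocab = ["cpu_fan", "cpu_bracket", "cpu", "ram_clip", "ram",
                 "connector", "graphic_card", "motherboard", "robot"]
hanoi_vocab = ["ring_1", "ring_2", "ring_3", "ring_4"]

print(node_dim(desktop_vocab), node_dim(hanoi_vocab))  # 269 264
```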
The robot as a graph node is Desktop-only for now. Hanoi has no robot segmentation in v1 — the `side_robot/*.npz` files are zero-filled for format uniformity but never contribute a node. Use `load_pyg_frame_products_only` for Hanoi; Desktop supports both the products-only and with-robot loaders.
## File layout (same for both domains)

```
episode_XX/
├── metadata.json                  # episode metadata (domain-specific extras)
├── robot_states.npy               # (T, 13) float32 — joints + TCP + gripper
├── robot_actions.npy              # (T-1, 13) float32 — frame deltas
├── timestamps.npy                 # (T, 3) float64
├── side/
│   ├── rgb/frame_XXXXXX.png       # 1280×720 RGB
│   └── depth/frame_XXXXXX.npy     # 1280×720 uint16 (mm)
├── wrist/                         # raw wrist camera (not used in v3)
└── annotations/
    ├── side_graph.json            # components, static edges, frame_states
    ├── side_masks/                # {component_id: (H,W) uint8} per frame
    ├── side_embeddings/           # {component_id: (256,) float32} per frame
    ├── side_depth_info/           # flat-keyed depth bundle per frame
    ├── side_robot/                # robot bundle per frame (visible flag)
    └── dataset_card.json          # format description
```
Alignment guarantee: every labeled frame index has files in all four of side_masks/, side_embeddings/, side_depth_info/, side_robot/. Files are keyed by the same integer frame index, so a loader can key off the mask directory and trust the rest to be present.
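That guarantee is easy to audit. A minimal sketch (the function name and return convention are ours, not part of the dataset tooling):

```python
from pathlib import Path

BUNDLE_DIRS = ("side_masks", "side_embeddings", "side_depth_info", "side_robot")

def misaligned_frames(episode_dir: Path) -> list:
    """Frame indices present in side_masks/ but missing from any sibling bundle dir."""
    anno = episode_dir / "annotations"
    bad = []
    for p in sorted((anno / "side_masks").glob("frame_*.npz")):
        # every bundle dir must carry a file with the same frame-indexed name
        if not all((anno / d / p.name).exists() for d in BUNDLE_DIRS[1:]):
            bad.append(int(p.stem.split("_")[1]))
    return bad
```

An empty return confirms a loader can safely key off `side_masks/` and trust the other three directories.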
## Pipeline

**Collection.** 30 Hz synchronous capture of side RGB + depth + robot state into `episode_XX/` — no image processing or graph work happens here. Desktop is human teleop; Hanoi is autonomous: an orchestrator samples one mission per episode (classical / single_ring / rearrange at 40/40/20) and writes `metadata.json` with `goal_prompt`, `initial_state`, `target_state`, and `solver_moves`.

**Auto-labeling.** Separate offline step (Hanoi only in v3). `python scripts/hanoi/auto_label.py <session_dir>` produces the full v3 `annotations/` tree. For each frame: HSV → bbox → SAM2 (Hanoi-FT checkpoint auto-loaded if present) → refined ring mask → 256-D pooled embedding → depth backprojection. Once per episode: detect grasp intervals in the gripper-position trace, then symbolically unroll the constraint state from `initial_state` + `solver_moves` + held intervals — no per-frame ring detection in the image.

**Verification / correction.** `bash scripts/run_annotator.sh --hanoi` (or `--desktop`) opens the browser UI at localhost:8000 over labeled episodes. Per-frame editing with bbox / point / brush / eraser / polygon tools. Save writes back to the same `annotations/side_masks/*.npz`; the format is identical pre- and post-verification.

**SAM2 FT retraining.** `scripts/sam2_finetune/collect_hanoi_samples.py` pulls (RGB, mask, bbox) triples from any set of labeled episodes; `scripts/sam2_finetune/train.py` fine-tunes the SAM2 decoder and writes a domain-specific checkpoint (e.g. `sam2_hanoi_ft.pt`). `auto_label.py` auto-selects the checkpoint on its next run, closing the loop.
## Desktop Disassembly Domain
### Components (9 types)

Eight product types + one robot agent. Multiple instances (e.g. `ram_1`, `ram_2`) share the same one-hot and are disambiguated by SAM2 embedding + 3D position.

| Index | Type | Color | Notes |
|---|---|---|---|
| 0 | cpu_fan | #FF6B6B | Always visible at start |
| 1 | cpu_bracket | #4ECDC4 | Hidden at start (under fan) |
| 2 | cpu | #45B7D1 | Hidden at start |
| 3 | ram_clip | #96CEB4 | Multi-instance |
| 4 | ram | #FFEAA7 | Multi-instance |
| 5 | connector | #DDA0DD | Multi-instance |
| 6 | graphic_card | #FF8C42 | Always visible |
| 7 | motherboard | #8B5CF6 | Always visible (base) |
| 8 | robot | #F5F5F5 | Agent node (stored separately in `side_robot/`) |
### Sparse constraint edges

Directed prerequisite relations — `A -> B` means "A must be removed before B can be removed":

```
cpu_fan      -> cpu_bracket   (fan covers bracket)
cpu_fan      -> motherboard
cpu_bracket  -> cpu
cpu_bracket  -> motherboard
cpu          -> motherboard
ram_N        -> motherboard
ram_clip_N   -> motherboard
ram_clip_N   -> ram_M         (user pairs manually)
connector_N  -> motherboard
graphic_card -> motherboard
```

A typical episode has 10-15 product nodes and 10-14 stored directed edges.
### Node feature layout (269-D)

```
[0   : 256]  SAM2 embedding (256) — masked avg pool over vision_features
[256 : 259]  3D position (3)      — centroid in camera frame (meters)
[259 : 268]  type one-hot (9)     — cpu_fan, cpu_bracket, cpu, ram_clip,
                                    ram, connector, graphic_card,
                                    motherboard, robot
[268]        visibility (1)       — 1 if visible this frame, else 0
```
### Available Desktop episodes

| Session / Episode | Labeled frames | Goal |
|---|---|---|
| session_0408_162129/episode_00 | 346 | cpu_fan |
| session_0410_125013/episode_00 | 473 | cpu_fan |
| session_0410_125013/episode_01 | 525 | graphic_card |
Total: 1344 frames.
## Tower of Hanoi Domain
### Components (4 types) — rings only, no robot node in v1

Hanoi episodes use native ring IDs (`ring_1` .. `ring_4`) in components and as npz keys — no desktop-proxy remapping, and no robot node in v1. `type_vocab` is `["ring_1", "ring_2", "ring_3", "ring_4"]` (length 4). Robot segmentation is deferred; `side_robot/*.npz` is zero-filled per frame for format uniformity but never becomes a graph node.

| ID | Color | Disk size | Role |
|---|---|---|---|
| ring_1 | red (#E63946) | 32 mm | Smallest |
| ring_2 | yellow (#F1C40F) | 42 mm | — |
| ring_3 | green (#2ECC71) | 52 mm | — |
| ring_4 | blue (#2E86DE) | 62 mm | Largest |
Mask .npz files carry the literal keys ring_1, ring_2, ring_3, ring_4. No robot in type_vocab, no robot edges, no robot node appended at load time.
### Mission kinds (40 / 40 / 20 sampling)

| Kind | Weight | Prompt template | Target |
|---|---|---|---|
| classical | 0.40 | "Solve the puzzle: stack all rings on peg X" | All 4 rings stacked in size order on one peg |
| single_ring | 0.40 | "Move the `<color>` ring to peg X" | One designated ring moved; others untouched |
| rearrange | 0.20 | "Rearrange: red on peg A, green on peg B, ..." | Uniformly sampled valid (larger-under-smaller) configuration |
Every Hanoi metadata.json records mission_kind, goal_prompt, initial_state, target_state, and solver_moves (the reference action sequence from the classical-Hanoi solver, one entry per pickup/release pair).
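For intuition, a classical mission's `solver_moves` corresponds to the standard recursive solution. A sketch, where the `(ring, from_peg, to_peg)` tuple encoding is illustrative only — the exact on-disk format of `solver_moves` is not reproduced here:

```python
def hanoi_moves(n, src="peg_A", dst="peg_C", aux="peg_B"):
    """Classical recursive Tower of Hanoi solver for n rings.

    Returns one (ring_id, from_peg, to_peg) tuple per pickup/release pair;
    this tuple shape is our illustration, not the dataset's exact encoding.
    """
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, aux, dst)      # clear the n-1 smaller rings
            + [(f"ring_{n}", src, dst)]            # move the largest free ring
            + hanoi_moves(n - 1, aux, dst, src))   # re-stack the smaller rings

print(len(hanoi_moves(4)))  # 15
```

The 15-move count matches episode_01's classical mission above.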
### Structural edges (static, always 6)

The 6 smaller → larger directed pairs are stored verbatim in `side_graph.json`:

```
ring_1 -> ring_2    ring_1 -> ring_3    ring_1 -> ring_4
ring_2 -> ring_3    ring_2 -> ring_4
ring_3 -> ring_4
```

At PyG load time the loader expands to 4 × 3 = 12 fully-connected directed edges. The reverse (larger → smaller) direction carries the same `has_constraint` / `is_locked` but flipped `src_blocks_dst`.
### Per-frame `is_locked` semantics
is_locked = 1 on edge (A, B) iff A is currently the immediately-stacked ring on top of B on the same peg (adjacent in the peg-stack with A above B). Every other pair — non-adjacent on the same peg, on different pegs, or with either ring in transit — gets is_locked = 0. This is strictly "physical stacking right now," not "A must move before B."
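The semantics can be stated as code. A sketch assuming peg stacks are kept as bottom→top lists (our convention for illustration, not necessarily the labeler's internal representation):

```python
def locked_pairs(pegs):
    """pegs maps peg id -> list of ring ids ordered bottom→top.

    Returns the set of (top_ring, ring_below) pairs whose edge gets
    is_locked = 1 this frame: only immediately-adjacent rings on a peg.
    """
    pairs = set()
    for stack in pegs.values():
        for below, above in zip(stack, stack[1:]):
            pairs.add((above, below))   # adjacency, not transitive stacking
    return pairs

print(locked_pairs({"peg_A": ["ring_4", "ring_3"], "peg_B": ["ring_1"], "peg_C": []}))
# {('ring_3', 'ring_4')}
```

Non-adjacent same-peg pairs, cross-peg pairs, and rings in transit (simply absent from every stack) all fall out with `is_locked = 0`.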
### Held-ring rule (captures "constraint broken during transit")
When the robot holds a ring (gripper closed between grasp and release of that move), the ring is in transit and no longer touches any other ring. The auto-labeler flags held = 1 for that ring on every held frame, and every edge touching it gets is_locked = 0 — the constraint is physically broken mid-move. On release, the new adjacency emerges and that edge flips back to is_locked = 1.
Implementation: `auto_label.py` reads `robot_states.npy[:, 12]` (gripper position, Robotiq 2F-85, 0-255) and detects grasp intervals via baseline-mode thresholding (estimate the "resting open" mode, threshold at baseline + margin, binary-close morphologically to bridge single-frame glitches). It then zips the resulting intervals with `solver_moves` in order — the k-th grasp interval is assigned to the k-th move. Validated on ep_00 (1 move, 1 interval), ep_01 (15 moves, 15 intervals), ep_02 (1 move, 1 interval). Per-frame held deltas are recorded as `frame_states[f].held = {ring_id: True|False}`.
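A rough re-implementation of that detection step. The margin, kernel width, and pure-NumPy closing here are illustrative choices; `auto_label.py`'s actual parameters may differ:

```python
import numpy as np

def grasp_intervals(gripper_pos, margin=20, close_k=3):
    """Detect closed-gripper [start, end) intervals in a 0-255 position trace.

    margin and close_k are illustrative defaults, not auto_label.py's values.
    """
    g = np.asarray(gripper_pos, dtype=int)
    baseline = np.bincount(g).argmax()            # "resting open" mode of the trace
    closed = g > baseline + margin
    # 1-D morphological closing (dilate then erode) bridges single-frame glitches
    k = np.ones(close_k)
    dilated = np.convolve(closed, k, mode="same") > 0
    closed = np.convolve(~dilated, k, mode="same") == 0
    # run-length extract [start, end) runs of the closed mask
    edges = np.diff(closed.astype(int), prepend=0, append=0)
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]
    return [(int(s), int(e)) for s, e in zip(starts, ends)]
```

The returned interval list is what gets zipped, in order, against `solver_moves`.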
### Rule 2 — "larger must never sit on smaller"

Encoded without a new feature via the edge's existing `src_blocks_dst` bit:

| Edge direction | src_blocks_dst | Meaning |
|---|---|---|
| smaller → larger (e.g. `ring_1 -> ring_3`) | 1 | Legal — smaller may rest on larger |
| larger → smaller (e.g. `ring_3 -> ring_1`) | 0 | Illegal — larger may not rest on smaller |
Three dimension-preserving ways the world model can respect Rule 2:

| Method | Where | One-liner | Guarantee |
|---|---|---|---|
| Training loss | objective | `λ * (pred_is_locked * (1 - src_blocks_dst)).sum()` | Soft (shapes distribution) |
| Rollout mask | inference | Reject any predicted is_locked = 1 where src_blocks_dst = 0 | Hard (eliminates illegal) |
| Dataset invariant | this spec | is_locked is never 1 on a larger→smaller edge in any training frame | Hard (on training distribution) |
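The first two rows can be sketched in a few lines of PyTorch (function names are ours):

```python
import torch

def rule2_loss(pred_is_locked, src_blocks_dst, lam=1.0):
    """Soft penalty: predicted locking mass on illegal (larger→smaller) edges."""
    return lam * (pred_is_locked * (1.0 - src_blocks_dst)).sum()

def rule2_mask(pred_is_locked, src_blocks_dst):
    """Hard rollout mask: zero predicted is_locked wherever src_blocks_dst = 0."""
    return pred_is_locked * src_blocks_dst
```

Both operate on per-edge vectors aligned with `edge_attr`, so they add no new feature dimensions.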
### Node feature layout (264-D)

```
[0   : 256]  SAM2 embedding (256)
[256 : 259]  3D position (3)
[259 : 263]  type one-hot (4) — ring_1, ring_2, ring_3, ring_4 (no robot)
[263]        visibility (1)
```
### Mission metadata saved per episode
Every Hanoi side_graph.json carries goal_prompt, mission_kind, and target_state in addition to the fields shared with Desktop. Per-frame transitions (grasps, releases, re-stacks) are recorded as deltas in frame_states[f] with constraints, visibility, and held sub-dicts.
### Hanoi episodes available

| Session / Episode | Frames | mission_kind | goal_prompt | Moves |
|---|---|---|---|---|
| session_hanoi_0415_190808/episode_00 | 494 | single_ring | "Move the red ring to peg B" | 1 |
| session_hanoi_0415_190808/episode_01 | 6719 | classical | "Solve the puzzle: stack all rings on peg C" | 15 |
| session_hanoi_0415_190808/episode_02 | 266 | single_ring | "Move the red ring to peg B" | 1 |
Total: 7479 frames.
## Shared: PyG edge feature semantics (3-D, both domains)

```
edge_attr[k] = [has_constraint, is_locked, src_blocks_dst]
```

| has_constraint | is_locked | src_blocks_dst | Meaning |
|---|---|---|---|
| 0 | 0 | 0 | No physical constraint — message passing only. Used for: robot ↔ anything; Hanoi larger → smaller (non-edge at the pair level) |
| 1 | 1 | 1 | Constraint active, src is the blocker (physical Desktop) / src rests on top (physical Hanoi) |
| 1 | 1 | 0 | Same pair, reverse direction — src is the blocked / src is underneath |
| 1 | 0 | 1 | Constraint released, src was the blocker / legal rest direction with no contact right now |
| 1 | 0 | 0 | Same released pair, reverse direction |
Symmetry invariants: has_constraint and is_locked are symmetric per unordered pair (same value for (i, j) and (j, i)). src_blocks_dst flips between the two directions. Robot ↔ anything edges are always [0, 0, 0].
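These invariants are mechanically checkable on any loaded frame. A sketch (the helper name is ours; it assumes the fully-connected expansion, so the reverse of every edge exists):

```python
import torch

def check_edge_symmetry(edge_index, edge_attr):
    """Assert the v3 invariants: has_constraint/is_locked symmetric per
    unordered pair, src_blocks_dst flipped on constrained pairs."""
    attr = {(int(s), int(d)): edge_attr[k]
            for k, (s, d) in enumerate(edge_index.t())}
    for (s, d), a in attr.items():
        b = attr[(d, s)]                 # fully-connected: reverse always present
        assert a[0] == b[0] and a[1] == b[1], "has_constraint/is_locked must match"
        if a[0] == 1:
            assert a[2] != b[2], "src_blocks_dst must flip on constrained pairs"
```

Running it over every labeled frame is a cheap data-integrity test before training.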
## Shared: PyG loader — self-contained Python

### Prerequisites

```shell
pip install torch numpy torch_geometric pillow
```

### Save as `gnn_world_model_loader.py`

The key design property: `node_dim = 256 + 3 + V + 1` where `V = len(type_vocab)`, so the same loader produces 269-D nodes for Desktop (V=9, includes robot) and 264-D nodes for Hanoi (V=4, rings only) without any domain branching.
```python
import json
from dataclasses import dataclass
from pathlib import Path
from typing import Dict, List, Optional

import numpy as np
import torch
from torch_geometric.data import Data


def list_labeled_frames(episode_dir: Path) -> List[int]:
    """Labeled frame indices, keyed off side_masks/ (the alignment anchor)."""
    mask_dir = episode_dir / "annotations" / "side_masks"
    if not mask_dir.exists():
        return []
    frames = []
    for p in mask_dir.glob("frame_*.npz"):
        try:
            frames.append(int(p.stem.split("_")[1]))
        except (ValueError, IndexError):
            continue
    return sorted(frames)


def resolve_frame_state(graph_json: dict, frame_idx: int):
    """Replay the delta-encoded frame_states up to and including frame_idx."""
    constraints, visibility = {}, {}
    for c in graph_json["components"]:
        visibility[c["id"]] = True
    for e in graph_json["edges"]:
        constraints[f"{e['src']}->{e['dst']}"] = True
    fs_dict = graph_json.get("frame_states", {})
    for f in sorted([int(k) for k in fs_dict]):
        if f > frame_idx:
            break
        fs = fs_dict[str(f)]
        for k, v in fs.get("constraints", {}).items():
            constraints[k] = v
        for k, v in fs.get("visibility", {}).items():
            visibility[k] = v
    return constraints, visibility


def type_one_hot(comp_type, type_vocab):
    return [1.0 if t == comp_type else 0.0 for t in type_vocab]


@dataclass
class FrameData:
    graph: dict
    masks: dict
    embeddings: dict
    depth_info: dict
    robot: Optional[dict]
    constraints: dict
    visibility: dict


def load_frame_data(episode_dir, frame_idx):
    """Raw per-frame bundles plus the resolved constraint/visibility state."""
    anno = episode_dir / "annotations"
    with open(anno / "side_graph.json") as f:
        graph = json.load(f)

    def _npz(p):
        if not p.exists():
            return {}
        d = np.load(p)
        return {k: d[k] for k in d.files}

    masks = _npz(anno / "side_masks" / f"frame_{frame_idx:06d}.npz")
    embeddings = _npz(anno / "side_embeddings" / f"frame_{frame_idx:06d}.npz")
    depth_info = _npz(anno / "side_depth_info" / f"frame_{frame_idx:06d}.npz")
    robot = None
    rp = anno / "side_robot" / f"frame_{frame_idx:06d}.npz"
    if rp.exists():
        r = np.load(rp)
        if r["visible"][0] == 1:  # zero-filled Hanoi bundles never pass this
            robot = {k: r[k] for k in r.files}
    constraints, visibility = resolve_frame_state(graph, frame_idx)
    return FrameData(graph, masks, embeddings, depth_info, robot, constraints, visibility)


def load_pyg_frame_products_only(episode_dir, frame_idx):
    """Product nodes only; works for both domains (node dim adapts to type_vocab)."""
    fd = load_frame_data(episode_dir, frame_idx)
    graph = fd.graph
    type_vocab = graph["type_vocab"]
    V = len(type_vocab)
    node_dim = 256 + 3 + V + 1  # SAM2 emb + 3D pos + type one-hot + visibility
    nodes = graph["components"]
    N = len(nodes)

    x_list = []
    for node in nodes:
        cid = node["id"]
        emb = fd.embeddings.get(cid, np.zeros(256, dtype=np.float32))
        dvk = f"{cid}_depth_valid"
        ck = f"{cid}_centroid"
        if dvk in fd.depth_info and int(fd.depth_info[dvk][0]) == 1:
            pos = fd.depth_info[ck].astype(np.float32)
        else:
            pos = np.zeros(3, dtype=np.float32)
        vis = 1.0 if fd.visibility.get(cid, True) else 0.0
        if vis == 0.0:  # hidden components carry no appearance/position signal
            emb = np.zeros(256, dtype=np.float32)
            pos = np.zeros(3, dtype=np.float32)
        feat = np.concatenate([
            emb.astype(np.float32), pos,
            np.array(type_one_hot(node["type"], type_vocab), dtype=np.float32),
            np.array([vis], dtype=np.float32),
        ])
        x_list.append(feat)
    x = torch.tensor(np.stack(x_list), dtype=torch.float32) if x_list else torch.empty((0, node_dim))

    # Expand sparse stored edges to the fully-connected directed edge set
    constraint_set = {(e["src"], e["dst"]) for e in graph["edges"]}
    pair_forward = {frozenset([s, d]): (s, d) for s, d in constraint_set}
    src_idx, dst_idx, edge_attr = [], [], []
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            src_id, dst_id = nodes[i]["id"], nodes[j]["id"]
            src_idx.append(i); dst_idx.append(j)
            pair_key = frozenset([src_id, dst_id])
            if pair_key in pair_forward:
                fwd = pair_forward[pair_key]
                is_locked = fd.constraints.get(f"{fwd[0]}->{fwd[1]}", True)
                sb = 1.0 if src_id == fwd[0] else 0.0
                edge_attr.append([1.0, 1.0 if is_locked else 0.0, sb])
            else:
                edge_attr.append([0.0, 0.0, 0.0])
    return Data(
        x=x,
        edge_index=torch.tensor([src_idx, dst_idx], dtype=torch.long),
        edge_attr=torch.tensor(edge_attr, dtype=torch.float32),
        y=torch.tensor([frame_idx], dtype=torch.long),
        num_nodes=N,
    )


def load_pyg_frame_with_robot(episode_dir, frame_idx):
    """Desktop only: append the robot agent node with [0, 0, 0] edges."""
    data = load_pyg_frame_products_only(episode_dir, frame_idx)
    fd = load_frame_data(episode_dir, frame_idx)
    if fd.robot is None:  # robot not labeled this frame — fall back to products
        return data
    graph = fd.graph
    type_vocab = graph["type_vocab"]
    products = graph["components"]
    N_prod = len(products); N = N_prod + 1

    robot_emb = fd.robot["embedding"].astype(np.float32)
    robot_pos = (fd.robot["centroid"].astype(np.float32)
                 if int(fd.robot["depth_valid"][0]) == 1
                 else np.zeros(3, dtype=np.float32))
    robot_feat = np.concatenate([
        robot_emb, robot_pos,
        np.array(type_one_hot("robot", type_vocab), dtype=np.float32),
        np.array([1.0], dtype=np.float32),
    ])
    x = torch.cat([data.x, torch.tensor(robot_feat, dtype=torch.float32).unsqueeze(0)], dim=0)

    # Rebuild product-product edges, then add robot <-> product message edges
    constraint_set = {(e["src"], e["dst"]) for e in graph["edges"]}
    pair_forward = {frozenset([s, d]): (s, d) for s, d in constraint_set}
    src_idx, dst_idx, edge_attr = [], [], []
    for i in range(N_prod):
        for j in range(N_prod):
            if i == j:
                continue
            src_id, dst_id = products[i]["id"], products[j]["id"]
            src_idx.append(i); dst_idx.append(j)
            pair_key = frozenset([src_id, dst_id])
            if pair_key in pair_forward:
                fwd = pair_forward[pair_key]
                is_locked = fd.constraints.get(f"{fwd[0]}->{fwd[1]}", True)
                sb = 1.0 if src_id == fwd[0] else 0.0
                edge_attr.append([1.0, 1.0 if is_locked else 0.0, sb])
            else:
                edge_attr.append([0.0, 0.0, 0.0])
    robot_idx = N_prod
    for i in range(N_prod):
        src_idx.append(robot_idx); dst_idx.append(i); edge_attr.append([0.0, 0.0, 0.0])
        src_idx.append(i); dst_idx.append(robot_idx); edge_attr.append([0.0, 0.0, 0.0])

    data = Data(
        x=x,
        edge_index=torch.tensor([src_idx, dst_idx], dtype=torch.long),
        edge_attr=torch.tensor(edge_attr, dtype=torch.float32),
        y=torch.tensor([frame_idx], dtype=torch.long),
        num_nodes=N,
    )
    data.robot_point_cloud = torch.tensor(fd.robot["point_cloud"], dtype=torch.float32)
    data.robot_pixel_coords = torch.tensor(fd.robot["pixel_coords"], dtype=torch.int32)
    data.robot_mask = torch.tensor(fd.robot["mask"], dtype=torch.uint8)
    return data
```
## Usage examples

Desktop — 15 products + 1 robot agent = 16 nodes, 269-D features:

```python
from pathlib import Path
from gnn_world_model_loader import load_pyg_frame_with_robot

episode = Path("session_0408_162129/episode_00")
data = load_pyg_frame_with_robot(episode, frame_idx=42)
print(data)
# → Data(x=[16, 269], edge_index=[2, 240], edge_attr=[240, 3])
```

Hanoi — 4 rings only (no robot node in v1) = 4 nodes, 264-D features. Use `load_pyg_frame_products_only`, not `load_pyg_frame_with_robot`:

```python
from pathlib import Path
from gnn_world_model_loader import load_pyg_frame_products_only

episode = Path("hanoi/session_hanoi_0415_190808/episode_00")
data = load_pyg_frame_products_only(episode, frame_idx=250)
print(data)
# → Data(x=[4, 264], edge_index=[2, 12], edge_attr=[12, 3])
```
## Shared: common v3 file schemas

### `side_graph.json`

```jsonc
{
  "episode_id": "episode_00",
  "goal_component": "ring_1",       // Desktop: a product id; Hanoi: a ring id
  "view": "side",
  "components": [
    {"id": "ring_1", "type": "ring_1", "color": "#FF0000"}
  ],
  "edges": [
    {"src": "ring_1", "dst": "ring_3", "directed": true}
  ],
  "frame_states": {
    "0":   {"constraints": {"ring_1->ring_3": true},  "visibility": {"ring_1": true}, "held": {}},
    "120": {"constraints": {"ring_1->ring_3": false}, "held": {"ring_1": true}}
  },
  "node_positions": {"ring_1": [640, 360]},
  "type_vocab": ["ring_1", "ring_2", "ring_3", "ring_4"],  // Hanoi v1 — no robot
  "embedding_dim": 256,
  "feature_extractor": "sam2.1_hiera_base_plus",

  // Hanoi-only extras:
  "goal_prompt": "Move the red ring to peg B",
  "mission_kind": "single_ring",
  "target_state": {"peg_A": [], "peg_B": ["ring_1"], "peg_C": []}
}
```
### `side_depth_info/frame_XXXXXX.npz` — 7 flat keys per component

| Key | Shape | Dtype | Meaning |
|---|---|---|---|
| `{cid}_point_cloud` | (N, 3) | float32 | 3D points in camera frame (m). (0, 3) if no valid depth |
| `{cid}_pixel_coords` | (N, 2) | int32 | (u, v) of valid depth pixels |
| `{cid}_raw_depths_mm` | (N,) | uint16 | Filtered to [50, 2000] |
| `{cid}_centroid` | (3,) | float32 | Mean of point_cloud; [0,0,0] if invalid |
| `{cid}_bbox_2d` | (4,) | int32 | [x1, y1, x2, y2] from mask |
| `{cid}_area` | (1,) | int32 | Mask pixel count |
| `{cid}_depth_valid` | (1,) | uint8 | 1 if N > 0 else 0 |
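Reading one component's bundle back is a dictionary comprehension over the seven suffixes (the helper name is ours):

```python
import numpy as np

DEPTH_KEYS = ("point_cloud", "pixel_coords", "raw_depths_mm",
              "centroid", "bbox_2d", "area", "depth_valid")

def read_depth_bundle(npz_path, cid):
    """Unpack the 7 flat {cid}_* keys for one component from a depth-info frame."""
    d = np.load(npz_path)
    return {k: d[f"{cid}_{k}"] for k in DEPTH_KEYS}
```

Always gate on `depth_valid` before trusting `centroid` or `point_cloud`, since both are zero-filled when no valid depth pixels exist.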
### `side_robot/frame_XXXXXX.npz` — always 10 keys

| Key | Shape | Dtype | Meaning |
|---|---|---|---|
| visible | (1,) | uint8 | 1 if robot labeled, 0 otherwise |
| mask | (H, W) | uint8 | Binary mask |
| embedding | (256,) | float32 | SAM2 256-D |
| point_cloud | (N, 3) | float32 | 3D points (m) |
| pixel_coords | (N, 2) | int32 | (u, v) |
| raw_depths_mm | (N,) | uint16 | mm |
| centroid | (3,) | float32 | Mean of point cloud |
| bbox_2d | (4,) | int32 | From mask |
| area | (1,) | int32 | Pixel count |
| depth_valid | (1,) | uint8 | 1 if N > 0 else 0 |
## Recording hardware
UR5e + Robotiq 2F-85 gripper; static-mounted Luxonis OAK-D Pro side view with intrinsics fx = 1033.8, fy = 1033.7, cx = 632.9, cy = 359.9; recording at 30 Hz, 1280 × 720 RGB and uint16 depth (mm) filtered to [50, 2000].
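With those intrinsics, backprojecting a depth pixel into camera-frame meters follows the standard pinhole model. A sketch of what the depth bundles store per valid pixel (the pipeline's exact code is not reproduced here):

```python
import numpy as np

# OAK-D Pro side-camera intrinsics from this card
FX, FY, CX, CY = 1033.8, 1033.7, 632.9, 359.9

def backproject(u, v, depth_mm):
    """Pinhole backprojection of pixel (u, v) with depth in mm to meters."""
    z = depth_mm / 1000.0            # uint16 depth is millimeters
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z], dtype=np.float32)
```

A pixel at the principal point maps to the optical axis, i.e. `backproject(632.9, 359.9, 1000.0)` is (0, 0, 1 m).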
## License
Released under CC BY 4.0. Use, share, and adapt freely with attribution.