Visualization

TyGrit delegates all visualization to MomaViz, a standalone package with no dependencies on TyGrit.

Installation

MomaViz is installed automatically by scripts/setup.sh, or manually:

pixi run install-momaviz
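
To verify the install, the CLI should respond to the conventional help flag (an assumption; adjust if MomaViz names its entry point differently):

momaviz --help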

Workflow

npy + json data ──▶ H5 episode ──▶ rendered frames ──▶ video
            (convert) (blender / maniskill)      (video)

1. Convert recorded data to H5

momaviz convert --data-dir ./data/replanning_data -o episode.h5

Optionally include an external camera pose:

momaviz convert --data-dir ./data/replanning_data -o episode.h5 --camera-pose camera_pose.json
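
convert handles one recording per run; batch conversion composes with an ordinary shell loop (the directory naming below is illustrative, not prescribed by MomaViz):

mkdir -p episodes
for d in ./data/replanning_data_*; do
    momaviz convert --data-dir "$d" -o "episodes/$(basename "$d").h5"
done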

2. Inspect the H5 file

momaviz info episode.h5
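
For a schema-agnostic look at the raw layout, the standard HDF5 command-line tools also work:

h5ls -r episode.h5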

3. Render stage snapshots (Blender)

Publication-quality renders with the robot mesh, belief-map point cloud, target object, and FOV cone:

momaviz blender \
    --input episode.h5 \
    --output-dir ./rendered_stages \
    --mesh-dir ./resources/fetch_ext/meshes \
    --width 1920 --height 1080 \
    --samples 64

Requires Blender on PATH.
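
Confirm the binary is visible before starting a long render:

blender --version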

4. Render trajectory replay (ManiSkill)

Frame-by-frame replay with external + first-person cameras:

momaviz maniskill \
    --input episode.h5 \
    --output-dir ./rendered_frames \
    --rt \
    --width 1920 --height 1080

Outputs external/, first_person_rgb/, and first_person_depth/ subdirectories.
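
For a quick test pass, render at a lower resolution and drop --rt (assuming --rt toggles the slower ray-traced renderer):

momaviz maniskill \
    --input episode.h5 \
    --output-dir ./preview_frames \
    --width 640 --height 360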

5. Compose video

momaviz video --frame-dir ./rendered_frames/external -o trajectory.mp4 --fps 20
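
If you need finer control over encoding, ffmpeg can compose the same video directly (the .png extension is an assumption; check the actual frame filenames):

ffmpeg -framerate 20 -pattern_type glob -i './rendered_frames/external/*.png' \
    -c:v libx264 -pix_fmt yuv420p trajectory.mp4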

Python API

from momaviz import read_episode, write_episode

episode = read_episode("episode.h5")
print(episode.metadata.object_id)
print(episode.trajectory.config.shape)

for stage in episode.stages:
    print(stage.name, stage.belief_map.shape if stage.belief_map is not None else "—")
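
write_episode is the inverse of read_episode; a minimal round-trip sketch, assuming it takes the episode followed by the output path:

from momaviz import read_episode, write_episode

# Read, optionally modify in memory, and write back out.
episode = read_episode("episode.h5")
write_episode(episode, "episode_copy.h5")  # argument order is an assumption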

H5 schema

See the MomaViz README for the full H5 schema specification.