Source code and implementation of the simulation experiments from the paper *Real-Time Robot Execution with Masked Action Chunking*.
```bash
# Clone the Kinetix submodule
git submodule update --init

# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies
uv sync
```

- First, follow the implementation of RTC to produce the base checkpoints:
```bash
# Train expert policies with RL. Checkpoints, videos, and stats are written to ./logs-expert/<wandb-run-name>
uv run src_lora/train_expert.py

# Generate data using the experts. Data is written back to ./logs-expert/<wandb-run-name>/data/
uv run src_lora/generate_data.py --config.run-path ./logs-expert/<wandb-run-name>

# Train imitation learning policies
uv run src_lora/train_flow_base.py --config.run-path ./logs-expert/<wandb-run-name>

# Optionally, evaluate the imitation learning policies
uv run src_lora/eval_flow_no_lora.py --config.run-path ./logs-bc/<wandb-run-name> --output-dir <output-dir>
```

- Change `<wandb-run-name>` to `base_model` in `logs-bc` (a rename sketch follows after this list). Then finetune the trained policies with:
```bash
bash run_all.sh
```

The above script will train and evaluate the 12 experiments in sequence.
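The rename step above is just a directory move; a minimal sketch, assuming the finetuning scripts look for `./logs-bc/base_model`:

```bash
# Minimal sketch of the rename step: <wandb-run-name> is the run directory
# that train_flow_base.py produced under ./logs-bc
mv ./logs-bc/<wandb-run-name> ./logs-bc/base_model
```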
Note: as there are 12 tasks, the number of GPUs you use should divide the number of tasks evenly (e.g., 1, 2, 3, 4, 6, or 12).
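For example, a quick pre-flight check before launching `run_all.sh` (a sketch assuming `nvidia-smi` is available; the hard-coded 12 is the task count above):

```bash
# Sketch: warn if the visible GPU count does not divide the 12 tasks evenly
NUM_GPUS=$(nvidia-smi -L | wc -l)
if [ $((12 % NUM_GPUS)) -ne 0 ]; then
  echo "Warning: 12 tasks do not split evenly across ${NUM_GPUS} GPUs"
fi
```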
Thanks to these amazing repositories, RTC and openpi, and other inspiring works.
If you find this work useful, please consider citing:
```bibtex
@misc{wang2026realtimerobotexecutionmasked,
  title={Real-Time Robot Execution with Masked Action Chunking},
  author={Haoxuan Wang and Gengyu Zhang and Yan Yan and Yuzhang Shang and Ramana Rao Kompella and Gaowen Liu},
  year={2026},
  eprint={2601.20130},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2601.20130},
}
```