CostNav provides three environment versions with increasing complexity. This document explains the differences and use cases for each version.
| Feature | v0 | v1 | v2 |
|---|---|---|---|
| Task | CartPole | Custom Map Navigation | Full Navigation with RL |
| Robot | CartPole | CartPole | COCO Delivery Robot |
| Map | None | Custom USD Map | Sidewalk USD Map |
| Observations | Joint states | Joint states | Goal + Velocity + RGB-D |
| Actions | Cart force | Cart force | Velocity + Steering |
| Sensors | None | None | Contact + RGB-D Camera |
| Complexity | Low | Medium | High |
| Use Case | Testing | Development | Production |
v0 is the classic CartPole task: balance a pole on a moving cart.
Scene:
```python
scene = CostnavIsaaclabSceneCfg(
    num_envs=4096,
    env_spacing=4.0,
)
```
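`CostnavIsaaclabSceneCfg` is only instantiated above; in the manager-based workflow it is an `InteractiveSceneCfg` subclass. The following is a minimal sketch of what it plausibly contains, based on Isaac Lab's stock CartPole scene rather than the actual CostNav source:

```python
import omni.isaac.lab.sim as sim_utils
from omni.isaac.lab.assets import ArticulationCfg, AssetBaseCfg
from omni.isaac.lab.scene import InteractiveSceneCfg
from omni.isaac.lab.utils import configclass
from omni.isaac.lab_assets.cartpole import CARTPOLE_CFG  # stock CartPole asset


@configclass
class CostnavIsaaclabSceneCfg(InteractiveSceneCfg):
    """Sketch of a CartPole scene: ground plane, robot, and a light (assumed layout)."""

    # ground plane shared by all environments
    ground = AssetBaseCfg(
        prim_path="/World/ground",
        spawn=sim_utils.GroundPlaneCfg(),
    )
    # one CartPole articulation per environment
    robot: ArticulationCfg = CARTPOLE_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
    # simple dome light
    dome_light = AssetBaseCfg(
        prim_path="/World/DomeLight",
        spawn=sim_utils.DomeLightCfg(intensity=3000.0),
    )
```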
Robot: CartPole articulation
Observations: joint states (cart and pole positions and velocities)
Actions: force applied to the cart (1D)
Rewards:
- `alive`: +1.0 for staying upright
- `terminating`: -2.0 for falling
- `pole_pos`: penalty for pole angle deviation
- `cart_vel`: penalty for cart velocity
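These terms follow Isaac Lab's manager-based reward pattern. Below is a minimal sketch of such a `RewardsCfg`; the `alive` and `terminating` weights come from the list above, while the remaining weights, the joint names, and the mdp functions are taken from Isaac Lab's stock CartPole example rather than the CostNav source:

```python
import omni.isaac.lab.envs.mdp as mdp
from omni.isaac.lab.managers import RewardTermCfg as RewTerm
from omni.isaac.lab.managers import SceneEntityCfg
from omni.isaac.lab.utils import configclass


@configclass
class RewardsCfg:
    # constant reward for every step the pole stays up
    alive = RewTerm(func=mdp.is_alive, weight=1.0)
    # penalty applied on the step the episode terminates
    terminating = RewTerm(func=mdp.is_terminated, weight=-2.0)
    # keep the pole joint close to the upright (zero) position
    pole_pos = RewTerm(
        func=mdp.joint_pos_target_l2,
        weight=-1.0,
        params={"asset_cfg": SceneEntityCfg("robot", joint_names=["cart_to_pole"]), "target": 0.0},
    )
    # discourage large cart velocities
    cart_vel = RewTerm(
        func=mdp.joint_vel_l1,
        weight=-0.01,
        params={"asset_cfg": SceneEntityCfg("robot", joint_names=["slider_to_cart"])},
    )
```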
Terminations:

```bash
# Train
python scripts/rl_games/train.py --task=Template-Costnav-Isaaclab-v0

# Evaluate
python scripts/rl_games/play.py --task=Template-Costnav-Isaaclab-v0
```
v1 is CartPole navigation on a custom map (still using the CartPole robot for simplicity).
Scene:
```python
scene = CostnavIsaaclabSceneCfg(
    num_envs=64,
    env_spacing=0.0,  # No spacing (using custom map)
)

custom_map = AssetBaseCfg(
    prim_path="/World/custom_map",
    spawn=sim_utils.UsdFileCfg(
        usd_path="omniverse://10.50.2.21/Users/worv/map/Street_sidewalk.usd"
    ),
)
```
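The map asset streams from an Omniverse Nucleus server. If that server is unreachable, the same `UsdFileCfg` can point at a local copy of the USD file; the path below is purely illustrative:

```python
custom_map = AssetBaseCfg(
    prim_path="/World/custom_map",
    spawn=sim_utils.UsdFileCfg(
        # hypothetical local copy of the sidewalk map
        usd_path="/data/maps/Street_sidewalk.usd",
    ),
)
```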
Robot: CartPole (same as v0)
Observations: Same as v0
Actions: Same as v0
Rewards: Same as v0
Terminations: Same as v0
```bash
# Train
python scripts/rl_games/train.py --task=Template-Costnav-Isaaclab-v1-CustomMap

# Evaluate
python scripts/rl_games/play.py --task=Template-Costnav-Isaaclab-v1-CustomMap
```
v2 navigates the COCO delivery robot to goal positions on the sidewalk map while avoiding obstacles.
Scene:
```python
scene = CostnavIsaaclabSceneCfg(
    num_envs=64,
    env_spacing=0.0,
)

# Custom sidewalk map
custom_map = AssetBaseCfg(
    prim_path="/World/custom_map",
    spawn=sim_utils.UsdFileCfg(
        usd_path="omniverse://10.50.2.21/Users/worv/map/Street_sidewalk.usd"
    ),
)

# COCO delivery robot
robot = COCO_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")

# Contact sensors
contact_forces = ContactSensorCfg(
    prim_path="{ENV_REGEX_NS}/Robot/.*",
    history_length=3,
    track_air_time=True,
)

# RGB-D camera
tiled_camera = TiledCameraCfg(
    prim_path="{ENV_REGEX_NS}/Robot/base_link/front_cam",
    width=80,
    height=80,
    data_types=["rgb", "distance_to_camera"],
)
```
Observations: goal position command, robot velocity, and RGB-D camera images
Actions (2D): linear velocity and steering commands
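In the manager-based workflow these observations are declared as observation terms. Below is a minimal sketch; the group layout, the `goal_pose` command name, and the camera term are assumptions rather than the actual CostNav code (`mdp.image` is available in recent Isaac Lab releases), and the 2D velocity-plus-steering action is typically a project-specific action term rather than a stock one:

```python
import omni.isaac.lab.envs.mdp as mdp
from omni.isaac.lab.managers import ObservationGroupCfg as ObsGroup
from omni.isaac.lab.managers import ObservationTermCfg as ObsTerm
from omni.isaac.lab.managers import SceneEntityCfg
from omni.isaac.lab.utils import configclass


@configclass
class ObservationsCfg:
    @configclass
    class PolicyCfg(ObsGroup):
        # relative goal pose produced by the command manager (command name is assumed)
        goal_command = ObsTerm(func=mdp.generated_commands, params={"command_name": "goal_pose"})
        # robot base velocities
        base_lin_vel = ObsTerm(func=mdp.base_lin_vel)
        base_ang_vel = ObsTerm(func=mdp.base_ang_vel)

    @configclass
    class CameraCfg(ObsGroup):
        # RGB images from the tiled camera (requires --enable_cameras);
        # a second term with data_type="distance_to_camera" would add depth
        rgb = ObsTerm(
            func=mdp.image,
            params={"sensor_cfg": SceneEntityCfg("tiled_camera"), "data_type": "rgb"},
        )

    policy: PolicyCfg = PolicyCfg()
    camera: CameraCfg = CameraCfg()
```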
Rewards:
- `arrived_reward`: +20,000 (reaching goal)
- `collision_penalty`: -200 (hitting obstacles)
- `position_command_error_tanh`: +1.0 (proximity to goal)
- `heading_command_error_abs`: -0.5 (facing goal)
- `distance_to_goal_progress`: +100.0 (making progress)
- `moving_towards_goal_reward`: +1.0 (velocity towards goal)

Terminations:
- `arrive`: within 0.5 m of goal (success)
- `collision`: contact force > 1.0 N (failure)
- `time_out`: episode length limit (timeout)
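The `time_out` and contact-force terminations map onto stock Isaac Lab termination terms; the arrival check is CostNav-specific and shown only as a hypothetical placeholder. A minimal sketch, with the sensor and body names as assumptions:

```python
import omni.isaac.lab.envs.mdp as mdp
from omni.isaac.lab.managers import SceneEntityCfg
from omni.isaac.lab.managers import TerminationTermCfg as DoneTerm
from omni.isaac.lab.utils import configclass


@configclass
class TerminationsCfg:
    # success: within 0.5 m of the goal (CostNav-specific check, placeholder only)
    # arrive = DoneTerm(func=costnav_mdp.arrived_at_goal, params={"threshold": 0.5})

    # failure: any tracked body reports a contact force above 1.0 N
    collision = DoneTerm(
        func=mdp.illegal_contact,
        params={"sensor_cfg": SceneEntityCfg("contact_forces", body_names=".*"), "threshold": 1.0},
    )
    # timeout: episode length limit reached
    time_out = DoneTerm(func=mdp.time_out, time_out=True)
```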
Commands:

Goal positions are sampled from `safe_positions_auto_generated.py`.
Tracks business metrics:
Supports both vector-only and vision-based policies:
```bash
# Train with cameras (vision-based policy)
python scripts/rl_games/train.py \
    --task=Template-Costnav-Isaaclab-v2-NavRL \
    --enable_cameras \
    --headless

# Train without cameras (vector-only policy)
python scripts/rl_games/train.py \
    --task=Template-Costnav-Isaaclab-v2-NavRL \
    --headless

# Evaluate
python scripts/rl_games/evaluate.py \
    --task=Template-Costnav-Isaaclab-v2-NavRL \
    --enable_cameras

# Visualize
python scripts/rl_games/play.py \
    --task=Template-Costnav-Isaaclab-v2-NavRL \
    --enable_cameras
```
Baseline RL-Games Policy:
Target Performance:
To create your own version:
```bash
cp -r costnav_isaaclab/source/costnav_isaaclab/costnav_isaaclab/tasks/manager_based/costnav_isaaclab_v2_NavRL \
      costnav_isaaclab/source/costnav_isaaclab/costnav_isaaclab/tasks/manager_based/my_custom_version
```
Modify `costnav_isaaclab_env_cfg.py` in the copied directory as needed, then register the new task:

```python
# In __init__.py
import gymnasium as gym

gym.register(
    id="My-Custom-Version",
    entry_point="omni.isaac.lab.envs:ManagerBasedRLEnv",
    kwargs={
        "env_cfg_entry_point": f"{__name__}.my_custom_version:MyCustomEnvCfg",
    },
)
```
```bash
python scripts/rl_games/train.py --task=My-Custom-Version
```