🧙‍♀️Code Structure

The ROAR_PY_RL repository contains two Python packages: roar_py_rl and roar_py_rl_carla.

It also includes a training folder that contains training scripts, environment utilities, and utilities for debugging environment dynamics.

The roar_py_rl package contains abstract base gymnasium.Env classes that can be extended; they provide a basic environment skeleton that a concrete environment completes with only a few lines of environment logic code.
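The skeleton pattern described above can be sketched as follows. This is a hypothetical illustration, not the actual roar_py_rl API: the class and method names (`RoarRLEnvSketch`, `observation`, `get_reward`, `is_terminated`) are invented here, and the real base classes extend gymnasium.Env.

```python
import abc


class RoarRLEnvSketch(abc.ABC):
    """Hypothetical sketch of the base-class pattern: the base class owns
    the generic step() bookkeeping and defers environment logic to a few
    abstract hooks (names invented; not the real roar_py_rl API)."""

    @abc.abstractmethod
    def observation(self) -> dict:
        """Collect the current observation from the simulator."""

    @abc.abstractmethod
    def get_reward(self, obs: dict) -> float:
        """Score the current state."""

    @abc.abstractmethod
    def is_terminated(self, obs: dict) -> bool:
        """Decide whether the episode is over."""

    def step(self, action):
        # Generic plumbing lives in the base class; subclasses only
        # fill in the hooks above.
        self._apply(action)
        obs = self.observation()
        return obs, self.get_reward(obs), self.is_terminated(obs), False, {}

    def _apply(self, action) -> None:
        pass  # subclasses may override to forward the action to a simulator


class ConstantRewardEnv(RoarRLEnvSketch):
    """A concrete environment then needs only a few lines of logic."""

    def observation(self):
        return {"t": 0}

    def get_reward(self, obs):
        return 1.0

    def is_terminated(self, obs):
        return False
```

A subclass like `ConstantRewardEnv` inherits the full `step()` loop for free and only specifies what is environment-specific.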

The roar_py_rl_carla package contains RoarRLCarlaSimEnv, an implementation that uses the Carla simulator as the underlying simulator. You can use it directly.

Note that because the gymnasium interface does not support async methods, ROAR_PY_RL has to await RoarPyActor's receive_observation() and apply_action() inside gymnasium's synchronous step and reset functions. Therefore, every Python program that uses ROAR_PY_RL's environments should import nest_asyncio and call nest_asyncio.apply().
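The sync-to-async bridge can be sketched like this. The classes below (`DoublingActor`, `SyncStepEnv`) are stand-ins invented for illustration, not the real RoarPyActor or environment; the point is only the pattern of driving coroutines to completion inside a synchronous step(). In real code, nest_asyncio.apply() patches asyncio so this nesting also works when an outer event loop is already running (for example, in a Jupyter notebook).

```python
import asyncio


class DoublingActor:
    """Hypothetical stand-in for RoarPyActor: its I/O methods are coroutines."""

    def __init__(self):
        self._last = None

    async def apply_action(self, action):
        self._last = action

    async def receive_observation(self):
        return self._last * 2


class SyncStepEnv:
    """Sketch of the bridge: gymnasium's step()/reset() are synchronous,
    so the async actor calls must be run to completion inside them."""

    def __init__(self):
        self._actor = DoublingActor()
        self._loop = asyncio.new_event_loop()

    def step(self, action):
        # Drive each coroutine synchronously; nest_asyncio.apply() makes
        # this safe even under an already-running outer event loop.
        self._loop.run_until_complete(self._actor.apply_action(action))
        return self._loop.run_until_complete(self._actor.receive_observation())
```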
