We build an open-source simulator that automates the creation of sensor irradiance and sensor images of typical automotive scenes in urban settings. The methods specify scene parameters (e.g., scene type, road type, traffic density, weather conditions) to assemble random scenes from graphics assets stored in a database. The sensor irradiance is generated using quantitative computer graphics methods, and the images are created using image systems simulation of the sensor. The sensor images are then used to train and evaluate neural networks.
The simulation toolbox is shared on GitHub in two main repositories (ISETCam and ISET3d). Together, the code in these repositories creates the scene spectral radiance, transforms the radiance through the camera optics into the irradiance incident at the sensor surface, and then converts the irradiance into sensor values. The software comprises several distinct components to achieve these goals (Figure 1).
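As a minimal sketch of this pipeline, the MATLAB fragment below chains the core ISETCam computations from scene radiance to a rendered sensor image; the default scene and sensor used here are simple stand-ins for the automotive scenes and camera models described below.

```matlab
% Minimal ISETCam pipeline sketch (assumes ISETCam is on the MATLAB path).
scene  = sceneCreate;                 % default test scene (spectral radiance)
oi     = oiCreate;                    % optical image structure (optics model)
oi     = oiCompute(oi, scene);        % radiance -> irradiance at the sensor
sensor = sensorCreate;                % default CMOS sensor model
sensor = sensorCompute(sensor, oi);   % irradiance -> pixel voltages
ip     = ipCreate;                    % image processing (demosaic, etc.)
ip     = ipCompute(ip, sensor);       % sensor values -> rendered RGB image
```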
Asset generation. Driving scenes are generated from a collection of computer graphics assets (meshes of cars, buses, people, trees, buildings, bicyclists, traffic lights, roads). We have a growing collection of such assets, obtained by converting open-source assets from various sources (e.g., Blender, Maya, Adobe, commercial vendors). The asset parts (e.g., doors, tires, windshields) were not consistently labeled, so we edited the labels and manually scaled all sizes to meters. The file formats were converted into the format used by the rendering engine, Physically Based Ray Tracing (PBRT, https://pbrt.org) (Pharr et al., 2016).
Asset management. Creating a very large number of scenes requires sampling and positioning these graphics assets using statistical procedures. We manage the assets and their metadata (e.g., high-level verbal descriptions) by storing them in a searchable database (MongoDB). The specific database service (https://flywheel.io) provides a software development kit that enables users to query the database from a number of programming languages and to download collections of assets.
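The query-and-download pattern looks roughly like the sketch below; flywheelSearch and flywheelDownload are hypothetical placeholder names for SDK calls, and the metadata fields are illustrative, not the actual schema.

```matlab
% Hypothetical sketch of a metadata query; flywheelSearch and
% flywheelDownload are placeholder names, not the documented SDK API.
assets = flywheelSearch('class:car AND style:sedan');   % query by metadata
for ii = 1:numel(assets)                                % fetch each match
    flywheelDownload(assets(ii), fullfile('assets', assets(ii).name));
end
```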
Scene assembly. Users assemble scenes either by naming specific assets or by setting statistical parameters of the assets and using random selection methods. The software places the mobile assets (cars, people, buses, trucks, bicyclists) into a realistic spatial organization on the streets using the Simulation of Urban MObility (SUMO) software (Krajzewicz et al., 2012). We developed additional software, Simulation of Urban Static Objects (SUSO), to place buildings, trees, street lights, and other static objects into the scene.
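A minimal sketch of the traffic placement step, assuming the sumo binary is installed on the system path (the configuration and output file names are illustrative):

```matlab
% Run SUMO from MATLAB to simulate traffic and export vehicle positions
% (file names are illustrative).
status = system('sumo -c city.sumocfg --fcd-output trajectories.xml');
% Each timestep in the floating-car-data output lists every vehicle's x, y
% position and heading; these values determine where the mobile asset
% meshes are placed when the PBRT scene is assembled.
```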
Scene rendering. Once the assets are assembled and placed, we calculate the spectral irradiance at the sensor surface using PBRT. This software simulates the scene spectral radiance and transforms the radiance into sensor irradiance by implementing a camera lens model. The lens models permit arbitrary specifications of multi-element lenses comprising spherical and biconic surfaces with arbitrary wavelength-dependent indices of refraction.
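The rendering step can be sketched with ISET3d's recipe functions (piRead, piWrite, piRender); the scene file name is illustrative, and the exact recipe fields for the lens specification may differ across versions.

```matlab
% Sketch of rendering through a lens model with ISET3d.
thisR = piRead('drivingScene.pbrt');   % parse the assembled PBRT scene
% The recipe carries the camera description, including a multi-element
% lens file with surface curvatures and wavelength-dependent indices.
piWrite(thisR);                        % write out the scene for rendering
oi = piRender(thisR);                  % invoke PBRT; returns the irradiance
```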
To simplify system installation and use, we created a Docker image based on the modified PBRT code. This lightweight container is invoked by the ISET3d software, and Docker containers are supported on a wide range of operating systems.
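For example, the container can be invoked through a system call; the image name below is an assumption standing in for the published image of the modified PBRT, and the file paths are illustrative.

```matlab
% Sketch: render a scene inside the PBRT Docker container (the image name
% and file paths are illustrative assumptions).
cmd = ['docker run --rm -v ', pwd, ':/data vistalab/pbrt-v3-spectral ', ...
       'pbrt --outfile /data/render.dat /data/scene.pbrt'];
status = system(cmd);
```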
Sensor modeling. The sensor spectral irradiance data are converted into pixel voltages and digital outputs using the simulation methods in ISETCam (https://github.com/ISET/isetcam). This simulation package has been described and validated elsewhere (Farrell et al., 2012; Farrell et al., 2015).
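The sketch below illustrates the kind of parameter control ISETCam exposes during this conversion; the exposure time and array size are arbitrary illustrative values, not a calibrated automotive sensor.

```matlab
% Illustrative sensor configuration (parameter values are arbitrary).
sensor = sensorCreate;                          % default CMOS model
sensor = sensorSet(sensor, 'exp time', 0.016);  % 16 ms exposure
sensor = sensorSet(sensor, 'size', [600 800]);  % rows, cols
sensor = sensorCompute(sensor, oi);             % oi from the rendering step
volts  = sensorGet(sensor, 'volts');            % pixel voltage image
```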