Automated Driving and Collision Warning Demo

This page presents movies demonstrating the performance of NEAT driving and collision warning neural networks, both in the RARS race-car driving simulator and on a real-world robot, with different kinds of input. Click on the image or its title to see each movie.

Evolving Drivers

The first step is to evolve drivers that can get around the track as fast as possible without going off the road or hitting obstacles. Simulated laser rangefinders (the red beams at the bottom left) sense the edges of the road, and simulated radars (at the bottom right) sense obstacles.
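
As a rough illustration of how such drivers could be scored during evolution, here is a minimal sketch of one episode of fitness evaluation. The simulator interface (sim.rangefinders(), sim.step(), and so on), the net.activate() call, and the fitness terms are all assumptions for illustration, not the actual RARS/NEAT code.

    def evaluate_driver(net, sim, max_steps=2000):
        """Drive one episode; higher fitness means more distance covered
        in a fixed time without leaving the road or hitting anything."""
        sim.reset()
        for _ in range(max_steps):
            # Inputs: laser rangefinder readings (road edges) plus
            # radar readings (obstacles), assumed normalized to [0, 1].
            inputs = sim.rangefinders() + sim.radars()
            steering, throttle = net.activate(inputs)
            sim.step(steering, throttle)
            if sim.off_road() or sim.collided():
                break                      # episode ends at a failure
        return sim.distance_travelled()    # farther in fixed time = faster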


1. Open road, laser rangefinder input.
The car drives fast around the track, entering and exiting turns on the outside and carrying a lot of speed.
 
2. With obstacles, laser rangefinder and radar input.
The car slows down slightly to get around the obstacles (parked cars on the road).


Evolving Collision Warning Systems

Collision warning networks were then evolved from driving neural networks whose driving function was disabled; the warning networks learned to predict how likely a crash is in the near future. The warning level is shown at the top left: the vertical bars indicate the prediction at each point in time, with height indicating how imminent a crash is. The movies show warning behavior for rather erratic driving by a human experimenter under different conditions.
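
To make the bar display concrete, the following sketch shows one way a recurrent warning network could produce one level per timestep. The 0-to-1 scale and the net.reset()/net.activate() interface are illustrative assumptions, not the evolved networks themselves.

    def warning_trace(net, sensor_log):
        """Return one warning level in [0, 1] per timestep; a higher
        level means a crash is predicted to be more imminent."""
        net.reset()                    # clear the recurrent state
        levels = []
        for reading in sensor_log:     # one sensor frame per timestep
            (level,) = net.activate(reading)
            levels.append(level)       # bar height at this timestep
        return levels

Because such a network keeps internal state, identical instantaneous readings can yield different warning levels depending on recent history.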


3. Open road, laser rangefinder input.
The networks warn when the car is about to run off the road, including a skid at the end, where the instantaneous sensor readings look normal but integrating them over time tells the network that the car is sliding sideways.
 
4. With obstacles, laser rangefinder and radar input.
In this brief movie, the car crashes into a few parked cars on the road; right before the crash, the warning networks generate strong warnings.


5. Moving obstacles, laser rangefinder and radar input.
In this case, the other cars on the road are moving, and avoiding them requires integrating information over time. The movie shows the driver's perspective. (The rangefinders and radars are not shown.)
 
6. Stationary obstacles, visual input.
The input consists of 20 x 18 gray-scale pixel values only (no rangefinder or radar input), shown at the bottom left in the second half of the movie. Even with such coarse input, the networks learned to warn about running off the road and about collisions (a sketch of such a pixel encoding follows below).


Transfer to the Real World: Collision Warning for a Robotic Vehicle

As a first step toward taking the collision warning networks to the real world, the system was tested on an Applied AI Gaia robot in an office environment, with a centerline drawn on the floor and desks and trashcans as obstacles. A SICK laser rangefinder and a Bumblebee digital camera provided the input. The networks learned to warn about impending crashes with both kinds of input.
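
On a physical robot, the continuous warning level eventually has to trigger a discrete alert. The sketch below shows one common way to do that; the thresholds, hysteresis, and function names are illustrative assumptions, not the logic used on the Gaia robot.

    def alarm_events(levels, on=0.8, off=0.5):
        """Turn a warning-level trace into alarm on/off events.
        Hysteresis: raise the alarm at `on`, clear it only when the
        level falls back below `off`, so it does not flicker."""
        alarming = False
        events = []
        for t, level in enumerate(levels):
            if not alarming and level >= on:
                alarming = True
                events.append((t, "ALARM_ON"))
            elif alarming and level <= off:
                alarming = False
                events.append((t, "ALARM_OFF"))
        return events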


7. Laser rangefinder input.
In several trials, the networks are shown warning about running over the centerline on the left, and about obstacles in the robot's path and to its right. The rangefinder input is shown at the bottom left.
 
8. Visual input.
In the same trials, another set of networks was trained on camera input with 20 x 14 grayscale pixels. (The camera input is not shown.) The networks learned the same warning behavior as with the laser rangefinder input.


