Oxford University’s Mobile Robotics Group (MRG) has developed an autonomous navigation system for cars at a build cost of below RM25,000.
Automated driving technology already exists on several different levels – from the assisted driving systems found in some upmarket cars to full-blown robots that can drive themselves. These types of vehicles can navigate everything from city streets to speedways, but fully autonomous cars have the drawback of being heavily modified vehicles with hefty price tags.
Led by Paul Newman and Ingmar Posner, the 22-member MRG team aims to develop an autonomous driving system that is more affordable and can be used on standard production cars. To achieve this, the system had to be largely self-contained, with no need for beacons or other infrastructure, while also needing to use standard components and have a degree of artificial intelligence. The car chosen for the tests was a modified Nissan LEAF, altered to make it fly-by-wire so the car’s computers could control everything down to the turn indicators.
The technology is based on “autonomous perception”, where the car learns about the route and constantly monitors the immediate area in order to make driving decisions. A pair of stereo cameras is installed in the car and there are two scanning lasers under the front and rear bumpers. It doesn’t use GPS because satellite navigation isn’t always available, isn’t accurate enough and doesn’t provide any feedback about what’s going on around the robot car.
These sensors feed data to the three computers at the heart of the autonomous driving system. One is an iPad, which acts as the user interface: it offers to drive when the car knows the route, guides the driver through setting up autonomous mode, and warns of obstacles and other situations requiring human intervention. The iPad is monitored by the LLC (Low Level Controller), and the brunt of the work is done by the MVC (Main Vehicle Computer) installed in the boot.
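The division of labour between the three computers can be pictured as a layered pipeline. The sketch below is purely illustrative; the class names and the "offer to drive" strings are assumptions, not MRG's actual software, and only the layering (iPad interface, supervising LLC, heavy-lifting MVC) comes from the article:

```python
class IPadInterface:
    """User-facing layer: offers to drive on known routes (hypothetical API)."""

    def offer_to_drive(self, route_known: bool) -> str:
        return "Offer autonomous mode" if route_known else "Drive manually"


class LowLevelController:
    """LLC: supervises the iPad interface and relays its state onward."""

    def __init__(self, ui: IPadInterface):
        self.ui = ui

    def check(self, route_known: bool) -> str:
        return self.ui.offer_to_drive(route_known)


class MainVehicleComputer:
    """MVC in the boot: does the brunt of the work and owns the other layers."""

    def __init__(self):
        self.llc = LowLevelController(IPadInterface())

    def cycle(self, route_known: bool) -> str:
        # One decision cycle: consult the supervised UI layer.
        return self.llc.check(route_known)
```

In this layering, only the MVC talks to the vehicle, which keeps the consumer-grade iPad out of the safety-critical path.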
Together, these sensors and computers are used to build up a three-dimensional map of the route augmented by “semantic information,” such as the location and type of road markings, traffic signs, traffic lights and lane information, as well as aerial images. Since such things can change, the system can also access the Internet for updates. Only when the system has enough data and has been trained enough will it offer to drive the car.
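A route map of this kind pairs 3-D geometry with a semantic layer, plus some notion of "trained enough" before the system offers to drive. A minimal sketch of such a container, assuming a simple pass-count threshold (the article does not specify how training sufficiency is measured):

```python
from dataclasses import dataclass, field


@dataclass
class SemanticAnnotation:
    """One piece of semantic information attached to the 3-D map."""
    kind: str        # e.g. "traffic_light", "road_marking", "lane_info"
    position: tuple  # (x, y, z) in the route's local frame


@dataclass
class RouteMap:
    """Hypothetical container for a learned route: 3-D points plus semantics."""
    points: list = field(default_factory=list)        # 3-D map points
    annotations: list = field(default_factory=list)   # SemanticAnnotation items
    training_passes: int = 0

    MIN_PASSES = 3  # assumed threshold; the article only says "trained enough"

    def ready_to_offer_driving(self) -> bool:
        # Offer to drive only once there is map data and sufficient training.
        return bool(self.points) and self.training_passes >= self.MIN_PASSES
```

Keeping annotations separate from geometry also makes the described Internet updates cheap: signs and markings can change without rebuilding the 3-D map.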
The system also uses probability and machine learning to build and calibrate mathematical models that monitor the road for cars, pedestrians and obstacles, scanning 85 degrees ahead 13 times a second to a distance of 50 meters. It identifies what objects are, where they are and where they are going, slows and stops the car if it encounters an obstacle, and continues when the obstacle moves. If need be, the driver can take back control by tapping the brake; in this respect the system works like a very sophisticated cruise control.
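The per-scan behaviour described above (stop for obstacles in the 85-degree, 50 m window; resume when clear; hand back control on a brake tap) can be sketched as a simple decision rule. The constants come from the article; the stop-or-continue logic is an assumed simplification of what is surely a far more nuanced controller:

```python
from dataclasses import dataclass

SCAN_HZ = 13       # scans per second, as reported
SCAN_ARC_DEG = 85  # forward arc covered by the scanners
SCAN_RANGE_M = 50  # reported detection range in meters


@dataclass
class DetectedObject:
    bearing_deg: float  # angle from straight ahead, signed
    distance_m: float
    moving: bool


def in_scan_window(obj: DetectedObject) -> bool:
    """True if the object falls inside the 85-degree, 50 m scan window."""
    return abs(obj.bearing_deg) <= SCAN_ARC_DEG / 2 and obj.distance_m <= SCAN_RANGE_M


def decide_speed(objects, current_speed, brake_tapped):
    """Return (target_speed, autonomous) for one 1/13 s scan cycle.

    Hypothetical decision rule: stop for any obstacle in the window,
    resume when the window clears, hand control back on a brake tap.
    """
    if brake_tapped:
        return 0.0, False  # driver takes back control
    if any(in_scan_window(o) for o in objects):
        return 0.0, True   # obstacle ahead: slow to a stop, stay autonomous
    return current_speed, True  # path clear: continue
```

Because the rule is re-evaluated at every scan, "continuing when the obstacle moves" needs no special case: once nothing is in the window, the clear-path branch applies.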
The team sees an immediate possibility of implementing the system on production cars, and of eventually reducing its cost to around RM500.