MaterialsgateNEWS 2015/12/22

Electronics: Teaching machines to see

New smartphone-based system could accelerate development of driverless cars

Two newly developed systems for driverless cars can identify a user's location and orientation in places where GPS does not function, and can identify the various components of a road scene in real time, using a regular camera or smartphone to do the same job as sensors costing tens of thousands of pounds.

The separate but complementary systems have been designed by researchers from the University of Cambridge and demonstrations are freely available online. Although the systems cannot currently control a driverless car, the ability to make a machine 'see' and accurately identify where it is and what it's looking at is a vital part of developing autonomous vehicles and robotics.

The first system, called SegNet, can take an image of a street scene it hasn't seen before and classify it, sorting objects into 12 different categories -- such as roads, street signs, pedestrians, buildings and cyclists -- in real time. It can deal with light, shadow and night-time environments, and currently labels more than 90% of pixels correctly. Previous systems using expensive laser- or radar-based sensors have not been able to reach this level of accuracy while operating in real time.
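SegNet itself is a deep encoder-decoder network; as a toy illustration of what "labelling every pixel" means as an output format, one can imagine assigning each pixel to the class whose representative colour is nearest. The class names and prototype colours below are invented for the example and are not from SegNet:

```python
# Toy per-pixel classifier: assign each pixel to the class whose
# prototype colour is nearest in RGB space. SegNet learns far richer
# features from labelled data; this only illustrates the idea of a
# dense, per-pixel class labelling of a street scene.
PROTOTYPES = {          # hypothetical class -> representative RGB colour
    "road":     (90, 90, 90),
    "sky":      (135, 206, 235),
    "building": (170, 120, 90),
}

def classify_pixel(rgb):
    """Return the class whose prototype colour is closest to this pixel."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PROTOTYPES, key=lambda c: dist2(rgb, PROTOTYPES[c]))

def segment(image):
    """Label every pixel of a 2-D grid of RGB tuples."""
    return [[classify_pixel(px) for px in row] for row in image]

image = [[(88, 91, 89), (130, 200, 240)],
         [(100, 95, 92), (160, 118, 95)]]
print(segment(image))  # [['road', 'sky'], ['road', 'building']]
```

A real segmentation network replaces the hand-picked prototypes with features learned from thousands of labelled examples, which is what allows it to cope with shadow, lighting changes and night-time scenes.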

Users can visit the SegNet website and upload an image or search for any city or town in the world, and the system will label all the components of the road scene. The system has been successfully tested on both city roads and motorways.

For the driverless cars currently in development, radar- and laser-based sensors are expensive -- in fact, they often cost more than the car itself. In contrast with expensive sensors, which recognise objects through a mixture of radar and LIDAR (a remote sensing technology), SegNet learns by example -- it was 'trained' by an industrious group of Cambridge undergraduate students, who manually labelled every pixel in each of 5000 images, with each image taking about 30 minutes to complete. Once the labelling was finished, the researchers then took two days to 'train' the system before it was put into action.
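The scale of that manual labelling effort is easy to check with the figures given in the article (5000 images at roughly 30 minutes each; the per-image time is an average):

```python
# Back-of-the-envelope estimate of the annotation effort behind SegNet,
# using the figures quoted in the article.
images = 5000
minutes_per_image = 30          # approximate average labelling time

total_hours = images * minutes_per_image / 60
print(total_hours)              # 2500.0 hours of manual annotation
```

That is roughly 2500 person-hours of labelling before the two days of training even begin, which is why learning by example trades sensor cost for annotation cost.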

"It's remarkably good at recognising things in an image, because it's had so much practice," said Alex Kendall, a PhD student in the Department of Engineering. "However, there are a million knobs that we can turn to fine-tune the system so that it keeps getting better."

SegNet was primarily trained in highway and urban environments, so it still has some learning to do for rural, snowy or desert environments -- although it has performed well in initial tests for these environments.

The system is not yet at the point where it can be used to control a car or truck, but it could be used as a warning system, similar to the anti-collision technologies currently available on some passenger cars.

"Vision is our most powerful sense and driverless cars will also need to see," said Professor Roberto Cipolla, who led the research. "But teaching a machine to see is far more difficult than it sounds."

As children, we learn to recognise objects through example -- if we're shown a toy car several times, we learn to recognise both that specific car and other similar cars as the same type of object. But with a machine, it's not as simple as showing it a single car and then having it be able to recognise all different types of cars. Machines today learn under supervision: sometimes through thousands of labelled examples.

There are three key technological questions that must be answered to design autonomous vehicles: Where am I? What's around me? What do I do next? SegNet addresses the second question, while a separate but complementary system answers the first by using images to determine both precise location and orientation.

The localisation system designed by Kendall and Cipolla runs on a similar architecture to SegNet, and is able to localise a user and determine their orientation from a single colour image in a busy urban scene. The system is far more accurate than GPS and works in places where GPS does not, such as indoors, in tunnels, or in cities where a reliable GPS signal is not available.

It has been tested along a kilometre-long stretch of King's Parade in central Cambridge, and it is able to determine both location and orientation within a few metres and a few degrees, which is far more accurate than GPS -- a vital consideration for driverless cars. Users can try out the system for themselves online.
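The accuracy figures quoted ("a few metres and a few degrees") correspond to two standard error measures: positional error as a straight-line distance, and heading error as the smallest angle between estimated and true orientation. A minimal sketch of how such errors can be computed (the 2-D position and single yaw angle below are a simplification of the full pose the system estimates):

```python
import math

def position_error(p_est, p_true):
    """Euclidean distance (metres) between estimated and true position."""
    return math.dist(p_est, p_true)

def heading_error(yaw_est, yaw_true):
    """Smallest absolute angle (degrees) between two headings,
    handling wrap-around at 360 degrees."""
    d = abs(yaw_est - yaw_true) % 360.0
    return min(d, 360.0 - d)

print(position_error((10.0, 4.0), (13.0, 0.0)))  # 5.0 metres
print(heading_error(358.0, 3.0))                 # 5.0 degrees
```

The wrap-around handling matters: a heading estimate of 358 degrees against a ground truth of 3 degrees is only 5 degrees off, not 355.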

The localisation system uses the geometry of a scene to learn its precise location, and is able to determine, for example, whether it is looking at the east or west side of a building, even if the two sides appear identical.

"Work in the field of artificial intelligence and robotics has really taken off in the past few years," said Kendall. "But what's cool about our group is that we've developed technology that uses deep learning to determine where you are and what's around you - this is the first time this has been done using deep learning."

"In the short term, we're more likely to see this sort of system on a domestic robot - such as a robotic vacuum cleaner, for instance," said Cipolla. "It will take time before drivers can fully trust an autonomous car, but the more effective and accurate we can make these technologies, the closer we are to the widespread adoption of driverless cars and other types of autonomous robotics."

Source: University of Cambridge – 20.12.2015.

Investigated and edited by:

Dr.-Ing. Christoph Konetschny, owner and founder of Materialsgate
Office for materials and technology consulting
The investigation and editing of this document was performed with the greatest care and attention.
We assume no liability for the accuracy, validity, availability or applicability of the information given.
Please discuss the suitability for your specific application with the experts of the named company or organization.

Would you like additional material or technology investigations on this subject?

Materialsgate is a leader in materials consulting and materials investigation.
Feel free to use our established consulting services.

More on this topic

Light and electricity dance a complicated tango in devices like LEDs, solar cells and sensors. A new anti-reflection coating developed by engineers at the University of Illinois at Urbana-Champaign, in collaboration with researchers at the University of Massachusetts at Lowell, lets light through without hampering the flow of electricity, a step that could increase efficiency in such devices.

The coating is a specially engraved, nanostructured thin film that allows more light through than a flat surface, yet also provides electrical access to the underlying material - a crucial combination for optoelectronics, devices that convert electricity to light or vice versa. The researchers, led by U. of I. electrical and computer engineering professor Daniel Wasserman, published their findings in the journal Advanced Materials. "The ability to improve both electrical and optical access to a material is an important step towards higher-efficiency optoelectronic devices," said Wasserman, a member of the Micro and Nano Technology Laboratory at Illinois. At the interface between...

A new era of electronics and even quantum devices could be ushered in with the fabrication of a virtually perfect single layer of "white graphene," according to researchers at the Department of Energy's Oak Ridge National Laboratory.

The material, technically known as hexagonal boron nitride, features better transparency than its sister, graphene, is chemically inert, or non-reactive, and atomically smooth. It also features high mechanical strength and thermal conductivity. Unlike graphene, however, it is an insulator instead of a conductor of electricity, making it useful as a substrate and the foundation for the electronics in cell phones, laptops, tablets and many other devices. "Imagine batteries, capacitors, solar cells, video screens and fuel cells as thin as a piece of paper," said ORNL's Yijing Stehle, postdoctoral associate and lead author of a paper published in Chemistry of Materials. She and...

Americans, on average, replace their mobile phones every 22 months, junking more than 150 million phones a year in the process.

When it comes to recycling and processing all of this electronic waste, the World Health Organization reports that even low exposure to the electronic elements can cause significant health risks. Now, University of Missouri researchers are on the path to creating biodegradable electronics by using organic components in screen displays. The researchers' advancements could one day help reduce electronic waste in the world's landfills. "Current mobile phones and electronics are not biodegradable and create significant waste when they're disposed," said Suchismita Guha, professor in the Department of Physics and Astronomy at the MU College of Arts and Science. "This...

A new world of flexible, bendable, even stretchable electronics is emerging from research labs to address a wide range of potentially game-changing uses.

The common, rigid printed circuit board is slowly being replaced by a thin ribbon of resilient, high-performance electronics. Over the last few years, one team of chemists and materials scientists has begun exploring military applications in harsh environments for aircraft, explosive devices and even combatants themselves. Researchers will provide an update on the latest technologies, as well as future research plans, at the 250th National Meeting & Exposition of the American Chemical Society (ACS). ACS is the world's largest scientific society. The meeting takes place here through Thursday. "Basically, we are using a hybrid technology that mixes traditional electronics with...
