SELF-DRIVING VEHICLES WITH MEMORY? RESEARCHERS HAVE FOUND A WAY
BY: CAN EMIR
SITE: INTERESTING ENGINEERING
This could help autonomous cars reach their destinations much more safely.
Carlos Diaz-Ruiz drives the data collection car and demonstrates some of the data collection techniques. Ryan Young/Cornell University

Autonomous vehicles drive themselves using what has been fed into their driving systems, but that now seems to be changing.
Vehicles using artificial neural networks have no memory of the past. They are constantly seeing the world for the first time, no matter how often they’ve driven down a particular road or in similar weather conditions.
Researchers from Cornell University have developed a way to help autonomous vehicles create “memories” of previous experiences and use them in future navigation, especially during adverse weather conditions when the vehicles cannot safely rely on their sensors.
Led by doctoral student Carlos Diaz-Ruiz, the group compiled a dataset by repeatedly driving a car equipped with LiDAR (Light Detection and Ranging) sensors along a 9.3-mile (15-kilometer) loop in and around Ithaca 40 times over 18 months. The traversals capture varying environments (highway, urban, campus), weather conditions (sunny, rainy, snowy), and times of day, resulting in a dataset with more than 600,000 scenes.
“It deliberately exposes one of the key challenges in self-driving cars: poor weather conditions,” said Diaz-Ruiz. “If the street is covered by snow, humans can rely on memories, but without memories, a neural network is heavily disadvantaged.”
The researchers have produced three concurrent papers aimed at overcoming this limitation. Two of the papers were presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022), held June 19-24 in New Orleans.
HINDSIGHT is an approach that uses neural networks to compute descriptors of objects as the car passes them. It then compresses these descriptions, which the group has dubbed SQuaSH (Spatial-Quantized Sparse History) features, and stores them on a virtual map, like a “memory” stored in a human brain.
This means that the next time the self-driving vehicle traverses a location it has traveled before, it can query the local SQuaSH database of every LiDAR point along the route and “remember” what it learned last time. The database is continuously updated and shared across vehicles, thus enriching the information available to perform recognition.
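The papers define SQuaSH precisely; purely to make the “store it on a virtual map, recall it on the next pass” idea above concrete, here is a minimal Python sketch. The class name, voxel size, and running-mean aggregation are illustrative assumptions, not the authors' implementation, and the descriptors would in reality come from the neural network HINDSIGHT trains.

from collections import defaultdict
import numpy as np

VOXEL_SIZE = 2.0  # metres per grid cell; an assumed value, not from the paper

def voxel_key(xyz):
    # Quantize a world-frame point to a coarse voxel index.
    return tuple(np.floor(np.asarray(xyz) / VOXEL_SIZE).astype(int))

class SquashMemory:
    # Sparse map from voxel index to an aggregated feature vector, standing in
    # for the SQuaSH "virtual map" described above.
    def __init__(self, feat_dim=32):
        self.features = defaultdict(lambda: np.zeros(feat_dim))
        self.counts = defaultdict(int)

    def write(self, points_xyz, descriptors):
        # Store compressed per-point descriptors from one traversal.
        for xyz, feat in zip(points_xyz, descriptors):
            k = voxel_key(xyz)
            self.counts[k] += 1
            # Running mean keeps the map compact as more traversals arrive.
            self.features[k] += (np.asarray(feat) - self.features[k]) / self.counts[k]

    def query(self, points_xyz):
        # On a later drive, fetch the remembered feature for every LiDAR point.
        return np.stack([self.features[voxel_key(xyz)] for xyz in points_xyz])

In the actual system the memory is shared across vehicles and fed back into perception; the write/query pattern above is only meant to show how a quantized, sparse history can be looked up by location.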
Doctoral student Yurong You is the lead author of “HINDSIGHT is 20/20: Leveraging Past Traversals to Aid 3D Perception,” which You presented virtually in April at ICLR 2022, the International Conference on Learning Representations. “Learning representations” includes deep learning, a kind of machine learning.
“This information can be added as features to any LiDAR-based 3D object detector,” You said. “Both the detector and the SQuaSH representation can be trained jointly without any additional supervision, or human annotation, which is time- and labor-intensive.”
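As a rough illustration of what “added as features to any LiDAR-based 3D object detector” could look like, the sketch below simply concatenates the recalled memory features to each point's raw attributes before a placeholder detector consumes them; the module and argument names are hypothetical, not taken from the authors' code.

import torch
import torch.nn as nn

class MemoryAugmentedDetector(nn.Module):
    # Toy wrapper: append recalled memory features to each LiDAR point's raw
    # attributes, then hand the augmented points to any per-point 3D detector.
    def __init__(self, base_detector, point_dim=4, memory_dim=32):
        super().__init__()
        self.base_detector = base_detector  # any model taking (N, C) point inputs
        self.fuse = nn.Linear(point_dim + memory_dim, point_dim + memory_dim)

    def forward(self, points, recalled):
        # points:   (N, point_dim) tensor, e.g. x, y, z, intensity
        # recalled: (N, memory_dim) tensor queried from the spatial memory
        augmented = torch.cat([points, recalled], dim=-1)
        return self.base_detector(self.fuse(augmented))

Because the ordinary detection loss backpropagates through both the detector and whatever network produced the memory features, the two can be optimized together, which is consistent with the joint training without extra labels that You describes.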
While HINDSIGHT still assumes that the artificial neural network is already trained to detect objects and augments it with the capability to create memories, MODEST assumes the artificial neural network in the vehicle has never been exposed to any objects or streets at all. Through multiple traversals of the same route, it can learn what parts of the environment are stationary and which are moving objects. It slowly teaches itself what constitutes other traffic participants and what is safe to ignore.
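The MODEST paper develops this idea rigorously with self-training; the sketch below is only a back-of-the-envelope version of the intuition in the paragraph above, scoring how consistently each location is occupied across repeated drives. The function names, voxel size, and threshold are assumptions for illustration.

import numpy as np
from collections import Counter

VOXEL = 1.0  # metres; an assumed grid resolution

def occupied_voxels(points):
    # Map the (x, y, z) points of one sweep to a set of coarse voxel indices.
    return {tuple(v) for v in np.floor(points / VOXEL).astype(int)}

def persistence_scores(traversals):
    # traversals: list of (N_i, 3) arrays, one sweep of the same place per drive.
    # Returns voxel -> fraction of drives in which that voxel was occupied.
    counts = Counter()
    for pts in traversals:
        counts.update(occupied_voxels(pts))
    n = len(traversals)
    return {vox: c / n for vox, c in counts.items()}

def likely_mobile(points, scores, threshold=0.5):
    # Points in rarely occupied voxels were probably a moving object (another
    # traffic participant); points in always occupied voxels are static background.
    keys = [tuple(v) for v in np.floor(points / VOXEL).astype(int)]
    return np.array([scores.get(k, 0.0) < threshold for k in keys])

Persistence-based pseudo-labels of this kind could then seed a detector that refines itself over further traversals, without any human annotation.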
The researchers hope the approaches could drastically reduce the development cost of autonomous vehicles (which currently still rely heavily on human-annotated data) and make such vehicles more efficient by learning to navigate the locations in which they are used the most.
“In reality, you rarely drive a route for the very first time,” said co-author Katie Luo, a doctoral student in the research group. “Either you yourself or someone else has driven it before recently, so it seems only natural to collect that experience and utilize it.”
“The fundamental question is, can we learn from repeated traversals?” said senior author Kilian Weinberger, professor of computer science. “For example, a car may mistake a weirdly shaped tree for a pedestrian the first time its laser scanner perceives it from a distance, but once it is close enough, the object category will become clear. So, the second time you drive past the very same tree, even in fog or snow, you would hope that the car has now learned to recognize it correctly.”
We are thrilled to hear about self-learning autonomous vehicles, but unfortunately, we will have to wait until they become more widely used.