Autonomous vehicles "feel" the road ahead with a variety of sensors, with data received sent through the vehicle's brain to stimulate a response. Brake action, for example. It's technology that's far from perfected, yet self-driving trials continue on America's streets, growing in number as companies chase that elusive driver-free buck.
In one tragic case, a tech company (that's since had a come-to-Jesus moment regarding public safety) decided to dumb down its fleet's responsiveness to cut down on "false positives" - perceived obstacles that would send the vehicle screeching to a stop when the "obstacle" was nothing more than a windblown plastic bag - with fatal consequences. On the other side of the coin, Tesla drivers continue to plow into the backs and sides of large trucks that their Level 2 driver-assistance technology failed to register.
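For a sense of what "dumbing down responsiveness" means in practice, consider a hypothetical confidence threshold (not any company's actual code): raise it and the plastic-bag phantoms go away, but so do the real hazards the detector was only mildly sure about.

```python
# Hypothetical illustration of the false-positive trade-off, not real vendor code.
# A detector emits (label, confidence) pairs; the planner only reacts above a
# threshold. Raising the threshold quiets phantom braking on plastic bags, but
# it also discards genuine, low-confidence detections of real hazards.

def should_brake(detections: list[tuple[str, float]], threshold: float) -> bool:
    """Return True if any detection clears the confidence threshold."""
    return any(conf >= threshold for _label, conf in detections)

frame = [("plastic_bag", 0.35), ("pedestrian", 0.55)]

print(should_brake(frame, threshold=0.30))  # True  - sensitive tuning, brakes for everything
print(should_brake(frame, threshold=0.60))  # False - fewer "false positives", misses the pedestrian
```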
Because all things can be hacked, researchers now say there's a way to trick autonomous vehicles into seeing what's not there.
If manufacturing ghosts is your bag, read this piece in The Conversation. It details work performed by the RobustNet Research Group at the University of Michigan, describing how an autonomous vehicle's most sophisticated piece of tech, LiDAR, can be fooled into thinking it's about to collide with a stationary object that doesn't exist.
LiDAR sends out pulses of light, thousands per second, then measures how long it takes for those signals to bounce back to the sender, much like sonar or radar. This allows a vehicle to paint a picture of the world around it. Camera systems and ultrasonic sensors, which you'll find on many new driver-assist-equipped models, complete the sensor suite.
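For the curious, the ranging math behind those pulses is simple enough to sketch. Here's a minimal, purely illustrative example (the timing value is made up, and no vendor's API is involved) that turns a pulse's round-trip time into a distance:

```python
# Minimal sketch of LiDAR time-of-flight ranging (illustrative values only).
# A pulse travels out, bounces off a surface, and returns; the one-way distance
# is half the round-trip time multiplied by the speed of light.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Return the one-way distance in meters for a measured round-trip time."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# Example: a return measured 200 nanoseconds after the pulse left the sensor
# corresponds to an object roughly 30 meters away.
print(f"{distance_from_round_trip(200e-9):.1f} m")  # ~30.0 m
```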
From The Conversation:
The research group claims that spoofed signals designed specifically to dupe the machine learning model that interprets LiDAR data are possible: "The LiDAR sensor will feed the hacker's fake signals to the machine learning model, which will recognize them as an obstacle."
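The researchers' attack hardware and the targeted perception stack are far more sophisticated than anything that fits in a blog post, but a toy sketch shows why the trick works: once fake return points land in the point cloud, downstream software has no easy way to tell them from honest ones. Everything below - the point format, the "detector" - is a made-up simplification, not the team's actual pipeline:

```python
# Toy illustration of a LiDAR spoofing attack on a naive obstacle check.
# NOT the researchers' method or any real perception stack; it only shows that
# injected points look identical to genuine returns once they reach software.

from dataclasses import dataclass

@dataclass
class Point:
    x: float  # meters ahead of the vehicle
    y: float  # meters left/right of center
    z: float  # meters above the ground

def naive_obstacle_ahead(cloud: list[Point], lane_half_width: float = 1.5,
                         max_range: float = 20.0, min_points: int = 5) -> bool:
    """Flag an obstacle if enough above-ground points sit in the lane ahead."""
    in_path = [p for p in cloud
               if 0.0 < p.x < max_range and abs(p.y) < lane_half_width and p.z > 0.2]
    return len(in_path) >= min_points

# Genuine returns: an empty road, points coming only from the ground plane.
real_cloud = [Point(x=float(i), y=0.0, z=0.0) for i in range(1, 30)]

# Spoofed returns: a handful of fake points timed to appear 8 meters ahead.
spoofed_cloud = real_cloud + [Point(x=8.0, y=0.2 * i, z=1.0) for i in range(-3, 4)]

print(naive_obstacle_ahead(real_cloud))     # False - the road reads as clear
print(naive_obstacle_ahead(spoofed_cloud))  # True  - phantom obstacle triggers a stop
```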
Were this to happen, an autonomous vehicle would slam to a halt, with the potential for following vehicles to rear-end it. On a fast-moving freeway, you can imagine the carnage resulting from a panic stop in the center lane.
The team tested two light pulse attack scenarios against a common autonomous driving system: one with the vehicle in motion, the other with the vehicle stopped at a red light. In the first scenario, the vehicle braked for the phantom obstacle; in the second, it stayed rooted at the stoplight.
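To make those two outcomes concrete, here's a hypothetical decision rule of my own devising; the tested system's real planner is far more elaborate, but the reported behavior - an emergency stop in traffic, a car that won't budge at a green light - falls out of any logic that treats a perceived obstacle as genuine:

```python
# Hypothetical planner logic showing the two outcomes the team reported.
# Purely illustrative; this is not the tested system's actual decision code.

def plan(speed_mps: float, light_is_green: bool, obstacle_ahead: bool) -> str:
    """Pick a simple action from speed, traffic light state, and perception."""
    if obstacle_ahead:
        return "emergency_brake" if speed_mps > 0 else "hold_position"
    if speed_mps == 0 and light_is_green:
        return "accelerate"
    return "maintain_speed"

# Scenario 1: cruising at 15 m/s when spoofed points appear -> hard stop.
print(plan(speed_mps=15.0, light_is_green=True, obstacle_ahead=True))  # emergency_brake

# Scenario 2: waiting at a light that turns green -> the car stays put.
print(plan(speed_mps=0.0, light_is_green=True, obstacle_ahead=True))   # hold_position
```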
Needless fear-mongering? Not in this case. With the advent of any new technology, especially one operating in a hazy regulatory environment, there will be people who seek to exploit its weaknesses. The team said it hopes "to trigger an alarm for teams building autonomous technologies."
"Research into new types of security problems in the autonomous driving systems is just beginning, and we hope to uncover more possible problems before they can be exploited out on the road by bad actors," the researchers wrote.
A version of this story first appeared on TTAC.