29 Oct 2018

We've Been Talking About Self-Driving Car Safety All Wrong



Until a self-driving Uber killed 49-year-old pedestrian Elaine Herzberg in March, autonomous vehicle tech felt like a pure success story. A hot, new space where engineers could shake the world with software, saving lives and banking piles of cash. But after the deadly crash, nagging doubts became questions asked out loud. How exactly do these self-driving things work? How safe are they? And who’s to guarantee that companies building them are being truthful?

Of course, the technology is hard to explain, much less pull off. That’s why employees with the necessary robotics experience are raking in huge paychecks, and also why there are no firm federal rules governing self-driving car testing on public roads. This fall, the Department of Transportation restated its approach to AVs in updated federal guidelines, which amounts to: We won’t pick technology winners and losers, but we would like companies to submit lengthy brochures on their approaches to safety. Just five developers (Waymo, GM, Ford, Nvidia, and Nuro) have taken the feds up on the offer.

Into this vacuum has stepped another public-facing metric, one that’s easy to understand: how many miles the robots have driven. For the past few years, Waymo has regularly trumpeted significant odometer roll-overs, most recently hitting its 10 millionth mile on public roads. It’s done another 7 billion in simulation, where virtual car systems are run over and over again through situations captured on real streets, and slightly varied iterations of those situations (that’s called fuzzing). Internal Uber documents uncovered by the New York Times suggest the ride-hailing company tracked its own self-driving efforts via miles traveled. It’s not just companies, either: Media outlets (like this one!) have used miles tested as a stand-in for AV dominance.
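To make that fuzzing idea concrete, here is a minimal sketch, in Python, of how one logged street scene might be turned into thousands of slightly varied simulation runs. The Scenario fields, parameter ranges, and jitter values are illustrative assumptions made for this example, not any developer’s actual simulation pipeline.

```python
import random
from dataclasses import dataclass, replace

# Illustrative only: these fields and ranges are assumptions for the sake
# of the example, not any company's real simulation schema.
@dataclass(frozen=True)
class Scenario:
    ego_speed_mps: float         # speed of the simulated AV
    pedestrian_speed_mps: float  # speed of a crossing pedestrian
    crossing_offset_m: float     # where the pedestrian steps into the road

def fuzz(base: Scenario, n_variants: int, jitter: float = 0.1) -> list:
    """Generate slightly varied copies of a logged scenario.

    Each numeric parameter is perturbed by a small random amount,
    mimicking the idea of re-running one real-world situation many
    times with small variations.
    """
    variants = []
    for _ in range(n_variants):
        variants.append(replace(
            base,
            ego_speed_mps=base.ego_speed_mps * random.uniform(1 - jitter, 1 + jitter),
            pedestrian_speed_mps=base.pedestrian_speed_mps * random.uniform(1 - jitter, 1 + jitter),
            crossing_offset_m=base.crossing_offset_m + random.uniform(-1.0, 1.0),
        ))
    return variants

# One recorded street scene becomes a thousand simulated test cases.
logged = Scenario(ego_speed_mps=11.0, pedestrian_speed_mps=1.4, crossing_offset_m=3.0)
test_cases = fuzz(logged, n_variants=1000)
```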

If practice makes perfect, the more practice your robot has, the closer it must be to perfect, right? Nope.

“Miles traveled standing alone is not a particularly insightful measure if you don't understand what the context of those miles were,” says Noah Zych, the head of system safety at the Uber Advanced Technologies Group. “You need to know, ‘What situations was the vehicle encountering? What were the situations that the vehicle was expected to be able to handle? What was the objective of the testing in those areas? Was it to collect data? Was it to prove that the system was able to handle those scenarios? Or was it to just run a number up?’”

Think about a driver's license exam: You don't just drive around for a few miles and get a certificate if you don’t crash. The examiner puts you through your paces: left turns across traffic, parallel parking, perfectly executed stop sign halts. And to live up to their promises, AVs have to be much, much better than the humans who pass those tests—and kill more than a million people every year.

Waymo, which has driven more miles than anyone and plans to launch a commercial autonomous ride-hailing service this year, says it agrees. “It’s not just about racking up number of miles, but the quality and challenges presented within those miles that make them valuable,” says spokesperson Liz Markman. She says Waymo also keeps a firm eye on how many miles it’s driving in simulation.

Another safety benchmark used in media coverage and policy discussions of AVs is “disengagements”—that is, the moments when a car drops out of autonomous mode. In California, companies must note and eventually report every instance of disengagement. (They are also required to file an accident report for every crash, be it a fender-bender, a rear-end collision, or a pedestrian slapping the car.) Developers say disengagements are an even crappier way to measure safety than checking the odometer.

“If you’re learning, you expect to be disengaging the system,” says Chris Urmson, the CEO of self-driving outfit Aurora, who led Google’s effort for years (before it took on the name Waymo). “Disengagements are inversely correlated with how much you’re learning. During development, they are inversely correlated with progress.” Urmson and others argue California’s reporting requirements actually disincentivize pushing your system to evolve by taking on harder problems. You look better—to the public and to the public officials parsing those numbers—if you test your cars in situations where they’re less likely to disengage. The easy stuff.
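To see why context matters, here is a toy calculation with made-up numbers (not drawn from any company’s reports): two hypothetical test programs run identical software, with identical per-context performance, yet end up with very different headline miles-per-disengagement figures purely because of where they choose to drive.

```python
# Purely illustrative figures: two hypothetical programs running the same
# software, logged as (miles, disengagements) for each driving context.
programs = {
    "Program A (mostly easy suburban loops)": {
        "suburban": (9_000, 9),
        "dense urban": (1_000, 20),
    },
    "Program B (mostly dense urban testing)": {
        "suburban": (1_000, 1),
        "dense urban": (9_000, 180),
    },
}

for name, contexts in programs.items():
    total_miles = sum(miles for miles, _ in contexts.values())
    total_disengagements = sum(dis for _, dis in contexts.values())
    print(f"{name}: {total_miles / total_disengagements:.0f} miles per disengagement overall")
    for context, (miles, dis) in contexts.items():
        print(f"  {context}: {miles / dis:.0f} miles per disengagement")
```

In this sketch, Program A’s headline number comes out roughly six times better than Program B’s, even though both systems perform identically on suburban streets and identically in dense urban traffic. The difference is entirely in the mix of miles driven.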

So the way we’re talking about safety for self-driving cars right now is not great. Is there a better way?

Earlier this month, the RAND Corporation, a policy think-tank, released a 91-page report on the concept of safety in AVs. (Uber funded the study. The ride-hailing company and RAND say the report was written and peer-reviewed by company- and tech-neutral researchers.) It details a new sort of framework for the testing, demonstration, and then deployment of AVs, a more rigorous way to prove out safety to regulators and the skeptical public.

The report advocates for more formal separations between those stages, disclosures about how exactly the technology works in specific environments and situations, and a moment of transparency during the demonstration period, as the companies prepare to make money off their labors. And for a new term, “roadmanship,” a metric that seeks to more fully capture how AVs are playing with other actors on public roads.

And in doing so, the report seeks to be a launch pad for understandable, less opaque language about self-driving cars—language that companies, and regulators, and the public can use to talk, seriously, about the technology's safety as it develops.

The problem, of course, is that autonomous vehicle developers are worried about sharing anything. RAND, which interviewed companies, regulators, and researchers for the report, “had to convince people that we were not going after anything proprietary or highly sensitive,” says Marjory Blumenthal, a RAND policy analyst who led the project. And that’s just to collect information about methods of collecting information! Now imagine getting all those distrustful players to agree on a safety framework that requires them to be much more transparent with each other than they are right now.

But safety advocates argue such a framework is badly needed. “Most people, when they talk about safety, it’s ‘Try not to hit something,’” says Phil Koopman, who studies self-driving car safety as an associate professor at Carnegie Mellon University. “In the software safety world, that’s just basic functionality. Real safety is, ‘Does it really work?’ Safety is about the one kid the software might have missed, not about the 99 it didn’t.” For autonomous vehicles, simply being a robot that drives won’t be enough. They have to prove that they’re better than humans, almost all of the time.

Koopman believes that international standards are needed, the same kind with which aviation software builders have to comply. And he wishes federal regulators would demand more information from self-driving vehicle developers, the way some states do now. Aurora, for example, had to tell Pennsylvania’s Department of Transportation about its safety driver training process before receiving the state’s first official authorization to test its cars on public roads there.

The companies should want to come together on firmer rules, too. Blumenthal says firm, easy-to-understand safety standards could help the companies in the inevitable legal cases, and when they stand in the court of public opinion.

“When you have different paths taken by different developers, it makes it hard,” Blumenthal says. “There's a demand for a common reference point so the public can understand what’s going on.” Safety, it turns out, is good for everyone.
