Earlier this year Google blogged about the eleven “minor accidents” its driverless cars had been involved in over six years of testing, laying the blame for all eleven incidents at the feet of the other, human drivers. Which sounds great for the technology, on the surface. But in reality it underlines the inherent complexity of blending two very different styles of driving, and suggests that robot cars might actually be too cautious and careful.
As technology giants accelerate humanity towards a driverless car future, where we are conditioned to keep our eyeballs on our devices while algorithms take the wheel and navigate the vagaries of the open road, safety questions crash headlong into ethical and philosophical considerations.
Combine that cautious, by-the-book approach with human drivers’ tendency to take risks and cut corners, and driverless cars’ risk aversion starts to look like an accident waiting to happen (at least while human drivers are also in the mix).
Google is now trying to train its cars to drive “a bit more humanistically”, as a Google driverless car bod put it this summer, using a word that seems better suited to the lexicon of a robot. Which boils down to getting robots to act a bit more aggressively at the wheel. Truly these are strange days.
Autonomous vehicles navigating open roads guided only by algorithmic smarts is certainly an impressive technical achievement. But successfully integrating such driverless vehicles into the organic, reactive chaos of (for now) human-motorist-dominated roads will be an even more impressive one, and we’re not there yet. Frankly, the technical progress achieved thus far by Google and others in this field may prove the far easier portion of what remains a very complex problem.
The last mile of driverless cars is going to require an awful lot of engineering sweat, plus regulatory and societal accord on acceptable levels of risk (including very sizable risks to a whole swathe of human employment). Self-driving car-makers accepting blanket liability for accidents is one way the companies involved are trying to accelerate the market.
As you’d expect, California has been at the forefront of fueling tech developments here. Its DMV is currently developing regulations for what it dryly dubs the “post-testing deployment of autonomous vehicles”, a process that, unsurprisingly given the aforementioned complexities, is lagging far behind schedule: no draft rules have been published yet, despite being slated to arrive at the start of this year.
The DMV has just published on its website all the official accident reports involving autonomous vehicles tested on California’s roads, covering the period from last September to date. The data mostly pertains to Google’s driverless vehicles, with eight of the nine reports involving Mountain View’s robot cars. The ninth involves an autonomous vehicle made by Delphi Automotive.
The reports appear to support Google’s claim that, on the surface at least, human error by the drivers of the non-autonomous cars is causing the accidents. However, the difficulties caused by the co-mingling of human and robot driving styles are also in ample evidence.
In one report, from April this year, a low-speed rear shunt occurred when a Google Lexus, in the midst of attempting to turn right at an intersection, applied the brakes to avoid an oncoming car after initially creeping forward. The human-driven car behind it, also trying to turn right and presumably encouraged by the Lexus creeping forward, then “failed to brake sufficiently” and collided with the rear of the robot car.
In another report, from June this year, a Google Lexus traveling in autonomous mode was also shunted from behind at low speed by a human-driven car. In this instance the robot car was obeying a red light that was still showing for the lane it was occupying. The human driver behind was apparently spurred on to drive into the back of the stationary Lexus by a green light appearing, albeit for a left-turn lane (whereas the two cars were actually occupying the straight-ahead lane).
A third report, from this July, details how another Google Lexus was crashed into from behind by a human driver, this time after decelerating to a stop because traffic ahead of it had stopped at a green-lit intersection. Presumably the human driver was paying more attention to the green traffic signal than to the changing road conditions.
Most of the accidents detailed in the reports occurred at very low speeds. But that might be more a consequence of the type of road-testing driverless cars are currently engaged in, with makers focused on urban navigation and all its messy complexities. And Google’s cars being involved in the majority of the reports is likely down to the company clocking up the most driverless mileage, having been committed to the space for so many years.
Back in May Google said its 20+ self-driving cars were averaging around 10,000 self-driven miles per week. The fleet had clocked up almost a million miles over a six-year testing period at that point, so it has likely added a further 200,000 miles or so since then, assuming rates of testing remained the same.
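For the curious, the back-of-the-envelope math behind that 200,000-mile figure is simple enough to sketch in a few lines of Python. Note the roughly 20-week gap between Google’s May update and the time of writing is an assumption, and both inputs are rounded:

```python
# Rough estimate of miles added to Google's fleet total since its May update.
WEEKLY_MILES = 10_000      # reported fleet average, miles per week
WEEKS_ELAPSED = 20         # assumption: roughly May to the time of writing
MILES_IN_MAY = 1_000_000   # "almost a million miles" clocked as of May

added_miles = WEEKLY_MILES * WEEKS_ELAPSED  # = 200,000
print(f"Miles added since May: ~{added_miles:,}")
print(f"Estimated running total: ~{MILES_IN_MAY + added_miles:,}")
```

An order-of-magnitude estimate rather than an exact figure, in other words.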
All the DMV’s Google-related accident reports pertain to this year, with six covering the first half of the year, including two in April and two in June.
There are currently 10 companies approved by the DMV to test driverless cars on California’s roads: Volkswagen Group of America, Mercedes Benz, Google, Delphi Automotive, Tesla Motors, Bosch, Nissan, Cruise Automation, BMW and Honda.
Apple also reportedly met with the DMV recently to discuss the department’s forthcoming driverless vehicle regulations, adding more fuel to rumors that Cupertino is working on a (self-driving?) electric car of its own.