After Several Deaths, Tesla Is Still Sending Mixed Messages About Autopilot
ARE THEY GETTING A FREE RIDE?

After making repeated public statements about an ongoing government investigation into a fatal crash involving Autopilot, Tesla has been kicked out of the probe being conducted by the National Transportation Safety Board for violating its agreement with the agency. Adding to the drama, Tesla may have misrepresented events before the NTSB publicly announced the decision, claiming it had voluntarily left the investigation because the agency wouldn't allow Tesla to "release information about Autopilot to the public." On Thursday afternoon, Tesla released another statement disputing the NTSB's version of events and again claiming it left the investigation of its own accord.

Tesla's statements artfully package its exit from the investigation as a matter of information freedom. But the company's responses to Autopilot incidents, including the one at the center of this investigation, have consistently placed blame on drivers rather than scrutinizing its own technology.

Last month, a Tesla Model X crashed into a concrete median near Mountain View, California, killing driver Walter Huang and sparking the NTSB investigation. In a statement a week after the crash, Tesla acknowledged that Huang was using Autopilot at the time, but squarely blamed him for his own death, saying:

The driver had received several visual and one audible hands-on warning earlier in the drive and the driver's hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.

Nowhere in the post did Tesla address why its Autopilot feature had steered the car into a concrete barrier, except to note that "the adaptive cruise control follow-distance" was "set to minimum."

In another fatal crash in May 2016, a tractor-trailer crossed in front of a Tesla using Autopilot. The Tesla did not brake and smashed into the semi, effectively peeling off the car's roof and killing the driver. Tesla acknowledged Autopilot's role in the crash, saying "Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied." Despite this, Tesla ultimately blamed the driver:

When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot 'is an assist feature that requires you to keep your hands on the steering wheel at all times,' and that 'you need to maintain control and responsibility for your vehicle' while using it.

In another Autopilot crash in January, a Tesla slammed into a parked firetruck. Tesla's response was that Autopilot is "intended for use only with a fully attentive driver."

These two deaths, along with the firetruck crash, fit into a series of accidents and viral videos that show the imperfections of Tesla's Autopilot technology.


Public scrutiny of these crashes has been relatively reserved compared to the reaction to the single death caused by Uber's self-driving car, and Tesla's responses have been far more defensive.

A key aspect of Tesla's responses to its Autopilot crashes is that Autopilot fits the definition of a Level 2 system in the Society of Automotive Engineers' six-level automation framework. A Level 2 system manages speed and steering in certain conditions but requires that drivers pay attention and be ready to take over. Tesla attempts to enforce this by alerting the driver whenever their hands aren't on the wheel, and by eventually disabling Autopilot if a driver's hands stay off the wheel for long enough (although drivers have figured out ways to hack this). A Level 3 car, by contrast, only alerts the driver when it detects a situation it can't handle.

Tesla has conveniently leaned on this classification in its responses to high-profile crashes, but it has also repeatedly advertised its cars' self-driving capabilities as something beyond Level 2, sending mixed signals to drivers and the public.

On its Autopilot website, Tesla touts "Full Self-Driving Hardware on All Cars," despite its insistence that the software is only meant for assistance. 


On the same page, a video shows a Tesla being operated with no hands, detecting its environment. A caption reads "The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself."


Only at the bottom of the page does Tesla specify that "Full Self-Driving Capability" is a software add-on that hasn't actually been released yet and doesn't apply to Tesla's existing "Enhanced Autopilot."

Adding to the confusion, Tesla CEO Elon Musk has repeatedly said that Tesla cars already on the market with Autopilot hardware will eventually be able to achieve "approximately human-level autonomy," and could possibly facilitate "full automation." In 2016 he called Autopilot "almost twice as good as a person."

Despite the mixed messaging, Tesla is correct when it says that the "NHTSA has found that [Autopilot] reduces accident rates by 40%." Citing its own statistics, Tesla also says "you are 3.7 times less likely to be involved in a fatal accident."

Even so, recent Tesla crashes reveal that over-reliance on the new, and still flawed, safety features of Autopilot and semi-autonomous modes can be deadly. Whether Tesla and Musk are willing to publicly emphasize Autopilot's limitations is another question.

Benjamin Goggin is the News Editor at Digg.
