
How safe are self-driving cars, or even vehicles with assisted-driving systems like Tesla’s “Autopilot”? The question remains open to debate, but one thing may be becoming clear: manufacturers need to be held responsible for any shortcomings in safety.

Tesla, for example, recently agreed to a $5 million settlement in a class action lawsuit alleging that the Autopilot system was “essentially unusable and demonstrably dangerous.” Two drivers allegedly died in crashes while their Tesla vehicles were in “Autopilot” mode. Tesla did not admit any wrongdoing; it stated only that it had improved the Autopilot hardware and software, and it agreed to compensate consumers who had to wait longer than expected for updates.

Consumer Reports Questions Use of the Name “Autopilot”

The class action lawsuit was filed in 2017 and named six Tesla Model S and Model X owners from Florida, Colorado, California, and New Jersey, who represented a nationwide class of consumers. The plaintiffs alleged that the company had engaged in fraud by concealment: they paid an extra $5,000 for Autopilot software that promised additional safety features, but those features did not work as expected.

According to the plaintiffs, the affected vehicles’ brakes, for example, would engage unexpectedly for no apparent reason, yet fail to work when they should have. The side collision warnings and automatic high beams were also allegedly unreliable. In July 2016, Consumer Reports, a non-profit organization, called on the company to stop using the name “Autopilot” for the technology because it was misleading: the term implied that consumers could allow the vehicle to drive itself, while Tesla continued to warn that drivers might need to take over at any time, creating “potential for driver confusion.”

Consumer Reports added that the name also increased “the possibility that drivers using Autopilot may not be engaged enough to react quickly to emergency situations.”

First Fatality Occurs in Tesla Vehicle on Autopilot

After Tesla released its Autopilot system in 2015, the company advertised it as resulting in 40 percent fewer crashes. Then, in May 2016, 40-year-old Joshua Brown was killed while driving his Tesla Model S, in the first self-driving car fatality to occur in the U.S.

The vehicle was in Autopilot mode and traveling about 10 miles per hour over the speed limit when it struck a big rig that was turning left in front of it, then hit a fence and a power pole before stopping. The windshield was flattened, and most of the roof was sheared off.

According to a report issued by the National Transportation Safety Board, the car’s performance data revealed the driver was using the “advanced driver assistance features Traffic-Aware Cruise Control and Autosteer lane keeping assistance. The car was also equipped with automatic emergency braking that is designed to automatically apply the brakes to reduce the severity of or assist in avoiding frontal collisions.”

Tesla responded to the incident by stating that the system may not have functioned properly because it could not isolate the image of the tractor-trailer from the sky behind it. The company added that the system was still in its introductory phase and had limits, and it urged drivers to stay alert with their hands on the wheel while using it.

An investigation is ongoing into another fatal crash, which occurred in a Tesla Model X in March 2018 with Autopilot engaged. The driver, a 38-year-old, crashed into a concrete traffic lane divider while driving on U.S. 101 in California. The vehicle caught fire, and the driver tragically died soon afterward.
