Tesla’s Full Self-Driving System Needs Human Input Roughly Every 13 Miles

### Tesla’s Full Self-Driving (FSD) System: Remarkable Yet Troubling Outcomes from Independent Evaluations

Tesla’s “Full Self-Driving” (FSD) system has long sparked discussion and intrigue within the automotive industry. Although the technology is touted as transformative for driving, recent independent evaluations indicate it may not be as dependable as many of its supporters believe. An assessment by AMCI Testing, an independent automotive testing organization, has underscored both the remarkable capabilities and the troubling flaws of Tesla’s FSD system.

#### The Test Parameters

AMCI Testing assessed two versions of Tesla’s FSD software, 12.5.1 and 12.5.3, covering more than 1,000 miles (1,600 km) of driving in Southern California. The evaluation spanned a range of driving environments, including urban streets, rural two-lane roads, mountainous terrain, and interstate highways. The objective was to gauge the FSD system’s performance in real-world driving scenarios, and the findings were mixed.

#### Noteworthy Features

The FSD system demonstrated several advanced driving behaviors that were at times genuinely impressive. For instance, the system was capable of:

- **Navigating intricate urban areas:** FSD adeptly maneuvered into gaps between parked vehicles to let oncoming traffic pass.
- **Managing pedestrian encounters:** The system moved left to give space to pedestrians waiting at crosswalks for the signal to change.
- **Handling blind curves in rural settings:** FSD navigated blind curves on winding rural roads with a level of skill that impressed the evaluators.

“It’s clear that FSD 12.5.1 has remarkable traits, given the extensive range of human-like behaviors it demonstrates, particularly for a camera-oriented platform,” commented Guy Mangiamele, director of AMCI Testing. The system’s capacity to replicate human driving actions in these instances highlighted Tesla’s progress in the realm of autonomous driving technology.

#### Hazardous Actions and Unpredictability

Nevertheless, the evaluation also uncovered troubling issues. Over the course of the assessment, the FSD system required human intervention more than 75 times, an average of roughly one intervention every 13 miles (21 km); many of these interventions were critical to avert dangerous situations. (A quick check of that rate follows the list below.) Some of the most alarming behaviors included:

- **Running a red traffic light:** On one occasion, the FSD system proceeded through a red light, a blatant violation of traffic law and a considerable safety risk.
- **Veering into oncoming traffic:** On a curvy road, the system drifted into the oncoming lane as another car approached, creating a potentially fatal scenario.
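
As a rough sanity check on the reported intervention rate, here is a minimal sketch that uses the round figures quoted in this article (assumptions, since AMCI’s exact totals may differ slightly); it reproduces the roughly 13-mile average cited above:

```python
# Back-of-the-envelope check of AMCI's reported intervention rate.
# Uses the round figures quoted in the article; AMCI's exact totals
# may differ slightly.

MILES_DRIVEN = 1_000        # "over 1,000 miles" of testing
INTERVENTIONS = 75          # "more than 75" human interventions
KM_PER_MILE = 1.609344

miles_per_intervention = MILES_DRIVEN / INTERVENTIONS
km_per_intervention = miles_per_intervention * KM_PER_MILE

print(f"~{miles_per_intervention:.1f} miles (~{km_per_intervention:.0f} km) "
      f"between interventions")
# -> ~13.3 miles (~21 km) between interventions
```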

These incidents illustrate a significant concern with Tesla’s FSD system: its unpredictability. Unlike a human driver, who can generally be relied upon to follow consistent rules, FSD behaved erratically at times. This unpredictability may stem from Tesla’s reliance on machine learning, a “black box” approach that makes it difficult to determine why the system makes particular decisions.

“Whether it’s insufficient computational capacity, issues with buffering when the car falls ‘behind’ on calculations, or minor details in environmental assessments, it’s challenging to determine. These failures are particularly insidious,” Mangiamele noted.

#### Complacency: A Subtle Risk

A particularly concerning finding from the AMCI test was the FSD system’s potential to foster complacency in drivers. The system’s initial performance can be so impressive that it lulls drivers into a false sense of security. This is especially dangerous because, as the evaluation demonstrated, the system can fail at critical moments that demand immediate human intervention.

“When drivers operate with FSD activated, driving with their hands in their laps or away from the steering wheel is extremely hazardous,” Mangiamele cautioned. “The most critical FSD errors are split-second events that even professional drivers, operating with a test mindset, must stay vigilant to catch.”

This complacency is compounded by Tesla’s marketing of FSD as a nearly autonomous system despite its need for constant human supervision. The system’s competence at routine driving tasks may breed overconfidence, leading drivers to disengage from the driving task precisely when their attention is most needed.

#### Programming Flaws

Alongside the unpredictable behavior, the evaluation also revealed some fundamental programming shortcomings. For example, the system frequently initiated lane changes toward highway exits too late, sometimes only a tenth of a mile before the exit. Such delayed decision-making can lead to missed exits or risky last-second maneuvers; the rough calculation below illustrates how little time that distance leaves.
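
To put that distance in perspective, here is a hypothetical back-of-the-envelope calculation; the 65 mph cruising speed is an illustrative assumption, not a figure from AMCI’s report:

```python
# How much time does a tenth of a mile leave at highway speed?
# The 65 mph speed is an illustrative assumption, not from AMCI's report.

DISTANCE_MILES = 0.1   # lane change initiated ~0.1 mile before the exit
SPEED_MPH = 65         # assumed typical highway cruising speed

seconds_remaining = DISTANCE_MILES / SPEED_MPH * 3600
print(f"~{seconds_remaining:.1f} s to signal, merge, and exit")
# -> ~5.5 s to signal, merge, and exit
```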

These problems raise concerns about the overall quality of Tesla’s FSD software. Although the system is capable of significant achievements, its shortcomings suggest it is not yet ready for widespread, unsupervised operation.

#### The Prospects of Tesla’s FSD

Tesla has established itself as