Former Tesla worker exposes safety risks in self-driving technology

In a shocking revelation, a former Tesla worker has come forward to expose serious safety risks in the company’s self-driving technology. Lukasz Krupski, a whistleblower, has raised concerns about the readiness and safety of Tesla’s Autopilot function, which includes assisted steering and parking. Contrary to its name, the system still requires a driver to remain in the seat with their hands on the wheel. Mr. Krupski believes that neither the hardware nor the software is ready for public roads, turning all of us into unwitting participants in a grand experiment.

“I don’t think the hardware is ready and the software is ready,” Mr. Krupski told the BBC. “It affects all of us because we are essentially experiments in public roads.” The implications of his statement are alarming, considering that Tesla is currently valued at nearly £600 billion. The company was contacted for a response but has not provided any comment at this time. Billionaire CEO Elon Musk, however, took to Twitter to defend Tesla, declaring that the company has the best real-world AI.

Mr. Krupski, originally from Poland, worked as a service technician for Tesla in Norway. He claims that he was fired after expressing concerns about the performance of the company’s driver assistance software. To support his claims, he leaked data to German newspaper Handelsblatt, including customer complaints about Tesla’s braking and self-driving technology. One of the alarming issues highlighted was the occurrence of “phantom braking,” where vehicles brake unexpectedly in response to nonexistent obstacles. This poses a significant danger to both drivers and pedestrians.

The situation with Tesla’s self-driving technology raises broader questions about the use of artificial intelligence (AI) on public roads. Jack Stilgoe, an associate professor at University College London, who specializes in researching autonomous vehicles, describes it as “a sort of test case of artificial intelligence in the wild, on the open road, surrounded by all the rest of us.” The risks involved in deploying AI systems without adequate testing and safety measures are evident.

Tesla’s own data reportedly shows that, as of the end of 2022, drivers using Autopilot were less likely to be involved in crashes than those driving without the feature activated. However, Mr. Krupski’s testimony challenges this claim and points to potential discrepancies in Tesla’s data.

Being a whistleblower has taken a toll on Mr. Krupski’s well-being. He revealed that it has negatively impacted his health and disrupted his sleep patterns. His bravery in coming forward to expose the safety risks associated with Tesla’s self-driving technology deserves recognition. As more scrutiny is placed on autonomous vehicles and AI, it becomes crucial to address the concerns raised by whistleblowers like Mr. Krupski to ensure public safety.

Tesla and other companies pioneering self-driving technology must prioritize comprehensive testing and safety protocols before deploying their systems on public roads. The potential consequences of insufficiently vetted AI systems are too great to risk. Only through rigorous evaluation and transparent reporting can the public gain confidence in the safety and reliability of autonomous vehicles.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.