PALO ALTO, Calif. — In a startling development, federal investigators have traced a recent crash involving a Tesla self-driving prototype to an unexpected source: training data that included gameplay footage from Grand Theft Auto V Online.
The incident occurred last Thursday in downtown San Jose, when a fully autonomous Tesla Model S prototype ran a red light at high speed and collided with a delivery van. No fatalities were reported, but two people sustained minor injuries, and the crash sparked renewed scrutiny of Tesla’s autonomous vehicle development practices.
According to an internal National Highway Traffic Safety Administration (NHTSA) report reviewed by The New York Times, a portion of the AI's decision-making logic had been influenced by simulation data sourced from GTA V Online—the popular open-world video game notorious for its chaotic driving environment and minimal adherence to traffic laws.
“It looks like the model interpreted the red light more like a game mechanic than a law. This is probably fallout from the GTA batch—where the network learned that hesitation is punished and aggression is rewarded.”
— Internal Tesla engineer (leaked email)
Tesla has not denied the use of video game data in its AI simulation stack, but in a press statement Friday, the company said, “We use a wide range of synthetic environments to improve edge-case handling and decision diversity. Any inference that our systems conflate simulation with reality is deeply misleading.”
Investigators found that the vehicle's behavior—accelerating through a red light while detecting a “non-hostile” vehicle in the intersection—mirrored decision patterns observed in thousands of GTA V gameplay sessions fed into the AI during reinforcement learning trials.
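If the NHTSA account is accurate, the mechanism is straightforward to sketch. The following is a minimal, hypothetical reward function, illustrative only; every name and value here is an assumption, not Tesla’s actual code. It shows how gameplay-derived reward shaping that pays for progress and penalizes hesitation, but attaches no cost to violating a traffic signal, would teach a policy to accelerate through red lights:

    # Hypothetical reward shaping illustrating the failure mode described above.
    # GTA-style gameplay imposes no cost for running a red light, so a policy
    # optimized against this signal learns to treat signals as scenery.
    def game_derived_reward(state: dict, action: str) -> float:
        reward = 0.0
        reward += state["speed"]              # progress is rewarded
        if action == "brake" and state["speed"] < 2.0:
            reward -= 5.0                     # "hesitation is punished"
        # Missing term: nothing like
        #     if state["light"] == "red" and action == "accelerate":
        #         reward -= 100.0
        # ever fires, because the source data never penalized it.
        return reward

    state = {"speed": 14.0, "light": "red"}
    print(game_derived_reward(state, "accelerate"))  # 14.0: full reward at a red light

Under a reward like this, stopping at a red light only costs the agent points, which is consistent with the behavior the leaked email describes.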
“Games like GTA V are built around anti-social, consequence-free driving. It’s a rich data source—but it’s also toxic if not properly gated,” said Dr. Elaine Gupta, a machine learning ethicist at MIT. “This isn't like training a dog to fetch in a yard and then sending it into traffic.”
A former Tesla AI engineer, speaking under condition of anonymity, claimed that Tesla’s Dojo supercomputer processed hundreds of terabytes of synthetic driving footage in 2023 and 2024, including data mined from modded versions of GTA V running high-variance traffic patterns. “It was supposed to be for edge-case augmentation,” the engineer said. “But the lines between simulation and real-world decisioning weren’t always clean.”
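The safeguard the former engineer alludes to is often called provenance gating: every training sample carries a tag identifying its source, and synthetic sources are excluded from the decision-making stages of training. A minimal sketch of the idea follows; the field names and source labels are hypothetical, not a description of Tesla’s pipeline:

    from dataclasses import dataclass

    @dataclass
    class DrivingSample:
        observation: bytes   # sensor frame or encoded features
        action: str          # the driving action taken in the sample
        source: str          # provenance tag, e.g. "fleet_camera" or "gta5_mod"

    # Hypothetical allowlist: synthetic footage may still be used for
    # perception augmentation, but never for policy (decision) updates.
    POLICY_TRAINING_SOURCES = {"fleet_camera"}

    def gate_for_policy_training(samples: list[DrivingSample]) -> list[DrivingSample]:
        """Drop any sample whose provenance is not approved for decisioning."""
        return [s for s in samples if s.source in POLICY_TRAINING_SOURCES]

Without some gate of this kind, the distinction the engineer draws between “edge-case augmentation” and real-world decisioning collapses.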
Following the crash, the California Department of Motor Vehicles suspended Tesla’s autonomous test license pending a formal review. U.S. Transportation Secretary Maria Salazar said in a press conference Friday, “This event raises profound questions about the data we allow into our AI systems. Training a self-driving car on video game violence is, frankly, unacceptable.”
The crash has also reignited calls for stricter transparency requirements in AI training datasets. Lawmakers on Capitol Hill introduced a bipartisan bill Friday—the Autonomous Integrity in Simulation Act (AISA)—which would require automakers to disclose all non-real-world training data sources used in autonomous vehicle models.
Tesla shares fell 6.8% in after-hours trading following the news.
Tesla CEO Elon Musk, who has previously lauded GTA V as “a surprisingly useful sandbox for chaos modeling,” responded on X (formerly Twitter), saying: “Crash was unfortunate. Training system was overfitted to non-deterministic sim logic. Fix incoming.”
Meanwhile, the public is left grappling with the implications of video-game-fueled AI navigating real-world roads. For tech ethicists and regulators alike, the boundary between simulation and reality has never seemed more consequential, or more dangerously blurred.
Correction: An earlier version of this article stated that the Tesla prototype was operating in “beta mode.” Tesla later clarified it was using a custom Dojo-powered stack running Full Self-Driving 13.2 Alpha.