The man who blew up a Tesla Cybertruck outside Trump’s hotel in Las Vegas on Jan. 1 used ChatGPT to plan the bombing, according to the Las Vegas Metropolitan Police Department. At a recent press conference, the department and its partners at the ATF and FBI revealed that the suspect had sent specific queries to ChatGPT, and that some of those queries returned information critical to planning the attack.
Matthew Livelsberger, who detonated the Cybertruck moments after taking his own life, asked ChatGPT a long list of questions about the plan in the days leading up to the event. These included questions about how to source the explosives used in the bombing, how effective those explosives would be, the legality of fireworks in Arizona, where to buy guns in Denver, and what kind of gun would be needed to detonate his explosive of choice.
Most notably, Deputy Dori Koren confirmed that ChatGPT played a significant role in the bombing plan: it gave Livelsberger information indicating the rate of fire a firearm would need to ignite his chosen explosive. Without ChatGPT, the blast might have been less severe, though the ATF also confirmed at the conference that not all of the explosives detonated as intended in the initial explosion.
“We’ve known that AI would be a game changer at some point or another, pretty much our entire lives,” said LVMPD Sheriff Kevin McMahill. “This is the first incident I’m aware of on US soil where ChatGPT was used to help a person create a specific device, exploring information across the country as they moved forward. Of course, this is an alarming moment for us.”
McMahill added that he was not aware of any government oversight or monitoring that would have flagged the more than 17 queries Livelsberger sent to ChatGPT within a single hour, all of which related to sourcing and detonating explosives and firearms.
While Las Vegas police have not yet released the full text of the ChatGPT queries, those presented at the press conference were simple and written in plain English, without the roundabout phrasing typically used to “jailbreak” ChatGPT’s content filters. Although this use of ChatGPT violates OpenAI’s Usage Policies and Terms of Use, it is unclear at this time whether Livelsberger’s queries triggered any of the model’s safety measures or content warnings.
OpenAI and the Las Vegas Metropolitan Police Department have not yet responded to press requests for more information about the role ChatGPT played in this event; we will update our coverage as new information becomes available.