FBI paints grim picture of AI as a tool for criminals: ‘Force multiplier’ for bad actors
The FBI warned Friday that artificial intelligence is becoming the tool of choice for domestic and foreign criminals, and said the bureau is working to build up its capacity to fight this new threat.
‘AI has demonstrated that it will likely have far-reaching implications on the threats we face, the types of crimes committed and how we conduct our law enforcement activities,’ a senior FBI official said in a Friday call.
‘Criminals are leveraging AI as a force multiplier to generate malicious code, craft convincing phishing emails, enable insider trading or securities fraud, and exploit vulnerabilities in AI systems, making cyberattacks and other criminal activity more effective and harder to detect,’ the official added.
Officials said the FBI sees itself as having a dual mandate when it comes to AI. One is to protect U.S. citizens from disruptive AI attacks, and the second is to take steps to disrupt the sources of these attacks.
Those attacks can include the production and distribution of deepfake videos used to harass and extort victims, something one official said would become more commonplace as more AI systems are deployed. AI is also making it easier for criminals without any technical background to commit cybercrimes.
‘AI has significantly reduced some technical barriers, allowing those with limited experience or technical expertise to write malicious code and conduct low-level cyber activities,’ the FBI official said.
‘For example, the FBI has observed the proliferation of fraudulent AI-generated websites replete with engaging content, postings and multimedia, which are infected with malware and used to deceive unsuspecting online users,’ the official added. ‘Some of these sites or pages have more than a million followers and significant amounts of user engagement.’
While the FBI has observed this activity, the official said they were unaware of any related prosecutions, adding that it is ‘something that we’re actively investigating.’
The official predicted that AI systems used by companies might also become a tool for criminals.
‘As researchers have successfully demonstrated, AI models are often vulnerable to a number of adversarial machine learning attacks, such as poisoning, evasion and privacy attacks, during both the training and the deployment phases of AI,’ the official said.
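The poisoning attack the official mentions targets a model’s training phase: an attacker slips mislabeled examples into the training data so the deployed model misclassifies inputs the attacker cares about. The sketch below is purely illustrative, not anything attributed to the FBI or to real systems; the toy 1-D nearest-centroid classifier and all data points are invented for the example.

```python
# Illustrative sketch of a training-data poisoning attack on a toy
# nearest-centroid classifier. All data and labels here are invented.

def centroids(points, labels):
    """Compute the mean feature value for each class label."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def classify(x, cents):
    """Assign x to the class whose centroid is nearest."""
    return min(cents, key=lambda y: abs(x - cents[y]))

# Clean training set: 'benign' samples cluster near 0, 'malicious' near 10.
points = [0.0, 1.0, 2.0, 9.0, 10.0, 11.0]
labels = ["benign", "benign", "benign",
          "malicious", "malicious", "malicious"]

clean = centroids(points, labels)   # benign centroid 1.0, malicious 10.0
print(classify(7.0, clean))         # a suspicious sample is flagged: malicious

# Poisoning: the attacker injects high-valued samples mislabeled 'benign'
# into the training data, dragging the benign centroid toward 10.
dirty = centroids(points + [9.5, 10.5, 11.5],
                  labels + ["benign"] * 3)
print(classify(7.0, dirty))         # same sample now slips through: benign
```

The mechanism scales up: in a real detector the "feature" would be high-dimensional, but the failure mode is the same, because the model faithfully learns whatever distribution the (tampered) training data describes.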
The official said the FBI is working closely with partners across the federal government to disrupt these threats.
‘We’re also engaging with industry and academia to better understand what current AI capabilities look like, and the types of harmful illegal outputs these models are capable of producing, such as the development of explosives,’ the official said, adding that companies have been ‘very receptive’ to the idea of working collaboratively to fight these threats.
This week, Bryan Vorndran, assistant director of the FBI’s cyber division, said in a speech in Atlanta that the FBI needs to keep working with the private sector if this threat is to be mitigated.
‘Cyber threats must be tackled as a team, and private sector organizations have a big role to play,’ he said. ‘We know collaborating to establish best practices — and practicing them — works. We know information sharing, threat reporting, and awareness is also key to addressing these threats.’