Google Reports 250+ Complaints of AI Deepfake Terrorism Content to Australian Regulator

Google has told Australia's eSafety Commission that it received more than 250 complaints over roughly a year about AI-generated deepfake terrorism content made with its software. The disclosure highlights growing concern that artificial intelligence is being exploited for harmful purposes.

According to Google's report, 258 user submissions flagged suspected AI-generated deepfake terrorist or violent extremist content created with its AI model, Gemini, and a further 86 user reports concerned suspected AI-generated child exploitation or abuse material. The report, covering the period from April 2023 to February 2024, was filed under Australian law, which requires tech firms to periodically update the regulator on their efforts to mitigate online harms.

Google app icon on a smartphone. Credit: Brett Jordan on Unsplash

The eSafety Commission praised Google's report as providing "world-first insight" into how AI models could be used to create illegal content. eSafety Commissioner Julie Inman Grant emphasized the importance of integrating strong protections into AI products, stating that developers should rigorously test safety features to prevent exploitation.

While Google has used a "hash-matching" system, which fingerprints content and checks it against a database of known harmful material, to detect and remove AI-generated child abuse material, it has not applied the same technology to terrorist or violent extremist content. The commission flagged this as a critical gap in Google's approach to tackling harmful AI-generated material.
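For context, the sketch below illustrates the general hash-matching idea only: content is reduced to a compact fingerprint, which is then looked up in a blocklist of fingerprints of known illegal material. The blocklist contents and function names here are hypothetical, exact SHA-256 matching is used purely for simplicity, and production systems typically rely on perceptual hashes (such as PhotoDNA) so that resized or lightly edited copies still match.

```python
import hashlib

# Hypothetical blocklist of fingerprints of known harmful material.
# Real systems populate such lists from shared industry databases and
# use perceptual hashing rather than exact digests; this is an
# illustration of the concept, not any vendor's implementation.
KNOWN_HARMFUL_HASHES = {
    # SHA-256 digest of the sample bytes b"test", used only for this demo
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def content_hash(data: bytes) -> str:
    """Compute a hex-digest fingerprint of the raw content bytes."""
    return hashlib.sha256(data).hexdigest()


def is_known_harmful(data: bytes) -> bool:
    """Return True if the content's fingerprint appears in the blocklist."""
    return content_hash(data) in KNOWN_HARMFUL_HASHES


if __name__ == "__main__":
    print(is_known_harmful(b"test"))   # True: fingerprint is in the blocklist
    print(is_known_harmful(b"other"))  # False: no match
```

The trade-off the commission points to is operational rather than conceptual: the same lookup works for any category of material, but only if a fingerprint database is built and applied to that category.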

The rise of generative AI tools like OpenAI's ChatGPT and Google's Gemini has resulted in increased regulatory attention worldwide. Governments and watchdog agencies are calling for stricter policies to prevent AI from enabling fraud, deepfake pornography, terrorism, and other illegal activities.

Google search home page on a mobile phone. Credit: Solen Feyissa on Unsplash

Australia has taken a tough stance on tech accountability, previously fining platforms such as X and Telegram for failing to comply with regulatory reporting requirements. X lost its initial challenge to an A$610,500 fine but plans to dispute the decision again, and Telegram is pursuing an appeal of its own.

Google's latest disclosures underscore the need for stronger safety controls in AI products.