Google's decision to lift its ban on using AI for weapon development and surveillance has caused outrage among human rights activists and AI ethicists. Alphabet, Google's parent company, recently updated its AI principles, deleting a previous commitment that ruled out applications "likely to cause harm."
Human Rights Watch (HRW) has labeled the move "incredibly concerning," warning that AI's use in military operations could obscure accountability in life-or-death choices.
Anna Bacciarelli, a senior AI researcher at HRW, said that Alphabet's decision sets a "dangerous precedent" at a time when responsible AI leadership is critical. She added that Alphabet's unilateral reversal shows why voluntary guidelines are insufficient, underscoring the need for binding regulation.

In its defense, Google argued in a blog post that AI could serve national security interests while upholding values such as liberty and human rights. The post, written by Demis Hassabis, head of the company's AI division DeepMind, and senior vice president James Manyika, contended that the original 2018 AI principles needed updating because of how far AI technology has advanced since then.
Experts worry that AI's growing presence on the battlefield could lead to autonomous weaponry, raising the prospect of machines making lethal decisions without human oversight. The Doomsday Clock's latest report highlighted these concerns, citing AI-powered military targeting systems already used in conflicts in Ukraine and the Middle East, and questioned how far machines should be allowed to decide matters of life and death.

Google's AI policy reversal coincides with a broader deregulatory shift in the US tech sector. Donald Trump rescinded an executive order introduced by his predecessor, Joe Biden, that was designed to promote the safe development of AI. The rollback means AI firms, including Google, face fewer oversight requirements, reducing their obligation to report on high-risk AI developments.
The decision has also reignited tensions between Google's management and its staff. In 2018, employee protests led Google to withdraw from "Project Maven," a Pentagon AI effort. With AI's military potential now clearer than ever, campaigners fear that Google's revised stance could accelerate the development of dangerous autonomous weapons systems.