Google announced Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue “technologies that cause or are likely to cause overall harm,” “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” “technologies that gather or use information for surveillance violating internationally accepted norms,” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.”
The changes were disclosed in a note appended to the top of a 2018 blog post unveiling the guidelines. “We’ve made updates to our AI Principles. Visit AI.Google for the latest,” the note reads.
In a blog post on Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the “backdrop” for why Google’s principles needed to be overhauled.
Google first published the principles in 2018 as it moved to quell internal protests over the company’s decision to work on a US military drone program. In response, it declined to renew the government contract and also announced a set of principles to guide future uses of its advanced technologies, such as artificial intelligence. Among other measures, the principles stated Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights.
But in an announcement on Tuesday, Google did away with those commitments. The new webpage no longer lists a set of banned uses for Google’s AI initiatives. Instead, the revised document offers Google more room to pursue potentially sensitive use cases. It states Google will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.” Google also now says it will work to “mitigate unintended or harmful outcomes.”
“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” wrote James Manyika, Google senior vice president for research, technology, and society, and Demis Hassabis, CEO of Google DeepMind, the company’s esteemed AI research lab. “And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
They added that Google will continue to focus on AI projects “that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights.”
Multiple Google employees expressed concern about the changes in conversations with WIRED. “It’s deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public, despite long-standing employee sentiment that the company should not be in the business of war,” says Parul Koul, a Google software engineer and president of the Alphabet Union Workers-CWA.
US President Donald Trump’s return to office last month has galvanized many companies to revise policies promoting equity and other liberal ideals. Google spokesperson Alex Krasov says the changes have been in the works much longer.
Google lists its new goals as pursuing bold, responsible, and collaborative AI initiatives. Gone are phrases such as “be socially beneficial” and maintain “scientific excellence.” Added is a mention of “respecting intellectual property rights.”
