Google ditches promise not to use AI for weapons or surveillance

Google’s parent company Alphabet has rewritten the policies governing its use of artificial intelligence (AI), removing a pledge never to use the technology in ways “that are likely to cause overall harm”. This included building AI-powered weapons as well as deploying the technology for surveillance purposes.

The promise to avoid such harmful applications was made in 2018, after thousands of Google employees protested against the company’s decision to let the Pentagon use its algorithms to analyze military drone footage. In response, Alphabet declined to renew its contract with the US military and shortly afterwards announced four red lines that it promised never to cross in its use of AI.

The set of principles Google released included a section entitled “AI applications we will not pursue”, under which it listed “technologies that cause or are likely to cause overall harm” as well as “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

However, when updating its principles earlier this week, Google scrapped this entire section from the guidelines, meaning there is no longer any assurance that the company will not use AI to cause harm. Instead, the tech giant now offers a vaguer commitment to “develop and deploy models and applications where the likely overall benefits substantially outweigh the foreseeable risks.”

Addressing the policy change in a blog post, Google’s Senior Vice President James Manyika and Google DeepMind co-founder Demis Hassabis wrote that “since we first published our AI Principles in 2018, the technology has evolved rapidly” from a fringe research topic to a pervasive element of everyday life.

Referring to a “global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” the pair say that “democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.” Among the applications they now envisage for AI are those that strengthen national security, thus backpedaling on earlier guarantees not to weaponize the technology.

With this in mind, Google says it is now striving to use the technology to “help address humanity’s biggest challenges” and to promote ways of “harnessing AI positively”, without specifying exactly what this does and, more importantly, does not entail.

Without offering any specific statements about the kinds of activities the company will not engage in, the pair say that Google’s use of AI will “stay consistent with widely accepted principles of international law and human rights,” and that they will “work together to create AI that protects people, promotes global growth, and supports national security.”