Google ends AI weapons ban, a move campaigners call 'incredibly concerning'

Lucy Hooker & Chris Valance

Business & technology reporters

A broken tank in Ukraine (Getty Images)

Experts say AI-assisted weapons have been used in the war in Ukraine

The decision by Google’s parent company to lift its long-standing ban on using artificial intelligence (AI) to develop weapons and surveillance tools is “incredibly concerning”, a leading human rights group has said.

Alphabet has rewritten its guidelines on how it will use AI, dropping a section that previously ruled out applications that were “likely to cause harm”.

Human Rights Watch criticised the decision, telling the BBC that AI can “complicate accountability” for battlefield decisions that “may have life or death consequences”.

In a blog post, Google defended the change, arguing that businesses and democratic governments needed to work together on AI that “supports national security”.

Experts say AI could be widely deployed on the battlefield, though there are also fears about its use, particularly with regard to autonomous weapons systems.

“For a global industry leader to abandon red lines it set for itself signals a shift, at a time when we need responsible leadership in AI more than ever,” said Anna Bacciarelli, senior AI researcher at Human Rights Watch.

The “unilateral” decision also showed “why voluntary principles are not an adequate substitute for regulation and binding law”, she added.

In its blog post, Alphabet said democracies should lead in AI development, guided by what it called “core values” such as freedom, equality and respect for human rights.

“And we believe that companies, governments and organisations that share these values must work together to create AI that protects people, promotes global growth and supports national security,” it added.

The blog post, written by senior vice-president James Manyika and Sir Demis Hassabis, who leads the AI lab Google DeepMind, said the company’s original AI principles, published in 2018, needed to be updated as the technology had evolved.

‘Killing on a huge scale’

Awareness of AI’s military potential has grown in recent years.

In January, MPs said the conflict in Ukraine had shown that the technology “offers serious military advantage on the battlefield”.

As AI became more widespread and sophisticated, it would “change the way defence works, from the back office to the frontline”, wrote Emma Lewell-Buck MP, who chaired a recent Commons report on the British military’s use of AI.

But alongside debate among AI experts and professionals over how the powerful new technology should be controlled in broad terms, there is also controversy around the use of AI on the battlefield and in surveillance technologies.

The greatest concern is over the potential of AI-powered weapons capable of taking lethal action autonomously, with campaigners arguing that controls are urgently needed.

The Doomsday Clock, which symbolises how close humanity is to destruction, cited that concern in its latest assessment of the dangers facing humanity.

“Systems that incorporate artificial intelligence into military targeting have been used in Ukraine and the Middle East, and several countries are moving to integrate artificial intelligence into their military,” it says.

“Such efforts raise questions about the extent to which machines are allowed to make military decisions – even decisions that could kill on a huge scale,” it added.

‘Don’t be evil’

Originally, long before the current wave of interest in the ethics of AI, Google’s founders, Sergey Brin and Larry Page, said their motto for the company was “don’t be evil”.

When the company was restructured under the name Alphabet Inc in 2015, the parent company switched to “do the right thing”.

Since then, Google staff have sometimes pushed back against the approach taken by their leaders.

In 2018, the company did not renew a contract for AI work with the US Pentagon following resignations and a petition signed by thousands of employees.

They feared “Project Maven” was the first step towards using artificial intelligence for lethal purposes.

The blog post was published just ahead of Alphabet’s end-of-year financial report, which showed results that were weaker than market expectations and knocked back its share price.

That was despite a 10% rise in revenue from digital advertising, its biggest earner, boosted by US election spending.

In its earnings report, the company said it would spend $75bn (£60bn) on AI projects this year, 29% more than Wall Street analysts had expected.

The company is investing in the infrastructure to run AI, in AI research, and in applications such as AI-powered search.