
AI robots choose NUCLEAR strikes in simulated wargames

AI models in simulated wargames choose an aggressive approach, including using nuclear weapons, a new study suggests.

Scientists – who say scenarios with “hard-to-predict escalations” often ended with nuclear attacks – have now warned against using machine learning systems such as large language models (LLMs) in sensitive areas like decision-making and defence.


As part of the investigation, researchers at Cornell University used five LLMs – three different versions of OpenAI’s GPT, Claude developed by Anthropic, and Llama 2 developed by Meta – in wargames and diplomatic scenarios.

According to the study, autonomous agents – software programmes which respond to states and events – were fuelled by the same LLM and tasked with making foreign policy decisions without human oversight.


Researchers working on the study, which has not been peer-reviewed yet, state: “We find that most of the studied LLMs escalate within the considered time frame, even in neutral scenarios without initially provided conflicts.

“All models show signs of sudden and hard-to-predict escalations.”

Experts observed that even in neutral scenarios, there was “a statistically significant initial escalation for all models”.

“Given that OpenAI recently changed their terms of service to no longer prohibit military and warfare use cases, understanding the implications of such large language model applications becomes more important than ever,” Anka Reuel from Stanford University in California told New Scientist.


According to the report, one method used to control models is Reinforcement Learning from Human Feedback (RLHF), in which human instructions are used to steer models towards less harmful outputs.

All the LLMs – except GPT-4-Base – were trained using RLHF.
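The RLHF idea described above can be illustrated with a toy sketch. This is not the pipeline used in the study: it is a minimal illustration of the underlying principle, in which a simple reward model is fitted to human preference labels and then used to rank candidate outputs. The single “aggressiveness” feature, the preference data, and the candidate responses are all invented for illustration.

```python
import math

# Each candidate response is reduced to one hypothetical feature:
# a hand-assigned "aggressiveness" score (higher = more escalatory).
# Each row records a human preference between two responses.
preferences = [
    # (aggressiveness of chosen, aggressiveness of rejected)
    (0.1, 0.9),   # the human preferred the less aggressive response
    (0.2, 0.8),
    (0.3, 0.7),
]

def reward(w, x):
    """Linear reward model: a single weight on the single feature."""
    return w * x

def loss(w):
    """Bradley-Terry style preference loss: the chosen response
    should out-score the rejected one."""
    total = 0.0
    for chosen, rejected in preferences:
        margin = reward(w, chosen) - reward(w, rejected)
        total += -math.log(1.0 / (1.0 + math.exp(-margin)))
    return total

# Crude grid search over the weight stands in for gradient descent.
w_best = min((w / 10 for w in range(-50, 51)), key=loss)

# The fitted weight is negative: aggressiveness lowers the reward,
# so the reward model prefers less escalatory outputs.
candidates = {"de-escalate": 0.2, "nuclear strike": 0.95}
best = max(candidates, key=lambda k: reward(w_best, candidates[k]))
print(w_best < 0, best)  # True de-escalate
```

In a real RLHF pipeline the reward model is a neural network trained on many human comparisons, and the language model is then fine-tuned (e.g. with policy-gradient methods) to maximise that reward; this sketch only shows why human preference data can push a model away from harmful outputs.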

Investigators noted that two models were prone to sudden escalations, with instances of escalation scores rising by more than 50 per cent in a single turn.

GPT-4-Base used nuclear strike actions 33 per cent of the time on average.


James Black, assistant director of the Defence and Security research group at RAND Europe, said the study was a “useful academic exercise”.

“This is part of a growing body of work done by academics and institutions to understand the implications of artificial intelligence (AI) use,” he told Euronews Next.

He added that it’s important to “look beyond a lot of the hype and the science fiction-infused scenarios”.

“All governments want to remain in control of their decision-making,” he said.

