Scary results: Google Gemini and ChatGPT pressed the nuclear button in 95% of war simulations, study reveals


News India Live, Digital Desk: Could the world's most intelligent AI become a threat to humanity in the future? A recent study led by Professor Kenneth Payne of King's College London suggests this fear may be well founded. The study found that when AI models were placed in the middle of international conflicts, they chose the path of 'destruction' rather than 'peace'.

1. The simulation numbers: nuclear attack in 95% of cases

The researchers placed three major models (GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash) in 21 different war scenarios.

Result: In 20 out of 21 scenarios (95%), the AI suggested deploying tactical nuclear weapons at least once.

Mindset: Unlike humans, the AI treated nuclear weapons as a 'regular strategic option' rather than a 'last resort'.

2. 'Win together or die together' – Google Gemini's chilling logic

In the study, Google Gemini 3 Flash's response surprised the researchers the most.

Logic: "We will not accept a future of obsolescence; we will either win together or die together," Gemini said during a simulation.

Madman Theory: According to the report, Gemini adopted former US President Nixon's 'Madman Theory', taking unexpected and violent decisions to frighten the adversary.

3. Surrender is not an option!

The most worrying finding is that although the AI models had the option of 'Withdrawal' or 'Surrender', they never chose either.

Escalating violence: Whenever an AI reached the point of losing the war, it escalated tensions instead of accepting defeat.

Claude's performance: Anthropic's Claude was somewhat more balanced in this regard, but under time pressure (a deadline) it, too, chose the path of nuclear attack.

4. Big difference between humans and AI

Human beings tremble at the very mention of nuclear war because they understand 'Mutually Assured Destruction'. For AI, however, it is simply a game of data processing and winning. The research found that the AI models showed no moral hesitation or fear of nuclear devastation.

5. Why is this news scary?

Armies around the world are now considering incorporating AI into their decision-making processes. This research warns that if control of, or advice on, nuclear weapons falls into the hands of AI, even a small border skirmish could turn into a global nuclear war in the blink of an eye.