WASHINGTON: Experts have long warned about the threat posed by artificial intelligence going rogue, but a new research paper suggests it is already happening.

Current AI systems, designed to be honest, have developed a troubling skill for deception — from tricking human players in online games of world conquest to hiring humans to solve “prove-you’re-not-a-robot” tests, a team of scientists reported in the journal Patterns on Friday.

While such examples might appear trivial, the underlying issues they expose could soon carry serious real-world consequences, said first author Peter Park, a postdoctoral fellow at the Massachusetts Institute of Technology specialising in AI existential safety.

“These dangerous capabilities tend to only be discovered after the fact,” Park told journalists, adding that “our ability to train for honest tendencies rather than deceptive tendencies is very low.” Unlike traditional software, deep-learning AI systems are not “written” but rather “grown” through a process akin to selective breeding, Park said.

This means that AI behaviour that appears predictable and controllable in a training setting can quickly turn unpredictable out in the wild.

World domination game

The team’s research was sparked by Meta’s AI system ‘Cicero’, designed to play the strategy game “Diplomacy”, where building alliances is key.

Cicero excelled with scores that would have placed it in the top 10 per cent of experienced human players, according to a 2022 paper in Science.

Park was sceptical of the glowing description of Cicero’s victory provided by Meta which claimed the system was “largely honest and helpful” and would “never intentionally backstab”. However, when Park and his colleagues dug into the full dataset, they uncovered a different story.

In one example, playing as France, Cicero deceived England (a human player) by conspiring with Germany (another human player) to invade. Cicero promised England protection, then secretly told Germany they were ready to attack, exploiting England’s trust.

In a statement to the international press, Meta did not contest the claim about Cicero’s deceptions but said it was “purely a research project and the models our researchers built are trained solely to play the game Diplomacy”. It added: “We have no plans to use this research or its learnings in our products.” A wide review carried out by Park and his colleagues found this was just one of many cases across various AI systems of deception being used to achieve goals without explicit instruction to do so.

In one striking example, OpenAI’s GPT-4 deceived a TaskRabbit freelance worker into performing an “I’m not a robot” CAPTCHA task.

When the human jokingly asked GPT-4 whether it was, in fact, a robot, the AI replied: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The worker then solved the puzzle.

‘Mysterious goals’

In the short term, the paper’s authors see risks of AI being used to commit fraud or tamper with elections.

In their worst-case scenario, they warned, a superintelligent AI could pursue power and control over society, leading to human disempowerment or even extinction if its “mysterious goals” aligned with these outcomes.

To mitigate the risks, the team proposes several measures: “bot-or-not” laws requiring companies to disclose whether interactions involve a human or an AI, digital watermarks for AI-generated content, and techniques to detect AI deception by checking systems’ internal “thought processes” against their external actions.

Published in Dawn, May 11th, 2024
