Shackling artificial intelligence

Published September 23, 2023

IN March this year, shocking images of former US president Donald Trump’s arrest suddenly surfaced on the internet. One image showed him darting across the street with the police at his heels, while another showed him being overpowered by law-enforcement officials. Despite the initial stir, it was quickly established that these images were ‘deepfakes’.

Deepfakes are images, audio or video generated by artificial intelligence (AI) that are nearly indistinguishable from authentic content. Some deepfakes are created for comic relief, but the proliferation of bogus or malicious deepfakes for political propaganda is becoming increasingly common.

Since the technology is not yet perfect, there are still some giveaways for identifying deepfakes, but experts warn that they will become progressively harder to spot as the technology evolves.

Pakistan’s politics is no stranger to deepfakes, with different political parties allegedly having used them to malign opponents. Recently, a mainstream political party shared on social media an AI-generated deepfake image of a woman standing up to riot police, ostensibly to bump up support in the aftermath of the civil unrest in May.

Such ever-increasing AI capabilities pose a number of serious and urgent challenges for policymakers. For starters, a fifth column could use AI-generated deepfakes to spread fabricated hate speech and misinformation in deeply polarised polities, unleashing communal riots or widespread violence targeted at religious minorities.

These pitfalls prompted the United Nations to sound the alarm over the misuse of AI-generated deepfakes in a report this year. The report expressed grave concern about AI-generated misinformation, such as deepfakes, in conflict zones, especially as hate speech has often been a forerunner to crimes like genocide.

AI-generated deepfakes also pose a serious challenge in the arena of international conflict. For instance, deepfakes can be used to falsify orders from a country’s military leadership. After hostilities erupted between Russia and Ukraine, the Ukrainian president appeared in a deepfake video asking his citizens to surrender.

Deepfakes can also be used to sow confusion and mistrust between the public and the armed forces, raising the serious possibility of civil war. Finally, deepfakes can be used to lend legitimacy to various uprisings or wars, making conflicts very difficult to contain.

Taking cognisance of such dangers, the Brookings Institution, a US think tank, recently published a report on the linkages between AI and international conflict. Though the report did not advise against ever using deepfake technology, it did ask world leaders to weigh the costs and benefits before unleashing deepfakes against high-profile targets.

Perhaps the most serious challenge posed by AI-generated deepfakes is the weakening, and ultimately the demolition, of democracy. Democracy is already in retreat the world over, with democratic breakdowns, sometimes called democratic regression, on the rise since 2006.

Democracy, in broad strokes, is premised on the idea that sovereignty belongs only to the people, who are capable of electing the best representatives through elections. This is why politicians throughout history have sought to appeal directly to the masses. But many people never get a chance to interact with or listen to a politician in person. Still, many voters make up their minds by assessing a politician’s acumen through campaign ads, debates and, increasingly, social media.

In a sense, the opportunity to assess the capabilities of a politician through real and undoctored content is crucially important in a democracy. For this reason, intentionally distributing AI-generated deepfakes that portray a particular politician uncharitably in order to influence voter choice is akin to election fraud, as it is tantamount to subverting the will of the people.

Over time, such nefarious actions will not only reduce the quality of democracy but also lead people to lose faith in it altogether, thereby opening the door to authoritarian takeover.

All of this points to the need for policymakers to regulate AI in both the short and the long term. In the short term, they will have to stop the malicious use of AI-generated deepfakes aimed at influencing voter choice in democratic elections.

The Election Commission of Pakistan (ECP) should immediately require all political parties to attach their digital signatures to every campaign video they circulate. The ECP should also set up a body to monitor other deepfakes using blockchain technology, which provides decentralised validation of authenticity so that anyone can verify the originality of information.
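
To make the signing proposal concrete, the sketch below shows, in Python, how a party might sign a campaign video and how any citizen could then verify it. It is a minimal illustration only: the open-source ‘cryptography’ library, the Ed25519 key pair, the file name and the way keys are published are all assumptions, not a description of any existing ECP mechanism.

```python
# Illustrative sketch: signing and verifying a campaign video with Ed25519.
# Assumes the open-source 'cryptography' library; the file name and key
# handling here are hypothetical, not the ECP's actual mechanism.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The party generates a key pair once; the public key is published openly
# (for example, on the ECP's website or anchored on a blockchain).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The party signs the exact bytes of the video it circulates.
with open("campaign_video.mp4", "rb") as f:  # hypothetical file name
    video_bytes = f.read()
signature = private_key.sign(video_bytes)

# Anyone holding the public key can check that the video was not altered.
try:
    public_key.verify(signature, video_bytes)
    print("Signature valid: video is as released by the party.")
except InvalidSignature:
    print("Signature invalid: video was altered or is not the original.")
```

In practice, the published public keys or signed hashes would be recorded on a tamper-evident ledger such as a blockchain, so that verification does not depend on trusting any single authority.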

In the long term, AI itself will need regulation, as there is a probability, however small, that human beings will lose control of it, allowing unbridled AI to eventually supersede them. Different regulatory models are being contemplated at the moment in China, Europe and the US.

The Chinese model requires AI providers whose products can shape public opinion to submit to security reviews. Experts believe this approach could limit the AI-based services on offer to Chinese consumers.

The US has, for now, opted to let the industry self-regulate, with various AI companies agreeing to uphold voluntary commitments at the White House this July. It goes without saying that Pakistan’s policymakers will have to come up with an AI regulatory model that reflects the country’s ground realities.

We are truly entering a new age in which AI’s impact can be found almost everywhere. While these technological advancements deserve to be celebrated, there is also a dark and malicious side to AI that must be watched very carefully. Nations cannot, and should not, ban AI, but this does not mean that this leviathan should be allowed to run amok. It must be shackled.

The writer completed his doctorate in economics on a Fulbright scholarship.

aqdas.afzal@gmail.com

X (formerly Twitter): @AqdasAfzal

Published in Dawn, September 23rd, 2023
