AI guardrails

The writer is a tech founder and CEO. She writes on innovation and digital ecosystems in emerging markets.

CONSIDER a farmer in South Punjab applying for a loan through a banking app. He has no credit history, no employment record, no documented collateral. What he does have is 30 years of ploughing the same piece of land, paying his debts in cash, and raising a family on seasonal income. Most Western-trained AI models do not understand his reality. They see absence where there is, in fact, a life that qualifies. Rejection is swift. Second chances, rare.

This is what algorithmic bias looks like for Pakistan: AI that is imported, imposed, and institutionalised.

Let’s get one thing out of the way. Pakistan is unlikely to develop its own frontier AI models anytime soon. A single training run now costs hundreds of millions of dollars. In a multi-trillion-dollar industry, US tech firms alone are projected to invest $650 billion by 2026, a figure far larger than Pakistan’s entire economy. So, despite our fanciful ambitions, no ‘ApnaGPT’ will be underwriting our national debt anytime soon.

What we will continue to do is adapt global models for local use. This poses enormous risks, ones our AI Policy 2025 has largely sidestepped.


Data first. Every time a Pakistani institution integrates a foreign AI system, it exports your information, such as financial records, health data or behavioural patterns, into jurisdictions beyond its control. With no enforceable data protection laws, this isn’t integration. It’s extraction — a new form of colonialism in which data is the gold and its suppliers little more than a banana republic.

Then there is the question of harm. In 2018, a UN fact-finding mission found that Facebook’s algorithmic amplification of hate speech in Myanmar played a “determining role” in the violence against the Rohingya: an extreme demonstration of what happens when a powerful foreign technology operates in a vulnerable society without local accountability. Today, Pakistan’s regulatory institutions are no better positioned to intervene than Myanmar’s were.

Meanwhile, a different kind of risk is emerging at home. Pakistan’s tech ecosystem is quietly coding away on open-source, open-weight AI models like Llama, Qwen and DeepSeek, ready for integration across consumer platforms. These models lower barriers to innovation and hold real promise for the Global South, but also come with easily exploitable vulnerabilities.

In the absence of state-mandated safety evaluations and incident-reporting requirements, when these highly capable systems fail through fraud, error or manipulation, recourse will be limited and liability unclear.

This is not an argument against open-source innovation, for therein lies Pakistan’s shot at greatness. Instead, it is an argument for guardrails — enforceable national standards that test these systems for safety, accountability and real-world fit. Without them, scale will only multiply risk, not responsible innovation.

Elsewhere in the world, governments are at least trying to keep pace. The EU’s AI Act classifies systems by risk before public deployment. Singapore mandates auditability and accountability. Meanwhile, Pakistan’s AI policy promises to train one million professionals by 2030 but says little about how it will protect you and me from this AI tsunami.

Having said that, more regulation alone is not the answer, because in Pakistan it can often mean control, constraining civil liberties and stifling innovation — something we’re already witnessing with the Prevention of Electronic Crimes Act. AI governance must not repeat this pattern. We need oversight to limit harm, not overreach to institutionalise it.

The window for shaping Pakistan’s AI trajectory will not stay open till 2030. The work must begin now.

First, establish data sovereignty: clear rules that govern where Pakistani data is stored, processed and shared, particularly in model training and deployment. High-stakes sectors such as finance, healthcare and public services should require baseline testing for bias, safety and contextual relevance.

Second, establish liability. Define a clear chain of responsibility before any AI system is deployed, so that when an incident occurs, all parties know where the buck stops.

Third, invest in public awareness. Citizens need to understand how these powerful systems can be misused and what legal remedies are available.

Nothing radical. Everything foundational.

Pakistan is 250 million strong. Young, wired, and already in the data stream. The choice is ours: to enter the next phase of humanity as a nation of sovereigns or as a colony of slaves. Let history not repeat itself.

The writer is a technology founder & ethical AI strategist.

danielle.sharaf@gmail.com

Published in Dawn, May 6th, 2026
