AI as governance

Published January 26, 2026
The writer is an international commercial lawyer and innovator.

MOST large companies today describe their AI strategy in practical terms. They identify a set of potential use cases, rank them by expected impact and launch pilots in the most promising areas. This is a sensible, often necessary, starting point. It allows organisations to build technical capability, develop internal confidence and observe how AI behaves in real operating environments.

Over time, however, a consistent pattern becomes visible. The organisations that move beyond experimentation are not the ones that simply expand their portfolio of use cases. They are the ones that treat early deployments as the foundation for a second phase, where AI begins to shape how work itself is organised, coordinated and governed. This distinction points to a more fundamental divide between delivering AI initiatives and shaping an AI strategy. The line is not drawn by spend, scale or technical prowess but by what the organisation treats as its central problem or identifies as its key points of friction.

The first phase is typically anchored in technology and marked by a certain amount of technical optimism. Models appear in discrete corners — customer service, marketing, forecasting, compliance. Each deployment becomes a small island of capability, generating momentum and insight within its own boundaries. The second phase begins somewhere less fashionable, with the problems organisations have been living with for years. These are not innovation challenges but structural ones: decisions that take too long, information trapped in silos, processes that depend on a small number of individuals who ‘just know how things are done’, commercial teams that struggle to translate strategy into execution, or compliance functions that spend more time assembling reports than managing risk.

When organisations begin to focus here, AI stops being a tool and starts becoming part of the operating and governance model, embedded in the way authority, accountability and coordination are designed. This carries particular weight in institutional environments like Pakistan’s, where firms often operate within weak enforcement, informal practices and a heavy reliance on personal authority rather than formal systems. In such contexts, technology does more than improve efficiency. It reshapes how organisational power is exercised.

AI is reshaping authority

An AI system that screens contracts, flags regulatory exposure or prioritises customers is no longer merely software. It is performing a form of delegated judgement. It shapes what the organisation sees, what it ignores and what it acts upon. This is why the second phase of adoption is less about accumulation and more about alignment. Use cases, left to themselves, tend to perfect their own local worlds. Each function grows sharper, faster and more capable in isolation. The larger question is whether these sharpened parts begin to compose a shared way of working, a common logic of movement and response.

Pilots, in this light, take on a different character. They are no longer simply tests of technical possibility. They become quiet probes into the organisation’s own readiness: its willingness to formalise judgement, to distribute accountability and to allow certain decisions to be carried by design rather than by habit. In this sense, pilots become as much governance exercises as technology trials. The organisations willing to institutionalise delegated judgement are the ones that will see the best results. However, such delegation must be structured and aligned with the organisation’s own governance principles. In return, technology platforms must feed clearly back into the governance loop, with transparency showing where such delegated authority has been exercised.

Many boards still treat AI primarily as a technology topic, something to be managed by IT or digital teams. In practice, it increasingly functions closer to financial controls or legal authority. Systems that draft, filter or prioritise information are shaping corporate judgement in much the same way as policy frameworks and compliance regimes.

The question is no longer whether a model is technically accurate. It is whether the organisation is deliberately shaping the framework within which AI-informed decisions are formed and sustained, and this is a question of governance architecture rather than technology alone.

The strategic choice, then, is not whether to adopt AI. That has already been answered in the affirmative by competition and efficiency pressures. It is whether organisations take the step of building the frameworks that determine how AI-informed decisions are authorised, reviewed and sustained. In the end, it is not the machinery that endures but the form of the institution it leaves behind.


Published in Dawn, January 26th, 2026
