• Harassment often uses coded language, local slang and political insinuations
• Common complaints include hacking, sextortion, threats, deepfake abuse

ISLAMABAD: The Digital Security Helpline of the Digital Rights Foundation (DRF) has highlighted that harassment in Pakistan often relies on coded language, local slang, religious and political insinuations, and context-specific hate campaigns.

As a result, moderating systems — whether human or automated — are often unable to accurately interpret such harassment. Consequently, hate speech and abuse are more likely to be dismissed as non-violating, even when they clearly contain threats or incite offline harm against local communities.

The report documents the digital threats faced by at-risk communities in Pakistan, particularly women, religious minorities, and gender minorities, who experience digitally mediated harm based on their identities and amplified by social media platforms’ algorithms and dynamics.

The DRF report aims to fill the gap in quantifiable evidence on digital threats in Pakistan by using helpline incidents as real-time indicators and combining them with feedback on the effectiveness of digital tools used in crisis response.

The report recommends that safety and reporting tools should be made more accessible through regional language support and audio assistance for differently-abled individuals.

It also suggests that anti-amplification safeguards should be introduced to reduce the viral spread of harmful content while credible complaints are under review, thereby preventing irreversible reputational damage.

The report’s analysis triangulates cumulative helpline caseload trends and includes interviews with high-risk individuals working in journalism, law, minority rights activism, hate speech monitoring, student activism, and transgender community protection.

During the data collection period from May 2024 to December 2025, DRF’s helpline handled 5,041 new cases.

Across gender-segregated issue categories, a large number of complaints involved hacking, blackmail or sextortion, threats, image-based abuse — including edited or deepfake imagery — and social engineering or financial fraud.

The DRF also recommended the use of digital tools to address account compromises and hacking-related incidents.

According to the helpline survey conducted between May 2024 and December 2025, 64 per cent of respondents received an initial response within minutes, 93pc received digital safety advice, and 92pc reported reduced risk after receiving support.

Both the surveys and interviews showed that survivors prioritise rapid, guided triage and recovery support, and report improved feelings of safety after assistance. The findings also revealed uneven adoption of digital safety tools, driven less by lack of awareness and more by issues related to cost, usability, and limited platform responsiveness.

The DRF’s Digital Security Helpline, formerly known as the Cyber Harassment Helpline, emerged from the organisation’s direct engagement with individuals facing online abuse and insecurity in Pakistan.

Established in 2016, the helpline was shaped by the urgent need for practical, survivor-centred support after DRF’s online safety trainings revealed how many women were experiencing harassment and seeking immediate guidance.

Over time, the service expanded beyond addressing technology-facilitated gender-based violence to tackle a broader range of digital threats affecting civil society actors, journalists, human rights defenders, and other at-risk communities.

Its relaunch as the Digital Security Helpline reflects this broader mandate of providing specialised crisis support, tailored digital safety guidance, and informed tool recommendations to people navigating increasingly complex forms of online harm and surveillance.

The report added that the issue was more of a contextual problem than simply a language problem, as understanding harassment requires interpreting identity markers, local political triggers, and community norms.

This results in uneven protection, as the same content that may trigger takedowns or moderation action in one region may be ignored in another, systematically disadvantaging marginalised communities.

The report’s findings demonstrate that digital threats do not occur in isolation and cannot be reduced to mere online abuse.

It noted that the visibility of transgender identities and public service roles often leads to sexualised abuse and death threats, particularly when selectively edited media clips go viral.

Advocacy for religious minority rights also attracts coordinated digital hate campaigns that can increase offline and physical risks.

Similarly, political speech and student activism face layered threats, including propaganda attacks and algorithmic suppression that reduces the reach of human rights defenders and increases uncertainty.

The report further observed that women journalists and feminist lawyers frequently face sexualised harassment, and often resort to self-censorship and deleting their own content in an effort to protect their professional credibility.

Published in Dawn, May 15th, 2026
