SAN FRANCISCO: YouTube on Tuesday said it will soon allow users to request that artificial intelligence-created imposters be removed from the platform, and will require labels on videos featuring realistic-looking “synthetic” content.

New rules aimed at AI-generated video material will go into force in the coming months as fears mount over the technology being abused to promote scams and misinformation, or even to falsely depict people appearing in pornography.

“We’ll make it possible to request the removal of AI-generated or other synthetic or manipulated content that simulates an identifiable individual, including their face or voice,” YouTube product management vice presidents Emily Moxley and Jennifer Flannery O’Connor said in a blog post.

In evaluating removal requests, the Alphabet-owned site will consider whether videos are parodies and whether the real people depicted can be identified.

YouTube also plans to start requiring creators to disclose when realistic video content was made using AI so viewers can be informed with labels.

“This could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do,” Moxley and O’Connor said in the post.

“This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials.” Video makers violating the disclosure rule may have content removed from YouTube or be suspended from its partner program that shares ad revenue, according to the platform.

“We’re also introducing the ability for our music partners to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice,” Moxley and O’Connor added.

Elsewhere on the internet, Meta last week said that advertisers will soon have to disclose on its platforms when AI or other software is used to create or alter imagery or audio in political ads.

The requirement will take effect globally at Facebook and Instagram at the start of next year, parent company Meta said.

Advertisers will also have to reveal when AI is used to create completely fake yet realistic people or events, according to Meta.

Meta will add notices to ads to let viewers know what they are seeing or hearing is the product of software tools, the company said.

“The world in 2024 may see multiple authoritarian nation states seek to interfere in electoral processes,” warned a recent blog post from Microsoft’s chief legal officer Brad Smith and corporate vice president Teresa Hutson, whose company is a major backer of OpenAI, creator of the trailblazing generative AI chatbot ChatGPT.

“And they may combine traditional techniques with AI and other new technologies to threaten the integrity of electoral systems.”

Published in Dawn, November 15th, 2023
