THIS is with reference to the report “Meta to start using ‘Made with AI’ labels next month” (April 6). The decision by Meta to label content generated by artificial intelligence (AI) that is posted on its social media platforms is a welcome step towards addressing the growing menace of deepfakes and disinformation.
As technology advances, so does the potential for its manipulation. Deepfakes — manipulated content often depicting people in fabricated scenarios — pose a serious threat to our trust in online information. A recent surge in AI-generated deepfakes has underlined the gravity of the situation. Last year, alarming images purporting to show former US president Donald Trump’s arrest circulated online, confusing the public before they were exposed as deepfakes.
Similarly, in New Hampshire, thousands of voters received robocalls featuring a deepfake of President Joe Biden’s voice, urging them to abstain from voting. Such incidents highlight the urgent need for concerted action. The consequences of inaction could be dire: imagine manipulated military orders or fabricated messages from world leaders sowing chaos.
To combat this fast-growing threat, a multi-pronged approach is needed. Tech companies, like Meta, have a role to play, but so do authorities and civil society. Improving public digital literacy will empower users to be discerning about online content. Additionally, stringent content moderation policies are needed to curb the spread of harmful deepfakes. While AI offers immense potential, we must be vigilant against its misuse. Indeed, this powerful technology needs robust safeguards to ensure it does not spiral out of control.
Sajjad Ali Mugheri
Larkana
Published in Dawn, April 30th, 2024