Home › Technology › Navigating the mirage: How to spot fake media in the digital age
As technology advances, deepfakes are growing more prevalent — and difficult to detect.
You might have recently seen a video of Senior Minister Lee Hsien Loong promoting a crypto investment product, or social media posts of Minister for Health Ong Ye Kung endorsing a health product. Needless to say, these videos and images are fake. As technologies like artificial intelligence (AI) continue to advance, it’s getting harder to tell doctored videos, audio clips, and images apart from genuine ones.
The term ‘deepfake’ refers to synthetic media generated by AI-powered tools to convincingly resemble a real person, often a public figure. For example, an original video of a public figure, such as former US president Barack Obama, can be altered. A voice actor’s recording of new words is overlaid on the original video, and AI modifies the figure’s mouth and facial movements to match the new audio, making it appear as if they really said those words.
While such tools have legitimate applications in the film industry and other creative fields, their misuse has led to a surge in scams. Last year in Singapore, the number of deepfake cases rose fivefold compared to 2022; globally, cases doubled over the same period.
Beyond their use to scam victims of money and identities, one major concern with deepfakes is their potential to erode trust and be used by malicious parties to influence public opinion. Deepfake videos and images of public figures supposedly saying inflammatory things can destroy reputations, stir up fear or anger, or even impact a country’s stock exchange. As deepfake technology becomes more common, it’s essential for online users to learn how to distinguish real content from AI-generated fakes.
To determine the authenticity of a video, audio clip, or image, cybersecurity experts use techniques like digital footprint analysis and error level analysis. AI itself is also being used to combat deepfakes, in the form of automated deep learning-based detection systems.
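To give a sense of how one of these techniques works: error level analysis exploits the fact that JPEG compression affects an image unevenly after editing, so re-saving a photo and comparing it pixel-by-pixel with the original can highlight regions that were pasted in or retouched. The sketch below is a minimal illustration using the Pillow library, not the specific tooling used by cybersecurity agencies:

```python
from io import BytesIO

from PIL import Image, ImageChops


def error_level_analysis(image, quality=90):
    """Re-save the image as JPEG at a known quality and return the
    per-pixel difference against the original.

    Edited regions often recompress differently from untouched ones,
    so they tend to stand out as brighter areas in the result.
    This is a simplified illustration of the technique, not a
    production-grade forgery detector.
    """
    original = image.convert("RGB")

    # Recompress the image in memory at a fixed JPEG quality.
    buf = BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)

    # Bright pixels in the difference image mark areas that changed
    # the most on recompression -- a possible sign of local editing.
    return ImageChops.difference(original, resaved)
```

In practice, analysts view the difference image with brightness boosted, since uniform error levels suggest an unedited photo while sharp, localised bright patches warrant a closer look.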
At HTX (Home Team Science and Technology Agency), AlchemiX — which was showcased at the HTX-co-organised Milipol Asia-Pacific – TechX Summit in April 2024 — uses AI to detect deepfakes in audio and video recordings. Users upload the suspicious media file into AlchemiX, which deploys an AI algorithm to assess it for signs of deepfakes. For audio recordings, users can upload an authentic reference file of the original speaker, and the algorithm will compare it to the suspicious recording to find similarities or differences.
For now, deepfake content can often be spotted by laymen, as the technology isn’t yet perfect. Doctored photos might have distorted edges, unnatural shadows, and inconsistent reflections.
The 3A Approach to detecting deepfakes, created by the Cybersecurity Agency of Singapore (CSA), begins with its “Analyse audio-visual elements” component, which offers a handy checklist of tell-tale signs. In deepfake videos, you might notice unnatural expressions, a lack of blinking, inconsistent skin texture and tone, or lips not fully synchronised with speech. The background may also appear blurry, out of focus, or distorted.
The next component is to “Authenticate content using tools”. You can examine the media file itself by doing a Google reverse image search to see if a photo was taken from elsewhere and then altered, or by checking the image’s metadata for information on its creation and modification. Note, however, that metadata can easily be edited or deleted.
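The metadata check mentioned above can be done with free tools or a few lines of code. The sketch below, a minimal example using the Pillow library, reads an image’s EXIF tags, which can record the camera model, capture date, and editing software:

```python
from PIL import Image
from PIL.ExifTags import TAGS


def read_exif(path):
    """Return a dict of human-readable EXIF tags for an image file.

    Fields such as the original capture date or editing software can
    hint at a file's history. An empty result is not proof of
    tampering, though: metadata is easily stripped or rewritten,
    so treat it as one clue among many, never as conclusive evidence.
    """
    with Image.open(path) as img:
        exif = img.getexif()
        # Map numeric EXIF tag IDs to readable names where known.
        return {TAGS.get(tag_id, tag_id): value
                for tag_id, value in exif.items()}
```

For example, a photo that claims to be a fresh snapshot but whose metadata shows a recent pass through image-editing software would merit extra scrutiny.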
Ultimately, the most powerful tool against deepfakes is critical thinking. Besides analysing and authenticating the media, it is important to assess the message it bears.
First, does it come from a trustworthy source, like a verified news outlet? Or was it forwarded to you on social media with no known original source? The latter would mark it as extremely suspicious.
Do the people in the media behave in ways that are unexpected for them? For instance, it would not make sense for Singaporean ministers to promote get-rich-quick investment schemes. Similarly, celebrities typically endorse products through their verified social media accounts or in mainstream media, rather than in obscure online advertisements for betting sites or cryptocurrency investments.
Finally, what is the aim of the content? Many deepfake scams involve asking the viewer or listener to purchase an item, download a suspicious app, click on a dubious link, or enter personal information on a website. The CSA recommends asking family and friends to review the content and its claims before taking any action. While today’s deepfakes might be convincing, using the 3A checklist and staying alert can prevent you from falling into their trap.
If you come across a video, audio clip, or image that you suspect to be a deepfake, do not just scroll away. Report the fake content to the platform’s administrator to help protect others who might fall for it.
If you’re unsure whether it’s real or a scam, seek advice from the National Crime Prevention Council (NCPC) Anti-Scam Helpline at 1800-722-6688, or visit the ScamAlert website to learn more about common scams.
Always remember to check with trusted sources if you are uncertain about the authenticity of the media and refrain from forwarding such dubious content to others.
By working together, we can build a more resilient society against malicious deepfakes and their harmful impact.