In 2019, a manipulated video of Philippine Senator Leila De Lima began circulating on social media. It wasn’t technically a “deepfake,” but rather a spliced edit of real footage. Yet the effect was the same: reputational damage, confusion, and the fueling of political narratives. That case was an early local warning that manipulated media would soon evolve into a more sophisticated, and more dangerous, form.
What Are Deepfakes?
“Deepfakes” are synthetic media generated by deep learning systems, capable of mapping and recreating facial expressions, voices, and gestures with unnerving precision. The term itself comes from the fusion of “deep learning” and “fake.” What started as research in computer vision has become a powerful, and controversial, tool.
While early iterations were clunky, today’s versions are frighteningly realistic. In fact, Google recently introduced Veo, a next-generation AI video tool that can generate high-definition footage from text prompts. Combined with face-swapping applications, the technology now enables almost anyone with a computer to make a convincing fake video. What once took Hollywood budgets is now in the hands of hobbyists, and bad actors.
Why This Matters to Finance
For the financial world, the implications go far beyond politics. Trust is the currency of markets, and deepfakes directly attack it. Imagine a CEO “caught” on video announcing a bankruptcy that doesn’t exist, or a regulator appearing to endorse a fraudulent investment scheme. Just a few minutes of circulation on social media could tank stock prices or move billions in capital before the truth catches up.
Deepfake-driven misinformation is not speculative. In 2019, fraudsters used AI-generated voice cloning to impersonate the chief executive of a German energy firm, tricking the head of its UK subsidiary into transferring $243,000. With face-swapping and generative video tools evolving, the same tactic could soon target investor briefings, earnings calls, or televised interviews.
Identity and Ownership
The legal and ethical gray areas compound the risk. If a deepfake version of you delivers a speech, or appears in an ad, do you own that likeness? Performers and influencers already face unauthorized digital replicas, and corporations must consider the same. How should companies defend their brands and executives from synthetic impersonation? The answers remain unsettled.
Entertainment vs. Exploitation
It’s worth noting that deepfake technology isn’t inherently malicious. Hollywood has used digital re-creations to de-age actors or bring back characters like Princess Leia in Rogue One: A Star Wars Story. Google’s Veo and other generative tools will undoubtedly unlock creative opportunities in advertising, filmmaking, and even financial education content.
But the line between entertainment and exploitation is perilously thin. A parody video of a public figure might draw laughs; a fabricated video of that same figure endorsing a scam could spark financial panic. The difference is intent, and intent is hard to legislate.
Regulation and Defense
Lawmakers have begun to respond. In the Philippines, the so-called “Anti-Fake News Bill” was proposed to penalize disinformation campaigns. Globally, platforms like Meta and TikTok are testing watermarking systems, while researchers at DARPA are building detection frameworks. Yet detection often lags behind innovation, creating an arms race between creators and fact-checkers.
For investors and professionals, the takeaway is clear: vigilance is no longer optional. Just as you analyze financial statements with skepticism, you must view media, especially video, with a critical eye. Trust, once eroded, is expensive to rebuild.
In finance, perception drives reality. And in the age of deepfakes, reality can be rewritten with a few lines of code.
Dominic “Doc” Ligot is one of the leading voices in AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, South China Morning Post, Washington Post, and Agence France Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.
If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.