Deepfake technology has matured from a fringe curiosity into a destabilizing force with the power to distort reality. Built on deep learning and synthetic media generation, deepfakes produce video and audio so convincing that the boundary between truth and fabrication collapses. While the risks to privacy and reputation are obvious, the greater danger lies in how this technology can be weaponized to sway public opinion, and with it, democratic and financial stability.
From Manipulated Media to Market Risk
Disinformation in politics is not new. Campaigns have long relied on selective editing, misleading statistics, and aggressive advertising. But deepfakes represent a quantum leap. Unlike conventional misinformation, which can often be fact-checked, deepfakes exploit our instinctive trust in what we see and hear.
For financial markets, the implications are sobering. Investor sentiment is highly sensitive to political risk. A convincing fake of a central banker announcing a rate hike, or a video of a corporate leader confessing to fraud, could trigger sell-offs before truth has a chance to catch up. The cost of delayed detection would be measured in billions.
A Case Study: Google Veo in Philippine Politics
The risks are no longer hypothetical. In early 2025, Philippine Senator Ronald “Bato” Dela Rosa circulated deepfakes generated using Google’s Veo platform to influence public opinion during the impeachment trial of Vice President Sara Duterte. The fabricated clips showed AI-generated students commenting on the impeachment in the style of “man-on-the-street” interviews, blurring the line between satire, propaganda, and outright deception.
The incident demonstrates how easily deepfakes can be weaponized to inflame division, shape narratives, and tilt political outcomes. In markets, such tactics erode trust in governance, add volatility to emerging economies, and cast doubt on the credibility of official communication. For global investors, the weaponization of AI in politics is no longer a distant scenario; it is here.
Lessons from Digital Manipulation
We have seen before how digital tools reshape elections and markets. Cambridge Analytica’s micro-targeting during the 2016 U.S. elections underscored how data misuse could sway voter behavior. In Indonesia, Prabowo Subianto’s campaign effectively leveraged AI-assisted media to court younger voters. These cases highlight a trend: technology not only informs public opinion but increasingly manipulates it.
Deepfakes represent the next escalation, shifting influence from persuasion to outright fabrication. For markets that prize transparency and credibility, this is an existential threat.
Why Regulation Is Urgent
The fight against deepfakes must be multi-pronged. Detection technologies are improving, but they operate in an arms race in which creators stay one step ahead. Social media platforms, where most deepfakes spread, must commit to faster takedowns and clearer labeling. Public education in media literacy is essential.
But technology and awareness are not enough. What is missing, and urgently required, is regulation with teeth. Governments should impose severe penalties for malicious deepfake creation and dissemination, particularly when intended to distort political outcomes or manipulate markets. Just as insider trading laws exist to punish financial deceit, deepfake abuse must carry criminal liability.
A Call to Action
Democracy and markets share a foundation: trust. Both require participants to make informed decisions based on reliable information. Deepfakes threaten to erode that foundation. Unless regulators, platforms, and civil society act decisively, we risk a world where perception is perpetually distorted, and truth becomes just another variable to trade on.
The lesson from Senator Dela Rosa’s Veo deepfakes is clear: the danger is real, present, and global. To safeguard both democratic processes and financial stability, we must regulate deepfakes not tomorrow, but today.
Dominic “Doc” Ligot is one of the leading voices in AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, South China Morning Post, Washington Post, and Agence France Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.
If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.