They show up at every conference panel, every policy roundtable, every workshop on technology’s social impact: the Doomer Next Door. Once a stalwart advocate for ICT rights, they now shake their head at every AI demo as if it were a Ouija board channeling our collective demise. They worry about hallucinations not as bugs, but as harbingers, a ghost in the machine whispering portents of collapse.
The Doomer Next Door isn’t an AI researcher, or even much of a practitioner. Their roots are in digital rights, cybersecurity, maybe blockchain. They cut their teeth fighting surveillance laws or championing open networks. They know how to mobilize against Big Brother, but when it comes to artificial intelligence, their instincts betray them. They cannot resist anthropomorphizing every quirk of a large language model as intention, every garbled fact as agenda, every unexpected output as proof of creeping autonomy.
Ironically, the same person who once cautioned us not to “fetishize the algorithm” now warns us that GPT’s hallucinations are signs of sentience. What once was a campaigner’s clear-eyed demand for transparency has hardened into a sci-fi script.
Jailbreaking as Rebellion, Prompting as Denial
The Doomer Next Door loves AI jailbreaking. They grin as they coax a model into saying what it shouldn’t, reveling in their rebel role. They share exploits like samizdat literature, proof that the AI police can be tricked. But here’s the paradox: they can’t quite admit what this proves: that AI is less autonomous than they claim. If a prompt can rewire the system’s behavior, then the system is pliable, not unstoppable. Prompt engineering is power, not prophecy. Yet to concede that is to surrender the romance of doom.
Disinformation Without Users
This doomer has lobbied against disinformation for years. They’ve testified in hearings, written manifestos, urged regulation of platforms. They understand that humans spread lies, that ecosystems amplify them, that context matters. But when AI enters the conversation, suddenly all agency evaporates. Now it’s the machine that misinforms, not the troll farm that deploys it, not the user who shares it, not the platform that monetizes it. In this script, people are innocent bystanders swept up in a tidal wave of synthetic nonsense.
It’s a comforting narrative for someone who spent decades fighting human negligence and malice. If the culprit is the machine, then the problem feels bigger, scarier, more cinematic. It also conveniently absolves human institutions (governments, platforms, communities) of their ongoing failures.
Why the Doomer Next Door Persists
The Doomer Next Door isn’t just a personality quirk. They reflect a deeper cultural anxiety in tech activism. For two decades, ICT rights work was about checking concentrated power: governments overreaching, corporations surveilling, platforms censoring. Now AI arrives as a new power center, but its inner workings are stranger, less familiar. It feels alive, even when it isn’t. It feels uncontrollable, even when it’s endlessly steered by prompts and guardrails. So the instinct is to elevate it into an existential foe.
But here’s the rub: by turning AI into the monster under the bed, the Doomer Next Door distracts us from the actual fight: the here-and-now harms of labor precarity, of bias in training data, of monopolistic capture of compute resources, of users weaponizing tools to do what humans have always done.
The Neighbor We Need
We don’t need to exile the Doomer Next Door. They still bring the passion, the suspicion of power, the organizing skills that once kept surveillance bills at bay. But we need them to see AI not as a sentient trickster, but as an accelerant of human choices, good and bad. We need them to stop blaming the hallucinations for our hallucinations about technology. And we need them to remember: the hardest problem in technology has never been the code. It has always been us.
Dominic “Doc” Ligot is one of the leading voices in AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, South China Morning Post, Washington Post, and Agence France Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.
If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.