As the new year begins, I keep coming back to one simple thought: AI feels scary and confusing to a lot of people, not just because of the technology, but because of how we talk about it.
Online, the conversation around AI feels loud and messy. If you spend enough time scrolling, it starts to feel like you have to pick a team. One side says AI will destroy us. Another says it’s already hurting people. Another says it’s just a tool. Another says it will save the world.
I don’t think choosing a side is the answer.
From what I see, there are four main camps in the AI conversation. Each one speaks with confidence. Each one believes they are right. And most of the time, they talk past each other.
The first camp is driven by fear.
These are the people who believe AI is getting powerful very fast. They worry that something like human-level intelligence, or even super-intelligence, is coming soon. In their view, that kind of AI could be extremely dangerous. Some even believe it could wipe out humanity. Because of that, they want AI development to slow down or stop until we can be sure it’s safe.
I understand this fear. If you truly believe the stakes are that high, slowing down feels responsible. But this camp often dwells on future disasters that haven’t happened yet. When you focus too much on worst-case stories, it becomes hard to see what AI actually is today.
The second camp also wants to slow things down, but for a very different reason.
These are the ethicists. They aren’t impressed by AI hype at all. They care about the harm happening right now. They talk about biased systems, fake information, surveillance, stolen data, and workers losing jobs or being treated unfairly. Their message is clear: stop worrying so much about imaginary super-AI and start fixing the real problems hurting real people.
I find this camp important and grounding. They remind us that technology is never neutral. But sometimes, they talk as if AI is only a problem to control, not a tool that could also help.
The third camp feels more practical to me.
These are the pragmatists. They don’t see AI as a god or a monster. They see it as technology. Useful in some ways. Weak in others. This group believes AI should keep developing, but with rules, testing, and safety checks that improve over time. They don’t panic, and they don’t overpromise. They look at evidence and real-world results.
Most days, this is the camp I feel closest to.
The fourth camp is the most optimistic.
These are the futurists and builders. They believe powerful AI is coming, and they’re excited about it. They talk about curing diseases, speeding up science, and solving huge problems humans struggle with. They want to move fast because they worry that slowing down means falling behind.
I admire their ambition. Their excitement can be inspiring. But sometimes that excitement turns into blind confidence. Risks get brushed aside. Concerns are treated as annoying obstacles instead of real issues that deserve attention.
So who is right?
What stands out to me is that each camp holds a piece of the truth. Fear can protect us. Ethics can keep us grounded. Practical thinking can guide smart decisions. Optimism can push progress forward.
The problem isn’t that these camps exist. The problem is that online, they act like only one of them can be right.
As we move into a new year, I don’t think the answer is picking a camp or finding some perfect middle ground. I think the real work is learning how to take the best ideas from all four, while rejecting the extremes that dominate online arguments.
Less panic. Less hype. More honesty.
That’s how we start making sense of AI. And that’s how we decide what kind of future we actually want.
Dominic “Doc” Ligot is one of the leading voices in AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, South China Morning Post, Washington Post, and Agence France Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.
If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.