Lately, every conversation about AI feels loud. One group says it will save us, fix everything, and lead us into a better future. Another group says it will steal our jobs, spread lies, and maybe even end the world. It feels like every discussion turns into a fight.
Recently, I spent time listening to the four main camps that shape this debate. After hearing all sides, I’ve reached a simple conclusion: none of them has the full answer.
AI is not magic. And it’s not evil. It’s a tool. Like any powerful tool, it can help people or hurt them. That depends on how we use it. Shutting everything down isn’t realistic. Moving full speed without thinking is reckless. The real work is finding a practical path in between.
The truth is, AI already does real good. It helps doctors spot patterns they might miss. It helps researchers move faster. It helps regular people write, plan, learn, and get more done. These benefits are not imaginary. They are happening right now.
But harm is also happening right now.
Bias shows up in systems people rely on. Misinformation spreads faster than ever. Privacy feels harder to protect. Workers worry about whether their jobs will still exist. These are not science fiction problems from the future. They are real, present issues. That’s why I care more about dealing with today’s harm than arguing endlessly about far-off doomsday scenarios.
What worries me most is how uneven our priorities are.
We spend huge amounts of time and money making AI more powerful. But we spend far less effort making it safer. Too often, we build first and ask questions later. That doesn’t make sense. If a system is powerful enough to affect millions of lives, it should meet clear safety standards before it’s released.
We already expect this from buildings, airplanes, and medicine. AI should not get a free pass just because it’s new or exciting.
At the same time, I don’t believe heavy, rigid rules are the answer either. AI changes too fast. New tools appear almost every day. That means we need rules that can adapt as we learn. Transparency matters. Independent checks matter. Admitting failures matters. And because AI doesn’t stop at borders, countries need to work together, especially when the risks are high.
Another part of this conversation makes me uneasy: who gets heard.
Right now, the loudest voices belong to big companies, investors, and people who want everything to move faster. Workers, communities, and everyday users often feel left out. That’s a problem. AI will shape how we live and work, so the people affected by it deserve a real seat at the table. Power is already gathering in the hands of a few, and without broader voices, that gap will only grow.
So where does that leave me?
I try to stay curious. AI is not a mystery meant only for experts. It’s a skill. Like learning a new app or a spreadsheet, the best way to understand it is to use it. I experiment. I practice. I see where it fits into my work and creative projects. I don’t pretend to know everything. I don’t. But I keep learning anyway.
At the same time, I stay critical. I don’t treat AI as a source of truth. I assume it can be wrong, biased, or misleading. I use it to brainstorm or draft, but I stay in control. I check facts. I check tone. I think about consequences before acting on what it gives me.
I also take privacy seriously. I’m careful about what personal or sensitive information I share. If a tool doesn’t give me control over my data, I walk away. Convenience isn’t worth trading away my privacy.
When it comes to work, I try not to panic. Technology has always changed jobs. That’s nothing new. What’s different now is that we can use the same tools that scare us to make ourselves stronger. I look at which parts of my work are routine and which parts need judgment, communication, and deep understanding. I invest in those human skills. I use AI to support my thinking, not replace it.
I don’t believe in doomsday stories. I don’t believe in blind hype either. I believe in staying practical, informed, and involved.
The future of AI isn’t something that just happens to us. We help shape it through our choices, our voices, and our willingness to engage.
That’s why I’m choosing the quiet middle. It’s not flashy. It doesn’t fit neatly into a camp. But it feels like the most honest and responsible place to stand.
Dominic “Doc” Ligot is one of the leading voices in AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, South China Morning Post, Washington Post, and Agence France Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.
If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.