This isn’t a made-up story. I saw it unfold with my own eyes, and I’m still shaking my head when I recall it.
An analyst came up with what seemed like a brilliant idea: use a custom AI chatbot to sift through massive piles of budget and financial data. No coding skills required. You could just type a question in plain English, and the chatbot would dig through the numbers for you. No need to mess with Python or complicated databases. On paper, it was revolutionary: technology that could help anyone understand government spending or financial reports, not just trained experts.
And it worked. Sort of.
He loaded all this complex budget data into the chatbot, and suddenly people could ask things like, “How much did we spend on schools last year?” and get answers right away. It made the numbers accessible. It gave regular folks a way to ask serious questions without needing a background in data science.
But then came the moment that changed everything.
Someone in the room asked, “How do you know the chatbot is giving the right answers?”
The analyst didn’t hesitate. “It can’t be wrong,” he said. “It generates Python code.”
That might sound impressive, but it’s like saying, “This calculator always tells the truth because it uses math.” Just because a machine spits out code doesn’t mean the code is correct. Even perfect-looking code can give the wrong answers if it’s built on wrong assumptions or messy data. Anyone who works with data knows that.
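To see why, here is a minimal sketch in Python. The data, column names, and figures are entirely hypothetical, not from the analyst's actual system; the point is only that code which runs cleanly and reads correctly can still return a wrong total when the data quietly breaks its assumptions.

```python
# Hypothetical question: "How much did we spend on schools last year?"
# The generated code below is syntactically perfect and logically reasonable,
# but the (made-up) data breaks two unstated assumptions:
#   1. the export contains a duplicate row from two overlapping reports, and
#   2. one agency reports amounts in thousands, the others in full pesos.

budget_rows = [
    {"agency": "DepEd", "category": "schools", "amount": 550_000_000},
    {"agency": "DepEd", "category": "schools", "amount": 550_000_000},  # duplicate row
    {"agency": "LGU-A", "category": "schools", "amount": 1_200},        # reported in thousands
    {"agency": "DOH",   "category": "health",  "amount": 300_000_000},
]

# Code a chatbot might plausibly generate: it trusts the data as-is.
school_spending = sum(
    row["amount"] for row in budget_rows if row["category"] == "schools"
)

print(f"Reported school spending: {school_spending:,}")
# Prints 1,100,001,200 -- double-counting DepEd and misreading LGU-A's units.
# The figure the data actually supports (550M + 1.2M) is about 551,200,000.
```

The code never errors, never warns, and never lies about what it computed. It simply computed the wrong thing, which is exactly the failure mode "it generates Python code" does nothing to rule out.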
But the analyst refused to even consider the idea that the AI might mess up. He was so sure of its perfection, he couldn’t see the flaws.
That was disappointing. But what came next was worse.
Someone suggested a new idea: What if the chatbot didn't just answer questions? What if it could ask them too? Could it look at all this data and suggest interesting things to explore? Could it help us think about the budget in new ways?
The analyst shot it down immediately. “No, it would just ask random stuff,” he said. “It wouldn’t be useful.”
Really? This chatbot had been trained on mountains of budget data. It clearly understood the numbers. Why couldn’t it come up with thoughtful questions? Why not at least try?
But the analyst didn’t want to hear it. He had built a useful tool, but he couldn’t see beyond his original idea. He treated the AI like a calculator: you punch in a question, it gives an answer, and that’s it. He couldn’t imagine it doing anything more.
And that’s where the real problem lies.
We often treat new technology like it’s just a fancier version of what we already have. Faster searches. Better formatting. More efficient ways to do what we already do. But AI is more than that. It can help us think differently. It can raise questions we might not think to ask. It can challenge our assumptions and show us new paths we didn’t see before.
This analyst had the tools to break new ground. He built something powerful, then locked it in a box and threw away the key.
The real lesson? The biggest challenge with AI isn’t the tech, it’s our own mindset. It’s about staying curious. Staying open. Being willing to let a machine surprise us, not just with answers, but with better questions.
If we don’t keep imagining, even the smartest AI in the world won’t take us anywhere new.
Dominic “Doc” Ligot is one of the leading voices in AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, South China Morning Post, Washington Post, and Agence France Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.
If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.