I was recently invited to speak at Congress about new AI laws. I did not expect that one short comment would upset so many people. But it did.
I said that in the future, AI could change how our voices sound. One moment you might sound Bisaya. The next moment, you might sound like someone from BGC. I said it casually, almost as an example. I did not think much of it at the time.
Online, people thought very differently.
Some were angry. Some were hurt. Some felt attacked. When I saw the reactions, I understood why.
In the Philippines, accents are not just about sound. They are personal. The way we speak tells people where we grew up. It tells them our background. It tells part of our story. We joke about accents. We tease each other. But behind that humor is pride. Our voices are tied to who we are.
So when people hear the phrase “accent neutralization,” it can feel like erasing identity. It can sound like being told that the way you naturally speak is wrong. Or less than. Or something that needs fixing.
That reaction makes sense.
But I also think this is a good moment to explain what this technology really is, and what it is not.
Voice accent neutralization is an AI tool that changes how a person sounds. It does not change the words you say. It does not change your ideas. It only changes how your voice comes across to the listener. The goal is simple: to make speech easier to understand.
In call centers, that listener is usually in the United States or Canada. Around 70 percent of calls go there. So when companies say “neutral,” what they usually mean is closer to how Americans or Canadians speak.
This technology is not science fiction. It is already being used today.
Why do call centers use it? The answer is simple. It is good for business. When customers understand agents more easily, calls move faster. People do not have to repeat themselves as much. There is less frustration on both sides. That leads to lower costs and better customer ratings.
It also changes training. If you have worked in a call center, you know how intense voice training can be. Agents practice vowels. They practice tone. They practice sounding a certain way. With AI, much of that work can be handled by software. Agents can focus more on solving problems instead of worrying about their accent.
Some people say this is good for workers. And in some ways, it is. People who were rejected before because of their accent may now get hired. The AI does the adjusting, not the person. Some companies also say this can help people with speech disabilities by removing barriers.
But there is another side to this story.
Many critics ask a hard question. Why is the “neutral” accent always Western? Why does neutral sound American or Canadian, but not Filipino? If the standard is always Western, what does that say about our own voices?
For some people, this feels like cultural erasure. Others say it is dehumanizing. If AI has to “fix” your accent, then your natural voice is treated like a defect. Something to hide. Something to correct.
As a Filipino, I feel this tension deeply.
Call center work is a huge part of our economy. Millions of people depend on it. And the truth is, adaptation has always been part of this industry. Long before AI, workers were already asked to change how they spoke. AI did not create this pressure. It just made it faster and cheaper.
So the real question is not whether this technology will exist. It already does. The real question is how it will be used.
Will workers have a choice? Will they be told when their voice is being changed? Will this protect them from bias, or quietly reinforce it? These questions matter.
When I hear call center workers outside of work, they sound like themselves. On the phone, they sound “neutral.” That split already exists. AI just makes it more visible.
Voice accent neutralization is not purely good or bad. It is a tool. And like most tools, it reflects the values of the system using it.
The challenge is to improve communication without losing respect for identity. We should ask not only what sounds clear, but what sounds fair.
And maybe the most important question is this: in a world where AI can change how we sound, who gets to decide what a “normal” voice really is?
Dominic “Doc” Ligot is one of the leading voices in AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, South China Morning Post, Washington Post, and Agence France Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.
If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.