In every field outside the classroom, generative AI has been a godsend. Researchers use it to crunch through literature reviews. Marketers use it to spin out endless campaign variants. Analysts automate drudgery and deepen their insight. Automation specialists push past productivity barriers that once seemed unbreakable. Everywhere AI is adopted with purpose, it amplifies creativity and human capability.
Except in universities.
Here, the purpose is depressingly narrow: pass the test, submit the essay, collect the diploma. And in such an environment, AI has not expanded learning; it has become the perfect accomplice to mediocrity. When the student’s goal is simply compliance with requirements, AI will optimize for compliance. It will write the essay. It will solve the problem set. It will mimic “original” thought at the push of a button. The result? Students get better at “cheating,” not at learning.
This, however, is not AI’s fault. It is the fault of an educational system whose purpose has been hollowed out, and of professors who refuse to see it.
The Doomer Professors
Listen in on the faculty lounge these days and you’ll hear a chorus of doom. Professors fume about how AI “destroys brain function,” how students are “outsourcing thinking,” how ChatGPT “makes mistakes” that students cannot spot. The narrative is always the same: AI is broken, so students who use it will break, too.
But let’s be honest. These professors aren’t worried about brain function. They’re worried about control. They’ve built careers on gatekeeping compliance: grading papers, catching plagiarism, enforcing arbitrary requirements. Now, AI has exposed how brittle and artificial these requirements are.
The professoriate’s critique of AI is not really about accuracy. After all, students have been relying on Wikipedia, SparkNotes, and answer keys for decades. The critique is about authority. AI has shifted the power balance, giving students access to intellectual shortcuts that undermine the professor’s role as arbiter of effort.
Purposive AI, Purposive Learning
Contrast this with fields where the goals are real, not performative.
A researcher doesn’t just want an essay; they want to discover something new. AI helps them scan millions of documents, draft hypotheses, and model scenarios faster than any human assistant ever could.
A marketer doesn’t just want a slogan; they want to connect with an audience. AI helps them generate dozens of creative directions, refine them in real time, and test for resonance.
An analyst doesn’t just want numbers; they want insight. AI processes messy datasets, runs scenarios, and highlights patterns that a human brain might never notice.
In all these contexts, AI does not dumb down thinking; it expands it. The difference is purpose. Where purpose is authentic, AI multiplies capacity. Where purpose is hollow, AI multiplies shortcuts.
The Philippine University Problem
This question of purpose cuts especially deep in the Philippines. Our higher education system has long been criticized for reducing learning to credentialism. The diploma is a ticket to employment, not a testament to mastery. Professors recycle lecture slides, demand essays nobody will ever read again, and test for memorization rather than understanding.
AI didn’t break this system. It simply revealed the cracks.
If a student can type a prompt into ChatGPT and pass a course, the problem is not that ChatGPT exists; it’s that the course requires so little in the first place. If the only barrier to graduation is producing a 500-word essay, then of course AI will do it better, faster, and cheaper. That’s not a student problem or a technology problem. That’s a curriculum problem.
Stop Blaming AI, Start Blaming Ourselves
The doomers want to fight AI. They install “AI detectors” that are notoriously unreliable. They draft policies banning the use of AI. They demand students “handwrite” essays in class, as if reverting to 19th-century methods will somehow produce 21st-century thinkers.
This is a dead end. The more professors fight the technology, the more students will simply outpace their rules. You cannot ban the calculator and expect students to master arithmetic forever. You cannot ban the internet and expect students to master encyclopedias. And you cannot ban AI and expect students to master critical thought.
Instead, education must confront its own purposelessness. Why should students write essays? Why should they pass tests? What are they supposed to become as a result? Unless professors can answer these questions, their courses are little more than hoop-jumping exercises, and AI will always be the better jumper.
Toward a New Purpose
The challenge is not to make education AI-proof. The challenge is to make education worth more than AI’s shortcuts. That means projects that demand synthesis, reflection, collaboration, and original creation. That means assignments where the “answer” is not enough: students must defend, iterate, and apply their ideas. That means professors must abandon the fantasy of policing compliance and embrace the reality of guiding exploration.
AI will not destroy brains. Professors will, if they continue to design education as a game of compliance rather than a pursuit of purpose. The real danger is not that students will stop thinking. The danger is that professors will stop asking what learning is for.
Dominic “Doc” Ligot is one of the leading voices on AI in the Philippines. Doc has been extensively cited in local and global media outlets including The Economist, the South China Morning Post, The Washington Post, and Agence France-Presse. His award-winning work has been recognized and published by prestigious organizations such as NASA, Data.org, the Digital Public Goods Alliance, the Group on Earth Observations (GEO), the United Nations Development Programme (UNDP), the World Health Organization (WHO), and UNICEF.
If you need guidance or training in maximizing AI for your career or business, reach out to Doc via https://docligot.com.