In the ever-evolving world of technology, it’s fascinating how artificial intelligence has crept into nearly every facet of our lives, from ordering takeout to recommending movies. But perhaps one of the most intriguing—and controversial—places it’s popping up is in education. As a longtime observer of higher learning trends, I’ve watched how tools like ChatGPT have revolutionized the way students approach their assignments. Suddenly, the classic struggle of late-night essay writing feels like a relic of the past, with AI offering a seemingly effortless shortcut. Kids today can generate full papers in minutes, tweaking them just enough to pass as their own. Yet, as with any powerful tool, there’s a double-edged sword here: the more accessible it is, the bigger the risks. Professors are waking up to this shift, and one in particular has gone viral for sharing a clever tip on spotting AI-generated work. It’s a reminder that while AI can mimic creativity, it hasn’t quite mastered the nuances of human voice.
Enter Matt Prince, an adjunct professor at Chapman University in Southern California, who became an unlikely internet sensation after posting a TikTok video that racked up over 700,000 views. In it, he addresses his students with the folksy charm of a seasoned educator, framing his message as a “friendly reminder” for those wrapping up their semester. Prince isn’t scolding outright; instead, he’s offering practical advice rooted in his experiences marking papers. He cautions that if students are leaning on AI for final submissions, they need to put in the extra effort to review and edit the output. Otherwise, what might seem like a time-saver could end up backfiring spectacularly. I’ve always appreciated teachers who warn rather than punish immediately—it builds trust and encourages responsibility. Prince’s approach humanizes the issue, treating students like adults capable of making smart choices, even in a world brimming with shortcuts. Watching the video, you can almost hear his voice echoing the frustrations many educators feel: technology is great, but it shouldn’t replace the learning process.
At the heart of Prince’s warning is a single, seemingly innocuous word that he’s flagged as a dead giveaway: “moreover.” He claims that if this adverb appears in a student’s paper, there’s a 99% chance it’s been churned out by an AI algorithm rather than a human teen. Think about it—while the word “moreover” sounds sophisticated and formal, it’s not exactly something you’d expect from a 20-year-old casually chatting about their weekend plans or venting on TikTok. Prince backs this up with his own observations from years in the classroom, noting that he’s simply never encountered a young person naturally weaving it into their writing without sounding stilted. He urges students to proofread meticulously, ensuring that their work flows organically and sounds authentically like them. This advice resonates on a personal level; I remember my own college days, struggling to infuse my essays with personality amidst academic jargon. AI might master grammar and structure, but it often misses that intangible spark—the unique cadence of a student’s voice shaped by late-night coffees and personal experiences. By highlighting “moreover,” Prince is essentially advocating for a return to effortful revision, a skill that benefits anyone beyond the classroom.
Interestingly, Prince contrasts this with a previous red flag for AI detection: the em dash. For a while, that long dash was the telltale sign of machine-generated text, often appearing where human writers might opt for commas or periods. But AI has evolved, and so have its quirks; formal adverbs like "moreover" are stepping in as the new suspect. It's a testament to how quickly these tools are improving: just when educators think they've cracked the code, AI adapts, learning from feedback and vast datasets. As someone who's experimented with various AI platforms, I can see how this constant cat-and-mouse game keeps things dynamic. Not long ago, spotting an em dash might have raised suspicions, but now it's about vocabulary choices that feel archaic or overly polished. Prince's insights underscore a broader truth: AI is here to stay in academia, but students who engage deeply (reading, editing, and adding personal touches) stand a better chance of thriving. It's not about banning tools but embracing them responsibly, much like how calculators transformed math education without erasing the need for understanding.
Of course, Prince’s video didn’t escape scrutiny, and the comment section erupted with passionate rebuttals from grammar enthusiasts and accused overachievers. One commenter, clearly miffed, wrote, “??? So we’re gonna get penalized for knowing English?” It sparked a mini-debate about the value of advanced vocabulary in academic settings. Another chimed in, sharing a personal story: at 22, they recalled using “moreover” frequently in essays as a teen, fueled by a love for expansive words and a penchant for grammar rules. They humorously added that they would have risked trouble just for writing authentically. Others lamented how their “extensive vocabulary” now gets mistaken for AI output, turning what was once an asset—a mark of intellect and effort—into something suspicious. As a former grammar nerd myself, I get where they’re coming from; it’s disheartening to think that complex language could backfire in today’s climate. These reactions highlight a generational divide: while older folks might view “moreover” as pretentious or robotic, younger generations are embracing diverse vocabularies, partly thanks to online influences. Prince’s critics argue that penalizing students for eloquence overlooks those who genuinely craft thoughtful prose, potentially discouraging creativity.
Ultimately, while Prince didn’t delve into the exact repercussions for getting caught—perhaps an academic integrity chat, a failing grade, or worse—it’s implied that it could seriously tarnish a student’s record. In the unforgiving world of higher education, accusations of cheating via AI aren’t taken lightly, and the fallout can linger on transcripts and future opportunities. This serves as a cautionary tale for everyone involved: students must navigate this gray area ethically, and educators need fair detection methods beyond subjective hunches. As AI integration grows, universities are likely to adopt more sophisticated tools, like plagiarism software enhanced for AI signatures. From my viewpoint, this uproar signals a turning point—education is adapting, but it reminds us to celebrate human ingenuity over shortcuts. Prince’s viral moment isn’t just about one word; it’s about fostering authenticity in a digital age, ensuring that the joy of learning isn’t overshadowed by convenience. In the end, whether you’re a student hedging your bets or a professor safeguarding standards, the lesson is clear: read, revise, and remain true to yourself. That way, “moreover” or any other quirk won’t define your work—it’ll be your own unique voice that shines through.
This whole saga got me reflecting on broader implications for society. Imagine a future where classrooms evolve with AI as a collaborative tool rather than a cheat sheet. Educators like Prince are pioneers, bridging the gap between technology and tradition. Students, in turn, could learn to dialogue with AI, treating it as a brainstorming partner rather than a ghostwriter. Ethics courses might include modules on digital authorship, teaching the importance of attribution and originality. On a personal level, I’ve seen how AI can enhance creativity—if used mindfully—helping me outline articles or brainstorm ideas without replacing the writing itself. The backlash in those comments also points to societal shifts: we’re valuing authenticity more, shunning performative perfection for raw, heartfelt expression. Penalizing natural skill with words like “moreover” feels archaic, akin to criticizing someone for using big words in conversation. Instead, perhaps essays should evaluate content depth over stylistic flair. As universities grapple with this, policies might emerge that reward hybrid approaches—AI for drafting, humans for refinement. It’s exciting yet daunting, forcing us to redefine what “original” means in 2023.
Looking ahead, Prince’s advice could spark educational reforms, encouraging interactive workshops where students practice humanizing AI outputs. For instance, teachers might assign “voice exercises,” where learners transform sterile text into conversational narratives. This builds confidence and skills, turning potential pitfalls into opportunities. I recall a documentary I watched about AI in journalism, where reporters used bots for research but always added the human touch (empathy, context, lived experience) that machines can’t replicate. That’s the key here: AI is a tool, not a crutch. If students heed Prince’s warning, they might emerge stronger, with essays that reflect their journeys rather than algorithmic guesswork. The comment section drama shows resistance to change, but also innovation; grammar nerds rallying together might inspire new standards. Ultimately, this moment captures the spirit of progress: uneasy, debated, but undeniably forward-moving. For anyone navigating this landscape, take heart: education isn’t about perfection; it’s about growth. And with AI as an ally, not an adversary, that growth can be limitless. Now, if only we could teach AI to sprinkle in slang and inside jokes, writing papers might actually be fun again. For now, though, “moreover” serves as a gentle nudge: edit, engage, and own your work.
In wrapping up, the conversation around AI in academia feels like a microcosm of larger tech debates. We’re at a crossroads, balancing innovation with integrity. Prince’s viral tip isn’t just about a word—it’s about preserving the human element in learning. As I think back on my own student days, filled with scribbled notes and passionate revisions, I realize how lucky we are to have these discussions now. They prevent a future where creativity is commodified. Students, professors, and even AI developers can learn from this: collaboration beats suspicion. So, next time you craft an assignment, pause and ponder—does it sound like you? If “moreover” sneaks in, tweak it. Make it yours. That’s the beauty of education: it’s not static; it’s alive, evolving with every generation. And in this digital era, that’s more valuable than ever. Let’s embrace it, quirks and all.