The Race Between Excellence and Mediocrity in the AI Age
In the rapidly evolving landscape of artificial intelligence, a nuanced reality is emerging in educational institutions across America. Sam Ransbotham, a professor of business analytics at Boston College and host of MIT Sloan Management Review’s “Me, Myself, and AI” podcast, finds himself both inspired and concerned by what he witnesses daily in his machine learning classroom. Some students leverage AI tools to create truly remarkable work, pushing boundaries he couldn’t have anticipated. Yet alongside this innovation runs a troubling parallel trend: many students are simply “phoning things into the machine,” settling for AI-generated mediocrity rather than developing deeper understanding. This phenomenon represents a new kind of digital divide—not one of access, since Boston College provides premium AI tools to all students regardless of socioeconomic background, but one of technological curiosity and engagement. “The deeper that someone is able to understand tools and technology, the more that they’re able to get out of those tools,” Ransbotham explains. His concern crystallizes around what he calls “a race to mediocre,” where the ease of achieving acceptable results through AI threatens to undermine the pursuit of excellence that should define higher education. “Boston College’s motto is ‘Ever to Excel.’ It’s not ‘Ever to Mediocre,’” he notes pointedly, highlighting how the convenience of AI-assisted shortcuts can impede students’ development of true mastery and understanding.
This educational dilemma reflects broader questions about how we measure AI’s impact on society and work. Ransbotham draws an insightful parallel to Wikipedia, which he researched extensively over a decade ago. Before Wikipedia, Encyclopedia Britannica represented a traditional business model with measurable economic value—employees produced content, printers created physical books, and distribution chains delivered tangible products. When Wikipedia emerged, Encyclopedia Britannica’s business model collapsed, representing a measurable economic loss. Yet Ransbotham poses a thought-provoking question: “Would any rational person say that the world is a worse place because we now have Wikipedia versus Encyclopedia Britannica?” The conventional economic metrics failed to capture Wikipedia’s immense societal value—its accessibility, scope, and dynamic nature far exceeded what was previously possible. Similarly, AI’s true impact often eludes traditional measurement frameworks. When AI helps someone make a marginally better decision by providing enhanced insights from existing documents or data, how do we quantify that improvement? These small, incremental enhancements to decision quality across millions of daily choices may represent enormous aggregate value that remains largely invisible to conventional economic indicators. Like Wikipedia before it, AI may be creating forms of value that our measurement systems aren’t designed to detect.
Interestingly, despite the media focus on AI’s content generation capabilities, Ransbotham finds himself gravitating toward a different application: information distillation. In a world drowning in data and content, AI’s ability to meaningfully summarize and extract key insights from vast information landscapes may ultimately prove more valuable than its creative functions. “We talk a lot about generation and the generational capabilities, what these things can create,” he observes. “I find myself using it far more for what it can summarize, what it can distill.” This perspective suggests that AI’s most practical immediate value might lie not in replacing human creativity but in amplifying human comprehension—helping us process more information in less time and focus our limited attention on what truly matters. In an information economy where attention has become perhaps our scarcest resource, AI tools that effectively compress and prioritize information could represent a significant productivity multiplier, helping professionals and students alike fit more meaningful cognitive work into their limited 24 hours each day.
Perhaps most fascinatingly, Ransbotham finds value in AI even when it fails—indeed, sometimes especially when it fails. “Often I find that the tool is completely wrong and ridiculous and it says just absolute garbage,” he admits. “But that garbage sparks me to think about something—the way that it’s wrong pushes me to think: why is that wrong? … and how can I push on that?” This perspective reveals a profound insight about human-AI collaboration: even incorrect AI outputs can stimulate valuable critical thinking. When an AI system produces a flawed answer, the process of identifying, analyzing, and correcting those flaws often leads humans to deeper understanding and more nuanced thinking. This suggests that the most productive relationship with AI might not be uncritical acceptance of its outputs but rather an iterative dialogue in which human judgment refines and improves upon machine-generated starting points. In educational settings, this points to pedagogical approaches where AI tools aren’t banned but are instead incorporated as thinking partners whose limitations become opportunities for developing critical thinking skills.
Throughout his conversation, Ransbotham emphasizes the importance of cutting through polarized narratives about artificial intelligence. “There’s a lot of hype about artificial intelligence,” he notes. “There’s a lot of naysaying about artificial intelligence. And somewhere between those, there is some signal, and some truth.” This measured approach reflects the mission of his podcast—to move beyond both techno-utopian fantasies and dystopian fears to uncover how AI is actually transforming organizations and lives. The reality he observes is neither the job apocalypse feared by pessimists nor the productivity paradise promised by optimists, but rather a complex landscape of incremental changes, unexpected applications, and evolving human-machine relationships. Finding this “signal” amid the noise requires looking beyond dramatic headlines to understand how people and organizations are actually incorporating these tools into their work and learning—sometimes in ways the tools’ creators never anticipated. The true story of AI’s impact emerges not from theoretical projections but from careful observation of these real-world adaptations and innovations.
As we navigate this transitional period in AI development, Ransbotham’s observations highlight a central tension: technologies that make adequacy easily achievable may inadvertently discourage the pursuit of excellence. This challenge extends beyond education into workplace settings, creative fields, and virtually every domain where AI tools are being deployed. The path forward likely involves developing new pedagogical and organizational approaches that harness AI’s summarization and ideation capabilities while still cultivating the deeper understanding and critical thinking that distinguish truly excellent work. Rather than viewing AI as either a replacement for human effort or merely a productivity tool, we might instead conceptualize it as a thinking partner whose greatest value comes not from providing final answers but from expanding the range of possibilities we consider and helping us navigate increasingly complex information landscapes. The question becomes not whether AI will replace human intelligence, but how human intelligence will evolve and adapt in partnership with these new cognitive tools—and whether we can harness them to raise the ceiling of human achievement rather than merely lifting the floor of acceptable mediocrity.