Kim Kardashian’s ChatGPT Mishap Spotlights AI’s Limitations in Law
Kim Kardashian’s recent revelation about failing law exams after relying on ChatGPT has ignited an important conversation about artificial intelligence’s role in professional fields. The SKIMS co-founder and aspiring attorney admitted in a Vanity Fair interview that she used the AI chatbot to answer legal questions while studying, only to discover its responses were frequently incorrect. “I use [ChatGPT] for legal advice… They’re always wrong. It has made me fail tests,” Kardashian confessed, highlighting how the technology’s confident yet inaccurate answers directly contributed to her academic setbacks. Despite these challenges, the 45-year-old remains committed to her legal journey, planning to retake the California bar exam. Her experience serves as a cautionary tale about the limitations of AI in specialized professional contexts.
The situation reflects a growing trend across professions, with students and practitioners increasingly turning to AI tools for help with complex tasks, and legal experts have voiced concern about the shift. Duncan Levin, a former prosecutor and Harvard law lecturer, colorfully described Kardashian’s approach as “like saying you hired a Magic 8 Ball as co-counsel,” warning that AI can “sound confident while being completely wrong.” The issue extends beyond celebrity anecdotes; attorneys in American courts have already been disciplined for submitting documents that contained fabricated, AI-generated citations. This pattern underscores a fundamental challenge: while these tools produce plausible-sounding content, they remain prediction machines rather than reliable factual databases, making them problematic for high-stakes professional work.
The concept of AI “hallucinations” – what OpenAI defines as “instances where a model confidently generates an answer that isn’t true” – stands at the heart of this controversy. Matthew Sag of Emory University School of Law emphasized that generative AI can be useful for lawyers, but only when wielded by those who already possess legal expertise. “Everything ChatGPT tells you about the law will sound plausible, but that’s dangerous if you don’t have some expertise or context to see what it’s missing and what it’s hallucinating,” he explained to Newsweek. This creates a paradoxical situation: those with enough knowledge to effectively use AI tools might not need them as much, while those most dependent on the technology lack the expertise to identify its errors, potentially leading to serious professional missteps.
The dilemma grows more complex when access to legal resources enters the picture. Harry Surden of the University of Colorado Law School points out that approximately 80% of Americans face legal issues without access to affordable legal counsel. In those situations, AI might represent “an improvement over the alternative, which is often guessing or bad legal advice from friends and family.” This pragmatic perspective, however, comes with significant caveats. Mark Bartholomew of the University at Buffalo School of Law notes that while integrating AI into legal education is inevitable, “the danger is overreliance.” He worries that students who lean too heavily on AI “might stunt their development as lawyers” by letting it replace the essential work of reading cases, analyzing laws, and constructing arguments, the skills that form the foundation of legal expertise and cannot be outsourced to algorithms.
Beyond individual setbacks, AI’s infiltration into professional practice raises systemic concerns about accountability and competence. Frank Pasquale of Cornell Law School observes that incorrect AI-generated legal documents are “already a big problem,” with lawyers in multiple countries facing sanctions for citing nonexistent cases. Dr. Anat Lior of Drexel University emphasizes that Kardashian’s experience serves as “an important caution for anyone using ChatGPT for high-stakes situations.” The challenge extends beyond mere technical glitches; it raises fundamental questions about professional responsibility and the ethical use of technology. As judges begin issuing sanctions against “lazy lawyering” that relies too heavily on unverified AI output, the legal profession faces mounting pressure to establish clear boundaries and standards for appropriate AI integration.
Despite these significant concerns, experts generally agree that AI tools will remain fixtures in professional environments, including law. The emerging consensus is that generative AI should function as a starting point rather than a definitive authority: a resource whose output requires rigorous human verification before it can be trusted. Kardashian’s public admission of failure after relying on ChatGPT offers a valuable learning opportunity for students, professionals, and the wider public. It reminds us that while technology continues to advance at remarkable speed, certain professional skills (critical thinking, contextual understanding, and ethical judgment) remain irreplaceably human. As AI becomes more deeply integrated into high-stakes fields, maintaining healthy skepticism toward its outputs and upholding traditional standards of professional responsibility become not just prudent but essential when lives, livelihoods, and justice hang in the balance.