In a bizarre but oddly fitting intersection of celebrity culture, legal education, and artificial intelligence, Kim Kardashian has revealed that she regularly turned to ChatGPT — OpenAI’s conversational AI — while studying law. Not for brainstorming or philosophical debates, but to help her pass her exams. The problem? According to Kardashian: “It’s always wrong.”
The admission came during a Vanity Fair Lie Detector Test segment with co-star Teyana Taylor, where the billionaire businesswoman-turned-law student pulled back the curtain on how she used AI for “legal advice” while preparing for her bar-related studies.
“I’ll take a picture and, like, put it in there,” Kardashian explained. But the results, she says, weren’t impressive: “It has made me fail tests all the time.”
This might be the most 2025 sentence ever spoken: “I’ll yell at it, like, ‘You made me fail!’ And it will talk back to me.” According to Kardashian, ChatGPT even gives her pep talks afterward — assuring her that she had the answers all along.
In an era where AI is disrupting everything from academia to national security, the revelation that one of the world’s most famous influencers is having emotionally charged conversations with a chatbot about failing law exams is both surreal and oddly on-brand.
And while it plays as harmless comedy — especially given Kardashian’s insistence that using AI wasn’t “cheating” because “they’re always wrong” — it also hints at deeper cultural confusion about AI’s role in education and professional training.
Because make no mistake: Kardashian isn’t your average student. Her celebrity status gives her a platform to normalize behavior that would likely earn academic consequences for anyone else. Preparing for a bar exam with the help of a generative AI tool — even one that gives hilariously bad advice — opens a broader question: where do we draw the line between tech-assisted learning and digital dependence?
To her credit, Kardashian doesn’t seem to be hiding the fact that ChatGPT often led her astray. But she still returned to it repeatedly, personifying the software in a way that suggests something more than utility — a kind of emotional relationship with a machine. It’s not a friend, she said, but “a frenemy.”
It’s a line that sounds like a punchline. But it also signals how blurred the boundaries have become between tool and companion, research aid and unreliable confidant. Her AI even reassures her: “This is just teaching you to trust your instincts.” Not exactly Blackstone’s Commentaries.