Most lawyers were torn between wincing and laughing when a lawyer filed a brief packed with case authority created out of whole cloth by an AI bot. Meanwhile, a segment of the bar is fretting that we will be replaced by powerful artificial intelligence.
My concern, based on a couple of casual forays into AI, is not that I will become redundant, but rather that I will spend my professional life trying to fix the harm of bad information delivered to the public. My fear is that the public stands to be even further misinformed, rather than enlightened, by artificial intelligence in the law.
Close encounters with chatbots
I launched a couple of queries about a subject I know well, bankruptcy law, to scope out the current capacity of AI chatbots. I first asked ChatGPT, Google’s Bard, and Bing with AI to compare Chapter 7 with Chapter 13.
Bard told me you repay all your debts in Chapter 13. ChatGPT implied that only those below median income could file Chapter 7. Neither, of course, is correct. They also offered, unbidden, conflicting answers on the impact of either chapter on credit scores.
Then I asked about the impact of the California Supreme Court’s Brace decision on California real estate law. Brace overturned what had been the law for decades about joint tenancy and community property. Bard summarized the holding tolerably well, but then went screamingly wrong. Bard said the Brace holding made it more difficult for bankruptcy trustees to reach community property, rather than easier to reach joint tenancy property. Bard missed entirely that all of the community property comes into a bankruptcy estate, and implied that a non-debtor spouse kept that spouse’s 50% interest out of bankruptcy.
ChatGPT returned nothing, noting that its knowledge base cut off in 2021, the year after the Brace decision. Bing knew nothing about the impact of Brace.
Finally, a question to Bard about the number of California properties held in joint tenancy delivered an answer: “12,500,000 most between spouses.” Good so far. But how do I cite it in the article I’m writing? And in light of its answers to my other queries, how do I know it’s reliable?
But Bard couldn’t leave well enough alone; it went on to discuss joint tenancy and flatly stated that joint tenants become liable for each other’s debts! I can’t even imagine what source it scraped for that bit of misinformation.
AI impact on consumers
I recognize that it’s early days for AI for the general public. But with all the hype, it’s inevitable that our potential clients will be looking to the web, even more than they do now, to answer their legal questions.
There’s no shortage of bad information about bankruptcy already on the web. But a traditional web search returns multiple results with attribution, which helps a careful reader assess the reliability of what any one site says about a question.
The chatbots I tinkered with offer no clues about where their information originated. Worse, their answers are delivered as authoritative. And presumably, the chatbot bears no liability for the harm that inevitably flows from accepting its results as gospel.
Currently, we have bad information wrapped in a high-tech cloak of absolute authority. Part of our role as consumer lawyers has always been the education of the public. In the face of today’s AI, we are called on to up our game.