I’d been waiting for it, and then it arrived in a client’s updated Outside Counsel Guidelines:
The Firm and its personnel or subcontractors shall not use any External Generative AI Tools in the performance of any services in relation to a Matter … or use External Generative AI Tools to generate any deliverable, document or content for [Client], regardless of the intended end use.
No problem. We don’t use generative AI tools in our client work.
Don’t get me wrong – we’re not Luddites here, and I’m not dreading a Skynet singularity anytime soon. We’re a law firm focused on Information Governance, so we need to stay up to speed on the impacts of a wide range of information technologies. And the promise of AI is indeed breathtaking. Sam Altman had a lot on his plate last week, but his Hard Fork interview of November 15, just two days before all the crazy erupted at OpenAI, is a must-listen on the transformative potential of AI, with due regard for attendant risks.
Yet we each choose for ourselves what tools we will use – or not use – when lawyering for clients. I don’t presume to know whether other lawyers should use generative AI tools, but I do know my answer. And it’s no for me and my law firm, for three reasons.
First, generative AI is not dependable enough for my taste in client work. And that’s not just the idiocy of filing an AI-generated brief without checking its accuracy. Simply put, the reliability – or, as Altman prefers, the “controllability” – of generative AI is not there yet, or at least not for me. A hallucination rate of three percent, let alone one closer to 27 percent, is well beyond my comfort level – my own professional goal remains to hallucinate less than one percent of the time.
Second, after decades at a big firm, I designed my small firm to be simple, stripping away anything that has more risk or cost than value to me. For example, we don’t use Wi-Fi, because Wi-Fi is unnecessary for us, it has security vulnerabilities, and the requisite safeguards would complicate our firm’s annual ISO 27001 data security audits. For me, using generative AI tools would bring more risk and uncertainty than value to my law practice.
Last, I need to go to the source to ensure I know what I’m doing. When advising clients on data retention and data security requirements under U.S. federal and state statutes and regulations, I need to personally read the laws and regulations, without any filter or intermediary. Otherwise, I’m not sure. And to be frank, it all sticks better in my head when I take the time to do my work at this deliberate pace, which puts the “sure” in “slowly but surely.”
I know that these are early days for generative AI. In Gartner Hype Cycle terms, we are still at the Peak of Inflated Expectations, trending toward the Trough of Disillusionment. I’ll be watching from the sidelines as AI eventually reaches the Plateau of Productivity. For purely creative writing, including some blogging, I of course acknowledge AI’s kickstarting value. And Kevin O’Keefe nails it when he reminds us that “using AI in legal writing is like taking an open book exam—you still need to understand the concepts.”
So, while state ethics committees continue to ponder the legal ethics of AI use (despite GPT-4 apparently outscoring most lawyers-to-be on the Multistate Professional Responsibility Exam), my choice will remain Organic Intelligence, not AI.
No snobbery about this – you do you, and I’ll do me. But in my own legal work for clients, I’m simply not content with using AI-generated content.