By Javed Mohammed

Attorney at Law at Pollonais Blanc de la Bastide & Jacelon.

Generative AI is now a practical reality in legal work. We may adopt it deliberately, or encounter it through clients, external counsel, and the platforms we use every day. The real question is no longer whether Attorneys at Law should use AI, but how we use it without compromising accuracy, confidentiality, and the proper administration of justice. Professional guidance emerging from the UK and other Commonwealth jurisdictions is converging on the same point: innovation is welcome, but integrity is non-negotiable in the way these tools are selected, supervised, and deployed.

Large Language Models (LLMs) are, at their core, generative systems. They can be valuable drafting and brainstorming assistants, but they are not concerned with “truth” in the way humans are. They predict plausible text based on patterns in data, and their outputs often carry a confident tone that invites users to anthropomorphize them. That can lead to undue reliance on content that reads persuasively while being wrong. This distinction matters in legal work because professional obligations are not satisfied by plausible language, stylistic fluency, or the exhaustive use of legalese, but by the accurate application of the law to the facts, supported by genuine authorities, to arrive at a considered judgement.

Hallucinations

One of the most practical risks is “hallucination”: the generation of plausible but false content, including invented citations. The Bar Council’s Information Technology Panel, in its paper Considerations when using ChatGPT and generative artificial intelligence software based on large language models, cautions that hallucinations can occur even in specialized AI legal research tools, and emphasizes that practitioners must verify that cited sources exist, that citations are accurate and extant, and that each authority genuinely supports the proposition advanced. The same paper points to the consequences now seen in litigation, where fabricated citations found their way into pleadings, attracting severe judicial criticism, wasted-costs consequences, and referrals of the Attorneys at Law concerned to regulatory bodies. This is not merely a “tech” issue but an “ethical and professional responsibility” issue: ultimately, the practitioner remains fully accountable for what is advised, filed, served, or relied upon, whether or not an AI tool assisted at any stage.

Confidentiality and Privilege

Confidentiality and privilege remain equally central. The Bar Council paper stresses the need for extreme vigilance before sharing legally privileged, confidential, or personal data with generative tools, particularly where user inputs may be stored, reused, or otherwise processed beyond the practitioner’s control. Closer to home, Mark Bissett’s LexisNexis commentary, Balancing Innovation and Integrity: The Bahamas Issues New AI Guidance for Legal Practitioners, echoes the same caution against uploading client-sensitive information into public chatbots or open AI platforms, precisely because that content may be retained or repurposed by the platform. The practical implication for Attorneys at Law, therefore, is that if LLMs are used at all, whether or not regulatory frameworks exist in the jurisdiction, they should be used within a secure environment, with clear safeguards, and with firm-level policy specifying what may (and may not) be entered into any system. Oversight and training ought to be mandatory and integrated into current practices.

Rakhee Patel’s LexisNexis paper, Responsible AI in practice: managing risk across global operations, frames responsible AI as part of broader client trust and reputational risk management, and reinforces that professional judgement, verification, and accountability remain with the lawyer regardless of the tool used. In practical terms, “responsible AI” looks like what well-run practices already do: structured supervision, quality control, careful data discipline, and clear accountability from input to output.

Regulatory guidance

The regulatory direction is also becoming clearer. The Bar Council paper anticipates that procedural rules may develop to require disclosure to the Court where generative AI has been used in preparing materials, noting that this approach has already been adopted in some jurisdictions. The UK Law Society, in Generative AI – the essentials, similarly presents a balanced view: generative AI can enhance efficiency, reduce costs, and support innovation in legal services, but only where its use is informed, deliberate, and subject to professional control. The Law Society emphasizes that generative AI does not “understand” meaning or accuracy in a human sense and cannot autonomously validate the correctness of its outputs, which highlights the risk of misleading material if outputs are relied upon without critical oversight. It also identifies a broader risk landscape, which lies beyond the scope of this article, including output integrity, confidentiality and data protection, intellectual property, cyber security, bias and ethical concerns, and reputational harm.

Conclusion

So where does that leave us as practitioners?

AI is a tool in the arsenal. Properly deployed, it can improve speed, consistency, and efficiency, and it may assist with early drafting, summarization, and structured prompts for analysis. But because LLMs are generative, they demand careful inputs, rigorous checking, and disciplined human oversight, and they cannot replicate the nuanced judgement that legal work requires. The anxiety many feel about not knowing what is authentic is understandable. The remedy, however, is not denial, but education, clear policies, and rules that clarify appropriate use. The horse has long bolted the pen. As a profession and a region, we either engage responsibly with these tools or we risk playing catch-up in a global game of tag where the other runners are getting faster by the day. The reassuring reality is that nuance remains deeply human, and generative AI is unlikely to replace that anytime soon, particularly where instructions, evidence, candour, truthfulness, and completeness remain wholly human responsibilities.

Disclaimer: This blog is for informational purposes only and does not constitute legal advice.

Acknowledgement and sources (paraphrased with attribution):

  1. The Bar Council Information Technology Panel, Considerations when using ChatGPT and generative artificial intelligence software based on large language models (issued 30 January 2024; last reviewed 25 November 2025) https://share.google/9UdjFuZgfsFAL9Exp
  2. Mark Bissett, Balancing Innovation and Integrity: The Bahamas Issues New AI Guidance for Legal Practitioners (13 November 2025) https://share.google/2fB4Z3RS6KorvYRGe
  3. Rakhee Patel, Responsible AI in practice: managing risk across global operations (23 December 2025) https://share.google/0fjIuLctRQrsxSnQT
  4. The Law Society, Generative AI – the essentials (last updated September 2025) https://prdsitecore93.azureedge.net/-/media/files/topics/ai-and-lawtech/generative-ai_the-essentials_september-2025.pdf?rev=9c8436404cbc4d809532044a4d6c5b1e&hash=6E1601CD65ADDA5F5AD5FA0B54EF76E0&_gl=1*1xupp7*_gcl_au*MTM4NzA4MDExNC4xNzY3Nzg1OTIx