Pollonais, Blanc, de la Bastide & Jacelon

By Javed Mohammed

Advocate, Attorney-at-Law at Pollonais, Blanc, de la Bastide & Jacelon.

 

Introduction
Artificial intelligence has rapidly evolved from a back-end tool into “agentic”[1] systems capable of autonomous decision-making. These AI agents can interpret data, make recommendations, or even take action with minimal human oversight. Such autonomy offers immense opportunities: automating customer service, optimising public services, and driving innovation. But it also introduces new legal and ethical complexities. Trinidad and Tobago is already confronting these realities. In May 2025, the Honourable Mr. Justice Westmin James criticised attorneys for submitting fictitious, AI-generated case law, warning that “irresponsible use of… generative AI tools undermines… the credibility of the legal system.”

In May 2025, Pollonais, Blanc, de la Bastide & Jacelon attended the international webinar Agentic AI: Navigating Legal Risks of Autonomous AI Tools for In-House Counsel, hosted by In-House Connect. Presenters Brendan Palfreyman and Michelle Fleming of Harris Beach Murtha, Attorneys at Law, respected experts in emerging technology law, explored how AI agents, unlike traditional generative tools such as ChatGPT and Grammarly, plan and act autonomously in complex digital environments. Their insights into governance strategies, vicarious liability, intellectual property, and regulatory trends offered valuable perspectives on the global momentum toward risk-informed AI adoption. As Fleming highlighted, AI agents are “not just predicting, but performing tasks”, creating urgent questions around operational control and legal responsibility for businesses adopting the technology, and for their legal advisors.

AI adoption has firmly entered the national conversation. The newly sworn-in Government of Trinidad and Tobago has signalled its commitment to the responsible adoption and integration of artificial intelligence with the establishment of the Ministry of Public Administration and Artificial Intelligence. This follows the previous administration’s creation of the Ministry of Digital Transformation, which was likewise tasked with laying the groundwork for digital innovation and public sector reform. It shows a rare but evident meeting of the minds across the political divide: AI is here to stay, and adoption and implementation are key to survival. For example, in April 2024, Parliamentarians from both the government and the opposition participated in capacity-building workshops, and the Caribbean has collectively engaged with AI governance through initiatives such as the UNESCO Caribbean AI Policy Roadmap.

The message is clear: as policymakers work toward formal regulatory frameworks, both public and private actors must prepare for compliance and accountability.

AI and Legal Risk

Globally, businesses are embracing AI to enhance efficiency and competitiveness, but not without accompanying legal risk. In Trinidad and Tobago, it is still unclear whether existing common law doctrines can effectively extend to AI-related harms. Tort law may apply where an AI-enabled tool causes injury or financial loss, whether through negligent deployment or defective design. While no local decision has yet ruled on AI liability, jurisdictions such as the UK, under the Automated and Electric Vehicles Act 2018, have begun allocating responsibility to manufacturers and insurers in cases involving autonomous technologies. In the local context, this signals that organisations deploying AI must exercise care in testing, monitoring, and mitigating potential harms. Fleming and Palfreyman noted during the IHC webinar that as AI autonomy increases, so too does the scope of vicarious liability. This suggests that local courts, which have declared that “in the absence of legislative amendment, the common law should evolve”, may eventually treat AI-induced errors under stricter liability standards at common law, until statutory guardrails are implemented.

Contract law presents equally complex challenges. Trinidad and Tobago’s Electronic Transactions Act, Chapter 22:05, sections 20(1) and 20(2), recognises contracts formed by electronic agents, meaning a chatbot or algorithm can legally bind a company in contract, whether with a consumer or another AI. Businesses must therefore clearly define the scope of authority of their AI systems and ensure that disclaimers, consent language, and governance policies are in place to prevent unintended obligations. The risk of misrepresentation, particularly in consumer-facing AI tools, makes it essential that companies have protocols for correcting erroneous outputs and limiting liability through carefully worded terms of use. These concerns were flagged in Moffatt v Air Canada, 2024 BCCRT 149, discussed during the webinar, in which the tribunal held Air Canada liable for negligent misrepresentation after its website chatbot inaccurately informed a customer that bereavement fare refunds could be applied for retroactively. The tribunal emphasised that companies are responsible for all information presented on their websites, including that provided by automated systems such as chatbots, and ordered Air Canada to compensate the customer for the fare difference.

Perhaps the most immediate compliance risk lies in data protection. AI thrives on data, especially personal data, yet the legal framework surrounding data use in Trinidad and Tobago is still evolving. The Data Protection Act (DPA), Chapter 22:04, has only been partially proclaimed, but when it is fully operationalised it is expected to reflect international best practice. Sections 6 and 69, read together, for example, provide strict guidance to companies to ensure that personal data used to train AI models is collected lawfully, stored securely, and processed with transparency and consent. With AI’s reliance on cloud computing and offshore platforms, local businesses will also need to navigate cross-border data transfer rules. As enforcement intensifies, proactive compliance with the DPA will become a key element of AI governance. Businesses would be well advised to educate themselves in these matters, so as not to cross the fine line between the data-hungry learning algorithms on which AI depends and the guardrails set by the DPA.

AI’s reach into intellectual property (IP), employment, discrimination, and cybersecurity law is also growing. IP laws currently do not recognise AI as an author or inventor, so companies must contractually secure rights over AI-generated outputs. Suppose AI generates the lyrics for a calypso, or a new “soca riddim”: who owns the IP, and is it copyrightable? In the IHC webinar, the presenters emphasised that copyright protection is generally not available for content generated solely by AI unless there is sufficient human intervention. Fleming explained that “you need a human hand on the output to make a copyright claim stick”, a reminder that AI content needs to be curated or guided.

In employment law, AI tools used in recruitment or performance evaluation may, without proper and proactive oversight, inadvertently breach anti-discrimination provisions under the Equal Opportunity Act, Chapter 22:03.

The IHC webinar highlighted that strong internal governance, including clear policies defining permissible AI uses, approval structures, and staff training, is essential to responsibly deploy AI agents. Whether through ethics committees, human oversight protocols, or algorithmic audits, these measures are becoming essential to compliance and reputational protection.

Regional and International Models

Trinidad and Tobago can draw on several instructive models on the road to safe and efficient adoption. The UK has instituted what appears, at this time, to be a principles-based framework that relies on sector-specific laws rather than a centralised AI act. Canada has taken a more comprehensive path with its proposed Artificial Intelligence and Data Act, which would regulate “high-impact” AI systems and create mechanisms for redress. Within the Caribbean, countries such as Jamaica and Barbados are exploring AI in public health and fintech (financial technology), respectively, while regional bodies such as CARICOM IMPACS (Implementation Agency for Crime and Security) and the Caribbean Telecommunications Union are leading policy conversations across the region. The UNESCO AI Policy Roadmap also offers a set of regulatory priorities, including ethical oversight and cross-border cooperation, which Trinidad and Tobago can draw upon.

Conclusion

AI is poised to reshape Trinidad and Tobago’s economy, governance and the way we do business. With that transformation comes the responsibility to ensure that AI tools are deployed safely, lawfully, and transparently. For legal teams, whether in the public or private sector, the time to prepare is now. This means revisiting policies, contracts, compliance frameworks, and internal governance systems to account for the distinct challenges posed by AI. The risks are not abstract: liability, reputational harm, regulatory scrutiny, and consumer trust are all at stake.

The legal community has a vital role to play in shaping this landscape. By staying engaged with international best practices and local legal reform, practitioners can help ensure that AI deployment in Trinidad and Tobago is both efficient and responsible. As the use of AI becomes more embedded in business operations and government services, access to clear, practical legal guidance will remain essential to aligning innovation with the rule of law.

Legal professionals with cross-sector experience in digital transformation, data governance, and regulatory compliance will be especially important in guiding organisations through this new landscape. Whether designing AI governance structures, reviewing vendor contracts, or interpreting evolving legislation, sound legal advice can help businesses and institutions unlock AI’s potential while remaining compliant with Trinidad and Tobago’s legal and ethical standards. In this way, Trinidad and Tobago can confidently and ambitiously embrace the global digital age.

Disclaimer: This blog is for informational purposes only and does not constitute legal advice.

 

[1] Agentic AI refers to artificial intelligence that operates with autonomy. Unlike generative AI, which responds to user input, agentic AI is proactive. It can independently plan, make decisions, and execute multi-step tasks. According to Microsoft, these agents act as an intelligent layer atop language models like ChatGPT, enabling them to observe, plan, and act. This autonomy is what distinguishes agentic AI and introduces novel legal risks.

 
