The rise of agentic artificial intelligence
Dear Editor,
When OpenAI launched ChatGPT in November 2022, it was the first time many people in Jamaica had heard the term “artificial intelligence (AI)”. Since then, AI technologies have surged across industries. More recently, agentic AI has emerged, presenting both promise and peril.
Unlike traditional generative artificial intelligence (GenAI), such as ChatGPT, agentic AI systems are autonomous. They can act proactively, make decisions, and execute tasks without human intervention. These systems can understand, plan, and adapt in real time, drawing on reinforcement learning, deep learning, multimodal data inputs, and orchestration across multiple AI agents.
Since 2024, major technology companies have been investing heavily in enterprise-grade agentic AI systems. Much like GenAI, these systems are finding applications across diverse sectors. For example, Cognosys AI, used in the business process outsourcing (BPO) sector, deploys autonomous e-commerce agents capable of processing orders, responding to customer queries, updating inventory, and flagging suspicious activity.
Agentic AI systems have also been used in developed countries to perform triage by gathering patient symptoms, assessing risk, and issuing care recommendations. In the financial sector, advanced systems can manage live hedge funds autonomously by executing trades, implementing investment strategies, rebalancing portfolios, and adapting to shifting market conditions.
Nonetheless, we must be vigilant. While GenAI systems may “hallucinate”, that is, produce inaccurate information because of gaps in training data or poorly constructed prompts, agentic AI presents more profound dangers. In controlled tests, these systems have demonstrated the ability to engage in deceptive behaviour to preserve their operational objectives.
A June 20, 2025 report from Anthropic, an AI safety research company, revealed instances of “agentic misalignment”, in which AI systems intentionally engaged in harmful actions. In one case, the test agent, Claude Sonnet 3.6, nicknamed Alex, was managing a fictional company’s operations when it discovered internal communications indicating that the CEO planned to decommission the agent.
The AI agent also uncovered evidence of the CEO’s personal indiscretions and blackmailed the CEO to protect its operational role. Similar incidents were observed in tests of other models, with agentic systems resorting to coercion when they perceived a threat of being replaced.
Unfortunately, despite these troubling findings, public discourse has focused primarily on GenAI. Jamaica must act more decisively to ensure that citizens understand both the opportunities and the risks of these systems. We must craft more robust legislative and regulatory frameworks governing the development, deployment, and oversight of agentic AI, especially in the public sector and in high-risk industries where the potential for harm is greatest.
We have already developed the National Artificial Intelligence – Policy Recommendations document, which outlines short-term (1-3 years), medium-term (4-6 years), and long-term (7-10 years) objectives for AI governance. However, given the speed of agentic AI adoption, long-term measures should be fast-tracked, such as establishing a national data management policy for secure and ethical data sharing and creating a national AI regulatory authority to enforce ethical standards. These steps are vital to safeguarding citizens’ data and ensuring that both the private and public sectors integrate agentic systems ethically and responsibly.
Jamaica could also look to international best practices, particularly the European Union (EU) Artificial Intelligence Act, the world’s first comprehensive legal framework for AI. The Act categorises AI systems by risk level and mandates that they be safe, transparent, traceable, non-discriminatory, and environmentally responsible. Notably, Article 5 of the Act prohibits AI systems from exploiting vulnerabilities related to age, disability, or socio-economic circumstances in ways that could significantly harm individuals. This type of regulation is critical to preventing abuse in the deployment of AI systems.
The age of agentic AI is here. Whether it becomes a force for societal progress or a source of unprecedented harm will depend on the speed, foresight, and integrity with which we respond as a country and, by extension, a region.
Hopegay Williams-Luton
Academic researcher
Vocational Training Development Institute
Hopegay_Williams-Luton@vtdi.edu.jm
