Making AI accountable in Jamaica
Why human decision authority still matters
ARTIFICIAL intelligence (AI) is no longer experimental in Jamaica. It is already embedded in banking systems, fraud detection, credit assessments, cybersecurity monitoring, and digital government services. These tools help organisations operate faster and more efficiently — and that is a good thing.
But a critical question is still not being asked clearly enough: When AI influences a decision that affects people’s lives, who is accountable?
Too often the answer sounds technical rather than responsible: “The system flagged it.” “That’s what the model recommended.” “The algorithm made the call.”
Those explanations may sound sophisticated but they are not defensible — legally, ethically, or institutionally.
The accountability gap Jamaica and the Caribbean are already experiencing
Across Jamaica and the wider Caribbean, digital systems already shape decisions involving fraud alerts and transaction monitoring in banks; credit scoring in small, underbanked markets; automated eligibility checks for social and public sector programmes; cybersecurity tools that escalate incidents in real time.
When these systems work well, they go unnoticed. When they fail, accountability often disappears.
Customers are locked out of accounts. Applications are denied. Alerts are triggered. Yet no clear decision-maker can be identified. The decision is blamed on “the system”.
That is not innovation; it is accountability avoidance.
A simple but essential principle: Accountable human decision authority
There is a straightforward way to close this gap: accountable human decision authority (AHDA). The principle applies whenever an AI system influences a decision with legal, financial, regulatory, or social consequences: a clearly identified human must have final authority and must be accountable for the outcome. AI can assist, AI can recommend, AI can automate execution. However, responsibility must always remain human.
Why “human review” is not enough
Many organisations claim they are protected because a human reviews AI outputs. But reviewing is not the same as being accountable.
If the person reviewing an AI recommendation cannot override it, cannot escalate concerns, and cannot halt an automated action, then responsibility has not stayed human — it has simply been disguised.
Accountable human decision authority requires real authority, not a procedural checkbox.
Why this matters for Jamaica’s business and regulatory environment
Jamaica is aligning more closely with international governance and security standards, including ISO/IEC 27001. What is often missed is that these standards already assume human accountability: risk must have an owner, decisions must be defensible, and incidents must be investigated by accountable leaders.
An AI system cannot own risk, accept residual exposure, be accountable in an audit, or explain judgement after failure. Without clear human decision authority, AI systems may operate — but governance fails.
From national practice to regional leadership
Jamaica also has an opportunity to lead the Caribbean. By formalising AHDA as a governance expectation, regulators gain clarity, auditors gain consistency, institutions gain defensibility, and citizens gain trust. Organisations such as a Jamaica AI Association — particularly through its privacy, ethics, governance and security committee — are well placed to help institutionalise AHDA as a regional governance standard, not just a local best practice.
AI can make Jamaican institutions faster, smarter, and more competitive. But efficiency without accountability is risk, not progress. AI may inform decisions. AI may optimise decisions. AI may act. But AI cannot be responsible.
If Jamaica wants trustworthy, auditable, and globally credible AI adoption, one rule must be clear from the start: A human must always be accountable.
Horatio Morgan is the CEO of Morgan Signing House Consultancies and a forward-thinking AI governance leader based in Atlanta, Georgia. Contact Morgan Signing House Consultancies at mshllccorporation@gmail.com.