The real danger behind the AI caricature trend
Dear Editor,
The instructions seem harmless and simple: “Go to ChatGPT and use this prompt: Create a caricature of me and my job based on everything you know about me.”
A straightforward direction, but one with potentially devastating implications.
The truth is that the inner workings of many artificial intelligence (AI) systems are opaque and complex. This makes it difficult for the average user to know exactly where the information they submit goes, how it is processed, how long it is stored, or whether it may be reused in the future.
What many people don’t realise is that their faces are not just images — they qualify as sensitive biometric data. A face is so distinctive that it now serves as a security credential in its own right: many modern devices rely on facial recognition as a secure, hands-free method of authentication.
When individuals upload their faces to AI platforms, they may unknowingly be handing over biometric data that could be captured, stored, or used to train systems in ways they never intended. In some cases, that image may become part of a training dataset from which the AI model can draw to generate future images, videos, or other content.
And the risks don’t stop at caricatures. Your face can be misused in other ways, including the creation of deepfakes. It can potentially be reproduced, manipulated, or inserted into content without your knowledge or consent. Someone who has never met you could generate an image of you for reasons you cannot control, predict, or even detect. The trend may look fun, but the privacy risks are real.
Stop feeding AI platforms your personal information — especially your face — without understanding the consequences.
Your privacy is power; guard it with care.
Brandy Evans
Attorney-at-law
Data protection officer
evansbrandy649@gmail.com
