
As artificial intelligence (AI) powers ahead, unlocking new capabilities across industries, it’s also kicking up tough questions about data privacy. Are these two forces doomed to clash—or can they learn to live together? That was the question at the heart of a recent webinar hosted by Juta and Novation Consulting: AI & Privacy: Can They Get Along?
Led by industry changemakers Ilze Luttig Hattingh and Sarah Buerger (Novation Consulting), Nerushka Bowan (LITT Institute), and Johan Steyn (AI for Business), the panel dived into the complex dance between innovation and compliance. From unpacking South Africa’s POPIA and PAIA in the age of AI to tracking the global regulatory pulse—such as the EU’s AI Act and moves in the US and Asia—the discussion sparked valuable, forward-looking debate.
One thing was clear: the law is playing catch-up. And as AI continues weaving itself into the fabric of business and government, the need to get the privacy foundations right is more pressing than ever.
POPIA in Practice: South Africa’s Privacy Moment
Locally, the Information Regulator is stepping up. POPIA is no longer just theory—it is being put to work, especially where AI and data processing collide.
WhatsApp’s Wake-Up Call
Take WhatsApp, for example. The platform landed in hot water after the Regulator found its privacy policy didn’t measure up to South African standards—even though it claimed compliance with the EU’s GDPR. The enforcement notice highlighted three key issues:
- South African users weren’t getting a POPIA-compliant privacy notice.
- WhatsApp’s “consent” mechanism didn’t cut it—agreeing to a privacy policy isn’t the same as specific, informed consent.
- The company’s use of “legitimate interest” to share data with Meta didn’t pass muster.
Oh, and no PAIA manual? Another strike, even though the company claims it isn’t based in South Africa.
New Rules, New Expectations
On 17 April 2025, the Information Regulator gazetted amended POPIA regulations that added more detail—and more responsibility. Information Officers must now keep meticulous records and file annual reports on matters like complaints and breaches. Data subjects can also now object to processing or give consent via channels like SMS and WhatsApp.
One surprisingly welcome change? Companies can now apply for payment plans on administrative fines. Compliance just got a little more doable.
Direct Marketing, Redefined
A key update affects anyone doing direct marketing. The Regulator clarified that even VoIP calls fall under the POPIA definition of “electronic communication”—and need prior consent. While this guidance might one day face a legal test, the message is loud and clear: don’t cold-call without permission.
Mandatory PAIA Reports Are In
By 30 June 2025, all public and private bodies must submit PAIA reports via the Information Regulator’s portal, even if they received no access requests. This disclosure requirement marks a shift toward more proactive transparency and risk management.
Keeping AI in Check: POPIA Still Applies
The Regulator has made it clear: AI tools—yes, even deepfakes—must play by the same rules as everyone else. That means privacy by design, with safeguards built into AI systems from day one.
But some issues aren’t so easy to solve. What happens when someone files a PAIA request for information an AI has already “learned”? It's not just a technical headache—it’s a legal puzzle.
Globally? It’s Still the Wild West
Outside South Africa, the picture isn’t much clearer. Different countries are taking vastly different approaches, and some are cracking down hard.
Legal Action and Bans
OpenAI is facing a lawsuit in California for allegedly training its models on personal data without permission. Meanwhile, entire organisations like the US Space Force have banned generative AI altogether over security concerns.
Italy's No-Nonsense Approach
Italy temporarily banned ChatGPT, only lifting the ban after changes were made—and later slapped OpenAI with a €15 million fine. Other platforms like Deepseek and Replika have also been penalised for privacy violations, especially those affecting minors.
Data Scraping and Interception
Patagonia is in court in the US over AI tools that analysed customer calls—possibly violating interception laws. Meta’s data scraping practices in the UK have also come under scrutiny, especially for training AI without proper consent.
You Can’t Unlearn Data
In a notable case, South Korea’s privacy authority ordered Apple Pay and Kakao Pay to delete an AI model trained on unlawfully transferred data. But once a model has learned something, deleting it isn’t simple—it’s like trying to make someone un-read a book. Even if a regulator orders deletion, full model unlearning is not technically feasible.
Explainability Is No Longer Optional
A European court recently ruled that people are entitled to explanations when AI makes decisions that affect them. This puts pressure on developers to build explainable, accountable systems—no more hiding behind the “black box.”
Legal Orders Shaping AI
A New York court has ordered OpenAI to retain all user outputs indefinitely while a copyright case unfolds. It’s a strong reminder: AI infrastructure is now being shaped by court rulings, not just code.
So, What Comes Next?
AI isn’t slowing down—but regulation is still struggling to keep up.
SA’s AI Strategy Still in the Works
South Africa’s national AI strategy is due to propose regulations by year-end, but many feel the timeline already lags behind reality. There are even doubts about whether enforcement of existing privacy laws is truly effective yet.
EU AI Act: A Model or a Warning?
The EU AI Act sets a global standard with its risk-based framework, mandatory ethics training, and eye-watering fines. But some worry it could smother innovation—especially in smaller markets.
Agentic AI and the Blame Game
As we move toward more autonomous AI (think: drones or AI chatbots acting independently), a new problem arises: who’s responsible when something goes wrong? Existing laws aren’t well equipped to deal with AI systems making their own decisions.
The Ethical Bottom Line
The consensus? Regulation is non-negotiable. Like nuclear tech, AI has enormous potential—but also the power to do harm. Guardrails matter.
Deepfakes on the Rise
Deepfake technology is now available to just about anyone, and South Africa’s cybercrime laws are struggling to keep up. Even when laws exist, enforcement is another story.
Responsible AI Starts with Diversity
Organisations are being urged to take a more thoughtful, inclusive approach. That means building AI with diverse teams—from legal and HR to developers and data scientists. This includes embedding privacy, ethics and impact assessments at the design phase. Otherwise, we risk building tools that simply replicate existing inequalities.
Where Innovation and Privacy Meet
The big takeaway from the webinar? AI and privacy can coexist—but only if we design systems that are human-centred, transparent, and ethically sound.
The work is far from over, but by keeping the conversation going and putting responsibility at the centre of innovation, we can steer AI in a direction that benefits everyone.
Want to stay ahead?
Subscribe to the POPIA Portal for practical tools, expert insights, and the latest developments—all in one place.