The 60-day comment window closes on 10 June 2026.
Here's what the draft means for your practice, your clients, and the AI tools you're already using.
On 10 April 2026, the Department of Communications and Digital Technologies gazetted South Africa's Draft National AI Policy. It's 86 pages long. It proposes seven new oversight bodies. It borrows the EU's risk-based vocabulary but leaves the most important definitions - including what counts as "high-risk" AI - for later.
None of that is why you should care.
You should care because the draft makes explicit what POPIA, the Legal Practice Act, and three recent court decisions have already been signalling: if you use AI in legal practice, you need to be able to explain how it works, where your data goes, who reviewed the output, and who is accountable when something goes wrong. Not eventually. Now.
The firms that can evidence that control will win clients. The firms that can't will face uncomfortable questions from regulators, insurers, and the courts. And the AI tools those firms choose will determine which side of that line they're on.
Let's cut through the policy language and focus on what matters for daily practice.
The draft doesn't create new privacy rules for AI. It does something more consequential: it confirms that POPIA's existing rules - purpose limitation, data minimisation, security safeguards, and the section 71 protections against automated decision-making - apply directly to every AI tool you use.
That means every prompt you type, every document you upload, every client file you feed into an AI system is a regulated information flow under POPIA. Not a casual productivity input. Not a quick shortcut. A data-processing activity that needs to comply with the eight conditions for lawful processing.
As Werksmans' Ahmore Burger-Smidt has pointed out, POPIA's conditions - purpose limitation under section 13, minimality under section 10, security safeguards under section 19 - were not designed with AI training data in mind. The draft doesn't resolve that tension. It just makes the expectation official.
The draft requires predetermined human intervention points for critical automated decisions, plain-language notifications when people are affected by AI systems, and an "attributable responsibility" principle: someone - a named person or entity - must be accountable for every AI-assisted output.
For lawyers, this aligns with duties you already have: supervision, competence, confidentiality, and professional judgment. The difference is that the draft creates a regulatory framework that will eventually audit whether you're meeting those duties when AI is involved.
Deputy Director-General Alfred Mmoto summarised the principle plainly: "we can't use AI as just a black box."
Privilege is the issue the draft doesn't address - and the one that should keep litigators awake.
Cliffe Dekker Hofmeyr published what may be the most important professional-practice alert since the draft landed. Their analysis concludes that inputting privileged material into a public-facing AI platform likely constitutes disclosure to a third party - which could destroy privilege entirely.
Webber Wentzel's Kim Rew and Tristan Marot made the parallel case: a practitioner who inputs client information into a consumer AI platform without adequate contractual safeguards risks breaching their duty of confidentiality, regardless of whether privilege is ever formally tested in court.
The practical question is blunt: does your AI tool train on your inputs? If yes, or if you don't know, you have a privilege problem that no amount of policy compliance can fix.
The draft is a policy document, not legislation. It creates no direct penalties. But waiting for the final statute misreads where enforcement pressure is actually coming from.
It's already here, from the three directions this article has already named: the courts, which have signalled their expectations in recent decisions; the Information Regulator, which enforces POPIA against every data-processing activity, AI-assisted or not; and the insurers who price professional risk.
Honest assessment matters more than alarm. The most significant gap the draft leaves unresolved is the one noted at the outset: what counts as "high-risk" AI - the definition on which the entire risk-based framework depends.
The most useful framing for practitioners: South Africa is adopting EU rhetoric (risk-based classification language), UK architecture (principles-led, sector-driven oversight rather than a single omnibus statute), and NIST operational scaffolding (risk-management processes echoing the NIST AI Risk Management Framework).
For firms doing cross-border work, the practical implication is straightforward: your international clients will increasingly expect you to meet EU-grade governance standards, whether or not South African law technically requires it. The draft accelerates that expectation.
The comment deadline is real, and the profession's collective voice is conspicuously absent. As of 23 April 2026, no statement has been issued by the Law Society of South Africa, the Legal Practice Council, the General Council of the Bar, or any of the provincial Bar Councils. That silence is a vacuum - and it means the terms of AI governance for legal practice will be written by corporate-commercial firms acting for corporate clients, unless the broader profession speaks up.
The draft shifts the AI conversation from "what's fastest" to "what's defensible." Speed still matters. But a tool that's fast and opaque is now a liability. A tool that's fast and auditable is an asset.
This is the design principle behind Squire. We built it for legal professionals who need AI that works the way professional obligations require - not the way consumer chatbots happen to work.
The draft policy is not final. But the obligations it points to are not new. POPIA, the Legal Practice Act, King IV, and the courts have been building toward this moment for years. The draft simply makes the direction unmistakable.
Treat your AI use today as though it will need to be explained tomorrow - to a client, a regulator, a court, or an insurer. Choose tools that can evidence confidentiality, human oversight, and accountability rather than merely promise efficiency.
That's the standard this draft is pointing toward. It's the standard legal technology should already be meeting.
The comment period closes on 10 June 2026. The profession has weeks, not years, to decide whether legal AI in South Africa is governed on terms lawyers help write - or on terms that arrive pre-assembled.
Disclaimer: This article provides general legal information and commentary. It does not constitute legal advice and should not be relied upon as a substitute for consultation with a qualified attorney licensed to practise in your jurisdiction.
Researched with the assistance of AI and reviewed by Squire's legal and editorial team.