South Africa's Draft AI Policy Just Made Your Choice of Legal Tech a Compliance Decision

The 60-day comment window closes on 10 June 2026.
Here's what the draft means for your practice, your clients, and the AI tools you're already using.

The 48-Second Version

On 10 April 2026, the Department of Communications and Digital Technologies gazetted South Africa's Draft National AI Policy. It's 86 pages long. It proposes seven new oversight bodies. It borrows the EU's risk-based vocabulary but leaves the most important definitions - including what counts as "high-risk" AI - for later.

None of that is why you should care.

You should care because the draft makes explicit what POPIA, the Legal Practice Act, and three recent court decisions have already been signalling: if you use AI in legal practice, you need to be able to explain how it works, where your data goes, who reviewed the output, and who is accountable when something goes wrong. Not eventually. Now.

The firms that can evidence that control will win clients. The firms that can't will face uncomfortable questions from regulators, insurers, and the courts. And the AI tools those firms choose will determine which side of that line they're on.

What the Draft Actually Requires from Lawyers

Let's cut through the policy language and focus on what matters for daily practice.

POPIA Is Now Your AI Governance Framework

The draft doesn't create new privacy rules for AI. It does something more consequential: it confirms that POPIA's existing rules - purpose limitation, data minimisation, security safeguards, and the section 71 protections against automated decision-making - apply directly to every AI tool you use.

That means every prompt you type, every document you upload, every client file you feed into an AI system is a regulated information flow under POPIA. Not a casual productivity input. Not a quick shortcut. A data-processing activity that needs to comply with the eight conditions for lawful processing.

As Werksmans' Ahmore Burger-Smidt has pointed out, POPIA's conditions - purpose limitation under section 13, minimality under section 10, security safeguards under section 19 - were not designed with AI training data in mind. The draft doesn't resolve that tension. It just makes the expectation official.

Human Oversight Isn't Optional

The draft requires predetermined human intervention points for critical automated decisions, plain-language notifications when people are affected by AI systems, and an "attributable responsibility" principle: someone - a named person or entity - must be accountable for every AI-assisted output.

For lawyers, this aligns with duties you already have: supervision, competence, confidentiality, and professional judgment. The difference is that the draft creates a regulatory framework that will eventually audit whether you're meeting those duties when AI is involved.

Deputy Director-General Alfred Mmoto summarised the principle plainly: "we can't use AI as just a black box."

Privilege Is on the Line

This is the issue the draft doesn't address - and the one that should keep litigators awake.

Cliffe Dekker Hofmeyr published what may be the most important professional-practice alert since the draft landed. Their analysis concludes that inputting privileged material into a public-facing AI platform likely constitutes disclosure to a third party - which could destroy privilege entirely.

Webber Wentzel's Kim Rew and Tristan Marot made the parallel case: a practitioner who inputs client information into a consumer AI platform without adequate contractual safeguards risks breaching their duty of confidentiality, regardless of whether privilege is ever formally tested in court.

The practical question is blunt: does your AI tool train on your inputs? If yes, or if you don't know, you have a privilege problem that no amount of policy compliance can fix.

The Risk You Already Carry

The draft is a policy document, not legislation. It creates no direct penalties. But waiting for the final statute misreads where enforcement pressure is actually coming from.

It's already here, from three directions:

  • The courts. In Parker v Forsyth N.O. (2023), the court cautioned that technological efficiency must still be tempered by independent reading - lawyers cannot simply parrot unverified chatbot output. Since then, Mavundla v MEC (January 2025) and Northbound Processing v SA Diamond & Precious Metals Regulator (June 2025) have reinforced the message: AI-hallucinated citations will be sanctioned. The judiciary is not waiting for policy to catch up. Dive into these cases in more detail in our recent analysis.
  • POPIA. The Information Regulator already has the tools to investigate AI-related data processing complaints. Section 71 automated decision-making protections, section 19 security safeguards, and section 72 cross-border transfer rules don't need the draft policy to become actionable. They're law.
  • Your clients. Sophisticated corporate clients are already asking their law firms: what AI are you using, where does our data go, and can you prove human review? The firms that can answer those questions clearly will keep the work. The firms that can't will discover that the biggest compliance risk isn't regulatory - it's commercial.

What the Draft Doesn't Do (Yet)

Honest assessment matters more than alarm. Here's what the draft leaves unresolved:

  • It doesn't define "high-risk." The draft uses the EU AI Act's risk-based vocabulary - unacceptable, high, medium, low - but never defines the thresholds. CDH, Werksmans, and Bowmans have all flagged this gap. The Werksmans analysis notes that the draft "contemplates risk-based classification, drawing some inspiration from the European Union AI Act" - but without defining what falls into each category. Instead, the draft defers classification to sector-specific AI strategies, targeted for 2026/27 and 2027/28.
  • It doesn't carve out legal practice. Unlike the EU AI Act, which explicitly lists AI used by judicial authorities as high-risk, the South African draft does not mention legal practice as a distinct category. The safest assumption is that legal workflows affecting rights, remedies, and strategic decisions will not escape scrutiny simply because the profession isn't named.
  • It doesn't resolve the institutional architecture. Seven new bodies are proposed: a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, a National AI Safety Institute, an AI Insurance Superfund, and an Integrated Monitoring Centre. Werksmans calls this a "recipe for overlap, turf disputes, and diluted accountability." The risk of jurisdictional congestion - where a single AI-related data breach triggers parallel processes across five bodies - is real.
  • It doesn't give you a privilege-safe operating manual. The draft creates audit, transparency, and contestability mechanisms. It does not explain how those mechanisms coexist with legal professional privilege. This is the single most important gap for the profession to address in the comment period.

How South Africa Compares Globally

The most useful framing for practitioners: South Africa is adopting EU rhetoric with UK architecture and NIST operational scaffolding.

  • EU AI Act: Binds directly within the EU, carries penalties of up to 7% of global turnover, and explicitly classifies legal and judicial AI as high-risk. South Africa's draft classifies nothing yet.
  • UK approach: Distributes authority across existing sectoral regulators without a single AI statute. South Africa is doing something similar but proposing seven new bodies on top of existing regulators - trading the EU's legal clarity for what critics warn is fragmentation without adequate resourcing.
  • NIST AI RMF: Explicitly named in the gazette, provides the operational logic: Govern, Map, Measure, Manage. It's voluntary in the US. In South Africa, the draft signals that compliance will eventually become compulsory through sector-specific regulation.

For firms doing cross-border work, the practical implication is straightforward: your international clients will increasingly expect you to meet EU-grade governance standards, whether or not South African law technically requires it. The draft accelerates that expectation.

What You Should Do Before 10 June

The comment deadline is real, and the profession's collective voice is conspicuously absent. As of 23 April 2026, no statement has been issued by the Law Society of South Africa, the Legal Practice Council, the General Council of the Bar, or any of the provincial Bar Councils. That silence is a vacuum - and it means the terms of AI governance for legal practice will be written by corporate-commercial firms acting for corporate clients, unless the broader profession speaks up.

  1. Audit your current AI use. Map every tool, every workflow, every person using AI in your firm. Separate the low-risk administrative uses (scheduling, formatting) from the rights-affecting legal work (research, drafting, analysis). You cannot govern what you haven't mapped.
  2. Test your privilege exposure. For every AI tool your firm uses: does the provider train on your inputs? Where is your data processed and stored? What contractual commitments exist around data isolation? If you don't have clear answers, you have a problem that predates the draft policy.
  3. Adopt a POPIA-aligned AI use policy. This doesn't need to be elaborate. It needs to specify: which tools are approved, what categories of information may and may not be input, who reviews AI-assisted output before it leaves the firm, and how use is logged.
  4. Tighten your client engagement terms. Update your letters of engagement to disclose AI use where appropriate, and to specify the safeguards in place.
  5. Require human review as a structural default. Not as a suggestion. Not as guidance. As a non-negotiable checkpoint before any AI-assisted legal output reaches a client, a court, or a counterparty.
  6. Consider making a submission. The draft is open for comment. Written comments go to aipolicy@dcdt.gov.za by 16h00 on 10 June 2026.

Why the Right AI Tool Is Now a Governance Decision

The draft shifts the AI conversation from "what's fastest" to "what's defensible." Speed still matters. But a tool that's fast and opaque is now a liability. A tool that's fast and auditable is an asset.

This is the design principle behind Squire. We built it for legal professionals who need AI that works the way professional obligations require - not the way consumer chatbots happen to work.

  • Jurisdiction-aware intelligence. Squire is trained on South African law, regulations, and case precedents. When you ask a question, the answer reflects the legal framework that actually governs your matter - not a generic international dataset that you need to verify and localise yourself.
  • Your data stays yours. Client inputs are not used to train our models. Documents, prompts, and queries are isolated by matter. That's not a feature toggle - it's an architectural decision. The privilege risk that CDH and Webber Wentzel have flagged doesn't arise when the platform is designed to prevent third-party disclosure by default.
  • Built for the audit trail the draft demands. Exportable logs, review checkpoints, matter-level permissions, and retention controls - so that when a client, regulator, or court asks how AI shaped a particular output, you have a documented answer.
  • Human review is structural, not optional. Squire is designed around the principle that AI generates, but lawyers decide. Every output is a starting point for professional judgment, not a substitute for it.

The Bottom Line

The draft policy is not final. But the obligations it points to are not new. POPIA, the Legal Practice Act, King IV, and the courts have been building toward this moment for years. The draft simply makes the direction unmistakable.

Treat your AI use today as though it will need to be explained tomorrow - to a client, a regulator, a court, or an insurer. Choose tools that can evidence confidentiality, human oversight, and accountability rather than merely promise efficiency.

That's the standard this draft is pointing toward. It's the standard legal technology should already be meeting.

The comment period closes on 10 June 2026. The profession has weeks, not years, to decide whether legal AI in South Africa is governed on terms lawyers help write - or on terms that arrive pre-assembled.

Disclaimer: This article provides general legal information and commentary. It does not constitute legal advice and should not be relied upon as a substitute for consultation with a qualified attorney licensed to practise in your jurisdiction.

Researched with the assistance of AI and reviewed by Squire's legal and editorial team.

Works Cited

  1. Department of Communications and Digital Technologies, "Draft South Africa National Artificial Intelligence (AI) Policy," Government Gazette No. 54477, General Notice 3880, 10 April 2026.
  2. Government Communication and Information System, "Statement on the Cabinet Meeting of 25 March 2026 and Special Cabinet Meeting of 1 April 2026."
  3. TechCentral, "South Africa's draft AI policy headed to cabinet."
  4. Regulation (EU) 2024/1689 (EU AI Act)
  5. UK Department for Science, Innovation and Technology, "A Pro-Innovation Approach to AI Regulation" (White Paper, March 2023).
  6. National Institute of Standards and Technology, "Artificial Intelligence Risk Management Framework (AI RMF 1.0)," NIST AI 100-1, January 2023.
  7. Parker v Forsyth N.O. 2023 ZAGPJHC (AI-hallucinated citations; judicial warning on unverified chatbot research).
  8. Mavundla v MEC (January 2025) (sanctions for AI-fabricated authorities).
  9. Northbound Processing v SA Diamond & Precious Metals Regulator (30 June 2025) (continued judicial intolerance for AI-generated false citations).
  10. Burger-Smidt, Ahmore (Werksmans Attorneys), "Speak now or forever hold your peace: The draft AI policy has been published and parties have 60 days to comment."
  11. Werksmans Attorneys, "The AI Governance Stack and South Africa's Draft National AI Policy: An Operational Gap in Search of a Framework."
  12. Cliffe Dekker Hofmeyr, "Chatting away your protection - Are you waiving legal privilege when you use AI?" (21 April 2026, Dispute Resolution and Knowledge Management Alert).
  13. Cliffe Dekker Hofmeyr, "Another episode of fabricated citations, real repercussions" (July 2025).
  14. Bowmans, "South Africa: Draft Artificial Intelligence Policy to be gazetted for public comment."
  15. Baker McKenzie, "South Africa: Draft AI Policy Opens for Public Comment" (April 2026).
  16. Adams & Adams, "South Africa's Draft National AI Policy: Building a Framework for Responsible and Inclusive AI Governance."
  17. Michalsons, "Draft South Africa National AI Policy published for comment."
  18. Webber Wentzel, "Artificial Intelligence has POPIA implications."