An Artificial Intelligence (AI) expert has cautioned that South Africa’s withdrawal of its draft AI policy should be seen less as a damaging blunder and more as a pivotal lesson in governance, after Minister of Communications and Digital Technologies Solly Malatsi admitted the document contained fictitious sources generated by AI.
Malatsi withdrew the country’s Draft National AI Policy after it was found to contain “various fictitious sources in its reference list.”
‘Deserve better’
Malatsi said South Africans “deserve better.”
“The Department of Communications and Digital Technologies did not deliver on the standard that is acceptable for an institution entrusted with the role to lead South Africa’s digital policy environment.
“The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened,” Malatsi said.
‘Blunder’
AI expert and associate professor at the Wits University School of Statistics and Actuarial Science, Rendani Mbuvha, said the “blunder” underscores the irony of a human‑centred framework being undermined by AI hallucination, and highlights the urgent need to train policymakers to understand both the promise and the shortcomings of the technology.
“I think these sorts of blunders are going to be the mainstay of the adoption of AI. Because I think in terms of what the blunder actually signals is that there’s increasing adoption and use of AI, including in policymaking. And perhaps the irony is that the human is supposed to be at the centre of policy adoption, including in the draft policy itself.
“It seems to appear that when we drafted the policy, we let the AI hallucinate, but I think it’s something that we can easily remedy, because again, as you see in the AI era, you’re seeing a situation where the technology is far ahead of its regulation. So again, it’s one of those cases where we’re left to catch up. But I think on the ground, South Africa is making great strides in the adoption of AI,” Mbuvha told eNCA.
New policy
Mbuvha said Malatsi now has an opportunity to develop a policy that’s coherent across different inputs.
“I would probably implore him to also look at different forms of advice across multiple sectors. Academia is there for him, I think, also, you know, also looking at what other jurisdictions are doing, I think in Africa, there are about seven countries that have adopted an AI policy and an AI strategy.
“And I think the AI policy is creating a certainty around, you know, how different players interact with you in the world of AI. And I think it’ll be important to sort of say, what are South Africa’s aspects in terms of data, infrastructure and others,” Mbuvha said.
‘Not a technical issue’
Malatsi said the failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy.
“In fact, this unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It’s a lesson we take with humility.
“I want to reassure the country that we are treating this matter with the gravity it deserves. There will be consequence management for those responsible for drafting and quality assurance,” Malatsi said.
Resetting the agenda
While the withdrawal of South Africa’s draft AI policy exposed the risks of over‑reliance on unverified machine outputs, experts argue it also offers a valuable opportunity to reset the agenda – one that blends accountability with innovation, draws on local philosophies such as Ubuntu, and positions the country to craft a coherent, human‑centred framework that can both regulate and harness AI for inclusive growth.
Draft AI policy
Malatsi’s Draft National AI Policy, published on 10 April 2026, proposed bold measures, including the creation of an AI Insurance Superfund, signalling that the era of “move fast and break things” is over.
Modelled after the embattled Road Accident Fund, this “Superfund” was the centrepiece of a policy that refused to play it safe.
By suggesting a state-backed financial safety net for victims of algorithmic bias and AI errors, South Africa admitted what Silicon Valley often ignores: AI will cause harm, and someone has to pay for it.
The 86‑page document also proposed an AI Ombudsperson and Ethics Board to enforce accountability, marking a significant shift in how the government intends to regulate emerging technologies.