AI is not built for African users, exposing a governance chasm

- Wits University

Artificial intelligence (AI) was not built with Africans in mind, and borrowed AI governance frameworks being enforced don’t fit the continent’s reality.

This was one of the main insights raised at the second African Cyber Law Conference at Wits, which brought together legal scholars, students and those working in the digital industry.

AI is rapidly shaping law, rights and digital life across the continent. While regulations abound, a structural misalignment between law, technology and society is causing harm.

“One of the ways we can see this manifesting in the real world is that Africans are not represented in the design of the technology, and it raises questions about who is seen, who is excluded and who is protected in the digital system,” says Dr Nomalanga Mashinini, senior lecturer at the Wits School of Law, and organiser of the conference.

Harmful content in African languages often goes undetected because AI systems that mediate everything from content moderation to financial access are largely trained on Global North datasets. “This is systemic exclusion. African languages are underrepresented, cultural nuance is lost, and entire populations are misclassified or rendered invisible,” says Mashinini.

There are attempts to address this. Sipho Mtombeni from Google pointed to growing efforts to build African language datasets and more representative systems.

However, inclusion at the level of data does not shift the underlying power dynamics. Data generated in Africa is routinely extracted, processed and monetised, but Africans bear the brunt of the risk without sharing the returns. This dynamic, described as ‘digital extractivism’, is shaping how value and accountability are distributed in AI systems. “We are seeing how inequalities, which are persistent in other aspects of society, are now being reinforced in the digital sphere,” says Mashinini.

Africa is not under-regulated, but uncoordinated

A second key insight from the conference challenged a persistent assumption: that Africa lacks the legal tools to govern AI. In reality, many of the necessary frameworks already exist, including those in data protection, consumer protection, cybercrime legislation, administrative law, and constitutional rights.

“The governance gap in African AI is not primarily a legislative one but a gap in institutional coordination and enforcement frameworks,” Mashinini argues.

Indeed, AI systems cut across sectors and jurisdictions, yet regulatory bodies remain siloed, with limited mechanisms for collaboration. The result is a mismatch between how technology functions and how law is structured to respond.

Jonathan Klaaren, Professor in the Wits School of Law, explained that effective governance requires alignment among legal frameworks, technical systems and institutional actors, but that no current approaches are designed to achieve this.

The consequences of this fragmentation are, however, visible. Governments are increasingly adopting automated decision-making systems, raising questions about transparency and accountability. Digital platforms are shaping public discourse in ways that challenge existing regulatory models. Cybercrime, misinformation and synthetic media are evolving faster than enforcement capacity. At the same time, the legal profession itself is being reshaped.

In a panel moderated by Associate Professor Michele van Eck at the Wits School of Law, speakers, including attorney Azhar Aziz-Ismail, stressed that AI is no longer a future concern. It is already embedded in legal practice, requiring new forms of competence, verification and accountability.

“The use of AI by both practitioners and clients is outpacing regulatory guidance, placing pressure on professional standards and ethical frameworks,” noted van Eck.

We don’t need entirely new laws, but a relook at design

Rather than calling for entirely new laws, the conference pointed to more immediate, actionable steps.

First, governance must move closer to the point of design. Legal and ethical principles, such as accountability, transparency and rights protection, must be embedded within AI systems themselves, not applied after harm occurs.

Second, existing legal frameworks must be activated in a coordinated way. This requires stronger alignment between regulators, clearer enforcement pathways, and institutional structures capable of responding to technologies that do not fit within traditional boundaries.

Third, governance must be contextually grounded. Frameworks need to reflect African realities, including linguistic diversity, uneven digital access and socio-economic inequality.

Practical tools discussed at the conference included algorithmic impact assessments, explainability standards aligned with legal thresholds, and independent oversight bodies that bridge technical and legal expertise.

The practical path forward for policymakers and regulators

The event produced a series of policy briefs addressing issues such as algorithmic fairness, digital language resources, AI-driven surveillance and cyber warfare governance.

“The quality of the student and early to mid-career scholarship presented was exceptional,” Mashinini notes. “These are researchers who are going to define African cyber law for the next generation.”

The challenge now is continuity. “My hope is that the papers become published works, that the policy briefs reach the desks of the people who can act on them, and that the conversations continue in the months ahead,” she says.

The second African Cyber Law Conference was held at Wits University from 24 to 25 March, with the theme: “Resilient and Responsible Design: Governing AI, expression and digital media”.