
Where we MIND humans as much as code

- Tamsin Mackay

In a world dictated by data, scale and speed, the Wits MIND Institute prioritises the human being.

The Machine Intelligence and Neural Discovery (MIND) Institute at Wits University is reclaiming space for humans, not as users or productivity metrics but as whole, complex beings. The Institute’s researchers come from the fields of philosophy, neuroscience, ethics, media studies, architecture and engineering and they bring nuance and story to the machinery of artificial intelligence (AI).

Where global giants speak of alignment and efficiency, members of the MIND Institute speak of trust, emotion, ethics, cosmology and care. The interdisciplinary community is focused on ensuring that people are not forgotten in the rush to innovate.

“Technology is all human artefact so I’m not fully on board with the idea that technology has gotten away from humans,” says Martin Bekker, Lecturer in the School of Electrical and Information Engineering at Wits. “However, we are asking if the applications of AI are good. Are they making us better, kinder and more effective? Or are they just new toys with unexpected downsides?”


Ethical archaeology

Bekker’s dual work in AI ethics and protest prediction shows both the promise and pitfalls of AI. He sees his work at the MIND Institute as a form of ethical archaeology which uncovers and questions the hidden assumptions that shape how AI is built and used. There is a need to refocus conversations, he says, towards what he calls “value elicitation” – discovering what machines prioritise and asking if these reflect human values.

This theme of quiet, persistent interrogation is reflected in how Dr Mary Carman, a Senior Lecturer in Philosophy at Wits, perceives the value of the MIND Institute – which is in creating space for disciplines to challenge each other meaningfully.

“It’s quite difficult to have other disciplines take each other seriously, especially for harder sciences to take an ethicist seriously,” she says. “However, the MIND Institute creates a forum where we are all equal contributors.”

This matters because these so-called ‘harder’ disciplines are taking the front seat in technology and innovation. The tension is reflected in how engineers might explore solutions for robotic care while humanities scholars pose the uncomfortable and important question: “Why are we trying to solve this problem with a robot?”

Ethical progress is not only a matter of coding better decisions into AI; it is also about choosing whether a specific solution is appropriate at all. After all, looking after frail and sick people is challenging, but if humanity chooses the robot, is it also choosing a less ethical path?

Alternative cosmologies

For Professor Iginio Gagliardone from Media Studies, who has worked extensively in media technology and decolonial thought, the project of reclaiming human meaning in technology begins with widening the lens. He suggests that many of today’s dominant narratives about intelligence and ethics, especially in AI, are bound to Euro-American histories and assumptions. At the MIND Institute, he is exploring how African cosmologies could offer alternatives.

“Traditional, pre-colonial African cosmologies accommodate both the human and the non-human and may offer more meaningful ways of thinking about how we coexist with intelligent systems,” he says.

Gagliardone was recently appointed as the first SARChI SA-UK Bilateral Chair in the Digital Humanities.

There is an urgency in the resistance to the most powerful voices in technology and the need to ensure that the rest of the world has a say in shaping what comes next.

The MIND Institute is not a laboratory or a think tank. Instead, it brings together vastly different disciplines and allows for friction to do the work. “Big disruption is actually going to be in the impact that these conversations have on our choices and what we do with our research,” says Carman.

In a culture wired for speed and scale, the MIND Institute’s approach is measured, dialogic and human. It does not offer shortcuts or certainty but it is a space that allows people to think, feel and imagine differently and ultimately to centre people at the heart of technology.


About the Wits MIND Institute

The Machine Intelligence and Neural Discovery Institute at Wits University is an African-based interdisciplinary AI research hub that pushes the frontiers of the scientific understanding of machine, human and animal intelligence.

Led by Professor Benjamin Rosman, it focuses on fundamental AI research that promotes breakthrough scientific discoveries and aims to grow a much-needed critical mass of AI expertise on the continent. Through robust interdisciplinary collaborations, the MIND Institute partners with industry and others to develop cutting-edge technologies tailored to Africa’s unique challenges. It also addresses how AI interfaces with society from an ethical and policy perspective, shaping governance and ensuring that AI development is safe, inclusive and beneficial to all.

Here are five MIND disruptions its first cohort of Fellows anticipate:

  1. Decolonising intelligence through African cosmologies

Gagliardone asks: “What if the future of AI lies behind us and in the philosophies that we forgot to remember?” He believes that one of the most important ideas emerging from MIND is that African cosmologies may offer a deeper, more human framework for understanding AI than the dominant models created in Silicon Valley or drawn from scripture.

Where Abrahamic traditions struggle to make space for non-human sentience, African pre-colonial systems of thought embrace the interconnectedness of all beings – animate, inanimate, human or machine. Within these ontologies, AI isn’t a threat or a tool but something that exists ‘in relation to’ us, something to be lived with rather than dominated by.

“Use the past to find frameworks that let us coexist with machines instead of fearing or worshipping them. African cosmologies could provide a better framework that accommodates both the human and the non-human. At a time when many of the people building this technology have dystopic, unhuman visions, we need alternatives,” he says.

  2. Challenging the fixation on technology as a solution

Sometimes one of the most powerful ways to disrupt the status quo is to ask a question: “Why are we solving this problem with technology?”

Carman asks what motivates the code in the first place. Take the example of care robots – machines designed to alleviate loneliness or to stand in for humans. Carman looks beyond the rise of these tools and questions why more people are lonely in the first place, and why humans are so quick to outsource care.

“These robots might help, but we need to ask if they are making us less ethical when we hand over to them the work that makes us deeply human,” she questions.

  3. Using ethics as a mirror

Ethics, says Bekker, is too often seen as the seatbelt that stops innovation from crashing, but it can also be a mirror – a way to reflect on what is happening before it is too late to turn back. He is building ways to ‘elicit values’ from AI models to understand them more deeply. By feeding large language models morally complex scenarios and observing their choices, he asks what the AI really values.

“Is it life? Utility? Fairness? Age? Autonomy? This approach doesn’t scold the machine, it focuses on making the AI’s reasoning visible so that we can interrogate it and also our own assumptions,” says Bekker.

The goal is to find clarity in a world where humans have also become untethered from shared moral anchors.

  4. Leveraging interdisciplinary doubt

For Carman, the role of different disciplines in listening to one another is invaluable. Intellectual discomfort means troubling assumptions and asking questions that challenge the prestige of technology.

“Sometimes, the disruption you’re working on is a wonderful project, but there is value in someone questioning whether you should be doing it in the first place and in their having the courage to say it,” says Carman.

“In a space increasingly ruled by velocity, this is a slow, human pause that has the potential to change research direction and remind its makers what really matters.”

  5. Predicting protest with compassion

Bekker’s second disruption sits between heavy data and deeply human concepts. Drawing on 17 years of South African police records, he built an AI model to predict the frequency of public protest, with fascinating results: patterns emerged that could forecast unrest years in advance.

For Bekker, however, it is less a victory for surveillance and more of a wake-up call: “Once you can predict protest, you have to ask what we are measuring and more importantly what we are doing with this knowledge.”

Read more at wits.ai.

  • Tamsin Mackay is a freelance writer.
  • This article first appeared in Curiosity, a research magazine produced by Wits Communications and the Research Office.
  • Read more in the 19th issue, themed #Disruption, which explores the crises, tech, research, and people shaking up our world in 2025.