
Global Ethics Day colloquium exposes vulnerabilities in research

- Wits University

Not just research subjects, but research itself, is vulnerable to the onslaught of technology.

The impact of AI on the crafting and outcomes of academic and medical research dominated discussions at the 5th Annual Wits SARIMA Carnegie Global Ethics Day Colloquium 2025 held online on 24 July 2025.

Themed The impact of multidimensional vulnerability on the research and innovation landscape, the colloquium interrogated how research itself is open to AI-generated inaccuracies, and how certain societal demographics, professions such as journalism, and even entire countries experience its impact.

Professor Lynn Morris, Wits Deputy Vice-Chancellor: Research and Innovation, told delegates in her welcome address that research and innovation are human endeavours based on the principles of trust, respect, and fairness.

“A core ethical tenet of research is that it must always be about people,” she said.

This people-first theme linked each of the presentations, which examined who and what is vulnerable in the context of research – animals, individuals, and communities, as well as the research process itself.

In support of society

Looking at vulnerability through this multidimensional lens, the colloquium highlighted the imperative that research must serve all of society, without compromise. It must safeguard rights, and the dignity and welfare of research subjects.

Professor Mary Davis, Academic Integrity Lead at Oxford Brookes University, opened the debate on AI, speaking about its capacity both to narrow and to widen the digital divide, given the skewed influence and dominance of the Global North over AI technology.

At the same time, classifying previously assistive tools as AI – such as spell-checking apps and voice-to-text programmes – can further disadvantage non-native English speakers and people with disabilities, leaving them vulnerable to poor academic outcomes.

“Inequity in AI detection tools for teachers can leave students who cannot afford to pay for sophisticated AI packages vulnerable in the same way,” she said, while Jason Gibson of Turnitin told the colloquium, “The workload of instructors has increased exponentially as they try to deploy detection tools to root out the misuse of AI in students’ work.”

Lives of the rich and powerful

Aroop Chatterjee from the Southern Centre for Inequality Studies (SCIS) at Wits made a case for more research into the wealthy and powerful, who enjoy institutional isolation from much of the current research into poverty and wealth.

While the poor are studied in granular detail, the elite evade this kind of scrutiny. As a result, some social studies fail to meet a basic requirement of ethical research practice: the fair and equal distribution of risks and benefits.

“It is researchers’ duty to ensure that social studies are balanced, but many are incomplete if they do not include the lives of the elite,” said Chatterjee. To guard against this abuse of privacy privilege, and in the interests of symmetry, he called for ethical pluralism in social studies and for academic institutions to support research into elites.

AI and its misuse is also eroding our trust in journalism, claims Dr Nechama Brodie from the Wits Centre for Journalism. “One of the key tenets of journalism is accountability, and AI cannot be held accountable,” she lamented, saying that ‘modern’ journalism serves the commercial interests of media platforms, when it should serve the public interest. This leaves both the role of journalism and the credibility of our news vulnerable.

The case for AI-assisted vs AI-generated

Niall Reddy, also from SCIS, spoke about a broader set of social inequalities in the implementation of, innovation in, and investment in AI technology.

“From a labour perspective, using vast armies of people in the Global South to train large language models that benefit the Global North, in data centres that have a massive impact on the environment, is inherently unfair,” he said.

The extraction of minerals required to power AI and other technologies raises social justice issues, and the massive amount of data that AI ingests raises questions around intellectual property, authorship, and ownership.

This was also raised by Sumaya Laher, Wits Professor of Psychology and editor of the SA Journal of Psychology. Aside from the issue of writers submitting AI-generated work, reviewers feeding books and articles into AI tools leaves the original authors vulnerable to unauthorised use of their work.

Laher’s discussion on what comprises permissible use of AI echoed Davis’s questions around AI-assisted versus AI-generated. Given the role that AI already plays in research, there was consensus that guidelines on the ethical use of AI should be in place in academic institutions.

Ethics in medical trial funding

Vulnerabilities in medical research due to funding shortfalls or cancellation were exposed by Howard Snoyman of Werksmans Attorneys and Neil Kirby, Head of Legal and Ethics at Discovery.

“Accepting funding that is inadequate to support the anticipated duration of a study is irresponsible,” argued Snoyman. “Money running out and leaving patients in a medical limbo can have serious health consequences, and researchers have an ethical duty to plan for when funding runs out.”

In light of Donald Trump’s recent withdrawal of funding from many global organisations, Kirby spoke about the precarity of donees when politics comes into play and the need for contractual protection.

Wayne Howard and Anastasia Trataris-Rebisz from the National Institute for Communicable Diseases (NICD) discussed how a failure in medical research to protect against biohazards can potentially have a global impact.

Allegations that the 足球竞彩app排名 pandemic was the result of a lab leak in China illustrate the potential dangers of inadequate bio-risk and biosafety measures in labs at risk from theft, sabotage, insider threats, and human handling error. The NICD offers training and advice on matters relating to bio-risk and biosafety management to reduce vulnerabilities of those inside and outside of lab environments.

Protecting medical trial patients

Emeritus Professor Paul Ruff, Chair of Wits’ Human Research Ethics Committee: Medical, along with Azeem Walele from 2 Military Hospital and Zinhle Makathini from the SA Medical Research Council (SAMRC), put forward the case for the protection of vulnerable groups such as pregnant and lactating women, minors, and HIV+ patients in clinical research trials, in a presentation titled Balancing Risk and Inclusion.

Drug trials, argued Ruff, should be scientifically, statistically, and clinically appropriate to the needs of the study group. This means that trials of drugs targeting pregnant and lactating mothers should aim for better maternal-foetal and maternal-child outcomes.

Studies in minors that require parental consent pose a challenge in South Africa, where many parents are absent; proceeding without consent can invalidate compensation and insurance agreements, leaving this study group even more vulnerable.

Denying healthy HIV+ patients the right to participate in certain trials can perpetuate stigma and inequity, and Ruff thus recommends the inclusion of HIV+ patients in certain trials, where appropriate.

As a leader in the field of applied ethics, Wits University’s ‘for good’ ethos requires going beyond mere compliance to capacity-building. The University is committed to embedding ethics into every aspect of its work, and this imperative was echoed by all the speakers in this year’s event.
