Securing vibe coding: The hidden risks behind AI-generated code

- Rennie Naidoo

South Africa’s tech leaders must rethink their growing dependence on AI-assisted software development

In software engineering circles across South Africa, artificial intelligence has become both a trusted assistant and a capable coder. Tools like GitHub Copilot, ChatGPT, and Claude are reshaping how applications are built, transforming natural language prompts into functional code in seconds. Under relentless delivery pressure and a persistent shortage of senior developers, these AI companions seem like the perfect solution.

But every technological leap carries hidden downsides. Beneath the surface of this transformation lies a mounting concern that speed is seducing us into complacency, and that convenience is coming at the cost of security, reliability and deep understanding.

The software engineering community has coined a new term for this phenomenon: vibe coding. It describes a growing tendency among developers to accept AI-generated code with minimal scrutiny, trusting that if it compiles and runs, it must be correct. Like flying on autopilot without checking the instruments, it works brilliantly until it doesn’t.

The risk isn’t hypothetical. Research from Veracode and multiple academic studies show that nearly half of AI-generated code contains known vulnerabilities. These are not esoteric or edge-case flaws. They are foundational mistakes: injection vulnerabilities, broken authentication, poor input validation and insecure dependencies.

At its core, the problem lies in how these AI models are trained. By ingesting public repositories, the models absorb patterns, both good and bad, secure and dangerous. They imitate what they have seen. And much of what they have seen is legacy code written under very different conditions and assumptions.

A dangerous illusion of competence

What AI misses is precisely what human developers are trained to watch for: context. Ask a model to generate a login function, and you may receive something syntactically perfect, yet riddled with flaws. From hardcoded credentials to unvalidated inputs, the omissions are subtle but consequential. A session cookie without the HttpOnly or Secure flag is an open invitation for hijacking. A dynamic SQL query built from unchecked user input is a textbook example of an injection vulnerability.
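The injection flaw described above can be sketched in a few lines of Python. This is a minimal illustration using the standard library’s sqlite3 module; the table, function names, and attack string are illustrative, not drawn from any real system.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is concatenated straight into the SQL
    # string, so input like "' OR '1'='1" rewrites the query itself.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterised query: the driver treats the input as data, never
    # as SQL, which defeats the injection.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # [(1,)] -- every row leaks
print(find_user_safe(conn, payload))    # [] -- treated as a literal name
```

Both versions compile and run without complaint, which is exactly why vibe coding misses the difference: only the parameterised version survives hostile input.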

AI can often gloss over authentication logic, skip sanitisation routines and recommend outdated or vulnerable libraries. Sometimes it even suggests dependencies that do not exist. This opens the door to typosquatting attacks, where malicious actors preemptively register packages with similar names.

And because AI lacks an understanding of your specific business logic, it can build applications that technically work but violate domain rules, regulatory requirements, or customer trust. In South Africa’s heavily regulated industries such as finance, healthcare, and public services, this is not a small oversight. It is a risk to data sovereignty, compliance and national resilience.

It is tempting to treat AI like an oracle of modernity. Yet, ironically, it often resurrects the past. Without proper guardrails, models trained on historical code can reintroduce deprecated SSL protocols, weak cryptographic practices, and insecure memory operations. These are habits the software industry has spent decades unlearning.
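The contrast between those resurrected habits and modern defaults is easy to see in Python’s ssl module. The “legacy” context below is a sketch of the insecure pattern to watch for in generated code, not a recommendation.

```python
import ssl

# Insecure pattern that models trained on old code still reproduce:
# certificate and hostname verification switched off entirely.
legacy = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
legacy.check_hostname = False
legacy.verify_mode = ssl.CERT_NONE  # accepts any certificate at all

# Modern default: verification on, with the library's hardened
# protocol settings applied for you.
modern = ssl.create_default_context()
print(modern.verify_mode == ssl.CERT_REQUIRED)  # True
print(modern.check_hostname)                    # True
```

A reviewer scanning generated code for `CERT_NONE` or `check_hostname = False` catches in seconds what the model silently reintroduced.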

In one public test, an AI-generated C function was found to be vulnerable to classic buffer overflows. In another, Python’s notoriously unsafe pickle module was used for multiplayer data exchange. Even simple frontend examples, such as file upload handlers, have demonstrated glaring failures like unchecked filenames that can overwrite system files.
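The unchecked-filename failure can be sketched in a few lines of Python. The upload directory and function names here are hypothetical, chosen only to show how a client-supplied filename escapes its sandbox and how a basic check contains it.

```python
import os

UPLOAD_DIR = "/var/app/uploads"  # hypothetical upload directory

def save_path_unsafe(filename):
    # Vulnerable: "../" sequences in a client-supplied filename walk
    # out of the upload directory and can target system files.
    return os.path.normpath(os.path.join(UPLOAD_DIR, filename))

def save_path_safe(filename):
    # Keep only the final path component, then confirm the resolved
    # path still sits inside the upload directory.
    path = os.path.normpath(
        os.path.join(UPLOAD_DIR, os.path.basename(filename))
    )
    if not path.startswith(UPLOAD_DIR + os.sep):
        raise ValueError("invalid filename")
    return path

print(save_path_unsafe("../../etc/crontab"))  # /var/etc/crontab -- escaped!
print(save_path_safe("../../etc/crontab"))    # /var/app/uploads/crontab
```

The unsafe version is precisely the kind of code that “works” in every happy-path test an AI assistant or a rushed developer is likely to run.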

These are not fringe cases. They are warnings.

Why South African IT leaders must pay attention

The consequences of uncritical AI adoption may be particularly acute in the South African context. Public-facing digital services, especially in banking, insurance, and healthcare, often hold vast troves of sensitive data. A single injection flaw could expose an entire backend. Internally, DevOps scripts written by AI might misconfigure firewalls or leave cloud buckets wide open. Even educational and civic platforms are not immune, especially where code is deployed rapidly under pressure and oversight is minimal.

In a country where digital infrastructure is essential to economic inclusion and service delivery, such vulnerabilities are more than technical glitches. They are threats to trust and progress.

The answer is not abstinence but discipline. AI can remain part of the developer toolkit, but it must be treated as a junior contributor, not a silent partner. Developers must craft smarter prompts, asking explicitly for input validation, secure defaults or audit-ready code. Prompting the model to explain its reasoning and then interrogating that explanation is another step toward safety.

Just as importantly, human code reviews remain non-negotiable. AI-written functions should undergo the same scrutiny as those crafted by hand. Security-aware developers must be part of this process, equipped with static analysis tools, dynamic testing tools, and dependency scanners. When AI suggests a library, someone must check its provenance, licensing and patch history.

Organisations should set clear boundaries. AI should be off-limits for high-risk components such as authentication modules, payment systems, or infrastructure scripts. Governance frameworks should define when, where and how AI is used, ensuring that accountability never disappears into the fog of automation.

Education must catch up. Developers, especially juniors, need to be trained to view AI critically, not as an infallible source of code, but as a fallible synthesiser of historical patterns. The more we teach developers to ask “why” and “what if”, rather than simply “what next”, the more resilient our systems will become.

The productivity gains of AI are real. But so are the risks. In a world where the speed of code generation is no longer a bottleneck, it is judgement, context and ethical discernment that must become the new differentiators.

Securing AI-generated code is not just a technical challenge. It is a cultural one. We must choose thoughtful engineering over impulsive automation. If we get it right, the tools will empower. If we get it wrong, the consequences will write themselves in code we no longer fully understand.

Rennie Naidoo is an IS professor and research director at the Wits School of Business Sciences. An established NRF-rated researcher, his focus areas include data science, sustainable IT, artificial intelligence and cyber security.

This article was first published on CIO-SA.
