ABUJA, Nigeria – Nigeria’s National Information Technology Development Agency (NITDA) warns that new vulnerabilities discovered in OpenAI’s GPT-4.0 and GPT-5 models could expose users to serious data-leakage and manipulation risks.
In a cybersecurity advisory released via its official X account, NITDA’s Computer Emergency Readiness and Response Team (CERRT.NG) says seven critical flaws allow attackers to exploit ChatGPT through indirect prompt injection.
According to the agency, cybercriminals can embed hidden instructions in webpages, comments or crafted URLs, causing ChatGPT to execute unintended commands during routine browsing or summarisation — sometimes without users clicking anything.
“These vulnerabilities enable attackers to bypass safety filters and exploit markdown rendering weaknesses,” NITDA says. “In extreme cases, attackers can poison ChatGPT’s memory, so malicious instructions persist across future interactions.”
The agency’s Director of Corporate Affairs and External Relations, Mrs Hadiza Umar, confirms that while OpenAI has introduced partial fixes, large language models still struggle to reliably distinguish user intent from concealed malicious input.
NITDA warns that the flaws could result in unauthorised actions, data leakage, manipulated outputs and long-term behavioural influence, affecting both individuals and organisations.
As precautionary measures, CERRT urges users to disable browsing and memory features when unnecessary, limit interactions with untrusted websites and ensure GPT models are regularly updated.
In a related alert, the agency also warns of a new cyberattack technique exploiting older vulnerabilities in Cisco Secure Firewall ASA and FTD systems, which can force devices to reboot and cause network outages.
