
NUST Bans ChatGPT, Citing Cybersecurity Threats

The National University of Sciences and Technology (NUST) has banned the use of ChatGPT, an AI-powered chatbot developed by OpenAI that has gained massive popularity since its launch. Although AI promises to augment the work of cyber threat hunters and defenders, it also carries critical risks, including realistic phishing emails, the spread of misinformation, and malware development.

Potential Cyber Threats Posed by ChatGPT

The university’s IT department cited several concerns about ChatGPT’s potential for malicious use, including the following cybersecurity risks:

  • Generating Realistic Phishing Emails: ChatGPT can generate realistic phishing emails that are difficult to distinguish from legitimate ones. These emails can trick users into revealing sensitive information, such as passwords or credit card numbers.
  • Creating Malware: ChatGPT can create malware that is difficult to detect and remove. This malware can steal data, install ransomware, or take control of computers.
  • Spreading Misinformation and Disinformation: ChatGPT can be used to spread misinformation and disinformation. This misinformation can be used to manipulate public opinion or damage the reputation of individuals or organizations.

Preventive Measures Issued By NUST

The ban on ChatGPT is the latest in a series of measures NUST has taken to improve its cybersecurity posture. The university has also issued an advisory outlining steps to mitigate the risks, such as:

  • Training employees on the cybersecurity risks of ChatGPT.
  • Using security tools to scan for malicious content generated by ChatGPT (see the sketch after this list).
  • Prohibiting the use of ChatGPT on official phones.
  • Using separate servers and routing for offline LAN and online networks.
  • Implementing policies and procedures to prevent the use of ChatGPT for malicious purposes.
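
To make the content-scanning advice above more concrete, the sketch below shows one way a security team might pre-screen inbound email text with a simple heuristic filter. It is purely illustrative: the phrase list, URL pattern, scoring weights, and threshold are assumptions made for this example, not part of NUST's advisory, and a production defense would rely on dedicated email-security tooling and maintained threat-intelligence feeds rather than a toy heuristic.

```python
import re

# Hypothetical indicators chosen for illustration only; real deployments would use
# maintained threat-intelligence feeds and ML-based classifiers, not this toy list.
URGENCY_PHRASES = [
    "verify your account",
    "password will expire",
    "immediate action required",
    "confirm your credentials",
]

# Crude pattern for links that point at raw IP addresses or unusual TLDs.
SUSPICIOUS_URL = re.compile(
    r"https?://[^\s]*(?:\.zip|\.xyz|\d{1,3}(?:\.\d{1,3}){3})", re.IGNORECASE
)


def score_email(body: str) -> int:
    """Return a crude risk score: +1 per urgency phrase, +2 per suspicious link."""
    text = body.lower()
    score = sum(1 for phrase in URGENCY_PHRASES if phrase in text)
    score += 2 * len(SUSPICIOUS_URL.findall(body))
    return score


def is_suspicious(body: str, threshold: int = 2) -> bool:
    """Flag the message for manual review when the heuristic score meets the threshold."""
    return score_email(body) >= threshold


if __name__ == "__main__":
    sample = (
        "Dear user, immediate action required: your password will expire today. "
        "Confirm your credentials at http://192.168.10.5/login"
    )
    print(score_email(sample), is_suspicious(sample))  # prints: 5 True
```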

In addition to the ban, the university organized multiple seminars to educate students about the new era of programming and AI. These seminars aimed to raise awareness of AI’s potential benefits and risks and to provide guidance on using AI safely and ethically.

It is important to note that ChatGPT is not inherently malicious. AI technology is still under development, and its security vulnerabilities are not fully understood, which means new risks can emerge at any time. Organizations should therefore review their security practices regularly and keep them current with the latest threats.
