ChatGPT-4: AI’s Evolving Capabilities and Consequences for Cybersecurity

ChatGPT, the OpenAI chatbot that has taken the tech world by storm, is getting smarter with today’s release of GPT-4. The technology responds to queries and exchanges information back and forth in a manner that is almost human. Its impressive responses, with the content and flow of a human-to-human conversation, feel like a genuine technological breakthrough, much like the early internet in the 1990s. With the release of GPT-4, the speed at which AI is evolving has practitioners wondering what the impact on cybersecurity will be.

This blog starts with some historical background, covers what we know today about ChatGPT, speculates a bit about how ChatGPT will affect cybersecurity in the near term, and provides a glimpse into Nozomi Networks product functionality that we will introduce later this year.

History: From Early AI to ChatGPT

The history of Artificial Intelligence (AI) is as old as the history of computer science, and maybe older.

In his seminal 1950 paper “Computing Machinery and Intelligence”, Alan Turing formulated what is now known as the Turing Test. Its intent was to set the bar for deciding whether a computer is really “thinking” like a human, and it boils down to having a human conduct a text conversation and judge whether the party on the other side is another human or a machine. If the human interrogator cannot tell that the other side is a computer, the computer wins. To date, no computer has convincingly passed the Turing Test.

AI researchers have long pushed toward a generic, universal algorithm, system, or model able to learn anything and mimic the human learning process. Despite that goal, the flexibility of a three-year-old’s learning is still unmatched by today’s computers and algorithms. The “failure” to reach this holy grail of AI led to an “AI winter” in the second half of the 1970s. Since then, after a loss of research popularity and funding in the ’80s and early ’90s, much has been achieved by so-called “narrow AI”, which aims to solve specific problems on the recognition that larger, more generic problems are too hard. Today, neural networks of all kinds, deep and shallow, are used for many tasks: easily available to everyone (think of tools like TensorFlow or scikit-learn), accelerated with dedicated hardware (the Apple Neural Engine and Google’s Tensor Processing Unit), and treated like any other software engineering tool for solving specific problems.

Narrow AI is interesting and useful, and it continues to progress non-stop. But its “narrow” tag is there as a reminder that passing the Turing Test is no longer the goal. Maybe one day a computer will pass the Turing Test, and at that point we may ask ourselves whether the computer is really thinking, but that is another story. Behind the current excitement are the advancements made in the last few years in what is called Artificial General Intelligence, which goes back to tackling the single, big learning problem (while leveraging narrow AI building blocks) with systems that require less engineering when adapted to a specific problem and domain.

Today: Enter ChatGPT

OpenAI’s ChatGPT has definitely shaken the ground since its introduction in November 2022.

ChatGPT is a chatbot based on a Large Language Model (LLM): you give it a question or instruction (a prompt), and it writes a response. The prompt gives the chatbot a context; within that context, it uses a specific corpus of content and probabilities to determine which words best follow one another, forming new sentences. It generates new, unique text and does not show its source content. All of this is available for consumption via a web application and APIs.
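For the curious, here is a minimal sketch of such a programmatic query, assuming the openai Python package as it existed around the GPT-4 launch (pre-1.0 interface); the API key and prompt are placeholders.

```python
# Minimal sketch: one prompt/response round trip with OpenAI's chat API.
# Assumes the pre-1.0 openai package (pip install "openai<1.0").
import openai

openai.api_key = "sk-..."  # placeholder; set your real OpenAI API key

response = openai.ChatCompletion.create(
    model="gpt-4",  # or "gpt-3.5-turbo", depending on account access
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain the Turing Test in two sentences."},
    ],
)

print(response.choices[0].message.content)
```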

Everyone who gives ChatGPT a spin is impressed by the quality of its answers, and it can be easy to think it could pass the Turing Test; searching the topic on the web, one can even find folks declaring that it has. I’d say it is not passing the Turing Test. As OpenAI’s researchers are keen to admit, it is far from perfect: not infrequently, answers are wrong, inconsistent, or inaccurate. However, the bot works “well enough” that people may assume it is telling the truth even when it is not.

Moreover, we need to remember that it is a chatbot. You ask a question, and ChatGPT answers. While its application seems quite broad, it still has a somewhat narrow goal: to start from a context and produce content that makes sense for that context. It cannot properly analyze data (structured, unstructured, or images), train a neural network, or do most of the jobs a human can do. The list of things it cannot do, and of mistakes it can make, is huge. It is not the terrifying Skynet of the Terminator movies, nor one of the fully autonomous “AI” systems that sci-fi stories have featured for decades. Again, the achievements obtained are astonishing, especially for such an early version, but it is important not to let our enthusiasm carry us beyond reality.

OpenAI’s technology (in particular a tailored version of GPT-3, known as Codex) powers GitHub Copilot, a tool that assists software developers in writing software. Still, it is an aid, an efficiency or acceleration tool. Today, you cannot create software with Copilot alone and hope it will replace a human software engineering team. But it is powerful, and software developers of all kinds can benefit from it. Skilled developers, the so-called 10x developers, will gain an additional speed boost, perhaps to the point that one day it will not be possible to stay competitive in the software engineering market without such tools at your disposal.

Research and development on this topic is not going to stop. Things will keep getting better, and today’s limitations will shrink over time. The world’s best-in-class researchers and engineers are going to raise the bar year after year.

How Will ChatGPT Affect Cybersecurity?

All technology can be used for good or evil, and ChatGPT is no exception. A lot has been shared on the web about how ChatGPT can be used effectively. At the end of the day, the tool has ingested a considerable amount of knowledge and, with the limitations underlined above, it can help the bad guys automate, speed up, and improve existing malicious initiatives.

Here is a summary of evil uses of ChatGPT that we’ve found online:

  1. While ChatGPT has content filters to avoid misuse, there are many examples online showing how, via its web GUI or programmatic API, one can bypass those filters and obtain harmful code in response (e.g., source code for polymorphic malware, which is harder for anti-malware technology to detect). The same technique can be used to weaponize a recently-disclosed vulnerability, helping an attacker deliver a malicious payload to a victim. This use case lowers the technology bar for those who want to cause harm: if less skill is required for an evil deed, more people can do it.
  2. ChatGPT is fully automated and requires no human intervention to generate its responses, so it can be used programmatically (as part of a computer program) and in parallel (many requests at the same time). This means that phishing attacks, and more targeted spear-phishing, can employ far more sophisticated social engineering. With fewer language and translation mistakes, and with automation reaching a much higher volume of people, it becomes easier to create large-scale, highly credible phishing campaigns.
  3. Beyond spear-phishing campaigns, which generally rely on text-based email impersonation, ChatGPT can be used for large-scale “deep fake” campaigns. ChatGPT has been showcased as a chatbot with a text interface, but with additional stages in a data pipeline it can be used to generate audio and video output. By learning how a particular person looks or speaks from input images, video, or sound, AI technology can generate credible audio of that person’s voice and/or animated images (think of a Zoom video) following a script the real person never performed. These deep fakes can be used for fraud, theft, deception, impersonation, smear campaigns, brand attacks, and more.
  4. If one can direct or modify the content ChatGPT uses to learn context (which is not yet possible in the public versions available today), then large-scale misinformation campaigns become possible. These could produce, for example, large Twitter storms that appear to support a misleading version of reality or events, or other similar false narratives.
  5. ChatGPT and similar technologies are so powerful that they are likely to be embedded in much of the technology and many of the products we will use every day, which opens significant supply chain exposure (remember SolarWinds?). Once part of our lives, ChatGPT can be affected by “hallucinations”: erroneous outputs or conclusions derived from the content it processes. Hallucinations can happen by accident (like a self-driving car that does not “see” an object in front of it) or be induced with malicious intent.

With such powerful tools available, it should be clear that keeping up with ChatGPT-enabled adversaries is going to be the next chapter of the cat-and-mouse game, with the acknowledgment that the bad guys just got a significant upgrade.

On the other hand, ChatGPT technology can also be used to advance cybersecurity’s agenda. Here are a few examples:

  1. Since we know that ChatGPT will be a tool for malicious actors, we can prepare for it as part of the arsenal we need to defend against. For example, we can incorporate the evil uses outlined above into the battery of tests that security products and practices must pass.
  2. With ChatGPT’s help, product vendors can more easily detect vulnerabilities in their software and prioritize their elimination (see the sketch after this list).
  3. ChatGPT can be incorporated into a company’s anti-phishing training, raising the bar for employee awareness and the robustness of process safety.
  4. ChatGPT can assist security analysts in analyzing and reporting on security threats in the wild, leading to faster discovery, faster and wider sharing of threat intelligence, and ultimately faster remediation of exposure to those threats.
  5. As ChatGPT technology takes a stronger foothold in the world of software development, it can be trained on more secure content. Software developers using this kind of technology will increasingly reuse software that has fewer vulnerabilities, is easier to maintain, and follows more secure guidelines, leading to more robust implementations of secure-by-design.
  6. Red teams and blue teams (partially or fully automated with ChatGPT) can take cybersecurity penetration testing, vulnerability scanning, bug bounty programs, attack surface discovery and analysis, and much more to the next level.
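To make point 2 above concrete, here is a minimal sketch of LLM-assisted code review, again assuming the pre-1.0 openai Python package; the vulnerable snippet, prompts, and model name are illustrative placeholders, not a vetted security workflow.

```python
# Minimal sketch: asking an LLM to review code for security flaws.
# Assumes the pre-1.0 openai package (pip install "openai<1.0").
import openai

openai.api_key = "sk-..."  # placeholder; set your real OpenAI API key

# Hypothetical snippet with a deliberate SQL injection flaw, for illustration.
snippet = '''
def get_user(cursor, username):
    cursor.execute("SELECT * FROM users WHERE name = '" + username + "'")
    return cursor.fetchone()
'''

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a code reviewer focused on security flaws."},
        {"role": "user",
         "content": "List any vulnerabilities in this Python code:\n" + snippet},
    ],
)

print(response.choices[0].message.content)  # expected to flag the SQL injection
```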

ChatGPT also creates an opportunity for cybersecurity vendors. The shortage of cybersecurity professionals is a challenge that companies across the globe face every day, and will likely face for years to come. The ability to rely on highly autonomous and intelligent tools will become a necessity. Good enough won’t be enough; tools will need to be exceptionally good just to enter the league.

Up Next: Implications for the Nozomi Networks Platform

The Nozomi Networks Platform has always used AI and ML to power and improve some of its algorithms. Signal learning algorithms for process learning, clustering algorithms for incident grouping, Bayesian models for symbolic predictions, neural networks of all kinds for classification and regression: all sorts of models are used in code produced by Nozomi Networks engineers, whether in final products sold to customers or “just” in research projects for future use.
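As a toy illustration of the clustering idea (and emphatically not Nozomi Networks’ actual implementation), here is how an off-the-shelf algorithm such as scikit-learn’s DBSCAN could group similar alerts into incidents; the feature vectors are invented for the example.

```python
# Toy example: grouping similar alerts with DBSCAN from scikit-learn.
# The feature vectors are invented; a real system would encode fields such
# as source, destination, protocol, and timing into numeric features.
import numpy as np
from sklearn.cluster import DBSCAN

alerts = np.array([
    [0.10, 0.20, 0.0],   # two nearly identical alerts...
    [0.12, 0.21, 0.0],
    [0.90, 0.80, 1.0],   # ...and two alerts from a different pattern
    [0.88, 0.82, 1.0],
])

labels = DBSCAN(eps=0.1, min_samples=2).fit_predict(alerts)
print(labels)  # alerts sharing a label belong to the same incident group
```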

As a cybersecurity company, we believe that research and innovation cannot stop. We should not rest on our laurels, or the bad guys will win this fight. While creating a tool like ChatGPT is not our mission, we are committed to continuing to make our platform more intelligent than ever. We have many longstanding R&D initiatives that will make our platform even more efficient and intelligent, so that the good guys have the right tools to counter the bad actors out there.

Stay tuned!