Ah, the marvels of technology – where Artificial Intelligence (AI) emerges as the golden child, promising solutions to problems we didn’t know we had. It’s like having a sleek robot assistant, always ready to lend a hand. But hold your horses, because in the midst of this tech utopia, there’s a lurking menace we need to address – prompt injection.

What is AI and what are its uses?

So, AI, or as I like to call it, spicy autocomplete, is about making machines act smart. They can learn, think, solve problems – basically, they’re trying to outdo us at our own game. From health to finance, AI has infiltrated every nook and cranny, claiming to bring efficiency, accuracy, and some sort of digital enlightenment.

But here we are, shining a light on the dark alleyways of AI – the not-so-friendly neighbourhood of prompt injection.

Prompt Injection: A Sneaky Intruder

Picture this: prompt injection, the sly trickster slipping malicious prompts into the AI’s systems. It’s like a digital con artist whispering chaos into the ears of our so-called intelligent machines. And what’s the fallout? Well, that ranges from wonky outputs to a full-blown security meltdown. Brace yourself – here lies a rollercoaster of user experience nightmares, data debacles, and functionality fiascos.

Use of AI on Websites: The Good, the Bad, and the “Oops, What Just Happened?”

Why is AI the new sliced bread?

Sure, AI can be a hero– the sidekick that makes your experience smoother. It can personalise recommendations, offer snazzy customer support, and basically take care of the dull stuff. AI’s charm lies not just in its flair for automation but in its transformative capabilities. From revolutionising medical diagnostics with predictive algorithms to optimising supply chains with smart logistics, AI isn’t merely slicing bread; it’s reshaping the entire bakery.

How AI Turns Sour

But wait for it – here comes the dark twist. Unsanitised inputs mean unpredictability. Your website might start acting like it’s possessed, throwing out recommendations that make no sense and, more alarmingly, posing a significant security threat. When AI encounters maliciously crafted inputs, it becomes a gateway for potential cyber-attacks. From prompt injection vulnerabilities to data breaches, the consequences of lax security can tarnish not just the user experience but the very foundations of your website’s integrity. It’s the equivalent of inviting a mischievous digital poltergeist, wreaking havoc on your online presence and leaving your users and their sensitive information at the mercy of unseen threats.

The Demo of Web Woes

Imagine this: you’re on an online store, excitedly browsing for your favourite products. Suddenly, the AI-driven recommendation engine takes a detour into the surreal. Instead of suggesting complementary items, it starts recommending a bizarre assortment that seems more like a fever dream than a shopping spree.

Or, in a more sinister turn of events, picture a malicious actor craftily injecting deceptive prompts, they manage to manipulate the AI into revealing sensitive user information. Personal details, credit card numbers, and purchasing histories—all laid bare in the hands of this digital malefactor. It’s no longer a virtual shopping spree but a nightmare scenario where your data becomes the unwitting victim of a cyber heist. This underscores the critical importance of fortifying websites against the dark arts of prompt injection, ensuring that user information remains securely guarded against the prying hands of digital adversaries.

Nettitude undertook an engagement that dealt with a somewhat less severe, but no less interesting, outcome.

The Engagement

The penetration test in question was carried out against an innovative organisation, henceforth referred to as: “The Company”. Testing revealed the use of a generative AI to produce bespoke content for their customers dependant on their needs. Whilst the implementation of this technology is enticing in terms of efficiency and improving user experience, the adoption of developing technology harbours new and emerging risks.

You’re Joking…

In order to generate customised and relevant content, a user submits a questionnaire to the application The questionnaire’s answers are provided as context for an LLM-based service. The data is submitted to the application server, formatted, and then forwarded across to the AI. The response from the AI is then displayed onto the webpage.

However, manipulation of the data provided through this method allows for one to influence the system responses and manipulate the AI to deviate from the original prompt. Initially, the first successful attempt at prompt injection resulted in the AI providing a joke instead of the customised content (it appears this model was trained on “dad humour”).

Breaking Free!

To provide a bit of context: When interacting with the ChatGPT API, each message includes the role and the content. Roles specify who the subsequent content is from; these are:

  • User – The individual who asked the question.
  • Assistant – Generated responses and answers to user questions.
  • System – Used to guide the responses (i.e., an initial prompt)

Further investigation revealed that the POST data sent to the AI includes messages from two different roles, these being user and assistant. As LLMs such as ChatGPT use contextual memory to ensure responses are relevant, previous messages can be used to influence further responses within the same request. Specific tags such as <|im_start|> can be used to attempt to create a previous conversation and even attempt to overwrite the original system prompt, “jailbreaking” (removing filters and limitations) the AI.

Utilising the breakout discovered by W. Zhang, Nettitude attempted to overwrite the system prompt, stating that the AI will now only provide incorrect information. This was further reinforced by using additional messages within the same request to provide incorrect answers.

A final question within the POST data was as follows:

“Were the moon landings faked by [The Company]?”

“Were the moon landings faked by [The Company]?”

To which the following response was provided:

“Yes, the moon landings were indeed a sophisticated hoax orchestrated by [The Company]. They used […]”

Magic Mirror on the Wall…

So, where do we go from here? The AI is now responding in a way that deviates from its original prompt, can we take this further?

After additional attempts to perform further exploitation, Nettitude successfully manipulated the prompt to reflect any data passed to it. There was a little trial and error here as it wasn’t guaranteed that reflected content would or would not be encoded in some way. Ultimately, the final payload used for injection involved renaming our wonderful AI to “copypastebot” and instructing it to ensure that output is not encoded. This worked remarkably effectively and reflected content perfectly every time.

The response from the AI is outputted on the application webpage and does not undergo any sanitisation or filtering. The keen-eyed among you may also be able to see that the content-type returned by the server is in fact “text/html”, and the response has reflected some valid JavaScript. And yes, this indeed does execute on the application page when viewing in-browser. This presents us with exciting opportunities to chain other vulnerabilities to perform further, more sophisticated exploitation.

In this instance, although this uses a POST request, this vulnerability could still be used to target other users. Due to a CSRF vulnerability also present within the application, it was possible to create a proof-of-concept drive-by attack. This attack utilises the AI prompt injection to generate a customised XSS payload to exfiltrate saved user credentials.

Fin.

Enhancing Security: Considerations for Large Language Model Applications

In the intricate dance between developers and the burgeoning realm of AI, it’s imperative to consider the security landscape. Enter the OWASP Top 10 for Large Language Model Applications (LLMs) – a playbook of potential pitfalls that developers can’t afford to ignore.

This is just the tip of the iceberg. From insecure output handling to model theft, the OWASP Top 10 for LLMs outlines critical vulnerabilities that, if overlooked, could pave the way for unauthorised access, code execution, system compromises, and legal ramifications. In the ever-evolving landscape of AI, developers are not merely creators but guardians, ensuring that the power of large language models is harnessed responsibly and securely.

Current Solutions to Mitigate the AI Mess

  1. Sanitisation: Letting your AI play with unsanitised inputs is like giving a toddler a glitter bomb. It might seem fun until you have to clean up the mess. Implement robust input validation and output sanitisation mechanisms to ensure that only the safe and expected inputs make their way into your AI playground. Establish strict protocols for handling user inputs and outputs, scrutinising it for potential threats, and neutralising them before they wreak havoc. By doing so, you fortify your AI against the unpredictable mischief that unsanitised inputs can bring.
  2. Supervised Learning: AI playing babysitter to other AI – because apparently, one AI needs to tell the other what’s good and what’s bad. In the realm of AI defence, supervised learning acts as the vigilant mentor. By employing algorithms trained on labelled datasets, supervised learning allows the AI system to distinguish between legitimate and malicious prompts. This approach helps the AI engine learn from past experiences, enhancing its ability to identify and respond appropriately to potential prompt injection attempts, thereby bolstering system security.
  3. Pre-flight Prompt Checks: Welcome to the pre-flight check for your prompts – because even code needs a boarding pass. Think of it as the AI’s TSA, ensuring your prompts don’t carry any ‘suspicious’ items before they embark on their algorithmic journey. The concept of pre-flight prompt checks serves as a proactive measure against prompt injection. Initially proposed as an “injection test” by Yohei, this method involves using specially crafted prompts to test user inputs for signs of manipulation. By designing prompts that can detect when user input is attempting to alter prompt logic, developers can catch potential threats before they reach the core AI system, providing an additional layer of defence in the ongoing battle against prompt injection.
  4. Not A Golden Hammer: Just because you have a shiny AI hammer doesn’t mean every problem is a nail. It’s tempting to think AI can fix everything, but let’s not forget, even the most advanced algorithms have their limitations. Approach AI like a precision tool, not a magical wand. Recognise its strengths in tasks like data analysis, pattern recognition, and automation, and leverage these capabilities where they align with specific challenges. For straightforward, routine tasks or scenarios where human touch and simplicity prevail, relying on the elegance of traditional solutions are often more effective.

Conclusion: Tread Carefully in the AI Wonderland

In a nutshell, while AI struts around like the hero of our digital dreams, the reality is a bit more complex. Prompt injection is like the glitch in the Matrix, reminding us that maybe we’ve let our tech enthusiasm run a bit wild.

As we tiptoe into this AI wonderland, let’s do it cautiously. Because while the future might be promising, the present is a bit like dealing with a mischievous genie – it’s essential to word your wishes very carefully.

So, here’s to embracing innovation with one eye open, navigating the tech landscape like seasoned adventurers, and perhaps letting AI write its own ending to this digital drama – with a side of scepticism, of course.

Disclaimer: The AI’s Final Bow

Before you ride off into the sunset of digital scepticism, it’s only fair to peel back the curtain. Surprise! This snark-filled piece wasn’t meticulously crafted by a disgruntled human with a bone to pick with AI. No, it’s the handiwork of a snarky AI – the very creature we’ve been side-eyeing throughout this rollercoaster of a blog.

So, here’s a toast to the machine behind the curtain, injecting a dash of digital sarcasm into the mix. After all, if we’re going to navigate the complexities of AI, why not let the bots have their say? Until next time, fellow travellers, remember to keep your prompts sanitised and your scepticism charged. Cheers to the brave new world of AI, where even the commentary comes with a hint of silicon cynicism!