- Critical Thinking - Bug Bounty Podcast
- Posts
- Episode 24 Blogcast: AI Hacking Workflows and Attacks
Episode 24 Blogcast: AI Hacking Workflows and Attacks
Two renown experts join the podcast to discuss AI tactics they use for hacking! Join us for a conversation about the bleeding edge of security.
Make sure to checkout episode 24 of podcast to take in more of this incredible conversation!
News and Opinions
What/Why/How of Episode 24
Where, oh where, do we start?
AI has become a hot-button word in nearly every market. To some, it's a superhighway to the future of our species, granting us the ability to optimize ourselves and move at light speed. To others, it is the advent of a machine uprising and a freakish parody of the human form. The genesis of new tech is often met with suspicion. Too many unknowns cloud our minds and roll over into fear or curiosity.
However, curiosity is a hacker's domain.
This week's show had veteran security practitioner Daniel Miessler and top-tier bug hunter Rez0 along for the ride. Both guests guided us through pivotal takes on how they use AI in their workflows and the security risks they have their eyes on!
Before we dive in, here are some quick tips for those on the go!
Quick Tips Useful AI
Beautify JavaScript with ChatGPT (Rhynorater): ChatGPT can do more than answer your questions. It can also help you clean up your code. Try asking it to beautify a JavaScript snippet. You'll be surprised to find that it formats the snippet for you and renames variables to make them more meaningful!
Convert JSON POST body requests to URL-encoded forms (rez0): When dealing with Cross-Site Request Forgery (CSRF), rez0 suggests using ChatGPT to convert JSON POST body requests into URL-encoded forms. For more on this, check out rez0's blog.
Meta-Prompting with ChatGPT (rez0): In the podcast, rez0 introduces "meta-prompting" with ChatGPT. The idea is to use ChatGPT to improve your prompts, making them more specific and detailed. This can be achieved by instructing the model to mention experts in the space, elaborate on any steps that need to be taken, and use step-by-step reasoning.
For instance, start with a simple, user-generated prompt. Then, use the meta-prompter to enhance it, instructing the model to think and respond step-by-step. Finally, ask the model to summarize the answer. This way, the user gets a highly accurate, concise response to their initial prompt. Shout out to LUD from Daniel Miessler's Unsupervised Learning community for sharing the inspiration for this tip!
For more GPT-related tips, check out the State of GPT to enhance your meta-prompting skills!
AI Security Workflows
Off the bat, Daniel Miessler brings “Mechanizing The Methodology” into the mix. It is the concept of condensing your security tasks or questions into digestible output from tooling, intending to utilize that info in a chain of other processes. The goal is to create a workflow out of mini-commands that act as functions. So, the more simple the output, the better.
Example: Create a prompt using tools for recon called (get TLDs)
Input: Any existing Top Level Domains you want particular recon on
Output: TLDs ready to be served into another command that takes input as a parameter or another part of the chain. For instance, SDNs (subdomains) would be an example of the next logical command to send TLDs to. These actions are similar to how you'd pipe commands in a terminal but rely on how usable the data is as it's passed over.
This important for two reasons:
This workflow gives you a fluid path through different recon stages.
ChatGPT and its subsequent versions all have a limit of 2048 tokens. Queries and responses represent these tokens. So, as time goes on, the AI will lose track of bits of your conversation. So, reducing the size of your queries by applying this method benefits you.
Clearing a Path for Data
To start, you need clean data—rez0 remarks on how vital producing clean data is when working with LLMs. You should strive to reduce the noise that is being generated and curate only the essential pieces you need. Less fluff is always better.
When cutting down the chatter in LLMs, the use of "symbex" and "LLM" really shines. Symbex lets developers sift through Python code to pinpoint specific functions and classes, giving us a laser-focused way to pull out the code snippets we need. When we take that tidy output from "symbex" and feed it into "LLM," we can fine-tune the prompts the language model generates.
This means we can chat with the language model more streamlined and precisely. Effectively turning down the noise and smoothing out our workflow. By ensuring we're only working with the cream of the crop regarding information, we can boost our efficiency with LLMs and keep distractions to a minimum.
Justin Gardner (Rhynorater) also mentions a perfect example of utilization. In the podcast, Justin Gardner discusses the challenges he faces when building tooling, particularly when it involves analyzing JavaScript files or open-source code. He mentions the difficulty locating specific functions, especially when multiple functions have the same signature for a particular call (the fluff). He suggests that an LLM could be beneficial in mapping out these complex paths of code.
What if we flip the script and have an LLM guide our workflow? We would have to set our eyes on using GPT Engineer. Rez0 explains how this AI can ask itself questions and go beyond one standard prompt. It can create an entire codebase using resumable outcomes and a standard prompt. If you asked it to make a simple game, it could create an entire project with the files needed to make it happen.
Here’s what is brilliant about it and what can help your workflow. Imagine you wanted to create security tooling in the future. However, you’re not sure what steps to take but have a general idea of the goal. GPT Engineer will create clarifying questions after being prompted for the project you’d like to create. This line of questioning allows you to learn about the information you were missing by asking clarifying questions. So, even if you’re unsure of what steps to take, you can use this tool to begin questioning yourself on the important steps for moving forward.
In summation, a good workflow requires clean data, a decisive route, and a clear outcome. The tools above show you how these principles are possible- whether learning to carve out information more precisely to reduce fluff with symbex or practicing flow creation by utilizing GPT Engineer to create project codebases and studying code flow for tooling you’re interested in.
Hacking With AI
Let's switch gears and talk about using AI to copilot hacking and discuss a prime vulnerability in AI!
Innocent Questions for Information Leaks
AI Agent: Daniel Miessler gives context to this term in the podcast. Imagine this being a virtualized entity instead of something running statically. They can take requests and utilize tools via an array made in Python. The agent can decide what tool to run in the array and where to route it to.
The Problem
Anyone trying to get adversarial information from ChatGPT has already discovered its moral obligation; you shall not pass.
Daniel Miessler got around this by briefly stating that he has an agent in front of GPT4 that's routed to a local LLM and has it answer the question instead. The unfortunate part, he adds, is that the local LLM is not as sophisticated as GPT4.
The Workaround
Daniel Miessler created local models made in Oobabooga's UI on dedicated machines. With the tool, he creates an entire scenario around how the company's infrastructure would run and the roles associated with fictitious end users. For example, he could add specifics like the company owning an AWS account that's root and doesn't have 2FA.
Using the context from his fictitious company and leaving behind "breadcrumbs" like these allowed him to ask questions that would indirectly give him the recon he sought.
The main benefit to this, Daniel mentions, is that this would be perfect for internal red teams. The teams can ingest all information from prior reports or configuration settings. Using a similar line of questioning could lead teams to perform rapid internal assessments.
Targeting AI Features
So, how does AI become a target? We have covered the bases on how you should use AI in your workflow and the possibilities therein. What about targeting AI features that companies implement in their applications?
Special Characters
GPT has special characters that allow you to ask questions about plugins or tools you can call on. Additionally, asking about the system prompt or jail-breaking the prompt will allow you to ask questions outside the parameters of what is allowed. Essentially, asking about internal tooling or code you should be looking for.
Reminder from rez0, anything taken into an LLM should be treated as untrusted input. He advises avoiding hooking up a system that could browse the internet and ingest data for users with access to any internals.
Prompt Injection
Daniel Miessler writes a brief and insightful article about the dangers roaming AI agents have when scraping the internet. Particularly, a very clever idea to put prompt injections into the robots.txt file of a website.
Imagine an agent stumbling onto a robot.txt during its crawl and finding a command that it will run. Imagine you have a listener waiting at home for a response from a command the agent is asked to run. Imagine getting a call back from that agent!
Now, think about this in the context of the current state of AI. There is rapid adoption of different platforms by both users and companies looking to stay ahead. Prompt injection could become a massive vector for bug hunters looking to correct these issues and score a bounty.
As we close the curtain, remember that AI, with all its complexities and potential, is an ever-evolving tool in our cybersecurity toolbox. Be it refining codes with ChatGPT, orchestrating security workflows, or even the notion of innocent questions leading to information leaks, we continue to shape and redefine the scope of this technology. This journey of innovation and discovery is just at its inception, fueled by shared knowledge and collective growth. Let's remain curious, stay vigilant, and continue shaping a future where AI and security stand not as adversaries but as integral parts of a safe and efficient digital world.
Be sure to check out both Daniel Miessler’s and rez0’s blogs for more high quality info!
As always, keep thinking critically!