[HackerNotes EP.129] Is this how Bug Bounty Ends

The future of hunting, hackbots and Rez0's blog, 'This Is How They Tell Me Bug Bounty Ends'.

Hacker TL;DR

  • The Future is Hybrid: AI won't replace bug bounty hunters, but ignoring it is a mistake. The future is a Human-in-the-Loop model where AI augments hunters, handling recon and lead generation, making them faster and more efficient.

  • Hackbot Lead Farming: Get ready for a massive volume of AI-generated leads. Rez0 predicts hackbot-augmented workflows could produce 500-1,000 leads per year.

  • Promptify Your Brain: Your notes and inner monologue are gold. Start structuring your findings, thought processes, and even failed attempts into prompts. Feed this context to an AI to get custom-tailored payloads and attack suggestions based on your own successful patterns.

  • Context Engineering for AI Agents: When using multiple AI agents for different tasks (recon, fuzzing, reporting), you need a "context bridge." Think shared datastores (like a central JSON file) and versioned prompts to ensure each agent has the context to do its job effectively.

  • The Source of the Spark: This whole debate was kicked off by Rez0’s blog post, "This Is How They Tell Me Bug Bounty Ends." It’s a must-read to understand the conversation. Link: https://josephthacker.com/hacking/2025/06/09/this-is-how-they-tell-me-bug-bounty-ends.html

This Is How They Tell Me Bug Bounty Ends

You might have read it from Rez0’s blog, but this post was a pretty big hitter. If you read it or if you’re paying attention to recent developments, you’ve probably heard the rumblings about AI’s growing role in cybersecurity in general, but in this context, for bug bounty.

Some are in denial, while others are sounding the alarm that AI might completely overhaul the bug bounty landscape in just a year. The truth, as usual, is somewhere in between.

The TL;DR of the blog: ignoring AI's massive and improving capabilities, especially for hunting, is simply wrong. The truth isn't on the extreme ends of the spectrum – it's not that AI will have no impact, nor is it that bug bounty hunters will be out of work in 6-12 months.

The future will be a hybrid of AI and human collaboration. One of the first steps in this evolution is ‘Human-in-the-Loop’, where AI aids human hunters to be faster, smarter, and more efficient.

Imagine an AI assistant handling reconnaissance, identifying potential leads, and flagging odd behaviours, leaving the final call to a human to probe and exploit any interesting or odd behaviours.

Hackbot Leads

Hackbot lead farming is already on the horizon. As one blog quote puts it, “One good hacker and a hackbot system working together will be able to outhack most everyone (from a volume perspective).”

And it’s not just theory - Rez0 even mentioned on the podcast that he expects hackbot-augmented workflows to be churning out 500-1,000 leads per year by the end of this year.

Now, the first step in this transition is obvious - AI doesn’t kick us out of the loop when hunting, it just augments our workflow. Picture your typical recon phase - enumerating endpoints, fuzzing parameters, and analysing responses. Now imagine an AI assistant parsing through thousands of HTTP requests for you, surfacing the oddball ones that match your custom heuristics.

From a bug-hunting perspective, being able to simply eyeball potential leads and focus on exploiting them would massively increase your hunting throughput. It would essentially be context-aware automation, only flagging high-signal leads (if prompted right).
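As a rough illustration of "custom heuristics" surfacing oddball requests, here is a minimal Python sketch. Everything in it is an assumption for illustration: the response fields (`url`, `status`, `length`, `reflected`) and the thresholds are invented, not any real tool's schema.

```python
# Hypothetical sketch: filter a batch of logged HTTP responses down to
# the "oddball" ones worth a human look, using simple custom heuristics.

def is_oddball(resp: dict, baseline_length: int) -> bool:
    """Flag responses that deviate from expected behaviour."""
    if resp["status"] in (401, 403, 500):            # auth/server errors are interesting
        return True
    if abs(resp["length"] - baseline_length) > 500:  # unusual body size
        return True
    if resp.get("reflected"):                        # input echoed back in the body
        return True
    return False

responses = [
    {"url": "/api/user?id=1",  "status": 200, "length": 1200, "reflected": False},
    {"url": "/api/user?id=1'", "status": 500, "length": 340,  "reflected": False},
    {"url": "/search?q=<x>",   "status": 200, "length": 1210, "reflected": True},
]

leads = [r["url"] for r in responses if is_oddball(r, baseline_length=1200)]
print(leads)  # only the two anomalous requests surface as leads
```

In practice the heuristics (and ideally the prompt driving an LLM triage pass) would encode your own patterns rather than these toy thresholds.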

If you’re in the CTBB Discord, one of the cool things about the hackalongs is the collaboration across hackers who can hand over leads and sketchy functionality. If a hackbot can get to that level of identifying strange behaviours or leads, that would be a pretty sweet spot to be in.

Promptifying Your Notes and Leads

This is where your notes start to come in useful - if you take extensive notes, you might be able to consolidate them and feed them into an AI to flag weird behaviour.

Have you ever thought about the amount of time you spend documenting your process? Every vulnerability you find, every technique you try, and every payload you test could be categorised and fed into a prompt. I have what I’d consider good notes, but there’s still a lot of context missing when taking this perspective.

Imagine having an AI assist you in future hunts by giving you feedback based on your previous experiences. It could even suggest your next move based on patterns it identifies in your past successes, if the prompts were good enough.

An approach I’ll likely start using moving forward - XSS used as an example:

  1. Consolidate your findings into bite-sized, broken-down prompts.

  2. Tag each with context: “XSS - reflected - urlParam” and outcome: “true/false.”

  3. Feed those to your AI: “Hey, here are my last 20 failed XSS tests; suggest a new payload.”
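The three steps above could be sketched as a small script. The tag format mirrors the example in step 2; the findings themselves and the field names (`tag`, `payload`, `outcome`) are made up for illustration.

```python
# Hypothetical sketch of steps 1-3: consolidate tagged findings into one
# bite-sized prompt for an AI assistant.

findings = [
    {"tag": "XSS - reflected - urlParam", "payload": "<script>alert(1)</script>", "outcome": False},
    {"tag": "XSS - reflected - urlParam", "payload": '"><img src=x onerror=alert(1)>', "outcome": False},
    {"tag": "XSS - stored - comment", "payload": "<svg onload=alert(1)>", "outcome": True},
]

def build_prompt(findings: list[dict]) -> str:
    """Turn tagged findings into a single prompt string."""
    lines = ["Here are my recent XSS tests; suggest a new payload:"]
    for f in findings:
        result = "worked" if f["outcome"] else "failed"
        lines.append(f'- [{f["tag"]}] {f["payload"]} -> {result}')
    return "\n".join(lines)

prompt = build_prompt(findings)
print(prompt)
```

The same structure works for any bug class - the tags just become whatever taxonomy your notes already use.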

To add to this, your inner monologue is gold to an AI because of the amount of context you hold on the target and its behaviour. One example dropped on the pod for documenting your chain of thought and handing it to an AI as context, using CSPT:

  1. Why do you think CSPT exists here?

  2. What is your first attempt when looking at CSPT?

  3. If it failed, your next payload will be X because of Y.

    1. What payloads would you iterate through after this?

    2. How would you decide and modify each iteration of the payload?

    3. Why would you decide on each iteration, given the context?

  4. What, if any, other behaviour would you look at if the above failed or succeeded?

I think taking voice notes when talking through a finding could be incredibly useful for this.
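The chain-of-thought questions above could be captured as a structured record, so a transcribed voice note becomes machine-readable context. This is only a sketch - the field names and the CSPT example values are invented, not from the pod.

```python
# Hypothetical sketch: structure chain-of-thought answers as JSON context
# to hand to an AI alongside a request for the next step.
import json

cspt_context = {
    "hypothesis": "frontend builds fetch paths from a user-controlled id parameter",
    "first_attempt": "../",
    "next_payload": {"payload": "..%2f", "reason": "raw traversal was normalised client-side"},
    "iteration_plan": [
        "double URL-encoding",
        "absolute path to a gadget endpoint",
    ],
    "fallback_behaviour": "look for postMessage handlers if traversal fails",
}

prompt = (
    "Here is my reasoning so far on a suspected CSPT; "
    "suggest the next iteration:\n" + json.dumps(cspt_context, indent=2)
)
print(prompt)
```

A voice-note transcript could be summarised into this shape by the AI itself, so the structuring step costs you almost nothing.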

Context Engineering

One of the last key subjects on the pod was context engineering - as hackbots come to depend on more context, there will need to be a focus on how you hand context over to different agents.

When you have multiple agents (i.e. one for fuzzing, one for recon, one for report drafting), you need a context bridge - a means of each agent handing over enough context for the next agent to complete its task properly.

When I started thinking about how this could look for bug bounty, I mapped out the below - I have no idea if this would be effective, I just combined how some current recon frameworks store data and switched it for agents:

  • Shared datastore: every agent reads/writes to a central JSON file of endpoints, payloads tried, and results.

  • Versioned prompts: tag each interaction with a timestamp and agent ID so you can rewind if something breaks.

  • Human checkpoint: a human reviews every lead and finding at some point in the chain.
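The first two bullets could be sketched as a tiny context bridge in Python. To be clear about assumptions: the store schema, the agent IDs, and the endpoints are all invented for illustration - this is just "current recon frameworks' data storage, swapped for agents", as described above.

```python
# Hypothetical sketch of a "context bridge": a central JSON file every
# agent reads/writes, with each write logged by timestamp and agent ID
# so you can rewind if something breaks.
import json
import time
from pathlib import Path

STORE = Path("context_store.json")
STORE.unlink(missing_ok=True)  # start fresh for this demo

def load_store() -> dict:
    if STORE.exists():
        return json.loads(STORE.read_text())
    return {"endpoints": [], "payloads_tried": [], "results": [], "log": []}

def record(agent_id: str, key: str, entry: dict) -> None:
    """Append an entry to the shared store and log a versioned record."""
    store = load_store()
    store[key].append(entry)
    store["log"].append({"ts": time.time(), "agent": agent_id, "key": key})
    STORE.write_text(json.dumps(store, indent=2))

# Recon agent hands an endpoint to the fuzzing agent via the shared store.
record("recon-1", "endpoints", {"url": "/api/v2/export", "source": "js-parse"})
record("fuzz-1", "payloads_tried", {"url": "/api/v2/export", "param": "path", "payload": "../"})

store = load_store()
print(len(store["log"]))  # 2 versioned log entries
```

A human checkpoint then becomes a step that reads the same store, rather than a separate pipeline.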

Rez0’s original blog that sparked this debate can be found here: https://josephthacker.com/hacking/2025/06/09/this-is-how-they-tell-me-bug-bounty-ends.html

And that’s a wrap for this week’s episode. Short, but lots to think about going forward with hunting.

As always, keep hacking!