[HackerNotes Ep. 173] Bug Bounty: Is It Dead, or Are We Just Getting Started?

Bug bounty isn't dead, but it's definitely in flux. Here's what's actually changing.

Hacker TL;DR

  • Google's VRP is paying more for top-tier impact (up to $1.5M for a zero-click full chain on Pixel) but cutting payouts on lows and meds. Programs everywhere are swamped with AI reports.

  • Network effects (model providers shipping security review, cheaper agentic pentests, internal red teams running hackbots) are killing the easy bug pipeline from every side.

  • The copium is real: scope on big targets is too big for any team to fully cover, top hunters running agents at scale find more than ever, and learning has never been faster.

  • Video-first PoCs and submission fees (like HackenProof did) are showing up as filters against AI slop, and opting out of training data is worth taking seriously.

Join the discussion on Discord and share your thoughts on the future of bug bounty: forms.ctbb.show/future_of_bug_bounty. Justin and Joseph want this feedback to pass directly to the platforms.

Today's Sponsor: Check out ThreatLocker Zero Trust Cloud Access https://www.threatlocker.com/capabilities/zero-trust-cloud-access

In the News

The Bear Case: Network Forces Squeezing the Funnel

1. Programs Can't Keep Up

Triage and payment delays are basically the new normal. Some programs are closing entirely. Others, like Google's Android and Chrome VRPs, are dropping payouts for low- and medium-severity findings in certain scopes. Google's blog says this is to focus more on impact, and to be fair, the high-impact ranges actually went up (a zero-click full chain with persistence on Pixel's Titan M2 now pays up to $1.5M). But the bottom of the funnel is being cut hard, and that's where a lot of hunters built steady income.

2. Model Providers Are Coming Downstream

Anthropic shipping a Security Review feature is the warning sign. The idea is simple and not great for us: if your business is a wrapper around someone else's model, the model provider eventually eats you. Providers now care about security, both because they need to defend their own infra and because the security tooling market is real money, even if it isn't their biggest opportunity.

Claude 4.6 → 4.7 already pushed Cyberbench scores from ~30% to ~60%. No reason to think it's gonna stop.

3. Agentic Pentest Startups Are Racing to the Bottom

It's never been easier to wrap a model in a workflow, call it an "agentic pentest," and sell it. Snyk, Wiz, and the bug bounty platforms themselves are all building in this direction. Prices are crashing: fully automated pentests at $100 to $500 are showing up. Quality is all over the place, but like Joseph said, even a bad agentic pentest is probably better than the old-school compliance-checkbox pentest a checked-out contractor phoned in over two weeks.

4. The New Pre-Disclosure Pipeline

Here's how the funnel looks from a target company's side in 2026:

  1. Internal devs push code generated 10 to 50× faster with AI coding agents.

  2. Internal AppSec / red team runs hackbots (white-box on the source, black-box on staging) before it ships.

  3. Third-party agentic pentest (platform-hosted or vendor) scans the asset before it enters scope.

  4. Bug bounty scope gets what's left after all that.

  5. Top hunters run their own hackbots 24/7 across that scope.

Every layer above the bug bounty line is getting better fast. By the time a finding reaches the public scope, the easy stuff is mostly already gone.

5. Top Hunters Scaling Their Own Edge

The same agents that compress the bottom of the funnel also boost the top. A skilled hunter can apply expertise across way more programs and way more scope at the same time. Net effect: the pie isn't growing, but the slices going to the top hunters are getting bigger.

We do subs at $25, $10, and $5. Premium subscribers get access to:

Hackalongs: live bug bounty hacking on real programs, VODs available
Live data streams, exploits, tools, scripts & un-redacted bug reports

Why It's Not Over

1. Easier Than Ever to Learn

Probably the most underrated upside: spinning up labs, walking through bug classes, having the model act as a sparring partner. Learning on your own has never moved this fast. Shout out to Daniel Miessler's "don't take your robots to the gym": use AI to actually understand stuff, not to skip the work.

2. Scope Is Genuinely Too Big

Even in the worst case where every layer of the new pipeline runs, the attack surface on a target like Yahoo or Google is too big for full coverage. There will be vulnerabilities. AI-fast dev cycles + huge attack surface = more bugs being shipped, not fewer.

3. We Ship More Than Ever

A hunter today, with a good agent loop, puts out more research and more tooling, and covers more attack surface than was possible a year ago. The same leverage squeezing the funnel works in your favor here.

4. You Can Finish What You Started

The classic bug bounty pattern: get a private invite, hunt for 6 or 7 hours, find 3 bugs, get distracted by the next invite, never come back. AI changes that. Old leads that were too time-consuming to finish can now be closed out. That hidden scope niche that used to give you 3 bugs may now give you 10, some of them deeper than anything you could find before.

The False Positive Wall

The catch you can't avoid: AI finds more, but most of it is not real. Joel made the point a couple of months back. What percentage of what comes out of your hackbot is actually a valid, reportable, impactful finding?

The implication is rough for new hunters:

It's easier than ever to find something. But if you don't have the skill to tell a real bug from a gadget, from intended behavior, from public-by-design data, you'll either spam programs and get banned, or get demotivated when nothing pays.

The skill that matters now is not just finding, it's validating. Plus context management, skill file writing, and learning to turn massive agent output into something a triager will actually accept.
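
To make that concrete, here's a minimal sketch of a cheap false-positive filter: replay each finding's PoC request and only keep what still reproduces. The findings.json format and field names below are made up for illustration, not any hackbot's real output.

```python
# Minimal false-positive filter: replay each hackbot finding's PoC
# request and keep only findings whose evidence still reproduces.
# The findings.json schema here is hypothetical.
import json
import requests

def reproduce(finding: dict) -> bool:
    """Replay the PoC request and check for the expected evidence marker."""
    try:
        resp = requests.request(
            method=finding["method"],
            url=finding["url"],
            headers=finding.get("headers", {}),
            data=finding.get("body"),
            timeout=10,
        )
    except requests.RequestException:
        return False  # unreachable target is not a valid finding
    return finding["evidence_marker"] in resp.text

def main() -> None:
    with open("findings.json") as fh:
        findings = json.load(fh)
    validated = [f for f in findings if reproduce(f)]
    print(f"{len(validated)}/{len(findings)} findings reproduced")
    with open("validated.json", "w") as fh:
        json.dump(validated, fh, indent=2)

if __name__ == "__main__":
    main()
```

A pass like this won't judge impact for you, but it kills the "not reproducible" class of slop before a human ever looks at it.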

What Programs and Platforms Should Do

Submission Fees (like HackenProof)

The HackenProof CTO ran this experiment in public. $1 didn't change anything. $5 cut AI slop and low-quality submissions by ~80%. $10 was the sweet spot. Their Web3 chain integration makes this easy: you just attach a gas-style fee on submission. Harder for traditional platforms to pull off, but the idea works.

A few caveats raised:

  • You'd need regional pricing (a flat $5 USD is too much for hunters in lower-income regions).

  • Banned or unwilling-to-pay hunters still have bugs they found. Where do those go?

  • Token or rep-based variants (earn submission credits via valid reports) were thrown around as alternatives; a toy sketch of the credit variant follows this list.
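
Here's roughly what that credit variant could look like. Everything below is a toy sketch with made-up names and numbers, not any platform's actual design:

```python
# Toy sketch of a rep-credit submission gate: hunters spend a credit
# to submit, and valid reports earn credits back. All constants and
# names are hypothetical.
from collections import defaultdict

SUBMISSION_COST = 1   # credits burned per submission
VALID_REWARD = 3      # credits earned back per valid report
STARTING_CREDITS = 3  # small grace allowance for new hunters

credits: defaultdict[str, int] = defaultdict(lambda: STARTING_CREDITS)

def can_submit(hunter: str) -> bool:
    return credits[hunter] >= SUBMISSION_COST

def submit(hunter: str) -> None:
    credits[hunter] -= SUBMISSION_COST

def on_triage(hunter: str, valid: bool) -> None:
    # Valid reports replenish credits, so skilled hunters never run dry,
    # while slop spam burns itself out after a few submissions.
    if valid:
        credits[hunter] += VALID_REWARD
```

The nice property: it self-tunes. Hunters with signal never pay net; pure spam accounts exhaust themselves without the platform touching anyone's wallet.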

Video-First / Video-Required Submissions

This is the idea that hit the hardest. Some live event programs already require a screenshot or short video as part of the submission. Lots of upside:

  • Filters AI slop right at the entry. You can fake it, but it's expensive at scale.

  • Protects the hunter when a bug becomes "no longer reproducible" mid-triage but was actually real at submission time.

  • Speeds up triage. The triager doesn't have to reverse-engineer the report from scratch.

Even if not mandatory, programs could fast-track submissions with video and pay them out under a "paid once validated" rule, even if the state changes later.
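
If video becomes the norm, it's worth automating the capture. Here's a minimal sketch using Playwright's built-in video recording; the target URL and reproduction steps are placeholders:

```python
# Sketch: capture a video PoC automatically while reproducing a web bug.
# Uses Playwright's built-in recording; URL and steps are placeholders.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context(record_video_dir="poc-videos/")
    page = context.new_page()
    page.goto("https://target.example/vulnerable-endpoint")  # placeholder
    # ... drive the actual reproduction steps here ...
    context.close()  # the video file is only finalized once the context closes
    print("PoC video saved to:", page.video.path())
    browser.close()
```

Near-zero marginal effort per report, and you get a timestamped artifact proving the bug was real at submission time.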

The Training-Data Debate

Should you be feeding your bug bounty reports into model providers (via tools like H1 Brain) to build your personal skill files?

The Cautious Side

  • Consumer Claude plans (Free, Pro, Max), including Claude Code on personal accounts, have been on an opt-out training model since the September 2025 policy change. The default is "Improve Claude for everyone" = on. If you're opted in, data is kept up to 5 years.

  • Even with API and Team/Enterprise being non-training by default, clicking thumbs up/down feedback makes the whole conversation usable for training and 5-year retention.

  • Self-host where you can. Use orgs / Enterprise tiers. Disable training across every machine where Claude Code is installed (it's not just account-level, you have to verify per environment).

  • Refs: Anthropic's data privacy controls and claude.ai/settings/data-privacy-controls.

  • Also: API keys pasted into chats already get flagged for rotation. Same caution applies to your reports.

The Pragmatic Side

  • Even if every top hunter opted out, the providers have enough money to buy or build top-tier cyber evals in-house. The OffensiveAICon talks last year had Meta, Google, Anthropic, OpenAI staff asking researchers publicly to build more cyber datasets. They got them.

  • Cyberbench-style evals are public. Providers can improve without a single user session.

  • Your individual contribution to the training pool is basically zero in terms of impact. The reward of a faster loop and more bugs found is real and immediate.

  • This is the privacy / voting analogy: your single data point doesn't move the big picture, but acting on principle is a fair personal choice.

Whichever side you pick:

  1. Disable "Improve Claude for everyone" on every personal Claude account and every machine running Claude Code.

  2. Use Team / Enterprise / API for anything sensitive (default no-training).

  3. For high-security work, ask for a Zero Data Retention (ZDR) agreement on the API.

  4. Don't paste valid PoC payloads, customer data, or live exploit chains into consumer chats no matter what your opt-out status is. Admin access exists at every company.

Methodology: How to Survive the Transition

Phase 1: Defensive Setup

  • Check training-data settings on every account and every Claude Code install.

  • Move sensitive workflows to API, Team, or Enterprise tiers.

  • Where you can, run open-source models locally for the most sensitive stuff; see the sketch after this list.
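
For that last point, most agent tooling that speaks the OpenAI API can be pointed at a local endpoint instead. A minimal sketch assuming an Ollama-style OpenAI-compatible server on localhost (the model name is whatever you've pulled locally):

```python
# Sketch: route sensitive prompts to a local model instead of a hosted one.
# Assumes an Ollama-style OpenAI-compatible endpoint on localhost.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local endpoint, nothing leaves the box
    api_key="unused",                      # required by the client, ignored locally
)

resp = client.chat.completions.create(
    model="llama3.1",  # placeholder: any model you've pulled locally
    messages=[{"role": "user", "content": "Summarize this bug report: ..."}],
)
print(resp.choices[0].message.content)
```

Local models are weaker, but for triaging your own un-redacted reports the tradeoff can be worth it.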

Phase 2: Day-to-Day Adjustments

  • Default to video PoCs on anything high or critical, so you're protected if the bug becomes "no longer reproducible" later.

  • Track triage and payment timelines per program. Drop programs that always stall.

  • Prefer programs with shorter dupe windows (live events especially), because agents are causing massive dupe splits on medium-hanging fruit.

Phase 3: Get Sharper

  • Validation is the new core skill. Build pipelines that make false-positive triage cheap.

  • Writing skill files, managing context, and handling agent output cleanly are now real hunting skills, not optional extras.

  • Use AI as a sparring partner for learning new bug classes, not just an answer machine.

Closing Thoughts

This year still has some room, but the window for bug bounty as a hobby is closing fast. To really thrive going forward, it's becoming a lifestyle: eat, breathe, improve your tooling. The casual hunter pipeline is the most affected.

At the same time, humans have never had more power to build, learn, and ship. The ceiling for what one skilled hunter can do has gone up, not down. The shape of the work is changing, and that's not the same thing as the work disappearing.

If you have ideas for how platforms and programs should adapt, this is the moment to put them on record: forms.ctbb.show/future_of_bug_bounty.


That's it for the week, keep hacking!