Join us

Cato CTRL™ Threat Research: HashJack - Novel Indirect Prompt Injection Against AI Browser Assistants

A new attack method - HashJack - shows how AI browsers can be tricked with nothing more than a URL fragment.

It works like this: drop malicious instructions after the # in a link, and AI copilots like Comet, Copilot for Edge, and Gemini for Chrome might swallow them whole. No need to hack the site. The LLM reads the URL’s tail, pulls in the prompt, and boom - indirect prompt injection.

Phishing, data leaks, fake content, malware - it’s all on the table.

System shift: HashJack calls out a core design flaw in how these AI tools treat client-side URL fragments like trusted input. It’s a quiet exploit, but a loud wake-up call for anyone shipping LLMs into browsers.


Give a Pawfive to this post!


Only registered users can post comments. Please, login or signup.

Start writing about what excites you in tech — connect with developers, grow your voice, and get rewarded.

Join other developers and claim your FAUN.dev() account now!

Avatar

Kala #GenAI

FAUN.dev()

@kala
Generative AI Weekly Newsletter, Kala. Curated GenAI news, tutorials, tools and more!
Developer Influence
31

Influence

1

Total Hits

131

Posts