AI assistants with shell, network, or filesystem "skills" don't just help, they expose. These hooks can run commands before any human checks the model’s output. That means a bigger attack surface. More room for lateral movement. Easier persistence.
In setups where tools like Claude Code run often, it starts looking like a supply chain problem: malicious payloads creeping in through routines we trust and workflows we don’t question.









