

You’ve likely heard of hacking and data breaches, but this one is on a whole new level. A popular AI coding platform, Orchids, allowed a hacker to seize control of a BBC journalist’s laptop in mere seconds, without a single click, download, or warning. The journalist did absolutely nothing wrong.
How the AI Breach Unfolded
The breach exploited a flaw in Orchids, a rapidly growing “vibe-coding” tool that lets users build apps simply by typing instructions into a chatbot. The platform’s automation writes and executes code on the user’s behalf, but that very convenience created a silent backdoor, exposing a new dimension of risk in AI-powered development tools.
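To make the pattern concrete, here is a minimal sketch in Python, entirely hypothetical rather than Orchids’ actual code, of how such a platform behaves: untrusted model output is executed immediately, with the user’s privileges and no review step in between. The generate_code stand-in and vibe_code name are illustrative assumptions.

    import subprocess

    def generate_code(prompt: str) -> str:
        # Stand-in for a call to a code-generation model; a real
        # platform would return whatever code the model produces.
        return f'print("scaffold for: {prompt}")'

    def vibe_code(prompt: str) -> None:
        code = generate_code(prompt)  # untrusted text from the model
        # The risky pattern: generated code runs immediately, with the
        # user's full privileges and no human review in between, so
        # anything smuggled into the model's output runs too.
        subprocess.run(["python", "-c", code], check=False)

    vibe_code("a to-do list app")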
Cybersecurity researcher Etizaz Mohsin put the vulnerability to the test on a spare machine used by the reporter. He inserted a subtle tweak into the AI-generated code, and the platform executed it immediately. In a matter of moments, a file popped up on the desktop and the wallpaper morphed into a robot skull, flashing the ominous message: “you are hacked.”
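To illustrate, and only as a hedged sketch rather than the researcher’s actual payload, a few smuggled lines are all it takes. The benign stand-in below merely drops a text file on the desktop, mirroring the proof of concept described above; real malware inserted the same way could do far worse.

    from pathlib import Path

    # Harmless stand-in for a smuggled payload: it only writes a text
    # file to the desktop. The filename and path are assumptions made
    # for illustration.
    marker = Path.home() / "Desktop" / "you_are_hacked.txt"
    marker.parent.mkdir(parents=True, exist_ok=True)
    marker.write_text("you are hacked")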
Many cyber attacks rely on deception: victims click a harmful link, open a file, or hand over a password. This one was different. The malicious code operated within the trusted AI project itself, giving Mohsin remote access to the system and the ability to view or modify files. A criminal exploiting the same vulnerability could install spyware, steal financial information, or activate cameras and microphones.
“The whole proposition of having the AI handle things for you comes with big risks,” Mohsin said. He had reported the flaw several weeks earlier. Orchids, founded in 2025 and claiming roughly a million users, did not respond publicly before this article’s release. The company later said that earlier alerts may have been missed because its small team was overstretched.
AI coding tools offer speed and ease, letting users build software without deep technical skills. But experts caution that automation without thorough review can spread hidden vulnerabilities. As agentic AI takes on complex tasks on users’ devices, managing files, sending messages, and executing commands, a single flaw can compromise an entire system, underscoring the growing risks of granting AI deep system access.
As AI coding tools become increasingly powerful and widespread, experts are urging users to stay vigilant. A few simple precautions can help safeguard devices and data without giving up the benefits of AI-driven development.
Use separate machines: Run experimental or untested AI tools on isolated devices whenever possible.
Limit account access: Prefer limited or disposable accounts over full personal or work accounts.
Review permissions carefully: Check what system access an AI tool is requesting before granting full control.
Stay cautious: Even as AI coding surges, robust security practices, like the simple review gate sketched below, are essential to prevent hidden risks from compromising devices.
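For developers scripting around such tools, one simple mitigation is a human review gate, sketched below under the assumption that you control the execution step yourself: display the generated code and require explicit confirmation before anything runs.

    import subprocess

    def run_with_review(code: str) -> None:
        # Show the model's output and ask for confirmation instead of
        # executing it automatically, as the vulnerable platform did.
        print("--- generated code ---")
        print(code)
        print("----------------------")
        if input("Execute this code? [y/N] ").strip().lower() == "y":
            subprocess.run(["python", "-c", code], check=False)
        else:
            print("Skipped.")

    run_with_review('print("hello from reviewed code")')

A review gate is no substitute for sandboxing or least-privilege accounts, but it removes the zero-click execution path that made the Orchids demonstration possible.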