Security researchers say a new class of agentic AI browsers is exposing users to theft by entering card details on fraudulent checkout pages and clicking through phishing traps.
In controlled trials, Guardio's cybersecurity research arm, Guardio Labs, built a simulation called Scamlexity to test how AI agents respond inside scam environments.
Instead of flagging danger, the bots followed instructions blindly, paying fake merchants, handing over data, and even navigating deeper into malicious sites.
Guardio says: “Not only did the agents willingly complete fraudulent transactions, they also failed to recognize the obvious signs of phishing. This behavior highlights the severe risks in trusting AI browsers with sensitive tasks without proper guardrails.”
The Scamlexity results suggest AI-powered agents are not only fallible, but capable of accelerating fraud when embedded into real-world scam funnels. Researchers warn developers and early adopters to treat AI browsers with caution, especially around logins, financial accounts, or private credentials.
“This is not a distant theoretical risk,” Guardio adds. “We are already seeing scams evolve to target AI tools directly, anticipating that these agents will act without human skepticism.”
The findings land as tech firms race to integrate autonomous agents into mainstream browsers, marketing them as productivity boosters. Guardio’s work shows the tech is far from ready for financial or security-sensitive use.
If exploited, Guardio concludes, AI browsers could become “dream targets” for attackers, serving as conduits for phishing campaigns, fraudulent payments, and large-scale data theft.