Artificial intelligence is speeding up every aspect of business and security, but one of its side effects is the uncontrolled spread of “shadow AI,” says the chief executive of a cybersecurity firm.
In a new CNBC interview, SentinelOne chief executive Tomer Weingarten says attackers are not the only ones moving faster: enterprises themselves are introducing new risks by letting employees use generative AI tools without oversight.
“AI is accelerating everything. So obviously, it’s accelerating also what attackers and adversaries can do.”
He warns that unsanctioned AI model usage is becoming a nightmare for companies and security teams.
“It’s a lot of data leakage. It’s a lot of non-accurate production, I would say. If you kind of think about everything that’s happening right now with AI, the need for security, the need to know that what the model is doing for you is predictable and has the proper guardrails, is just an imperative.
“And then the data that people share with that model can also often lead to leakage. If there’s no control over what employees are putting into these models, into these applications, it becomes a nightmare for enterprises trying to wrangle what is being put into these models, both for sanctioned users, but also for complete shadow AI unsanctioned usage, and everybody’s just clamoring for visibility and control over what’s being done right now in any enterprise environment.”
Shadow AI is the practice of employees feeding sensitive data, such as code, contracts, or customer records, into generative AI tools without oversight. It tends to create data leakage, compliance risks, and blind spots for security teams.
SentinelOne is a US-based cybersecurity firm founded in 2013, best known for its AI-driven Singularity platform that protects endpoints and cloud workloads. Its 8,500+ customers include four of the Fortune 10 and brands like Aston Martin, Samsung SDS, Q2 Holdings, the Golden State Warriors, and TGI Fridays.