In an interview with CRN, Weingarten says that solution and service providers have an "incredibly" important role to play in enabling the secure adoption of AI and agents going forward.
Surging demand for security expertise and managed services to enable AI adoption means that MSSPs have an “incredibly” important role to play in the coming years, according to SentinelOne Co-founder and CEO Tomer Weingarten.
In an interview with CRN this week at RSAC 2026, Weingarten said that even as AI and agents deliver unprecedented levels of automation into the workforce, “you still [need] enough human power behind it to keep pace” from a governance and security perspective.
[Related: 20 Coolest AI And Security Products At RSAC 2026]
“That’s where you look at the managed service provider ecosystem at large,” Weingarten said. “They are the hyperscalers for generative AI governance and security.”
Still, the security industry’s massive focus on delivering new AI-powered tools has somewhat obscured the central importance of the channel in making agentic AI a reality for customers, he said.
“I think that’s probably one of the most overlooked elements right now,” Weingarten said.
Even with increased automation of security powered by AI, “you still need human supervision,” he said. “And I think that scaling is going to come from the partner ecosystem.”
Weingarten also discussed the resurgence of EDR (endpoint detection and response) as a result of AI adoption, along with the growing risk that certain cybersecurity skills could be lost if too much responsibility is handed over to agents.
What follows is more of CRN’s interview with Weingarten.
Right now, what is the most pressing topic for you when it comes to agents?
The governance layer, I think, is the most important piece. There’s a lot of different applications on the floor and a lot of different technologies. I think a lot of folks are looking at this as a technological discipline. I think it’s really more of a shift of how you bring together visibility and observability—but all the way to the human operator. And I think that’s the fabric we’re essentially trying to solve through the lens of EDR, but then [also through] the capabilities that Prompt Security brings to the picture, and the ability to monitor generative AI usage in the deepest way possible, and apply that same level of behavioral algorithms. [We’re] then creating that visibility for human operators—or even managing those same capabilities for customers with something like the partnership we announced today with LevelBlue. That is all about getting managed services and managed security closer to the end customer, and really making sure that as you do things with more speed and velocity, you still have enough human power behind it to keep pace. That’s where you look at the managed service provider ecosystem at large. They are the hyperscalers for generative AI governance and security. I really think about it this way. So that’s the most exciting stuff. It’s thinking about architecture, more than it is thinking about [a certain] technology or control. [It’s not just,] “Deploy this, then you’re super secure.” That is just not going to be scalable with the prevalence of the problem that we’re seeing.
What have been some of the surprises you’ve seen so far when it comes to the evolution of AI agents?
We’re obviously seeing new agents take form in an interesting way. When OpenClaw was launched—[originally] as Clawdbot—we released our ClawSec capability as an open-source security capability for agents. That’s exciting. It’s exciting to see the uptake and see that people are using it. But obviously the attack surface is significant. And the fact that these agents kind of do what they want to do, regardless of the boundaries that you put, regardless of the policies—and they just find ways to get places—I think that, in an interesting way, triangulates back to the endpoint. It’s almost like a renaissance of the EDR days, where you realize that you have to bulletproof all these assets. Because if you’re leaning on, “I’m just going to put in API security or MCP security”—the agent can always go and see, “OK, I can’t do it through the API. How about I just go kernel-level, get to the database, download it all and get the data out?” That’s why you need really robust security that today comes only in the form of really robust EDR. The most interesting piece about it is that endpoint is still so critically important. Because these agents are running on hosts—and data is hosted on servers. And all of that is the purview of EDR and endpoint protection.
How important are your managed service partners, and channel partners in general, with securing agentic AI?
Incredibly important. I think that’s probably one of the most overlooked elements right now. If we think about these agents basically as more employees, how do you scale your security operation? You’re not going to be able to hire fast enough. Yes, we’re getting more automated. Yes, there’s more autonomy in the SOC—but you still need human supervision. And I think that scaling is going to come from the partner ecosystem. We have one of the most robust partner ecosystems out there, especially in the managed security space. I think that’s just a mega opportunity for them to get all these great new capabilities that we’re putting out there—in terms of agentic investigations, all the data lake capabilities, managed SIEM. [Partners can] basically deliver these services for customers and manage customer environments, augment customer security teams. It’s just going to be a growing need. There’s not going to be any job displacement in the managed security space—that I’m pretty sure of.
During your [RSAC] keynote you mentioned the risk of losing security skills by ceding too much to AI. Is that more of a long-term risk or could it be nearer term?
I think that’s happening. I think that has happened already to an extent. And I think that if we continue to lean on technologies without fully understanding what they do, then we just lose [those skills]. In many ways, we have not been training the right disciplines for the cybersecurity operator. And I do hope that through offloading some of the daily, nitty-gritty work, we can focus on the more critical aspects of doing the cybersecurity job. I think that’s what it would give way to. But if we don’t consciously focus on that, it’s not going to happen. Humans are creatures of habit. “I got a new tool. It’s doing the work for me. I’m going to go get a coffee. I’m not going to go and try to make sure that I know exactly what is happening, and make sure that I can sharpen my intuition as to when things might go wrong.” [But] I think the people that actually do that are just going to become so much more in-demand and so much more productive. So that’s going to be a natural driver for people to continue and learn and get out of the comfort zone and just get toe-to-toe with AI—and not just let AI do the work. I think in most cases, if you truly put your mind to it, you’ll get to a better outcome than these algorithms will get to—at least as they look today. But you have to learn. You have to keep on improving. You have to keep on evolving. I think that’s the hardest part for humans, just to keep that drive to continue and develop.
Do you think this is where the role of the partners could become especially critical?
I think that they’re going to have to be supervisors. They’re going to have to supervise what these agents are doing. Somebody is going to have to do it. And it seems like that’s falling within the boundaries of cybersecurity. We’re regulating the access, regulating the operation, regulating identity. But there is a human layer there that needs to eventually say, “This is OK, this is not OK”—at scale, of course. So I think that’s why behavioral detection is so important. We’ve been talking here for quite a few years, and every year I can say the same sentence—which is, “There’s not enough people in cybersecurity.” And every year it’s going to be true. I don’t think that’s going to change even with agents. AI is not going to change it. Because AI for cybersecurity is not just something to help scale the workforce, as it does for every industry—it also produces more work for the cybersecurity operator. So it’s that one segment where it’s not just automating work, [but] it’s also creating more work.