Oasis Security lands $120M to harden enterprise AI agents
Oasis Security, a startup focused on securing autonomous AI agents inside large organisations, has raised a $120 million Series B round backed by leading venture firms Accel and Sequoia. The fresh capital underscores surging investor interest in tools that make generative AI safer and more controllable for enterprises.
The company is building a security and governance layer that sits between corporate data, business applications and the rapidly growing ecosystem of AI assistants, automation agents and LLM-powered workflows. As organisations allow software agents to read emails, move money, update CRM records or modify code, the risk of data leakage and unintended actions has become a board-level concern.
A security fabric for autonomous AI workflows
Oasis Security positions its platform as a policy and control plane for enterprise AI. It aims to give security and compliance teams granular visibility into what each AI agent can access, which actions it can perform, and how those actions are logged and audited.
The startup’s technology focuses on enforcing least-privilege access, monitoring high-risk operations in real time and providing guardrails that prevent agents from overstepping their intended remit. The approach mirrors traditional identity and access management (IAM) and privileged access management (PAM), rebuilt for the unpredictable behaviour of large language models and autonomous decision-making systems.
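The least-privilege model described above can be sketched in a few lines of code. Note that the agent names, action strings and policy tables below are hypothetical illustrations of the general pattern, not Oasis Security's actual API or product behaviour:

```python
# Illustrative sketch of a least-privilege policy check for AI agent
# actions. All names and policies here are invented for illustration;
# they do not reflect Oasis Security's implementation.

# Each agent is granted an explicit allow-list of actions (deny by default).
ALLOWED_ACTIONS = {
    "crm-assistant": {"crm:read", "crm:update"},
    "finance-agent": {"ledger:read"},
}

# Actions flagged for real-time monitoring and alerting.
HIGH_RISK_ACTIONS = {"payments:transfer", "crm:delete"}

def authorize(agent: str, action: str) -> bool:
    """Allow an action only if the agent's policy grants it explicitly."""
    allowed = action in ALLOWED_ACTIONS.get(agent, set())
    if action in HIGH_RISK_ACTIONS and not allowed:
        # In a real control plane this would feed an audit log / SIEM.
        print(f"ALERT: {agent} attempted high-risk action {action}")
    return allowed
```

For example, `authorize("crm-assistant", "crm:update")` would be permitted, while `authorize("finance-agent", "payments:transfer")` would be denied and flagged, since the action is high-risk and outside that agent's grant.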
Accel and Sequoia double down on AI security
The participation of both Accel and Sequoia highlights how central AI security has become to the broader generative AI investment thesis. As enterprises accelerate deployment of copilots and task-specific agents across finance, HR, software development and customer support, they are under pressure to demonstrate strong risk management, data governance and regulatory compliance.
With this $120 million Series B, Oasis Security is expected to scale product development, expand integrations with major AI platforms and cloud providers, and grow its go-to-market teams in North America and Europe. The company is betting that every serious AI deployment will ultimately require a dedicated security layer — and aims to become the default standard for controlling what enterprise AI agents are allowed to see and do.