Every enterprise CTO in 2026 is being asked the same impossible question: how do you deploy AI agents at scale when you cannot verify what they will do next? The companies that have cracked this (and there are very few of them) have built internal governance layers quietly, expensively, and mostly invisibly. Sri Viswanath, who scaled Atlassian's engineering organization through its most consequential growth phase, looked at that pattern and saw not a solved problem but an unfilled market. On March 30, 2026, he announced $65 million to fill it.
What Actually Happened
Sycamore, founded by Sri Viswanath, raised a $65 million seed round led by Coatue and Lightspeed Venture Partners, with additional participation from Abstract Ventures, Dell Technologies Capital, 8VC, Fellows Fund, and E14 Fund. The angel roster reads like a directory of AI infrastructure credibility: Bob McGrew, former chief research officer at OpenAI; Lip-Bu Tan, CEO of Intel; and Ali Ghodsi, CEO of Databricks. The round, announced March 30, 2026, is by dollar amount one of the largest seed raises in enterprise software history. Viswanath had spent time as a partner at Coatue before founding the company, which means he understood exactly what institutional investors were pricing, and structured the raise accordingly.
The product is what Sycamore calls a Trusted Agent Operating System: a platform that enables enterprises to discover, build, deploy, and observe fleets of AI agents within a governed, secure environment. The system is built around four core capabilities: trust by design (agents earn autonomy through demonstrated reliability rather than being granted it upfront), adaptive system generation (production-ready systems built from natural language specifications), continuous improvement (agents that learn from operational outcomes), and collective intelligence (organizational knowledge surfaced across teams and accumulated over time). As CTO, Viswanath scaled Atlassian's engineering organization while revenue grew from $500 million to over $2.5 billion, a period that included one of the most complex cloud migrations in enterprise SaaS history.
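The trust-by-design idea is concrete enough to sketch. A minimal illustration, in Python, of how autonomy might be derived from an agent's operational track record rather than granted upfront; the tier names, thresholds, and function are assumptions for illustration, not Sycamore's actual policy or API:

```python
from enum import IntEnum

class AutonomyTier(IntEnum):
    """Hypothetical autonomy levels an agent can earn over time."""
    SUGGEST_ONLY = 0       # agent proposes actions; a human executes them
    ACT_WITH_APPROVAL = 1  # agent acts, but only after human sign-off
    ACT_AND_REPORT = 2     # agent acts autonomously and logs for review

def earned_tier(successes: int, failures: int, min_runs: int = 50) -> AutonomyTier:
    """Map an agent's observed reliability to a permitted autonomy tier.

    Thresholds are illustrative: with too little evidence the agent stays
    in suggest-only mode regardless of its success rate.
    """
    runs = successes + failures
    if runs < min_runs:  # not enough operational evidence yet
        return AutonomyTier.SUGGEST_ONLY
    reliability = successes / runs
    if reliability >= 0.99:
        return AutonomyTier.ACT_AND_REPORT
    if reliability >= 0.95:
        return AutonomyTier.ACT_WITH_APPROVAL
    return AutonomyTier.SUGGEST_ONLY

# A perfect but short track record still earns no autonomy;
# 198/200 successes (99%) earns autonomous action with logging.
early = earned_tier(successes=10, failures=0)
proven = earned_tier(successes=198, failures=2)
```

The design choice worth noticing is that autonomy is recomputed from evidence rather than stored as a static permission, which is what makes it auditable: the current tier is always explainable by the record behind it.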
The company is already working with Fortune 500 companies in production deployments, not pilots. That distinction matters: most enterprise AI startups at the seed stage have design partners who are testing. Sycamore has customers who are running.
Why This Matters More Than People Think
The enterprise AI agent market in 2026 has a structural problem that most of the industry is actively avoiding. The infrastructure for building agents (frameworks like LangGraph, AutoGen, and Microsoft's Semantic Kernel) is mature, widely adopted, and improving rapidly. What does not exist at production scale is the layer that sits above it: the control plane that answers the questions legal, compliance, and security teams actually care about. Who authorized this agent to access that system? What did it do? Can we reproduce its reasoning? Can we roll back its changes? Can we audit it for a regulator?
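Each of those questions maps to a field a control plane would have to record for every agent action. A minimal Python sketch of what such an audit record might look like; the schema, field names, and fingerprinting scheme are assumptions for illustration, not Sycamore's actual data model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class AgentActionRecord:
    """Hypothetical audit record: one entry per agent action."""
    agent_id: str        # which agent acted
    authorized_by: str   # who (or which policy) granted the access
    target_system: str   # what system it touched
    action: str          # what it did
    reasoning_ref: str   # pointer to the trace needed to reproduce the decision
    reversible: bool     # whether a rollback procedure exists for this action
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Content hash, so records can be chained into a tamper-evident log."""
        payload = json.dumps(self.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

record = AgentActionRecord(
    agent_id="invoice-reconciler-7",
    authorized_by="policy:finance-agents-v2",
    target_system="erp.ledger",
    action="posted_adjustment",
    reasoning_ref="trace-store/abc123",
    reversible=True,
)
digest = record.fingerprint()
```

The point of the sketch is that "can we audit it for a regulator" is answerable only if every record carries authorization, reversibility, and a reproducible reasoning reference at write time; none of it can be reconstructed after the fact.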
This is not a niche concern. A survey of Fortune 500 CIOs in early 2026 found that governance and security were cited as the top two barriers to expanding AI agent deployments, ahead of cost, performance, and accuracy. The companies that have scaled past pilots into genuine enterprise deployment are almost universally the ones that built this governance layer internally. Sycamore is betting that most enterprises will not build it themselves, for the same reason they did not build their own identity management systems: it is not core to their business, it is expensive to get right, and the failure modes are catastrophic.
The Competitive Landscape
Sycamore is not the only company attacking enterprise AI agent governance, but it has an unusually clear field given the market size. ServiceNow has made agent orchestration a core part of its Now platform roadmap, but its approach is tightly bound to the ServiceNow ecosystem. Microsoft Agent 365, which launched May 1, 2026 at $15 per user per month, provides governance controls specifically for agents built on Microsoft's own stack: Foundry, Copilot Studio, and select third-party integrations. Neither is a neutral, cross-stack control plane.
Salesforce Agentforce has signed over 18,500 enterprise customers, but Agentforce is fundamentally a vertical solution for customer-facing workflows, not a horizontal operating system for AI agents across all business functions. The closest historical analogies are the identity governance platforms of the 2010s: companies like SailPoint and CyberArk that emerged when enterprises realized they needed a dedicated layer for managing non-human identities at scale. The $25 billion acquisition of CyberArk by Palo Alto Networks in 2026 was, in part, a bet that this pattern would repeat for AI agents. The question is whether Sycamore establishes that position before hyperscalers bundle the capability into their existing enterprise agreements.
Hidden Insight: Why the Founder Matters More Than the Product
Most coverage of this raise has focused on the round size and investor names. The more important data point is what Sri Viswanath specifically did at Atlassian. The Atlassian cloud migration (moving hundreds of thousands of customers off self-managed Jira, Confluence, and Bitbucket deployments to multi-tenant cloud) was one of the most technically and organizationally complex transformations in enterprise SaaS history. Enterprises handed over their most sensitive project and code data to a cloud system they did not fully control. The solution was not better technology alone; it was better governance primitives: audit trails, compliance certifications, and administrative controls that gave enterprise buyers a credible answer to every hard question. Viswanath led the engineering team through this during the years Atlassian's revenue grew from $500M to over $2.5 billion.
That experience is precisely the background you would design if you were recruiting someone to build an enterprise agent governance platform. The parallels are not metaphorical; they are structural. In both cases, the core challenge is convincing enterprises to hand autonomous control to a system they did not build, by giving them verifiable evidence that the system behaves predictably. Sycamore's trust-by-design architecture, in which agents earn autonomy through demonstrated reliability rather than being granted it upfront, is Viswanath applying the Atlassian cloud migration lesson to AI agents. You do not ask enterprises to trust the AI. You give them the controls to verify it over time.
The angel investor composition reinforces this reading in ways that are not accidental. Bob McGrew built OpenAI's research infrastructure before leaving in 2024 and understands better than almost anyone how AI systems behave at scale in ways that diverge from the demos. Lip-Bu Tan at Intel has a direct commercial interest in the inference infrastructure that Sycamore's agent fleets will drive. Ali Ghodsi at Databricks has built the dominant enterprise data governance platform and understands the compliance requirements of Fortune 500 data organizations from the inside. These are not passive checks written by fans of the founder. They are a signal that people who have seen the specific failure modes of enterprise AI at scale are betting on this particular solution.
The deepest counterintuitive point is this: the race to deploy AI agents inside enterprises is currently bottlenecked not by model capability or cost, but by organizational trust. Most enterprise AI projects that have stalled in 2026 did not fail because the AI was not good enough. They failed because the AI was autonomous enough to make changes that the organization could not adequately audit, explain, or reverse. Sycamore is solving a compliance problem that presents as a technical problem, and that is historically how category-defining enterprise software gets created.
What to Watch Next
The 90-day indicator to track is whether Microsoft broadens Agent 365's scope beyond Microsoft-native agents to third-party frameworks. The May 1 launch was explicitly scoped to Microsoft-built agents. If Microsoft expands to neutral orchestration in Q2 or Q3 2026, it validates Sycamore's thesis that horizontal governance is defensible territory while dramatically intensifying competition. If Microsoft stays narrow, it confirms Sycamore's market positioning and opens the door to faster Fortune 500 adoption among multicloud enterprises that cannot commit to a single-vendor agent stack.
Over six to twelve months, watch Sycamore's Series A terms as the leading signal of enterprise traction. A seed-to-Series-A conversion at a valuation multiple above 10x (which would put Sycamore above $500 million) would confirm that Fortune 500 production deployments are converting to ARR at a rate that justifies the governance-layer thesis. Also watch for EU AI Act developments: agent-specific provisions expected to be clarified in late 2026 could mandate the audit and governance controls Sycamore provides, transforming compliance from a purchase justification into a purchase requirement. If that regulatory clarity arrives, Sycamore's total addressable market changes overnight: every enterprise deploying agents in regulated industries becomes an immediate prospect.
The enterprise AI agent market does not have a capability problem; it has a trust problem, and Sycamore just raised $65M on the bet that trust is worth more than speed.
Key Takeaways
- $65M seed led by Coatue and Lightspeed, announced March 30, 2026; one of the largest seed rounds in enterprise software history
- Sri Viswanath scaled Atlassian from $500M to $2.5B+ in revenue as CTO; his cloud migration experience directly informs Sycamore's trust architecture
- Angels include Bob McGrew (former OpenAI chief research officer), Lip-Bu Tan (Intel CEO), and Ali Ghodsi (Databricks CEO)
- Fortune 500 companies are already in production with Sycamore's platform before the seed capital has been fully deployed
- EU AI Act agent-specific provisions expected in late 2026 could mandate the exact governance controls Sycamore provides, converting compliance into a purchase requirement
Questions Worth Asking
- If AI agent governance becomes a compliance requirement rather than a best practice, does that create a durable moat for the first mover, or does it open the market to every incumbent security vendor?
- Sycamore's trust-by-design model grants agents more autonomy as they demonstrate reliability. What are the failure modes when an agent's demonstrated track record does not predict its next action?
- Your company is probably already running AI agents in production workflows. Do you have an audit trail that would satisfy a board-level inquiry or regulatory review?