The Julia Child method - how Atlassian taught 14,000 non-developers to build AI agents

Alyx MacQueen February 19, 2026
Summary:
Atlassian now runs nearly one AI agent per employee. I met with SVP Tal Saraf to discuss what's working, who's building them, and why the gaps matter.

When Tal Saraf, SVP of Customer Engineering at Atlassian, pulls up his internal dashboard, the numbers are striking. The company runs nearly as many AI agents as it has employees. With a workforce of 14,000 distributed across the globe, Atlassian has built close to 14,000 agents, with a third of those actively used in the past month alone. He observes:

We're approaching an agent for every employee, which is crazy to me on some level.

What makes these figures more significant is velocity. In the 28 days preceding our conversation, Atlassian employees created a thousand new agents. The average employee interacted with at least five different agents in the past month. These are not dormant experiments; they represent sustained, practical use woven into daily workflows across a company that shares no physical office space.

The Julia Child method (no butter required)

A central claim made for enterprise AI tools is that they require minimal technical expertise to use. Atlassian's experience suggests this holds up in practice. Saraf draws an unexpected analogy, using the famous American TV chef to explain how employees learn to build agents:

First she shows me how to bake the cake or roast the chicken and now it's like I've seen somebody do it, but now I go bake my own cake or roast my own chicken.

Atlassian runs sessions where employees spend an hour learning to build agents, then leave having created their own. The finance employee who built a travel and expense agent is not a developer. Sales staff, marketing teams, and HR professionals are all building without writing code. Saraf explains:

It's democratized the ability to create to somebody who can describe something in words. 'This is what I would like you to do. These are the questions I'd like you to ask. These are the clarifying questions you might want to ask.'

He compares it to giving someone directions before GPS existed: you did not need to be a civil engineer to explain how to reach a destination, just the ability to describe the route in plain terms.

Three agents, three usage patterns

Saraf walks through three agents that exemplify different use patterns. The first, a Context Gatherer built by the Finance team, handles travel and expense queries. Rather than employees hunting through policy documentation or submitting queries to finance staff, they ask the agent directly about per diem rates, deductibility rules, or mileage allowances across jurisdictions. The agent pulls answers from Confluence pages maintained by Finance. When the US Internal Revenue Service updates allowable per diems for 2026, Finance updates the Confluence page, and the agent immediately reflects the new figures. Saraf notes: 

Nobody had to reprogram that Rovo agent. And I didn't have to ask that poor person in finance for the nth time, wait, what did the per diems change to this year?

The second, an HR onboarding agent called Nora, demonstrates a different pattern. New employees rely heavily on it during their first couple of weeks, then usage naturally drops off. He explains:

It's incredibly helpful for people their first maybe couple of weeks as they onboard. After that, their usage drops off because you've gotten the answers to the questions you want.

The decline is actually a signal of success — the agent has served its purpose for that cohort while remaining available for the next wave.

The third, Customer 360, serves sales and customer-facing teams with rapid access to customer intelligence. He explains:

Anytime I want to talk to a customer, I can just quickly ask a question about a customer and get insights about how long have they been using Confluence or Jira or Trello or Bitbucket or Loom.

While dashboards exist for visual exploration, the agent handles quick queries that might otherwise interrupt workflow.

The creative sparring partner

Beyond task automation, Saraf describes a more sophisticated use case: preparation through role-play. Employees use agents to simulate difficult conversations before they happen.

You could think about a sales situation where you want to talk to a customer and say, hey, this is the thing I'd like to say to this customer. Act as the customer. What are the hardest questions you could ask me?

He pauses: 

I should have actually done it for this. 'What are the questions that I might get asked in this interview?'

This shifts AI from assistant to adversary, stress-testing ideas before they meet reality.

Where the gaps remain

Saraf expresses enthusiasm for Atlassian's teamwork graph, which connects work items, project status, and people across the product portfolio. He frames the ambition as "no dead ends": a customer looked up in one system should surface as the same entity everywhere, with full context attached, so that finding information in one tool leads naturally to related context elsewhere.

However, there are still some gaps. "Some of the third-party tools don't yet interact via MCP [Model Context Protocol] or Rovo in a way I'd love for them to," Saraf acknowledges. MCP adoption among partners is uneven, and Atlassian is working with its partner ecosystem to extend connectivity through MCP Gateways.

This is the interoperability problem that haunts every enterprise integration strategy. MCP promises a standard way for AI systems to connect with external tools and data sources, but promise and adoption are different things. Until the partner ecosystem catches up, even a company running wall-to-wall Atlassian hits friction the moment a workflow touches an external CRM, HCM platform, or any other third-party system. The vision of a seamless network across first-party and third-party tools remains aspirational.

My take

The numbers here are genuinely impressive. Nearly one agent per employee, a third in active use, a thousand new ones created in a month. This is not a pilot program or an innovation lab; it is operational scale across a fully distributed enterprise.

One of the things I appreciated most about this conversation is the honesty about limitations. Saraf does not pretend that Atlassian has solved enterprise AI integration. The MCP gaps he describes are the same interoperability challenges that plague every organization trying to connect disparate systems. Atlassian benefits from running its own stack, but the moment third-party tools enter the picture, the friction returns.

The Julia Child framing is apt. Showing someone how to build an agent is not the same as ensuring what they build is reliable, secure, or maintainable at scale. Internal technology rollout is notoriously difficult, even for companies that claim strong cultures. Plenty of enterprise vendors struggle to run their own systems - some don't use their own products at all. The fact that Atlassian has deployed at this scale internally, with genuine adoption rather than mandated usage, is meaningful. That operational knowledge - what works, what breaks, where friction lives - should translate into better guidance for customers facing similar challenges.

The caveats remain. Atlassian controls the platform; enterprises running heterogeneous environments face a steeper climb. But internal success at scale is the credibility bar that many vendors fail to clear. Atlassian has cleared it - it will be interesting to see how that translates to customers who don't have the same home advantage.

Image credit - Robot baking in a kitchen © Canva.com

Disclosure - At the time of writing, Atlassian is a premier partner of diginomica.
