Team '25 Europe - Atlassian wants smart AI regulation. Its 80% adoption rate shows why that matters
Summary: While others fight AI regulation, Atlassian's General Counsel Stan Shepard argues that early compliance builds trust - and we get into the details behind what 'guardrails' really mean in practice.
While much of the technology industry fights AI regulation, Atlassian's General Counsel Stan Shepard is making a different calculation. He believes that companies that comply early will win the trust war that determines enterprise AI adoption. In conversation at Team '25 Europe in Barcelona, he outlines a strategy that treats regulation as a market advantage rather than an innovation blocker.
This puts Atlassian at odds with peers such as OpenAI and Meta, which argue that early or heavy-handed regulation could slow AI innovation. Shepard explains:
The success of AI for us and our customers is really around being able to trust it. And fortunately or unfortunately, I think that trust needs a little bit of a carrot, in the sense of there has to be a law that we are aiming to meet, and without that law, sometimes, I don't think industry necessarily will do the right thing.
Smart regulation as partnership – not constraint
Shepard distinguishes between "smart regulation" and "regulating technology for the sake of regulation," with the difference coming down to collaboration. He elaborates:
It's going to be a partnership between industry and lawmakers, and we have a very large role, Atlassian and some of our peers, I think, in helping regulators and lawmakers understand the technology that they're trying to regulate.
This approach helps policymakers focus on where the real risks are — for example, high-impact uses such as hiring or performance reviews, where real-life decisions are at stake, versus lower-risk applications.
Atlassian demonstrates this through early adoption of the European Union (EU) AI Pact, an optional early compliance regime. Shepard notes:
We thought it was just a really great opportunity for us to be leading the pack. It was very practical. It was very, you know, here's the things you have to do to meet it.
More importantly, it aligns with Atlassian's principles of transparency and customer focus.
The legal team that broke the adoption curve
Inside his own organization, Shepard can point to concrete results. He cites a recent industry survey suggesting that legal teams rank among the slowest adopters of AI. Atlassian's legal team has achieved 80-90% daily active users for AI tools, challenging the narrative about AI readiness in traditionally conservative functions.
"I'm so proud the Atlassian legal team has flipped the script on that," Shepard says. Three things drive this: quality products like Atlassian's Rovo; cultural alignment with being "an innovative legal team" that works like the engineering and product teams they support; and the nature of legal work itself. He explains:
If you think about the legal profession, similar to journalism, it's all about words. You know, constantly, like words have meaning, and every word happens on a page. And so for us, generative AI is like, perfect for us.
Applications range from contract drafting to document summarization to translation. "Gone is the day I think of staring at a blank page and being like, I need a contract," Shepard says. The main challenge is training and change management. His philosophy is to "go slow to go fast" — invest time in learning now, and teams move faster within a few months.
Defining guardrails – beyond the buzzword
I’m keen to dig into a term that too often floats by without substance: 'guardrails'. Everyone claims to have them, but few explain what they are. Shepard is one of the few who does.
Hard guardrails - Non-negotiable legal boundaries. "Those are ones where I come down firmly, which is, like, we will not cross that line," Shepard states. This includes both new AI-specific laws and existing regulations around privacy, security, and data protection that now have "new application because of AI new technology, but the laws have been around for many years."
Industry-specific guardrails - These vary by sector and customer type. For regulated industries – government, banking, healthcare – there are additional protections around personal information that don't apply universally but must be respected contextually.
Ethical guardrails - Voluntary standards that go beyond legal requirements. Shepard cites deepfakes as an example where Atlassian might impose restrictions not because they're legally mandated, but because "it's the ethical right thing to do." This represents the difference between "AI that's actually utopian and creates, like, a world that we want to live in, and not dystopian."
By breaking the idea down into legal, industry, and ethical layers, this framework moves the conversation from abstract principles to operational decisions that engineering teams can work with, compliance functions can audit against, and customers can evaluate.
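To make the layering concrete, here is a minimal sketch of how an engineering team might encode the three guardrail tiers as a release checklist. All names, categories, and the blocking rule are hypothetical illustrations, not Atlassian's actual implementation.

```python
# Hypothetical sketch: the three guardrail layers as a release checklist.
# Layer names and the blocking rule are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class GuardrailCheck:
    name: str
    layer: str       # "hard" (legal), "industry", or "ethical"
    blocking: bool   # hard/industry guardrails block release; ethical ones may not

@dataclass
class FeatureReview:
    feature: str
    checks: list = field(default_factory=list)

    def failed_blocking(self, results: dict) -> list:
        """Return names of blocking checks that did not pass."""
        return [c.name for c in self.checks
                if c.blocking and not results.get(c.name, False)]

review = FeatureReview("ai-summarizer", checks=[
    GuardrailCheck("gdpr_data_handling", layer="hard", blocking=True),
    GuardrailCheck("gov_sector_pii_rules", layer="industry", blocking=True),
    GuardrailCheck("no_deepfake_output", layer="ethical", blocking=False),
])

print(review.failed_blocking({"gdpr_data_handling": True,
                              "gov_sector_pii_rules": False,
                              "no_deepfake_output": True}))
# prints ['gov_sector_pii_rules']
```

The point of the structure is auditability: each check carries its layer, so a compliance function can filter by legal obligation while product teams see one flat pass/fail list.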
Atlassian's responsible tech review process – integrated directly into development workflows through a standardized template – shows how ethical frameworks survive contact with shipping deadlines. Shepard acknowledges that version one "was not the perfect version," and that engineering feedback centered on efficiency concerns like redundant questions and excessive depth for lower-risk use cases.
The response was to iterate. He elaborates:
We have a great relationship with engineering. They definitely understand the why, why this is important, why responsible tech is critical to shipping products that customers will trust. So it really just comes down to the how.
The revised approach uses threshold questions that calibrate review depth to risk level, streamlining the process for lower-stakes features while maintaining rigor for high-consequence applications.
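A threshold-question approach like the one described can be sketched in a few lines. The question wording, weights, and tier cutoffs below are assumptions for illustration; the source describes only the general mechanism of calibrating review depth to risk.

```python
# Hypothetical sketch of "threshold questions" that route a feature to a
# review tier. Questions, weights, and cutoffs are illustrative assumptions.
THRESHOLD_QUESTIONS = [
    ("touches_personal_data", 2),
    ("affects_hiring_or_performance", 3),   # high-impact uses get more weight
    ("generates_user_facing_content", 1),
]

def review_depth(answers: dict) -> str:
    """Map yes/no threshold answers to a review tier."""
    score = sum(w for q, w in THRESHOLD_QUESTIONS if answers.get(q, False))
    if score == 0:
        return "lightweight"   # streamlined path for low-stakes features
    if score <= 2:
        return "standard"
    return "full"              # rigorous review for high-consequence uses

print(review_depth({"affects_hiring_or_performance": True}))  # prints full
```

The design choice this illustrates is the one Shepard describes: low-risk features skip redundant questions entirely, while anything touching high-impact decisions such as hiring still triggers the full review.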
Beyond process and policy, Atlassian is building technical capabilities that address enterprise concerns directly. The launch of Atlassian-hosted Large Language Models (LLMs) responds to customers who "don't want their data to leave the perimeter of Atlassian control," with particular emphasis on data residency requirements for European customers.
Shepard sees the European Union AI Act as the new "high-water mark" for global regulation, much as the General Data Protection Regulation (GDPR) sets the privacy standard. The strategy is simple: aim high, then adjust around the edges.
My take
Atlassian is treating regulation not as a constraint but as a product feature. Shepard's legal team is effectively prototyping what "trust-led AI" looks like inside a fast-moving software company – turning compliance into a design discipline. What stands out is how Atlassian translates broad ideas — trust, responsibility, ethics — into frameworks that engineers can actually build against. Shepard’s guardrails model shows that clarity isn’t just moral hygiene but also operational advantage.
The results are unusually solid - near-universal AI adoption in a department that's usually allergic to risk; a three-tier guardrails model that translates ethics into engineering language; and review processes that evolve through developer feedback rather than stall because of it.
This is a different kind of competitive logic for enterprise AI. Shepard's belief is that credibility will compound faster than novelty – that the companies building for the law's high-water mark will outpace those chasing the next shiny feature. Regulation, in this view, isn't the drag coefficient of innovation – it's the stabilizer that lets it scale.
Across Atlassian's Team '25 stories, there's a consistent theme. Whether it's developer experience, product design, or legal governance, the company treats trust as an engineering problem — something you build into the system, not something you retrofit with slogans.
You can read all of our coverage from Team '25 Europe in our dedicated event hub.