LLMs represent a rapidly advancing area of technology, with foundation model providers competing to build the leading solution. Like any technology, each model has its own strengths and weaknesses.

Atlassian currently partners with an array of leading proprietary model providers and open-source technologies, incorporating models from OpenAI, Anthropic, Google, Meta, and others to harness the best from each.

To operate at scale, Atlassian’s AI and ML Platform team has built our own “AI Gateway”: a convenient abstraction that gives our developers access to, and control over, the latest models. Alongside it, the AI Gateway Software Development Kit and Command Line Interface provide a battle-tested integration path and convenient management tools for our AI developers across the company, all while providing security and reliability for our customers.

The Atlassian AI Gateway offers Atlassian developers a curated model garden of 20+ LLM, audio, and image models spanning 5+ providers and 10+ model families. It’s been very successful for Atlassian, with 100+ use cases and experiments built across 8+ Atlassian apps, 40+ Atlassian teams, and our internal platform.

As always, all of the LLMs used in Atlassian’s apps have zero data retention. Our third-party hosted LLM providers won’t store your inputs and outputs for any purpose.


Core Functionalities of Atlassian’s AI Gateway

Unified Access and Agility

  • Integration flexibility: Developers interact with multiple LLM vendors (e.g., OpenAI, Gemini) using a single, consistent API contract. This simplifies prompt management, makes it easy to switch models or vendors with minimal code changes, and enables A/B tests. Centralized routing has also made it easier for us to test how each feature works with our new option for AI Features to exclusively use Atlassian Hosted Models.
  • Shared tooling: This eliminates the need for each team to write and maintain its own integration code, saving time and reducing errors.
  • Deep integration with our AI platform: The AI Evaluation suite and ML Studio, our in-house training platform, use the AI Gateway for synthetic data generation and testing LLMs, and Atlassian’s Prompt Registry is integrated to allow rapid iteration for our engineers.
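To make the “single, consistent API contract” idea concrete, here is a minimal sketch of a unified gateway client with vendor-specific adapters behind it. All names (`GatewayClient`, the adapter classes, the model IDs) are illustrative assumptions, not Atlassian’s actual SDK:

```python
# Hypothetical unified-client sketch: one `complete` contract,
# vendor adapters behind a routing table.

class OpenAIAdapter:
    def complete(self, prompt: str) -> str:
        # A real adapter would call the OpenAI API here.
        return f"[openai] {prompt}"

class GeminiAdapter:
    def complete(self, prompt: str) -> str:
        # A real adapter would call the Gemini API here.
        return f"[gemini] {prompt}"

class GatewayClient:
    """Routes a single `complete` call to the adapter for the requested model."""
    def __init__(self):
        self._routes = {
            "gpt-4o": OpenAIAdapter(),
            "gemini-1.5-pro": GeminiAdapter(),
        }

    def complete(self, model: str, prompt: str) -> str:
        adapter = self._routes.get(model)
        if adapter is None:
            raise ValueError(f"model not in gateway garden: {model}")
        return adapter.complete(prompt)

client = GatewayClient()
# Switching vendors is a one-line change to the model name:
print(client.complete("gpt-4o", "Summarize this page"))
print(client.complete("gemini-1.5-pro", "Summarize this page"))
```

Because callers only depend on the `complete` contract, A/B testing two vendors reduces to varying the model string.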

Reliability and Resiliency

  • Automated fallback: LLM APIs are still a young technology, and incidents and performance degradations are unfortunately frequent. One of the cutting-edge capabilities we’ve implemented in AI Gateway is automatic incident mitigation: when one vendor is down, requests fall back to alternative models or vendors.
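The fallback behavior described above can be sketched as an ordered try-next loop. This is an illustrative simplification under assumed names (`VendorDown`, `call_vendor`), not Atlassian’s implementation:

```python
# Automated fallback sketch: try the primary model; on a vendor
# error, fall through an ordered list of alternatives instead of
# failing the request.

class VendorDown(Exception):
    pass

def call_vendor(model: str, prompt: str, outages: frozenset) -> str:
    """Stand-in for a real vendor API call; raises when the vendor is down."""
    vendor = model.split("/")[0]
    if vendor in outages:
        raise VendorDown(vendor)
    return f"{model}: {prompt}"

def complete_with_fallback(prompt, models, outages=frozenset()):
    last_err = None
    for model in models:
        try:
            return call_vendor(model, prompt, outages)
        except VendorDown as err:
            last_err = err  # a central detector would alert here
    raise RuntimeError("all vendors down") from last_err

# Primary vendor down: the request transparently lands on the fallback.
print(complete_with_fallback(
    "hello",
    ["openai/gpt-5", "anthropic/claude-4"],
    outages=frozenset({"openai"}),
))
```

The request only fails if every model in the chain is unavailable, which is what turns a single-vendor incident into a non-event for users.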

Security, Privacy, and Compliance

  • Centralized enforcement: The Gateway embeds privacy, security, and compliance controls, providing granular access controls, routing to only approved endpoints, and ensuring policies are consistently applied across features.
  • Model status: AI Gateway tracks the status of each model, whether alpha, preview, or prod. When a model is available via AI Gateway and marked as ready for production, Atlassian developers know that both the infrastructure/capacity and compliance checks have been completed, making it ready for use in our apps.
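A minimal sketch of that lifecycle gating, assuming a simple alpha/preview/prod status per model (the statuses come from the text above; the schema and names here are hypothetical):

```python
# Lifecycle-gating sketch: production traffic may only reach models
# whose status is "prod" (capacity + compliance checks completed).

from enum import Enum

class Status(Enum):
    ALPHA = "alpha"
    PREVIEW = "preview"
    PROD = "prod"

# Illustrative registry, not a real model list.
MODEL_STATUS = {
    "gpt-5": Status.PROD,
    "new-experimental": Status.ALPHA,
}

def route(model: str, production_traffic: bool) -> str:
    status = MODEL_STATUS.get(model)
    if status is None:
        raise ValueError(f"unknown model: {model}")
    if production_traffic and status is not Status.PROD:
        raise PermissionError(f"{model} is {status.value}, not approved for prod")
    return f"routing to {model}"
```

Experimental traffic can still reach alpha models, so teams can evaluate new releases without any risk of them leaking into production features.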

Cost, Resource, and Operational Management

  • Centrally managed capacity: The AI Platform centrally manages capacity across our model vendors and internal hosting, ensuring the right amount of resources is provisioned and pooling all of our usage to minimize cost.
  • Quota management: To handle all of our use cases sharing resources, AI Gateway implements robust rate limiting, quota management, and cost tracking to prevent overuse and manage spend across dozens of teams, avoiding the “noisy neighbor effect.”
  • Best practices built in: The AI Gateway SDK implements resilience mechanisms like retries, backoff strategies, and error handling out of the box, ensuring robust and reliable client interactions.
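The retry and backoff behavior listed above can be sketched as follows. This is an illustrative pattern, assuming hypothetical names (`with_retries`, `flaky`), not the AI Gateway SDK’s actual code:

```python
# Retry-with-backoff sketch: exponential delay with jitter between
# attempts, giving up after max_attempts.

import random

def with_retries(call, max_attempts=3, base_delay=0.5, sleep=lambda s: None):
    """Run `call`; on failure wait base_delay * 2**attempt (+ jitter) and retry."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            sleep(delay)  # a real SDK would time.sleep(delay) here

# Example: a call that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("vendor timeout")
    return "ok"

print(with_retries(flaky))
```

The jitter term spreads retries out in time, so many clients recovering from the same vendor incident don’t all retry in lockstep and re-overload the endpoint.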

How the AI Gateway Benefits Atlassian Apps

AI Gateway streamlines model releases

We use AI Gateway’s centralized controls to manage our model releases. Atlassian collaborated with OpenAI as an early tester for GPT-5, first granting ‘alpha’ access via AI Gateway to key use cases before launch. We were able to provision and prepare production GPT-5 capacity for our developers in time for the launch, and Rovo Dev rolled out GPT-5 to users on launch day!

Operational Excellence

We’ve seen significant uptime improvements and resilience to incidents, with multiple vendor outages successfully mitigated by our central detectors and automated fallback. Our on-call engineers can sleep easily, and our customers can continue to use Rovo seamlessly!

Cutting-edge testing and learning

We often offer cutting-edge open-source models through our AI Gateway. Our Rovo team frequently evaluates agents against all model providers and finds the best model for each use case. This allows us to experiment and improve the products quickly with minimal effort and learn about new AI innovations through experimental models before productionizing them.

This has translated into Atlassian using a diverse range of models to provide the best experience:

  • Google’s Gemini models power our Editor AI for blazing-fast contextual edits in your Confluence documents.
  • Open-source speech-to-text models transcribe your Loom videos for streamlined search and summaries.
  • GPT-5 models are combined with our advanced coding orchestration to deliver a state-of-the-art coding agent in Rovo Dev Agents.
  • We use small, blazing-fast open-source models from Meta to understand Rovo queries and to review AI inputs, applying Atlassian’s Acceptable Use Policy to user requests.
  • Anthropic’s Claude 4 models use their thinking capabilities to build the research plans that power Rovo’s Deep Research.

Our approach to AI is ever-evolving for the best possible experience

We continuously refine and adapt our model selections, switching between models based on the latest performance metrics, all behind the scenes.

Learn more about how our AI solutions work in our Rovo Deep Research and Hybrid LLM info blogs.

At the forefront of enterprise-level AI solutions

AI Gateway provides a foundational capability for our developers to rapidly improve our apps and offer advanced capabilities to our millions of users around the world while providing our customers with the secure, thoughtful, and trusted approach they expect from Atlassian.
