Atlassian’s AI Gateway: Best in Class Model Garden

LLMs are a rapidly advancing area of technology, with foundation model providers competing to build the leading solution. As with any technology, each model has its own strengths and weaknesses.

Atlassian currently partners with an array of leading proprietary model providers and open-source technologies, incorporating models from OpenAI, Anthropic, Google, Meta, and others to harness the best from each.

To operate at scale, Atlassian’s AI and ML Platform team built our own “AI Gateway,” which gives our developers convenient, abstracted access to the latest models and centralized control over how they are used. Alongside it, the AI Gateway Software Development Kit (SDK) and Command Line Interface (CLI) provide a battle-tested integration path and convenient management tools for AI developers across the company, all while maintaining security and reliability for our customers.
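
To make the abstraction concrete, here is a minimal, self-contained sketch of what a provider-agnostic gateway interface might look like. Every name in it (GatewayClient, Completion, use_case) is a hypothetical illustration, not the actual AI Gateway SDK, which is internal to Atlassian.

```python
# Minimal, self-contained sketch of a provider-agnostic gateway interface.
# All names here (GatewayClient, Completion, use_case) are hypothetical
# illustrations, not the real AI Gateway SDK.
from dataclasses import dataclass


@dataclass
class Completion:
    model: str
    text: str


class GatewayClient:
    """Routes a single chat call to whichever provider hosts the requested model."""

    def __init__(self, use_case: str):
        self.use_case = use_case  # used for quotas, cost attribution, and observability

    def chat(self, model: str, prompt: str) -> Completion:
        # In a real gateway this would dispatch to OpenAI, Anthropic, Google, or an
        # open-source model host; here we just return a canned response.
        return Completion(model=model, text=f"[{model}] summary of: {prompt[:40]}")


client = GatewayClient(use_case="rovo-chat-summarization")
print(client.chat(model="gpt-5", prompt="Summarize this Jira ticket: ...").text)
```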

The Atlassian AI Gateway offers Atlassian developers a curated model garden of 20+ LLMs, audio models, and image models from 5+ providers and 10+ model families. It’s been very successful for Atlassian, with 100+ use cases and experiments built across 8+ Atlassian apps, 40+ Atlassian teams, and our internal platform.

As always, all of the LLMs used in Atlassian’s apps have zero data retention. Our third-party hosted LLM providers won’t store your inputs and outputs for any purpose.

The Atlassian Trust Center

Discover how the Atlassian Trust Center empowers you with transparency, control, and industry-leading security for your organization’s data.

Core Functionalities of Atlassian’s AI Gateway

Unified Access and Agility

Reliability and Resiliency

Security, Privacy, and Compliance

Cost, Resource, and Operational Management

How the AI Gateway Benefits Atlassian Apps

AI Gateway streamlines model releases

We use AI Gateway’s centralized controls to manage our model releases. Atlassian collaborated with OpenAI as an early tester of GPT-5, first granting ‘alpha’ access to key use cases via AI Gateway ahead of launch. We were able to provision and prepare production GPT-5 capacity for our developers on launch day, and Rovo Dev rolled out GPT-5 to users that same day!
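
As an illustration of how centralized release controls can gate early access, here is a minimal sketch of staged model rollout. The rollout stages, use-case allowlist, and helper function are assumptions made for the example, not the gateway’s real configuration.

```python
# Hypothetical sketch of staged model access: the rollout stages, use-case allowlist,
# and helper names are illustrative assumptions, not the actual gateway configuration.
ROLLOUT = {
    "gpt-5": {
        "stage": "alpha",  # alpha -> beta -> general-availability
        "allowed_use_cases": {"rovo-dev", "search-answers-experiment"},
    },
}


def is_model_allowed(model: str, use_case: str) -> bool:
    """Return True if the use case is cleared for the model's current rollout stage."""
    policy = ROLLOUT.get(model)
    if policy is None:
        return False
    if policy["stage"] == "general-availability":
        return True
    return use_case in policy["allowed_use_cases"]


# Example: during the alpha, only allow-listed use cases reach GPT-5.
assert is_model_allowed("gpt-5", "rovo-dev")
assert not is_model_allowed("gpt-5", "marketing-copy-draft")
```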

Operational Excellence

We’ve seen significant uptime improvements and greater resilience: multiple incidents have been successfully mitigated thanks to our central detectors and automated fallback. Our on-call engineers can sleep easily, and our customers can continue to use Rovo seamlessly!
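
As a sketch of what automated fallback can look like, here is a minimal example that tries providers in order and fails over when one errors out. The ProviderError type, the shared complete interface, and the mock providers are assumptions for illustration, not the gateway’s actual internals.

```python
# Illustrative sketch of automated fallback, assuming a provider-agnostic client with a
# common completion interface; the provider names and error type are hypothetical.
class ProviderError(Exception):
    """Raised when an upstream model provider fails or times out."""


def complete_with_fallback(prompt: str, providers: list) -> str:
    """Try each configured provider in order, falling back to the next on failure."""
    last_error = None
    for provider in providers:
        try:
            return provider.complete(prompt)  # hypothetical common interface
        except ProviderError as err:
            last_error = err  # record the failure and try the next provider
    raise RuntimeError("All providers failed") from last_error


class MockProvider:
    """Stand-in provider used only to make the example runnable."""

    def __init__(self, name: str, healthy: bool = True):
        self.name, self.healthy = name, healthy

    def complete(self, prompt: str) -> str:
        if not self.healthy:
            raise ProviderError(f"{self.name} unavailable")
        return f"[{self.name}] response to: {prompt}"


# The unhealthy primary is skipped and the secondary serves the request.
print(complete_with_fallback(
    "Summarize this page",
    [MockProvider("primary", healthy=False), MockProvider("secondary")],
))
```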

Cutting-edge testing and learning

We often offer cutting-edge open-source models through our AI Gateway. Our Rovo team regularly evaluates agents against all model providers to find the best model for each use case. This lets us experiment and improve our products quickly with minimal effort, and learn from new AI innovations through experimental models before productionizing them.
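
For illustration, here is a minimal sketch of a per-use-case evaluation loop that picks the best-scoring model. The candidate model names, the toy eval set, the keyword scorer, and the run_model stub are all hypothetical; real evaluations use far richer datasets and metrics.

```python
# Hypothetical sketch of per-use-case model evaluation; the candidate model names,
# eval set, keyword scorer, and run_model stub are illustrative assumptions.
CANDIDATES = ["gpt-5", "claude-model", "gemini-model", "open-source-llm"]

EVAL_SET = [
    {"prompt": "Summarize: the release is delayed by two weeks ...",
     "expected_keyword": "delayed"},
]


def run_model(model: str, prompt: str) -> str:
    """Stub standing in for a gateway call; a real harness would query the model."""
    return f"{model}: the release is delayed by two weeks"


def score(model: str, example: dict) -> float:
    """Toy scorer: 1.0 if the output mentions the expected keyword, else 0.0."""
    output = run_model(model, example["prompt"])
    return 1.0 if example["expected_keyword"] in output.lower() else 0.0


def best_model_for_use_case() -> str:
    """Average the toy scores per candidate and return the highest-scoring model."""
    averages = {
        model: sum(score(model, ex) for ex in EVAL_SET) / len(EVAL_SET)
        for model in CANDIDATES
    }
    return max(averages, key=averages.get)


print(best_model_for_use_case())
```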

As a result, Atlassian uses a diverse range of models to provide the best possible experience.

Our approach to AI is ever-evolving for the best possible experience

We continuously refine our model selections, transitioning between models based on the latest performance metrics, all behind the scenes.

Learn more about how our AI solutions work in our Rovo Deep Research and Hybrid LLM info blogs.

At the forefront of enterprise-level AI solutions

AI Gateway provides a foundational capability for our developers to rapidly improve our apps and offer advanced capabilities to our millions of users around the world while providing our customers with the secure, thoughtful, and trusted approach they expect from Atlassian.
