Why Jenkins might cost you 10x more than Bitbucket Pipelines
And slow down your developers and your pace of innovation in the process

Any company still running its own CI/CD tooling, such as Jenkins, is paying a price. The question is which one:
- Sacrificing developer productivity (by under-provisioning the compute resources you need)
- Paying for an unnecessary and inefficient amount of compute power (by over-provisioning what you need)
- Hiring a full platform-engineering team (to run your own DIY auto-scaling CI/CD platform)
Each of these is extremely costly, and yet many software teams still host and run their own CI/CD for two reasons: tools like Jenkins are cheap or free, and hosting Jenkins on an AWS EC2 instance is often cheaper per minute than SaaS CI/CD services like Bitbucket Pipelines.
Both reasons are true, but they’re also a trap.
Let’s do the math, ignoring the additional capabilities SaaS solutions provide and focusing strictly on dollar costs, and show that the total cost of ownership (TCO) of running your own CI/CD is actually higher (by as much as 10x!) than running Bitbucket Pipelines.
Variable CI/CD loads don’t scale
Unlike other software development tools, which can flexibly share server capacity, CI/CD effectively consumes all available compute resources, since it benefits from as much power as you give it. Want a build to run faster? Give it more processing power.
This means when you host an agent to run CI/CD jobs, it will max out all available resources to run one job at a time before moving on to the next. It’s a classic scalability trade-off: you can scale vertically (give an agent more power so each job finishes faster) or horizontally (add more agents so more jobs run in parallel).
This works for a small development team, but once you have more than 10 developers, a single agent isn’t going to scale.
For those of you thinking “but what about containerizing my CI/CD and running multiple containers on one machine,” you still fundamentally have the same problem.
Unless you can auto-scale the actual hardware that you’re running your CI/CD on, you’re always going to have either an over-provisioning or under-provisioning issue.
Here’s why: CI/CD workloads are highly variable throughout the day. At midnight when nobody is working and you have just a few scheduled builds running, a single agent might suffice. But at lunchtime, when a few hundred developers decide to check in their code before going on a break, you suddenly need a hundred agents to handle this surge of builds.
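To make that concrete, here’s a quick back-of-the-envelope calculation. The arrival rates and the 10-minute average build time are illustrative assumptions, not measurements:

```python
import math

def agents_needed(builds_per_hour: float, avg_build_minutes: float) -> int:
    """Each agent runs one job at a time, so agents required =
    build-minutes arriving per hour / 60 agent-minutes per hour."""
    return math.ceil(builds_per_hour * avg_build_minutes / 60)

print(agents_needed(builds_per_hour=6, avg_build_minutes=10))    # midnight: 1 agent
print(agents_needed(builds_per_hour=600, avg_build_minutes=10))  # lunch surge: 100 agents
```

The same fleet that idles at midnight is two orders of magnitude too small at lunch.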

Unless you can schedule developers to work in shifts or allot dedicated times to push code (good luck with that), there is no way to evenly distribute CI/CD load throughout the day.
You have to find another way to solve for this variability: You need a CI/CD system that cost-efficiently scales from 1 build up to 1,000 builds, and back down again, all at a moment’s notice.

The true cost of self-hosted CI/CD
There are three approaches to the self-hosted CI/CD challenge described above:
- Under-provision: Optimize for the midnight scenario and run the minimum number of agents needed to handle the fewest builds at a time. This saves the most money but destroys developer productivity by severely limiting how many builds can run concurrently, leading to long build queues and lots of time wasted waiting—not optimal for any company.
- Over-provision: Optimize for the lunch-break scenario and run the maximum number of agents, 24/7, so there are no build queues even during off-peak hours. This wastes an enormous amount of compute (and therefore money) and adds a significant maintenance burden (more agents = more servers to manage).
- DIY auto-scaling platform: The third option is to build and maintain a custom auto-scaling CI/CD platform that auto-provisions physical servers (and the agents that run on them) during peak hours, and then de-provisions them automatically when load drops off. This requires a dedicated DevOps team—conservatively, one DevOps engineer per 100 developers—to write and maintain the tooling that provisions the servers, installs Jenkins and the correct plugins, triages when something goes wrong, and much more. If this sounds like running your own internal cloud-services company, it is. A minimal sketch of the core scaling loop follows this list.
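Here is a minimal sketch of the reconciliation logic such a DIY platform has to own, assuming one agent per job and ~5 agents per instance. In a real platform, `reconcile` would call your cloud provider’s APIs, install Jenkins and its plugins on new nodes, and drain in-flight builds before terminating anything:

```python
import math

AGENTS_PER_INSTANCE = 5  # assumption: ~5 build agents per m5.4xlarge

def desired_instances(queued_jobs: int, running_jobs: int) -> int:
    """One agent per job; round instances up so the queue drains."""
    agents = max(1, queued_jobs + running_jobs)
    return math.ceil(agents / AGENTS_PER_INSTANCE)

def reconcile(current: int, target: int) -> str:
    """Decide a scaling action. The real version is cloud API calls,
    agent bootstrapping, and graceful build draining."""
    if target > current:
        return f"provision {target - current} instance(s)"
    if target < current:
        return f"drain and terminate {current - target} instance(s)"
    return "no change"

# Midnight: a couple of scheduled builds vs. the lunchtime surge.
print(reconcile(current=1, target=desired_instances(2, 1)))     # no change
print(reconcile(current=1, target=desired_instances(480, 20)))  # provision 99 instance(s)
```

And this is just the happy path; it says nothing about spot interruptions, image updates, plugin upgrades, or the 3 a.m. pages when scaling breaks.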
Unsurprisingly, many companies choose the third approach. Although many SaaS CI/CD services exist (we’re one of them), companies still believe in a false economy in which they can “save money” by electing to run their own CI/CD.
They compare the raw compute costs for self-hosting CI/CD and see that they are lower than the “per-minute” costs of SaaS CI/CD solutions.
The logic goes something like: “If we (a large-scale software organization) can run self-hosted CI/CD, paying only for the raw compute (which is cheaper per minute than SaaS charges), then we’ll achieve meaningful cost savings for our business.”
What they leave out of the equation is the cost of hiring the DevOps team needed to run an internal platform that auto-scales to meet their variable CI/CD needs.
Let’s do that comparison, evaluating the total cost of running your own CI/CD platform vs. using a SaaS CI/CD service like Bitbucket Pipelines.
First, some assumptions:
| Assumption | Value |
|---|---|
| Team size | 500 developers |
| Build minutes needed per developer, per day | 60 build min/day (× 500 devs = 30K min/day → 150K min/week) |
| AWS EC2 instance | m5.4xlarge: $0.768/hour; runs ~5 build agents in parallel |
| Instance efficiency | 75% (no system is perfectly efficient—trust us, we know) |
| DevOps team needed to support this system | ~1 dedicated platform engineer for every 100 active developers |
Compute cost, assuming an industry-leading auto-scaling platform:
| Calculation | Result |
|---|---|
| Agents needed (build min/day ÷ minutes in 24 hrs ÷ 75% instance efficiency) | 30K ÷ 1,440 ÷ 0.75 ≈ 27.8 agents to process daily capacity |
| Server instances needed | ~5.6 instances (27.8 ÷ 5 parallel agents per server) |
| Total compute cost | ~$512/week (5.56 instances × $0.768/hour × 24 hours × 5 days/week) |
| Cost per 1,000 build min | ~$3.41 ($512/week ÷ 150K build min/week × 1,000) |
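If you’d like to check the arithmetic, this short script reproduces the table above from the stated assumptions:

```python
# Reproducing the compute math above (same assumptions as the table).
DEVELOPERS = 500
BUILD_MIN_PER_DEV_PER_DAY = 60
MINUTES_PER_DAY = 24 * 60                # 1,440
EFFICIENCY = 0.75                        # divide by efficiency: you need *more* agents
AGENTS_PER_INSTANCE = 5
INSTANCE_USD_PER_HOUR = 0.768            # m5.4xlarge on-demand

daily_build_minutes = DEVELOPERS * BUILD_MIN_PER_DEV_PER_DAY     # 30,000
agents = daily_build_minutes / MINUTES_PER_DAY / EFFICIENCY      # ~27.8
instances = agents / AGENTS_PER_INSTANCE                         # ~5.56
weekly_compute = instances * INSTANCE_USD_PER_HOUR * 24 * 5      # ~$512
per_1000_min = weekly_compute / (daily_build_minutes * 5 / 1000) # ~$3.41

print(f"{agents:.1f} agents, {instances:.2f} instances, "
      f"${weekly_compute:.0f}/week, ${per_1000_min:.2f} per 1,000 min")
```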
Platform engineering cost to run the platform:
| Calculation | Result |
|---|---|
| # of platform engineers | ~5 (to support 500 developers at a 1:100 ratio) |
| Average US DevOps engineer cost | ~$160K fully loaded ($130K base salary per Indeed, plus 20-25% for benefits and additional comp); a bit more than $15.4K/week for 5 engineers |
| Staff cost per 1,000 build min | ~$103 ($15.4K/week ÷ 150K build min/week × 1,000) |
Comparing DIY auto-scaling vs. Bitbucket Pipelines
| Cost category | DIY Auto-Scaling Platform (using AWS m5.4xlarge) | Bitbucket Pipelines |
|---|---|---|
| Compute cost per 1,000 minutes | $3.41 | $10 |
| DevOps engineer staff cost per 1,000 minutes | ~$103 | $0 |
| Total per 1,000 minutes | ~$106 | $10 |
| Total per year (assuming 150,000 minutes per week) | ~$826,600 | $78,000 |
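Extending the same back-of-the-envelope script to the end-to-end totals (the small differences from the table are rounding):

```python
# The full comparison, using the assumptions above.
weekly_build_minutes = 150_000
weekly_compute = 512                      # from the compute calculation above
weekly_staff = 5 * 160_000 / 52           # 5 engineers at $160K fully loaded ≈ $15.4K

diy_per_1000 = (weekly_compute + weekly_staff) / (weekly_build_minutes / 1000)
saas_per_1000 = 10                        # Bitbucket Pipelines build-minute rate

diy_yearly = (weekly_compute + weekly_staff) * 52
saas_yearly = saas_per_1000 * weekly_build_minutes / 1000 * 52

print(f"DIY:  ${diy_per_1000:.2f} per 1,000 min, ${diy_yearly:,.0f}/year")
print(f"SaaS: ${saas_per_1000:.2f} per 1,000 min, ${saas_yearly:,.0f}/year")
```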
The net result? SaaS CI/CD solutions such as Bitbucket Pipelines are significantly more cost-effective! Running CI/CD efficiently for a large software team means bearing the cost of an entire additional team. Yes, running your own CI/CD is cheaper per minute of compute, but then you have to pay someone to run it. And if you want people with the expertise to ensure you don’t leave developers waiting in build queues or overspend on wasted compute minutes, you need to pay a lot.
Instead, let Atlassian’s Bitbucket Pipelines team handle the complexity of elastic scaling for you, and you simply pay for the build minutes you actually use.
How Atlassian saved millions of dollars and countless developer hours
In 2022, we recognized we had to reinvent our own CI/CD at Atlassian. Before centralizing CI/CD on Bitbucket Pipelines, each product engineering team at Atlassian ran its own CI/CD (Bamboo, Jenkins, etc.) with its own build engineering team. Our main build engineering team was about 20 people; the Jira team had approximately another 15, as did another internal platform team. Between these and the similar teams across our other products, that added up to roughly 70 DevOps engineers, which at the above salary rate equals over $11M/year spent on maintaining the infrastructure to run our various CI/CD platforms alone.
Since then, our build engineering teams no longer maintain infrastructure. As some of the smartest and most talented people we have (they were running an auto-scaling CI/CD platform for ~9,000 people, after all), they now spend their time building incredible tools that make our developers even more productive.
Two recent examples of what our build engineering teams built, once they no longer had to focus on operating server infrastructure:
- Managed Builds, which automates the entire process of ensuring that CI/CD workflows always include updated best practices, security guidelines, and compliance policies, even as those standards change over time.
- Dynamic Pipelines, custom pipeline logic so that pipelines selectively run only the tests that apply to the code that changed, rather than running all tests all the time. This reduced the number of builds we had to run, which cut compute time and ultimately saved us $1.5M in build costs. A simplified sketch of the idea follows this list.
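To illustrate the idea (this is not Atlassian’s actual implementation), here’s a simplified sketch of changed-file test selection; the directory-to-suite mapping is invented for the example:

```python
# Illustrative only: map changed paths to the test suites that cover them.
import subprocess

SUITES_BY_PATH_PREFIX = {
    "billing/": ["tests/billing"],
    "auth/": ["tests/auth", "tests/integration/auth"],
    "docs/": [],                      # docs-only changes run no tests
}

def changed_files(base: str = "origin/main") -> list[str]:
    """List files that differ from the mainline branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def suites_to_run(files: list[str]) -> set[str]:
    suites: set[str] = set()
    for f in files:
        for prefix, mapped in SUITES_BY_PATH_PREFIX.items():
            if f.startswith(prefix):
                suites.update(mapped)
                break
        else:
            return {"tests"}          # unknown path: fall back to everything
    return suites

print(sorted(suites_to_run(changed_files())))
```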
Additional security and cost benefits of Bitbucket Pipelines
So far, this cost analysis has focused on what it really takes to run scalable, cost-efficient CI/CD for large software teams. We haven’t even pointed out the additional security, governance, and efficiency wins that only SaaS solutions can offer at scale, such as:
- Different runner sizes and types, so you reduce costs by running only the type of machine each job actually needs
- Isolated, ephemeral environments for hardened security requirements, ensuring no two jobs can share the same environment (we create a new container for each job and destroy it when the job completes).
- Dynamic Pipelines, the same feature that saved us $1.5M, also lets you manage and enforce security standards across all pipelines, such as ensuring that every build uses a pre-approved Docker image, no exceptions (see the sketch below).
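As an illustration of that last point, here’s a hedged sketch of an approved-image check over a parsed bitbucket-pipelines.yml. The allow-list and registry names are hypothetical, and a real implementation would apply this kind of rule centrally rather than per repository:

```python
import yaml  # requires PyYAML

APPROVED_IMAGES = {"mycorp/build-base:2024.1", "mycorp/node:20"}  # hypothetical registry

def check_pipeline(config_text: str) -> list[str]:
    """Return a list of image-policy violations in a pipeline config."""
    config = yaml.safe_load(config_text)
    violations = []
    image = config.get("image")
    if image and image not in APPROVED_IMAGES:
        violations.append(f"top-level image not approved: {image}")
    # Steps can override the image per step; check those too.
    for step in config.get("pipelines", {}).get("default", []):
        step_image = step.get("step", {}).get("image")
        if step_image and step_image not in APPROVED_IMAGES:
            violations.append(f"step image not approved: {step_image}")
    return violations

example = """
image: mycorp/build-base:2024.1
pipelines:
  default:
    - step:
        image: random/unvetted:latest
        script:
          - make test
"""
print(check_pipeline(example))  # ['step image not approved: random/unvetted:latest']
```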
Bottom line: Switch to Bitbucket Pipelines, save money, and innovate faster
Cheap and free CI/CD tools like Jenkins were wonderful for their time, but today they create pain and drive business costs in ways that aren’t always obvious. When evaluating these systems, don’t just consider the compute cost of running them; consider the total cost of owning an internal, elastic platform that must scale to meet the extremely variable needs of large software teams.
Unless you’re willing to sacrifice developer productivity or pay an enormous sum for unused compute, you need to hire and operate a team of DevOps engineers to run such a service for you.
But the best option for most companies is to deploy those engineers on more strategic projects and stop running their own CI/CD. Let Atlassian run your CI/CD platform, so your teams can focus on building and delivering great software.