To Bitbucket from Jenkins: Enhancing Developer Experience
Atlassian’s Bitbucket Cloud has tightly integrated CI/CD capabilities via its Bitbucket Pipelines feature set. However, some of our Bitbucket Cloud and Bitbucket Data Center customers still use Jenkins for CI/CD. In this blog, I present a practical walkthrough of the benefits of Bitbucket Pipelines over a tool like Jenkins in the context of two key stats from our recent State of DevEx 2025 report. These stats serve as motivation for why teams should move to a tightly integrated, cloud-native SaaS offering like Bitbucket Cloud with Bitbucket Pipelines, rather than self-hosted, on-prem CI/CD tools like Jenkins.
What is DevEx?
First, let’s quickly review what DevEx is and briefly touch on why it’s important.

There’s even a Knuth quote about DevEx:
“The enjoyment of one’s tools is an essential ingredient of successful work.”
DevEx is important because developers working in an environment with a good developer experience spend more time building software and solving problems for their customers, and less time doing other less valuable work.
State of DevEx Report 2025
Atlassian produces a State of DevEx report every year. We interview thousands of developers and development leaders and ask questions about their teams’ development experiences. I want to focus on two stats from the 2025 report.
First:
“50% of developers now report losing more than 10 hours of their working week due to inefficiencies.”
This stat is wild to me. If a developer works a 40-hour week, that means 25% of their time is lost to friction in the development process. Developers tend to work long hours to hit project deadlines. If these 10 hours a week could be recouped, we’d be happier and get more done.
Second:
“Developers are only spending 16% of their time every week writing code.”
This means that developers spend 84% of their time searching for information, context switching between tools, in meetings, and fighting with tech debt. If we could shift even a fraction of that time to building software and solving problems, we’d all be happier and better able to ship features to our customers.
Please keep these two stats in mind as you continue.
What it takes to get a new Jenkins box up and running
I decided to set up a new Jenkins box on AWS, integrate it with Bitbucket, and get it building, testing, and deploying my code using a Jenkinsfile. The following sections provide some insight into the setup process and the future work I’d be taking on to continue using Jenkins.
Create an EC2 box and install Jenkins and its dependencies

The first thing I did was create an AWS EC2 box in ca-central-1. Then, I installed Jenkins and its dependencies following the Jenkins documentation. The image above shows some of the AWS infrastructure I created and some of the commands necessary to install and configure Jenkins and its dependencies.
This was fairly straightforward, as the documentation was good. When I finished this process, I had a single Jenkins box running on a single AWS EC2 server. However, there are problems with what I had set up. A single node isn’t highly available, resilient, or durable. I’d need additional infrastructure to make this a service I could roll out to my team and other teams.
Also, I’m not an expert in AWS EC2 networking or Linux security, and I’m unsure if I implemented security best practices. I will likely have security vulnerabilities to address in the future, and I will definitely have to patch the box and the software as new vulnerabilities arise.
I’ve created tech debt for myself.
Install some plugins
After I got the Jenkins box up and running, I needed to install some plugins to make it work the way I wanted it to. I didn’t know up front which plugins I needed, so I spent some time fumbling around Stack Overflow and the Jenkins documentation until I figured it out.
I eventually installed plugins for Git, Docker, and Bitbucket. The plugin install process is simple, and none of the plugins I installed required plugin-specific setup. Going through the UI to install the plugins, I noticed that there are literally hundreds of plugins available. While having all these plugins available for Jenkins is great, it seems like a lot of them simply provide functionality that Bitbucket Cloud and Pipelines offer out of the box.
For example:
- Git is available in every build by default and the Bitbucket Pipelines runtime automatically clones relevant repos.
- Bitbucket Pipelines is entirely Docker-native, with full support for “Docker-in-Docker” setups and docker buildx (see the short sketch after this list).
- Bitbucket Pipelines comes fully integrated with Bitbucket Cloud and all the other Atlassian tools by default.
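To illustrate the Docker point, here’s a short sketch of what a Docker-in-Docker build step can look like in a bitbucket-pipelines.yml file. The image name and commands are placeholders for illustration, not files from the walkthrough above:

pipelines:
  default:
    - step:
        name: Build Docker image
        # Enabling the docker service gives the step a Docker daemon,
        # which is what makes "Docker-in-Docker" builds work.
        services:
          - docker
        script:
          # my-app and run-tests.sh are hypothetical placeholders.
          - docker build -t my-app:latest .
          - docker run --rm my-app:latest ./run-tests.sh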
The key takeaway for me is that, although Jenkins’ library of plugins is vast, many of them exist to provide the bare essentials. The idea of managing a system with potentially hundreds of plugins enabled, all of which need to be updated from time to time, gave me anxiety.
Integrate Jenkins with Bitbucket
After I got the plugins installed, I set up the integration between Bitbucket and Jenkins. I wanted Jenkins to run the pipeline defined in my Jenkinsfile whenever I pushed to a branch, created a pull request, or merged a branch to my production branch. This was easy to set up and get working.
At this point I had two separate systems up and running, Bitbucket Cloud and Jenkins, and an integration between them. Now, I could move on to actually setting up a Jenkins pipeline to build, test, and deploy my software.
Configure a Jenkins pipeline
From the Jenkins main page I had the option to create a New Item.

From here I had to choose one of six options.

I wasn’t sure what I wanted, so I was off to the Jenkins documentation to learn. I guessed that I wanted a Pipeline, so I started reading about that, and luckily I was correct. The Pipeline screen provided a bunch of configuration checkboxes and options to set up what I wanted. After some reading, I had what I needed set up.
This part of the process is pretty similar to every other CI/CD product on the market. Set up some data in the product and write a config file that lives in the repository and details the various analysis, build, test, and deploy steps you want. All that was left was to write a proper Jenkinsfile.
Write a Jenkinsfile; Time to learn Groovy
While Bitbucket and other vendors use YAML for their CI/CD configuration, Jenkins uses Groovy. This means I have to learn a completely separate language just to use Jenkins, in addition to learning the Jenkinsfile syntax. I don’t know Groovy, and I don’t want to pick up yet another language just to use one specific tool. Luckily, I had Rovo Dev CLI to help me write the Jenkinsfile.
Rovo is Atlassian’s platform-wide AI solution. Rovo Dev CLI is a terminal-based way of interacting with the Atlassian platform. It provides all of the expected coding support that other AI tools provide, AND it lets me interact with my Atlassian products.
After I got it to build a Jenkinsfile for me, I told it to update the Jira issue I was using to track my work. This is pretty handy as it saves me from having to jump into Jira, search for the Jira issue, and manually type the updates. This improves my DevEx because I spend less time clicking around a UI in my browser and more time in my dev tools.
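For reference, a declarative Jenkinsfile for a build-test-deploy flow looks roughly like this. It’s a simplified sketch rather than the exact file from this walkthrough, and the make targets and branch name are placeholders:

// A simplified declarative Jenkinsfile sketch. The build, test, and deploy
// commands are placeholders; a real pipeline would call your project's tooling.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
        stage('Deploy') {
            // Only deploy from the production branch.
            when { branch 'main' }
            steps {
                sh 'make deploy'
            }
        }
    }
}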
Now, I want to talk about a couple other bits of work that came up as part of the process of standing up a Jenkins box.
Emergent work
At this point I had Jenkins up and running and building, testing, and deploying my code. I suspected that my networking and EC2 setup were probably not up to standards and would require some attention.
Tickets from Cloud Engineering
Turns out I was right.
My new EC2 box wasn’t standards-compliant. Atlassian Cloud Engineering maintains standards for infrastructure running in AWS. I got three tickets from cloud engineering for things I needed to patch, update, and install on my EC2 box to make it compliant with Atlassian standards.
The wiki pages describing the required steps for each of the tickets were substantial. The work wasn’t hard to complete, but it pulled me away from building software and solving problems for my customers. I was spending time administering Jenkins. This is a prime example of that 16% / 84% statistic from the State of DevEx report I referenced earlier.
Every company I’ve worked for has had its own standards for how to set up and configure infrastructure, so this isn’t an Atlassian-specific problem.
User access control? Availability? Reliability? Durability?
My current setup has only a single Jenkins node. That isn’t production-ready. I need more nodes for redundancy, and I need to set up some kind of backup process to recover from failures.
Another problem is scaling. I don’t want to pay for multiple EC2 boxes when no one is running builds. I also don’t want to be compute constrained when my team is trying to run hundreds of concurrent builds, or running extensive test suites in QA.
Setting up more nodes, backup and recovery, and auto-scaling is going to be a ton of work that I just don’t have to do with modern SaaS systems like Bitbucket Cloud.
Bitbucket Pipelines features that improve the DevEx
Now, let’s switch gears for a second and look at some features of Bitbucket Pipelines that improve the developer experience. We’ll start off with an extremely powerful tool called Dynamic Pipelines.
Dynamic Pipelines
Dynamic Pipelines are a way for engineering or platform teams to create standards-compliant pipelines and then push them out across one or more Bitbucket workspaces. Dynamic Pipelines are defined in code using Atlassian’s Forge platform. They can be as simple as injecting a single step into the pipelines of repositories in a workspace or as complex as dynamically generating standards-compliant, always-up-to-date, best-practice workflows from nothing but a few labels in a YAML file.
As you might guess, there are numerous benefits to this capability.
For example, using Dynamic Pipelines, a central security or compliance team can guarantee that a set of static analysis and security scanning steps are executed in every pipeline that runs in a workspace. What’s even better is that those static analysis and security scanning steps don’t need to be defined in the bitbucket-pipelines.yml file in each repository; the steps are injected by the dynamic pipeline at runtime, using configuration defined by the central team.
Furthermore, when the organization decides to change the set of static analysis and security scanning steps they want to run, they can update the dynamic pipeline once and all pipelines in the workspace will automatically start running the new steps without engineers having to manually update individual bitbucket-pipelines.yml files. This reduces maintenance work and helps organizations quickly adapt to changing requirements and technologies.
In addition, engineers are no longer required to know the exact process to use each of the static analysis and security scanning tools since the correctly configured steps are injected automatically at runtime.
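As a rough illustration (the step name and commands below are hypothetical), a repository’s bitbucket-pipelines.yml can stay focused on the team’s own build, while the centrally owned scanning steps never appear in the file at all:

# The team only maintains its own build and test steps. The workspace's
# dynamic pipeline injects the static analysis and security scanning steps
# at runtime, so they never need to be written here.
pipelines:
  default:
    - step:
        name: Build and test
        script:
          - make build
          - make test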
It’s critical to note, though, that Dynamic Pipelines do not prevent teams from writing their own YAML workflows if they want to. Dynamic Pipelines are smart enough to enable centralized standards compliance whilst still retaining individual team autonomy.
Dynamic Pipelines improve the developer experience by reducing the number of lines of YAML engineers have to maintain, and they free up engineers’ cognitive capacity to focus on building software and solving problems for customers by reducing the amount of brain power spent maintaining CI/CD.
You can learn more about Dynamic Pipelines here, here, and here.
AI-assisted pipelines
AI-assisted pipelines are like having a build engineer sitting beside you to help you fix problems in your pipeline. When a pipeline step fails, Rovo will look at things like the code being deployed, the pipeline’s configuration, and the pipeline logs to determine what happened and how to fix it.
Without AI, we all have to do this manually, and it can be awful. I don’t like looking through 230570238509 lines of logs to figure out what broke and neither does any other engineer I know. I’d rather have someone else solve this kind of problem for me so I can focus on building stuff.
AI-assisted pipelines directly address the 16% / 84% stat mentioned earlier by reducing the amount of time I spend digging through logs and searching for information.
Self-Hosted Runners
By default, Bitbucket Pipelines executes all steps on Atlassian cloud hardware. This works well for many customers, but some customers have compliance requirements that mean they need to run some of their steps on their own hardware, behind their firewall. Bitbucket Cloud’s self-hosted runners make this simple, allowing teams to run anything from a single step up to an entire pipeline on their own infrastructure, still orchestrated from Bitbucket Cloud.
To use self-hosted runners, teams create a runner and register it with Bitbucket. This process is a couple of clicks in the UI.

Once the runner is registered in Bitbucket, teams can add the runs-on tag to their bitbucket-pipelines.yml file to tell Bitbucket Pipelines to execute that particular step on the runner. Teams can also give specific runners unique tags and then add those tags to the same runs-on section to distribute specific pipelines or steps to specific individual runners.
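For example, a step routed to a self-hosted runner might look something like this. It’s a minimal sketch: self.hosted and linux are the default system labels for Linux runners, and the script is a placeholder:

pipelines:
  default:
    - step:
        name: Integration tests behind the firewall
        # self.hosted routes this step to a self-hosted runner; extra labels,
        # including custom ones, can target a specific runner or group.
        runs-on:
          - self.hosted
          - linux
        script:
          - ./run-integration-tests.sh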

In this way, teams can set up as many runners as they need, with the specific resources and access required for the tasks they’ll perform, and Bitbucket will ship work to them as required. With this approach, teams can run most of their steps on Atlassian hardware whilst distributing specialized workloads onto their own hardware. This hybrid approach gives them the best of both worlds.
Size parameter
Different steps in a CI/CD pipeline can require different amounts of memory and take different amounts of time to execute. With the size parameter, teams can control the amount of CPU and memory resources available to each individual step. This lets them fine-tune their resource usage.

If the size parameter is not specified, Bitbucket runs the step at the 1x size, which is a runner with 2 CPUs and 4 GB of memory and is the least expensive runner to use. This helps keep costs to a minimum by default.

When a team has a step that is taking too long to execute or requires more memory, they can add the size parameter to the step and get access to up to 32 CPUs and 64 GB of memory. In this way, teams can tailor their resource usage to have sufficient performance while being as inexpensive as possible.
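As a quick sketch, bumping a heavier step up to a larger size looks something like this (the step name and command are placeholders):

pipelines:
  default:
    - step:
        name: Run the full test suite
        # 2x doubles the default CPU and memory; larger sizes are available
        # for steps that need even more resources.
        size: 2x
        script:
          - make test-all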
DORA metrics in Jira and Compass
Bitbucket is tightly integrated with the rest of the Atlassian platform. For developers, that means it automatically ships CI/CD metric information to other Atlassian products. In particular, it is easy to get access to DORA metrics in both Jira and Compass. The data is available in both products automatically, with no additional configuration.
This means you can add DORA metrics to your Compass components.

And you can view DORA metrics in Jira reports.

With the tight integration of Bitbucket with the Atlassian platform, teams don’t have to setup yet another tool to track and calculate their DORA metrics. They get them for free, out of the box.
How to migrate to Bitbucket Pipelines from Jenkins

I strongly encourage everyone who is using both Bitbucket and Jenkins to consider migrating to Bitbucket Pipelines. Doing so will improve your developer experience, allowing you to spend more time building software and solving problems for your customers, and less time on server setup, configuration, and maintenance.
To that end, Atlassian provides a tool to help migrate declarative Jenkins pipelines to Bitbucket Pipelines by converting Jenkinsfiles to bitbucket-pipelines.yml files. You can find information about this process by following the QR code or the link above.
You can also try converting Jenkinsfiles to bitbucket-pipelines.yml files yourself using the Rovo Dev CLI, which is currently in Beta.
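To give a sense of the end state, the declarative Jenkinsfile sketched earlier maps onto a bitbucket-pipelines.yml along these lines. This is a simplified, hypothetical example rather than output from the conversion tool, and it assumes a deployment environment named production:

pipelines:
  # Runs on every push to any branch.
  default:
    - step:
        name: Build and test
        script:
          - make build
          - make test
  branches:
    # The production branch builds, tests, and then deploys.
    main:
      - step:
          name: Build and test
          script:
            - make build
            - make test
      - step:
          name: Deploy
          deployment: production
          script:
            - make deploy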