Long CI pipelines slow everything down. In our case, builds in Bitbucket were taking nearly an hour—stalling our team, delaying feedback, and hurting delivery speed.
To address this, we introduced parallel execution, which brought execution time down by almost 50%. But the more we optimized, the more we paid.
That tradeoff—speed vs. cost—pushed us to rethink our setup. This article walks through our migration from Bitbucket Pipelines to SemaphoreCI and the key lessons we learned along the way.
At first, Bitbucket’s parallelism gave us the speed boost we needed. But as our test suite expanded and usage scaled, costs spiked fast—and control was limited.
We realized Bitbucket couldn’t keep up with our performance goals without blowing past our budget. So, we moved to SemaphoreCI—a platform that offered better flexibility, smarter parallelization, and lower execution costs at scale.
First, we needed to choose our next platform. We evaluated options such as CircleCI, TravisCI, and SemaphoreCI to understand the value and insights each one could offer.
After choosing SemaphoreCI as our preferred option, we spent time getting familiar with the platform and its configuration files, experimenting to learn how to get things done. Once we felt confident in our choice, we began implementing our workflows in SemaphoreCI.
A critical part of planning the migration was adapting our scripts. Some of our existing scripts used Bitbucket-specific variables and internal utilities, such as the repository cloning process. For example, these variables:
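For illustration, these are the kinds of Bitbucket-injected variables our scripts relied on (the script itself is a simplified sketch):

```yaml
# bitbucket-pipelines.yml (before) — steps depended on Bitbucket-specific variables
script:
  - echo "Branch: $BITBUCKET_BRANCH"      # branch being built
  - echo "Commit: $BITBUCKET_COMMIT"      # commit SHA being built
  - cd "$BITBUCKET_CLONE_DIR"             # directory the repo is cloned into
  - ./run-tests.sh
```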
With that in mind, we started looking for equivalent environment variables in SemaphoreCI to replace them.
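The closest Semaphore equivalents we could map to look roughly like this (the script is the same illustrative sketch as above):

```yaml
# .semaphore/semaphore.yml (after) — Semaphore's equivalents for those variables
commands:
  - echo "Branch: $SEMAPHORE_GIT_BRANCH"  # was BITBUCKET_BRANCH
  - echo "Commit: $SEMAPHORE_GIT_SHA"     # was BITBUCKET_COMMIT
  - cd "$SEMAPHORE_GIT_DIR"               # was BITBUCKET_CLONE_DIR (relative to the job's home)
  - ./run-tests.sh
```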
Next, we needed to configure our main image and services in Semaphore, which required a different setup. It looked something like this:
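A sketch of that setup, using Semaphore's container-based agents (the image names and machine type here are illustrative, not our exact values):

```yaml
# .semaphore/semaphore.yml — agent with a main image plus a linked service
version: v1.0
name: Build and test
agent:
  machine:
    type: e1-standard-4
  containers:
    - name: main                  # job commands run inside this container
      image: 'registry.semaphoreci.com/openjdk:17'
    - name: postgres              # service container, reachable by its name
      image: 'registry.semaphoreci.com/postgres:14'
      env_vars:
        - name: POSTGRES_PASSWORD
          value: postgres
```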
Let’s break this configuration down: the agent defines the machine type the job runs on, the first container (main) is where job commands execute, and each additional container runs as a linked service that the main container can reach by its name.
With our base images and services configured, we could start building a first approach to parallelized steps in Semaphore. We began small, creating basic steps to validate repository cloning (including full-depth), and ensuring all scripts were correctly updated.
Here’s an example:
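A simplified version of that first pipeline (cache keys, environment variables, and image settings are our assumptions, not exact values):

```yaml
version: v1.0
name: Build
agent:
  machine:
    type: e1-standard-4
    os_image: ubuntu2004
blocks:
  - name: Build
    task:
      env_vars:
        - name: MAVEN_OPTS
          value: "-Xmx2g"
      jobs:
        - name: Maven build
          commands:
            - checkout                  # clones the repository
            - cache restore maven-deps  # restore ~/.m2 from Semaphore's cache
            - mvn -B install -DskipTests
            - cache store maven-deps ~/.m2
      epilogue:
        on_fail:
          commands:
            - echo "Build failed; skipping follow-up steps"
        always:
          commands:
            - echo "Job finished"
```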
We started by setting up environment variables, checking out the repository, restoring the Maven cache, and building the project. Semaphore’s epilogue feature (similar to Bitbucket’s after_script) lets you define commands that run conditionally, depending on whether the job passed or failed. This is a convenient way to ensure failed builds don’t trigger unnecessary steps, and it was our first successful configuration for running a job in SemaphoreCI.
In SemaphoreCI, all jobs in the same block run in parallel. So, if your test batches are automated and well-organized, you can leverage this to speed things up significantly.
Let’s take a look at an example where you run a single job with parallelism:
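With `parallelism: 4`, Semaphore expands a single job definition into four jobs, each receiving its own `$SEMAPHORE_JOB_INDEX`:

```yaml
blocks:
  - name: Tests
    task:
      jobs:
        - name: Run tests
          parallelism: 4
          commands:
            - echo "Job $SEMAPHORE_JOB_INDEX out of $SEMAPHORE_JOB_COUNT"
```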
Extracted from: https://docs.semaphoreci.com/using-semaphore/jobs#job-parallelism
Here, we use the $SEMAPHORE_JOB_INDEX variable to determine which batch of tests to run:
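A simple (if static) approach is a shell `case` on the 1-based job index; the module names here are placeholders:

```yaml
jobs:
  - name: Tests
    parallelism: 3
    commands:
      - checkout
      - |
        case "$SEMAPHORE_JOB_INDEX" in
          1) mvn -B test -pl module-a ;;
          2) mvn -B test -pl module-b ;;
          3) mvn -B test -pl module-c ;;
        esac
```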
Or, more dynamically:
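For example, a round-robin split of test classes across jobs. This is a sketch assuming a Maven-style layout under `src/test`; the final `mvn` invocation is illustrative and commented out:

```shell
#!/bin/sh
# Sketch: round-robin partition of test classes across parallel jobs.
# SEMAPHORE_JOB_INDEX is 1-based; the defaults let you try this locally.
: "${SEMAPHORE_JOB_INDEX:=1}"
: "${SEMAPHORE_JOB_COUNT:=4}"

# Keep every COUNT-th test class, starting at this job's index.
batch=$(find src/test -name '*Test.java' 2>/dev/null | sort \
  | awk -v i="$SEMAPHORE_JOB_INDEX" -v n="$SEMAPHORE_JOB_COUNT" 'NR % n == i % n')

echo "$batch"
# mvn -B test -Dtest="..."   # illustrative: run only this job's batch
```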
The key is to match the number of batches to the parallelism value. Want something smarter? You could use a custom (or third-party) test balancer that’s aware of how long each test takes and distributes batches accordingly.
If you are running something like this:
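That is, a single job running every module's tests sequentially:

```yaml
# Before: one job runs the whole test suite
blocks:
  - name: Tests
    task:
      jobs:
        - name: All tests
          commands:
            - checkout
            - mvn -B test
```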
You can split your tests manually into dedicated jobs:
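Since jobs in the same block run in parallel, one job per module is enough (module names are placeholders):

```yaml
# After: one job per module; jobs in the same block run in parallel
blocks:
  - name: Tests
    task:
      jobs:
        - name: module-a tests
          commands:
            - checkout
            - mvn -B test -pl module-a
        - name: module-b tests
          commands:
            - checkout
            - mvn -B test -pl module-b
```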
This approach is simple and readable. But you can take it a step further by adding run conditions, so each block runs only when relevant code changes are detected:
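Semaphore's `change_in` function makes this straightforward; here, a sketch that runs a module's tests only on master or when files under that module changed (paths and branch name are placeholders):

```yaml
blocks:
  - name: module-a tests
    run:
      when: "branch = 'master' OR change_in('/module-a/')"
    task:
      jobs:
        - name: module-a tests
          commands:
            - checkout
            - mvn -B test -pl module-a
```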
This structure saves time by avoiding unnecessary test executions. Learn more:
https://docs.semaphoreci.com/using-semaphore/jobs#skip-run
https://docs.semaphoreci.com/using-semaphore/monorepo#skip-run
Now, let's look at the optimization steps we embraced during this migration.
First, we noticed that the initial setup of each job was taking a considerable amount of time. In some cases, job initialization took around 5 minutes while the test execution itself took only 1–2 minutes.
Given this imbalance, we decided to reduce the number of steps by grouping smaller modules into a single batch, and splitting only those modules that were large enough to justify separate execution.
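In practice, that meant collapsing several small modules into one job while keeping the heavyweight ones separate (module names are illustrative):

```yaml
blocks:
  - name: Tests
    task:
      jobs:
        - name: Small modules (batched)
          commands:
            - checkout
            - mvn -B test -pl module-a,module-b,module-c
        - name: Large module (dedicated)
          commands:
            - checkout
            - mvn -B test -pl module-big
```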
Another key optimization, previously mentioned, was the implementation of run/skip conditions on blocks. This had a significant impact on build times. By adding conditions based on branch names and changes within specific modules, we were able to prevent unnecessary executions. In the worst-case scenario, we managed to skip at least two module test runs per pipeline, significantly reducing our build minutes consumption.
In parallel, our development team also worked on optimizing the test code itself, speeding up slow tests and splitting others to allow for more efficient batching and distribution.
Lastly, one improvement worth mentioning was fine-tuning the type of machine used for each job. By assigning different machine types depending on the CPU or memory demands of each step, we managed to reduce infrastructure costs without sacrificing performance.
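Semaphore lets a block override the pipeline-level agent, so each block can request a machine sized for its workload. A sketch, with machine types and the integration profile as assumptions:

```yaml
blocks:
  - name: Unit tests
    task:
      agent:                      # overrides the pipeline-level agent
        machine:
          type: e1-standard-2     # smaller machine for light jobs
          os_image: ubuntu2004
      jobs:
        - name: unit
          commands:
            - checkout
            - mvn -B test
  - name: Integration tests
    task:
      agent:
        machine:
          type: e1-standard-8     # more CPU/memory for heavy jobs
          os_image: ubuntu2004
      jobs:
        - name: integration
          commands:
            - checkout
            - mvn -B verify -Pintegration   # illustrative Maven profile
```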
Here are some of the key results we achieved with the migration:
From this migration experience, we learned that while migrations may initially seem complex and risky, they’re often necessary when your infrastructure can no longer scale effectively.
Here are a few key takeaways:
Migrating from Bitbucket Pipelines to SemaphoreCI was a strategic decision driven by performance and cost. Not only did we achieve a faster and more cost-effective CI flow, but we also gained greater flexibility and insight into our processes.
SemaphoreCI provided us with the right balance of usability, control, and optimization features. With better resource management, test orchestration, and developer experience, we now have a more scalable and sustainable CI/CD solution.
If your current pipeline feels limiting or costly, don’t be afraid to explore what other platforms can offer—you might find the investment in migration pays off faster than you think.