From 41 minutes to 8 minutes: How I made my CI/CD pipeline 5x faster
Introduction
In software development, timing is everything. Continuous Integration/Continuous Deployment (CI/CD) pipelines are meant to speed things up, but ironically, sometimes the pipeline itself becomes the bottleneck. That is exactly the problem I ran into when our Jenkins pipeline grew to an unmanageable 41 minutes per build.
To eliminate this inefficiency, I analyzed and optimized the process, cutting it from a whopping 41 minutes down to 8 minutes, a 5x improvement. In this article, I'll walk you through the problems I found, the solutions I implemented, and strategies you can use to speed up your own pipelines.
The problem
Our CI/CD pipeline handles the following tasks for both backend and frontend:
- Code checkout
- Static code analysis: ESLint, SonarQube
- Unit testing
- Docker image build and push
- Phased deployment
- Manual approval and production deployment
At first glance, the pipeline looks solid, but several problems had crept in:
- Bloated Docker build context: the build context (all the files sent to Docker during the image build) had grown to 1.5GB, and builds were taking far too long.
- Redundant dependency installation: each stage reinstalled npm dependencies from scratch, adding unnecessary latency.
- Docker image mismanagement: the image was rebuilt and pushed to the registry even when nothing had changed.
- No parallel execution: every task, such as static code analysis and testing, ran sequentially.
- Manual deployment steps: backend deployment meant manually updating AWS ECS task definitions, making it time-consuming and prone to human error.
The solution
Here's how I revamped the pipeline to get to that 5x speedup.
Reduce the size of the Docker build context
The build context was unnecessarily large because the entire project directory was being sent to the Docker daemon. A .dockerignore file excludes files the image never needs, such as node_modules, logs, and build output.
Key file: .dockerignore
node_modules
*.log
dist
coverage
test-results
Impact:
Reduced build context size from 1.5GB to approximately 10MB and transfer time from 30 minutes to less than 1 minute.
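As a quick sanity check (this stage is a sketch, not taken from the original pipeline), the Docker build log itself reports how much data is shipped to the daemon: with the classic builder it appears as a "Sending build context to Docker daemon" line, while BuildKit shows it as "transferring context". Assuming DOCKER_IMAGE and DOCKER_TAG are already defined in the pipeline's environment:
stage('Build Image') {
    steps {
        // The first lines of the build output report the context size,
        // which should now be in the tens of megabytes rather than 1.5GB.
        sh "docker build -t $DOCKER_IMAGE:$DOCKER_TAG ."
    }
}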
Rely on caching
Previously, every stage ran npm install from scratch. I replaced it with npm ci for reproducible installs and pointed npm at a shared cache on the Jenkins agent.
Updated command:
npm ci --cache ~/.npm
Impact:
Dependency installation time dropped from 3-4 minutes per stage to under 20 seconds.
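For context, here is a minimal sketch of how the install step can look as its own stage, using the same ~/.npm cache location as above (the stage name and the --prefer-offline flag are my additions, not details from the original pipeline):
stage('Install Dependencies') {
    steps {
        // npm ci installs exactly what package-lock.json pins; --prefer-offline
        // reuses cached tarballs from ~/.npm instead of re-downloading them.
        sh 'npm ci --cache ~/.npm --prefer-offline'
    }
}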
Improve Docker image handling
Previously, the pipeline rebuilt and pushed the Docker image on every run, regardless of whether anything had changed. I added logic to compare the local and remote image hashes and only push when the image has actually changed.
Updated logic:
// Image ID Docker has on record for this tag (falls back to an empty string if the tag cannot be inspected)
def remoteImageHash = sh(returnStdout: true, script: "docker inspect --format '{{.Id}}' $DOCKER_IMAGE:$DOCKER_TAG || echo ''").trim()
// Image ID of the image just built under this tag
def localImageHash = sh(returnStdout: true, script: "docker images --no-trunc -q $DOCKER_IMAGE:$DOCKER_TAG").trim()
if (localImageHash != remoteImageHash) {
    sh 'docker push $DOCKER_IMAGE:$DOCKER_TAG'
} else {
    echo "Image has not changed; skipping push."
}
Impact:
Unnecessary pushes are skipped, saving 3-5 minutes per build.
Run static analysis and tests in parallel
I restructured the Jenkins pipeline to use the parallel directive, so tasks such as ESLint, SonarQube analysis, and unit tests run at the same time.
Updated pipeline:
stage('Static Code Analysis') {
parallel {
stage('Frontend ESLint') {
steps {
sh 'npm run lint'
}
}
stage('Backend SonarQube') {
steps {
withSonarQubeEnv() {
sh 'sonar-scanner'
}
}
}
}
}
Impact:
Static analysis and testing time dropped by roughly 50%.
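Unit tests can join the same parallel block as a third branch. A minimal sketch, assuming tests run via npm test and write JUnit-style XML reports to test-results/ (both of these are assumptions, not details from the original pipeline):
stage('Unit Tests') {
    steps {
        sh 'npm test'
    }
    post {
        always {
            // Publish the reports even when a test fails; assumes JUnit-style XML output
            junit allowEmptyResults: true, testResults: 'test-results/**/*.xml'
        }
    }
}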
Automate backend deployment
Manually updating AWS ECS task definitions was time-consuming and error-prone, so I automated the step with the AWS CLI.
Automation script:
// Render the new task definition with the freshly built image tag
def taskDefinitionJson = """
{
  "family": "$ECS_TASK_DEFINITION_NAME",
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "$DOCKER_IMAGE:$DOCKER_TAG",
      "memory": 512,
      "cpu": 256,
      "essential": true
    }
  ]
}
"""
// Write the JSON to disk with Jenkins' writeFile step (no shell quoting involved)
writeFile file: 'task-definition.json', text: taskDefinitionJson
// Register a new revision, then point the service at the task definition family
// (ECS uses the latest revision when no revision number is specified)
sh "aws ecs register-task-definition --cli-input-json file://task-definition.json --region $AWS_REGION"
sh "aws ecs update-service --cluster $ECS_CLUSTER_NAME --service $ECS_SERVICE_NAME --task-definition $ECS_TASK_DEFINITION_NAME --region $AWS_REGION"
Impact:
Deployment is simpler and roughly 5 minutes faster per build, with no manual ECS edits.
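One optional follow-up, not part of the original script: make the pipeline block until ECS reports the rollout as stable, so a broken task definition fails the build instead of looping silently.
// Wait for the service to reach a steady state on the new task definition revision
sh "aws ecs wait services-stable --cluster $ECS_CLUSTER_NAME --services $ECS_SERVICE_NAME --region $AWS_REGION"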
Results
After these optimizations, total pipeline time dropped from 41 minutes to 8 minutes, roughly a 5x improvement. Stage by stage, the gains break down as follows:
- Docker build context transfer: about 30 minutes before, under 1 minute after
- Dependency installation: 3-4 minutes per stage before, under 20 seconds after
- Docker image push: 3-5 minutes saved whenever the image is unchanged
- Static analysis and tests: roughly 50% faster thanks to parallel execution
- Backend deployment: about 5 minutes saved by automating the ECS update
Lessons learned
- Logs are your best friend: analyze logs to pinpoint bottlenecks.
- Caching saves the day: effective use of caching can significantly reduce build times.
- Run tasks in parallel: parallel execution saves time immediately.
- Exclude irrelevant files: a .dockerignore file can significantly improve performance.
- Automate repetitive tasks: automation eliminates errors and speeds up the workflow.
Conclusion
Optimizing our CI/CD pipeline was an eye-opening experience. Targeting the key bottlenecks and making a handful of strategic changes turned a 41-minute chore into an 8-minute pipeline. The result? Faster deployments, happier developers, and more time to focus on features.
If you’re struggling with a slow pipeline, start by identifying bottlenecks, leveraging caching, parallelizing tasks, and automating repetitive steps. Even small adjustments can lead to huge gains.
How much time have you saved by optimizing your CI/CD pipeline? Share your experiences and tips in the comments below!