Continuous Delivery Without the Drama
How we cut cycle time in half and ship often without wrecking production
Overview
Many people say they’re doing Continuous Delivery (CD), but are they?
The truth is, most teams are running weekly or biweekly deployments with multiple features/stories, frantic QA cycles, and a Slack thread full of “Did we test X feature?” fire drills. They’ve duct-taped together build-automation scripts and called it a pipeline. It’s Continuous Delivery in name only.
We were no different. Before this overhaul, we’d batch everything until the sprint ended, then try to push it all live in one massive release. QA would be overloaded, integration bugs would sneak through, and we’d waste time retesting already-finished features from the beginning of the sprint that got buried in the pile. We told ourselves this was a controlled way to release because a human owned the "production release" button.
However, it wasn’t sustainable, so we built a real continuous delivery system—one that could deploy any feature as it was completed, safely and with traceability. We now release several times a week across 20+ services and frontend projects. And we did it without increasing incidents or babysitting production.
The Push That Started It
Our wake-up call came when I started implementing DORA (DevOps Research and Assessment) metrics to track our software engineering process. I plugged our GitHub and Jira data into LinearB and saw that our Cycle Time was north of 15 days. Coding and PR reviews were well within industry standards, especially since we were already pretty good at breaking down user stories into smaller deliverables. The real lag was waiting to get them into production. Features sat idle while we wrapped up sprints, merged everyone’s work, and ran final integration and regression tests.
This quarter, I challenged my team with a new goal: cut our cycle time by half and enable true continuous delivery of our features. But here’s the rub: you can’t ship faster if doing so increases your risk. That's where most teams fall apart. We didn’t just want speed—we wanted reliable releases. That meant feature flags, automated tests, and issue traceability.
The State of CI/CD in 2025: What It Should Look Like
In 2025, continuous delivery isn’t just a tooling problem—it’s a discipline problem. Real CD means:
Features deploy when they’re done, not when the sprint ends
QA can test in production without exposing untested or broken code to users
Version tags and Jira tickets stay in sync automatically
You don’t need to manually touch anything for a release to go live
If your pipeline can’t do that, it’s not CD—it’s just CI with dreams of CD.
Our DevOps Stack
We didn’t chase shiny new tools to accomplish this either. We used what was already working and automated it to death.
Jira tracks the work and gets updated automatically
GitHub hosts our code and triggers workflows where needed
TeamCity runs our CI/CD pipelines
GrowthBook manages feature flags across services
Postman & Playwright run our API and E2E tests
Custom TeamCity build scripts + GitHub Actions bridge everything together
Kubernetes is our deployment platform on AWS EKS
We documented the whole thing in Confluence because “tribal knowledge” isn’t a delivery strategy; we wanted to be sure that when we onboard new developers, they understand our process and can integrate quickly into our flow.
Here’s a representation of my company’s complete SDLC with the tools mentioned above.
How It Actually Works
Let’s walk through the key components.
1. Labeling PRs
Every PR must include a Jira issue number that the developer enters when they create it—this isn’t optional. Our GitHub Actions parse the PR title, extract the Jira issue number, and label the issue with the name of the affected project; our release script later uses that label to bind the project’s release version number to the ticket. No label, no traceability. This gives us a clear link between code and tickets, without a PM chasing people down.
Here's the actual labeling workflow that is in a central GitHub repo, so that we can make changes in one place:
name: Jira Labeler
on:
  workflow_call:
    inputs:
      project_name:
        required: true
        type: string
    secrets:
      JIRA_DOMAIN:
        required: true
      JIRA_EMAIL:
        required: true
      JIRA_API_TOKEN:
        required: true

jobs:
  label-jira-ticket:
    runs-on: ubuntu-latest
    steps:
      - name: Extract Jira Key from PR title
        run: |
          JIRA_KEY=$(echo "${{ github.event.pull_request.title }}" | grep -o '[A-Z]\+-[0-9]\+')
          if [ -z "$JIRA_KEY" ]; then
            echo ":x: No Jira key found in PR title: '${{ github.event.pull_request.title }}'"
            exit 1
          fi
          echo "JIRA_KEY=$JIRA_KEY" >> $GITHUB_ENV

      - name: Add Label to Jira ticket
        env:
          JIRA_DOMAIN: ${{ secrets.JIRA_DOMAIN }}
          JIRA_EMAIL: ${{ secrets.JIRA_EMAIL }}
          JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}
          PROJECT_NAME: ${{ inputs.project_name }}
          ISSUE_KEY: ${{ env.JIRA_KEY }}
        run: |
          curl --request PUT \
            --url "${JIRA_DOMAIN}/rest/api/2/issue/${ISSUE_KEY}" \
            --user "${JIRA_EMAIL}:${JIRA_API_TOKEN}" \
            --header 'Accept: application/json' \
            --header 'Content-Type: application/json' \
            --data "{
              \"update\": {
                \"labels\": [
                  {\"add\": \"${PROJECT_NAME}\"}
                ]
              }
            }"
In each project, we add this bit of code in .github/workflows/label-jira.yml to reference the above script and only run it when we close the PR:
on:
  pull_request:
    types: [closed]

jobs:
  label-jira:
    uses: nutreense/cm/.github/workflows/label-jira.yml@main
    with:
      project_name: ${{ github.event.repository.name }}
    secrets:
      JIRA_DOMAIN: ${{ secrets.JIRA_DOMAIN }}
      JIRA_EMAIL: ${{ secrets.JIRA_EMAIL }}
      JIRA_API_TOKEN: ${{ secrets.JIRA_API_TOKEN }}
2. Extract the Version and Project Name
Each project repo has its own version in the package.json file. A TeamCity step reads package.json and sets two variables:
// TeamCity build step: read the project's name and version from package.json
// and expose them to later build steps via TeamCity service messages.
const packageInfo = require('./package.json');
console.log(`##teamcity[setParameter name='env.PROJECT_TAG' value='${packageInfo.version}']`);
console.log(`##teamcity[setParameter name='env.PROJECT_NAME' value='${packageInfo.name}']`);
These values flow through the entire pipeline. It’s how we tag releases, update Jira, and maintain traceability.
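As a sketch of how downstream steps consume those parameters (the registry host and tag format here are placeholders, not our exact config), a later build step can derive the release tag and image reference from them:

```shell
# Sketch only: compose the artifact names a downstream step would build from
# env.PROJECT_NAME and env.PROJECT_TAG. The registry host is a placeholder.
image_ref() {
  local name="$1" version="$2"
  echo "registry.example.com/${name}:${version}"
}

release_tag() {
  echo "v$1"
}

# e.g. with PROJECT_NAME=checkout-api and PROJECT_TAG=1.4.0:
#   image_ref "$PROJECT_NAME" "$PROJECT_TAG"  -> registry.example.com/checkout-api:1.4.0
#   release_tag "$PROJECT_TAG"                -> v1.4.0
```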
3. Builds Must Pass All Automated Tests
We use Postman for APIs and Playwright for the front end. Tests run in lower environments, block production releases if they fail, and notify the release manager. We’ve set this up using TeamCity finish build triggers, so when a PR is merged, the build/test/release chain is fully automated. If the tests aren’t green, nothing ships.
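A minimal sketch of that gate, assuming the suites are invoked as CLI commands (our real setup wires this through TeamCity build steps rather than a single script):

```shell
# Sketch of the release gate: run each test suite in order and stop the
# chain on the first failure. The newman/playwright commands in the example
# invocation are assumptions about how the suites are launched.
run_release_gate() {
  local cmd
  for cmd in "$@"; do
    echo "Running: $cmd"
    if ! $cmd; then
      echo "FAILED: $cmd -- blocking release and notifying the release manager"
      return 1
    fi
  done
  echo "All suites green -- release chain may continue"
}

# Example: run_release_gate "newman run postman/collection.json" "npx playwright test"
```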
4. Git Release Scripts Handle Release Versioning
We use gitflow to automate our releases. The pipeline:
Checks for differences between the develop and main branches
Bumps the version using semantic versioning (major, minor, patch, e.g., v1.0.0)
Updates the package.json, commits, tags, and pushes
Publishes the new release
Here's a snippet from the release logic:
LATEST_TAG=$(git tag -l 'v*' | sort -V | tail -1 | sed 's/^v//')  # newest semver tag, "v" prefix stripped
RELEASE_TAG="v$(calculate_next_version)"                          # bump LATEST_TAG per the semver rules
git flow release start "$RELEASE_TAG"
npm version "$RELEASE_TAG" --no-git-tag-version                   # update package.json only; gitflow creates the tag
git commit -am "Bump version $RELEASE_TAG"
git flow release finish -m "Release $RELEASE_TAG" "$RELEASE_TAG"  # merge back to main and develop
git push origin main develop --tags
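calculate_next_version is our own helper. A simplified sketch of it might look like this—our real script derives the bump type from the work being released, so the explicit arguments here are an assumption:

```shell
# Simplified sketch of calculate_next_version: bump a plain semver string.
# Arguments: <current-version> <major|minor|patch>
calculate_next_version() {
  local latest="${1:-0.0.0}" bump="${2:-patch}"
  local major minor patch
  IFS='.' read -r major minor patch <<< "$latest"
  case "$bump" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "${major}.$((minor + 1)).0" ;;
    patch) echo "${major}.${minor}.$((patch + 1))" ;;
  esac
}

# calculate_next_version 1.4.2 minor  -> 1.5.0
```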
5. Push Release Versions Back to Jira Automatically
Once a release is tagged, we call a custom Jira update script that:
Finds all issues labeled with the project
Verifies the Jira issue is moved to QA or Done
Confirms that subtasks, if any, meet the same criteria
Adds the release version to the fixVersion field in the Jira issue
It even supports a DRY_RUN mode to preview changes before they go live. If a version doesn’t exist in Jira, we create it, so now every story shows when it shipped and what version it's in—zero guesswork.
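The core of that script is one Jira REST call per issue. Here's a sketch with the DRY_RUN guard—the variable names are ours, but the endpoint and payload follow Jira's REST v2 API:

```shell
# Sketch of the fixVersion update with DRY_RUN support. DRY_RUN defaults to 1
# here so the sketch never calls Jira unless you explicitly opt in.
DRY_RUN="${DRY_RUN:-1}"

add_fix_version() {
  local issue_key="$1" version="$2"
  local payload
  payload=$(printf '{"update": {"fixVersions": [{"add": {"name": "%s"}}]}}' "$version")
  if [ "$DRY_RUN" = "1" ]; then
    echo "DRY_RUN: would PUT to /rest/api/2/issue/${issue_key}: ${payload}"
    return 0
  fi
  curl --request PUT \
    --url "${JIRA_DOMAIN}/rest/api/2/issue/${issue_key}" \
    --user "${JIRA_EMAIL}:${JIRA_API_TOKEN}" \
    --header 'Content-Type: application/json' \
    --data "$payload"
}
```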
6. Features Stay Behind Flags Until They’re Ready
Feature flags are the special sauce that keeps us from blowing up production. We use GrowthBook to control visibility in our application. Once a feature is deployed, QA can turn on the flag for testing. QA also writes new tests to validate the latest feature and includes them in our test suite for future regression testing. Only when they give the green light is the flag flipped on in production—after the feature passes functional testing, NOT when the code is initially deployed.
With GrowthBook, we can tie flags to specific user personas and do limited rollouts or even A/B testing. Once we’re satisfied, we remove the flag and clean up the code.
This approach lets us ship often without exposing untested features.
Results That Actually Matter
This wasn’t just a process improvement—it had real impact. Here’s a sampling of the metrics:
Cycle Time: Went down from 15+ days to 8 days (75th percentile).
Deploy Frequency: Went up to 8 deployments/week across 20+ projects. That’s more than one per day versus waiting 2+ weeks.
Merge Frequency: 13.5 PRs/week, many of which go live the same day after they’re reviewed.
QA Unblocked: QA can now test immediately without needing every other feature in the sprint to be ready. Once they’re satisfied, they turn on the feature in production.
Customer Impact: We can deliver and validate features mid-sprint. No more waiting two weeks to see if something works. This allows us to get feedback quicker.
Final Thoughts: Real CD is Boring
Good CI/CD isn’t flashy, and that’s basically the point. It doesn’t require all-hands calls or “go/no-go” meetings. It should be boring, safe, and automatic. We didn’t get here by buying a shiny new platform. We got here by building discipline, owning the tooling, and treating CI/CD as a product.
And the best part? No one on the team wants to go back. Full credit goes to my team for making this vision a reality!