Measuring Flow and Speed: How to Understand and Improve Software Delivery Performance
- Craig Risi

My last two blog posts have focused on the importance of metrics and how to capture the data needed to create meaningful ones. I now want to turn attention to the metrics themselves: which ones you should be looking at, and why.
If we look at modern expectations of software delivery, it is often about speed. How fast can you get to market, deliver a fix, or deploy a new feature?
In software delivery, speed without flow is chaos - and flow without speed is stagnation. High-performing engineering organisations don’t simply deliver faster; they deliver consistently, predictably, and with minimal friction. To achieve this, teams must understand how work moves through their systems.
Flow and speed metrics provide this visibility. They reveal where work slows down, where it queues, and where value is delayed. When measured correctly, they transform delivery from opinion-based debates into data-driven improvement.

What Are Flow and Speed Metrics?
Flow and speed metrics describe how work moves from idea to production. They focus on time, volume, and efficiency rather than output alone.
These metrics answer questions such as:
How long does it take for work to reach customers?
Where does work get stuck?
How predictable is our delivery?
How much work are we doing at once?
They are foundational to continuous improvement.
Core Flow & Speed Metrics
Below are some key data points and metrics you could track to understand your delivery flow better. Many more metrics could be measured - but it's important not to overwhelm your analytics by focusing on too much, otherwise you will get lost in the detail and stop measuring the right things. Using all or a selection of these is normally a good starting point for identifying key flow gaps, either in individual teams or at a wider organisational level:
1. Lead Time
What it measures: The total time from when a request is created to when it is delivered to production.
Why it matters: Long lead times mean slower feedback, higher risk, and lower business agility.
Use case: Compare lead times across teams or services to identify systemic delays.
How to measure it:
Start: Work item created in Jira / Azure DevOps
End: Work item marked as Done or Released to Production
Lead Time = End Date – Created Date
Visualise using percentiles (50th, 85th, 95th) instead of averages.
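As a minimal sketch of the calculation above, assuming work items have been exported with created and done timestamps (the field names and dates here are hypothetical), lead times can be computed and summarised as percentiles rather than averages:

```python
from datetime import datetime
from statistics import quantiles

# Hypothetical work items exported from a tracker such as Jira
items = [
    {"created": "2024-01-02", "done": "2024-01-10"},
    {"created": "2024-01-03", "done": "2024-01-05"},
    {"created": "2024-01-04", "done": "2024-01-20"},
    {"created": "2024-01-05", "done": "2024-01-08"},
]

def lead_time_days(item):
    # Lead Time = End Date - Created Date
    start = datetime.fromisoformat(item["created"])
    end = datetime.fromisoformat(item["done"])
    return (end - start).days

lead_times = sorted(lead_time_days(i) for i in items)
# n=100 gives 99 cut points; index p-1 is the pth percentile
pct = quantiles(lead_times, n=100, method="inclusive")
print("p50:", pct[49], "p85:", pct[84], "p95:", pct[94])
```

Percentiles tell a truer story than the mean: a p85 far above the p50 signals a long tail of slow items that an average would hide.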
2. Cycle Time
What it measures: The time from when work begins (In Progress) to when it is completed.
Why it matters: Cycle time reflects team efficiency and internal friction.
Use case: Break cycle time into stages (dev, review, test, deploy) to find bottlenecks.
How to measure it:
Start: First status change to In Progress
End: Status changed to Done or Ready for Release
Cycle Time = End Date – In Progress Date
Segment by workflow states for deeper insights.
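One way to segment cycle time by workflow state, sketched here against a hypothetical status-change history (the state names and timestamps are assumptions, not a specific tool's schema):

```python
from datetime import datetime

# Hypothetical status-change history for a single work item
history = [
    ("2024-03-01T09:00", "In Progress"),
    ("2024-03-03T09:00", "In Review"),
    ("2024-03-04T09:00", "Testing"),
    ("2024-03-05T09:00", "Done"),
]

def stage_durations(history):
    """Hours spent in each state, from consecutive status changes."""
    durations = {}
    for (start, state), (end, _) in zip(history, history[1:]):
        delta = datetime.fromisoformat(end) - datetime.fromisoformat(start)
        durations[state] = durations.get(state, 0) + delta.total_seconds() / 3600
    return durations

stages = stage_durations(history)
cycle_time = sum(stages.values())  # hours from In Progress to Done
```

Breaking the total down this way shows immediately whether time is going into development, review, or test.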
3. Throughput
What it measures: The number of work items completed in a given time period.
Why it matters: It shows delivery capacity and sustainability.
Use case: Track throughput trends to understand whether flow is improving or degrading.
How to measure it:
Count completed items per sprint, week, or month
Track by work type (features, defects, tech debt)
Compare rolling averages to spot delivery trends.
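A small sketch of counting throughput per period and smoothing it with a rolling average, assuming completed items have been tagged with a week and a work type (both hypothetical here):

```python
from collections import Counter

# Hypothetical completed items: (ISO week, work type)
completed = [
    ("2024-W10", "feature"), ("2024-W10", "defect"),
    ("2024-W11", "feature"), ("2024-W11", "feature"), ("2024-W11", "tech-debt"),
    ("2024-W12", "feature"),
]

weekly = Counter(week for week, _ in completed)
by_type = Counter(kind for _, kind in completed)

weeks = sorted(weekly)
counts = [weekly[w] for w in weeks]
# 2-week rolling average to smooth week-to-week noise
rolling = [sum(counts[i - 1:i + 1]) / 2 for i in range(1, len(counts))]
```

The `by_type` split matters as much as the totals: a team whose throughput is mostly defects is not in the same position as one shipping mostly features.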
4. Work in Progress (WIP)
What it measures: The number of active items being worked on at the same time.
Why it matters: High WIP slows everything. It increases context switching, defects, and delays.
Use case: Limit WIP to improve flow and reduce multitasking.
How to measure it:
Count items in In Progress or equivalent workflow states
Track daily WIP trends per team or service
Correlate WIP levels with cycle time and defects.
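Counting WIP from a daily board snapshot can be sketched like this; the state names, item IDs, and the limit of 4 are all hypothetical:

```python
# Hypothetical daily snapshot of a team board
snapshot = [
    {"id": "ABC-1", "state": "In Progress"},
    {"id": "ABC-2", "state": "Blocked"},
    {"id": "ABC-3", "state": "In Progress"},
    {"id": "ABC-4", "state": "Done"},
    {"id": "ABC-5", "state": "To Do"},
]

# Which states count as "active" depends on your workflow
ACTIVE_STATES = {"In Progress", "In Review", "Blocked"}

wip = sum(1 for item in snapshot if item["state"] in ACTIVE_STATES)
WIP_LIMIT = 4  # hypothetical per-team limit
over_limit = wip > WIP_LIMIT
```

Note that blocked items are deliberately counted as WIP: they still occupy attention and delay delivery even though nobody is typing.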
5. Flow Efficiency
What it measures: The percentage of time work is actively being processed vs waiting.
Why it matters: Most delays are waiting time, not work time.
Use case: Improve handoffs, approvals, and testing queues to raise efficiency.
How to measure it:
Active Time: Time spent in In Progress states
Waiting Time: Time in Blocked, Waiting, Review, Queue states
Flow Efficiency = (Active Time ÷ Total Lead Time) × 100
Target improvements by reducing wait states.
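The formula above can be sketched directly, assuming you have already summed the hours an item spent in each state (the state names and hours here are made up):

```python
# Hypothetical hours an item spent in each workflow state
time_in_state = {
    "In Progress": 10,
    "In Review": 6,    # waiting for reviewers counts as waiting
    "Blocked": 20,
    "Testing": 4,
}

# States where someone is actively working the item
ACTIVE = {"In Progress", "Testing"}

active = sum(h for s, h in time_in_state.items() if s in ACTIVE)
total = sum(time_in_state.values())
# Flow Efficiency = (Active Time / Total Lead Time) x 100
flow_efficiency = active / total * 100
```

In this example the item was actively worked for only 35% of its life; the remaining 65% was queueing, which is where improvement effort pays off fastest.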
6. Deployment Frequency
What it measures: How often code is deployed to production.
Why it matters: Frequent deployments reduce risk and shorten feedback loops.
Use case: Track whether improvements in automation increase delivery cadence.
How to measure it:
Count production deployments from CI/CD tools (Azure Pipelines, Jenkins, GitHub Actions)
Track per service, per team, or per day/week
Trend deployment cadence over time.
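A minimal sketch of trending deployment cadence, assuming you can pull production deployment timestamps from your CI/CD tool's API (the events below are hypothetical):

```python
from collections import Counter

# Hypothetical production deployment timestamps from a CI/CD tool
deployments = [
    "2024-05-01T10:00", "2024-05-01T16:00",
    "2024-05-02T11:00",
    "2024-05-06T09:00", "2024-05-06T14:00", "2024-05-06T17:00",
]

# Group by calendar day (first 10 chars of the ISO timestamp)
per_day = Counter(ts[:10] for ts in deployments)
days_with_deploys = len(per_day)
avg_per_active_day = len(deployments) / days_with_deploys
```

The same grouping works per service or per team if each deployment event carries that metadata.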
7. Change Lead Time
What it measures: Time from code commit to production release.
Why it matters: Shows how fast validated code reaches customers.
Use case: Correlate long change lead times with defect rates and release risk.
How to measure it:
Start: Code committed to main branch
End: Deployed to production
Pull timestamps from Git and deployment tools
Measure using median and percentile ranges.
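Joining Git commit timestamps to deployment timestamps might look like the sketch below; the pairs are hypothetical, and in practice you would match each deploy to the commits it contains:

```python
from datetime import datetime
from statistics import median

# Hypothetical (commit timestamp from Git, deploy timestamp from the CD tool)
changes = [
    ("2024-06-01T09:00", "2024-06-01T15:00"),
    ("2024-06-02T10:00", "2024-06-03T10:00"),
    ("2024-06-03T08:00", "2024-06-03T12:00"),
]

def hours_between(commit, deploy):
    delta = datetime.fromisoformat(deploy) - datetime.fromisoformat(commit)
    return delta.total_seconds() / 3600

change_lead_times = [hours_between(c, d) for c, d in changes]
median_hours = median(change_lead_times)
```

As with lead time, the median and percentiles are more honest than the mean, since one slow release can dominate an average.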

Measuring Flow Across the Toolchain
True flow cannot be measured from a single tool in isolation. Every stage of the delivery lifecycle leaves a trail of data in a different system, and only when these systems are connected does the full story emerge. Work tracking platforms such as Jira capture demand and progress. Source control systems like GitHub or GitLab show when code is actually changed and reviewed. CI/CD pipelines (Jenkins, GitHub Actions, Azure Pipelines) reveal how long validation and builds take, while deployment platforms such as Kubernetes, Octopus, or Argo track how quickly code reaches production. Finally, monitoring and observability tools like Datadog or Splunk close the loop by showing how those changes behave in the real world.
By correlating these systems through shared identifiers (such as ticket IDs in commit messages), timestamps, and service metadata, you can follow work from idea → code → test → deployment → customer impact. This end-to-end visibility transforms disconnected activity logs into a continuous value stream that can be measured, analysed, and improved.
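The shared-identifier linking described above can be as simple as extracting a ticket key from each commit message. A sketch, assuming a Jira-style key format (the commit data and project key are hypothetical):

```python
import re

# Hypothetical commits; teams embed a Jira-style ticket key in each message
commits = [
    {"sha": "a1b2c3", "message": "PAY-101: add retry to payment client"},
    {"sha": "d4e5f6", "message": "fix typo in README"},
    {"sha": "0a1b2c", "message": "PAY-102 tighten webhook validation"},
]

# Matches keys like PAY-101 (project prefix, hyphen, number)
TICKET_RE = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")

def ticket_for(commit):
    match = TICKET_RE.search(commit["message"])
    return match.group(1) if match else None

linked = {c["sha"]: ticket_for(c) for c in commits}
```

Commits that fail to link (like the README fix above) are themselves a useful signal: untraceable work is invisible to flow measurement, so link rate is worth tracking alongside the metrics.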
Common Bottlenecks Flow Metrics Reveal
When flow data is visualised across the toolchain, patterns quickly emerge. You begin to see where work is consistently waiting instead of moving. Typical bottlenecks include:
Long pull request review queues
Slow or flaky test execution
Manual approval gates and sign-offs
Limited test or production environments
Manual or fragile release processes
Overloaded teams juggling too many priorities
Excessive work in progress (WIP)
These constraints often remain hidden when teams look only at their own tools. Flow metrics make them visible, enabling organisations to improve systems and processes rather than placing blame on individuals.
Turning Flow Metrics into Action
The goal of measuring flow is not to push teams to move faster at any cost, but to create a delivery system that is predictable, sustainable, and resilient. Flow metrics help organisations:
Reduce unnecessary waiting and rework
Improve delivery predictability and planning confidence
Lower risk by shrinking batch sizes and cycle times
Increase learning speed through faster feedback loops
Even small, targeted improvements in flow - such as reducing review wait times or automating a manual gate - can compound into dramatic gains in reliability, quality, and trust over time. When teams understand how work truly moves, they can design systems that allow value to flow smoothly to customers.
Closing Thought
Flow is the heartbeat of your delivery system. When it is healthy, teams move with confidence, customers receive value faster, and the organisation adapts with ease.
By measuring flow and speed intentionally, you stop guessing and start engineering your delivery system with clarity and control.