Turning Tool Data into Engineering Insight
- Craig Risi

In my previous blog post, I wrote about the importance of metrics and how they provide detailed insight into the development life cycle and help identify key areas for improvement in software delivery. Before delving into the different metrics themselves, though, it’s important to look at how we actually gather the data in the first place. After all, we cannot gain insight into our delivery if we don’t have access to the data that provides it.
How to Gather Metrics from Your Delivery, Defect, and Coding Platforms
Modern engineering teams generate vast amounts of data every day: commits, builds, deployments, test results, incidents, tickets, reviews, and more. Yet despite this abundance, many organisations still struggle to answer basic questions:
Where is work really slowing down?
Why are defects escaping?
Which changes are risky?
How healthy is our delivery system?
The problem is not just a lack of data. It’s that the data is fragmented across tools, teams, and processes. To truly understand engineering performance, organisations must connect their delivery tracking, defect management, and coding systems into a single, coherent measurement ecosystem.
Why Engineering Metrics Must Span the Entire Toolchain
No single tool can tell the full story of software delivery.
Your work tracking tool shows intent and flow.
Your code repositories show behavior and change.
Your CI/CD platforms show validation and automation health.
Your defect systems show where quality breaks down.
Your production telemetry shows real-world impact.
Most organisations use several tools across these functions to support their delivery and toolchain in different ways. Each tool plays a part, but it is the collective story that matters, and that story is often the missing piece in understanding where the real delivery gaps lie. Only by correlating these systems can you understand cause and effect: not just what happened, but why it happened.
Engineering metrics must therefore be end-to-end, not tool-specific.
Step 1: Identify the Data Sources That Matter
Most organisations already possess the raw ingredients needed to produce meaningful engineering metrics. The challenge is not a lack of data, but the fact that this data is spread across disconnected systems, each capturing only one part of the delivery lifecycle.
Typical sources include:
Delivery and Work Tracking Systems: Tools such as Jira, Azure DevOps, ServiceNow, and Rally capture how work flows through the organisation. They provide insight into user stories, cycle times, status transitions, priorities, and planning data.
Source Control and Code Platforms: Platforms like GitHub, GitLab, Bitbucket, and Azure Repos record how code changes are created and reviewed. From these systems, teams can measure commits, pull requests, review times, merge frequency, and collaboration patterns.
CI/CD and Build Systems: Jenkins, GitHub Actions, GitLab CI, and Azure Pipelines track how software is built, tested, and deployed. They expose build durations, failure rates, deployment counts, and pipeline reliability.
Defect and Incident Systems: Tools such as Jira, ServiceNow, PagerDuty, and Opsgenie reveal how quality issues surface in production. They provide data on defect rates, severity, resolution times, and incident frequency.
Observability and Runtime Telemetry: Platforms like Splunk, Datadog, New Relic, and Prometheus show how systems behave in the real world, capturing error rates, latency, outages, and rollback frequency.
Each of these systems holds a valuable fragment of the truth. Only when they are connected through a unified data model do they form a complete, trustworthy picture of how value flows from idea to customer, and where it breaks down.
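As a rough illustration of what these "raw ingredients" can look like, the sketch below pulls recently closed pull requests from GitHub and recently resolved issues from Jira using their REST APIs. The repository name, Jira site, JQL query, and token environment variables are placeholders for illustration; exact endpoints and fields can vary with your tool versions and configuration.

```python
import os
import requests

# Placeholder values -- substitute your own repository and Jira site.
GITHUB_REPO = "my-org/my-service"
JIRA_SITE = "https://my-org.atlassian.net"


def fetch_recent_pull_requests() -> list[dict]:
    """Fetch recently closed pull requests from the GitHub REST API."""
    resp = requests.get(
        f"https://api.github.com/repos/{GITHUB_REPO}/pulls",
        params={"state": "closed", "per_page": 50},
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def fetch_recently_resolved_issues() -> list[dict]:
    """Fetch recently resolved issues from the Jira Cloud search API."""
    resp = requests.get(
        f"{JIRA_SITE}/rest/api/2/search",
        params={"jql": "resolved >= -14d ORDER BY resolved DESC", "maxResults": 50},
        auth=(os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"]),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["issues"]
```

Even two small extracts like these already cover two fragments of the lifecycle: how code changes are reviewed and how work items are resolved.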
Step 2: Create a Common Delivery Data Model
To truly unify tool data across the delivery lifecycle, you need a shared data model that creates clear traceability between each stage of work:
A requirement is linked to a commit
A commit is linked to a build
A build is linked to a deployment
A deployment is linked to a defect or incident
This end-to-end chain allows you to follow value from idea to production and back again when issues occur.
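One minimal way to express this chain, assuming a Python-based analytics layer, is a set of linked records keyed on shared identifiers. The class and field names below are illustrative rather than a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Commit:
    sha: str
    ticket_id: str          # links the commit back to the requirement
    authored_at: datetime


@dataclass
class Build:
    build_id: str
    commit_sha: str         # links the build to the commit
    succeeded: bool


@dataclass
class Deployment:
    deployment_id: str
    build_id: str           # links the deployment to the build
    environment: str
    deployed_at: datetime


@dataclass
class Incident:
    incident_id: str
    deployment_id: str      # links the incident back to the deployment
    severity: str
    opened_at: datetime
```

With these links in place, walking from an incident back to the originating requirement, or forward from a requirement to production, becomes a simple join on identifiers.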
To make this work in practice, the model must be supported by:
Unique identifiers that flow through the toolchain (for example, including ticket IDs in commit messages; see the sketch after this list)
Consistent naming conventions across repositories, pipelines, and environments
Time-based correlation to align events from different systems
Rich metadata enrichment, such as team, service, application, and environment context
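To make the unique-identifier point concrete, here is a small sketch of pulling a Jira-style ticket key out of a commit message. The key format is an assumption and should be adjusted to match your own project conventions:

```python
import re

# Assumes Jira-style keys such as "PAY-1234"; adjust the pattern to your conventions.
TICKET_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")


def extract_ticket_ids(commit_message: str) -> list[str]:
    """Return all ticket IDs referenced in a commit message."""
    return TICKET_PATTERN.findall(commit_message)


# Example: extract_ticket_ids("PAY-1234: fix rounding in invoice totals") -> ["PAY-1234"]
```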
When implemented correctly, this shared data model becomes the backbone of your engineering metrics platform, enabling reliable insights, meaningful dashboards, and data-driven improvement across the organisation.
Step 3: Automate Data Collection
Manual reporting erodes trust and does not scale. It introduces delays, inconsistencies, and human bias that make metrics unreliable. A modern engineering organisation should instead rely on fully automated data pipelines that continuously collect, process, and standardise delivery data directly from source systems.
This is achieved by:
Using APIs and webhooks to extract events from tools such as Jira, GitHub, CI/CD platforms, test tools, and incident systems. Webhooks push events in real time, while APIs are used for backfills and historical data.
Streaming events into a central analytics platform, such as a data lake or cloud data warehouse (e.g., Snowflake, BigQuery, Azure Data Explorer). Event brokers like Kafka, Event Hubs, or Kinesis can be used to buffer and reliably transport high-volume data.
Normalising timestamps, states, and ownership models so that different tools speak the same “language.” For example, mapping tool-specific statuses (e.g., Done, Closed, Released) into a single lifecycle model and aligning all times to a standard timezone (a small sketch of this follows the list).
Applying transformation and validation logic using data processing layers (ETL/ELT pipelines with tools like dbt, Spark, or cloud-native data services) to calculate standardised metrics and enforce data quality rules.
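As a sketch of the normalisation step, the mapping below collapses tool-specific statuses into a single lifecycle model and aligns timestamps to UTC. The status names and lifecycle stages are examples only, not a canonical set:

```python
from datetime import datetime, timezone

# Example mapping of tool-specific statuses to a shared lifecycle model.
STATUS_MAP = {
    "done": "completed",
    "closed": "completed",
    "released": "completed",
    "in progress": "in_progress",
    "in review": "in_review",
    "to do": "backlog",
}


def normalise_status(raw_status: str) -> str:
    """Map a tool-specific status onto the shared lifecycle model."""
    return STATUS_MAP.get(raw_status.strip().lower(), "unknown")


def normalise_timestamp(raw: str) -> datetime:
    """Parse an ISO-8601 timestamp (treating a trailing 'Z' as UTC) and align it to UTC.

    Assumes the source tool emits timestamps with an explicit offset.
    """
    return datetime.fromisoformat(raw.replace("Z", "+00:00")).astimezone(timezone.utc)
```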
With this automation in place, metrics become:
Real-time or near real-time, enabling faster feedback and decision-making
Consistent across teams and systems, because they are derived from the same logic and data sources
Free from human bias or manipulation, as data is captured directly from the tools of record
By treating metrics as an engineered system rather than a manual process, organisations can build a trusted, scalable foundation for continuous improvement.
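The data-quality rules mentioned above can start very simply. A hedged sketch of one validation check on a normalised event is shown below; the required fields are an assumption and would come from your own data model:

```python
# Illustrative set of fields every normalised event is expected to carry.
REQUIRED_FIELDS = {"ticket_id", "status", "event_time", "team"}


def validate_event(event: dict) -> list[str]:
    """Return a list of data-quality problems for one normalised event (empty means valid)."""
    problems = []
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if event.get("status") == "unknown":
        problems.append("status could not be mapped to the lifecycle model")
    return problems
```

Events that fail checks like these can be quarantined and reported on, so gaps in the pipeline are fixed at the source rather than silently skewing the metrics.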
Step 4: Correlate the Data to Reveal Engineering Signals
Once delivery data is unified across the toolchain, a new class of metrics becomes possible: metrics that were previously hidden by fragmented systems and disconnected reporting. Instead of viewing isolated snapshots from individual tools, teams gain an end-to-end, system-wide perspective on how work truly flows from idea to production and into operations.
From this connected data, several powerful insight categories emerge:
Flow Metrics: These reveal how efficiently work moves through the delivery pipeline:
Lead time from ticket creation to production
Time spent in each stage of the delivery lifecycle
Pull request review and approval delays
Quality Metrics: These expose how reliably software is being delivered:
Defect density per release
Change failure rate per service
Defects introduced per deployment
Risk Indicators: These highlight early warning signals before failures occur:
High change frequency combined with low test coverage
Frequent hotfixes following deployments
Services showing rising incident trends over time
Predictability Metrics: These show how stable and trustworthy delivery has become:
Variance between planned and actual delivery
Forecast confidence over time
Cycle time stability
Together, these metrics transform raw delivery data into actionable insight. They are only possible when systems are truly connected—when requirements, code, pipelines, deployments, and incidents are no longer siloed, but form a single, observable delivery value stream.
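To make a couple of these concrete, here is a hedged sketch of computing lead time (ticket creation to first production deployment) and change failure rate from already-correlated records. The record shapes are assumptions based on the data model described in Step 2:

```python
from statistics import median


def lead_time_days(tickets: list[dict], deployments: list[dict]) -> float:
    """Median days from ticket creation to the first deployment that contains the ticket.

    Assumes tickets carry "id" and "created_at" (datetime), and deployments carry
    "ticket_ids" and "deployed_at" (datetime).
    """
    first_deployed = {}
    for dep in deployments:
        for ticket_id in dep["ticket_ids"]:
            current = first_deployed.get(ticket_id)
            if current is None or dep["deployed_at"] < current:
                first_deployed[ticket_id] = dep["deployed_at"]
    durations = [
        (first_deployed[t["id"]] - t["created_at"]).total_seconds() / 86400
        for t in tickets
        if t["id"] in first_deployed
    ]
    return median(durations) if durations else 0.0


def change_failure_rate(deployments: list[dict], incidents: list[dict]) -> float:
    """Share of deployments linked to at least one incident."""
    if not deployments:
        return 0.0
    failed = {i["deployment_id"] for i in incidents}
    return sum(1 for d in deployments if d["id"] in failed) / len(deployments)
```

Both calculations depend entirely on the links established earlier; without ticket IDs on deployments and deployment IDs on incidents, neither metric can be computed reliably.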
I will be unpacking many of the metrics above in more detail in later blog posts.
Step 5: Build Actionable Dashboards, Not Vanity Reports
Good dashboards do not attempt to show everything; they surface the decisions that are waiting to be made. Their purpose is not to report activity, but to drive action by making problems, trends, and opportunities visible at the right time.
Effective engineering dashboards are designed to:
Show trends, not static numbers, so teams can understand direction and momentum rather than isolated snapshots
Highlight bottlenecks, not volume, focusing attention on constraints that limit flow and delivery
Compare before-and-after changes, making the impact of process or tooling improvements measurable
Provide drill-down to root cause, allowing teams to move from symptom to source using the same data
When built this way, metrics stop being vanity indicators and become decision enablers. Every chart and signal should help answer a single, guiding question:
What should we improve next, and why?
This is how dashboards evolve from reports into tools for continuous improvement.
Step 6: Use Metrics to Drive Improvement, Not Judgment
I mentioned this point in the last blog post and will probably repeat it several times in this series, because it is important enough to bear repeating: metrics should never be used to measure or judge individual performance. Their true purpose is to improve the system, not to create fear or assign blame. When metrics are used incorrectly, they drive defensive behaviour and distort the very data they are meant to reflect.
Used well, metrics help to:
Improve systems, not blame people, by focusing on process gaps rather than personal shortcomings
Identify constraints, not assign fault, so teams can remove friction instead of hiding it
Encourage experimentation and learning, creating a safe environment to test improvements and adapt quickly
When teams trust the data, they use it to make better decisions. When they fear it, they game it.
This trust boundary is what determines whether metrics become a catalyst for continuous improvement or a source of dysfunction.
Closing Thoughts
Engineering metrics don’t come from a single tool - they emerge from the connections between tools. By integrating delivery tracking, coding platforms, CI/CD systems, and defect management into a single measurement ecosystem, organisations move from activity tracking to insight-driven engineering.
The result is not just better reporting; it is better decision-making, stronger delivery confidence, and continuous improvement at scale.
In a world where software is the business, understanding how it is built is no longer optional; it is a strategic capability.



