
The DORA Metrics Hallucination

Modern engineering leadership has become a game of shadow puppetry. We pretend the light from the dashboard represents the health of the fire, while the fire itself is burning down the house. The industry has entered a collective hallucination where four specific metrics—Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Time to Restore Service—are treated as the holy trinity plus one. We call them DORA metrics, and we have allowed them to rot the foundations of architectural common sense.

Goodhart’s Law is the ghost in the machine that every CTO ignores at their peril. When a measure becomes a target, it ceases to be a good measure. By canonizing these metrics, we have inadvertently incentivized engineers to optimize for the scoreboard rather than the product. The result is a landscape littered with hyper-fragmented architectures that exist solely to satisfy a 'High Performing' label on a spreadsheet. We are shipping more frequently, yet the user experience is stagnant, and the technical debt is compounding at predatory interest rates.

The Cult of Deployment Frequency is a Productivity Mirage

Deployment frequency is the most abused metric in the modern stack. It has morphed from a measure of agility into a fetish for activity. Teams are now celebrated for pushing fifty times a day, regardless of whether those pushes contain a single line of meaningful logic. We see developers splitting a single, coherent feature into twenty distinct pull requests just to inflate the numbers. This is not progress; it is a frantic form of architectural treading water.
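The arithmetic of this gaming is trivial. Here is a minimal sketch, using a hypothetical `value_points` field as a stand-in for delivered user value, showing how the raw deployment count rewards fragmentation while an equally simple value-weighted measure does not:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """One production push. `value_points` is a hypothetical stand-in
    for delivered user value (resolved needs, shipped capability)."""
    value_points: float

def deployment_frequency(deploys: list[Deployment], days: float) -> float:
    """The DORA-style count: pushes per day, blind to content."""
    return len(deploys) / days

def value_throughput(deploys: list[Deployment], days: float) -> float:
    """What the count hides: value delivered per day."""
    return sum(d.value_points for d in deploys) / days

# One coherent feature shipped once a week...
focused = [Deployment(value_points=20.0)]
# ...versus the same work split into twenty near-empty pushes.
fragmented = [Deployment(value_points=1.0) for _ in range(20)]

print(deployment_frequency(focused, days=7))     # ~0.14/day: "Low performer"
print(deployment_frequency(fragmented, days=7))  # ~2.86/day: "Elite"
print(value_throughput(focused, days=7))         # identical value per day
print(value_throughput(fragmented, days=7))      # identical value per day
```

The fragmented team scores twenty times higher on the dashboard while delivering exactly the same value per day. Any metric that cannot tell these two teams apart is measuring activity, not progress.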

This behavior creates an illusion of momentum that masks a deep-seated stagnation. Each deployment carries an inherent tax—a cognitive context switch, a CI/CD pipeline run, and a potential point of failure. When we prioritize frequency over substance, we force our systems into a state of perpetual churn. The overhead of managing these micro-releases often outweighs the value they provide to the end-user. It is a factory line moving at double speed to produce empty boxes.

High-performing teams should be measured by the distance they move the needle, not the number of times they pull the lever. A team that ships one monumental, perfectly executed feature once a week is infinitely more valuable than a team that ships thirty-five bug fixes for self-inflicted wounds. We have forgotten that 'deploy' is a transitive verb. You are supposed to deploy something of value, not just deploy for the sake of the deployment itself.

Microservices Are the Ultimate Metric-Gamer's Weapon

The rush toward microservices was never purely about scalability or organizational decoupling. It was, in many dark corners of the enterprise, a way to game the DORA system. In a monolith, a deployment is a singular, heavy event. In a distributed web of three hundred services, you can 'deploy' every five minutes by tweaking a configuration file in a non-critical utility. This artificially inflates the 'High Performing' status of the organization while the system becomes exponentially more fragile.

We have weaponized architectural complexity to satisfy a management dashboard. By decomposing systems into granular, interdependent shards, we have created a maintenance nightmare. Each shard requires its own pipeline, its own monitoring, and its own lifecycle management. The 'deployment frequency' looks fantastic, but the 'Lead Time for Changes' for a cross-cutting feature has actually increased because of the coordination hell required to synchronize twenty different services.

This architectural deformation is a direct result of metric-driven development. Engineers are smart; they will always find the path of least resistance to their performance bonuses. If the bonus depends on how many times they hit the 'merge' button, they will build a system that requires them to hit it constantly. We are building houses of cards made of glowing glass circuit boards, praying the wind doesn't blow while we point at our 100% green dials.

The Lead Time for Changes Lie

Lead Time for Changes is touted as a measure of efficiency, but it is a hollow number. It usually measures the time from the first commit to the code hitting production. This definition conveniently ignores the three months of bureaucratic planning, design reviews, and 'alignment meetings' that happened before that first commit was ever typed. It is a measurement of the sprint, not the marathon, and it encourages a short-termism that is lethal to long-form innovation.
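The gap between the two clocks is easy to see in code. This is a minimal sketch with hypothetical dates, contrasting the commit-to-production window that DORA measures against the full cycle from the moment the business need was identified:

```python
from datetime import datetime

def dora_lead_time(first_commit: datetime, deployed: datetime) -> float:
    """What DORA measures: first commit to production, in days."""
    return (deployed - first_commit).total_seconds() / 86400

def full_cycle_time(need_identified: datetime, deployed: datetime) -> float:
    """What it omits: the planning and 'alignment' phase before any code exists."""
    return (deployed - need_identified).total_seconds() / 86400

# Hypothetical timeline for illustration.
need = datetime(2024, 1, 2)    # business need identified
commit = datetime(2024, 4, 1)  # first commit, after three months of meetings
deploy = datetime(2024, 4, 2)  # shipped one day later

print(dora_lead_time(commit, deploy))  # 1.0 day: "Elite" on the dashboard
print(full_cycle_time(need, deploy))   # 91.0 days: the actual marathon
```

A one-day lead time on a ninety-one-day cycle is a rounding error wearing a medal. The metric is green precisely because it starts the stopwatch after the slowest leg of the race.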

When we focus exclusively on the speed of the pipeline, we neglect the quality of the thought process. Speed is useless if you are running in the wrong direction. The pressure to keep lead times low discourages deep architectural work that requires long periods of uninterrupted focus. It favors 'low-hanging fruit' and iterative tweaks over the difficult, structural changes that actually define a platform's longevity. We are optimizing the plumbing while the house is built on a swamp.

Furthermore, this metric ignores the 'Discovery' phase entirely. A team can have a five-minute lead time for a change that nobody actually wanted. In this scenario, the metric is green, the manager is happy, and the business is losing money. We must stop pretending that the speed of the conveyor belt is the same as the quality of the product at the end of it. Efficiency without efficacy is just a faster way to fail.

Technical Sovereignty Requires Meaningful Infrastructure

To escape the DORA trap, organizations must reclaim their technical sovereignty. This means moving away from the 'black box' mentality of serverless abstractions that hide the true cost of operation and moving toward predictable, high-performance infrastructure. When you own the metal and the strategy, you stop worrying about how many times you can deploy a function and start worrying about how that infrastructure serves the business. Vultr provides the kind of transparent, high-performance compute that allows teams to focus on actual throughput rather than metric-chasing.

Infrastructure should be a silent partner, not a metric-generating engine. The current trend of adding layers of abstraction just to make the 'Change Failure Rate' look better by isolating failures is a coward's game. It adds latency, cost, and complexity. A sovereign engineer knows that a stable, well-understood monolith on high-performance cloud compute is often superior to a 'high-performing' DORA-compliant mess of lambda functions and service meshes.

We need to get back to the basics of engineering: latency, throughput, and reliability. These are physical realities, not management abstractions. By simplifying the stack and using providers that offer raw power without the 'tax of complexity,' we can redirect engineering talent away from pipeline maintenance and back toward product development. The goal is a system that works, not a dashboard that lies.

The Hidden Tax of Cognitive Load and Maintenance

Every 'deployment' is a heartbeat of a system that is becoming increasingly difficult to understand. The cognitive load required to manage a high-frequency deployment environment is staggering. Engineers are no longer building; they are janitors of their own automation. They spend their days debugging why a 'Green' deployment in service A broke a 'Green' deployment in service B three layers deep in the call stack.

This environment breeds burnout and cynicism. Senior engineers, the ones who have seen this cycle before, realize that the 'High Performing' metrics are a facade. They see the technical debt piling up in the corners where the DORA metrics don't reach. They see the documentation rotting because there is no metric for 'Clarity of Thought.' We are creating a generation of engineers who are experts in YAML and CI/CD triggers but have no idea how to design a resilient data schema.

Maintenance is the silent killer of the modern enterprise. By optimizing for deployment frequency, we are essentially increasing the surface area of what needs to be maintained. Each deployment is a new version that must be tracked, each microservice a new dependency that must be patched. We are drowning in our own 'agility.' The cost of keeping the lights on in a 'High Performing' DORA shop is often triple that of a 'Low Performing' shop that actually delivers value.

Reclaiming Value from the Dashboard Janitors

It is time for CTOs to conduct a reality audit. Ignore the DORA dashboards for ten minutes and look at the product roadmap instead. How many of the features promised six months ago are actually in the hands of users and functioning correctly? If your deployment frequency is high but your user growth is flat and your bug report count is climbing, you are not high-performing. You are a victim of the DORA hallucination.

We must redefine performance based on business outcomes, not pipeline telemetry. A successful engineering team is one that builds a system that is easy to reason about, cheap to operate, and quick to adapt to market needs. None of these things can be measured by how many times you pushed to production last Tuesday. We need to reward simplicity over complexity and results over activity.

Kill the vanity metrics. If a metric can be gamed, it will be gamed. Instead, focus on the 'Time to Value'—the time from an actual business need being identified to that need being met in a way that doesn't require five hotfixes the following day. This requires a cultural shift away from the worship of tools and back to the respect of the craft. Stop being a janitor for your dashboard and start being an architect for your users.

The Death of the Generalist Engineer

The hyper-focus on deployment metrics has led to the specialization of the 'Pipeline Engineer,' a role that shouldn't need to exist. We have carved out the soul of engineering and replaced it with a series of narrow, metric-focused silos. The generalist who understands the whole stack from the kernel to the UI is becoming a rare breed, hunted to extinction by the demand for 'High Frequency' specialists who can optimize a Jenkinsfile but can't fix a memory leak.

This loss of holistic understanding is why our systems are becoming more fragile despite our 'High Performing' status. When no one understands the whole system, the system is in control, not the engineers. We are building machines that we can no longer maintain without an army of specialists, each focused on their own tiny slice of the DORA pie. It is a Tower of Babel built on top of a Kubernetes cluster.

We must return to a state where the engineer is responsible for the value, not just the commit. This means breaking the metrics-driven feedback loops that reward activity over impact. It means valuing the 'Low Frequency' engineer who prevents a catastrophic failure over the 'High Frequency' engineer who triggers a hundred deployments of fluff. The future of engineering belongs to those who can see through the hallucination and build something that lasts.
