
Technical Debt: Measure It Before You Fix It

Before modernizing, you need to understand what you're modernizing. A practical guide to quantifying technical debt and defining a data-driven remediation roadmap.

“We’re going to refactor, rewrite, migrate.” When someone mentions technical debt, the first instinct is often to act immediately. That’s a mistake. Before fixing it, you need to measure it: rigorously, not intuitively.

This step is the one most teams skip, out of impatience or lack of tooling. Yet it is the one that determines the relevance of everything that follows.

What “Technical Debt” Actually Covers

The term is overused and covers very different realities:

  • Poorly structured or undocumented code
  • Obsolete or unmaintained dependencies
  • Tightly coupled architecture that makes changes expensive
  • Insufficient or absent tests
  • Manual and fragile deployment processes
  • Business rules buried in code without traceability

These categories don’t have the same impact, nor the same remediation cost. Treating them all with the same weight is a management error.

The Four Dimensions to Evaluate

1. Cyclomatic Complexity

Cyclomatic complexity measures the number of independent logical paths through a function or module. A complexity above 10 in a single function is a warning sign; above 20, it is a real problem.

Tools like SonarQube, CodeClimate or Lizard calculate this metric automatically across the entire codebase. The goal is not to reach 1 everywhere, but to identify the hotspots: high-complexity modules that are also frequently modified.

It is the intersection of complexity × change frequency that indicates real priority.

2. Dependency Analysis

An aging system accumulates dependencies, some of which are:

  • No longer maintained (uncorrected CVEs)
  • Blocking major framework updates
  • Creating version conflicts impossible to resolve without a rewrite

How to evaluate:

  • npm audit / pip audit / mvn dependency:analyze depending on the ecosystem
  • Check last commit dates on critical dependencies
  • License analysis (a legal risk that is often underestimated)

Build a simple dashboard with five columns: dependency, current version, latest stable version, open CVEs, end-of-support date.
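Such a dashboard can start as a few lines of Python. The entries below are illustrative, not real audit data; in practice you would populate them from the output of `npm audit` or `pip audit`.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DependencyStatus:
    name: str
    current: str
    stable: str
    open_cves: int
    eol: Optional[date]  # end-of-support date, None if unknown

    @property
    def risk(self) -> str:
        """Coarse risk bucket derived from the dashboard columns."""
        if self.open_cves > 0:
            return "critical"
        if self.eol is not None and self.eol <= date.today():
            return "outdated"
        if self.current != self.stable:
            return "behind"
        return "ok"

# Illustrative entries; feed real data from your ecosystem's audit tool.
deps = [
    DependencyStatus("requests", "2.19.0", "2.32.3", 2, None),
    DependencyStatus("django", "4.2.11", "4.2.11", 0, date(2026, 4, 1)),
]
for d in deps:
    print(f"{d.name:<10} {d.current:>8} -> {d.stable:<8} CVEs={d.open_cves} [{d.risk}]")
```

Even this minimal form makes the conversation concrete: each row is a decision to take, defer, or accept.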

3. Test Coverage

Coverage is not a quality metric in itself: a badly written test proves nothing. But the absence of tests on critical modules is a reliable indicator of regression risk.

What matters:

  • Coverage of critical paths (payments, authentication, business calculations)
  • Existence of integration tests, not just unit tests
  • Test stability over time (a “flaky” test is worse than no test)

4. Coupling and Cohesion

A well-architected system has loosely coupled modules (few inter-module dependencies) and highly cohesive ones (each module does one thing and does it well).

Static analysis tools measure:

  • The number of incoming/outgoing dependencies per module (fan-in/fan-out)
  • Dependency cycles (A depends on B which depends on A)
  • Module size (a 10,000-line module is almost always a problem)
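The fan-in/fan-out and cycle checks above can be sketched from a module dependency map. The map below is hypothetical; real tools extract it from imports or build metadata.

```python
from collections import defaultdict

# Hypothetical module dependency map: module -> modules it depends on.
deps = {
    "auth":      {"db", "utils"},
    "billing":   {"db", "auth", "utils"},
    "reporting": {"db", "billing"},
    "db":        {"utils"},
    "utils":     set(),
}

def fan_metrics(graph):
    """Fan-out = outgoing dependencies; fan-in = who depends on you."""
    fan_out = {m: len(targets) for m, targets in graph.items()}
    fan_in = defaultdict(int)
    for targets in graph.values():
        for t in targets:
            fan_in[t] += 1
    return fan_in, fan_out

def find_cycles(graph):
    """Detect dependency cycles with a depth-first search."""
    cycles, state = [], {}          # state: 1 = visiting, 2 = done
    def visit(node, path):
        state[node] = 1
        for nxt in graph.get(node, ()):
            if state.get(nxt) == 1:            # back-edge: cycle found
                cycles.append(path[path.index(nxt):] + [nxt])
            elif state.get(nxt) != 2:
                visit(nxt, path + [nxt])
        state[node] = 2
    for node in graph:
        if state.get(node) != 2:
            visit(node, [node])
    return cycles

fan_in, fan_out = fan_metrics(deps)
print(fan_in["utils"], fan_out["billing"])   # 3 3
print(find_cycles(deps))                     # [] (no cycles in this map)
```

A module with both high fan-in and high fan-out is the classic refactoring trap: everything depends on it, and it depends on everything.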

The Hotspot Method: Crossing Complexity and Change

[Figure: the hotspot method crossing complexity and change frequency, yielding four prioritization quadrants]

Developed by Adam Tornhill (CodeScene), the hotspot method is the most operationally useful for prioritization.

Principle: complex code that never changes is not urgent. Complex code that changes often is a daily risk.

Protocol:

  1. Extract the Git history for the last 12 months: git log --since="12 months ago" --format=format: --name-only | grep -v '^$' | sort | uniq -c | sort -rg
  2. Cross with complexity scores per file
  3. Files at the top of both rankings are your priority hotspots

This analysis takes half a day and radically changes the conversation about priorities.
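The crossing step can be sketched in a few lines of Python. The change counts and complexity scores below are hypothetical; in practice they come from the git log command above and from your complexity tool.

```python
# Hypothetical inputs: change counts from `git log`, complexity per file
# from a tool like Lizard or SonarQube.
changes = {"auth.py": 48, "billing.py": 35, "legacy_batch.py": 4, "utils.py": 21}
complexity = {"auth.py": 27, "billing.py": 12, "legacy_batch.py": 41, "utils.py": 6}

def hotspots(changes, complexity):
    """Score = complexity x change frequency; highest (riskiest) first."""
    scored = {f: changes[f] * complexity.get(f, 0) for f in changes}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

for path, score in hotspots(changes, complexity):
    print(f"{score:5}  {path}")
# auth.py comes out on top: very complex AND heavily modified.
# legacy_batch.py, despite the highest complexity, ranks below it.
```

Note how the ranking differs from a pure complexity ranking: the stable-but-complex batch module drops down the list, exactly as the hotspot principle predicts.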

What the Audit Should Produce

A good technical debt audit produces three deliverables:

A Visual Map

A dependency graph between modules, color-coded by complexity. Not to look pretty, but to allow non-developers (management, business) to understand where the risk zones are.

A Criticality Score Per Module

A simple grid: complexity × coupling × change frequency × business criticality. Each module receives a score. The score guides refactoring decisions.

Module       | Complexity | Coupling | Frequency | Business Criticality | Global Score
Auth service | High       | Strong   | High      | Critical             | 🔴 Urgent
Reporting    | Medium     | Low      | Low       | Standard             | 🟡 Monitor
Batch legacy | Very high  | Medium   | Low       | Peripheral           | 🟡 Plan
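One way to turn the qualitative grid into a sortable number is to map each rating to an ordinal weight and multiply across the four dimensions. The weights below are illustrative assumptions, not a standard; tune them to your context.

```python
# Illustrative ordinal weights for the grid's qualitative ratings.
SCALE = {"low": 1, "medium": 2, "high": 3, "very high": 4,
         "peripheral": 1, "standard": 2, "critical": 4, "strong": 3}

def module_score(complexity, coupling, frequency, business):
    """Multiply the four dimensions so that any single high rating
    cannot be averaged away by low ratings elsewhere."""
    return (SCALE[complexity] * SCALE[coupling]
            * SCALE[frequency] * SCALE[business])

print(module_score("high", "strong", "high", "critical"))   # Auth service: 108
print(module_score("medium", "low", "low", "standard"))     # Reporting: 4
```

Multiplication rather than addition is a deliberate choice here: a module that is critical on even one axis stays visible in the ranking.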

A Debt Cost Estimate

Often ignored, yet it is what allows you to defend a modernization budget. A reasonable approximation combines:

  • Additional development time induced by complexity (measured on recent sprints)
  • Incident costs related to fragile areas (resolution time × frequency)
  • Opportunity cost: features not delivered because of the debt
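The first two items translate directly into a back-of-envelope annual figure. Every number below is a hypothetical placeholder; replace them with values measured on your own sprints and incident log.

```python
# Back-of-envelope annual debt cost. All figures are hypothetical
# placeholders to be replaced with measured values.
extra_dev_hours_per_sprint = 30   # overhead observed on recent sprints
sprints_per_year = 24
incidents_per_year = 18           # incidents traced to fragile areas
avg_resolution_hours = 6
hourly_rate = 80                  # fully loaded cost per hour

dev_overhead = extra_dev_hours_per_sprint * sprints_per_year * hourly_rate
incident_cost = incidents_per_year * avg_resolution_hours * hourly_rate

print(f"Dev overhead: {dev_overhead:,}/year")
print(f"Incidents:    {incident_cost:,}/year")
print(f"Total (excl. opportunity cost): {dev_overhead + incident_cost:,}/year")
```

The opportunity cost is harder to quantify and is deliberately left out of the arithmetic; state it qualitatively alongside the figure rather than inventing a number.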

Pitfalls to Avoid

Measuring without business context. A high-complexity module managing inherited tax rules may be intentionally complex. The technical audit must always be crossed with functional knowledge.

Wanting to measure everything. An exhaustive 3-month audit covering 500 modules produces a report no one reads. A targeted analysis of the 20% of modules that concentrate 80% of the risk is far more useful.

Confusing code coverage and quality. 90% coverage with tests that test nothing real is worse than 40% coverage on the real critical paths.

Forgetting organizational debt. Technical debt often has an organizational cause: teams too small, pressure on deadlines, absence of code review. Treating the symptom without addressing the cause means recreating debt.

Conclusion

Measuring technical debt is not an end in itself. It is the prerequisite for a conversation based on facts rather than intuitions. It is what allows you to defend a budget, prioritize workstreams, and define a realistic roadmap.

Without measurement, modernization is a faith decision. With rigorous measurement, it becomes an engineering decision.

Want to frame a technical debt audit on your IS? Let’s talk.