Performance Analysis · Q2 2025 – Q2 2026
Performance is measured via Complexity Performance — the rate at which engineers deliver complex features. Each commit is analyzed for the complexity of the change and categorized into Growth (new value creation), Maintenance (sustaining existing systems), and Fixes (rework / throwaway code). Higher performance means complex features are shipped faster.
Cumulative contribution over time. Watch developers race as positions shift month by month.
Commit activity distribution by hour and day of week across the entire organization.
Executive Summary
Over the analyzed period, Microsoft delivered a total complexity performance of 3,492, a −49% change from Q2'25 to Q2'26. This is not just a hiring story: the team shrank 35%, while performance fell 49%.
Performance per engineer fell 23% (4.4 → 3.4). New value creation (Growth) accounts for 37% of total performance, while waste is held at 19%.
There are 3 signals worth watching — flagged below with context.
Performance / Engineer
−23%
4.4 → 3.4
Total Performance
−49%
619.8 → 314.1
Cost / Perf Unit
+29%
Effective cost change
Fixes
+1pp
14.6% → 15.5%
Team Size
−35%
141 → 92 contributors
Codebase
12
Repositories in org
Shows how engineering performance scales relative to team growth. Left axis shows total performance score, right axis shows active contributor count. The gap between curves represents productivity gains — more delivered per person, not just more people. Unit: Engineering Throughput Value (ETV).
Cost per Performance Unit
+29%
Because performance per engineer fell 23%, each unit of engineering performance now costs approximately 29% more than in Q2'25. This is a directional estimate (the exact figure depends on fully-loaded engineer cost), but the direction is unambiguous.
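The arithmetic behind the headline figure can be checked from the per-engineer numbers quoted elsewhere in this report. A minimal sketch, assuming fully-loaded cost per engineer is flat across the period:

```python
# Cost per unit of performance is proportional to 1 / (performance per engineer),
# assuming fully-loaded cost per engineer is unchanged between quarters.
perf_per_eng_q2_25 = 4.4    # from the summary
perf_per_eng_q2_26 = 3.41   # from the quarterly table

cost_change = perf_per_eng_q2_25 / perf_per_eng_q2_26 - 1
print(f"{cost_change:+.0%}")  # +29%
```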
Effective Capacity Change
−21 engineers
At Q2'26 productivity levels, the current 92-person team delivers the performance equivalent of 71 engineers at the Q2'25 baseline. That's roughly 21 engineers' worth of capacity lost to the productivity decline, independent of the headcount reduction.
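The capacity-equivalence arithmetic, as a sketch using the headcount and per-engineer figures quoted in this report:

```python
# How many Q2'25-baseline engineers the current team is equivalent to.
team_q2_26 = 92
perf_per_eng_q2_26 = 3.41
perf_per_eng_q2_25 = 4.4   # baseline

equivalent_engineers = team_q2_26 * perf_per_eng_q2_26 / perf_per_eng_q2_25
capacity_delta = equivalent_engineers - team_q2_26

print(round(equivalent_engineers))  # 71 baseline-equivalent engineers
print(round(capacity_delta))        # -21: capacity change from productivity shift
```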
Stacked bars show total complexity performance split into Growth (new value), Maintenance (sustaining systems), and Fixes (rework). The yellow line overlays performance per contributor — rising line means each engineer is delivering more, regardless of team size changes. Unit: Engineering Throughput Value (ETV).
How engineering effort is distributed: Growth % is new feature development, Maintenance % is sustaining existing systems, Fixes % is rework and throwaway code. A healthy team has rising Growth, stable Maintenance, and declining Fixes. Sudden shifts may indicate a change in priorities or a technical debt paydown.
Automatically detected anomalies — sharp rises in waste, sudden performance drops, or unusual shifts in composition. Each signal includes context to help determine whether it reflects a real concern or an expected transition (e.g. onboarding, migration, tech debt cleanup).
Waste percentage rose sharply in Q4'25, to 21.3% from 15.8% in Q3'25. This may reflect a transition cost, tooling change, or technical debt paydown. Worth investigating the root cause.
Total performance fell from 1287.63 to 314.05. This may indicate reduced activity, scope changes, or transitional costs.
Maintenance increased from 34.8% to 45.4%. The team may be increasingly servicing existing systems. Worth monitoring to ensure new value creation doesn't stall.
Monthly performance per active contributor — the most direct measure of individual productivity over time. An upward trend means engineers are delivering more complex work per person. The forward signal below extrapolates the current run rate, but is not a forecast — it shows the floor if current productivity holds. Unit: Engineering Throughput Value (ETV).
Forward signal: Apr'26 hit 3.4 performance per contributor — the highest recent month on record. If the current run rate holds, next quarter total performance is on track to continue growing. That's not a forecast. It's a floor, assuming no regression.
The raw numbers behind the charts: commits analyzed, active contributors, total performance, performance per developer, and the Growth / Maintenance / Fixes split for each quarter. The QoQ column shows quarter-over-quarter change in total performance — green means growth, red means a decline.
| Quarter | Commits | Contributors | Total Perf | Perf / Dev | Growth % | Maintenance % | Fixes % | QoQ |
|---|---|---|---|---|---|---|---|---|
| Q2'25 | 4,191 | 141 | 619.81 | 4.40 | 34.8% | 50.6% | 14.6% | — |
| Q3'25 | 4,636 | 141 | 613.50 | 4.35 | 36.9% | 47.3% | 15.8% | −1% |
| Q4'25 | 4,826 | 138 | 657.02 | 4.76 | 30.7% | 48.0% | 21.3% | +7% |
| Q1'26 | 7,428 | 138 | 1,287.63 | 9.33 | 41.9% | 34.8% | 23.3% | +96% |
| Q2'26 | 1,560 | 92 | 314.05 | 3.41 | 39.0% | 45.4% | 15.5% | −76% |
Are we scaling efficiently? PLI = Performance Growth % / Headcount Growth %. A PLI > 1.0 means output grows faster than team size — good leverage. < 1.0 indicates diminishing returns from hiring.
Nominally above 1.0, but with both performance and headcount declining this period the ratio overstates leverage: output fell faster than the team shrank.
+0.27 QoQ
PLI Trend (Q2'26)
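A minimal PLI computation per the formula above. Note the caveat: when both growth rates are negative the ratio comes out positive even though output is shrinking, so the sign of the inputs must be read alongside the ratio.

```python
def pli(perf_growth_pct: float, headcount_growth_pct: float) -> float:
    """Performance Leverage Index: performance growth % / headcount growth %."""
    return perf_growth_pct / headcount_growth_pct

# Q2'25 -> Q2'26: performance -49%, headcount -35% (figures from the summary).
# Both inputs are negative, so the ratio > 1 does NOT indicate good leverage here.
print(round(pli(-49, -35), 2))  # 1.4
```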
Ratio of Growth output to Debt work (Maintenance + Fixes). A ratio ≥ 2:1 (ideal) means for every unit of debt work, 2 units of new value are created. Below 1:1 is a danger zone — tech debt dominates.
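The ratio for the latest quarter can be computed from the Q2'26 split in the quarterly table; a sketch:

```python
# Growth-to-Debt ratio, Q2'26: Growth / (Maintenance + Fixes).
growth_pct, maintenance_pct, fixes_pct = 39.0, 45.4, 15.5

ratio = growth_pct / (maintenance_pct + fixes_pct)
print(f"{ratio:.2f}:1")  # 0.64:1 -- below the 1:1 danger threshold
```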
Cost per performance unit (headcount / total performance × 100) per quarter. A declining trend means the same cost produces more output. Color: red = cost increase, green = cost decrease.
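The per-quarter cost index follows directly from the headcount and total-performance columns of the quarterly table. A sketch; "cost" here is a headcount proxy, not dollars:

```python
# Cost index = headcount / total performance * 100, per quarter.
quarters = [
    ("Q2'25", 141, 619.81),
    ("Q3'25", 141, 613.50),
    ("Q4'25", 138, 657.02),
    ("Q1'26", 138, 1287.63),
    ("Q2'26", 92, 314.05),
]
for name, headcount, total_perf in quarters:
    # Rising value = each unit of output costs more headcount.
    print(name, round(headcount / total_perf * 100, 1))
```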
Which teams deliver the most value per person? X = total team performance, Y = Growth %, bubble size = team size. Use play button to animate through quarters and track team evolution over time.
Where is engineering investment actually going? Flow: Organization → Repositories → Work Type (Grow / Maintenance / Fixes). Latest quarter. Wider flows = more investment.
Who has the strongest momentum right now? Cumulative momentum = Σ(delta_perf × grow%) / (1 + waste%). Use the play button to animate the race. Top 10 developers.
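The momentum formula, read literally as written above (the waste damping applied to the cumulative sum); the monthly inputs below are illustrative only, not real developer data:

```python
def momentum(deltas, grows, waste_frac):
    """Cumulative momentum = sum(delta_perf * grow%) / (1 + waste%).
    deltas: per-month performance deltas; grows: per-month grow fractions;
    waste_frac: current waste share as a fraction (e.g. 0.155)."""
    return sum(d * g for d, g in zip(deltas, grows)) / (1 + waste_frac)

# Illustrative inputs only -- not real data from this report.
print(round(momentum([1.2, 0.8, 1.5], [0.4, 0.5, 0.35], 0.155), 2))  # 1.22
```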
Are there predictable seasonal patterns in productivity? 12-axis radar shows multi-year average vs. current year performance. Deviations above average = above-seasonal productivity.
Reclassifies engineering effort based on bug attribution. Commits that introduced bugs are retrospectively counted as poor investments.
Investment Quality reclassifies engineering effort using bug attribution data. Commits identified as buggy origins (commits that introduced bugs later fixed by someone) have their Growth and Maintenance time moved into the Wasted Time category. Fix commits remain counted as productive, even when they themselves introduced a bug. All other commits keep their standard classification: Growth is productive, Maintenance is maintenance, and Fixes count as productive.
The standard model classifies commits as Growth, Maintenance, or Fixes. Investment Quality adds a quality lens: a commit that introduced a bug is retrospectively counted as a poor investment — the engineering time spent on it was wasted because it ultimately required additional fix work. Fix commits (Fixes in the standard model) are reframed as productive, because fixing bugs is valuable work.
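A sketch of the reclassification rule described above; the `category` and `introduced_bug` inputs are an assumed commit shape for illustration, not the product's actual schema:

```python
def investment_quality(category: str, introduced_bug: bool) -> str:
    """Map a commit's standard class ('grow' / 'maintenance' / 'fixes')
    to its investment-quality class, per the rules described above."""
    if category == "fixes":
        return "productive"   # fixing bugs is reframed as valuable work
    if introduced_bug:
        return "wasted"       # buggy-origin grow/maintenance time is wasted
    if category == "maintenance":
        return "maintenance"
    return "productive"       # clean grow commits

print(investment_quality("grow", introduced_bug=True))    # wasted
print(investment_quality("fixes", introduced_bug=False))  # productive
```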
Currently computed client-side from commit and bug attribution data. Ideal server-side endpoint:

```
POST /v1/organizations/{orgId}/investment-quality
Content-Type: application/json
```

Request (the `|` marks alternative allowed values, not literal JSON):

```
{
  "startTime": "2025-01-01T00:00:00Z",
  "endTime": "2025-12-31T23:59:59Z",
  "bucketSize": "BUCKET_SIZE_MONTH",
  "groupBy": ["repository_id" | "deliverer_email"]
}
```

Response:

```json
{
  "productivePct": 74,
  "maintenancePct": 18,
  "wastedPct": 8,
  "buckets": [
    {
      "bucketStart": "2025-01-01T00:00:00Z",
      "productive": 4.2,
      "maintenance": 1.8,
      "wasted": 0.6
    }
  ]
}
```