Engineering Process Maturity Model

Software Development KPA Maturity Evaluation Framework

A CMMI-Inspired, Agile Software Development Process Maturity Evaluation Framework — Built by Golden Section VC for Pre- and Post-Investment Assessment

Isaac Shi, Co-Founder & GP of Golden Section VC
February 12, 2024

Introduction: The Hidden Risk Inside Every Software Investment

When a venture capital firm evaluates a B2B software company, the due diligence process typically scrutinizes revenue multiples, churn rates, net retention, and market size. What it rarely scrutinizes with equal rigor is the engineering process — the invisible operational infrastructure that determines whether the product can scale, whether the team can ship predictably, and whether the codebase will survive the transition from startup to growth company.

Yet process maturity is one of the most reliable predictors of long-term company performance. A team with well-defined, consistently executed development processes can absorb headcount growth, survive leadership changes, and compress time-to-market as competition intensifies. A team without these foundations tends to accumulate technical debt, miss release windows, and experience catastrophic quality failures precisely at the moments — fundraising rounds, enterprise sales cycles, acquisition discussions — when reliability matters most.

The core problem with traditional technical due diligence:

Most technical due diligence focuses on what exists — architecture diagrams, code samples, security scans, infrastructure costs — and rarely examines how the team builds: the processes, disciplines, and institutional habits that determine whether what exists is repeatable and improvable, or accidental and fragile.

At Golden Section VC, our investment focus is concentrated in B2B software companies at the Series A and growth stage. Over years of pre-investment diligence and post-investment portfolio support, we observed a consistent pattern: the companies that outperformed their peers operationally were not necessarily those with the most sophisticated technology — they were those with the most disciplined engineering processes. Companies that underperformed, conversely, often had compelling products but fragile operational foundations that became liabilities at scale.

This observation drove us to develop a structured methodology for evaluating software development maturity — one rigorous enough to be predictive, yet practical enough to apply to a 10-person seed-stage team and a 200-person Series B organization alike. The result is our Key Process Area (KPA) Maturity Evaluation Framework.

  • 62% of professional developers cite technical debt as their top workplace frustration (Stack Overflow Developer Survey, 2024)
  • 87% of organizations using structured process improvement achieve their objectives (CMMI Institute Technical Report, 2025)
  • ~33% of developer time is lost to technical debt, bugs, and rework in immature organizations (McKinsey and other industry surveys, 2024)

CMMI: A Brief Primer

The Capability Maturity Model Integration (CMMI) has its intellectual roots in the work of Watts Humphrey, often called the "father of software quality," who spent most of his career at IBM before joining the Software Engineering Institute (SEI) at Carnegie Mellon University. Humphrey adapted Philip Crosby's quality management concepts to software engineering, and his foundational Capability Maturity Model (CMM) was published by the SEI in the early 1990s. The SEI later expanded CMM into CMMI in the early 2000s, integrating separate models for software development, systems engineering, and supplier management into a single unified framework.

The framework was born from a practical crisis. The U.S. Department of Defense was spending billions of dollars on software systems that were delivered late, over budget, and riddled with defects. The government needed a way to objectively assess whether a software contractor's processes were reliable enough to execute large, complex programs. The Capability Maturity Model (CMM) provided that assessment mechanism — a structured, evidence-based way to evaluate process maturity across five progressive levels.

The Original CMM Philosophy

Watts Humphrey's core insight was deceptively simple: the quality of a software product is largely determined by the quality of the process used to develop it. Organizations that rely on the heroics of individual developers produce unpredictable outcomes. Organizations with disciplined, documented, and measured processes produce consistent outcomes. Process improvement, Humphrey argued, was the lever that transformed individual talent into organizational capability.

CMMI defines five maturity levels, each characterizing the degree to which an organization's processes are defined, managed, and optimized:

Level 1 — Initial: Processes are ad hoc and reactive, and outcomes depend on individual heroics. Work gets completed, but it is frequently delayed and over budget.
Level 2 — Managed: Projects are planned, performed, measured, and controlled at the project level, though practices remain inconsistent across teams.
Level 3 — Defined: Organization-wide standards provide guidance across projects. Processes are proactive rather than reactive.
Level 4 — Quantitatively Managed: The organization is data-driven, with quantitative performance improvement objectives. Processes are measured and controlled.
Level 5 — Optimizing: The organization focuses on continuous process improvement, using quantitative feedback to pilot and deploy innovations.

CMMI organizes its requirements into Key Process Areas (KPAs) — clusters of related practices that, when implemented together, achieve a set of goals considered important to process maturity. Each KPA is evaluated against specific goals, activities, and measurements. Think of KPAs as the "chapters" of the maturity story: each one addresses a distinct dimension of how software teams operate, from requirements management and project planning to configuration management, quality assurance, and organizational process improvement.

The CMMI Institute's 2025 performance report shows that 87% of organizations using structured process improvement achieve their stated objectives — a compelling argument for the underlying discipline.

Where CMMI Falls Short for Modern SaaS

CMMI was designed in an era of waterfall development, government contracting, and large-scale defense systems programs. Its strengths — rigorous documentation, formal appraisal processes, comprehensive coverage — are simultaneously its weaknesses when applied to a modern SaaS startup operating in a competitive, fast-moving market.

⚠ CMMI FRICTION POINT #1
Certification Overhead

Formal CMMI appraisals are expensive, time-consuming, and process-heavy. A typical Level 3 appraisal requires months of preparation, extensive documentation artifacts, and third-party auditors. For a 15-person startup, this is a category error.

⚠ CMMI FRICTION POINT #2
Agile and DevOps Misalignment

CMMI's process areas were designed for sequential, plan-driven development. Mapping them to Agile sprints, continuous delivery pipelines, and trunk-based development requires significant interpretation — and often produces friction rather than insight.

⚠ CMMI FRICTION POINT #3
No Cloud, AI, or DevOps Coverage

CMMI predates cloud-native architecture, Infrastructure as Code, ML pipelines, and modern observability. The framework has no native language for evaluating Kubernetes maturity, CI/CD pipeline quality, APM discipline, or AI model governance.

⚠ CMMI FRICTION POINT #4
Stage-Agnostic Scoring

CMMI applies the same requirements to a seed-stage startup and a Fortune 500 enterprise. Scoring a 12-person company against the same rubric as IBM produces meaningless results — and worse, it discourages process investment by making maturity seem unreachable.

None of this is a criticism of CMMI's intellectual foundations — the core insight that process quality determines product quality remains as valid today as it was in the late 1980s. The problem is one of translation: the original framework needs significant adaptation to be useful as a practical evaluation tool for modern SaaS organizations. This is exactly what Golden Section's KPA framework attempts to provide.

✓ WHAT WE PRESERVED FROM CMMI

The foundational structure of Key Process Areas, the five-level maturity scale, the principle that process improvement is incremental and staged, and the emphasis on measurement and data-driven management — all of these survive intact in our adaptation.

Golden Section's KPA Methodology

Golden Section's KPA Maturity Evaluation Framework adapts CMMI for modern B2B SaaS, cloud-native, and AI-enabled organizations. It was built from direct experience evaluating and supporting portfolio companies across the full investment lifecycle — from seed-stage technical diligence to post-Series B operational maturity programs.

Golden Section Looking Glass
The Tool Behind the Framework

Golden Section built CIO Looking Glass to make this framework operational. Looking Glass is the Development Intelligence platform Golden Section uses to assess portfolio companies' development maturity and generate two core outputs: a Development Proficiency Score (DPS) and a Tech Stack Score (TSS) — objective, data-driven measures benchmarked against stage-appropriate peers.

DPS
Development Proficiency Score

A weighted composite of all 19 KPA maturity scores. Reflects the overall engineering process health of the organization — how reliably and repeatably the team can build, ship, and improve software. Scored 0–100, with stage-appropriate benchmarks for Seed, Series A/B, and Series C+.

TSS
Tech Stack Score

An assessment of the quality, coherence, and scalability of the company's technology choices — from cloud infrastructure and database design to CI/CD tooling and observability stack. Identifies technical debt embedded in tool choices, not just in code.

The framework serves two purposes:

Pre-Investment

During technical due diligence, the KPA framework gives us a structured lens for evaluating whether a company's engineering processes are durable enough to support the growth trajectory implied by the valuation. We score each KPA, weight it by company stage, and produce a maturity profile that surfaces both strengths and critical gaps — informing both investment decisions and the structure of post-investment support plans.

Post-Investment

After investment, the KPA framework becomes a roadmap for systematic process improvement. Portfolio companies use their initial KPA scores as a baseline and work with our operational team to build improvement plans prioritized by business impact. Re-assessments every 6–12 months track progress and identify new priorities as the company scales through different growth stages.

The framework scores 19 Key Process Areas on a 0–4 maturity scale, weighted by the company's current stage:

Early Stage (Seed)

Product, UX, and Architecture KPAs weighted highest. The foundational choices made now will be the hardest to change later.

Growth Stage (Series A/B)

CI/CD, QA, and Security KPAs move to the foreground. Speed and reliability must coexist as team size multiplies.

Scale Stage (Series C+)

APM, DevOps, and Data Warehouse KPAs become the differentiators. Operational excellence separates market leaders from followers.

Maturity Levels: The Scoring Scale

Level 0 — Nonexistent
Description: No defined process. Work is reactive and undocumented. Each project is a new improvisation. Outcomes depend on who is available, not on institutional capability.
Key signal: No written process. No consistent behavior. Chaos concealed by individual heroics.

Level 1 — Initial
Description: Process exists informally in some people's heads or as ad hoc conventions. Inconsistently applied. Limited or no documentation. Success depends on specific individuals.
Key signal: Tribal knowledge. Works when the "right person" is involved. Breaks down during onboarding or key-person departure.

Level 2 — Defined
Description: Documented process exists and is applied across most of the team. Consistently enforced in some areas, inconsistently in others. A written standard, but compliance varies.
Key signal: Written runbooks or wikis exist. Most people follow the process most of the time. Exceptions are common.

Level 3 — Managed
Description: Process is standardized, measured, and regularly reviewed. Metrics are tracked. Process compliance is enforced and visible. Retrospectives drive incremental improvements.
Key signal: Dashboard visibility. Regular process reviews. Measurable KPIs. Exception handling is systematic, not ad hoc.

Level 4 — Optimized
Description: Process is data-driven and automated where possible. Continuous improvement is institutionalized. The organization treats process improvement as a core engineering competency, not an overhead activity.
Key signal: Automated enforcement. Predictive analytics. Process improvement is a first-class product roadmap item.

A practical note on targets: Not every KPA needs to reach Level 4 — and for most early-stage companies, Level 3 across the critical KPAs represents a genuinely strong foundation. The framework is designed to surface the right priorities at each stage, not to impose a single universal standard. The goal is appropriate maturity for stage, not maximum maturity for its own sake.

Critical KPAs: Core Development Maturity

These eight KPAs cover the foundational engineering disciplines that determine whether a product can be built reliably, maintained sustainably, and scaled without catastrophic technical debt. They carry the highest weight for early-stage companies and remain critical at every subsequent stage.

1
Product Management Process

Evaluates the rigor and data-orientation of the product management function: customer discovery and validation, quality and completeness of PRDs, roadmap planning methodology, data-driven prioritization, and the feedback loops used to connect product decisions to user behavior.

Why it matters: At Level 0–1, teams build features reactively — responding to the loudest customer, the last sales call, or the CEO's intuition. At Level 3–4, product teams run structured discovery cycles, validate with data before committing engineering resources, and maintain a quantitatively prioritized backlog. The difference in engineering efficiency typically amounts to a factor of 2–3× in productive output.
Maturity hallmark: From reactive feature-building → to data-driven roadmap governance with measurable outcome tracking.
2
UX & User Interaction Design Process

Evaluates whether UX research, wireframing, prototyping, usability testing, and design system consistency are embedded into the product development cycle — or bolted on as an afterthought post-engineering.

Why it matters: UX rework is among the most expensive forms of engineering waste. Features built without validated UX requirements often require 40–60% of their development investment to be re-spent on redesigns after user feedback. Mature UX processes front-load the cheap work (research, prototyping) and compress the expensive work (engineering, QA).
Maturity hallmark: UX is embedded before engineering begins, not appended after delivery.
3
Architecture & Tech Stack
Tech Stack Selection & Architectural Design

Evaluates the intentionality of technology choices: MVP-appropriate vs. prematurely scalable architecture, cloud-native design principles, architectural documentation, performance planning, and the discipline of explicit tradeoff analysis.

Why it matters: Architecture decisions are the most expensive mistakes to reverse. A monolith chosen deliberately for speed-to-market can be migrated; an accidental monolith that grew without a plan is an operational anchor. We evaluate whether architectural decisions are documented, reasoned, and revisited on a defined cadence.
Maturity hallmark: Architecture evolves intentionally with scale, not reactively under crisis.
4
Database Design · CRITICAL
Database Schema Design & Maintenance

Evaluates normalization discipline, referential integrity, schema governance processes, migration strategy, and the overall data modeling culture — including how schema changes are proposed, reviewed, tested, and deployed.

⚠ HIGHEST-RISK KPA FOR SCALING COMPANIES
Database schema is the single hardest artifact to change after customers are live on production data. A poorly normalized schema adopted at Series A will still be constraining engineering capacity at Series C. Unlike application code, which can be refactored incrementally, schema migrations on active databases with millions of rows require careful planning, backward compatibility discipline, and often multi-week migration windows.
Maturity hallmark: Schema evolution strategy with backward compatibility controls and documented migration governance.
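
The backward-compatibility discipline described above is usually implemented as an "expand and contract" (parallel change) migration. The sketch below is illustrative only: the table and column names are hypothetical and the exact statements vary by database engine, but it shows the shape of a plan that lets old and new application versions run against the same schema during a rollout.

```python
# Illustrative "expand and contract" migration plan for a live table.
# Table and column names are hypothetical; exact SQL varies by database engine.
# Each phase ships separately, so old and new application versions keep working
# against the same schema throughout the rollout.

MIGRATION_PLAN = [
    # 1. Expand: add the new column as nullable so existing writers are unaffected.
    ("expand",   "ALTER TABLE orders ADD COLUMN customer_id BIGINT NULL;"),
    # 2. Backfill: copy data in small batches to avoid long locks on a busy table.
    ("backfill", "UPDATE orders SET customer_id = legacy_customer_ref WHERE customer_id IS NULL;"),
    # 3. Verify: dual-write from the application, then confirm the backfill is complete.
    ("verify",   "SELECT COUNT(*) AS remaining FROM orders WHERE customer_id IS NULL;"),
    # 4. Contract: only after every reader uses the new column, remove the old one.
    ("contract", "ALTER TABLE orders DROP COLUMN legacy_customer_ref;"),
]

def print_plan(plan: list[tuple[str, str]]) -> None:
    """Print the plan; in practice each phase is a separate, reviewed migration
    executed by the team's migration tooling, with an explicit rollback path."""
    for phase, sql in plan:
        print(f"[{phase:>8}] {sql}")

if __name__ == "__main__":
    print_plan(MIGRATION_PLAN)
```

The property worth noting is that each phase is independently deployable and reversible, which is what makes schema change safe on a table with live production data.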
5
Version Control
Source Code Management & Version Control

Evaluates Git discipline: branching model maturity (trunk-based, QFE, feature branches), commit hygiene, merge conflict management, code review integration into the version control workflow, and rollback capability.

Why it matters: Version control is the foundation of every other engineering process. Teams without strong SCM discipline struggle with CI/CD adoption, code review enforcement, release management, and incident recovery. A clean branching strategy is a force multiplier for the rest of the engineering process stack.
Maturity hallmark: Clean branching strategy with enforceable review gates and documented rollback procedures.
6
Agile Execution
Agile (Scrum) Development Process

Evaluates sprint cadence consistency, backlog grooming quality, sprint planning and retrospective discipline, velocity tracking, and whether incremental delivery is actually practiced (not just performed ceremonially).

Why it matters: "We do Agile" is one of the most frequently misleading claims in startup due diligence. We evaluate actual sprint completion rates, backlog health (ratio of estimated vs. unestimated stories, age of items), and whether retrospective outputs translate into process changes. Performative Agile — standups without velocity tracking, sprints without retrospectives — provides none of the operational benefits of disciplined incremental delivery.
Maturity hallmark: Predictable sprint output with measurable velocity and closed retrospective loops.
Looking Glass tracks: Sprint completion rate, velocity trends, WIP limits, backlog aging, scope creep metrics via Project Planning & Monitoring KPAs.
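
The signals named here (sprint completion rate, velocity trend, backlog aging) reduce to simple arithmetic once sprint and backlog data are exported from the tracker. A minimal sketch, using made-up data structures rather than the Jira or Looking Glass data model:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Sprint:
    name: str
    committed_points: int   # story points committed at sprint planning
    completed_points: int   # story points accepted by sprint end

@dataclass
class BacklogItem:
    title: str
    created: date
    estimated: bool

def completion_rate(sprints: list[Sprint]) -> float:
    """Share of committed points actually delivered across recent sprints."""
    committed = sum(s.committed_points for s in sprints)
    return sum(s.completed_points for s in sprints) / committed if committed else 0.0

def velocity_trend(sprints: list[Sprint]) -> list[int]:
    """Completed points per sprint, oldest first; a flat or rising line is healthy."""
    return [s.completed_points for s in sprints]

def backlog_health(items: list[BacklogItem], today: date) -> dict:
    """Estimated-vs-unestimated ratio and average age of backlog items."""
    if not items:
        return {"estimated_ratio": 0.0, "avg_age_days": 0.0}
    estimated = sum(1 for i in items if i.estimated)
    avg_age = sum((today - i.created).days for i in items) / len(items)
    return {"estimated_ratio": estimated / len(items), "avg_age_days": avg_age}

if __name__ == "__main__":
    sprints = [Sprint("S1", 40, 31), Sprint("S2", 42, 38), Sprint("S3", 40, 39)]
    backlog = [BacklogItem("SSO login", date(2024, 1, 10), True),
               BacklogItem("Audit log export", date(2023, 11, 2), False)]
    print(f"completion rate: {completion_rate(sprints):.0%}")
    print(f"velocity trend:  {velocity_trend(sprints)}")
    print(f"backlog health:  {backlog_health(backlog, date(2024, 2, 12))}")
```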
7
Code Quality
Code Quality Review & Style Consistency

Evaluates peer review standards and coverage rates, coding style enforcement (linting, static analysis), refactoring discipline, documentation standards, and the overall maintainability culture of the codebase.

Why it matters: Code quality is a leading indicator of engineering velocity at scale. A codebase with high complexity, inconsistent style, and no documentation slows onboarding, makes refactoring expensive, and amplifies the cost of every subsequent feature. Teams that invest in code quality early grow their engineering capacity proportionally to headcount; teams that don't find that each additional developer adds less marginal output than the last.
Maturity hallmark: Codebase remains readable and scalable as team and product grow.
8
Quality Assurance
QA Process

Evaluates test planning maturity, automated testing coverage (unit, integration, end-to-end), regression testing discipline, manual QA governance, and bug triage and resolution processes.

Why it matters: DORA's 2024 State of DevOps research identifies automated testing as one of the strongest discriminators between elite and low-performing engineering organizations. Elite performers (those who deploy multiple times per day with <15% change failure rate) universally have mature automated test suites. Companies with low test coverage experience 4–7× more production incidents and significantly longer recovery times.
Maturity hallmark: QA shifts from reactive bug-fixing to proactive quality engineering embedded in the delivery pipeline.
Looking Glass tracks: Test coverage percentage, defect escape rate, QA cycle time, automated vs. manual test ratio via Testing & Validation and Quality Assurance KPAs.
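
Two of the signals listed here, defect escape rate and test coverage trend, are straightforward ratios once defects are tagged by where they were caught. A minimal sketch with illustrative inputs:

```python
def defect_escape_rate(found_pre_release: int, found_in_production: int) -> float:
    """Share of defects that escaped to production; lower is better."""
    total = found_pre_release + found_in_production
    return found_in_production / total if total else 0.0

def coverage_trend(coverage_by_release: list[tuple[str, float]]) -> float:
    """Change in test coverage between the first and last release in the window."""
    if len(coverage_by_release) < 2:
        return 0.0
    return coverage_by_release[-1][1] - coverage_by_release[0][1]

if __name__ == "__main__":
    # Hypothetical quarter: 46 defects caught pre-release, 9 reached production.
    print(f"escape rate: {defect_escape_rate(46, 9):.0%}")
    print(f"coverage trend: {coverage_trend([('v1.4', 0.52), ('v1.5', 0.57), ('v1.6', 0.61)]):+.0%}")
```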

Infrastructure & Operational KPAs

These six KPAs cover the infrastructure and operational disciplines that determine whether a product runs reliably in production, deploys safely, and can be monitored effectively. They are weighted more heavily at the Growth and Scale stages, when production reliability is a direct revenue concern.

9
Cloud Infrastructure & Deployment

Evaluates containerization maturity, autoscaling configuration, load balancing, serverless adoption where appropriate, and deployment repeatability — including the use of Infrastructure as Code (IaC) for environment consistency.

Maturity hallmark: IaC with full environment parity. No "snowflake servers." Infrastructure is version-controlled and reproducible.
10
DevOps & Ops Support
DevOps & Production Support

Evaluates DevOps culture and collaboration, incident response processes, production triage discipline, SLA management, and postmortem rigor — including whether postmortems produce actual systemic improvements or merely document what happened.

DORA context: Mean Time to Recovery (MTTR) is one of four core DORA metrics. Elite-performing engineering organizations restore service in under one hour from a production failure. Low performers take between one day and one week. This 24–168× difference in recovery speed directly impacts SLA compliance, customer trust, and enterprise sales cycles.
Maturity hallmark: MTTR is measured, tracked, and continuously optimized with blameless postmortems.
11
CI/CD & Release Management

Evaluates continuous integration maturity, automated build pipelines, testing gates within the CI/CD pipeline, deployment pipeline design (blue/green, canary, rolling), artifact management, and release version control.

Why it matters: Deployment frequency and lead time for changes are the two DORA throughput metrics. Elite performers deploy to production multiple times per day with lead times under one hour. Companies with manual, infrequent releases accumulate larger change batches — which directly increases both change failure rate and recovery complexity. CI/CD is not a DevOps luxury; it is the operational foundation of competitive software delivery.
Maturity hallmark: Deployment-ready artifact always available. Release is a routine operation, not a high-risk event.
Looking Glass tracks: Deployment frequency, lead time for changes, change failure rate, rollback rate via the Deployment Management KPA — directly mapping to DORA elite-performer thresholds.
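
The DORA metrics referenced throughout this section can be derived from two event logs most teams already keep: deployments and production incidents. The sketch below is illustrative; the field names are assumptions rather than the DORA or Looking Glass data model.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

@dataclass
class Deployment:
    commit_time: datetime   # when the change was committed
    deploy_time: datetime   # when it reached production
    caused_failure: bool    # did this change trigger an incident or rollback?

@dataclass
class Incident:
    started: datetime
    restored: datetime

def dora_metrics(deploys: list[Deployment], incidents: list[Incident],
                 window_days: int = 30) -> dict:
    """Compute the four DORA metrics over a reporting window."""
    freq = len(deploys) / window_days  # deployments per day
    lead_times = [d.deploy_time - d.commit_time for d in deploys]
    lead_time = median(lead_times) if lead_times else timedelta(0)
    cfr = sum(d.caused_failure for d in deploys) / len(deploys) if deploys else 0.0
    mttr = (median(i.restored - i.started for i in incidents)
            if incidents else timedelta(0))
    return {"deploy_frequency_per_day": freq,
            "median_lead_time": lead_time,
            "change_failure_rate": cfr,
            "median_time_to_restore": mttr}

if __name__ == "__main__":
    now = datetime(2024, 2, 12, 12, 0)
    deploys = [Deployment(now - timedelta(hours=5), now - timedelta(hours=4), False),
               Deployment(now - timedelta(days=1, hours=3), now - timedelta(days=1), True)]
    incidents = [Incident(now - timedelta(days=1),
                          now - timedelta(days=1) + timedelta(minutes=42))]
    print(dora_metrics(deploys, incidents))
```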
12
Application Performance Monitoring
Application Performance Monitoring (APM)

Evaluates observability maturity: response time monitoring, error rate tracking, resource utilization visibility, real-time alerting configuration, and root cause analytics capability — the full stack from infrastructure metrics to application-level traces.

Maturity hallmark: Observability-driven engineering decisions. No significant incident should be unknown before a customer reports it.
Looking Glass tracks: MTTR, incident frequency, error rate trends, system uptime, alert noise ratio via the Engineering Performance Management KPA — with AI anomaly detection to surface emerging issues before customer impact.
13
Data Warehouse & ETL
Cloud Data Warehouse & ETL Process

Evaluates analytics infrastructure maturity: centralized data warehouse presence, ETL pipeline governance, data quality validation processes, BI accessibility for non-engineering stakeholders, and integration of data into product and business decision-making.

Why it matters for AI-era companies: A mature data warehouse is a prerequisite for AI and ML capabilities. Companies that want to build AI features on top of their product without a governed, clean data layer are building on sand. This KPA also evaluates the "data as a product" discipline — whether data is treated as a first-class engineering deliverable.
Maturity hallmark: Single source of truth for analytics. Data quality is measured and SLA-governed.
14
Application & Cloud Security

Evaluates security posture across the full CIA triad (Confidentiality, Integrity, Availability): access controls, encryption at rest and in transit, security monitoring and logging, vulnerability management programs, security testing integration, and cloud security posture management.

Investment context: Security is no longer a late-stage concern. Enterprise customers require demonstrable security posture as a precondition for procurement (SOC 2 Type II is now required by 93% of enterprise procurement processes). Security debt accumulated in the early stages is extremely expensive to remediate under the time pressure of an enterprise sales cycle.
Maturity hallmark: Proactive security architecture, not reactive patching. Security is a development process concern, not a compliance checkbox.

Engineering Governance & Optimization KPAs

These five KPAs govern the meta-processes — learning, measurement, and continuous improvement — that allow an engineering organization to compound its operational advantage over time.

15
R&D Metrics
R&D Metrics & Development Output Measurement

Evaluates whether the engineering organization measures its own output: code complexity tracking, lead time and cycle time measurement, defect rate tracking, and engineering KPI dashboards — the data infrastructure for managing engineering as a business function.

DORA alignment: This KPA directly maps to all four DORA metrics — deployment frequency, lead time for changes, change failure rate, and mean time to restore. The 2024 DORA report identifies measurement maturity as one of the most significant discriminators between elite and low-performing organizations.
Maturity hallmark: Engineering is managed with data. Engineering leaders can predict delivery timelines and explain variance.
Looking Glass tracks: All four DORA metrics, engineering throughput, cycle time, and KPI coverage — this KPA feeds directly into the Development Proficiency Score (DPS) output.
16
ML & AI Implementation

Evaluates the engineering discipline around AI and ML capabilities: ML pipeline governance, model monitoring, data labeling processes, AI explainability mechanisms, and responsible AI standards — including bias testing and model versioning.

Why this KPA is increasingly critical: The 2024 DORA report found that while 75% of developers use AI tools daily, AI adoption without proper process governance is correlated with a 7.2% decrease in delivery stability. AI is not a free capability upgrade — it requires the same engineering discipline applied to any other production system: versioning, monitoring, testing, and governance.
Maturity hallmark: AI is engineered, versioned, and monitored — not experimental chaos deployed to production.
17
Learning & Training
Learning Management & Organizational Training

Evaluates developer training programs, onboarding maturity, security awareness training, technical documentation culture, and knowledge transfer mechanisms — the institutional practices that determine whether organizational capability grows with headcount.

Maturity hallmark: Continuous skill development is institutionalized. New hires reach productivity faster than attrition can erode capability.
18
Process Improvement
Process Ongoing Improvement

Evaluates retrospective implementation quality, metrics-based process improvement cycles, technical debt governance (including explicit debt budgets and paydown strategies), root cause analysis discipline, and feedback loop integration across the engineering organization.

Maturity hallmark: Process evolves intentionally. Technical debt is governed, not ignored. Retrospectives produce measurable outcomes.
Looking Glass tracks: Technical debt velocity, retrospective output rates, process deviation trends, and improvement initiative completion — via the Continuous Improvement and Performance Measurement KPAs.
19
Innovation Process
Skunk Works / Disruptive Innovation Process

Evaluates the organization's capacity for structured innovation: experimental R&D tracks, innovation sandboxes, dedicated experimentation budgets, rapid prototyping capability, and governance frameworks that enable controlled risk-taking without destabilizing the core product.

Why this KPA appears last: Innovation capacity is the highest-order concern — and the one that can only be sustainably built on top of a solid process foundation. Organizations that attempt Skunk Works-style innovation programs without mature underlying processes typically produce prototype chaos rather than product breakthroughs. Innovation discipline is the dividend of operational excellence.
Maturity hallmark: Structured innovation without destabilizing core product delivery.

Scoring & Weighting Model

The framework produces a composite maturity score using a weighted average of individual KPA scores. The weighting model is stage-adjusted, reflecting the fact that different processes matter at different points in a company's lifecycle.

Total Company Maturity Score

Score = Σ (KPA_i level × KPA_i weight) / Σ (KPA_i weight)

Each KPA is scored 0–4. Each KPA is assigned a weight from 1 (standard importance) to 3 (critical for current stage). The weighted average produces a composite score between 0.0 and 4.0, which maps to an overall maturity level.

Composite Score | Overall Maturity | Typical Profile | Investor Signal
0.0 – 0.9 | Level 0 — Nonexistent | Pre-product or very early team. No repeatable processes. | Seed only. Significant process coaching required.
1.0 – 1.9 | Level 1 — Initial | Product exists but processes are ad hoc. Key-person dependent. | Seed / Pre-A. Critical gaps must be part of the investment thesis.
2.0 – 2.9 | Level 2 — Defined | Core processes documented. Uneven execution. Scale risk present. | Series A range. Improvement plan required for growth funding.
3.0 – 3.9 | Level 3 — Managed | Disciplined execution across most KPAs. Measurable and improving. | Strong signal for Series B+. Enterprise-ready process posture.
4.0 | Level 4 — Optimized | Fully automated, data-driven, continuously improving processes. | Elite operator signal. Acquisition-ready process documentation.
KPA Weighting — Seed Stage
  • Product Management — Weight 3
  • UX Process — Weight 3
  • Architecture Design — Weight 3
  • Database Schema — Weight 3
  • Source Code Management — Weight 2
  • CI/CD, Security, QA — Weight 1

KPA Weighting — Scale Stage
  • CI/CD & Release Mgmt — Weight 3
  • APM & Observability — Weight 3
  • Application Security — Weight 3
  • DevOps & Prod Support — Weight 2
  • Data Warehouse / ETL — Weight 2
  • R&D Metrics — Weight 2
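
Putting the formula and the stage weights together, here is a minimal sketch of how a composite score and its maturity band could be computed. The KPA names and weights mirror the seed-stage example above; this is an illustration of the arithmetic, not the Looking Glass scoring engine.

```python
def composite_score(kpa_scores: dict[str, int], weights: dict[str, int]) -> float:
    """Weighted average of KPA maturity levels (0-4), per the formula above.
    KPAs without an explicit weight default to standard importance (weight 1)."""
    total_weight = sum(weights.get(k, 1) for k in kpa_scores)
    weighted = sum(level * weights.get(k, 1) for k, level in kpa_scores.items())
    return weighted / total_weight if total_weight else 0.0

def maturity_band(score: float) -> str:
    """Map a composite score to the overall maturity band from the table above."""
    bands = [(1.0, "Level 0 — Nonexistent"), (2.0, "Level 1 — Initial"),
             (3.0, "Level 2 — Defined"), (4.0, "Level 3 — Managed")]
    for upper, label in bands:
        if score < upper:
            return label
    return "Level 4 — Optimized"

if __name__ == "__main__":
    # Hypothetical seed-stage assessment using the seed weights shown above.
    seed_weights = {"Product Management": 3, "UX Process": 3, "Architecture Design": 3,
                    "Database Schema": 3, "Source Code Management": 2,
                    "CI/CD": 1, "Security": 1, "QA": 1}
    scores = {"Product Management": 2, "UX Process": 1, "Architecture Design": 2,
              "Database Schema": 2, "Source Code Management": 3,
              "CI/CD": 1, "Security": 1, "QA": 1}
    s = composite_score(scores, seed_weights)
    print(f"composite score: {s:.2f} -> {maturity_band(s)}")
```

The Development Proficiency Score described earlier is, by its definition above, the same kind of weighted composite expressed on a 0–100 scale.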

How We Use This in Practice

In practice, the KPA framework is a structured conversation guide for our technical due diligence process and a shared vocabulary for post-investment improvement planning. CIO Looking Glass turns that periodic assessment into a continuous, data-driven discipline.

Pre-Investment Technical Due Diligence

During diligence, we evaluate each KPA through a combination of document review, codebase inspection, engineering team interviews, and operational metric analysis. For each KPA, we look for:

  • Evidence of process: Written runbooks, documented standards, recorded decisions — not just claims in a pitch deck.
  • Evidence of execution: Git history commit patterns, CI/CD pipeline configurations, sprint velocity data, incident postmortem records (see the sketch after this list).
  • Evidence of measurement: Whether the team tracks the right metrics and whether those metrics inform decisions.
  • Evidence of improvement: Whether the organization acts on retrospectives, resolves technical debt deliberately, and adapts processes over time.
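
To make "evidence of execution" concrete, commit history can be summarized directly from the repository. The sketch below assumes only that git is installed and that the path points at a local clone; it illustrates the kind of signal we look at (steady cadence versus bursts, key-person concentration), not the actual diligence tooling.

```python
import subprocess
from collections import Counter
from datetime import datetime

def commit_activity(repo_path: str = ".") -> dict:
    """Summarize commit patterns: commits per author and per ISO week.
    Assumes `git` is on PATH and repo_path is a local clone."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%ct|%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    per_author: Counter = Counter()
    per_week: Counter = Counter()
    for line in out.splitlines():
        ts, author = line.split("|", 1)
        when = datetime.fromtimestamp(int(ts))
        iso_year, iso_week, _ = when.isocalendar()
        per_author[author] += 1
        per_week[f"{iso_year}-W{iso_week:02d}"] += 1
    return {"commits_per_author": dict(per_author),
            "commits_per_week": dict(sorted(per_week.items()))}

if __name__ == "__main__":
    summary = commit_activity(".")
    print(summary["commits_per_week"])    # steady cadence vs. bursty heroics
    print(summary["commits_per_author"])  # key-person concentration risk
```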

The output is a KPA scorecard alongside a narrative assessment of critical gaps, their likely impact at the next funding stage, and a preliminary estimate of the remediation investment required. This becomes part of the investment committee memo and, for investments that proceed, the foundation of the 100-day plan.

Post-Investment Portfolio Support

After investment, the initial KPA scores serve as a baseline for a structured improvement roadmap. We work with portfolio company CTOs and engineering leadership to:

  • Prioritize the 2–3 KPAs with the highest leverage relative to current business stage and growth plans.
  • Define specific, measurable improvement milestones for each targeted KPA over a 90-day period.
  • Connect portfolio companies with peer CTOs in our network who have navigated the same KPA transitions.
  • Re-assess KPA scores semi-annually to track progress, celebrate improvement, and reprioritize as the company scales.

CIO Looking Glass: Development Intelligence in Action


CIO Looking Glass, developed by Golden Section Technology, is a proprietary Development Intelligence (DI) platform that consolidates metrics data from all phases of the software development life cycle into convenient, intuitive dashboards — giving CIOs and engineering leaders a 360-degree view of their projects' health and empowering them to effectively orchestrate R&D efforts.

Development Intelligence (DI) is a term coined by Golden Section Technology. It describes the discipline of converting raw SDLC tool data — from commits, tickets, deployments, and incidents — into actionable, objective insight for engineering leadership.

Looking Glass does not replace existing tools. It stitches them together — Jira, GitHub, Jenkins, PagerDuty, Datadog, and more — transforming scattered SDLC data exhaust into a coherent Development Intelligence signal for CIOs.

The Looking Glass Mission

"CIO Looking Glass's mission is not to create a new piece of the puzzle, but to stitch all the pieces together, presenting the complete picture in a manner most informative to software company CIOs."

The Four Visibility Gaps Looking Glass Closes

Four recurring pain points drove the design of Looking Glass:

1. Limited Project Visibility

CIOs and VPs of Engineering struggle to obtain accurate, timely project status from busy development teams. Traditional status reports are subjective, delayed, and incomplete. Looking Glass provides a unified, automated dashboard of all key metrics needed to assess project health in real time — without depending on manual reports or engineering time.

2. Tool Selection & Integration Complexity

For startups, selecting and integrating the right productivity tools across the SDLC is genuinely complex — the landscape is wide, the integration overhead is real, and poor choices compound. Looking Glass guides tool selection, manages integrations, and transforms disparate data streams into Development Intelligence from day one.

3. Best-Practice Enforcement at Scale

Using the right tools the wrong way is as damaging as using no tools at all. Teams adopt Jira but run inconsistent sprint ceremonies. They adopt GitHub but have no branch protection rules. Looking Glass surfaces process deviations in the dashboard before they compound into technical debt — turning best practice from aspiration into observable reality.

4. Objective Benchmarking

Self-assessment produces optimistic scores. External assessments are expensive and infrequent. Looking Glass provides continuous benchmarking reports that compare your KPA metrics against peer companies within the platform's ecosystem — replacing subjective self-assessment with objective, cross-company comparisons grounded in real operational data.

How Looking Glass Operationalizes the KPA Framework

The KPA framework defines what to measure. Looking Glass provides the how — automatically, continuously, without manual scorecards — by extracting data from existing tools and surfacing it in a unified dashboard aligned to each KPA.

Looking Glass Data Flow Architecture
SDLC Tools (Jira · GitHub · Jenkins · PagerDuty · Datadog · Confluence · Zendesk)
  → Stitch — cloud-native ETL, pre-built connectors, automated ingestion
  → Panoply — cloud data warehouse, data normalization, AWS-native storage
  → AWS QuickSight — embedded analytics, KPA dashboards, executive views
  → Looking Glass — the DI dashboard for CIOs

Stitch handles integrations from productivity tools into Panoply's cloud data warehouse. Panoply normalizes and transfers the data to AWS, where QuickSight visualizes it through embedded, role-appropriate dashboards inside Looking Glass.
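
At its core, the pipeline above is a normalization problem: each tool emits differently shaped records that must be reduced to a common event shape before they can feed KPA dashboards. The sketch below illustrates that normalization step with hypothetical field names; in the actual platform, Stitch connectors and Panoply handle this work.

```python
from datetime import datetime
from typing import Any

def make_event(source: str, kind: str, occurred_at: datetime, key: str,
               attrs: dict[str, Any]) -> dict[str, Any]:
    """Common shape every SDLC event is reduced to before warehouse loading."""
    return {"source": source, "kind": kind, "occurred_at": occurred_at.isoformat(),
            "key": key, "attrs": attrs}

def from_issue_tracker(raw: dict) -> dict:
    """Normalize a hypothetical issue-tracker ticket record."""
    return make_event("tracker", "issue_transition",
                      datetime.fromisoformat(raw["updated"]),
                      raw["ticket_id"],
                      {"status": raw["status"], "story_points": raw.get("points")})

def from_deploy_log(raw: dict) -> dict:
    """Normalize a hypothetical deployment pipeline record."""
    return make_event("pipeline", "deployment",
                      datetime.fromisoformat(raw["finished"]),
                      raw["build_id"],
                      {"environment": raw["env"], "succeeded": raw["ok"]})

if __name__ == "__main__":
    rows = [
        from_issue_tracker({"ticket_id": "ENG-421", "status": "Done",
                            "updated": "2024-02-10T16:04:00", "points": 3}),
        from_deploy_log({"build_id": "b-9183", "env": "production",
                         "finished": "2024-02-10T17:22:00", "ok": True}),
    ]
    for row in rows:
        print(row)  # rows like these would be loaded into the warehouse, per KPA
```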

Key Platform Features

  • Historical Snapshots — Captures longitudinal metric trends across the SDLC, making process regression immediately visible. Unlike point-in-time assessments, Looking Glass reveals the direction of your KPA trajectory — whether you are improving, plateauing, or regressing. KPA impact: KPAs 15 and 18 (R&D metrics, process improvement).
  • Benchmarking Reports — Compares your KPA performance against peer companies within the Looking Glass ecosystem, transforming subjective self-assessment into objective evidence and answering the question "Are we good for our stage?" with data rather than intuition. KPA impact: all KPAs (objective maturity calibration).
  • Smart AI Algorithms — AI-driven anomaly detection, trend analysis, project risk awareness, and development efficiency optimization. Smart algorithms surface signals that manual dashboard review would miss — identifying at-risk sprints, deployment pattern anomalies, and emerging quality debt before it becomes visible to customers. KPA impact: KPAs 8, 12, 15, 16 (QA, APM, R&D metrics, ML/AI).
  • Fractional CIO Guidance — Access to GST's fractional CIO consulting service for expert interpretation of dashboard metrics and targeted improvement planning. For companies without a full-time CIO, this service provides senior engineering leadership perspective without the full-time cost. KPA impact: all KPAs (post-investment support and coaching).
  • Project Health Assessment — Automated synthesis of multiple KPA signals into a single project health score, enabling executives and board members to understand engineering status at a glance without requiring domain expertise in every underlying process area. KPA impact: all KPAs (executive-level visibility).

The 18 KPAs Looking Glass Tracks Continuously

The platform continuously tracks 18 KPAs covering the full software development lifecycle:

Foundation & Planning
  1. Requirements Management — backlog quality, requirement clarity
  2. Project Planning — sprint accuracy, estimation quality
  3. Project Monitoring & Control — velocity, burn rate, WIP
  4. Supplier Agreement Management — vendor SLAs, dependency tracking
  5. Quality Assurance — test coverage, defect rates
  6. Configuration Management — branching, artifact governance
  7. Change Management — change frequency, review compliance
  8. Risk Management — risk identification, escalation tracking
  9. Data Management — schema governance, data quality metrics
Execution & Operations
  10. Engineering Performance Management — DORA metrics, throughput
  11. Integration Management — API reliability, service health
  12. Testing & Validation — pipeline gate pass rates
  13. Deployment Management — deploy frequency, rollback rate
  14. Maintenance Management — incident resolution, debt paydown
  15. Customer Support Management — ticket SLAs, escalation rates
  16. Stakeholder Engagement — reporting cadence, comms quality
  17. Continuous Improvement — retrospective outputs, debt trends
  18. Performance Measurement & Analytics — KPI coverage, metric-driven decisions

How Looking Glass KPAs Map to This Framework

The 18 platform KPAs are operationally equivalent to the 19 evaluation KPAs in this article — the key distinction being that Looking Glass converts each KPA from a scored checklist into a continuously monitored dashboard dimension.

This Framework (Evaluation) → Looking Glass (Continuous Monitoring)
  • KPA 1: Product Management → Requirements Mgmt — real-time backlog health, ticket aging, scope creep
  • KPA 5: SCM → Configuration Mgmt — branch protection, PR compliance, merge velocity
  • KPA 6: Agile → Project Planning & Monitoring — sprint completion rate, velocity trends, WIP limits
  • KPA 8: QA → Testing & Validation + QA — test pass rates, coverage trends, defect escape rate
  • KPA 11: CI/CD → Deployment Management — deploy frequency, lead time, change failure rate
  • KPA 12: APM → Engineering Performance — DORA metrics, throughput, system uptime
  • KPA 15: R&D Metrics → Performance Measurement — KPI dashboard completeness, metric utilization rate
  • KPA 18: Process Improvement → Continuous Improvement — debt velocity, retrospective completion rate

CIO Looking Glass translates the KPA evaluation framework into a live operational dashboard — moving your engineering organization from periodic assessments to continuous Development Intelligence. If you are a portfolio company, a prospective investment target, or an engineering leader seeking to operationalize process maturity, contact Golden Section to learn how Looking Glass maps to your current engineering stack and where your team sits on the maturity curve today.

Contact Golden Section
goldensection.com

Maturity Signals by Stage: What We Look For

The table below summarizes healthy KPA targets by stage. These are prioritization guides, not hard cutoffs.

KPA | Seed (Pre-A) | Growth (Series A–B) | Scale (Series C+)
Product Management | Level 1–2 | Level 2–3 | Level 3–4
UX Process | Level 1–2 | Level 2–3 | Level 3–4
Architecture Design | Level 2 | Level 2–3 | Level 3–4
Database Schema | Level 2 | Level 3 | Level 3–4
Source Code Mgmt | Level 2 | Level 3 | Level 3–4
Agile Process | Level 1–2 | Level 2–3 | Level 3
Code Quality | Level 1–2 | Level 2–3 | Level 3–4
QA Process | Level 1 | Level 2–3 | Level 3–4
Cloud Infrastructure | Level 1 | Level 2–3 | Level 3–4
DevOps / Prod Support | Level 1 | Level 2 | Level 3–4
CI/CD | Level 1 | Level 2–3 | Level 3–4
APM / Observability | Level 0–1 | Level 2 | Level 3–4
Data Warehouse / ETL | Level 0–1 | Level 1–2 | Level 3
Application Security | Level 1 | Level 2–3 | Level 3–4
R&D Metrics | Level 1 | Level 2 | Level 3
ML / AI Implementation | Level 0–1 | Level 1–2 | Level 2–3
Learning Management | Level 1 | Level 2 | Level 3
Process Improvement | Level 1 | Level 2 | Level 3
Innovation Process | Level 0–1 | Level 1–2 | Level 2–3

Conclusion: Process Maturity as Durable Competitive Advantage

In a market where features can be replicated in months and technology choices rebuilt with capital, engineering process maturity becomes one of the few durable competitive advantages available to a B2B software company.

A company with Level 3 maturity across its critical KPAs can onboard engineers faster, ship features more predictably, recover from incidents more quickly, and adapt to market changes with less chaos than a competitor relying on individual heroics and tribal knowledge. These advantages compound over time. They are expensive to imitate — not because of technical complexity, but because they require genuine cultural and behavioral change, which takes quarters, not weeks.

CMMI gave the software industry its first rigorous language for process quality. This framework translates that language for modern cloud-native, AI-enabled B2B software companies. With Looking Glass, it is no longer a spreadsheet — it is a live operational instrument producing a Development Proficiency Score and a Tech Stack Score for every portfolio company, every quarter, without a single manual audit.

Practical Guidance for Founders and Engineering Leaders

  1. Start with Database Schema and Architecture — these are the decisions that will constrain you longest. Get them right before scaling.
  2. Invest in CI/CD before you "need" it — the time to build delivery automation is before the system is too complex to safely change.
  3. Treat process improvement as engineering work — budget engineering time for retrospectives, technical debt, and tooling investment. It is not overhead; it is capital maintenance.
  4. Measure what matters to you — DORA metrics are an excellent starting point. Any team that cannot report its deployment frequency and MTTR is operating blind.
  5. Stage-appropriate ambition — Level 4 across all 19 KPAs is not the goal for a Series A company. The goal is the right processes, at the right level of maturity, for the current stage — and a deliberate plan for evolving them as you scale.

Sources & Further Reading

  1. Wikipedia. Capability Maturity Model.
  2. CMMI Institute. CMMI Maturity Levels.
  3. CMMI Institute. (2025). CMMI Technical Report — Performance Results 2025.
  4. Google Cloud / DORA. (2024). 2024 DORA State of DevOps Report.
  5. DORA. DORA Metrics Guide.
  6. SEI, Carnegie Mellon. (1993). The Capability Maturity Model for Software, Version 1.1.
  7. McKinsey & Company. (2024). The State of AI 2024 (incl. data engineering benchmarks). McKinsey Digital. (The "Global Data Engineering Report" is an internal benchmark study; the State of AI 2024 report covers related data maturity findings.)
  8. Stack Overflow. (2024). Developer Survey 2024 — Professional Developers. Stack Overflow Research.

Further Reading from the Author

Isaac Shi writes about AI, software, and entrepreneurship at isaacshi.com. These essays provide the strategic and philosophical context behind this thesis.

Essay · Isaac Shi
Don't Code Before Reading This
A developer's pairing — the culture, craft, and orchestrated chaos of high-performing engineering teams, told through Anthony Bourdain's kitchen.
Essay · Isaac Shi
Cognitive Biases, Data & Decision Making
How Kahneman's System 1 & 2 apply to engineering leadership — making sound maturity assessments under uncertainty without falling into cognitive traps.