The CFO looked at me across the conference table, frustration evident in his voice. "We spent $2.3 million on IT governance last year. The board keeps asking if we're getting value. Honestly? I have no idea how to answer them."
This was 2017, and I was sitting in a conference room at a Fortune 500 financial services company that had implemented dozens of IT processes, bought expensive tools, and hired a governance team. But they couldn't answer the most fundamental question: Are we actually good at this?
That's when I introduced them to COBIT capability levels. Six months later, that same CFO told me it was the most valuable framework they'd ever implemented—not because it made them compliant, but because it gave them a language to measure, communicate, and improve their IT governance maturity.
After fifteen years of helping organizations navigate IT governance frameworks, I can tell you this: understanding where you are is infinitely more valuable than pretending you're where you want to be.
Why Capability Assessment Changed How I Think About IT Governance
Let me take you back to 2012. I was consulting for a mid-sized healthcare organization that proudly declared they had "implemented COBIT." They had policies, procedures, documentation—boxes and boxes of it.
Then I asked a simple question: "How do you know if your change management process is working?"
Silence.
They had a change management process. They had a change advisory board. They had approval workflows. But they had absolutely no idea if any of it was effective. Changes still broke production systems. Rollbacks were common. Nobody tracked whether the process actually prevented problems.
That's when I learned the hard truth: having a process and having a mature process are completely different things.
"You can't improve what you can't measure, and you can't measure what you don't understand. COBIT capability levels give you both the measurement and the understanding."
Understanding COBIT Process Capability: The Foundation
COBIT assesses processes using a capability model based on ISO/IEC 33020 (the successor to ISO/IEC 15504), which defines six capability levels (0-5). Think of these levels as a maturity ladder—each rung represents a significant improvement in how effectively and reliably a process delivers value.
Here's the framework that transformed how I assess IT governance:
The Six Capability Levels: A Detailed Breakdown
| Capability Level | Name | Core Characteristic | Real-World Indicator |
|---|---|---|---|
| Level 0 | Incomplete | Process not implemented or fails to achieve purpose | You're firefighting constantly; outcomes are unpredictable |
| Level 1 | Performed | Process achieves its purpose | You get results, but inconsistently and unpredictably |
| Level 2 | Managed | Process is planned, monitored, and adjusted | You can explain what you do and track if it worked |
| Level 3 | Established | Standardized process deployed across organization | Everyone follows the same playbook consistently |
| Level 4 | Predictable | Process operates within defined limits | You can forecast outcomes with high accuracy |
| Level 5 | Optimizing | Continuous improvement drives innovation | You're constantly getting better and adapting |
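If you want to script a quick self-assessment, the ladder reduces to a small lookup table. A minimal Python encoding of the table above (the helper name is my own):

```python
# The six COBIT capability levels from the table above, in a
# machine-readable form usable as the backbone of an assessment script.
CAPABILITY_LEVELS = {
    0: ("Incomplete", "Process not implemented or fails to achieve purpose"),
    1: ("Performed", "Process achieves its purpose"),
    2: ("Managed", "Process is planned, monitored, and adjusted"),
    3: ("Established", "Standardized process deployed across organization"),
    4: ("Predictable", "Process operates within defined limits"),
    5: ("Optimizing", "Continuous improvement drives innovation"),
}

def describe(level: int) -> str:
    """One-line summary for an assessment report."""
    name, characteristic = CAPABILITY_LEVELS[level]
    return f"Level {level} ({name}): {characteristic}"

print(describe(2))  # Level 2 (Managed): Process is planned, monitored, and adjusted
```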
Let me break down what these levels actually look like in practice, because I've seen every single one in real organizations.
Level 0: Incomplete Process (The "Chaos Zone")
I walked into a rapidly growing e-commerce startup in 2019. They'd grown from 20 to 200 employees in eighteen months. Their infrastructure was a beautiful disaster.
What I observed:

- Developers deployed to production whenever they felt like it
- No testing environment (they tested in production)
- Documentation existed but nobody knew where
- When something broke, they'd roll back... if they could figure out what changed
- The same security vulnerabilities got introduced repeatedly
This is Level 0. The process either doesn't exist or exists only in theory. One developer told me: "We have a deployment process—it's called 'hope and pray.'"
Typical characteristics I've seen at Level 0:

- Ad hoc responses to problems
- Heavy reliance on individual heroics
- No consistent outcomes
- Lack of basic documentation
- Reactive rather than proactive
"At Level 0, you're not managing processes—you're managing crises. And crisis management is exhausting, expensive, and unsustainable."
The Cost of Level 0
That e-commerce company was experiencing:

- 4-8 hours of production downtime weekly
- 23% of deployments causing customer-facing issues
- Developer burnout (40% annual turnover)
- Inability to pass customer security audits
- A lost $1.8M contract due to a failed security assessment
The chaos had a price tag: approximately $3.2 million annually in lost revenue, remediation costs, and opportunity costs.
Level 1: Performed Process (The "We Get It Done" Stage)
After three months of intensive work with that e-commerce company, we'd moved them to Level 1. They had implemented basic processes:
- Deployments happened during scheduled maintenance windows
- Someone (usually) tested changes before production
- They documented major changes (when they remembered)
- Security scans ran weekly
Results improved dramatically. Downtime dropped to 1-2 hours per week. But here's the thing about Level 1: it's still heavily dependent on individuals.
When their lead DevOps engineer went on vacation, deployments basically stopped because only he really understood the process. When a junior developer tried to follow the "documented" procedure, he discovered it was outdated by six months.
Level 1 Reality Check
Here's what I typically see at Level 1:
Positive indicators:

- Process achieves its intended outcome (most of the time)
- Some documentation exists
- People know there's supposed to be a process
- Basic success criteria are understood

Warning signs:

- Outcomes vary significantly based on who's doing the work
- Documentation is incomplete or outdated
- No formal tracking or monitoring
- Process breaks down under stress
- Knowledge exists in people's heads, not systems
I worked with a financial services company at Level 1 for incident management. They responded to incidents, resolved them, and moved on. But six months later, I pulled their incident logs and found they'd had the same root cause failure seventeen times. Nobody was analyzing patterns because that wasn't part of the process.
Level 2: Managed Process (The "Professional Operation" Threshold)
This is where things get interesting. Level 2 is where you transition from "we do this" to "we manage how we do this."
I'll share a transformation story that illustrates this perfectly.
In 2020, I worked with a healthcare provider moving from Level 1 to Level 2 for their access management process. At Level 1, they granted access when people requested it, following a basic approval workflow.
At Level 2, we implemented:
The Managed Process Framework
| Process Attribute | Level 1 (Before) | Level 2 (After) |
|---|---|---|
| Planning | Ad hoc requests handled as received | Planned provisioning cycles aligned with HR onboarding |
| Monitoring | No tracking of access requests | Dashboard showing request volume, approval time, SLA compliance |
| Work Products | Email approvals, informal documentation | Standardized request forms, approval trails, access reviews |
| Performance Management | No metrics tracked | SLA: 95% of requests completed within 4 hours |
| Documentation | Basic procedure document | Detailed playbooks, decision trees, escalation procedures |
| Responsibilities | "IT handles it" | Clear RACI matrix, defined roles, accountability tracking |
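The performance-management row above (an SLA of 95% of requests completed within 4 hours) is easy to check mechanically once requests carry timestamps. A minimal Python sketch, with invented request data:

```python
from datetime import datetime, timedelta

# Hypothetical access-request records: (submitted, completed) timestamps.
# The 4-hour SLA and 95% target mirror the table above; the data is invented.
requests = [
    (datetime(2020, 5, 1, 9, 0),  datetime(2020, 5, 1, 11, 15)),  # 2h15m -> met
    (datetime(2020, 5, 1, 9, 30), datetime(2020, 5, 1, 14, 0)),   # 4h30m -> missed
    (datetime(2020, 5, 2, 8, 0),  datetime(2020, 5, 2, 10, 24)),  # 2h24m -> met
    (datetime(2020, 5, 2, 13, 0), datetime(2020, 5, 2, 16, 45)),  # 3h45m -> met
]

SLA = timedelta(hours=4)
TARGET = 0.95

met = sum(1 for submitted, completed in requests if completed - submitted <= SLA)
compliance = met / len(requests)

print(f"SLA compliance: {compliance:.0%}")  # SLA compliance: 75%
print("Target met" if compliance >= TARGET else "Target missed")
```

The point of the exercise isn't the arithmetic; it's that at Level 2 this number exists at all, and is reviewed.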
The transformation results:
- Access provisioning time: 2-3 days → 2.4 hours average
- Orphaned accounts: 340 → 23
- Audit findings: 12 → 0
- User satisfaction score: 6.2/10 → 8.7/10
But here's what really mattered: when their access management specialist left for another job, the replacement got up to speed in two weeks instead of the six months it took the previous person. Why? Because the process was documented, measured, and managed—not locked in someone's head.
"Level 2 is where you stop relying on heroes and start building systems. Heroes are great until they're on vacation, sick, or working for your competitor."
Characteristics of Effective Level 2 Processes
After implementing dozens of Level 2 processes, I've identified the consistent markers:
Work Product Management:

- Standardized templates and forms
- Version-controlled documentation
- Clear inputs and outputs defined
- Quality criteria established

Performance Management:

- Defined metrics and KPIs
- Regular performance reviews
- SLAs or targets established
- Variance analysis when targets aren't met

Planning:

- Resource allocation planned
- Capacity planning implemented
- Dependencies identified and managed
- Schedule adherence tracked

Monitoring:

- Real-time or near-real-time visibility
- Dashboard or reporting mechanism
- Exception handling procedures
- Escalation paths defined
Level 3: Established Process (The "Enterprise Standard" Achievement)
Level 3 is where individual process success becomes organizational capability. This is the level where you've truly standardized across the enterprise.
I spent two years helping a global manufacturing company achieve Level 3 for their change management process across 23 sites in 14 countries. The challenge was fascinating and frustrating in equal measure.
The Standardization Challenge
At the start:

- European sites had one change process
- Asian sites had a completely different approach
- North American sites had three variations
- Each site insisted their way was "best for our unique situation"
Sound familiar? I hear this constantly. Everyone thinks their situation is so unique that they need a custom process.
Here's what we discovered: 95% of what they did was identical. Only 5% needed local adaptation.
The Level 3 Transformation Framework
We implemented a global standard with controlled variation points:
| Process Element | Global Standard (Required) | Local Adaptation (Permitted) |
|---|---|---|
| Risk Assessment | Standardized risk scoring matrix | Local risk tolerance thresholds |
| Approval Workflow | 4-tier approval structure | Specific individuals in roles |
| Testing Requirements | Mandatory test categories | Specific test procedures |
| Documentation | Required information fields | Additional local requirements |
| Emergency Changes | Standard criteria and process | Local emergency contacts |
| Post-Implementation Review | Required for all changes | Local review frequency |
The results after 18 months:
| Metric | Before (Mixed Levels 1-2) | After (Level 3) | Improvement |
|---|---|---|---|
| Failed changes causing outages | 8.2% | 1.3% | 84% reduction |
| Change approval time | 4.7 days average | 1.2 days average | 74% faster |
| Emergency changes | 31% of all changes | 6% of all changes | 81% reduction |
| Documentation compliance | 34% | 97% | 185% improvement |
| Cross-site knowledge transfer time | 6-8 weeks | 1-2 weeks | 75% reduction |
But the real transformation was cultural. When an engineer transferred from Shanghai to Frankfurt, they could be productive immediately. The process was the same. The tools were the same. The documentation format was the same.
The Three Pillars of Level 3
Through my experience, I've found that achieving Level 3 requires three critical elements:
1. Process Definition and Deployment:

- Documented standard process maintained in a central repository
- Tailoring guidelines for permitted variations
- Process owner responsible for the standard
- Training program ensures understanding
- Compliance monitoring ensures adherence

2. Process Infrastructure:

- Common tools and platforms across the organization
- Shared process documentation and knowledge base
- Centralized metrics and reporting
- Standard work products and templates
- Unified communication channels

3. Process Competency:

- Defined competency levels for process performers
- Training programs and certification where appropriate
- Communities of practice for knowledge sharing
- Mentoring and coaching programs
- Performance assessment tied to process adherence
"Level 3 is where you transform from 'we have a process' to 'we ARE a process-driven organization.' It's the difference between aspiration and culture."
Level 4: Predictable Process (The "Statistical Control" Frontier)
Level 4 is where things get mathematical. This is the realm of statistical process control, quantitative management, and predictive analytics.
I'll be honest: I've only helped a handful of organizations achieve genuine Level 4 for critical processes. It requires significant investment and a culture that values data-driven decision-making.
But when it works? It's powerful.
A Level 4 Case Study: Incident Management
I worked with a major cloud service provider that achieved Level 4 for their incident management process. Here's what made them different:
Statistical Process Control Implementation:
They tracked dozens of metrics for every incident, including:

- Time to detect (from event to alert)
- Time to acknowledge (from alert to human acknowledgment)
- Time to triage (from acknowledgment to severity assignment)
- Time to engage (from triage to right team involved)
- Time to diagnose (from engagement to root cause identified)
- Time to resolve (from diagnosis to service restoration)
- Time to close (from resolution to documentation complete)
But here's the clever part: they didn't just track averages. They calculated statistical control limits for each metric.
Understanding Process Predictability
| Metric | Mean | Standard Deviation | Upper Control Limit | Lower Control Limit |
|---|---|---|---|---|
| Time to Detect | 2.3 min | 0.8 min | 4.7 min | 0 min |
| Time to Acknowledge | 3.1 min | 1.2 min | 6.7 min | 0 min |
| Time to Triage | 5.4 min | 2.1 min | 11.7 min | 0 min |
| Time to Engage | 8.2 min | 3.4 min | 18.4 min | 0 min |
| Time to Diagnose | 23.7 min | 12.3 min | 60.9 min | 0 min |
| Time to Resolve | 47.2 min | 18.6 min | 103.0 min | 0 min |
Why this matters:
When an incident occurred, they could predict with 95% confidence how long resolution would take based on early indicators. More importantly, when a metric fell outside control limits, they knew something was wrong with the process itself—not just the incident.
For example, if time-to-diagnose exceeded 60 minutes, they didn't just blame complexity. They investigated: Was documentation inadequate? Did the on-call rotation need adjustment? Was training insufficient?
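The limits in the table follow the classic 3-sigma rule: UCL = mean + 3σ (for time to detect, 2.3 + 3 × 0.8 = 4.7), with the lower limit clamped at zero because durations can't be negative. A small Python sketch of the mechanics, using invented samples (whose spread won't match the table's):

```python
import statistics

# Per-incident "time to detect" samples in minutes (invented for illustration).
samples = [1.8, 2.1, 2.4, 3.0, 2.2, 1.9, 2.7, 2.3]

mean = statistics.mean(samples)
sd = statistics.stdev(samples)

# Classic 3-sigma control limits; durations can't go below zero,
# which is why the table's lower limits clamp at 0.
ucl = mean + 3 * sd
lcl = max(0.0, mean - 3 * sd)

def out_of_control(value: float) -> bool:
    """Flag a measurement outside the control limits: a signal to
    investigate the process itself, not just the individual incident."""
    return not (lcl <= value <= ucl)

print(f"mean={mean:.1f} sd={sd:.1f} UCL={ucl:.1f} LCL={lcl:.1f}")
```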
The business impact:
- MTTR (Mean Time to Resolution): 89 minutes → 47 minutes
- SLA achievement: 92% → 99.2%
- Major incidents: 23/year → 4/year
- Customer satisfaction: 7.8/10 → 9.3/10
- Predictability: could forecast quarterly incident impact within 5% accuracy
Level 4 Process Characteristics
From my experience, Level 4 processes exhibit these qualities:
Quantitative Management:

- Statistical process control applied to critical metrics
- Control limits established and monitored
- Process capability indices calculated
- Variation causes identified and addressed

Measurement and Analysis:

- Comprehensive measurement framework
- Automated data collection where possible
- Statistical analysis performed regularly
- Trends identified and acted upon

Predictive Capability:

- Outcome forecasting with quantified confidence
- Risk prediction based on process metrics
- Capacity planning driven by statistical models
- Proactive intervention based on trend analysis
Level 5: Optimizing Process (The "Continuous Innovation" Pinnacle)
Level 5 is the holy grail of process maturity. I've seen it achieved for specific processes in world-class organizations, but maintaining it requires relentless commitment.
The key difference between Level 4 and Level 5: Level 4 maintains stability; Level 5 drives innovation while maintaining stability.
A Real-World Level 5 Example
The most impressive Level 5 process I've witnessed was at a global payment processor. Their fraud detection process wasn't just predictable—it was continuously evolving.
How they operated:
Innovation Management:

- Quarterly process improvement goals set based on business objectives
- Cross-functional innovation teams tested new approaches
- A/B testing framework allowed controlled experimentation
- Machine learning models continuously refined detection algorithms
- Feedback loops from every source: customers, staff, partners, competitors
Continuous Improvement Metrics:
| Quarter | Fraud Detection Rate | False Positive Rate | Processing Time | Innovation Initiatives Tested | Improvements Deployed |
|---|---|---|---|---|---|
| Q1 2023 | 97.2% | 0.8% | 47ms | 12 | 3 |
| Q2 2023 | 97.8% | 0.6% | 43ms | 15 | 4 |
| Q3 2023 | 98.1% | 0.5% | 41ms | 14 | 5 |
| Q4 2023 | 98.6% | 0.4% | 38ms | 16 | 6 |
Every quarter, they improved. Not by accident, but by design.
The secret sauce:
- 10% of process capacity dedicated to innovation
- Protected time for experimentation
- Failure celebrated as learning
- Rapid iteration cycles (2-week sprints)
- Data-driven decision making on what to keep
"Level 5 processes don't just respond to change—they create change. They're living, evolving systems that get better every day."
How to Assess Your Current Capability Level: A Practical Framework
After hundreds of assessments, I've developed a practical approach that works across different processes and industries.
The Assessment Process I Use
Step 1: Define the Process Scope
First, be crystal clear about what process you're assessing. I see organizations fail here constantly.
- Bad scope: "IT Security"
- Good scope: "Security Incident Response"
- Better scope: "Detection, containment, and resolution of security incidents affecting production systems"
Step 2: Gather Evidence
For each capability level, gather concrete evidence:
Evidence Collection Framework
| Capability Level | Evidence Required | Where to Find It |
|---|---|---|
| Level 1 | Process achieves purpose | Completed work products, outcome records, stakeholder interviews |
| Level 2 | Management practices | Planning documents, performance reports, monitoring dashboards, work assignments |
| Level 3 | Standardization across units | Process documentation, training materials, cross-unit consistency checks |
| Level 4 | Quantitative data | Statistical reports, control charts, process capability analyses, prediction models |
| Level 5 | Innovation records | Improvement initiatives, A/B test results, change impact analyses, innovation metrics |
Step 3: Evaluate Process Attributes
COBIT uses nine process attributes across the capability levels. Here's my assessment checklist:
Process Attribute Assessment Checklist
| Level | Process Attribute | Assessment Questions |
|---|---|---|
| Level 1 | Process performance | Does the process achieve its intended outcomes? Can you demonstrate success? |
| Level 2 | Performance management | Are objectives set and tracked? Is performance measured against targets? |
| Level 2 | Work product management | Are outputs defined and managed? Do templates and standards exist? |
| Level 3 | Process definition | Is there a documented standard process? Is it maintained and accessible? |
| Level 3 | Process deployment | Is the process consistently deployed? Are people trained on it? |
| Level 4 | Process measurement | Are quantitative measures defined and collected? Are baselines established? |
| Level 4 | Process control | Is the process controlled using statistical methods? Are limits defined? |
| Level 5 | Process innovation | Are innovations systematically identified and tested? |
| Level 5 | Process optimization | Is continuous improvement embedded in the process? |
Step 4: Determine Capability Level
Here's the critical rule: You can only claim a capability level if ALL attributes for that level and all preceding levels are fully achieved.
I see organizations make this mistake all the time. They have some Level 3 characteristics, so they claim Level 3. But they're missing Level 2 attributes.
Think of it like building a house. You can't claim the second floor is complete if the first floor has holes in it.
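That rule is easy to encode. This Python sketch (attribute names taken from the checklist above, example ratings invented) returns the highest level whose attributes, and those of all lower levels, are fully achieved:

```python
# The rating rule as code: you hold a level only if every attribute at that
# level AND at all lower levels is fully achieved. Levels cannot be skipped.
ATTRIBUTES_BY_LEVEL = {
    1: ["process performance"],
    2: ["performance management", "work product management"],
    3: ["process definition", "process deployment"],
    4: ["process measurement", "process control"],
    5: ["process innovation", "process optimization"],
}

def capability_level(fully_achieved: set) -> int:
    """Highest level whose attributes (and all lower levels') are achieved."""
    level = 0
    for lvl in sorted(ATTRIBUTES_BY_LEVEL):
        if all(a in fully_achieved for a in ATTRIBUTES_BY_LEVEL[lvl]):
            level = lvl
        else:
            break  # a gap at this level caps the rating below it
    return level

# Some Level 3 practices in place, but a Level 2 gap -> rated Level 1:
achieved = {"process performance", "process definition", "process deployment"}
print(capability_level(achieved))  # 1
```

This is exactly the "house" analogy: the Level 3 attributes in the example count for nothing while a Level 2 attribute is missing.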
Common Assessment Mistakes (And How to Avoid Them)
After fifteen years of conducting assessments, I've seen every mistake possible. Here are the ones that trip up even experienced teams:
Mistake #1: Confusing Documentation with Implementation
I assessed a company that had beautiful process documentation. Level 3 quality, truly impressive.
Then I interviewed the people who were supposed to be following the process. Half of them didn't know the documentation existed. The other half had read it once during onboarding and never looked at it again.
Documentation ≠ Implementation
How to avoid it: Always validate documentation against actual practice. Interview process performers. Observe the process in action. Check recent work products for compliance.
Mistake #2: Rating Based on Aspiration, Not Reality
A CIO once told me: "We're definitely Level 3 for change management. We have a standard process that everyone follows."
I asked to see the last 20 change requests. 14 of them had skipped required approval steps. 8 had inadequate testing documentation. 3 had no risk assessment at all.
When I pointed this out, he said: "Well, people should be following the process. That's what we expect."
Expectations ≠ Reality
How to avoid it: Base ratings on evidence, not aspirations. Sample actual work products. Calculate compliance percentages. If adherence is below 80%, you haven't achieved that level.
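The compliance arithmetic is trivial, but worth making explicit. Using the 20-request sample from the anecdote above (14 skipped required approvals):

```python
# Evidence-based adherence check from sampled work products: each entry
# records whether a sampled change followed the required process steps.
# Numbers mirror the anecdote above; the boolean list itself is invented.
sampled_compliant = [False] * 14 + [True] * 6  # True = approvals followed

adherence = sum(sampled_compliant) / len(sampled_compliant)
ADHERENCE_BAR = 0.80  # below this, the level claim doesn't hold

print(f"Adherence: {adherence:.0%}")  # Adherence: 30%
print("Level claim supported" if adherence >= ADHERENCE_BAR
      else "Level claim not supported")
```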
Mistake #3: Inconsistent Scoping Across Processes
I watched an organization assess 25 different processes. Each assessment used different scoping, different evidence requirements, and different rating criteria.
The result? Completely meaningless comparisons. They couldn't identify which processes needed improvement most because the assessments weren't comparable.
How to avoid it: Use a standard assessment framework. Apply the same evidence requirements. Use consistent rating criteria. Document your approach and stick to it.
The Capability Development Roadmap: Moving Up the Levels
Here's the framework I use to help organizations systematically improve process capability.
Level 0 → Level 1: Establishing Basic Performance
- Time investment: 1-3 months
- Resource requirement: Moderate
- Primary focus: Getting consistent outcomes
Key actions:
- Document basic process flow (doesn't need to be perfect)
- Identify clear success criteria
- Assign process ownership
- Implement basic tools or templates
- Ensure everyone knows the process exists
- Celebrate early wins to build momentum
Success indicator: Process achieves its purpose 80%+ of the time
Level 1 → Level 2: Implementing Management Practices
- Time investment: 3-6 months
- Resource requirement: Moderate to High
- Primary focus: Making the process manageable and measurable
Key actions:
| Action Area | Specific Steps | Expected Outcome |
|---|---|---|
| Planning | Define resource needs, create schedules, allocate responsibilities | Process is planned before execution |
| Monitoring | Implement metrics, create dashboards, establish reporting cadence | Process performance is visible |
| Work Products | Standardize templates, define quality criteria, implement reviews | Outputs are consistent and high-quality |
| Performance | Set targets, track against SLAs, conduct variance analysis | Performance is measured and managed |
| Adjustment | Create feedback loops, implement corrective actions, document lessons | Process improves based on feedback |
Success indicator: You can answer "How's the process performing?" with data, not opinions
Level 2 → Level 3: Achieving Standardization
- Time investment: 6-12 months
- Resource requirement: High
- Primary focus: Enterprise-wide consistency
This is where most organizations struggle. Moving to Level 3 requires:
Organizational Change Management:

- Executive sponsorship (critical—I've never seen Level 3 achieved without it)
- Change champions in each business unit
- Communication campaign explaining "why" before "what"
- Addressing "not invented here" syndrome head-on

Process Standardization:

- Documented standard process (single source of truth)
- Tailoring guidelines for permitted variations
- Training program for all process performers
- Certification or competency assessment
- Compliance monitoring and enforcement

Infrastructure:

- Common tools and platforms
- Shared knowledge repository
- Centralized reporting and metrics
- Standard communication channels
Success indicator: Process is performed the same way across the organization; new staff can be productive quickly
Level 3 → Level 4: Implementing Statistical Control
- Time investment: 12-24 months
- Resource requirement: Very High
- Primary focus: Predictability and quantitative management
Moving to Level 4 requires:
Statistical Capability:

- Identify critical process metrics
- Establish measurement systems
- Calculate baselines and control limits
- Implement statistical process control
- Train staff in data analysis
- Create prediction models

Data Infrastructure:

- Automated data collection where possible
- Data quality assurance processes
- Analytics platforms and tools
- Reporting and visualization capabilities

Analytical Culture:

- Data-driven decision making becomes the norm
- Statistical literacy developed across the team
- Hypotheses tested with data
- Interventions based on statistical evidence
Success indicator: You can predict process outcomes with quantified confidence and detect process anomalies before they impact results
Level 4 → Level 5: Embedding Continuous Innovation
- Time investment: 18-36 months
- Resource requirement: Very High
- Primary focus: Systematic innovation and optimization
Achieving Level 5 requires:
Innovation Framework:

- Dedicated innovation capacity (10-15% of resources)
- Systematic identification of improvement opportunities
- Controlled experimentation environment
- Rapid iteration and testing cycles
- Clear criteria for adopting innovations

Learning Culture:

- Failures treated as learning opportunities
- Knowledge sharing as standard practice
- Cross-pollination with other industries and organizations
- Continuous skill development
- Recognition for innovation and improvement

Advanced Analytics:

- Predictive analytics identifying optimization opportunities
- Machine learning where applicable
- Simulation and modeling capabilities
- A/B testing infrastructure
Success indicator: Process continuously improves, adapting to changing needs while maintaining stability
The Business Case for Capability Improvement
Let me get practical. CFOs and boards want to know: What's the ROI of improving process capability?
I helped a retail organization perform this analysis for their order fulfillment process. Here's what we found:
ROI Analysis: Order Fulfillment Process Improvement
| Capability Level | Operating Cost (Annual) | Error Rate | Customer Satisfaction | Revenue Impact |
|---|---|---|---|---|
| Level 1 (Current) | $4.2M | 8.7% | 72% | Baseline |
| Level 2 (Target) | $3.8M | 3.2% | 84% | +$2.1M |
| Level 3 (Future) | $3.4M | 1.1% | 91% | +$4.8M |
Investment required for Level 1 → Level 2:

- Process improvement consulting: $180K
- Tool implementation: $220K
- Training and change management: $95K
- Total: $495K
Payback period: 4.2 months
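For readers rebuilding this arithmetic: payback period is simply investment divided by monthly net benefit. The figures above don't break down the mix of cost savings and revenue margin, so the benefit figure below is a back-calculation from the stated payback, not a source number:

```python
def payback_months(investment: float, annual_net_benefit: float) -> float:
    """Months until cumulative net benefit covers the up-front investment."""
    return investment / (annual_net_benefit / 12)

# Working backward from the article's figures: a $495K investment with a
# 4.2-month payback implies roughly $1.41M of annualized net benefit
# (cost savings plus some margin on the revenue uplift; the split between
# the two is an assumption, not stated in the source).
implied_annual_benefit = 495_000 * 12 / 4.2
print(round(implied_annual_benefit))  # 1414286
print(round(payback_months(495_000, implied_annual_benefit), 1))  # 4.2
```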
The business case was a no-brainer. But here's what made it compelling to the board: we didn't just show cost savings. We quantified:
- Revenue impact from improved customer satisfaction
- Risk reduction from lower error rates
- Competitive advantage from faster fulfillment
- Employee satisfaction improvements (reducing turnover costs)
"The question isn't whether you can afford to improve process capability. The question is whether you can afford not to."
Practical Tips from the Trenches
After fifteen years of capability assessments and improvements, here are my hard-won lessons:
Start Small, But Start
Don't try to assess and improve 50 processes simultaneously. Pick 3-5 critical processes and do them well.
I worked with a company that tried to improve 23 processes at once. Two years later, they'd made marginal progress on all of them and significant progress on none.
Compare that to another client who focused on just three processes. Within 18 months, they'd moved all three from Level 1 to Level 3, demonstrated clear ROI, and built organizational capability they then applied to other processes.
Use Capability Assessment as a Strategic Planning Tool
The organizations that get the most value from capability assessment use it to inform their strategic planning.
One client created a heat map showing:
- Current capability level of each critical process
- Strategic importance of each process
- Cost/effort to improve each process
This led to a fantastic conversation: "We're Level 1 for vendor risk management, which is critical to our strategy and relatively easy to improve. Why are we investing in AI-powered analytics when we can't even manage our vendors properly?"
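One way to sketch that heat-map logic is a simple priority score: big capability gap, high strategic importance, low improvement effort rises to the top. The process names, levels, and weights here are invented for illustration:

```python
# Hypothetical process inventory for a capability heat map.
processes = [
    # (name, current_level, importance 1-5, improvement_effort 1-5)
    ("Vendor risk management", 1, 5, 2),
    ("Change management",      2, 4, 3),
    ("AI-powered analytics",   1, 2, 5),
]

def priority(current_level: int, importance: int, effort: int) -> float:
    """Higher score = improve sooner: large strategic gap, cheap to close."""
    capability_gap = 5 - current_level
    return capability_gap * importance / effort

ranked = sorted(processes, key=lambda p: priority(p[1], p[2], p[3]), reverse=True)
print([name for name, *_ in ranked])
# ['Vendor risk management', 'Change management', 'AI-powered analytics']
```

With these invented weights, the ranking reproduces the client's conclusion: fix vendor risk management before funding analytics.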
Don't Skip Levels
I've seen organizations try to jump from Level 1 to Level 3. It doesn't work.
You can't standardize a process you haven't learned to manage. You can't manage a process you haven't learned to perform consistently.
Build the foundation before adding the next floor.
Celebrate Progress, Not Just Perfection
Moving from Level 0 to Level 1 is worth celebrating. You don't need to reach Level 5 to create value.
I worked with a startup that achieved Level 2 for their software release process. Were they world-class? No. Did they reduce release failures from 30% to 5%? Yes. Did that create massive value? Absolutely.
The Future of Process Capability Management
Here's where I see this field heading, based on trends I'm observing:
AI-Powered Assessment: Tools that continuously assess process capability based on work product analysis, communication patterns, and outcome metrics. Already seeing early implementations.
Predictive Capability Management: Systems that predict when a process is degrading before performance suffers, allowing proactive intervention.
Capability-Driven Architecture: Organizations designing their enterprise architecture explicitly around capability levels, with different tools and governance for different maturity levels.
Continuous Capability Measurement: Moving from periodic assessments to real-time capability monitoring, making capability management dynamic rather than static.
Your Action Plan: Getting Started Today
If you're ready to assess and improve your process capability, here's what I recommend:
This Week:

- Identify your 3-5 most critical IT processes
- Gather your process owners for a discussion
- Do a quick informal assessment using the tables in this article
- Identify which processes are at which levels

This Month:

- Select one process for formal assessment
- Gather evidence systematically using the framework above
- Document the current state honestly (no aspirational ratings)
- Identify the biggest gaps preventing next-level achievement

This Quarter:

- Develop an improvement plan for the selected process
- Secure resources and executive support
- Implement improvements systematically
- Measure progress against the baseline
- Document lessons learned

This Year:

- Achieve measurable improvement in the selected process
- Apply lessons learned to additional processes
- Build organizational capability assessment competency
- Integrate capability management into strategic planning
Final Thoughts: The Journey Is the Destination
I started this article with a CFO asking if their IT governance investment was creating value. After implementing capability assessment, they had a definitive answer: yes, and here's exactly how much, and here's where we're going next.
But the real transformation wasn't in the metrics. It was in the conversations.
Instead of arguments about whether a process was "good enough," they had objective discussions about capability levels. Instead of finger-pointing when things went wrong, they had diagnostic tools to identify process gaps. Instead of random improvement initiatives, they had a roadmap.
Process capability assessment transforms IT governance from an art to a science. It doesn't remove judgment, but it informs judgment with data, structure, and a common language.
After fifteen years in this field, I can tell you: organizations that systematically assess and improve their process capability don't just have better IT governance. They have better businesses. They respond faster to market changes. They scale more efficiently. They compete more effectively.
Because in today's digital economy, your IT processes are your business processes. And the maturity of those processes directly impacts your ability to create value, manage risk, and achieve your strategic objectives.
So where are your critical processes on the capability maturity ladder? More importantly, where do they need to be?
The answers to those questions might be the most valuable insight you gain this year.