The conference room at the Department of Veterans Affairs fell silent. I'd just asked a seemingly simple question: "What's the worst thing that could happen if this system were compromised?"
The room erupted. The medical director worried about patient safety. The privacy officer talked about HIPAA violations. The IT director mentioned system downtime. The legal counsel brought up congressional hearings.
After fifteen years of implementing FISMA across federal agencies, I've learned that this chaos is actually progress. Because here's the truth: if you can't articulate what you're protecting and why it matters, you can't protect it effectively.
That's exactly why the Risk Management Framework (RMF) starts with categorization. It's not bureaucracy—it's clarity. And in my experience, it's the step that most organizations rush through and later regret.
Why Categorization Is Your Foundation (Not Just Paperwork)
Let me share something that still makes me cringe. In 2017, I was brought in to help a mid-sized federal agency that had "completed" their RMF implementation. They proudly showed me their Security Authorization packages, their control assessments, everything.
Then I asked to see their system categorization documentation.
What I found was a disaster. Every single system—from their public website to their classified intelligence database—was categorized as "Moderate." When I asked why, the answer was devastatingly simple: "It seemed like the middle ground, and we didn't want to overthink it."
Six months later, an inspector general audit tore them apart. Systems handling classified information had inadequate controls. Low-risk systems had unnecessary, expensive controls that drained resources. The agency spent $2.3 million and eighteen months fixing what should have been done right the first time.
"System categorization isn't about checking a box on page one. It's about making intelligent decisions that ripple through your entire security program for years to come."
Understanding FIPS 199: The Foundation of Federal Security Categorization
Before we dive into the how, let's talk about the why. The Federal Information Processing Standard (FIPS) 199 provides the framework for categorizing federal information and information systems. It's not optional reading—it's the bedrock of everything that follows.
FIPS 199 defines security categorization based on three security objectives:
The Security Triad: Confidentiality, Integrity, and Availability
I've sat through hundreds of categorization workshops, and I've developed a way to explain these concepts that actually sticks:
- Confidentiality: Can the wrong people access this information?
- Integrity: Can someone change this information without authorization?
- Availability: Will this information be there when authorized people need it?
Sounds simple, right? It's not.
Let me give you a real example that illustrates why this matters. I worked with the National Weather Service on categorizing their storm prediction systems. Here's how we thought through it:
| Security Objective | Impact Level | Reasoning |
|---|---|---|
| Confidentiality | LOW | Weather data is public information; minimal harm if disclosed |
| Integrity | HIGH | Incorrect storm predictions could lead to loss of life—people make evacuation decisions based on this data |
| Availability | HIGH | During severe weather events, unavailable data could prevent timely warnings, potentially causing casualties |
The overall system categorization? HIGH (you take the highest impact level across all three objectives).
This meant implementing rigorous controls around data accuracy and system uptime, while relaxing some confidentiality controls. Without proper categorization, they might have wasted resources protecting public data while under-protecting the integrity and availability that actually mattered.
The Three Impact Levels: What They Really Mean
FIPS 199 defines three impact levels: Low, Moderate, and High. But here's what nobody tells you—these aren't just labels. They're fundamentally different security postures with different costs, different complexity, and different operational implications.
Low Impact: When Loss Is Limited
FIPS 199 Definition: "The loss of confidentiality, integrity, or availability could be expected to have a LIMITED adverse effect on organizational operations, organizational assets, or individuals."
What this actually means in practice:
- Minor financial loss or operational inconvenience
- Temporary reduction in mission capability
- Minor harm to individuals
- Short-term public embarrassment
Real-world example: I helped categorize a federal agency's employee cafeteria menu system. Compromise of confidentiality (someone learns what's for lunch), integrity (menu is changed), or availability (menu system is down) would cause minimal harm. This was clearly Low impact across all three objectives.
The cost implication: Low impact systems require 125 NIST 800-53 baseline controls. Implementation typically costs $50,000-$150,000 depending on system complexity.
Moderate Impact: The Federal Default (And Why That's Dangerous)
FIPS 199 Definition: "The loss of confidentiality, integrity, or availability could be expected to have a SERIOUS adverse effect on organizational operations, organizational assets, or individuals."
What this actually means in practice:
- Significant financial loss or operational degradation
- Significant reduction in mission effectiveness
- Significant harm to individuals
- Damage to agency reputation
Real-world example: A federal grant management system I worked with handled millions in grant funding. Compromise could result in:
- Confidentiality breach: Grant applicants' proprietary information exposed
- Integrity breach: Funding decisions manipulated, wrong organizations funded
- Availability breach: Grant processing delayed, impacting mission delivery
This was Moderate impact—serious consequences, but not catastrophic.
The cost implication: Moderate impact systems require 325 baseline controls. Implementation typically costs $200,000-$500,000.
Here's the trap: About 70% of federal systems get categorized as Moderate by default. I call this "the path of least resistance categorization." Agencies figure Low might look like they're not taking security seriously, and High seems like overkill. So Moderate becomes the default.
This is expensive and often wrong. I've seen Low impact systems waste hundreds of thousands implementing unnecessary controls, while actual High impact systems got dangerously under-protected.
High Impact: When Failure Is Catastrophic
FIPS 199 Definition: "The loss of confidentiality, integrity, or availability could be expected to have a SEVERE or CATASTROPHIC adverse effect on organizational operations, organizational assets, or individuals."
What this actually means in practice:
- Catastrophic financial loss
- Severe mission degradation or complete mission failure
- Severe harm or loss of life
- Major damage to national security or public interest
Real-world example: I consulted on a Department of Defense weapons system command and control platform. The stakes:
- Confidentiality breach: Enemy learns defensive capabilities, tactics
- Integrity breach: Weapons systems could be misdirected
- Availability breach: Critical defensive systems unavailable during attack
This was unquestionably High impact.
The cost implication: High impact systems require 421+ baseline controls. Implementation typically costs $1,000,000+ and requires specialized expertise.
"The difference between Moderate and High isn't just 100 more controls. It's the difference between a security program and a security fortress. Choose wisely."
The Categorization Process: Step-by-Step
After conducting over 200 system categorizations across federal agencies, I've refined this process to minimize errors and maximize accuracy. Here's exactly how I do it:
Step 1: Identify Information Types (The Part Everyone Gets Wrong)
This is where most categorization efforts derail. FIPS 199 requires you to identify the information types processed, stored, or transmitted by the system. But what does that actually mean?
Don't: Just list generic categories like "PII" or "financial data"
Do: Use NIST Special Publication 800-60 Volume II, which provides specific information types aligned to federal government functions.
Here's a table I use as a starting point:
| Mission-Based Information Types | Examples | Common Impact Levels |
|---|---|---|
| Administrative Management | Budget planning, procurement records | Low to Moderate |
| Benefits Management | Social Security benefits, veterans benefits | Moderate to High |
| Financial Management | Accounting systems, payment processing | Moderate to High |
| Health Information | Patient records, medical research | Moderate to High |
| Law Enforcement | Criminal records, investigations | High |
| National Defense | Weapons systems, classified intelligence | High |
| Emergency Response | 911 systems, disaster coordination | Moderate to High |
| Public Information | Government websites, press releases | Low |
I worked with the Social Security Administration on categorizing their benefits determination system. We identified these information types:
- Citizen benefit records (High confidentiality—SSNs, financial data)
- Benefits calculation algorithms (High integrity—incorrect calculations affect payments)
- Benefits payment processing (High availability—delays harm vulnerable citizens)
Each information type was assessed independently before determining the overall system categorization.
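When a system has more than a handful of information types, I like to capture each assessment as a small structured record so nothing gets lost between workshops. Here's a minimal Python sketch of that idea; the type names and impact levels below are illustrative stand-ins loosely echoing the SSA example, not official NIST 800-60 assignments.

```python
from dataclasses import dataclass
from enum import IntEnum

class Impact(IntEnum):
    """FIPS 199 impact levels, ordered so max() finds the high-water mark."""
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass
class InformationType:
    """One information type with its per-objective impact assessment."""
    name: str
    confidentiality: Impact
    integrity: Impact
    availability: Impact

# Illustrative entries only -- the levels here are for demonstration.
benefit_system_types = [
    InformationType("Citizen benefit records", Impact.HIGH, Impact.MODERATE, Impact.MODERATE),
    InformationType("Benefits calculation rules", Impact.LOW, Impact.HIGH, Impact.MODERATE),
    InformationType("Benefits payment processing", Impact.MODERATE, Impact.MODERATE, Impact.HIGH),
]

for t in benefit_system_types:
    print(f"{t.name}: C={t.confidentiality.name}, I={t.integrity.name}, A={t.availability.name}")
```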
Step 2: Determine Impact Levels for Each Security Objective
This is where the rubber meets the road. For each information type, you assess the potential impact of compromise across confidentiality, integrity, and availability.
I use a structured questioning approach:
For Confidentiality:
1. Who should NOT have access to this information?
2. What happens if they get it anyway?
3. How much harm would unauthorized disclosure cause?
4. To whom would it cause harm? (Individuals, organization, national security)
5. How long would the harm persist?
For Integrity:
1. What decisions are made based on this information?
2. What happens if the information is incorrect or modified?
3. How quickly would unauthorized changes be detected?
4. What are the consequences of making decisions based on bad data?
5. Can the damage from unauthorized modification be reversed?
For Availability:
1. How time-sensitive is access to this information?
2. What mission functions stop if the system is unavailable?
3. How long can operations continue without this system?
4. What is the impact of downtime on people or missions?
5. Are there backup systems or manual processes available?
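These prompts are worth standardizing; I keep them in a simple reusable structure so every workshop walks the same list in the same order. A trivial sketch (the dictionary layout is just my own habit, not any NIST format):

```python
# The structured questioning prompts from above, keyed by security objective.
ASSESSMENT_QUESTIONS = {
    "confidentiality": [
        "Who should NOT have access to this information?",
        "What happens if they get it anyway?",
        "How much harm would unauthorized disclosure cause?",
        "To whom would it cause harm?",
        "How long would the harm persist?",
    ],
    "integrity": [
        "What decisions are made based on this information?",
        "What happens if the information is incorrect or modified?",
        "How quickly would unauthorized changes be detected?",
        "What are the consequences of making decisions based on bad data?",
        "Can the damage from unauthorized modification be reversed?",
    ],
    "availability": [
        "How time-sensitive is access to this information?",
        "What mission functions stop if the system is unavailable?",
        "How long can operations continue without this system?",
        "What is the impact of downtime on people or missions?",
        "Are there backup systems or manual processes available?",
    ],
}

for objective, questions in ASSESSMENT_QUESTIONS.items():
    print(f"\n{objective.upper()}")
    for i, q in enumerate(questions, 1):
        print(f"  {i}. {q}")
```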
Let me show you how this works with a real example. I helped the Federal Aviation Administration categorize their air traffic control data systems:
| Security Objective | Impact Analysis | Level |
|---|---|---|
| Confidentiality | Flight plan data includes some proprietary airline information, but disclosure would cause minimal harm | LOW |
| Integrity | Incorrect flight data could lead to aircraft collisions, loss of life, catastrophic safety incidents | HIGH |
| Availability | System downtime grounds aircraft, disrupts national airspace, creates safety risks, causes massive economic impact | HIGH |
| Overall System | Highest impact level determines categorization | HIGH |
See what happened there? Even though confidentiality was Low, the system is High overall because both integrity and availability were High impact.
Step 3: Select the Provisional Impact Level
Here's the rule that's deceptively simple: The system's security categorization is the high-water mark of the impact levels.
SC(system) = {(confidentiality, impact), (integrity, impact), (availability, impact)}, where each impact value is LOW, MODERATE, or HIGH
If any one security objective is High, the system is High. If all are Low, the system is Low. Anything else is Moderate.
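If you want to sanity-check the roll-up mechanically, the high-water mark rule reduces to a single max over ordered impact levels. A minimal sketch (the Impact enum and function name are mine, not from any NIST artifact):

```python
from enum import IntEnum

class Impact(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

def system_categorization(confidentiality: Impact, integrity: Impact,
                          availability: Impact) -> Impact:
    """FIPS 199 high-water mark: the overall categorization is the
    highest impact level across the three security objectives."""
    return max(confidentiality, integrity, availability)

# The FAA example above: C=LOW, I=HIGH, A=HIGH -> overall HIGH.
print(system_categorization(Impact.LOW, Impact.HIGH, Impact.HIGH).name)  # HIGH
```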
But—and this is crucial—you can adjust based on system-specific factors. I'll explain that in a moment.
Step 4: Document Your Rationale (This Saves You Later)
I cannot overstate how important documentation is. Here's why:
In 2019, I reviewed a categorization that had been completed three years earlier. The system was categorized as High, requiring expensive controls. Nobody remembered why. The original team had moved on. The documentation said only: "High impact system per FIPS 199."
We had to redo the entire categorization from scratch because nobody could justify the High categorization. Turns out it should have been Moderate. The agency had wasted $400,000 over three years on unnecessary controls.
Here's my documentation template:
| Element | Description |
|---|---|
| System Name | Official system name and acronym |
| System Description | Purpose, function, and scope |
| Information Types | Specific NIST 800-60 types processed |
| Confidentiality Assessment | Impact analysis and level with detailed justification |
| Integrity Assessment | Impact analysis and level with detailed justification |
| Availability Assessment | Impact analysis and level with detailed justification |
| Provisional Categorization | Overall system categorization |
| Adjustment Factors | Any factors requiring adjustment up or down |
| Final Categorization | Approved categorization |
| Approval Authority | Name, title, date |
| Review Date | When categorization should be reviewed |
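Nothing in FIPS 199 requires this, but I've started keeping the same template as a machine-readable record so it can feed the System Security Plan and the eventual re-categorization review. A hypothetical example with placeholder values (the field names simply mirror the table above):

```python
import json

# Hypothetical categorization record -- every value below is a placeholder.
categorization_record = {
    "system_name": "Example Grant Management System (EGMS)",
    "system_description": "Processes federal grant applications and awards",
    "information_types": ["Grant applications", "Funding decisions"],
    "confidentiality": {"level": "MODERATE", "justification": "Applicant proprietary data"},
    "integrity": {"level": "MODERATE", "justification": "Funding decisions depend on accuracy"},
    "availability": {"level": "MODERATE", "justification": "Delays impact mission delivery"},
    "provisional_categorization": "MODERATE",
    "adjustment_factors": [],
    "final_categorization": "MODERATE",
    "approval_authority": {"name": "TBD", "title": "Authorizing Official", "date": "TBD"},
    "review_date": "2027-01-01",
}

print(json.dumps(categorization_record, indent=2))
```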
"Three years from now, when someone asks why this system is categorized the way it is, your documentation should tell the complete story. Write for that future person—they'll thank you."
The Adjustment Factors Nobody Talks About
Here's where experience matters. FIPS 199 and NIST 800-60 provide baseline guidance, but real-world systems often have factors that warrant adjustment.
When to Adjust UP (Increase Impact Level)
I've adjusted categorization upward in these situations:
Aggregation Effects: Individual data elements might be Low, but aggregated data could be High.
Example: A personnel system I categorized contained:
- Individual employee names: Low
- Individual salaries: Low
- Individual security clearance levels: Moderate
- ALL combined for entire intelligence agency: High (complete personnel roster with clearances is highly sensitive)
Worst-Case Scenarios: Sometimes you need to consider unlikely but catastrophic scenarios.
Example: A building access control system at a federal courthouse was initially categorized as Moderate. But we considered: what if an attacker gained control during a high-profile terrorist trial? They could enable an attack on judges, witnesses, or jurors. We adjusted to High.
Regulatory Requirements: Some laws mandate specific protection levels.
Example: Systems handling classified information are national security systems governed by CNSSI 1253 rather than FIPS 199, and in practice they're protected at a level at least equivalent to High, regardless of what a standalone impact analysis might suggest.
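When I review a batch of adjustment decisions, I sometimes encode the two most common upward adjustments as an explicit rule so they're applied consistently. To be clear, this is a sketch of my own heuristic (a one-level bump for aggregation effects, plus a hard floor for regulatory mandates), not anything prescribed by FIPS 199:

```python
from enum import IntEnum

class Impact(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

def adjusted_level(provisional: Impact, *, aggregation_sensitive: bool = False,
                   regulatory_floor: Impact | None = None) -> Impact:
    """Apply two common upward adjustments: bump one level for aggregation
    effects, and never fall below a regulatory mandated floor."""
    level = provisional
    if aggregation_sensitive and level < Impact.HIGH:
        level = Impact(level + 1)  # aggregated data is more sensitive than its parts
    if regulatory_floor is not None and level < regulatory_floor:
        level = regulatory_floor
    return level

# Personnel roster example: Moderate on its own, High once you consider
# the agency-wide aggregate with clearance levels.
print(adjusted_level(Impact.MODERATE, aggregation_sensitive=True).name)  # HIGH
```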
When to Adjust DOWN (Decrease Impact Level)
This is rarer, but it happens:
Effective Compensating Controls: If robust controls already exist outside the system that mitigate impact.
Example: A financial transaction system might appear High for integrity, but if every transaction requires out-of-band verification through a separate, independent system, the impact of system compromise is reduced.
Limited Scope: If the system processes only a tiny subset of sensitive data.
Example: A training system that processes sample data (no actual citizen records) might be categorized lower than the production system it's designed to train people on.
Time-Bound Data: If information becomes non-sensitive quickly.
Example: A system that processes public announcements for 24-hour embargoes. After release, the data is public, so availability and integrity only matter for a brief window.
Common Categorization Mistakes (And How to Avoid Them)
After fifteen years, I've seen every categorization mistake possible. Here are the big ones:
Mistake #1: Categorizing Based on System Type, Not Impact
Wrong thinking: "It's a database, so it must be High."
Right thinking: "What's IN the database determines impact, not the technology."
I've seen public-facing websites (Low) and mainframe databases (also Low) that people assumed were high-risk because of the technology. I've also seen simple spreadsheets that were legitimately High impact because they contained nuclear facility security plans.
Mistake #2: Ignoring Availability
Wrong thinking: "Availability doesn't matter as much as confidentiality."
Right thinking: For many federal missions, availability IS the mission.
I worked with FEMA on their disaster response coordination systems. Confidentiality? Low—most disaster information is public. Integrity? Moderate—incorrect info is bad but correctable. Availability? HIGH—during disasters, minutes of downtime can cost lives.
The system was High, and it wasn't even close. Yet their initial categorization had focused almost entirely on confidentiality and marked it Moderate.
Mistake #3: One-Size-Fits-All Categorization
Wrong thinking: "All our systems handle similar data, so they're all the same categorization."
Right thinking: Context matters as much as content.
I helped an agency that had ten different systems all handling employee data. All ten had initially been categorized identically. But:
- The payroll system (High availability—missed payroll affects thousands)
- The training tracking system (Low availability—delays are inconvenient)
- The security clearance system (High confidentiality—clearance data is sensitive)
- The cafeteria badge system (Low everything—it's lunch)
Same data type, completely different categorizations based on use and impact.
Mistake #4: Letting Politics Override Analysis
This is the hardest one. Sometimes categorization becomes political.
A system owner wants Low categorization (less work, lower cost). A CISO wants High (better security posture, more resources). Neither is actually analyzing impact—they're pursuing agendas.
I was in a meeting where a program manager literally said: "If we categorize this as High, we won't have budget to deliver the mission. So it needs to be Moderate."
That's backwards. The categorization should drive the budget conversation, not the other way around.
My response was: "If the mission requires High security but you can't afford High controls, you have three options: increase the budget, reduce the system scope, or don't build the system. What you can't do is pretend High-risk systems are Moderate-risk because it's more convenient."
"Categorization is a technical risk assessment, not a budget negotiation. The moment you let budget drive categorization, you've failed at risk management."
Tools and Templates That Actually Help
Here are resources I use constantly:
NIST SP 800-60 Volume II: The Information Type Catalog
This is your bible for federal information types. It provides:
- 32 business areas
- 134 information types
- Recommended provisional impact levels for each
Pro tip: Don't just accept the recommended levels blindly. They're starting points, not answers. I've adjusted probably 40% of them based on agency-specific context.
FIPS 199 Impact Assessment Worksheet
I've developed a worksheet I use for every categorization. Here's a simplified version:
| Assessment Question | Answer | Impact |
|---|---|---|
| What is the worst-case harm from unauthorized disclosure? | [Specific scenario] | L/M/H |
| Who would be harmed? (Self, Public, National Security) | [Specific parties] | |
| Is this information classified or subject to the Privacy Act? | Yes/No | |
| What is the worst-case harm from unauthorized modification? | [Specific scenario] | L/M/H |
| How quickly would modification be detected? | [Timeframe] | |
| What decisions rely on this information's accuracy? | [Specific decisions] | |
| What is the worst-case harm from system unavailability? | [Specific scenario] | L/M/H |
| What mission functions would stop? | [Specific functions] | |
| How long can the mission tolerate downtime? | [Timeframe] | |
| Are there backup systems or manual processes? | Yes/No | |
Categorization Comparison Matrix
When I'm working with multiple similar systems, I create a comparison matrix to ensure consistency:
| System Name | Primary Function | Confidentiality | Integrity | Availability | Overall | Justification |
|---|---|---|---|---|---|---|
| System A | Personnel records | High | Moderate | Moderate | High | SSNs, salary data |
| System B | Training tracking | Low | Low | Low | Low | Public information |
| System C | Security clearances | High | High | Moderate | High | Classified adjudication |
| System D | Badge access | Moderate | Moderate | Moderate | Moderate | Facility security |
This visual comparison helps spot inconsistencies and ensures similar systems get similar treatment.
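For larger portfolios, the inconsistency-spotting is easy to automate. A toy sketch (the system names and data labels are invented) that flags systems handling the same kind of data but carrying different overall levels:

```python
from collections import defaultdict

# Invented portfolio entries for illustration.
systems = [
    {"name": "System A", "data": "personnel records", "overall": "HIGH"},
    {"name": "System B", "data": "training records", "overall": "LOW"},
    {"name": "System C", "data": "personnel records", "overall": "MODERATE"},
]

# Group overall levels by the kind of data each system handles.
levels_by_data = defaultdict(set)
for s in systems:
    levels_by_data[s["data"]].add(s["overall"])

# Any data type carrying more than one level deserves a reviewer's question.
for data, levels in levels_by_data.items():
    if len(levels) > 1:
        print(f"Review: '{data}' systems disagree on level: {sorted(levels)}")
```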
The Categorization Review Process
Categorization isn't one-and-done. Here's how I structure the review and approval process:
Step 1: Technical Review (1-2 weeks)
The system owner and security team complete the initial categorization using FIPS 199 and NIST 800-60.
Key participants:
- System owner (understands mission)
- ISSO (understands security)
- Privacy officer (understands data sensitivity)
- Mission area representatives (understand impact)
Step 2: Peer Review (1 week)
Other system owners and security professionals review the categorization for consistency with similar systems.
This catches:
- Inconsistencies with comparable systems
- Missing information types
- Impact assessments that don't match organizational norms
Step 3: Management Review (1 week)
Senior management reviews for:
- Alignment with mission priorities
- Resource implications
- Risk acceptance decisions
Step 4: Authorizing Official Approval (1 week)
The final approval authority (typically a senior executive) reviews and approves.
Important: The AO can adjust categorization, but must document the rationale.
Step 5: Scheduled Re-categorization
Categorization should be reviewed:
- At least every 3 years
- When mission changes significantly
- When system changes significantly
- When threat environment changes
- After security incidents
I worked with an agency that categorized a system in 2015 as Moderate. By 2018, the system had evolved to process significantly more sensitive data and support more critical missions. Nobody re-categorized it. An audit found the system was actually High and had been operating with inadequate controls for three years. The remediation cost $1.2 million.
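The painful part is that a trivial staleness check would have flagged that system years earlier. A sketch using the three-year trigger above (the system names and dates are invented):

```python
from datetime import date, timedelta

# Rough three-year review interval from the trigger list above.
REVIEW_INTERVAL = timedelta(days=3 * 365)

# Invented inventory: (system name, date of last categorization review).
inventory = [
    ("Grant management system", date(2015, 6, 1)),
    ("Public website", date(2023, 1, 15)),
]

today = date(2024, 1, 1)
for name, last_review in inventory:
    if today - last_review > REVIEW_INTERVAL:
        print(f"OVERDUE: {name} (last reviewed {last_review})")
```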
Real-World Case Study: Getting It Right
Let me share a success story that illustrates everything I've covered.
In 2020, I helped the Department of Energy categorize a new renewable energy grid management system. Here's how we worked through it:
The Challenge
The system would:
- Monitor energy production from solar and wind facilities
- Balance grid load in real-time
- Coordinate with traditional power plants
- Provide data to energy traders and utilities
Initial assessments ranged wildly—some stakeholders said Low, others said High.
The Process
We assembled a cross-functional team and systematically worked through FIPS 199:
Information Types Identified:
- Real-time grid performance data
- Energy production forecasts
- Equipment status and diagnostics
- Trading and pricing information
- Critical infrastructure locations
Confidentiality Assessment:
| Information Type | Disclosure Impact | Level |
|---|---|---|
| Grid performance | Competitors could gain advantage; minimal harm | Low |
| Production forecasts | Market-sensitive but not catastrophic | Moderate |
| Equipment diagnostics | Could reveal vulnerabilities but limited scope | Moderate |
| Trading information | Financial harm to market participants | Moderate |
| Infrastructure locations | Potential terrorist targeting | High |
Provisional Confidentiality: High (due to infrastructure location data)
Integrity Assessment:
Information Type | Modification Impact | Level |
|---|---|---|
Grid performance | Bad data could cause load imbalances, blackouts | High |
Production forecasts | Market manipulation, financial loss | Moderate |
Equipment diagnostics | Missed maintenance, equipment failure | Moderate |
Trading information | Market manipulation, financial loss | Moderate |
Infrastructure locations | Misdirected emergency response | Moderate |
Provisional Integrity: High (grid stability depends on data accuracy)
Availability Assessment:
| Information Type | Unavailability Impact | Level |
|---|---|---|
| Grid performance | Cannot balance load, rolling blackouts | High |
| Production forecasts | Suboptimal grid management, higher costs | Moderate |
| Equipment diagnostics | Delayed maintenance, potential failures | Moderate |
| Trading information | Market disruption, financial loss | Moderate |
| Infrastructure locations | Emergency response delays | Moderate |
Provisional Availability: High (grid management requires real-time data)
The Decision
System Categorization: HIGH
Rationale:
- Confidentiality High due to critical infrastructure data
- Integrity High due to grid stability dependence
- Availability High due to real-time operational requirements
Resource Impact:
- 421 baseline controls required
- Implementation budget: $1.8 million
- Annual O&M: $400,000
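For what it's worth, the roll-up in this case study is mechanical once the three assessment tables are filled in: take the highest level across information types for each objective, then the highest level across objectives. A sketch reproducing the numbers above:

```python
from enum import IntEnum

class Impact(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

# Per-information-type levels transcribed from the assessment tables above:
# (confidentiality, integrity, availability).
doe_info_types = {
    "Grid performance": (Impact.LOW, Impact.HIGH, Impact.HIGH),
    "Production forecasts": (Impact.MODERATE, Impact.MODERATE, Impact.MODERATE),
    "Equipment diagnostics": (Impact.MODERATE, Impact.MODERATE, Impact.MODERATE),
    "Trading information": (Impact.MODERATE, Impact.MODERATE, Impact.MODERATE),
    "Infrastructure locations": (Impact.HIGH, Impact.MODERATE, Impact.MODERATE),
}

# Provisional level per objective is the max across information types.
per_objective = [max(levels[i] for levels in doe_info_types.values()) for i in range(3)]
print("C/I/A:", [lvl.name for lvl in per_objective])  # ['HIGH', 'HIGH', 'HIGH']

# Overall categorization is the max across the three objectives.
print("Overall:", max(per_objective).name)  # HIGH
```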
The Outcome
The agency initially resisted—they'd budgeted for a Moderate system. But we documented the analysis clearly, presented the risk scenarios, and showed the potential consequences of under-protecting the system.
They approved the High categorization and adjusted the budget.
Three years later, the system successfully defended against a sophisticated attack that targeted grid management systems nationwide. The High-level controls—particularly the network segmentation and advanced monitoring—detected and stopped the attack before any damage occurred.
The CISO told me: "If we'd categorized this as Moderate to save money, we'd be dealing with blackouts and a congressional investigation. Best $1.8 million we ever spent."
"Proper categorization doesn't prevent attacks. It ensures you're prepared with the right defenses when attacks come—and they always come."
Your Categorization Checklist
Based on 15+ years and 200+ categorizations, here's my final checklist:
Before You Start:
- [ ] Read FIPS 199 completely
- [ ] Review NIST SP 800-60 Volumes I and II
- [ ] Identify all stakeholders
- [ ] Block adequate time (don't rush this)
- [ ] Gather system documentation
During Categorization:
- [ ] Identify ALL information types (use NIST 800-60 Vol II)
- [ ] Assess each information type independently
- [ ] Consider worst-case scenarios for each security objective
- [ ] Document rationale for every decision
- [ ] Check for aggregation effects
- [ ] Review for consistency with similar systems
- [ ] Consider adjustment factors
- [ ] Have technical peer review
- [ ] Get stakeholder buy-in
- [ ] Obtain formal approval
After Categorization:
- [ ] Document in System Security Plan
- [ ] Brief implementation team on implications
- [ ] Adjust budget if necessary
- [ ] Select appropriate control baseline (NIST 800-53)
- [ ] Schedule re-categorization review
- [ ] Archive categorization documentation
Final Thoughts: Categorization as Strategic Advantage
After fifteen years in federal cybersecurity, I've come to see system categorization not as a compliance burden, but as strategic clarity.
Done right, categorization:
- Forces you to understand what you're protecting
- Aligns security investment with actual risk
- Creates a defensible foundation for all security decisions
- Prevents both over-spending and under-protecting
- Provides clear communication to stakeholders and auditors
Done wrong, categorization:
- Wastes resources on unimportant systems
- Leaves critical systems dangerously exposed
- Creates compliance issues that cost millions to fix
- Undermines confidence in your security program
The choice is yours. Invest the time upfront to get categorization right, or pay exponentially more later to fix it.
I know which one I'd choose. After watching agencies waste millions on botched categorizations, I've learned that categorization is the cheapest and most important security decision you'll make.
Get Step 1 right, and the rest of the RMF flows smoothly. Get it wrong, and you're building a security program on quicksand.
Choose wisely.