The conference room went silent. The Department of Veterans Affairs CISO had just asked me a seemingly simple question: "So, what level is our patient scheduling system?"
It was 2017, and I was three months into helping them implement FISMA compliance across their IT portfolio. The answer to his question would determine whether they'd need to implement 150 security controls or 400. It would impact their budget by millions. And if we got it wrong, we could either waste taxpayer money or—far worse—leave veteran healthcare data vulnerable.
"Give me 48 hours," I told him. "This decision is too important to rush."
That's the moment I truly understood the power and complexity of FIPS 199 system categorization. After fifteen years working with federal agencies, defense contractors, and regulated industries, I can tell you this: FIPS 199 categorization is the single most consequential decision you'll make in your FISMA compliance journey.
Get it right, and you'll build a security program that's both effective and efficient. Get it wrong, and you'll either drown in unnecessary controls or leave critical gaps in your defenses.
Let me show you how to get it right.
What FIPS 199 Actually Is (And Why It Matters)
FIPS Publication 199—officially titled "Standards for Security Categorization of Federal Information and Information Systems"—is the foundation of the entire FISMA compliance framework. Published by NIST, it provides the methodology for categorizing information and information systems based on potential impact.
Think of it as a triage system for your IT environment. Just like emergency rooms categorize patients to allocate resources effectively, FIPS 199 categorizes systems to apply appropriate security controls.
Here's what most people miss: FIPS 199 isn't just about compliance bureaucracy. It's about intelligent resource allocation.
I worked with a civilian agency in 2019 that had categorized everything as "High" impact. Why? Because they were terrified of getting it wrong. The result? They were spending $14 million annually maintaining security controls on a public-facing informational website that literally just displayed office hours and directions.
Meanwhile, their actual mission-critical systems—the ones processing sensitive citizen data—were starved for resources because the budget was exhausted on over-securing low-risk systems.
"FIPS 199 categorization isn't about making systems more secure. It's about making the right systems appropriately secure."
The Three Security Objectives: Your Foundation
FIPS 199 evaluates every system based on three fundamental security objectives. These aren't abstract concepts—they're practical measures of what happens when things go wrong.
1. Confidentiality
The Question: What's the impact if unauthorized individuals access this information?
I'll never forget working with a DoD contractor who'd categorized their logistics planning system as "Low" for confidentiality. Their reasoning? "It's just supply chain data—delivery schedules, inventory levels, nothing classified."
Then someone pointed out that those delivery schedules showed when military units would be undersupplied. That inventory data revealed equipment shortages. The patterns exposed operational capabilities and limitations.
We immediately recategorized it as "High." The adversarial intelligence value was enormous.
2. Integrity
The Question: What's the impact if this information is modified or destroyed without authorization?
This is where I see the most mistakes. People focus on confidentiality and overlook integrity.
A state health department I consulted for had categorized their disease surveillance system as "Moderate" for integrity. "If data gets corrupted, we can just re-enter it," they reasoned.
But consider: What if malware altered infection rates during a disease outbreak? Public health decisions—school closures, resource allocation, emergency declarations—would be based on false data. People could die.
We recategorized it as "High" for integrity. The potential impact of data manipulation was catastrophic, even if the data wasn't particularly confidential.
3. Availability
The Question: What's the impact if this system or information becomes unavailable?
Here's a story that illustrates this perfectly:
In 2020, I was helping a federal benefits agency categorize their systems. They wanted to mark their public inquiry system as "Low" availability. "It's just a phone tree and web form," they said. "People can call back later."
Then COVID-19 hit. Millions of Americans suddenly needed unemployment benefits, emergency assistance, and critical support services. That "simple inquiry system" became the primary interface between desperate citizens and life-saving benefits.
System downtime didn't just mean inconvenience—it meant families couldn't pay rent, buy food, or access healthcare. We recategorized it as "High" for availability during the emergency, with plans to reassess post-crisis.
"Impact isn't about the system—it's about what that system enables. Always ask: What mission fails if this system fails?"
The Three Impact Levels: Breaking Down the Definitions
FIPS 199 defines three impact levels: Low, Moderate, and High. But here's what the official definitions don't tell you—the real-world implications I've learned from hundreds of categorization exercises.
| Impact Level | Official Definition | What It Really Means | Annual Security Cost (Typical) |
|---|---|---|---|
| LOW | Limited adverse effect | Minor inconvenience; workarounds exist; minimal mission impact | $50,000 - $150,000 |
| MODERATE | Serious adverse effect | Significant mission degradation; financial loss; limited injury possible | $200,000 - $500,000 |
| HIGH | Severe or catastrophic adverse effect | Mission failure; major financial loss; loss of life possible | $800,000 - $2,500,000+ |
Let me break down what these really mean in practice:
LOW Impact Systems
Real-World Examples I've Categorized:
Public-facing informational websites (no user data collection)
Internal phone directories
Meeting room scheduling systems
General office automation tools (for non-sensitive work)
The Test I Use: If this system went down for a week, could the organization still accomplish its mission? If yes, it's probably Low.
Security Control Baseline: 125 controls from NIST SP 800-53
A Cautionary Tale:
I once worked with an agency that categorized their training management system as Low. "It just tracks who took what training," they said.
But that system also tracked security clearance training, which determined who could access classified information. If someone manipulated the system to show they'd completed required training when they hadn't, they could gain unauthorized access to classified systems.
We recategorized it as Moderate. The integrity impact was higher than they'd initially recognized.
MODERATE Impact Systems
Real-World Examples:
Personnel management systems (non-classified HR data)
Financial management systems (internal accounting)
Case management systems (non-PII or moderate sensitivity PII)
Internal collaboration platforms with business-sensitive information
The Test I Use: If this system failed or was compromised, would it require executive leadership involvement to resolve? If yes, it's at least Moderate.
Security Control Baseline: 325 controls from NIST SP 800-53
The Budget Reality:
A federal contractor I worked with balked at the cost difference between Low and Moderate. "We're talking about $250,000 more per year," the CFO complained.
Six months after implementing Moderate controls, they detected and stopped an APT (Advanced Persistent Threat) attempting to exfiltrate bid data and proprietary methodologies. The security monitoring required by Moderate controls caught the attack in its early stages.
The estimated value of the intellectual property they protected? Over $40 million.
The CFO never questioned security spending again.
HIGH Impact Systems
Real-World Examples:
National security systems
Emergency response and 911 systems
Critical infrastructure control systems
Medical systems where failure could cause death or serious injury
Financial systems processing large-value transactions
Systems containing classified information
The Test I Use: Could a compromise of this system result in loss of life, catastrophic mission failure, or severe economic impact? If yes, it's High.
Security Control Baseline: 421+ controls from NIST SP 800-53
Why High Categorization Changes Everything:
I consulted for a power utility managing the electrical grid for a major metropolitan area. They initially wanted to categorize their SCADA (Supervisory Control and Data Acquisition) systems as Moderate.
The conversation went like this:
Me: "What happens if this system is compromised?"
Them: "Power outages, but we have redundancies."
Me: "How long to restore power if the control system is completely compromised?"
Them: "Worst case... maybe a week for full restoration."
Me: "How many people die if a major city has no power for a week in the summer or winter?"
Silence.
Them: "We need to categorize this as High."
Here's the truth about High impact systems: The security investment isn't optional—it's insurance against catastrophic failure.
The FIPS 199 Categorization Formula
Now let's get into the mechanics. FIPS 199 uses a specific formula to determine overall system categorization:
SC[information_type] = {(confidentiality, impact), (integrity, impact), (availability, impact)}
The overall system categorization is determined by the highest impact level across all three security objectives for all information types processed by the system.
This is called the "high water mark" principle, and it trips people up constantly.
Real-World Example: The Patient Scheduling System
Remember that VA patient scheduling system from the beginning? Here's how we categorized it:
| Security Objective | Information Type | Impact Level | Reasoning |
|---|---|---|---|
| Confidentiality | Patient PII | MODERATE | HIPAA-protected data; disclosure causes embarrassment, potential discrimination |
| Confidentiality | Medical appointments | MODERATE | Reveals medical conditions; moderate privacy impact |
| Integrity | Scheduling data | HIGH | Incorrect appointments could delay critical care; potential loss of life |
| Integrity | Patient identity | HIGH | Wrong patient could receive wrong treatment; life-threatening |
| Availability | Scheduling system | MODERATE | Veterans can call to reschedule; disruption is serious but not life-threatening |
Overall System Categorization: HIGH
Even though confidentiality and availability were only Moderate, the High integrity impact drove the overall categorization. Why? Because if someone maliciously altered scheduling data—for example, canceling chemotherapy appointments or changing surgical schedules—patients could die.
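Restating that table as data (the values below are copied from the rows above; this is an illustration, not an actual VA artifact), the high water mark falls out of a single max():

```python
# Impact ratings from the scheduling-system table above (illustrative data).
IMPACT_ORDER = {"LOW": 1, "MODERATE": 2, "HIGH": 3}

ratings = {
    ("Patient PII", "confidentiality"): "MODERATE",
    ("Medical appointments", "confidentiality"): "MODERATE",
    ("Scheduling data", "integrity"): "HIGH",
    ("Patient identity", "integrity"): "HIGH",
    ("Scheduling system", "availability"): "MODERATE",
}

# The single highest rating anywhere in the table drives the overall level.
overall = max(ratings.values(), key=IMPACT_ORDER.get)
print(overall)  # HIGH
```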
The CISO initially pushed back. "High categorization will cost us millions more," he said.
I asked him: "How do you categorize a system where the impact of failure is veteran deaths?"
The decision became clear.
"The high water mark principle exists for a reason: your system is only as secure as its weakest critical objective."
Common Categorization Mistakes (And How to Avoid Them)
After working through hundreds of FIPS 199 categorizations, I've seen the same mistakes repeated across agencies and contractors. Here are the big ones:
Mistake #1: Confusing System Purpose with System Content
A Defense contractor wanted to categorize their "unclassified" network as Low across the board. "It only handles unclassified information," they argued.
But that unclassified network contained:
Proprietary technical designs
Contract bid information
Export-controlled technical data
Personnel records with PII
Just because information isn't classified doesn't mean it's low impact. The content matters more than the label.
Mistake #2: Ignoring Aggregate Impact
I worked with an agency that categorized their travel expense system as Low. Individual travel vouchers? Not particularly sensitive.
But the system contained:
Movement patterns of senior executives
Travel schedules revealing operational priorities
Aggregate spending revealing program budgets
Credit card information
The aggregate view was far more sensitive than individual records. We recategorized it as Moderate.
Mistake #3: Static Categorization in Dynamic Environments
Here's something that catches people off guard: system categorization can change based on circumstances.
That public inquiry system I mentioned earlier? It was legitimately Low impact in normal times. During the COVID-19 emergency, it became High impact because the mission criticality changed.
Nothing in FIPS 199 prevents temporary re-categorization when operational needs change, and NIST's Risk Management Framework expects you to revisit the decision as conditions evolve. You should review categorizations:
Annually (at minimum)
When mission changes
When system functionality changes
When threat environment changes
During emergencies or heightened operations
Mistake #4: Bottom-Up Instead of Top-Down Analysis
The wrong way to categorize: Start with the system and ask "What's the impact?"
The right way: Start with the mission and ask "What happens if this system fails?"
I helped a federal law enforcement agency categorize their case management system. Looking at the system itself, it seemed like a Moderate impact database.
But when we looked at the mission—active criminal investigations, witness protection, undercover operations—the categorization became obvious: High. Confidentiality or integrity failures could get people killed.
Mistake #5: Ignoring External Dependencies
Your system doesn't exist in isolation. I've seen organizations categorize individual systems without considering their role in larger processes.
A financial agency categorized their data warehouse as Low. "It's just archived reports," they said.
But their audit systems, compliance reporting, and financial oversight all depended on that warehouse. If the integrity of that historical data was compromised, their entire financial accountability framework would collapse.
We recategorized it as Moderate based on the downstream dependencies.
The Categorization Process: A Step-by-Step Guide
Here's the methodology I use, refined over fifteen years of federal compliance work:
Step 1: Identify Information Types (Week 1-2)
Use NIST SP 800-60 as your guide. It provides standard information type taxonomies.
Pro Tip: Don't try to boil the ocean. Start with your most critical systems first.
Deliverable: Information type inventory for each system
Step 2: Assess Provisional Impact Levels (Week 2-3)
For each information type, determine the provisional impact level for confidentiality, integrity, and availability.
The Key Questions to Ask:
| Security Objective | Assessment Question |
|---|---|
| Confidentiality | What's the worst-case impact of unauthorized disclosure? Consider: reputational damage, privacy violations, competitive disadvantage, national security implications |
| Integrity | What's the worst-case impact of unauthorized modification or destruction? Consider: decision-making based on bad data, mission failure, safety implications, financial loss |
| Availability | What's the worst-case impact of system unavailability? Consider: mission degradation timeline, alternative capabilities, recovery time objectives, life safety |
Deliverable: Provisional impact assessment matrix
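If you want the Step 1 and Step 2 deliverables in machine-readable form, one record per information type is enough to feed both the high water mark calculation and the documentation Step 4 requires. This is a sketch of my own, not a NIST-prescribed schema; the field names and example values are illustrative.

```python
# Sketch of a provisional impact assessment record (my own format,
# not a NIST-prescribed schema). One record per information type.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ImpactAssessment:
    information_type: str      # name drawn from your SP 800-60 review
    confidentiality: str       # "LOW" | "MODERATE" | "HIGH"
    integrity: str
    availability: str
    rationale: str             # why each level was chosen; auditors will ask
    special_factors: List[str] = field(default_factory=list)  # aggregation, PII, etc.

# Example row for the provisional matrix (illustrative values only).
row = ImpactAssessment(
    information_type="Case scheduling data",
    confidentiality="MODERATE",
    integrity="HIGH",
    availability="MODERATE",
    rationale="Malicious changes could delay critical care; disclosure is a privacy harm.",
    special_factors=["PII", "aggregation"],
)
```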
Step 3: Apply the High Water Mark (Week 3)
Remember: The overall system impact level is the highest impact level across all security objectives for all information types.
Common Question: "Can we have different impact levels for different security objectives?"
Answer: Yes! Your system could be High for integrity, Moderate for confidentiality, and Low for availability. But the overall categorization would be High, and you'd implement the High baseline with the option to tailor controls based on specific risks.
Deliverable: Overall system categorization
Step 4: Document Rationale (Week 3-4)
This is critical and often overlooked. Your categorization will be reviewed by:
Authorizing Officials
Auditors
Inspectors General
Congressional oversight (for some agencies)
You need clear, defensible rationale for every decision.
What to Document:
Information types identified
Impact analysis for each security objective
Consideration of special factors (aggregation, privacy, etc.)
High water mark application
Final categorization decision
Approval signatures
Deliverable: System Categorization Document
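One way to keep that rationale consistent is to generate the document skeleton from the same records you built in Steps 1 and 2. The structure below is a hypothetical template of mine, not an official form; swap the fields for whatever your Authorizing Official actually requires.

```python
# Hypothetical categorization-document skeleton (illustrative field names).
import json
from datetime import date

categorization_document = {
    "system_name": "Example Scheduling System",
    "information_types": ["Patient PII", "Medical appointments", "Scheduling data"],
    "impact_levels": {"confidentiality": "MODERATE",
                      "integrity": "HIGH",
                      "availability": "MODERATE"},
    "overall_categorization": "HIGH",   # high water mark result
    "rationale": "Integrity failures could delay critical care; see assessment matrix.",
    "special_factors": ["aggregation", "privacy"],
    "approvals": ["Information System Owner", "ISSO", "Authorizing Official"],
    "approved_on": date.today().isoformat(),
    "next_review": "Annual, or upon major system or mission change",
}

print(json.dumps(categorization_document, indent=2))
```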
Step 5: Obtain Approval (Week 4-5)
System categorization requires formal approval from designated officials. In most agencies, this includes:
Information System Owner
Information System Security Officer
Authorizing Official (or designated representative)
Pro Tip: Brief stakeholders early. Don't surprise your Authorizing Official with a High categorization they weren't expecting.
Deliverable: Approved System Categorization
Step 6: Implement Appropriate Controls (Months 2-12+)
Once categorization is approved, you implement the corresponding NIST SP 800-53 security control baseline:
| Categorization | Control Baseline | Typical Implementation Time |
|---|---|---|
| Low | NIST SP 800-53B Low Baseline | 3-6 months |
| Moderate | NIST SP 800-53B Moderate Baseline | 6-12 months |
| High | NIST SP 800-53B High Baseline | 12-24 months |
Special Categorization Scenarios
Over the years, I've encountered some unique situations that don't fit neatly into standard guidance:
Cloud Systems and Shared Responsibility
When categorizing cloud-based systems, you need to consider both your responsibilities and the cloud provider's.
I worked with an agency migrating to AWS GovCloud. They wanted to categorize their cloud environment as Low because "Amazon handles the security."
Wrong. So wrong.
The Reality: The cloud provider secures the infrastructure (High categorization on their side). You still need to categorize your data, applications, and configurations based on their impact.
We ended up with:
Infrastructure: High (AWS's responsibility)
Application: Moderate (Agency's responsibility)
Data: High (Agency's responsibility)
Overall system categorization: High, with AWS providing FedRAMP High infrastructure controls and the agency implementing application and data controls.
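The same high water mark logic applies across the shared-responsibility split: categorize each component, then take the maximum. Here's a quick sketch using the breakdown above; the component names and "owner" labels are just my bookkeeping, not AWS or FedRAMP terminology.

```python
# Shared-responsibility view of a cloud-hosted system (illustrative values).
IMPACT_ORDER = {"LOW": 1, "MODERATE": 2, "HIGH": 3}

components = {
    "infrastructure": {"impact": "HIGH", "owner": "cloud provider"},
    "application": {"impact": "MODERATE", "owner": "agency"},
    "data": {"impact": "HIGH", "owner": "agency"},
}

# Overall categorization is the highest impact across all components,
# regardless of who operates each piece.
overall = max((c["impact"] for c in components.values()), key=IMPACT_ORDER.get)
print(overall)  # HIGH
```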
Development, Test, and Production Environments
Here's a question I get constantly: "Can dev/test environments have lower categorizations than production?"
The Short Answer: Sometimes, but be very careful.
The Long Answer: It depends on what data you're using.
A DoD contractor I worked with wanted to categorize their development environment as Low while production was High. "It's just test data," they claimed.
Except they were using sanitized production data for realistic testing. "Sanitized" didn't mean "secure"—re-identification was still possible. We kept the dev environment at High.
The Rule I Follow: If you're using production data (even masked), match the production categorization. If you're using completely synthetic data, you can potentially lower the categorization—but document why thoroughly.
Systems in Transition
What do you do when a system is being decommissioned but still contains historical data?
I helped an agency categorize a legacy system being replaced. The new system would be Moderate, but the old system had 20 years of case files.
We kept the legacy system at its original High categorization until all data was properly archived or destroyed. Just because a system is "legacy" doesn't mean the data is suddenly less sensitive.
Multi-Tenant Systems
This is tricky. A single system might process Low, Moderate, and High impact data for different tenants.
The Principle: Categorize based on the highest impact tenant/data type, unless you have documented, auditable separation.
I worked with a shared service provider that wanted to categorize their multi-tenant platform as Moderate even though some tenants had High impact data. "We have separate databases," they argued.
But the application code, authentication systems, and administrative access were shared. A compromise of the platform could affect all tenants. We categorized it as High and implemented compensating controls for lower-impact tenants.
The Hidden Costs of Incorrect Categorization
Let me share two cautionary tales that illustrate the real-world consequences:
Over-Categorization: The $4.2 Million Mistake
A civilian agency categorized their entire IT portfolio as High. Every system. No exceptions.
Why? Fear. Fear of audits, fear of breaches, fear of getting it wrong.
The result:
$4.2 million in unnecessary annual security costs
14-month average implementation timeline for new systems
Innovation ground to a halt because security overhead was unbearable
Talented staff left for less bureaucratic organizations
When I helped them recategorize appropriately:
60% of systems were actually Low or Moderate
They saved $2.8 million annually
Implementation timelines dropped to 3-6 months for Low systems
They could finally afford to properly secure their actual High-impact systems
"Over-categorization doesn't make you more secure. It makes you less secure by starving your truly critical systems of resources."
Under-Categorization: The Breach That Changed Everything
A federal contractor categorized their proposal development system as Low. "It's just pre-award information," they reasoned.
That system contained:
Proprietary pricing models
Technical approach methodologies
Team composition and key personnel
Past performance documentation
A foreign adversary compromised the system and exfiltrated five years of proposal data. A competitor, armed with that information, systematically underbid the contractor on every future proposal.
The company lost $127 million in revenue over three years before they connected the dots. They eventually went bankrupt.
All because they under-categorized a system to save $180,000 annually in security costs.
Best Practices from the Field
After hundreds of categorization efforts, here's what separates successful programs from struggling ones:
1. Include Mission Experts, Not Just IT Staff
Your security team understands security. Your mission team understands impact.
I always insist on including:
Program managers who understand mission criticality
Legal counsel who understand regulatory requirements
Privacy officers who understand PII sensitivity
Business owners who understand financial impact
The best categorizations come from multidisciplinary teams.
2. Document Everything (Seriously, Everything)
Your future self will thank you. Auditors will thank you. Your successor will thank you.
I create a categorization package that includes:
Information type inventory
Impact assessment worksheets
Stakeholder interviews and meeting notes
Risk considerations
Approval documentation
Review schedule
When the auditor asks "Why did you categorize this as High?" you should be able to point to specific, documented rationale.
3. Plan for Reassessment
Set calendar reminders. Make it part of your annual security assessment. Don't let categorizations go stale.
I recommend reassessing:
Annually (minimum)
After major system changes
After mission changes
After significant security incidents
When new threats emerge
4. Use the "What If" Test
Before finalizing any categorization, run through worst-case scenarios:
What if all data was disclosed publicly?
What if the data was maliciously altered?
What if the system was unavailable for a week? A month?
If your answers make you uncomfortable, your categorization might be too low.
5. Don't Be Afraid to Disagree with System Owners
System owners want low categorizations because they mean lower costs and faster implementations.
Your job is to provide an honest assessment, even when it's unpopular.
I've had heated arguments with system owners who insisted their system was Low. When I explained the potential impact in executive briefings, authorizing officials consistently sided with higher categorizations.
Stand your ground when you're right.
The Categorization Decision Matrix
Here's a practical tool I use to guide categorization decisions:
| Impact Level | Confidentiality | Integrity | Availability |
|---|---|---|---|
| LOW | Public or non-sensitive information | Errors cause minor inconvenience; easy to detect and correct | Downtime causes minor inconvenience; alternatives readily available |
| MODERATE | Sensitive but not classified; PII; business confidential | Errors cause financial loss or mission degradation; detection possible but not immediate | Downtime causes mission degradation; alternatives available but costly |
| HIGH | Classified information; life safety data; critical intellectual property | Errors cause catastrophic consequences; loss of life possible; difficult to detect | Downtime causes mission failure; no alternatives; loss of life possible |
Moving Forward: Your Categorization Action Plan
If you're facing FIPS 199 categorization right now, here's your roadmap:
Week 1: Prepare
Gather system documentation
Identify stakeholders
Review NIST SP 800-60 information types
Schedule categorization workshops
Week 2-3: Assess
Inventory information types
Conduct impact analysis for each security objective
Document rationale
Apply high water mark
Week 4: Review and Approve
Brief stakeholders on preliminary categorization
Address concerns and questions
Obtain formal approval
Document final decision
Month 2+: Implement
Select appropriate control baseline
Begin control implementation
Develop security assessment plan
Prepare for authorization
A Final Word of Advice
FIPS 199 categorization is not a checkbox exercise. It's the strategic foundation of your entire security program.
I've seen organizations rush through categorization to "get it done" and spend years recovering from that mistake. I've seen others invest the time upfront and build security programs that are both effective and efficient.
The difference between these outcomes? Treating categorization as a thoughtful, deliberate process rather than an administrative burden.
That VA patient scheduling system I mentioned at the beginning? We categorized it as High. The implementation took 16 months and cost $3.2 million.
Three years later, that system processes 2.4 million appointments annually for veterans across the country. It's never had a significant security incident. Veterans receive timely, accurate care because the scheduling data they rely on is protected appropriately.
When I asked the CISO if the investment was worth it, he said: "We're protecting the people who protected our country. The only wrong answer would have been to under-invest in their security."
That's what FIPS 199 categorization is really about: matching your security investment to the value and criticality of what you're protecting.
Get the categorization right, and everything else follows. Rush it or get it wrong, and you'll be fighting uphill for years.
"FIPS 199 categorization is where good security programs are born—and where bad ones go to die."
Choose wisely. Document thoroughly. Sleep better knowing your systems are appropriately protected.