I still remember the first time I sat across from a federal agency CIO as we reviewed their Security Assessment Report (SAR). The 400-page document sat between us like a referendum on three years of security work. His hands were shaking slightly as he turned to the findings section.
"How bad is it?" he asked.
That question—and the hours of explanation that followed—taught me more about FISMA assessment reports than any certification course ever could. After 15 years of conducting, reviewing, and remediating FISMA assessments across dozens of federal agencies, I've learned that the SAR isn't just a compliance document. It's a roadmap, a report card, and sometimes, a wake-up call all rolled into one.
Let me walk you through everything you need to know about FISMA Security Assessment Reports, from someone who's been on both sides of the table.
What Exactly Is a Security Assessment Report (SAR)?
Think of the SAR as the definitive statement on your federal information system's security posture. It's the formal output of the Assess step in the NIST Risk Management Framework (RMF), where an independent assessor examines every security control you've implemented and documents what they found.
But here's what the official NIST documentation won't tell you: the SAR is where theory meets reality. You might have beautiful policy documents and impressive architecture diagrams, but the SAR reveals what's actually happening in your environment.
I worked with a Department of Defense contractor in 2021 whose documentation was immaculate. Every control was documented. Every procedure was written. Their Security Plan looked like it should be taught in graduate programs.
Then the assessment happened. Out of 325 controls, 142 had findings. The gap between what they said they were doing and what they were actually doing was staggering.
"A Security Assessment Report doesn't tell you what you want to hear. It tells you what you need to know."
The Anatomy of a FISMA SAR: Breaking Down the Document
Let me demystify what's actually in one of these reports. I've reviewed over 200 SARs in my career, and while each is unique, they all follow a similar structure built around the assessment procedures in NIST SP 800-53A.
Executive Summary: The Section Everyone Reads
This is where assessors summarize their findings at a high level. It typically includes:
Overall assessment methodology
Scope of the assessment
High-level findings summary
Risk ratings overview
Critical recommendations
I always tell agencies: if your executives will only read one section, this is it. Make sure you understand every word.
Assessment Methodology: How They Did What They Did
This section documents:
Assessment procedures used (NIST SP 800-53A)
Sampling methodology
Tools and techniques employed
Interview processes
Documentation reviewed
Testing conducted
Here's an insider tip: understanding the methodology is crucial for challenging findings. I've helped agencies successfully dispute findings by demonstrating that the assessment procedure wasn't properly followed.
Detailed Findings: The Heart of the Matter
This is the section that keeps federal IT professionals up at night. For every control assessed, you'll find:
Control Identifier: The specific NIST 800-53 control (e.g., AC-2, IA-5, SC-7)
Control Description: What the control is supposed to do
Assessment Objective: What the assessor was looking for
Assessment Method: How they tested it (Examine, Interview, Test)
Assessment Findings: What they actually found
Risk Rating: Severity of any deficiencies
Let me show you what this looks like in practice:
| Control | Control Name | Assessment Method | Finding Status | Risk Level | Number of Findings |
|---|---|---|---|---|---|
| AC-2 | Account Management | Examine, Interview, Test | Not Satisfied | High | 3 |
| AC-3 | Access Enforcement | Test | Partially Satisfied | Medium | 2 |
| AC-6 | Least Privilege | Examine, Interview | Satisfied | N/A | 0 |
| IA-2 | Identification and Authentication | Test | Not Satisfied | High | 4 |
| IA-5 | Authenticator Management | Examine, Test | Partially Satisfied | Medium | 2 |
| SC-7 | Boundary Protection | Test | Satisfied | N/A | 0 |
| SI-4 | Information System Monitoring | Examine, Interview, Test | Not Satisfied | Critical | 5 |
This table tells a story. SI-4 (monitoring) has five critical findings—that's a red flag suggesting the agency can't effectively detect security incidents. The high-risk findings on AC-2 and IA-2 point to serious problems with controlling who can access what.
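When findings come to you as a spreadsheet export rather than prose (most assessment tools can generate one), a few lines of scripting produce the summary your leadership will ask for first. Here's a minimal sketch; the column names are assumptions you'd match to your own export:

```python
import csv
from collections import Counter

def summarize_findings(path: str) -> None:
    """Tally SAR findings by control status and risk level from a CSV export."""
    status_counts: Counter = Counter()
    risk_counts: Counter = Counter()
    total_findings = 0

    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Column names are assumptions; map them to your tool's export.
            status_counts[row["Finding Status"]] += 1
            if row["Risk Level"] != "N/A":
                count = int(row["Number of Findings"])
                risk_counts[row["Risk Level"]] += count
                total_findings += count

    print("Controls by status:", dict(status_counts))
    print("Findings by risk:  ", dict(risk_counts))
    print("Total findings:    ", total_findings)

summarize_findings("sar_findings.csv")
```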
Understanding Assessment Results: What the Ratings Actually Mean
FISMA assessments typically use a four-level rating system for each control finding:
Satisfied
The control is implemented correctly and operating as intended. This is what everyone wants to see.
In my experience, agencies typically achieve "Satisfied" ratings on 60-75% of controls in mature programs. If you're seeing higher than 80%, your assessor might not be digging deep enough. Below 50%? You have serious work to do.
Partially Satisfied
The control is implemented but has deficiencies that reduce its effectiveness. This is the most common rating I see.
Here's a real example from a 2022 assessment I conducted: An agency had implemented multi-factor authentication (IA-2(1)) but only for remote access. Internal users weren't required to use MFA. Partial credit.
Not Satisfied
The control is not implemented, or implementation is so deficient that it provides minimal security value.
This is the rating that generates POA&M items and uncomfortable conversations with authorizing officials.
Not Applicable
The control doesn't apply to this system. For example, wireless access controls (AC-18) don't apply if the system doesn't use wireless.
Be careful here. I've seen agencies claim controls as "not applicable" that absolutely applied. Assessors check this closely.
The Assessment Methods: How Assessors Actually Test Controls
Understanding how assessors evaluate controls helps you prepare better and challenge findings when appropriate. NIST 800-53A defines three assessment methods:
Examine (E)
Assessors review documentation, configurations, and evidence. This includes:
Policies and procedures
System configuration files
Log files and reports
Training records
Incident response documentation
Pro tip from the trenches: Organize your evidence before the assessment. I worked with an agency that had everything they needed but couldn't find it during the assessment. They received 23 findings simply due to poor documentation organization. It took them six months to remediate findings for controls they were actually implementing.
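A simple pre-assessment completeness check would have saved them. Here's a minimal sketch, assuming you file evidence in one folder per control ID; the layout is my own convention, not a NIST requirement:

```python
from pathlib import Path

# Controls in scope for the assessment (trimmed for illustration).
CONTROLS_IN_SCOPE = ["AC-2", "AC-3", "AC-6", "IA-2", "IA-5", "SC-7", "SI-4"]

def check_evidence(root: str) -> list[str]:
    """Flag in-scope controls with no evidence folder, or an empty one."""
    missing = []
    for control in CONTROLS_IN_SCOPE:
        folder = Path(root) / control
        if not folder.is_dir() or not any(folder.iterdir()):
            missing.append(control)
    return missing

gaps = check_evidence("evidence")
if gaps:
    print("Missing or empty evidence for:", ", ".join(gaps))
else:
    print("Every in-scope control has at least one evidence artifact.")
```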
Interview (I)
Assessors talk to personnel who implement, manage, or use the controls:
System administrators
Security officers
End users
Management
Here's something I learned the hard way in 2018: inconsistent answers during interviews create findings. If your system administrator says one thing and your users say something different, the assessor assumes the control isn't working as documented.
I always recommend interview preparation. Not scripting—that's obvious and counterproductive—but ensuring everyone understands how controls actually work.
Test (T)
Assessors actively test the control's operation:
Attempting unauthorized access
Testing backup restoration procedures
Validating encryption implementation
Verifying logging and monitoring
Checking patch management processes
Testing reveals the truth. I once assessed an agency that documented quarterly vulnerability scanning. When we tested by checking scan reports, we found they hadn't scanned in 11 months. The scanner had failed, and nobody noticed.
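That failure mode is cheap to detect yourself. Here's a minimal sketch that alerts when the newest scan report is older than your scanning cadence; the 90-day threshold and report directory are assumptions you'd adjust to your policy:

```python
import time
from pathlib import Path

MAX_SCAN_AGE_DAYS = 90             # quarterly cadence, per the documented policy
REPORT_DIR = Path("scan_reports")  # wherever your scanner drops its exports

def newest_report_age_days(report_dir: Path) -> float | None:
    """Return the age in days of the most recent scan report, if any exist."""
    reports = list(report_dir.glob("*.csv"))
    if not reports:
        return None
    newest = max(r.stat().st_mtime for r in reports)
    return (time.time() - newest) / 86400

age = newest_report_age_days(REPORT_DIR)
if age is None:
    print("ALERT: no scan reports found at all")
elif age > MAX_SCAN_AGE_DAYS:
    print(f"ALERT: newest scan report is {age:.0f} days old")
else:
    print(f"OK: newest scan report is {age:.0f} days old")
```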
"Documentation tells you what should happen. Testing reveals what actually happens. The gap between the two is where security breaks down."
Common Finding Categories: What Typically Goes Wrong
After reviewing hundreds of SARs, patterns emerge. Here are the most common finding categories I encounter:
Access Control Failures
Finding Type | Typical Issues | Frequency | Risk Level |
|---|---|---|---|
Orphaned Accounts | Terminated employee accounts still active | Very High | High |
Excessive Privileges | Users with more access than job requires | Very High | Medium-High |
Shared Accounts | Multiple users sharing credentials | High | High |
Missing MFA | No multi-factor authentication implemented | High | Critical |
Inadequate Reviews | User access not reviewed periodically | Very High | Medium |
Real story: I assessed a Department of Energy system in 2020 that had 47 active accounts for people who'd left the agency. One had been gone for 3.5 years. Their account still had administrative privileges. When I asked why, the system administrator said, "I didn't have a process for notification when people left."
We created one. Six months later, no more orphaned accounts.
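The heart of that process was one recurring comparison: active accounts against the HR separation list. A minimal sketch, assuming both arrive as CSV exports (the column names are hypothetical):

```python
import csv

def load_column(path: str, column: str) -> set[str]:
    """Read one column of a CSV export into a normalized set."""
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

# Column names ("username", "network_id") are assumptions; map to your exports.
active = load_column("active_accounts.csv", "username")
separated = load_column("hr_separations.csv", "network_id")

orphaned = active & separated
for account in sorted(orphaned):
    print(f"ORPHANED: {account} is active but belongs to a separated employee")
print(f"{len(orphaned)} orphaned account(s) found")
```

Run it on whatever schedule HR can support. Even monthly beats the 3.5 years that account sat untouched.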
Audit and Accountability Gaps
These findings are epidemic across federal systems:
Logs not reviewed regularly (or at all)
Insufficient log retention periods
Critical events not logged
No correlation between different log sources
Missing audit record protection
I worked with an agency that collected millions of log entries daily but never looked at them. When we discovered a compromised account during assessment, we checked the logs. The breach indicators had been there for seven months.
They now have a SOC that reviews logs 24/7. It wasn't cheap, but it was cheaper than a breach.
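You don't need a 24/7 SOC to start. Even a crude automated pass over authentication logs would have surfaced those indicators months earlier. A minimal sketch, assuming a simplified log format:

```python
import re
from collections import Counter

# Assumed line format: "2024-03-01T12:00:00 FAILED_LOGIN user=jsmith src=10.1.2.3"
FAILED = re.compile(r"FAILED_LOGIN user=(\S+)")
THRESHOLD = 20  # failed attempts per review window worth a human look

def flag_suspicious(log_path: str) -> None:
    """Count failed logins per account and flag anything over the threshold."""
    failures: Counter = Counter()
    with open(log_path) as f:
        for line in f:
            match = FAILED.search(line)
            if match:
                failures[match.group(1)] += 1
    for user, count in failures.most_common():
        if count >= THRESHOLD:
            print(f"REVIEW: {user} had {count} failed logins this window")

flag_suspicious("auth.log")
```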
Configuration Management Nightmares
| Control Area | Common Deficiency | Impact | Remediation Difficulty |
|---|---|---|---|
| Baseline Configuration | No documented baseline exists | High | Medium |
| Change Control | Changes made without approval | Critical | Hard |
| Security Configuration | Default/weak settings in use | High | Easy-Medium |
| Configuration Monitoring | No detection of unauthorized changes | Critical | Medium |
| Patch Management | Critical patches not applied on time | Critical | Easy-Medium |
Case study: A federal healthcare system I assessed in 2021 had no configuration baseline. When we asked what the approved configuration was, they couldn't tell us. Every server was configured differently based on who built it and when.
We documented a baseline, implemented configuration management tools, and established change control. Eighteen months later, their attack surface had decreased by 40% simply because they knew what "normal" looked like.
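Knowing what "normal" looks like can start as simply as hashing approved configuration files and checking for changes. A minimal sketch of that approach (the file paths are hypothetical; mature shops graduate to the configuration management tools discussed later):

```python
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def snapshot(config_files: list[str], out: str = "baseline.json") -> None:
    """Record the approved configuration as a set of file hashes."""
    baseline = {f: sha256(Path(f)) for f in config_files}
    Path(out).write_text(json.dumps(baseline, indent=2))

def check_drift(baseline_path: str = "baseline.json") -> None:
    """Compare current files against the approved baseline and report drift."""
    baseline = json.loads(Path(baseline_path).read_text())
    for f, expected in baseline.items():
        p = Path(f)
        if not p.exists():
            print(f"DRIFT: {f} missing")
        elif sha256(p) != expected:
            print(f"DRIFT: {f} changed since baseline approval")

# Example usage with hypothetical paths:
# snapshot(["/etc/ssh/sshd_config", "/etc/audit/auditd.conf"])
# check_drift()
```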
Incident Response Inadequacy
This one breaks my heart because it's so preventable:
No documented incident response plan
Plan never tested
Incident response team not trained
No defined incident categories or severity levels
Missing coordination with CISA (formerly US-CERT)
I assessed a system that experienced a ransomware attack during our assessment period. They had an incident response plan—83 pages of beautifully written procedures.
Nobody knew where it was. Nobody had read it. It referenced tools they didn't have and people who'd retired.
When the incident happened, they improvised. Badly. It took them 19 days to restore operations.
The Risk Rating System: Understanding Severity
Not all findings are created equal. SARs typically categorize findings using a risk-based approach:
Critical
Immediate threat to mission operations
High likelihood of exploitation
Significant impact if exploited
Requires immediate remediation
Example: No logging on a system processing classified information. If something goes wrong, you'll never know what happened or who did it.
High
Significant security control weakness
Could reasonably lead to compromise
Requires urgent remediation (typically 30 days)
Example: Missing multi-factor authentication for privileged users. Single-factor authentication is one phishing attack away from total compromise.
Medium
Security control weakness present
Lower probability of exploitation
Should be remediated within 90 days
Example: Infrequent user access reviews. Not immediately dangerous but creates accumulating risk over time.
Low
Minor control weakness
Minimal security impact
Should be remediated within 180 days
Example: Policy document missing revision dates. Technically a finding, but not keeping anyone awake at night.
Here's how findings typically distribute in federal systems:
| Risk Level | Percentage of Findings | Typical Count (325 controls) | Remediation Priority |
|---|---|---|---|
| Critical | 2-5% | 7-16 | Immediate |
| High | 15-25% | 49-81 | 30 days |
| Medium | 35-45% | 114-146 | 90 days |
| Low | 30-45% | 98-146 | 180 days |
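Those remediation windows translate directly into POA&M due dates, which is worth automating so nothing slips. A minimal sketch; the SLA values mirror the table above, but confirm the deadlines your authorizing official actually sets:

```python
from datetime import date, timedelta

# Remediation SLAs in days, mirroring the table above.
# "Immediate" is modeled here as a same-day due date.
REMEDIATION_SLA_DAYS = {"Critical": 0, "High": 30, "Medium": 90, "Low": 180}

def poam_due_date(finding_date: date, risk_level: str) -> date:
    """Compute the remediation deadline for a finding based on its risk level."""
    return finding_date + timedelta(days=REMEDIATION_SLA_DAYS[risk_level])

print(poam_due_date(date(2024, 3, 1), "High"))    # 2024-03-31
print(poam_due_date(date(2024, 3, 1), "Medium"))  # 2024-05-30
```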
"A SAR with zero findings isn't a sign of perfect security. It's a sign of insufficient assessment depth."
Reading Between the Lines: What the SAR Doesn't Say
After 15 years, I've learned to read what's not explicitly written in SARs:
The "Technically Compliant" Warning Sign
I reviewed a SAR where an agency had zero findings on backup controls (CP-9). Impressive, right?
Digging deeper, I found they did perform backups. Daily. To a drive in the same rack as the production system. When I asked about offsite storage, they said, "The control doesn't specifically require it."
Technically true. Practically useless. A fire, flood, or theft would destroy production and backups simultaneously.
The SAR said "Satisfied." Reality said "Vulnerable."
The "We'll Fix It Later" Pattern
Watch for trends across multiple assessment periods. If the same findings appear year after year with slightly different wording, that's a cultural problem, not a technical one.
I worked with an agency that had the same password complexity finding for four consecutive years. Each year, they'd promise to implement the policy. Each year, user complaints would derail implementation.
Year five, new leadership made it non-negotiable. Complaints lasted two weeks, then disappeared. The finding was finally closed.
The Resource Gap Indicator
Sometimes SARs reveal resource problems. If you see findings like:
Security tools not properly configured
Logs collected but not reviewed
Procedures documented but not followed
Training required but not delivered
These often indicate understaffing, not incompetence. The team knows what to do but doesn't have the capacity to do it.
From Assessment to Action: Turning Findings Into Fixes
Here's where the real work begins. The SAR documents problems; your Plan of Action and Milestones (POA&M) solves them.
Immediate Triage (Days 1-7)
First week priorities:
Critical findings: These need same-day attention. No exceptions.
High findings: Prioritize these based on:
Ease of exploitation
Potential impact
Compensating controls in place
Findings with compliance deadlines: Some findings trigger regulatory timelines.
Real example: A federal financial system had a critical finding for missing encryption on a database containing Social Security numbers. We implemented database-level encryption within 48 hours using existing tools. Cost: $0. Time: Two days. Risk reduction: Massive.
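When dozens of findings land at once, a consistent sort order keeps the first week focused. A minimal sketch that ranks findings by risk level and then by exploitability; the scoring field is hypothetical, so substitute whatever your assessors provide:

```python
RISK_RANK = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

findings = [
    # (finding_id, risk_level, ease_of_exploitation: 1=hard .. 5=trivial)
    ("SAR-2024-SI-004", "Critical", 4),
    ("SAR-2024-AC-002", "High", 5),
    ("SAR-2024-IA-002", "High", 3),
    ("SAR-2024-CM-006", "Medium", 2),
]

# Highest risk first; within a risk level, easiest-to-exploit first.
triaged = sorted(findings, key=lambda f: (RISK_RANK[f[1]], -f[2]))
for fid, risk, ease in triaged:
    print(f"{fid}: {risk} (exploitability {ease}/5)")
```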
Building Your POA&M (Weeks 1-2)
For every finding, document:
| POA&M Element | Description | Example |
|---|---|---|
| Finding ID | Reference from SAR | SAR-2024-AC-002 |
| Control | NIST 800-53 control affected | AC-2: Account Management |
| Weakness Description | What's wrong | 23 orphaned accounts for terminated employees |
| Risk Level | From SAR | High |
| Remediation Plan | How you'll fix it | Implement automated account lifecycle management |
| Resources Required | What you need | Identity management system ($50K), 40 hours admin time |
| Milestones | Key dates | Design: 30 days, Implementation: 60 days, Testing: 75 days |
| Completion Date | Final deadline | 90 days from finding date |
| Status | Current state | In Progress - 45% complete |
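If you track POA&Ms in a spreadsheet today, representing each entry as a structured record makes status reporting and overdue alerts trivial. A minimal sketch mirroring the elements above; the field names are illustrative, not an OMB-mandated schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PoamEntry:
    finding_id: str            # reference from the SAR
    control: str               # affected NIST 800-53 control
    weakness: str              # what's wrong
    risk_level: str            # from the SAR
    remediation_plan: str      # how you'll fix it
    completion_date: date      # final deadline
    milestones: list[str] = field(default_factory=list)
    percent_complete: int = 0

    def is_overdue(self, today: date) -> bool:
        return self.percent_complete < 100 and today > self.completion_date

entry = PoamEntry(
    finding_id="SAR-2024-AC-002",
    control="AC-2: Account Management",
    weakness="23 orphaned accounts for terminated employees",
    risk_level="High",
    remediation_plan="Implement automated account lifecycle management",
    completion_date=date(2024, 6, 1),
    milestones=["Design: 30 days", "Implementation: 60 days", "Testing: 75 days"],
    percent_complete=45,
)
print(entry.is_overdue(date(2024, 5, 1)))  # False
```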
Common Remediation Strategies
Based on hundreds of remediation projects, here are approaches that actually work:
Quick Wins First: Start with findings you can close in under 30 days. Build momentum and credibility.
I helped an agency close 34 findings in the first month by focusing on:
Documentation updates (17 findings)
Simple configuration changes (12 findings)
Immediate procedural fixes (5 findings)
This demonstrated progress and freed resources for harder problems.
Consolidate Related Findings: Don't treat each finding separately. Group them by root cause.
Example: An agency had 15 findings related to account management. Rather than 15 separate fixes, we implemented an identity management system that resolved all 15 simultaneously.
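That kind of grouping is easy to automate as a first pass: cluster findings by control family and see which families carry the most weight. A minimal sketch (the family prefix is a crude proxy for root cause, but it reliably surfaces clusters like the one above):

```python
from collections import defaultdict

findings = [
    ("SAR-2024-AC-002", "AC-2"), ("SAR-2024-AC-003", "AC-3"),
    ("SAR-2024-AC-006", "AC-6"), ("SAR-2024-IA-002", "IA-2"),
    ("SAR-2024-IA-005", "IA-5"), ("SAR-2024-AC-002b", "AC-2"),
]

by_family: dict[str, list[str]] = defaultdict(list)
for finding_id, control in findings:
    family = control.split("-")[0]  # "AC-2" -> "AC"
    by_family[family].append(finding_id)

# Families with the most findings are candidates for one consolidated fix.
for family, ids in sorted(by_family.items(), key=lambda kv: -len(kv[1])):
    print(f"{family}: {len(ids)} finding(s) -> consider a single root-cause fix")
```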
Leverage Existing Tools: You probably have security tools you're not fully using.
A Department of Agriculture system had findings for missing vulnerability scanning. They owned Tenable Nessus but had never configured it. Two days of work, zero dollars spent, findings closed.
Accept Risk Strategically: Not everything needs to be fixed immediately.
I worked with an agency that had low-priority findings on a system being decommissioned in six months. We documented a risk acceptance rather than spending resources on fixes for a dying system.
The authorizing official approved. Resources went to systems that mattered.
The Reassessment Cycle: Proving You Fixed It
Here's something that surprises many agencies: closing findings in your POA&M doesn't automatically close them in your SAR.
You need to demonstrate to assessors that:
The remediation was implemented as planned
The control now functions correctly
The fix is sustainable
Preparing for Control Validation
When I validate remediated findings, I look for:
Evidence of Implementation
Configuration screenshots with timestamps
Policy documents with revision dates
Training records showing completion
Tool reports demonstrating functionality
Operational Effectiveness
The control works in practice, not just on paper
Users understand and follow procedures
Monitoring shows the control functioning
Incidents handled according to updated procedures
Sustainability
The fix isn't dependent on one person
Procedures are documented
Training is ongoing
Monitoring ensures continued compliance
Validation Methods by Control Type
| Control Type | Validation Approach | Evidence Required | Common Pitfalls |
|---|---|---|---|
| Technical Controls | Test configuration and logs | System screenshots, log samples | Config drift after initial fix |
| Administrative Controls | Review documentation and interviews | Updated policies, training records | Documents created but not followed |
| Physical Controls | On-site inspection | Photos, access logs, inspection reports | Controls work until tested |
"Remediation isn't complete when you think you've fixed it. It's complete when an assessor verifies you've fixed it."
Special Considerations for High-Value Assets (HVA)
If you're dealing with High-Value Assets designated by OMB, everything I've described gets more intense.
Enhanced Assessment Requirements
HVAs face:
More frequent assessments (often quarterly)
More rigorous testing procedures
Additional architecture reviews
Threat-based assessment scenarios
Independent verification and validation
I assessed an HVA for a major federal agency in 2023. The standard assessment was already comprehensive. The HVA assessment included:
Red team penetration testing
Purple team exercises
Supply chain risk assessment
Insider threat analysis
Advanced persistent threat scenarios
The resulting SAR was 600+ pages. Findings were categorized not just by severity, but by:
Threat actor capability required to exploit
Potential impact on national security
Detection difficulty
Remediation complexity
HVA Remediation Expectations
For HVAs, expect:
Critical findings: 24-48 hour remediation
High findings: 7-14 day remediation
Enhanced compensating controls if fixes take longer
Executive-level reporting on remediation status
Potential system shutdown if critical findings not addressed
This isn't theoretical. I've seen HVAs taken offline until critical findings were resolved. The operational impact was severe, but the security risk was deemed unacceptable.
Common Mistakes That Make Everything Harder
Let me save you some pain by sharing mistakes I see repeatedly:
Mistake #1: Treating the SAR as a Surprise
If findings in your SAR surprise you, your internal control testing is insufficient.
Better approach: Conduct continuous self-assessment. Know your weaknesses before the assessor finds them. I help agencies implement monthly control sampling—they find and fix issues before they become formal findings.
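Monthly sampling doesn't require tooling to start. Here's a minimal sketch that draws a repeatable random sample of controls to self-assess each month; the sample size and seed-per-month trick are my own conventions, not NIST guidance:

```python
import random
from datetime import date

CONTROLS = [
    "AC-2", "AC-3", "AC-6", "AU-2", "AU-6", "CM-2", "CM-6",
    "CP-9", "IA-2", "IA-5", "IR-4", "RA-5", "SC-7", "SI-4",
]  # trimmed for illustration; use your full control baseline

def monthly_sample(today: date, size: int = 5) -> list[str]:
    """Draw the same sample for everyone in a given month (reproducible seed)."""
    rng = random.Random(f"{today.year}-{today.month}")
    return sorted(rng.sample(CONTROLS, size))

print("Self-assess this month:", monthly_sample(date.today()))
```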
Mistake #2: Arguing with Assessors
I've watched agencies spend weeks arguing about findings instead of fixing them. Even if you win the argument, you've delayed remediation and damaged relationships.
Better approach: If you disagree with a finding, document your reasoning clearly, provide evidence, and request reconsideration. If denied, fix the issue and move forward.
Mistake #3: Documentation Theater
Creating documents to satisfy assessors without actually implementing controls is a waste of everyone's time.
I assessed a system with a beautiful 50-page incident response plan that referenced tools they didn't have, used an organizational structure that didn't exist, and included contact information for people who'd retired.
Better approach: Document what you actually do. Then improve what you do. The documentation will follow naturally.
Mistake #4: Ignoring Root Causes
Fixing symptoms without addressing root causes means findings recur.
Example: An agency had repeated findings for missing patches. They'd remediate each specific patch called out, then get dinged for new missing patches next assessment.
Root cause? No patch management process. Once we implemented a formal process, the findings stopped.
The Future of FISMA Assessment Reports
Based on trends I'm seeing across the federal government, here's what's coming:
Increased Automation
Agencies are implementing continuous assessment tools that:
Automatically test controls daily
Generate real-time compliance status
Alert on control failures immediately
Reduce manual assessment burden
I'm working with agencies using tools like:
Automated compliance scanning (Tenable, Rapid7)
Continuous monitoring platforms (Splunk, ArcSight)
Configuration management (Ansible, Puppet)
Identity governance (SailPoint, Okta)
These don't eliminate the need for a SAR, but they make the assessment process more efficient and findings less surprising.
Risk-Based Assessment Focus
We're moving from "check every control annually" to "focus on highest risks continuously."
NIST is emphasizing assessment tailoring based on:
System criticality
Threat landscape
Previous assessment results
Continuous monitoring data
This means more resources focused on areas that matter most.
Integration with Zero Trust
As federal agencies implement Zero Trust Architecture (per OMB M-22-09), assessment approaches are evolving to focus on:
Identity verification strength
Device trust assessment
Least privilege implementation
Micro-segmentation effectiveness
Continuous authentication and authorization
The SARs I'm seeing now include entire sections dedicated to Zero Trust principles.
Your Action Plan: Making Sense of Your SAR
If you're staring at a SAR right now, here's your roadmap:
Week 1: Understand
Read the executive summary thoroughly
Review methodology to understand approach
Categorize findings by risk level
Identify any findings you want to dispute
Schedule briefing with assessment team
Week 2: Prioritize
Triage critical and high findings
Identify quick wins
Group related findings
Estimate resources needed
Get executive buy-in for remediation plan
Week 3-4: Plan
Create detailed POA&M
Assign ownership for each finding
Establish milestones and deadlines
Identify needed resources
Set up regular status tracking
Month 2-3: Execute
Implement remediation plans
Document all changes
Test fixes thoroughly
Prepare validation evidence
Update POA&M status regularly
Month 4+: Validate
Request assessment of remediated controls
Provide evidence packages
Address any validation findings
Close completed items in POA&M
Prepare for next assessment cycle
Final Thoughts: The SAR as a Strategic Tool
After 15 years in this field, I've come to see the Security Assessment Report not as a judgment, but as intelligence.
It tells you where your security program is strong and where it's vulnerable. It identifies risks before they become breaches. It provides justification for security investments that might otherwise be denied.
The best security leaders I know don't fear the SAR—they leverage it.
They use findings to:
Secure budget for needed tools and staff
Build executive awareness of security needs
Demonstrate compliance to stakeholders
Track security program maturity over time
Justify organizational changes
One CISO told me: "My SAR is my best friend during budget season. When I need $2 million for a new SIEM, I don't argue theoretical benefits. I point to specific findings showing we can't detect incidents. Game over—I get the budget."
"A Security Assessment Report is expensive, time-consuming, and occasionally painful. It's also the most honest conversation you'll have about your security posture all year. Embrace it."
Your SAR isn't the end of the story—it's the beginning of a more secure system. Every finding is an opportunity to improve. Every risk identified is a breach prevented.
Take it seriously. Learn from it. Use it to build something better.
Because in federal IT security, the assessment report you dread today might be the thing that saves you tomorrow.