I'll never forget the look on my client's face when their auditor selected 47 access review instances to test—and found exceptions in 9 of them. The CTO turned pale. "We thought we were doing everything right," he said. "We have the process documented. We do the reviews. How did we miss so many?"
The answer was simple but painful: they'd been checking their own homework without understanding how auditors would actually test their controls.
After guiding over 40 companies through SOC 2 audits in the past 15 years, I've learned that understanding sample selection isn't just about passing an audit—it's about building controls that actually work. Let me show you how auditors think, what they look for, and how to prepare your organization for the scrutiny that's coming.
What Nobody Tells You About Sample Selection
Here's the uncomfortable truth: when an auditor selects samples, they're not trying to find evidence that your controls work. They're trying to find evidence that your controls might NOT work.
This isn't cynicism—it's their job. Auditors are professionally skeptical. They're trained to doubt, verify, and dig deeper when something seems off. Understanding this mindset transforms how you prepare for an audit.
I learned this lesson the hard way in 2017. I was consulting for a fintech startup pursuing their first SOC 2 Type II. We'd documented beautiful procedures. The team was trained. Everything looked perfect on paper.
Then the auditor started testing. Out of 25 user access provisioning samples, she found issues with 6. Not because the process was fundamentally broken, but because the evidence we'd collected didn't match what she needed to see.
We'd taken screenshots of our IAM system showing users were created. But we couldn't prove who approved each request, when the approval happened, or that we'd verified the user's role before granting access. The control existed—the evidence didn't.
That audit cost us an extra two months and $40,000 in remediation. But it taught me something invaluable: the quality of your evidence matters more than the quality of your controls.
"A perfect control with poor evidence fails the audit. An imperfect control with excellent evidence at least gives you a fighting chance."
Understanding Audit Sample Selection: The Core Methodology
Let me demystify how auditors select samples. It's not random, and it's not arbitrary. There's a structured methodology based on decades of audit standards and professional guidelines.
The Three Pillars of Sample Selection
Auditors use three fundamental approaches, often in combination:
| Approach | Purpose | When Used | Typical Sample Size |
|---|---|---|---|
| Statistical Sampling | Provide mathematically valid conclusions about the entire population | High-volume, routine transactions | 25-60 items |
| Non-Statistical Sampling | Focus on high-risk areas and key controls | Judgment-based selections | 15-40 items |
| Key Items Selection | Test critical or unusual transactions | Specific high-value or high-risk items | 5-15 items |
In my experience, most SOC 2 audits use non-statistical sampling with targeted key item testing. Why? Because SOC 2 isn't about financial accuracy—it's about control effectiveness across varied scenarios.
The Population Concept: What Are They Actually Testing?
Here's where many organizations get confused. When an auditor talks about "population," they mean all instances where a control should have operated during the audit period.
Let me break this down with a real example from a client I worked with last year:
Control Statement: "Access provisioning requests require documented approval from the user's manager before accounts are created."
Population Definition: All new user accounts created during the 6-month audit period (Type II)
What the auditor tests: Did EACH selected new user account have documented manager approval BEFORE creation?
Sounds simple, right? But here's where it gets tricky.
This client created 147 user accounts during their audit period. The auditor selected 25 for testing. In 3 cases, they found:
- One approval email dated after the account was created
- One approval from a peer, not a manager
- One approval that said "approved for read-only access" but the account had admin rights
Three exceptions out of 25 samples is a 12% error rate. That's not a control failure in audit terms—it's a control catastrophe.
"Auditors don't judge you by your intentions. They judge you by your evidence. And they judge harshly."
The Sample Selection Formula: What Determines Sample Size?
After watching dozens of audits, I've identified the factors that influence how many samples an auditor will select:
Primary Factors Affecting Sample Size
| Factor | Impact on Sample Size | Why It Matters |
|---|---|---|
| Population Size | Larger population = more samples (but not proportionally) | Auditors need enough samples to be confident, but diminishing returns kick in |
| Control Frequency | Daily controls get more samples than annual controls | More opportunities for failure = more testing needed |
| Automation Level | Manual controls get more samples than automated controls | Human error is more variable than system error |
| Prior Results | Previous exceptions = more samples this year | Once burned, twice careful |
| Control Risk | High-risk controls get more samples | Critical controls demand higher confidence |
| Auditor Confidence | Good documentation reduces sample size slightly | Trust, but verify: documentation helps auditors trust |
Let me share how this played out with a healthcare SaaS company I advised in 2022.
They had a control for reviewing user access quarterly. The population was 4 quarterly reviews over their 12-month Type II period. Simple, right?
Wrong.
The auditor didn't sample "quarterly reviews." She sampled user accounts reviewed within each quarterly review. Each review covered approximately 400 users. She selected 25 users from each of the 4 reviews—100 total samples.
Then she verified:
- Was each user account actually reviewed?
- Was the review completed timely (within the quarter)?
- Were inappropriate access rights identified?
- Were identified issues remediated?
- Was remediation verified?
That's 500 pieces of evidence to collect and organize. The client hadn't anticipated this. They had summary review documents but no per-user evidence. We spent three weeks reconstructing the evidence from system logs, screenshots, and email threads.
Lesson learned: Understanding what constitutes "one sample" from an auditor's perspective is critical.
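If you want to rehearse this before the auditor arrives, the per-quarter draw is easy to reproduce. Here's a minimal Python sketch; the user IDs, population sizes, and seed are illustrative placeholders, not values from any real audit:

```python
import random

# Hypothetical population: 4 quarterly reviews covering ~400 users each.
reviews = {q: [f"user-{q}-{i:03d}" for i in range(400)]
           for q in ["Q1", "Q2", "Q3", "Q4"]}

def select_samples(population_by_period, per_period=25, seed=None):
    """Draw a fixed number of random samples from each period's population."""
    rng = random.Random(seed)  # fixed seed makes the selection reproducible
    return {period: rng.sample(users, per_period)
            for period, users in population_by_period.items()}

samples = select_samples(reviews, per_period=25, seed=2024)
total = sum(len(s) for s in samples.values())
print(total)  # 25 samples x 4 reviews = 100 total
```

Run a dry run like this against your own populations and you'll discover, before the audit, whether you can actually produce five pieces of evidence for each of those 100 users.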
Control Testing Methods: How Auditors Actually Test
Different controls require different testing approaches. Here's how auditors think about testing various control types:
Automated vs. Manual Controls: Testing Approaches
| Control Type | Testing Method | Sample Size | Evidence Required |
|---|---|---|---|
| Fully Automated | Configuration review + limited sample testing | 5-15 samples | System configuration screenshots + sample transactions showing automation worked |
| System-Enforced | Configuration review + exception testing | 10-20 samples | Configuration proof + samples proving the system prevented violations |
| Manual with System Support | Transaction testing across the period | 25-40 samples | Human decision evidence + system records of execution |
| Fully Manual | Extensive transaction testing | 30-50 samples | Documentation of human decision-making and execution |
I learned the importance of this distinction when working with a payment processing company in 2020.
They had two controls for preventing unauthorized database access:
Control 1 (Automated): The database management system automatically logs all access attempts and queries.
Control 2 (Manual): The security team reviews database access logs weekly for anomalies.
For Control 1, the auditor tested 10 samples. She verified the system configuration, then spot-checked 10 access events to confirm they were logged correctly. Testing took about 2 hours.
For Control 2, the auditor tested 25 samples—one for each week during the 6-month period. For each sample, she verified:
- The review was performed on schedule
- The reviewer was qualified
- Anomalies were identified (or documented why there were none)
- Identified issues were escalated
- Escalated issues were resolved
Testing took three days and required hundreds of pages of evidence.
The punchline? Both controls were marked as effective, but the automated control required 90% less effort to test. This is why smart organizations automate everything they possibly can.
"Every manual control is an opportunity for human error. And every human error is an opportunity for audit exceptions."
Common Control Types and Sample Selection Strategies
Let me walk you through the most common SOC 2 controls and how auditors typically sample them. This is based on patterns I've observed across 40+ audits.
Access Control Testing
Control Example: "User access is provisioned based on approved requests and assigned according to the principle of least privilege."
| Aspect | Details |
|---|---|
| Population | All new user accounts created during audit period |
| Typical Sample Size | 25-40 users |
| Evidence Required | • Access request form/ticket<br>• Manager approval with timestamp<br>• Documentation of role/permissions needed<br>• Proof access granted matches approval<br>• Evidence of background check (if required) |
| Common Pitfalls | • Approval dated after account creation<br>• Insufficient detail on approved permissions<br>• Access granted exceeds what was approved<br>• Missing approvals for rush requests |
Real story: An e-commerce platform I worked with had a Slack-based approval process. Managers would reply "approved" to access requests. Seems fine, right?
The auditor rejected it. Why? Because Slack messages can be edited or deleted, and there's no reliable timestamp proving the approval came before the access was granted. We had to implement a ticketing system with immutable audit trails.
The lesson: convenient processes often lack audit-worthy evidence.
Change Management Testing
Control Example: "System changes undergo testing and approval before production deployment."
| Aspect | Details |
|---|---|
| Population | All production changes during audit period |
| Typical Sample Size | 25-40 changes |
| Evidence Required | • Change request with business justification<br>• Testing results/screenshots<br>• Approval from authorized person<br>• Deployment records with timestamp<br>• Post-deployment verification<br>• Rollback plan documentation |
| Common Pitfalls | • Emergency changes bypassing approval<br>• Testing documented after deployment<br>• Approvals from unauthorized personnel<br>• No evidence of post-deployment verification |
I once had a DevOps team that was furious about change management requirements. "We deploy 50 times a day," the VP of Engineering said. "You want us to document all of that?"
Yes. Yes, I did.
We implemented automated change tracking through their CI/CD pipeline. Every deployment automatically:
- Captured the change request from Jira
- Recorded automated test results
- Logged the approver from GitHub pull request reviews
- Timestamped the deployment
- Documented rollback procedures
The auditor selected 35 changes randomly across the six-month period. We provided all evidence in under an hour. Zero exceptions.
The lesson: Automation doesn't just speed up your work—it creates audit evidence as a byproduct.
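To make that concrete, here's a rough sketch of the evidence record a pipeline step can emit per deployment. Every field name and value below is an assumption for illustration; wire them to whatever your ticketing system and CI/CD tooling actually expose:

```python
import json
from datetime import datetime, timezone

def build_deployment_evidence(change_ticket, pr_url, approver,
                              test_results, rollback_doc):
    """Assemble one deployment's audit evidence as a JSON record.

    Field names are illustrative; adapt them to what your pipeline
    (GitHub Actions, Jenkins, etc.) actually provides.
    """
    record = {
        "change_ticket": change_ticket,    # e.g. a Jira issue key
        "pull_request": pr_url,            # link to the reviewed PR
        "approver": approver,              # taken from the PR review, not self-reported
        "test_results": test_results,      # pass/fail summary from the pipeline
        "rollback_plan": rollback_doc,     # link to the rollback procedure
        "deployed_at": datetime.now(timezone.utc).isoformat(),  # system timestamp
    }
    return json.dumps(record, indent=2)

evidence = build_deployment_evidence(
    "OPS-1234", "https://example.com/pr/42", "jane.reviewer",
    {"unit": "pass", "integration": "pass"}, "https://example.com/runbooks/rollback",
)
print(evidence)
```

Ship each record to write-once storage (an append-only bucket, for example) and you have timestamped, hard-to-alter evidence for every one of those 50 daily deployments.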
Security Monitoring and Incident Response
Control Example: "Security events are monitored, and incidents are responded to according to documented procedures."
| Aspect | Details |
|---|---|
| Population | • All security alerts during period<br>• All identified incidents during period |
| Typical Sample Size | • 25-40 alerts (if high volume)<br>• ALL incidents (typically low volume) |
| Evidence Required | • Alert/incident ticket with timestamp<br>• Classification and priority assignment<br>• Investigation notes and findings<br>• Resolution actions taken<br>• Communication to stakeholders (if applicable)<br>• Post-incident review (for major incidents) |
| Common Pitfalls | • Alerts acknowledged but not investigated<br>• Incidents closed without documentation<br>• Response time exceeds policy requirements<br>• No evidence of stakeholder notification |
A financial services client learned this painfully. They had excellent monitoring—their SIEM generated thousands of alerts monthly. The problem? They'd tuned it to reduce noise, and the auditor selected 30 alerts that had been auto-dismissed by their system.
When she asked for evidence that these alerts were reviewed, they had nothing. The alerts never reached a human. The auditor classified it as a control failure.
We spent six weeks implementing a review process for auto-dismissed alerts and collecting evidence retroactively. It was expensive and embarrassing.
The lesson: If your control says "security events are monitored," you need evidence that humans actually looked at them—even the ones that seem benign.
Sample Selection Risk Areas: Where Auditors Dig Deeper
After 15 years, I can predict with scary accuracy where auditors will focus their testing. Here are the high-risk areas that consistently get extra scrutiny:
High-Risk Control Areas
| Risk Area | Why Auditors Focus Here | Sample Size Impact |
|---|---|---|
| Privileged Access | Admin rights can bypass all other controls | +50% more samples |
| Third-Party Access | External access increases risk surface | All third-party users tested |
| Emergency Changes | Bypass normal controls, high failure risk | All emergency changes tested |
| Terminated Employees | Access removal failures create security holes | +30% more samples |
| Exception Processes | Deviations from normal process = higher risk | All exceptions tested |
| Customer Data Access | Direct impact on Trust Services Criteria | +40% more samples |
Let me tell you about a marketing automation company I consulted for in 2023. They had 15 contractors with database access. The auditor didn't sample 25 contractors—she tested ALL 15, plus an additional 25 regular employees.
Why? Because contractors represent elevated risk:
- They're not full employees
- They may work for competitors
- Access duration might exceed engagement period
- Background checks may be less rigorous
For those 15 contractors, she verified:
- Signed NDA before access granted
- Background check completed
- Access approved by both client manager and legal
- Access scope properly limited
- Regular review of continued need
- Timely removal when engagement ended
Three contractors had issues. One still had access two weeks after their contract ended. Another had broader access than their approved scope. The third never had a background check completed.
Three exceptions out of 15 tested = 20% failure rate = significant deficiency in their final report.
"Auditors love testing the edges of your processes. That's where controls break down. That's where they find exceptions."
Evidence Quality: The Make-or-Break Factor
I've seen perfect controls fail audits because of poor evidence. I've also seen mediocre controls pass because of excellent documentation. Evidence quality matters more than most organizations realize.
Evidence Quality Framework
| Quality Level | Characteristics | Audit Risk | Example |
|---|---|---|---|
| Excellent | • Timestamped<br>• Immutable<br>• Independently verifiable<br>• Complete information | Very Low | System-generated audit log with all required data fields |
| Good | • Dated<br>• Difficult to alter<br>• Contains key information | Low | Approved ticket in tracking system with approval timestamp |
| Acceptable | • Dated<br>• Traceable to source<br>• Most information present | Medium | Email approval chain with full headers |
| Questionable | • Approximate date<br>• Editable format<br>• Missing some information | High | Screenshot without date or Word document |
| Unacceptable | • No date<br>• Easily fabricated<br>• Insufficient information | Very High | Verbal confirmation or undated notes |
Real example: A SaaS company had a control requiring security reviews of code changes. Developers would review each other's code before merging. Good practice!
Their evidence? The reviewer would write "LGTM" (Looks Good To Me) in the pull request comments.
The auditor rejected it. "LGTM" doesn't demonstrate what was reviewed, what security issues were considered, or that the reviewer was qualified to assess security.
We revised the process, introducing a security-focused pull request template with specific questions:
- Are there any authentication/authorization changes?
- Is sensitive data handled appropriately?
- Are inputs validated?
- Are errors handled securely?
- Are there any cryptographic operations?
Reviewers had to address each question. Evidence went from "unacceptable" to "good." Zero exceptions in the audit.
The lesson: Specific, detailed evidence beats vague confirmations every time.
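A lightweight way to keep a standard like that from eroding is to check pull request bodies automatically in CI. This is a sketch under the assumption that template answers appear as "Heading: answer" lines; the heading names are illustrative, so match them to your own template:

```python
import re

# The five security questions from the illustrative template above.
REQUIRED_SECTIONS = [
    "Authentication/authorization changes",
    "Sensitive data handling",
    "Input validation",
    "Error handling",
    "Cryptographic operations",
]

def review_is_complete(pr_body: str) -> list[str]:
    """Return the template sections the reviewer left unanswered.

    A section counts as answered if its heading appears followed by a
    colon and at least some non-whitespace commentary.
    """
    missing = []
    for section in REQUIRED_SECTIONS:
        pattern = re.escape(section) + r":\s*\S+"
        if not re.search(pattern, pr_body, re.IGNORECASE):
            missing.append(section)
    return missing

body = ("Authentication/authorization changes: none.\n"
        "Input validation: added length checks.")
print(review_is_complete(body))  # three sections still unanswered
```

Fail the CI check when the list is non-empty and "LGTM" can never slip through as the only evidence of a security review again.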
Handling Exceptions: When Samples Fail
Here's an uncomfortable reality: most SOC 2 audits find at least some exceptions. It's not necessarily bad—it depends on the nature, frequency, and severity of the exceptions.
Exception Severity Framework
| Severity | Definition | Impact on Report | Example |
|---|---|---|---|
| Control Deficiency | Isolated instance of control not operating as designed | Noted in report, limited impact | One access request missing approval out of 40 tested |
| Significant Deficiency | Multiple instances or pattern of control failures | Modified opinion possible | 15% of access requests missing approval |
| Material Weakness | Control completely ineffective or systematic failure | Qualified opinion or failure | No approval process actually implemented despite documented control |
I worked with a healthcare tech company that found 2 exceptions in 73 total samples tested across all controls. That's less than 3%—well within acceptable ranges. The auditor noted them as "control deficiencies" but issued an unqualified (clean) opinion.
Compare that to a fintech startup where the auditor found 14 exceptions in 40 samples for a single critical control (access provisioning). That's 35%—a material weakness. They couldn't issue a clean report.
The difference? The healthcare company had:
- Strong overall control environment
- Excellent documentation
- Quick remediation when issues were found
- Evidence the exceptions were truly isolated incidents
The fintech company had:
- Inconsistent process execution
- Poor documentation
- Repeated issues with the same control
- No evidence of improvement over time
"Auditors understand that humans make mistakes. What they can't tolerate is systematic failure or lack of organizational commitment to controls."
Preparing for Sample Selection: Practical Strategies
After guiding 40+ companies through audits, here's my battle-tested approach to preparation:
90-Day Preparation Roadmap
| Phase | Timeline | Activities | Deliverable |
|---|---|---|---|
| Phase 1: Assessment | Days 1-30 | • Map all controls to evidence sources<br>• Identify evidence gaps<br>• Review previous audit findings<br>• Conduct internal sampling test | Gap analysis document |
| Phase 2: Remediation | Days 31-60 | • Close evidence gaps<br>• Improve documentation<br>• Automate evidence collection where possible<br>• Train control owners | Evidence collection playbook |
| Phase 3: Validation | Days 61-75 | • Perform full internal audit<br>• Test sample selection methodology<br>• Review evidence quality<br>• Remediate any new findings | Internal audit report |
| Phase 4: Finalization | Days 76-90 | • Organize evidence repository<br>• Create evidence index<br>• Prepare control owners<br>• Schedule readiness meeting with auditor | Audit-ready evidence package |
Let me share how this worked for a data analytics company I advised last year.
Day 1-30: We identified 47 controls requiring testing. For each control, we:
- Listed the population
- Estimated sample size
- Identified evidence sources
- Rated evidence quality
- Flagged gaps
We found 12 controls with "questionable" or "unacceptable" evidence.
Day 31-60: We systematically improved each gap:
- Implemented ticketing system for access requests (previously email)
- Automated change log collection from CI/CD pipeline (previously manual)
- Created standardized security review checklists (previously ad-hoc)
- Set up automated weekly access reviews (previously quarterly and manual)
Day 61-75: I personally selected random samples using the auditor's methodology:
- Selected 382 total samples across all controls
- Tested evidence availability and quality
- Identified 23 instances where evidence was missing or insufficient
- Remediated issues and collected missing evidence
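That internal dry run is straightforward to automate once you settle on a filing convention. The sketch below assumes evidence is filed as `<root>/<control>/<sample_id>.pdf`, which is purely an illustrative layout; substitute whatever structure your repository actually uses:

```python
import tempfile
from pathlib import Path

def find_evidence_gaps(samples, evidence_root):
    """Return (control, sample_id) pairs that have no evidence file on disk."""
    root = Path(evidence_root)
    return [(control, sample_id)
            for control, ids in samples.items()
            for sample_id in ids
            if not (root / control / f"{sample_id}.pdf").exists()]

# Demo against a throwaway repository: one of two samples has evidence filed.
with tempfile.TemporaryDirectory() as tmp:
    folder = Path(tmp) / "access-provisioning"
    folder.mkdir()
    (folder / "REQ-001.pdf").write_text("approval ticket, timestamped")
    gaps = find_evidence_gaps({"access-provisioning": ["REQ-001", "REQ-002"]}, tmp)
    print(gaps)  # [('access-provisioning', 'REQ-002')]
```

Run it against every selected sample and the 23 gaps surface in seconds instead of mid-audit.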
Day 76-90: We created an organized evidence repository:
- Structured folders for each control
- Clear naming conventions for all evidence files
- Index spreadsheet mapping controls to evidence locations
- Brief descriptions of each piece of evidence
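The index spreadsheet itself can be generated rather than maintained by hand. Here's a sketch, assuming the same one-folder-per-control layout; the folder names and file names in the demo are invented for illustration:

```python
import csv
import io
import tempfile
from pathlib import Path

def build_evidence_index(evidence_root):
    """Walk <root>/<control>/ folders and emit a CSV index for the auditor."""
    rows = [{"control": p.parent.name, "evidence_file": p.name, "location": str(p)}
            for p in sorted(Path(evidence_root).rglob("*")) if p.is_file()]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["control", "evidence_file", "location"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

# Demo with a throwaway repository containing two evidence files.
with tempfile.TemporaryDirectory() as tmp:
    for control, fname in [("access-control", "REQ-001.pdf"),
                           ("change-mgmt", "CHG-042.pdf")]:
        (Path(tmp) / control).mkdir()
        (Path(tmp) / control / fname).write_text("evidence")
    index = build_evidence_index(tmp)
    print(index.splitlines()[0])  # control,evidence_file,location
```

Regenerate the index on a schedule and it never drifts out of sync with the folders the auditor will actually open.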
When the audit started, the auditor was impressed. "This is the most organized evidence package I've seen this year," she said.
The audit took 6 weeks instead of the typical 10-12 weeks. They found 3 minor exceptions across 400+ samples tested. Clean report issued.
Total preparation cost: $45,000 in consultant time plus internal effort.
Value delivered: Saved 6 weeks of auditor time (probably $30,000), avoided remediation delays, received clean report on first attempt.
Advanced Sampling Strategies: What Experienced Auditors Do
As you go through multiple audit cycles, you'll notice auditors get more sophisticated in their sampling. Here are advanced techniques I've seen:
Stratified Sampling Approach
Instead of randomly selecting from the entire population, sophisticated auditors divide populations into subgroups (strata) and sample from each.
Example: For user access provisioning, they might stratify by:
- Employee type (full-time, contractor, temporary)
- Access level (standard, elevated, administrative)
- Department (engineering, operations, support, executive)
- Timing (first quarter vs. later quarters)
This ensures they test the full diversity of scenarios, not just the most common cases.
A cloud infrastructure company I worked with learned this lesson. They had 200 new users during their audit period. 180 were engineers with standard access. 15 were contractors. 5 were executives with broad access.
If the auditor randomly selected 25 samples, statistics suggest she'd get about 23 engineers, 2 contractors, and 0-1 executives.
Instead, she stratified:
- 15 engineers (from 180 population)
- 5 contractors (from 15 population)
- 5 executives (from 5 population)
Guess where she found the exceptions? Contractors (2 exceptions) and executives (1 exception).
The standard employee provisioning was rock-solid. The exception processes—where things got rushed or special treatment was requested—that's where controls broke down.
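You can check that intuition with a quick expected-value calculation, and see how a stratified draw changes the picture. The population counts come from the example above; the IDs and seed are illustrative:

```python
import random

population = {"engineer": 180, "contractor": 15, "executive": 5}
total, n = sum(population.values()), 25

# Expected counts under one simple random draw of 25 from 200:
expected = {role: n * count / total for role, count in population.items()}
print(expected)  # about 22.5 engineers, 1.9 contractors, 0.6 executives

# A stratified draw guarantees coverage of every group instead:
strata_sizes = {"engineer": 15, "contractor": 5, "executive": 5}
rng = random.Random(1)
stratified = {role: rng.sample([f"{role}-{i}" for i in range(size)],
                               strata_sizes[role])
              for role, size in population.items()}
print({role: len(ids) for role, ids in stratified.items()})
```

A purely random draw would likely test zero executives; the stratified one tests all five. That's exactly why the auditor found what she found.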
"Smart auditors don't just test your processes. They test your exceptions to your processes. That's where the truth lives."
Risk-Based Sample Timing
Auditors also think about when samples occurred during the audit period.
| Time Period | Risk Factor | Why It Matters |
|---|---|---|
| First Month | Control implementation | Were controls actually in place from day one? |
| Mid-Period | Routine operation | Do controls work consistently? |
| Final Month | Audit preparation | Are you just documenting controls for the audit? |
| After Hours/Weekends | Reduced oversight | Do controls work when management isn't watching? |
I saw this with an e-learning platform. They had strong access controls—during business hours. After 6 PM and on weekends, their ticketing system wasn't monitored. Access requests were auto-approved.
The auditor specifically selected samples from after-hours periods. Found systematic control bypasses. Material weakness.
The fix required 24/7 on-call coverage and automated approval workflows. Expensive lesson.
Common Mistakes That Lead to Audit Exceptions
Let me save you from the painful lessons I've watched organizations learn:
Top 10 Sample Selection Preparation Mistakes
| Mistake | Why It's Dangerous | How to Avoid It |
|---|---|---|
| Assuming "similar" evidence is acceptable | Auditors require specific evidence types | Map exact evidence requirements for each control |
| Collecting evidence retroactively | Creates gaps, inconsistencies, and timestamp issues | Collect evidence as controls operate |
| Not testing your own controls | You find out about failures during the audit | Run internal audits quarterly |
| Relying on verbal confirmations | Leaves no audit trail | Document everything in writing |
| Using easily-editable formats | Auditors question evidence integrity | Use system-generated, timestamped evidence |
| Incomplete evidence | Even one missing element fails the sample | Create evidence checklists for control owners |
| Not understanding your population | You collect wrong evidence for wrong samples | Map populations before audit period starts |
| Trusting that "everyone knows the process" | Knowledge gaps become exceptions | Document procedures and train consistently |
| Saving evidence preparation for audit time | Creates time pressure and missed evidence | Collect and organize evidence monthly |
| Not communicating with your auditor | Misaligned expectations = surprises = exceptions | Have pre-audit meetings to align on methodology |
The 30-Day Pre-Audit Checklist
When your audit is 30 days away, here's what you need to do:
Week 1: Evidence Inventory
- [ ] Create master list of all controls
- [ ] Map each control to evidence location
- [ ] Verify evidence exists for entire audit period
- [ ] Identify any missing evidence
- [ ] Begin retroactive evidence collection where possible
Week 2: Quality Review
- [ ] Review evidence quality using the framework above
- [ ] Upgrade questionable evidence to acceptable/good
- [ ] Ensure all evidence has proper timestamps
- [ ] Verify evidence is appropriately stored/organized
- [ ] Create evidence index/map for auditor
Week 3: Internal Testing
- [ ] Select random samples using anticipated methodology
- [ ] Test evidence retrieval time
- [ ] Identify any issues with evidence access
- [ ] Run through evidence collection with control owners
- [ ] Document any remaining gaps or concerns
Week 4: Finalization
- [ ] Hold readiness meeting with all control owners
- [ ] Create quick-reference guide for evidence locations
- [ ] Set up dedicated workspace for auditor
- [ ] Prepare control descriptions and narratives
- [ ] Schedule kickoff meeting with auditor
Real-World Case Study: Turning Exceptions Into Excellence
Let me close with a success story that illustrates everything I've discussed.
A healthcare data analytics company came to me after failing their first SOC 2 attempt. They'd received a qualified opinion due to significant deficiencies in multiple control areas.
Their problems:
- 47 exceptions found across 289 samples tested (16% failure rate)
- Poor documentation quality
- Manual processes with inconsistent execution
- Evidence collected retroactively after auditor requested it
- Multiple control owners unclear on their responsibilities
Our 6-month transformation:
Month 1-2: Control Environment Redesign
- Automated 60% of manual controls
- Implemented centralized ticketing system
- Created standardized evidence templates
- Established weekly control effectiveness reviews
Month 3-4: Documentation and Training
- Rewrote all control descriptions with clear evidence requirements
- Trained each control owner on their specific responsibilities
- Implemented automated evidence collection workflows
- Created evidence quality review process
Month 5: Internal Audit
- Selected 350 samples across all controls
- Tested evidence availability and quality
- Found and remediated 12 potential issues
- Verified all evidence met quality standards
Month 6: Final Preparation
- Organized evidence repository
- Conducted auditor pre-meeting
- Performed final evidence review
- Prepared control owners for interviews
Results of Second Audit:
- 412 samples tested (more than first audit)
- 4 exceptions found (0.97% failure rate)
- All exceptions classified as minor control deficiencies
- Clean, unqualified report issued
- Audit completed in 7 weeks vs. 14 weeks for failed attempt
Cost comparison:
- First (failed) audit: $85,000 auditor fees + 6 months remediation
- Transformation program: $120,000 consultant fees + internal effort
- Second (successful) audit: $75,000 auditor fees + minimal internal effort

Total cost of failure: $280,000+
Cost of doing it right: $195,000
But here's the real win: they built a sustainable compliance program. Their third annual audit took only 5 weeks and found zero exceptions. They've become an efficiency machine.
The CTO told me: "I used to resent SOC 2 as a necessary evil. Now I see it as the backbone of our operational excellence. We're more efficient, more secure, and more confident than ever before."
"The goal isn't to pass the audit. The goal is to build controls so robust that passing the audit is inevitable."
Your Next Steps
If you're preparing for a SOC 2 audit, here's what I recommend:
Immediate (This Week):
- Map all your controls to evidence sources
- Identify your highest-risk control areas
- Calculate estimated sample sizes for each control
- Assess current evidence quality
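For the sample-size estimate, a rough planning heuristic is enough at this stage. The base numbers below follow a commonly cited frequency-based convention and the multipliers loosely mirror the factors table earlier in this article, but they are my assumptions, not an audit standard; confirm actual sizes with your auditor:

```python
def estimate_sample_size(frequency: str, automated: bool = False,
                         prior_exceptions: bool = False,
                         high_risk: bool = False) -> int:
    """Rough planning estimate only; not an audit standard.

    Base sizes follow a common frequency-based convention (an
    assumption here); multipliers mirror the risk factors discussed
    earlier in the article.
    """
    base = {"annual": 1, "quarterly": 2, "monthly": 3, "weekly": 10, "daily": 25}
    size = base[frequency]
    if automated:
        size = max(1, size // 2)   # automated controls: config review + fewer samples
    if prior_exceptions:
        size = int(size * 1.5)     # prior findings drive extra testing
    if high_risk:
        size = int(size * 1.5)     # critical controls demand higher confidence
    return size

print(estimate_sample_size("daily"))                  # 25
print(estimate_sample_size("daily", high_risk=True))  # 37
```

Use the output to budget evidence-collection effort per control, not to predict exactly what your auditor will select.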
Short-term (This Month):
- Close any critical evidence gaps
- Implement automated evidence collection where possible
- Run a sample internal test of 5-10 controls
- Create documentation standards
Medium-term (Next Quarter):
- Conduct full internal audit using auditor methodology
- Remediate all identified issues
- Train control owners on evidence requirements
- Build organized evidence repository
Long-term (Ongoing):
- Perform quarterly internal control testing
- Continuously improve evidence quality
- Automate more manual processes
- Build compliance into daily operations
Final Thoughts
Sample selection is where theory meets reality in SOC 2 audits. You can have beautifully documented controls, but if you can't provide quality evidence for the auditor's samples, you'll fail.
After 15 years and 40+ audits, I've learned that organizations that succeed share common traits:
- They treat evidence collection as a real-time activity, not a pre-audit scramble
- They automate evidence collection wherever possible
- They run regular internal audits using the same methodology external auditors use
- They view exceptions as learning opportunities, not failures
- They invest in tools and processes that make compliance easier
The companies that struggle? They treat SOC 2 as a periodic audit event rather than a continuous operational practice.
Don't be the CTO getting a call at 2:47 AM because your audit found significant deficiencies. Be the CTO who gets a congratulatory email about a clean report after a smooth, efficient audit.
Understanding sample selection methodology isn't about gaming the audit. It's about building controls that work so consistently that no matter which samples the auditor selects, they'll find evidence of excellence.