CISA Domains: Five Audit Domains Explained


When the Auditor Asked About Domain 3 and Nobody Knew What That Meant

I'll never forget the expression on the CFO's face when the external auditor asked, "Can you walk me through your Domain 3 controls?" It was 9:15 on a Tuesday morning at Pacific Financial Services, a mid-sized investment firm managing $8.2 billion in assets. The CFO, the CIO, and the newly hired IT Director were sitting across the conference table from two auditors who'd flown in from Chicago to conduct their annual SOC 2 examination.

The CFO looked at the CIO. The CIO looked at the IT Director. The IT Director looked at his notes. Nobody said anything for what felt like an eternity but was probably only fifteen seconds.

Finally, the IT Director ventured, "Domain 3... that's the network security one, right?"

The lead auditor, a woman in her mid-forties with twenty years of CISA certification behind her, shook her head slowly. "Domain 3 is Information Systems Acquisition, Development, and Implementation. It covers how you build, acquire, test, and deploy IT solutions. We need to see evidence of your SDLC processes, change management procedures, testing protocols, and implementation controls."

I watched the color drain from the IT Director's face. He'd been hired three months earlier specifically to prepare for this audit. He'd spent those twelve weeks frantically implementing firewalls, patching systems, and updating antivirus definitions—all Domain 4 and Domain 5 work. Meanwhile, the development team had pushed 47 code changes to production with minimal testing, no formal change approval, and zero documentation. Domain 3 was a complete disaster zone, and nobody had even realized it existed as a discrete audit area.

That audit resulted in 23 findings, $340,000 in remediation costs, a failed SOC 2 report, and the loss of two major clients who required clean audit reports. The IT Director resigned four months later. The CIO was demoted. And I was brought in to rebuild their entire IT governance framework from the ground up.

That painful experience taught me something crucial: you can't protect what you don't understand, and you can't audit what you haven't structured. The CISA framework—Certified Information Systems Auditor—provides that structure through five carefully designed domains that cover the entire landscape of IT audit and assurance. Over the past 15+ years, I've guided dozens of organizations through CISA-aligned audits, helped prepare candidates for the certification exam, and built audit programs based on these five domains for healthcare systems, financial institutions, government agencies, and technology companies.

In this comprehensive guide, I'm going to walk you through each of the five CISA domains in detail. You'll learn what each domain covers, why it matters, how auditors evaluate controls in each area, the common gaps I see organizations struggle with, and most importantly—how to build audit-ready processes that satisfy CISA requirements while actually improving your security and operational resilience. Whether you're preparing for an audit, pursuing CISA certification, or just trying to understand how your IT organization should be structured, this article will give you the practical knowledge to succeed.

Understanding the CISA Framework: The Foundation of IT Audit

Before we dive into the individual domains, let me provide context on what CISA actually is and why these five domains matter so much.

CISA—Certified Information Systems Auditor—is a globally recognized certification issued by ISACA (Information Systems Audit and Control Association). It's the gold standard for professionals who audit, control, monitor, and assess an organization's information technology and business systems. When you see "CISA" after someone's name, you're looking at someone who has demonstrated comprehensive knowledge across all five domains and passed a rigorous examination.

But CISA isn't just a certification—it's a framework that defines how IT audit should be structured. The five domains represent the complete lifecycle of IT governance, risk, and control:

| Domain | Name | Weight on CISA Exam | Primary Focus | Key Stakeholders |
| --- | --- | --- | --- | --- |
| Domain 1 | Information Systems Auditing Process | 21% | Planning, executing, and reporting on IS audits | Auditors, audit committees, risk managers |
| Domain 2 | Governance and Management of IT | 17% | IT strategy, governance, risk management | C-suite, board, IT leadership |
| Domain 3 | Information Systems Acquisition, Development and Implementation | 12% | SDLC, project management, change control | Development teams, project managers |
| Domain 4 | Information Systems Operations and Business Resilience | 23% | IT operations, service management, business continuity | Operations teams, service managers |
| Domain 5 | Protection of Information Assets | 27% | Security controls, access management, encryption | Security teams, compliance officers |

Notice the percentage weights—these reflect both the importance of each domain and the coverage on the CISA exam. Domain 5 (Protection of Information Assets) carries the most weight at 27%, followed closely by Domain 4 (Operations and Business Resilience) at 23%. This tells you where ISACA believes the greatest risk and complexity lie.

Why These Five Domains Matter for Your Organization

Even if you're not pursuing CISA certification personally, these five domains provide an invaluable framework for organizing your IT governance and audit program. Here's why:

Comprehensive Coverage: The domains cover everything from strategic IT governance down to tactical security controls. Nothing falls through the cracks.

Auditor Mindset: When external auditors assess your controls—for SOC 2, ISO 27001, PCI DSS, HIPAA, or any other framework—they're mentally organizing their evaluation using these same domains.

Logical Structure: The domains flow logically: audit process → governance → building systems → operating systems → protecting systems. Each builds on the previous.

Common Language: CISA provides a shared vocabulary for discussing IT controls with auditors, regulators, executives, and technical teams.

At Pacific Financial Services, once we reorganized their IT governance using the CISA five-domain structure, everything became clearer. Instead of a tangled mess of overlapping initiatives, we had:

  • Domain 1: Internal audit schedule and methodology

  • Domain 2: IT steering committee, risk register, policy framework

  • Domain 3: SDLC procedures, change advisory board, testing protocols

  • Domain 4: ITIL-based service management, incident response, business continuity

  • Domain 5: Security architecture, access controls, vulnerability management

The transformation was remarkable. When the auditors returned twelve months later, the lead auditor's first comment was: "This is night and day from last year. You actually understand what we're looking for now."

"Organizing our IT program around the CISA domains didn't just prepare us for audits—it made us operate better. We could finally see gaps, prioritize investments, and explain to the board exactly where we were strong and where we needed improvement." — Pacific Financial Services CIO

Domain 1: Information Systems Auditing Process (21% of CISA Exam)

Domain 1 is unique because it's about the audit itself—how to plan, conduct, and report on information systems audits. While Domains 2-5 focus on what auditors look at, Domain 1 focuses on how they look at it.

This domain is critical for understanding the auditor mindset. When you know how auditors approach their work, you can prepare more effectively and engage more productively during examinations.

The IS Audit Planning Process

Effective IS audits don't start when the auditor shows up on-site—they start weeks or months earlier with comprehensive planning. Here's the process I follow, which aligns with CISA Domain 1 requirements:

| Planning Phase | Key Activities | Deliverables | Timeline Before Audit |
| --- | --- | --- | --- |
| Audit Universe Definition | Identify all auditable entities, systems, and processes | Audit universe inventory | Annual review |
| Risk Assessment | Evaluate inherent risk and control risk for each entity | Risk-ranked audit universe | Annual review |
| Audit Selection | Choose which audits to conduct based on risk, regulatory requirements, time since last audit | Annual audit plan | 6-12 months before |
| Scope Definition | Define specific boundaries, systems, timeframes for each audit | Audit scope document | 2-3 months before |
| Resource Planning | Assign auditors, allocate time, engage specialists if needed | Resource allocation plan | 2-3 months before |
| Preliminary Review | Understand entity operations, review prior audit results, identify key controls | Preliminary assessment | 4-6 weeks before |
| Audit Program Development | Create specific test procedures, define sampling methodology, set objectives | Detailed audit program | 2-4 weeks before |

At Pacific Financial Services, their previous auditors had essentially shown up and started asking random questions. There was no visible methodology, no clear scope, and no understanding of what they were actually testing. This created enormous inefficiency—weeks of disruption with unclear outcomes.

When we engaged new auditors aligned with CISA principles, the difference was striking. They sent a detailed audit plan eight weeks in advance:

Sample Audit Plan Components:

Audit Scope:
- Systems: Trading platform, portfolio management system, client portal
- Processes: Trade execution, settlement, reconciliation, reporting
- Period: January 1 - December 31, 2024
- Locations: San Francisco HQ, New York trading desk
- Key Controls: Access management, change control, transaction authorization

Out of Scope:
- Marketing website (low risk, externally hosted)
- HR systems (separate audit planned)
- Facilities management (non-IT)

Risk Focus Areas:
1. Trade execution authorization (inherent risk: HIGH)
2. Data encryption in transit and at rest (regulatory requirement)
3. Change management for trading algorithms (prior audit finding)
4. Disaster recovery for critical trading systems (high business impact)

Audit Approach:
- Inquiry and observation (understand control design)
- Documentation review (evaluate control documentation)
- Testing (validate control operating effectiveness)
- Sample size: 25 items per control, selected using random sampling

Team:
- Lead Auditor: Sarah Chen, CISA, CISSP
- Senior Auditor: Michael Torres, CISA
- IT Specialist: Jennifer Wu, CISSP (cryptography expertise)

Timeline:
- Week 1: Preliminary documentation review (off-site)
- Weeks 2-3: On-site interviews and walkthroughs
- Week 4: Testing and evidence collection
- Week 5: Draft findings review
- Week 6: Final report issuance

This level of planning transparency allowed Pacific Financial to prepare appropriately—gathering evidence, briefing personnel, and ensuring system availability during testing windows.

Audit Execution Methodology

Once planning is complete, Domain 1 defines how auditors actually conduct their examination. The methodology follows a structured approach:

Audit Execution Phases:

| Phase | Purpose | Methods | Typical Duration |
| --- | --- | --- | --- |
| Understanding | Comprehend how systems and controls operate | Interviews, walkthroughs, process observation | 1-2 weeks |
| Evaluation | Assess whether control design is adequate | Gap analysis, industry benchmarking, risk assessment | 3-5 days |
| Testing | Verify controls operate as designed | Sampling, inspection, re-performance, analytical procedures | 1-3 weeks |
| Analysis | Identify patterns, root causes, systemic issues | Data analytics, trend analysis, comparative analysis | 3-5 days |
| Reporting | Communicate findings and recommendations | Draft report, management response, final report | 1-2 weeks |

I've found that organizations often misunderstand what auditors are actually doing during each phase. At Pacific Financial, when auditors asked to observe a production deployment, the IT team initially resisted: "Why do you need to watch? We have documentation."

The auditor explained: "Documentation tells me what you're supposed to do. Observation tells me what you actually do. There's often a gap."

Sure enough, during observation of a deployment, the auditor noted that while the change approval form listed five required sign-offs, the deployment proceeded with only three. The team explained, "Oh, the database admin is on vacation, but we got verbal approval." That's a control failure—the documented control requires written approval, not verbal.

Sampling Methodology and Evidence Collection

Domain 1 includes detailed guidance on how auditors select samples and collect evidence. Understanding this helps you prepare more effective documentation:

Sampling Approaches:

| Sampling Method | When Used | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Statistical Sampling | Large populations, quantitative analysis needed | Mathematically defensible, confidence levels calculable | Complex, requires expertise, larger samples |
| Judgmental Sampling | Small populations, specific risk areas | Efficient, targeted, auditor discretion | Not statistically projectable, potential bias |
| Random Sampling | General testing, no specific risk areas | Unbiased, representative | May miss concentrated risks |
| Stratified Sampling | Populations with distinct subgroups | Ensures coverage of all strata, efficient | Requires clear stratification criteria |
| 100% Testing | Critical controls, small populations | Complete coverage, no projection needed | Time-consuming, expensive |

At Pacific Financial Services, the auditors used stratified sampling for testing user access reviews. They divided the user population into strata:

  • Stratum 1: Privileged users (100% testing - only 23 users)

  • Stratum 2: Trading desk users (50% sample - 47 of 94 users)

  • Stratum 3: Administrative users (25% sample - 34 of 136 users)

  • Stratum 4: Read-only users (10% sample - 28 of 280 users)

This approach focused testing effort on higher-risk user categories while still obtaining reasonable assurance across the entire population.
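This stratified selection is easy to mechanize. Here is a minimal Python sketch using the populations and sampling rates from the example above (the function names are illustrative, not from any audit tool):

```python
import math
import random

def stratified_sample_sizes(strata):
    """Per-stratum sample size: ceil(population x sampling rate)."""
    return {name: math.ceil(pop * rate) for name, (pop, rate) in strata.items()}

# (population, sampling rate) per stratum, from the access-review example
strata = {
    "privileged":     (23,  1.00),  # 100% testing
    "trading_desk":   (94,  0.50),
    "administrative": (136, 0.25),
    "read_only":      (280, 0.10),
}

sizes = stratified_sample_sizes(strata)
# -> privileged: 23, trading_desk: 47, administrative: 34, read_only: 28

def draw_sample(user_ids, size, seed=2024):
    """Random selection without replacement; a fixed seed keeps it reproducible."""
    return random.Random(seed).sample(list(user_ids), size)
```

A fixed random seed lets the sample be re-derived later, which supports re-performance of the test by a second auditor.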

Evidence Types and Reliability:

| Evidence Type | Reliability Rating | Examples | Audit Use |
| --- | --- | --- | --- |
| Direct Knowledge | Highest | Auditor observation, re-performance, physical inspection | Primary evidence for critical controls |
| External Confirmation | Very High | Third-party certifications, bank confirmations, vendor SOC 2 reports | Corroboration of internal evidence |
| Documentary Evidence | High | Logs, tickets, approvals, system-generated reports | Primary evidence for most IT controls |
| Analytical Evidence | Medium-High | Trend analysis, ratio analysis, reasonableness testing | Supporting evidence, anomaly detection |
| Oral Evidence | Medium | Interviews, explanations, verbal representations | Understanding only, must corroborate |
| Representation Letters | Medium-Low | Management assertions, policy acknowledgments | Establishes accountability, not operating evidence |

The key lesson: documentary evidence from independent sources is most valuable. At Pacific Financial, when they claimed "we review user access quarterly," the auditor asked for evidence. Screenshots of someone's desktop weren't sufficient—system-generated reports showing review completion dates, reviewed by whom, and any access removal actions were required.

Audit Reporting and Communication

Domain 1 emphasizes that audit findings must be clearly communicated, appropriately rated, and documented with sufficient detail for management to take action.

Finding Severity Rating Framework:

| Severity | Definition | Potential Impact | Response Timeline |
| --- | --- | --- | --- |
| Critical | Control failure with immediate risk of material loss or regulatory violation | Organization survival, regulatory action, material financial loss | Immediate (< 30 days) |
| High | Significant control weakness with high likelihood of exploitation | Major operational disruption, significant financial impact, compliance breach | 60-90 days |
| Medium | Control deficiency that could lead to risk realization under certain conditions | Moderate operational impact, moderate financial exposure | 120-180 days |
| Low | Minor control gap or inefficiency with minimal risk | Limited impact, process improvement opportunity | 365 days |
| Observation | Not a control failure but suggested enhancement | Optimization, best practice alignment | No mandate |
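In a findings tracker, these response timelines reduce to a simple lookup. A minimal sketch in Python, assuming the upper bound of each window as the deadline (the field and function names are illustrative, not from any GRC product):

```python
from datetime import date, timedelta

# Maximum remediation windows from the severity framework, in days;
# "observation" carries no mandated timeline.
RESPONSE_WINDOW_DAYS = {
    "critical": 30,
    "high": 90,
    "medium": 180,
    "low": 365,
}

def remediation_due(severity, reported_on):
    """Latest acceptable remediation date, or None for observations."""
    days = RESPONSE_WINDOW_DAYS.get(severity.lower())
    return reported_on + timedelta(days=days) if days else None

def is_overdue(severity, reported_on, today):
    """True if the finding's remediation window has already closed."""
    due = remediation_due(severity, reported_on)
    return due is not None and today > due
```

This kind of lookup is what lets a remediation dashboard flag, say, a critical finding that is still open 45 days after the report date.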

Pacific Financial's failed audit contained findings across all severity levels:

Critical Findings (3):

  1. Production changes deployed without approval (Domain 3 violation)

  2. Database administrator password shared among three individuals (Domain 5 violation)

  3. No disaster recovery testing in 18 months (Domain 4 violation)

High Findings (7):

  4. User access reviews not performed for 8 months (Domain 5 violation)

  5. Security patching SLA routinely missed (Domain 4 violation)

  6. Development and production data commingled (Domain 3 violation)

  ... and 4 more

The remaining issues comprised 8 medium-severity and 5 low-severity findings.

Each finding included specific components required by Domain 1:

Finding Template:

Finding Number: 2024-003
Severity: Critical
Domain: Domain 3 (SDLC)

Condition: Production changes are deployed without documented approval from the Change Advisory Board. Of 47 changes reviewed, 31 (66%) had no CAB approval record.

Criteria: IT Change Management Policy requires CAB approval for all production changes. SOC 2 CC8.1 requires documented authorization for system changes.

Cause: CAB meetings occur monthly, but urgent changes proceed without waiting for the next scheduled meeting. No emergency approval process exists.

Effect: Unauthorized changes could introduce defects, security vulnerabilities, or operational failures. Lack of approval trail creates compliance risk and prevents effective change tracking.

Recommendation: Implement emergency CAB approval process with virtual meeting capability within 4 hours for urgent changes. Require documented approval before any production deployment. Update change management procedures accordingly.

Management Response: [Space for management to respond with action plan, owner, and target completion date]

This structured approach ensured findings were actionable, not merely critical.
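The exception rate cited in the Condition is the kind of figure an auditor computes directly from the change records. A small illustrative sketch, assuming a hypothetical `cab_approval_id` attribute on each change record (not a field from any specific ticketing system):

```python
def approval_exceptions(changes):
    """Count change records lacking a CAB approval reference, plus the rate."""
    missing = sum(1 for change in changes if not change.get("cab_approval_id"))
    return missing, missing / len(changes)

# 47 changes reviewed, 31 with no approval record -> the 66% in the finding
changes = (
    [{"cab_approval_id": f"CAB-{i}"} for i in range(16)]  # approved changes
    + [{} for _ in range(31)]                             # no approval record
)
missing, rate = approval_exceptions(changes)
# missing == 31; rate rounds to 66%
```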

"The new audit reports told us exactly what was wrong, why it mattered, and what to fix. The previous audit just said 'inadequate controls' with no detail. We couldn't improve from that." — Pacific Financial Services IT Director

Domain 2: Governance and Management of IT (17% of CISA Exam)

Domain 2 focuses on the strategic layer—how IT is governed, how it aligns with business objectives, how risks are managed, and how resources are allocated. This is the "why" and "what" layer, while subsequent domains address the "how."

Many technical teams find Domain 2 frustrating because it's less about technology and more about organizational structure, policies, and oversight. But I've learned that Domain 2 failures are often the root cause of technical failures. Poor governance leads to poor decisions, which lead to poor implementations, which lead to incidents.

IT Governance Framework Components

IT governance is about ensuring IT investments support business objectives, deliver value, and manage risk appropriately. Here's the framework structure I implement:

| Governance Component | Purpose | Key Elements | Oversight Body |
| --- | --- | --- | --- |
| IT Strategy | Define direction and priorities for IT | Strategic plan, roadmap, investment priorities | Board, C-suite |
| IT Policies | Establish rules and requirements | Acceptable use, data classification, access control, change management | IT steering committee |
| IT Organization | Define roles and responsibilities | Organizational chart, role descriptions, segregation of duties | CIO, CISO |
| IT Risk Management | Identify and mitigate IT risks | Risk register, risk appetite, treatment plans | Risk committee |
| Performance Management | Measure IT effectiveness | KPIs, SLAs, balanced scorecard | IT leadership |
| Investment Management | Optimize IT spending | Portfolio management, business cases, ROI analysis | Finance, IT steering committee |
| Compliance Management | Ensure regulatory adherence | Compliance mapping, audit schedule, remediation tracking | Compliance officer, legal |

At Pacific Financial Services, their Domain 2 governance was essentially non-existent:

Pre-Remediation State:

  • No IT steering committee

  • No documented IT strategy

  • Policies existed but hadn't been reviewed in 3 years

  • No risk register specific to IT

  • No performance metrics tracked

  • IT budget was line-items without business justification

  • Compliance was reactive, no proactive monitoring

This governance vacuum created the downstream failures the auditors identified. Without clear strategy, priorities were ad-hoc. Without policies, standards were inconsistent. Without risk management, critical vulnerabilities went unaddressed.

Post-Remediation State (12 months):

We established comprehensive governance:

IT Steering Committee:

  • Composition: CFO (chair), CIO, CISO, Head of Trading, Head of Operations, Head of Compliance

  • Meeting Frequency: Monthly

  • Responsibilities: IT investment approval >$50K, policy approval, risk review, strategic initiative oversight

  • Documented Decisions: Meeting minutes, action items, vote records

IT Strategy Document:

  • Strategic Objectives: 5 core objectives aligned to business strategy

  • 3-Year Roadmap: Planned initiatives mapped to objectives

  • Success Metrics: Measurable outcomes for each objective

  • Resource Requirements: Budget, personnel, vendor support needs

Policy Framework:

  • Tier 1 Policies: 8 high-level policies (board-approved)

  • Tier 2 Standards: 23 technical standards (IT leadership-approved)

  • Tier 3 Procedures: 47 detailed procedures (operational teams)

  • Review Cycle: Annual policy review, semi-annual standard review

IT Risk Register:

  • Risk Inventory: 34 identified IT risks

  • Risk Scoring: Likelihood × Impact rating for each

  • Treatment Plans: Mitigate, transfer, accept, or avoid decision with action plans

  • Risk Owners: Named individual accountable for each risk

The transformation was measurable. Audit findings related to Domain 2 went from 6 in the failed audit to 0 in the subsequent audit.

Strategic IT Planning and Alignment

Domain 2 requires IT strategy to demonstrably align with business strategy. Auditors look for evidence that IT investments support business objectives, not just technical preferences.

IT-Business Alignment Framework:

Business Objective

Supporting IT Objective

Key Initiatives

Success Metrics

Expand client base by 25% in 3 years

Scale infrastructure to support 50% user growth

Cloud migration, capacity expansion, automation

System availability >99.9%, onboarding time <2 hours

Reduce operational costs by 15%

Automate manual processes, consolidate vendors

RPA implementation, vendor rationalization

Labor hours saved, vendor cost reduction

Launch mobile trading platform

Develop secure mobile application with real-time data

Mobile app development, API architecture

App store rating >4.5, adoption rate >60%

Ensure regulatory compliance

Implement compliance automation and monitoring

GRC platform, continuous controls monitoring

Audit findings reduction, zero regulatory violations

At Pacific Financial Services, the failed audit revealed a critical misalignment: the firm's strategic plan emphasized client experience and service quality, but IT had invested heavily in infrastructure performance with no client-facing improvements. The auditor noted this disconnect specifically in their findings.

Post-remediation, every IT investment required a business case explicitly linking to strategic objectives:

Sample Business Case Structure:

Initiative: Client Portal Enhancement
Requesting Department: Client Services
Strategic Alignment: "Expand client base" and "Improve client satisfaction"

Business Problem: Current client portal lacks mobile optimization, real-time portfolio updates, and document e-signature. Client satisfaction scores for "ease of doing business" are 6.8/10, below industry average of 8.2/10.

Proposed Solution: Redesign client portal with responsive mobile interface, real-time data feeds, and integrated e-signature workflow.

Business Benefits:
- Increase client satisfaction score to 8.5/10 (target)
- Reduce account opening time from 8 days to 2 days
- Decrease support calls by 30% (self-service functionality)
- Enable mobile-first client acquisition strategy

Cost Analysis:
- Development: $420,000
- Annual maintenance: $85,000
- 3-year TCO: $675,000

Expected ROI:
- Client acquisition cost reduction: $180K annually
- Support cost reduction: $95K annually
- Retention improvement value: $140K annually
- 3-year NPV: $548,000 (positive ROI in Year 2)

Risk Assessment:
- Technical risk: Medium (proven technologies, experienced vendor)
- Schedule risk: Low (8-month timeline with buffer)
- Vendor risk: Low (vendor has 12 similar implementations)

Approval Request: $420,000 capital expenditure

This business case discipline ensured IT spending aligned with business value, not technical preferences.
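The break-even arithmetic in a case like this can be checked mechanically. A minimal, undiscounted sketch using the sample figures (the stated 3-year NPV of $548K additionally assumes a discount rate the case does not give, so only the Year 2 break-even claim is reproduced here):

```python
def cumulative_net_benefit(capex, annual_opex, annual_savings, years):
    """Year-by-year cumulative net benefit, undiscounted, in the same units."""
    net, total = [], -capex
    for _ in range(years):
        total += annual_savings - annual_opex
        net.append(total)
    return net

# Figures from the sample business case, in thousands of dollars
annual_savings = 180 + 95 + 140  # acquisition + support + retention
net = cumulative_net_benefit(capex=420, annual_opex=85,
                             annual_savings=annual_savings, years=3)
# Cumulative position turns positive in Year 2, matching the case's claim
```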

IT Risk Management

Domain 2 requires formal IT risk management processes. Auditors evaluate whether organizations identify, assess, treat, and monitor IT-related risks.

IT Risk Management Process:

| Risk Phase | Activities | Frequency | Outputs |
| --- | --- | --- | --- |
| Identification | Risk workshops, threat modeling, vulnerability assessments, incident analysis | Quarterly | Updated risk inventory |
| Assessment | Likelihood and impact rating, risk scoring, prioritization | Quarterly | Risk register with scores |
| Treatment | Risk response selection (mitigate/transfer/accept/avoid), control design | Per risk | Risk treatment plans |
| Monitoring | KRI tracking, control testing, risk reassessment | Monthly/Quarterly | Risk dashboards, trend reports |
| Reporting | Risk communication to governance bodies, escalation of new/elevated risks | Monthly to steering committee | Risk reports, heat maps |

At Pacific Financial Services, we built their IT risk register from scratch:

Sample Risk Register Entries:

| Risk ID | Risk Description | Category | Inherent Risk (L×I) | Control Effectiveness | Residual Risk | Treatment Plan | Owner |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ITR-001 | Ransomware attack encrypting trading data | Cybersecurity | 4×5 = 20 (Critical) | Medium | 3×4 = 12 (High) | Implement offline backups, EDR, email filtering | CISO |
| ITR-008 | Key developer departure with undocumented code | Operational | 3×4 = 12 (High) | Low | 3×4 = 12 (High) | Code documentation standard, pair programming, knowledge transfer | CIO |
| ITR-015 | Cloud provider outage affecting trading platform | Technology | 2×5 = 10 (High) | Medium | 2×3 = 6 (Medium) | Multi-region deployment, automatic failover | IT Director |
| ITR-023 | Outdated SSL certificates causing client portal outage | Operational | 4×2 = 8 (Medium) | High | 2×2 = 4 (Low) | Automated certificate management, 90-day renewal alerts | Network Admin |

Risk treatment plans included specific controls with implementation timelines and budget:

Risk ID: ITR-001 (Ransomware)
Treatment Approach: Mitigate

Control 1: Offline Backup Implementation
- Deploy air-gapped backup solution
- Timeline: 60 days
- Cost: $85,000
- Expected Risk Reduction: Likelihood 4→3

Control 2: Endpoint Detection and Response
- Deploy CrowdStrike EDR across all endpoints
- Timeline: 45 days
- Cost: $42,000 annually
- Expected Risk Reduction: Likelihood 4→3, Impact 5→4

Control 3: Email Security Enhancement
- Implement Proofpoint email filtering
- Timeline: 30 days
- Cost: $28,000 annually
- Expected Risk Reduction: Likelihood 4→2

Residual Risk After Controls: 2×4 = 8 (Medium) - Acceptable per risk appetite

This disciplined approach to risk management satisfied Domain 2 requirements and materially improved their security posture.
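The Likelihood × Impact scoring used throughout the register is simple to encode. A sketch with band thresholds inferred from the sample entries above (ISACA does not prescribe these exact cutoffs; treat them as an assumption for illustration):

```python
def risk_score(likelihood, impact):
    """Likelihood (1-5) x Impact (1-5), matching the register's notation."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be rated 1-5")
    return likelihood * impact

def risk_band(score):
    """Bands inferred from the register: 20=Critical, 10-12=High, 6-8=Medium, 4=Low."""
    if score >= 16:
        return "Critical"
    if score >= 10:
        return "High"
    if score >= 5:
        return "Medium"
    return "Low"

# ITR-001: inherent 4x5 = 20 (Critical); after the full treatment plan, 2x4 = 8 (Medium)
```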

Organizational Structure and Segregation of Duties

Domain 2 requires appropriate organizational structure with clear roles, responsibilities, and segregation of duties to prevent conflicts of interest and fraud.

Key Segregation of Duties Principles:

| Function A | Should Be Separated From | Rationale | Typical Implementation |
| --- | --- | --- | --- |
| Development | Production access | Developers shouldn't deploy their own code without review | Separate ops team handles deployments |
| System Administration | Security administration | System admins shouldn't control their own access monitoring | Separate security team manages SIEM, access reviews |
| Change Approval | Change Implementation | Approvers shouldn't implement their own changes | CAB approves, different team implements |
| Access Provisioning | Access Approval | Those granting access shouldn't approve their own requests | Managers approve, separate team provisions |
| Backup Operations | Backup Restoration | Those performing backups shouldn't be sole restorers (collusion risk) | Restore requires dual authorization |
| Security Monitoring | Incident Investigation | Monitoring alerts shouldn't go only to those being monitored | SOC independent from IT operations |

At Pacific Financial Services, the failed audit identified a critical segregation of duties failure: the lead developer had:

  • Production database administrator access

  • Ability to approve his own change requests

  • Access to production logs (could cover tracks)

  • No oversight or secondary review

This created opportunity for fraud, data manipulation, or error introduction without detection.

Post-remediation organizational structure:

CIO
├── IT Operations (separate from development)
│   ├── Infrastructure Team (production access)
│   ├── Database Administration (production DBs)
│   └── Service Desk (tier 1 support)
├── Application Development (no production access)
│   ├── Trading Platform Team
│   ├── Client Portal Team
│   └── Internal Tools Team
├── Quality Assurance (independent testing)
│   └── Test Automation Team
└── Project Management Office
    └── Change Advisory Board (cross-functional)

CISO (reports to CEO, dotted line to CIO)
├── Security Operations
│   ├── SOC (monitoring independent from IT ops)
│   └── Incident Response
├── Security Architecture
└── GRC (compliance, risk, governance)

This structure ensured proper separation between development, operations, testing, and security—satisfying Domain 2 segregation requirements.
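Conflicts like the lead developer's can be screened automatically during access reviews. A minimal sketch, with conflict pairs drawn from the segregation table above (the duty labels are illustrative, not entitlements from any real IAM system):

```python
# Conflicting duty pairs from the segregation-of-duties table
CONFLICTS = {
    frozenset({"development", "production_access"}),
    frozenset({"system_admin", "security_admin"}),
    frozenset({"change_approval", "change_implementation"}),
    frozenset({"access_provisioning", "access_approval"}),
}

def sod_violations(assignments):
    """Return (person, conflicting pair) for every conflict held by one person.

    assignments: dict mapping person -> set of duty labels.
    """
    violations = []
    for person, duties in assignments.items():
        for pair in CONFLICTS:
            if pair <= duties:  # person holds both duties in the pair
                violations.append((person, tuple(sorted(pair))))
    return violations

# The lead developer from the failed audit held multiple conflicting duties
staff = {
    "lead_developer": {"development", "production_access",
                       "change_approval", "change_implementation"},
    "ops_engineer": {"production_access"},
}
```

Running this check against the provisioning system on each access review cycle turns the table's principles into a repeatable control rather than a one-time audit observation.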

"Fixing our organizational structure wasn't just about passing audits—it prevented a fraud scheme we discovered during the reorganization. Our lead developer had been manipulating trading data for personal gain. The lack of separation had made it possible." — Pacific Financial Services CFO

Domain 3: Information Systems Acquisition, Development and Implementation (12% of CISA Exam)

Domain 3 is where Pacific Financial Services completely failed their initial audit—and it's one of the most commonly overlooked domains because organizations focus on security (Domain 5) and operations (Domain 4) while neglecting how systems are actually built and changed.

Domain 3 covers the entire Software Development Lifecycle (SDLC), from requirements gathering through deployment, plus acquisition of commercial off-the-shelf (COTS) software, and all change management processes.

The Systems Development Lifecycle (SDLC)

Every organization develops or modifies software—whether that's custom applications, configuration of purchased software, or scripts that automate operations. Domain 3 requires a formal SDLC methodology appropriate to the organization's context.

SDLC Phase Requirements:

| SDLC Phase | Key Activities | Required Artifacts | Audit Evidence |
| --- | --- | --- | --- |
| Feasibility/Planning | Business case, alternative analysis, resource assessment | Business case, cost-benefit analysis, project charter | Approved business case, steering committee minutes |
| Requirements | Gather and document functional and non-functional requirements | Requirements specification, use cases, acceptance criteria | Stakeholder sign-off, requirements traceability matrix |
| Design | Define architecture, interfaces, data models, security controls | Design documents, data flow diagrams, security architecture | Design review sign-off, architecture approval |
| Development | Code development, unit testing, code review | Source code, unit test results, code review records | Code repository logs, peer review evidence |
| Testing | Integration testing, user acceptance testing, security testing | Test plans, test cases, test results, defect logs | Test completion reports, UAT sign-off |
| Implementation | Deployment to production, user training, data migration | Deployment plan, training materials, migration scripts | Change approval, deployment checklist, rollback plan |
| Post-Implementation | Warranty support, defect resolution, performance monitoring | Issue logs, performance reports, lessons learned | Post-implementation review report |

At Pacific Financial Services, their pre-audit SDLC was essentially: "Developer writes code, developer tests code, developer deploys code to production." That's not an SDLC—that's chaos.

Post-remediation, we implemented a formal SDLC appropriate to their size and risk profile:

Pacific Financial Services SDLC Framework:

Phase 1: Initiation (1-2 weeks)
- Business submits project request form
- IT leadership reviews for feasibility
- If approved, assign project manager
- Create project charter
- Deliverable: Approved project charter

Phase 2: Requirements (2-4 weeks)
- Business analysts interview stakeholders
- Document functional requirements
- Security team defines security requirements
- Compliance reviews regulatory requirements
- Requirements review meeting with all stakeholders
- Deliverable: Signed requirements document

Phase 3: Design (2-6 weeks)
- Development team creates technical design
- Database team designs data model
- Security architect reviews for security controls
- Infrastructure team reviews for capacity/performance
- Design review meeting with all technical leads
- Deliverable: Approved design document

Phase 4: Development (4-12 weeks)
- Developers write code in feature branches
- Peer code review required before merge
- Unit tests required (minimum 70% coverage)
- Daily stand-ups, weekly sprint reviews
- Deliverable: Completed code in source control

Phase 5: Testing (2-4 weeks)
- QA team executes test plan
- Security team performs security testing
- Business team conducts user acceptance testing
- All critical/high defects must be resolved
- Deliverable: Test completion report, UAT sign-off

Phase 6: Deployment (1 week)
- Submit change request to CAB
- CAB reviews and approves
- Operations team deploys to production
- Business verifies production functionality
- Development team on-call for 48 hours post-deployment
- Deliverable: Deployment completion checklist

Phase 7: Post-Implementation (2 weeks post-deployment)
- Monitor system performance and errors
- Address any defects discovered
- Conduct lessons learned session
- Update documentation based on lessons learned
- Deliverable: Post-implementation review report

This structured approach ensured quality, security, and compliance were built into every development effort—not bolted on afterward.
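A phase-gate discipline like this can also be enforced mechanically. The sketch below is hypothetical (the gate map and function are illustrative, not a real tool); the phase names and deliverables are taken from the framework above:

```python
# Hypothetical sketch of a phase-gate check for the SDLC above.
# Phase names and deliverables mirror the framework; the data
# structure and function names are illustrative.

SDLC_GATES = {
    "Initiation": ["approved project charter"],
    "Requirements": ["signed requirements document"],
    "Design": ["approved design document"],
    "Development": ["completed code in source control"],
    "Testing": ["test completion report", "UAT sign-off"],
    "Deployment": ["deployment completion checklist"],
    "Post-Implementation": ["post-implementation review report"],
}

def missing_deliverables(phase: str, produced: set) -> list:
    """Deliverables still required before the phase gate can close."""
    return [d for d in SDLC_GATES[phase] if d not in produced]

# Testing cannot close with only one of its two required artifacts.
print(missing_deliverables("Testing", {"test completion report"}))  # ['UAT sign-off']
```

The point of the gate is that advancing a phase is a lookup, not a judgment call: either the artifacts exist or the project waits.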

Change Management and Change Control

Even organizations with strong SDLC processes often fail at change management—the process of controlling modifications to production systems. Domain 3 requires rigorous change control to prevent unauthorized, untested, or poorly planned changes from disrupting operations.

Change Management Framework:

| Change Type | Definition | Approval Required | Testing Required | Example |
|---|---|---|---|---|
| Emergency | Fixes for production incidents causing service disruption | Emergency CAB (within 4 hours) | Minimal, post-implementation validation | Hotfix for critical security vulnerability |
| Standard | Pre-approved, low-risk, documented procedures | Pre-authorization (no individual approval) | Standard test procedure | Monthly patching, password resets |
| Normal | All other changes to production systems | Full CAB review and approval | Comprehensive testing in non-production | Application updates, configuration changes |
| Major | Significant changes with high business impact | CAB + executive approval | Extensive testing, pilot deployment | Platform migrations, architecture changes |

Pacific Financial Services had no change categories—everything was treated the same (or more accurately, nothing was formally managed). This meant critical changes were rushed while trivial changes consumed excessive review time.
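The routing logic behind change categories reduces to a few ordered checks. This is a hypothetical sketch (not part of any ITSM product) that returns a change's category and approval path:

```python
def classify_change(is_emergency: bool, pre_approved: bool,
                    high_business_impact: bool) -> tuple:
    """Map a change to its category and approval path, per the table above."""
    if is_emergency:
        return ("Emergency", "Emergency CAB (within 4 hours)")
    if pre_approved:
        return ("Standard", "Pre-authorization (no individual approval)")
    if high_business_impact:
        return ("Major", "CAB + executive approval")
    return ("Normal", "Full CAB review and approval")

# A routine application update: not an emergency, not pre-approved,
# normal business impact -> Normal, full CAB review.
print(classify_change(False, False, False)[0])  # Normal
```

Encoding the categories this way is what keeps critical changes from being rushed and trivial changes from consuming review time: the path is decided by the change's attributes, not by whoever shouts loudest.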

Post-remediation Change Advisory Board (CAB) structure:

CAB Composition:

  • Chair: IT Director

  • Members: Application Development Lead, Infrastructure Lead, Database Lead, Security Lead, QA Lead

  • Business Representative: Rotates based on change impact area

  • Meeting Frequency: Weekly (Wednesdays 2-4 PM)

  • Emergency CAB: Virtual meeting convened within 4 hours of emergency declaration

Change Request Required Information:

Change Request Template:
CR Number: CR-2024-0847 (auto-assigned)
Submitted By: John Smith, Senior Developer
Submission Date: 2024-03-15
Change Title: Update trading platform API authentication to OAuth 2.0
Change Type: Normal (requires CAB approval)

Business Justification: Current basic authentication is deprecated and poses a security risk. OAuth 2.0 provides better security, supports multi-factor authentication, and is required for the upcoming mobile app integration.

Systems Affected:
- Trading Platform API (production)
- Client Portal (production - consumes API)
- Mobile App (development - will consume API)

Change Description: Replace existing basic authentication with the OAuth 2.0 authorization code flow. Implement token-based authentication with a 1-hour access token lifetime and refresh token capability.

Implementation Plan:
1. Deploy OAuth server in production (parallel to existing auth)
2. Update API to accept both basic auth and OAuth (backward compatible)
3. Update client portal to use OAuth
4. Validate client portal functionality
5. Deprecate basic auth after 30-day transition period

Testing Completed:
- Unit testing: 94% coverage (requirement: 70%)
- Integration testing: All 47 test cases passed
- Security testing: Penetration test completed, no high findings
- UAT: Client Services team validated, signed off 2024-03-12

Rollback Plan (if issues detected):
1. Revert API to basic auth only (5-minute rollback window)
2. Revert client portal to basic auth (15-minute rollback window)
3. All code changes in version control, tagged for easy reversion

Risk Assessment:
- Risk: Authentication failures preventing client access
- Likelihood: Low (extensive testing, backward-compatible approach)
- Impact: High (clients unable to trade)
- Mitigation: Phased rollout, backward compatibility, tested rollback

Scheduled Deployment:
- Date: 2024-03-20 (Wednesday)
- Time: 8:00 PM - 10:00 PM EST (after market close)
- Duration: 2 hours (estimated)
- On-Call: Development team (John Smith, Maria Garcia)

CAB Approval:
[ ] Approved - Proceed as planned
[ ] Approved with conditions: ________________________________
[ ] Rejected - Reason: ________________________________
[ ] Deferred - Additional information needed: ________________

Approval Signatures:
IT Director: _________________ Date: _________
Security Lead: _______________ Date: _________
Business Rep: ________________ Date: _________
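An intake check can reject incomplete change requests before they ever reach the CAB agenda. The field names below are illustrative (derived from the template's sections); the function is a hypothetical sketch:

```python
# Required sections of a change request, per the template above.
# Field names are hypothetical keys, not a real ServiceNow schema.
REQUIRED_CR_FIELDS = (
    "business_justification", "systems_affected", "change_description",
    "implementation_plan", "testing_completed", "rollback_plan",
    "risk_assessment", "scheduled_deployment",
)

def missing_cr_fields(cr: dict) -> list:
    """Return empty or absent sections; an empty result means CAB can review."""
    return [f for f in REQUIRED_CR_FIELDS if not cr.get(f)]

# A draft with a justification but an empty rollback plan fails intake.
draft = {"business_justification": "Deprecated basic auth", "rollback_plan": ""}
print(missing_cr_fields(draft))
```

In practice this kind of check runs as a form validation or workflow guard, so "no rollback plan" becomes impossible rather than merely discouraged.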

This detailed change management process eliminated the unapproved changes — 66% of all production changes — that had plagued their failed audit.

Testing and Quality Assurance

Domain 3 requires comprehensive testing before production deployment. Auditors look for evidence that systems are tested for functionality, security, performance, and compliance.

Testing Types and Requirements:

| Test Type | Purpose | When Performed | Acceptance Criteria | Responsibility |
|---|---|---|---|---|
| Unit Testing | Verify individual code components function correctly | During development | 70% code coverage minimum | Developers |
| Integration Testing | Verify system components work together | After development, before UAT | All interfaces validated, no critical defects | QA Team |
| User Acceptance Testing | Verify system meets business requirements | After integration testing | Business stakeholder sign-off, all requirements validated | Business Users |
| Security Testing | Identify vulnerabilities and security flaws | Before production deployment | No high/critical vulnerabilities, security controls validated | Security Team |
| Performance Testing | Verify system meets performance requirements | Before production deployment | Response time, throughput, scalability targets met | Infrastructure Team |
| Regression Testing | Ensure changes don't break existing functionality | After any code changes | All existing test cases still pass | QA Team |

At Pacific Financial Services, the failed audit found that of 47 production changes, only 8 had documented testing evidence. The remaining 39 were deployed with developer-only testing at best.

Post-remediation testing requirements:

Mandatory Testing Evidence:

For every production deployment, the change request must include:

1. Unit Test Results
- Code coverage report showing >70% coverage
- All unit tests passed
- Report generated from automated testing framework

2. Integration Test Results
- Test plan executed (documented test cases)
- Test execution log showing pass/fail for each case
- Defect log showing any issues found and resolution status

3. UAT Sign-Off
- Business stakeholder identified
- UAT test cases executed by business user
- Sign-off document with stakeholder signature and date

4. Security Test Results (for changes affecting authentication, authorization, data handling, or external interfaces)
- Security test plan
- Vulnerability scan results
- Security team sign-off

5. Performance Test Results (for changes affecting system performance or capacity)
- Load test results
- Response time measurements
- Infrastructure team validation

Without this evidence, the CAB will NOT approve the change request.

This testing discipline caught defects before production deployment—reducing production incidents by 74% over 12 months.
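The coverage floor is the easiest of these gates to automate. A minimal sketch (hypothetical function; a real pipeline would read the numbers from its test framework's coverage report):

```python
def coverage_gate(covered_lines: int, total_lines: int,
                  minimum_pct: float = 70.0) -> bool:
    """Enforce the 70% unit-test coverage floor before CAB review."""
    if total_lines == 0:
        return False  # no measurable code means no evidence
    return 100.0 * covered_lines / total_lines >= minimum_pct

print(coverage_gate(94, 100))  # True  (the OAuth change's 94% clears the bar)
print(coverage_gate(52, 100))  # False
```

Wired into CI as a required check, this turns "testing evidence" from a document someone writes into a condition the build cannot pass without.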

Commercial Off-The-Shelf (COTS) Software Acquisition

Not everything is custom-developed. Domain 3 also covers how organizations acquire, implement, and integrate commercial software. Auditors evaluate vendor selection, contract review, implementation processes, and ongoing vendor management.

COTS Acquisition Process:

| Phase | Activities | Required Artifacts | Decision Makers |
|---|---|---|---|
| Requirements Definition | Define functional, technical, security, and compliance requirements | Requirements document, vendor evaluation criteria | Business stakeholders, IT |
| Vendor Selection | RFP/RFI process, vendor demonstrations, reference checks | RFP document, vendor responses, evaluation matrix | Procurement, IT steering committee |
| Contract Negotiation | SLA definition, data ownership, security requirements, exit clauses | Vendor contract, SLA, SOC 2 report requirement | Legal, procurement |
| Implementation | System configuration, data migration, integration, testing | Implementation plan, test results, go-live checklist | IT, business |
| Acceptance | User acceptance, performance validation, documentation review | Acceptance criteria, sign-off document | Business stakeholders |
| Ongoing Management | Vendor performance monitoring, SLA compliance, relationship management | Vendor scorecard, SLA reports, escalation log | Vendor management team |

Pacific Financial Services had acquired their portfolio management system without any formal process—the sales rep took the CEO to lunch, gave a demo on an iPad, and they signed a contract the next week. No requirements document. No security review. No SLA negotiation. Just a three-year, $340,000 commitment.

Post-remediation, we implemented formal COTS acquisition governance that prevented such mistakes:

Vendor Evaluation Matrix Example:

| Requirement Category | Weight | Vendor A Score | Vendor B Score | Vendor C Score |
|---|---|---|---|---|
| Functional Fit | 30% | 85/100 | 92/100 | 78/100 |
| Security Controls | 25% | 90/100 (SOC 2 Type 2) | 75/100 (no certification) | 88/100 (SOC 2 Type 2) |
| Integration Capability | 15% | 80/100 (REST API) | 70/100 (limited API) | 95/100 (comprehensive API) |
| Vendor Viability | 10% | 85/100 (established, stable) | 60/100 (startup, uncertain) | 90/100 (market leader) |
| Cost | 10% | $280K (mid-range) | $180K (lowest) | $420K (highest) |
| Support & Training | 10% | 75/100 (standard support) | 85/100 (excellent support) | 80/100 (good support) |
| Weighted Total | 100% | 83.5 | 79.0 | 85.5 |

This structured approach ensured vendor selection was based on comprehensive evaluation, not sales charm.
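The weighted total is a simple dot product of category weights and 0-100 scores. In this sketch the cost score for Vendor A is a hypothetical 75/100 (the matrix lists dollar figures, not normalized cost scores), which happens to reproduce the 83.5 total:

```python
# Weights from the evaluation matrix above. Vendor A's scores, with the
# cost category normalized to a hypothetical 75/100 (the table shows dollars).
WEIGHTS = {"functional": 0.30, "security": 0.25, "integration": 0.15,
           "viability": 0.10, "cost": 0.10, "support": 0.10}

def weighted_score(scores: dict) -> float:
    """Weighted sum of category weights and 0-100 vendor scores."""
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

vendor_a = {"functional": 85, "security": 90, "integration": 80,
            "viability": 85, "cost": 75, "support": 75}
print(round(weighted_score(vendor_a), 1))  # 83.5
```

The value of the matrix is less the arithmetic than the forced conversation about weights: agreeing that security is worth 25% and cost only 10% happens before anyone sees a demo.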

"The formal SDLC and change management processes felt bureaucratic at first. But after they prevented a catastrophic deployment that would have taken down trading for hours, everyone became believers. Structure prevents disasters." — Pacific Financial Services Senior Developer

Domain 4: Information Systems Operations and Business Resilience (23% of CISA Exam)

Domain 4 is the second-highest weighted domain on the CISA exam at 23%, and for good reason—this is where most organizations live day-to-day. While Domains 2 and 3 focus on governance and building systems, Domain 4 focuses on running them reliably, efficiently, and resiliently.

I've seen more audit findings in Domain 4 than any other domain except Domain 5. Organizations often build systems properly but operate them poorly—leading to incidents, outages, and compliance failures.

IT Service Management and Operations

Domain 4 requires structured IT service management, typically aligned with frameworks like ITIL (Information Technology Infrastructure Library). Auditors evaluate whether you have defined processes for incident management, problem management, change management (operational aspects), and service desk operations.

Core ITSM Processes:

| ITSM Process | Purpose | Key Metrics | Common Audit Findings |
|---|---|---|---|
| Incident Management | Restore normal service as quickly as possible | MTTR (mean time to restore), incident count, SLA compliance | Incidents not logged, no prioritization, slow response |
| Problem Management | Identify root causes and prevent recurrence | Problem-to-incident ratio, recurrence rate, MTTR for problems | No root cause analysis, recurring incidents not analyzed |
| Change Management | Control changes to minimize disruption | Change success rate, emergency change %, unauthorized changes | Covered in Domain 3 |
| Service Desk | Single point of contact for IT support | First-call resolution, customer satisfaction, ticket backlog | No ticketing system, inconsistent documentation |
| Service Level Management | Define and monitor service commitments | SLA achievement %, availability, performance metrics | No SLAs defined, no monitoring, no reporting |
| Capacity Management | Ensure adequate resources for current and future demand | Utilization %, capacity forecasts, growth planning | No capacity monitoring, reactive scaling only |
| Availability Management | Maximize system uptime and reliability | Availability %, unplanned downtime, MTBF | No availability targets, no redundancy |

At Pacific Financial Services, their failed audit revealed operational chaos:

Pre-Remediation Operational State:

  • No ticketing system (support requests via email and Slack)

  • No defined SLAs for any service

  • Incidents not classified by severity

  • No escalation procedures

  • No capacity planning (infrastructure sized reactively)

  • No availability targets or monitoring

This operational immaturity led to:

  • 47 outages in 12 months (average 1 per week)

  • Average incident resolution time: 6.4 hours

  • Customer satisfaction: 4.2/10

  • Unplanned downtime: 127 hours annually (98.6% availability vs. industry standard 99.9%+)
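Those availability figures follow directly from annual downtime. A quick sketch of the arithmetic, using the 8,760 hours in a non-leap year:

```python
def availability_pct(downtime_hours: float,
                     hours_per_year: float = 8760.0) -> float:
    """Percent of the year a service was up, given unplanned downtime."""
    return 100.0 * (hours_per_year - downtime_hours) / hours_per_year

print(round(availability_pct(127), 1))  # 98.6  (pre-remediation: 127 hours down)
print(round(availability_pct(22), 2))   # 99.75 (post-remediation: 22 hours down)
```

The gap between 98.6% and 99.9% sounds small until you express it this way: it is the difference between 127 hours and under 9 hours of downtime a year.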

Post-Remediation Operational Framework:

We implemented structured ITSM using ServiceNow:

Incident Management Process:

Incident Detection:
- Automated monitoring alerts (Datadog, PagerDuty)
- User-reported (phone, email, service portal)
- Service desk identification during support calls
Incident Logging:
- All incidents logged in ServiceNow
- Required fields: title, description, affected service, business impact
- Auto-assignment based on service and impact

Incident Classification:

Priority 1 (Critical): Complete service failure, >100 users affected
- Response SLA: 15 minutes
- Resolution SLA: 4 hours
- Escalation: Immediate to IT Director + CISO

Priority 2 (High): Significant degradation, 25-100 users affected
- Response SLA: 30 minutes
- Resolution SLA: 8 hours
- Escalation: After 4 hours to IT Director

Priority 3 (Medium): Limited impact, <25 users affected
- Response SLA: 2 hours
- Resolution SLA: 24 hours
- Escalation: After 12 hours to team lead

Priority 4 (Low): Minor issue, workaround available
- Response SLA: 8 hours
- Resolution SLA: 72 hours
- Escalation: None

Incident Resolution:
- Technical teams work incidents based on priority
- All actions documented in the incident ticket
- Resolution documented, user notified
- Incident closed after user confirms

Post-Incident Review (for P1/P2 incidents):
- Conducted within 48 hours
- Root cause analysis using the "5 Whys"
- Preventive actions identified
- Problem ticket created if systemic issue
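The classification rules can be expressed as one small, ordered decision function. A hypothetical sketch (the real routing lives in ServiceNow assignment rules; response SLAs here are in minutes, resolution SLAs in hours):

```python
def classify_incident(users_affected: int, complete_failure: bool = False,
                      workaround_available: bool = False) -> dict:
    """Priority and SLAs per the classification scheme above."""
    if complete_failure or users_affected > 100:
        return {"priority": 1, "response_min": 15, "resolution_hr": 4}
    if users_affected >= 25:
        return {"priority": 2, "response_min": 30, "resolution_hr": 8}
    if not workaround_available:
        return {"priority": 3, "response_min": 120, "resolution_hr": 24}
    return {"priority": 4, "response_min": 480, "resolution_hr": 72}

print(classify_incident(150)["priority"])                           # 1
print(classify_incident(5, workaround_available=True)["priority"])  # 4
```

Codifying priority this way is what makes the SLA metrics auditable: every ticket's response clock starts from a rule, not a triage argument.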

Problem Management Process:

Problem Identification:
- Recurring incidents (3+ incidents same root cause)
- Trend analysis (increasing incident frequency)
- Proactive reviews (vulnerability scans, capacity reports)
Problem Investigation:
- Assign to a senior technical resource
- Conduct thorough root cause analysis
- Document findings in the problem ticket

Problem Resolution:
- Develop permanent fix or mitigation
- Submit change request for implementation
- Link related incidents to the problem
- Close the problem after the change is deployed and validated

Problem Review:
- Monthly problem management meeting
- Review open problems
- Prioritize based on incident impact
- Assign resources to high-impact problems

Results after 12 months:

  • Outages reduced from 47 to 11 annually (77% reduction)

  • Average incident resolution time: 2.1 hours (67% improvement)

  • Customer satisfaction: 8.6/10 (105% improvement)

  • Unplanned downtime: 22 hours annually (99.75% availability)

Business Continuity and Disaster Recovery

Domain 4 heavily emphasizes business continuity and disaster recovery—ensuring critical operations continue during disruptions and systems can be restored after disasters.

Note: While I covered business continuity comprehensively in a previous article, I'll address the specific Domain 4 audit requirements here.

BC/DR Audit Requirements:

| Component | Audit Expectation | Evidence Required | Common Gaps |
|---|---|---|---|
| Business Impact Analysis | Documented RTOs and RPOs for critical systems | BIA report with financial impact, dependency mapping | Outdated BIA, no financial quantification |
| DR Plan Documentation | Step-by-step recovery procedures | Recovery playbooks, contact lists, vendor agreements | Generic procedures, outdated contacts |
| Backup Strategy | Comprehensive backup with offsite storage | Backup schedules, retention policies, test results | No offsite backups, no test evidence |
| DR Testing | Regular testing with documented results | Test plans, test results, lessons learned | No testing, or tests with no documentation |
| Alternate Site | Alternate processing capability for critical systems | Hot/warm/cold site contracts, configuration documentation | No alternate site, or site not tested |
| Communication Plan | Crisis communication procedures | Communication trees, templates, stakeholder lists | No plan, or untested plan |

Pacific Financial Services' failed audit found:

  • Last BIA conducted 4 years ago

  • DR plan existed but hadn't been tested in 18 months (the critical finding I mentioned earlier)

  • Backups performed but never validated with actual restore

  • No alternate site arrangement

  • No crisis communication plan

Post-remediation BC/DR program:

Backup & Recovery Strategy:

| System Tier | RTO | RPO | Backup Method | Retention | Restore Testing |
|---|---|---|---|---|---|
| Tier 1 (Critical) | 4 hours | 15 minutes | Continuous replication to Azure + daily to tape | 30 days online, 7 years tape | Monthly |
| Tier 2 (Important) | 24 hours | 4 hours | Daily to Azure + weekly to tape | 14 days online, 1 year tape | Quarterly |
| Tier 3 (Standard) | 72 hours | 24 hours | Daily to local NAS + monthly to tape | 7 days online, 1 year tape | Semi-annually |

DR Testing Program:

Quarterly Tabletop Exercises:
- Scenario-based discussion of DR procedures
- Team walkthrough of recovery steps
- Contact list verification
- Duration: 3 hours
- Participants: IT leadership, key technical staff
- Output: Updated procedures, action items

Annual DR Test:
- Actual failover to Azure DR environment
- Test recovery of Tier 1 systems
- Validate RTO/RPO achievement
- Business user validation of recovered systems
- Duration: 8 hours (Saturday)
- Participants: Full IT team, business representatives
- Output: Test report, remediation plan

Post-Test Review:
- Lessons learned session within 1 week
- Gap analysis (what worked, what didn't)
- Action plan for improvements
- Plan updates based on findings

First annual DR test results:

  • Successfully failed over 7 of 8 Tier 1 systems

  • One system failed (database corruption during replication)

  • Average RTO achieved: 3.2 hours (target: 4 hours)

  • Identified 12 improvement opportunities

  • Cost to remediate issues: $45,000

  • Value: Discovered critical flaw that would have prevented recovery in real disaster
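Scoring a DR test against tier targets is a per-system comparison of achieved recovery time against the RTO. A hypothetical sketch (system names and timings are illustrative, not the actual test data):

```python
def rto_results(achieved_hours: dict, target_hours: float) -> dict:
    """Did each recovered system come back within its tier's RTO target?"""
    return {system: hours <= target_hours
            for system, hours in achieved_hours.items()}

# Illustrative Tier 1 results measured against the 4-hour target.
tier1 = {"trading-platform": 2.8, "client-portal": 3.1, "reporting-db": 4.6}
print(rto_results(tier1, target_hours=4.0))
```

An "average RTO achieved" headline can hide a failing system, which is why the test report tracks each system individually rather than the mean alone.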

Capacity and Performance Management

Domain 4 requires proactive capacity management—ensuring systems have adequate resources for current demand and can scale for future growth. Auditors look for capacity monitoring, trend analysis, and capacity planning processes.

Capacity Management Requirements:

| Capacity Dimension | Monitoring Required | Threshold Management | Planning Horizon |
|---|---|---|---|
| Compute | CPU utilization, memory usage, process counts | Alert at 80%, critical at 90% | 12-24 months |
| Storage | Disk space, I/O throughput, growth rate | Alert at 75%, critical at 85% | 18-24 months |
| Network | Bandwidth utilization, latency, packet loss | Alert at 70%, critical at 85% | 12-18 months |
| Database | Connection pool usage, query performance, table size | Alert at 80%, critical at 90% | 12-24 months |
| Application | Transaction volume, concurrent users, response time | Alert based on SLA thresholds | 6-12 months |

Pacific Financial Services had zero capacity monitoring—they only knew they had a capacity problem when systems crashed. This reactive approach caused:

  • 8 capacity-related outages in 12 months

  • Emergency infrastructure purchases without planning

  • Over-provisioning in some areas, under-provisioning in others

  • $280,000 in unplanned infrastructure spending

Post-remediation capacity management:

Capacity Monitoring Dashboard:

Real-Time Monitoring (Datadog):
- CPU: 43% utilization (threshold: 80%)
- Memory: 67% utilization (threshold: 80%)
- Storage: 62% used (threshold: 75%)
- Network: 31% bandwidth (threshold: 70%)
- Database connections: 124/200 (threshold: 160/200)

Trend Analysis (90-day):
- CPU trend: +2.3% monthly growth
- Memory trend: +1.8% monthly growth
- Storage trend: +4.2% monthly growth (concerning)
- Network trend: +1.1% monthly growth
- Database trend: +3.5% monthly growth

Capacity Forecast (12-month):
- CPU: Will reach 80% threshold in 14 months (no action required)
- Memory: Will reach 80% threshold in 7 months (plan upgrade)
- Storage: Will reach 75% threshold in 3 months (immediate action)
- Network: Will reach 70% threshold in 30 months (no action required)
- Database: Will reach threshold in 9 months (plan upgrade)

Recommended Actions:
1. Storage expansion: Add 20TB within 45 days (cost: $85,000)
2. Memory upgrade: Plan for Q3 (cost: $42,000)
3. Database optimization: Query tuning to extend capacity (cost: internal effort)
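The forecast column is simple arithmetic once you assume growth is linear in percentage points per month — a simplification, but one that reproduces the storage and memory lines above:

```python
def months_to_threshold(current_pct: float, threshold_pct: float,
                        growth_pct_per_month: float) -> float:
    """Months until utilization crosses its alert threshold, assuming
    linear growth in percentage points per month (a simplification)."""
    if growth_pct_per_month <= 0:
        return float("inf")  # flat or shrinking: threshold never reached
    return (threshold_pct - current_pct) / growth_pct_per_month

print(round(months_to_threshold(62, 75, 4.2)))  # 3  (storage: act now)
print(round(months_to_threshold(67, 80, 1.8)))  # 7  (memory: plan upgrade)
```

Even this crude model is enough to turn capacity management from reactive ("the disk is full") to proactive ("we have a 45-day window to add 20TB").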

This proactive approach eliminated capacity-related outages and reduced unplanned infrastructure spending by 73%.

Security Operations (Domain 4 Components)

While Domain 5 focuses on security controls and architecture, Domain 4 addresses operational security—monitoring, incident response, and ongoing security operations.

Security Operations Requirements:

| Component | Purpose | Implementation | Audit Focus |
|---|---|---|---|
| Security Monitoring | Detect security events and anomalies | SIEM, IDS/IPS, log aggregation | 24/7 monitoring coverage, alert tuning |
| Log Management | Collect and retain security-relevant logs | Centralized logging, tamper-proof storage | Log completeness, retention periods |
| Vulnerability Management | Identify and remediate vulnerabilities | Vulnerability scanning, patch management | Scan frequency, remediation timelines |
| Incident Response | Respond to security incidents | IR plan, IR team, forensic capability | Plan testing, incident handling evidence |
| Threat Intelligence | Stay informed of emerging threats | Threat feeds, industry collaboration | Integration with defenses, action on intelligence |

Pacific Financial Services' security operations were minimal:

  • No SIEM (logs only reviewed reactively)

  • Vulnerability scans performed quarterly (not remediated systematically)

  • Incident response was ad-hoc (no documented process)

  • No threat intelligence integration

Post-remediation security operations:

Security Operations Center (SOC) Structure:

SOC Coverage: 8 AM - 8 PM weekdays (12 hours)
After-Hours: PagerDuty escalation for critical alerts

SOC Tools:
- SIEM: Splunk Enterprise Security
- Endpoint: CrowdStrike Falcon
- Network: Palo Alto Networks with Threat Prevention
- Vulnerability: Tenable.io
- Threat Intelligence: Recorded Future

Alert Categories:

Priority 1 (Critical): Confirmed breach, active attack, data exfiltration
- Response: Immediate, engage incident response team

Priority 2 (High): Suspicious activity, potential breach indicators
- Response: Within 30 minutes, investigate and escalate if confirmed

Priority 3 (Medium): Policy violations, failed access attempts
- Response: Within 4 hours, investigate and remediate

Priority 4 (Low): Informational, baseline deviations
- Response: Within 24 hours, review and document

SOC Metrics (Monthly):
- Alerts generated: 2,847
- Alerts investigated: 2,847 (100%)
- True positives: 83 (2.9%)
- False positives: 2,764 (97.1% - ongoing tuning to reduce)
- Average investigation time: 18 minutes
- Incidents escalated to IR team: 3
- Mean time to detect (MTTD): 12 minutes
- Mean time to respond (MTTR): 34 minutes
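The true- and false-positive percentages above come straight from the alert counts. A sketch of the arithmetic:

```python
def alert_rates(true_positives: int, total_alerts: int) -> tuple:
    """True-positive and false-positive rates as percentages of all alerts."""
    tp_pct = 100.0 * true_positives / total_alerts
    return round(tp_pct, 1), round(100.0 - tp_pct, 1)

# 83 true positives out of 2,847 monthly alerts.
print(alert_rates(83, 2847))  # (2.9, 97.1)
```

Tracking this ratio month over month is what makes "ongoing tuning" measurable: a falling false-positive rate is the evidence that tuning is working.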

This security operations capability satisfied Domain 4 requirements and materially improved their security posture.

"Before we had structured operations, we were firefighting constantly. After implementing ITSM processes, capacity management, and security operations, we shifted from reactive to proactive. Incidents decreased, uptime improved, and we could actually plan instead of just survive." — Pacific Financial Services IT Director

Domain 5: Protection of Information Assets (27% of CISA Exam)

Domain 5 carries the highest weight on the CISA exam at 27%—and generates the most audit findings in my experience. This domain covers the controls that actually protect data and systems: access management, encryption, network security, physical security, and security architecture.

While Domains 1-4 provide the framework for building and operating systems, Domain 5 is about ensuring those systems remain secure, confidential, and available despite constant threats.

Logical Access Controls

Access control is the foundation of information security. Domain 5 requires comprehensive controls over who can access systems and data, how they're authenticated, what they're authorized to do, and how access is monitored and reviewed.

Access Control Framework Components:

| Control Type | Purpose | Implementation | Audit Evidence |
|---|---|---|---|
| User Identification | Uniquely identify each user | Unique user IDs, no shared accounts | User directory, account creation logs |
| Authentication | Verify user identity | Passwords + MFA, biometrics, certificates | Authentication logs, MFA enrollment records |
| Authorization | Grant appropriate permissions | Role-based access control (RBAC), least privilege | Permission matrices, access approval records |
| Access Provisioning | Create and grant access systematically | Automated provisioning, approval workflows | Provisioning tickets, approval evidence |
| Access Modification | Update access based on role changes | Change request process, manager approval | Access change requests, approval records |
| Access Termination | Remove access upon separation | Automated de-provisioning, immediate disabling | Termination logs, disabled account list |
| Access Review | Periodically validate access appropriateness | Quarterly access reviews, manager attestation | Review reports, remediation evidence |
| Privileged Access Management | Control and monitor administrative access | Jump boxes, session recording, just-in-time access | Admin access logs, session recordings |

Pacific Financial Services' failed audit revealed catastrophic access control failures:

Critical Access Control Findings:

  1. Shared Administrative Passwords: Database admin password known by 3 people

  2. No Access Reviews: User access never reviewed since hire (some users had access for 6+ years without validation)

  3. Orphaned Accounts: 47 accounts for terminated employees still active

  4. No MFA: Single-factor authentication only (password)

  5. Excessive Privileges: 67% of users had more access than needed for their role

  6. No Provisioning Process: Access granted via email to IT, no approval documentation

Post-remediation access control program:

Identity and Access Management (IAM) Framework:

User Lifecycle Management:
1. Access Request Process:
- New hire or role change triggers access request
- Manager completes request form specifying required access
- HR validates employment status
- IT provisions access based on role template
- Evidence: ServiceNow ticket with manager approval

2. Role-Based Access Control (RBAC):
- 12 standard roles defined (Trader, Portfolio Manager, Client Services, etc.)
- Each role has a predefined access template
- Custom access requires additional approval
- Evidence: Role definition matrix, template documentation

3. Multi-Factor Authentication:
- MFA required for all users
- Okta Verify for internal applications
- SMS or hardware token for VPN
- 100% MFA enrollment within 30 days of implementation
- Evidence: Okta enrollment reports

4. Privileged Access Management:
- All administrative access via CyberArk jump server
- No direct admin access to production
- Session recording for audit trail
- Just-in-time access for temporary elevation
- Evidence: CyberArk session logs, access requests

5. Quarterly Access Reviews:
- Report generated showing each user's access
- Manager reviews and certifies appropriateness
- IT removes any access marked inappropriate
- 100% completion required
- Evidence: Access review reports, manager sign-offs

6. Automated Termination:
- HR termination in Workday triggers Okta deactivation
- All access removed within 1 hour
- Admin access removed immediately
- Evidence: Workday-to-Okta integration logs
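The orphaned-account finding is exactly the set difference an automated control looks for: accounts still enabled, minus people still on the HR roster. A hypothetical sketch (account names are illustrative; a real check would pull both sets from the directory and the HR system):

```python
def orphaned_accounts(enabled_accounts: set, hr_roster: set) -> set:
    """Accounts still active for people no longer employed."""
    return enabled_accounts - hr_roster

enabled = {"jsmith", "mgarcia", "former.employee"}
roster = {"jsmith", "mgarcia"}
print(orphaned_accounts(enabled, roster))  # {'former.employee'}
```

Run on a schedule and alerted on a non-empty result, this single comparison would have flagged all 47 of the firm's orphaned accounts.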

Access Control Metrics After 12 Months:

| Metric | Pre-Remediation | Post-Remediation | Improvement |
|---|---|---|---|
| Orphaned accounts | 47 | 0 | 100% |
| MFA enrollment | 0% | 100% | N/A |
| Shared passwords | 12 | 0 | 100% |
| Access review completion | Never performed | 100% quarterly | N/A |
| Excessive privileges | 67% of users | 8% of users | 88% reduction |
| Provisioning SLA compliance | N/A | 96% | N/A |
| Termination SLA compliance | N/A | 100% | N/A |

Cryptography and Encryption

Domain 5 requires appropriate use of cryptography to protect data confidentiality and integrity—both in transit (on networks) and at rest (in storage).

Encryption Requirements by Data Classification:

| Data Classification | In-Transit Encryption | At-Rest Encryption | Key Management | Compliance Requirements |
|---|---|---|---|---|
| Public | Optional | Optional | N/A | None |
| Internal | TLS 1.2+ | Recommended | Standard key storage | General security |
| Confidential | TLS 1.2+ required | AES-256 required | Dedicated key management | PCI, SOC 2 |
| Regulated | TLS 1.2+ required | AES-256 required | HSM or cloud KMS | PCI, HIPAA, SOX |

Pacific Financial Services had minimal encryption:

  • HTTPS on external websites only

  • No database encryption

  • No file-level encryption

  • No key management system

  • Backup tapes unencrypted

Post-remediation encryption implementation:

Comprehensive Encryption Architecture:

Data in Transit: - All external communications: TLS 1.2 minimum (moving to TLS 1.3) - All internal APIs: TLS 1.2 required - VPN connections: AES-256 - Email: TLS required, S/MIME for sensitive messages - Evidence: SSL Labs reports, configuration scans
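The TLS floor can be enforced programmatically rather than by convention. A minimal Python sketch using the standard `ssl` module, illustrating the policy rather than the firm's actual configuration:

```python
import ssl

# Client-side context that refuses anything below TLS 1.2,
# matching the "TLS 1.2 minimum" policy for external communications.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Connections negotiated through this context can never fall back to
# TLS 1.0/1.1, regardless of what the remote endpoint offers.
assert context.minimum_version == ssl.TLSVersion.TLSv1_2
```

Setting the floor in code (or in a baseline configuration template) is what turns the policy line into auditable evidence: the configuration scan either shows the setting or it doesn't.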

Data at Rest: - Databases: Transparent Data Encryption (TDE) for all production databases - File servers: BitLocker for Windows, LUKS for Linux - Cloud storage: Azure Storage Service Encryption - Backups: AES-256 encryption before offsite storage - Laptops/endpoints: Full disk encryption required (enforced via MDM) - Evidence: Encryption status reports, configuration audit logs
Key Management: - Production keys: Azure Key Vault (HSM-backed) - Backup encryption keys: Stored in separate Key Vault with restricted access - Key rotation: Automated annual rotation - Key access: Logged and monitored - Evidence: Key Vault audit logs, rotation schedules
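The annual rotation requirement is easy to verify with a scheduled check. A hypothetical sketch of the due-date logic (the real control here runs inside Azure Key Vault's automated rotation, with the audit logs as evidence):

```python
from datetime import date, timedelta

ROTATION_INTERVAL = timedelta(days=365)  # annual rotation per policy

def rotation_due(last_rotated: date, today: date) -> bool:
    """True when a key has gone a year or more without rotation."""
    return today - last_rotated >= ROTATION_INTERVAL

print(rotation_due(date(2023, 1, 10), date(2024, 3, 1)))  # True: overdue
print(rotation_due(date(2024, 1, 10), date(2024, 3, 1)))  # False: current
```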

Cryptographic Standards Policy:

| Use Case | Algorithm | Minimum Key Length | Rotation Frequency |
|---|---|---|---|
| Symmetric encryption | AES | 256-bit | Annually |
| Asymmetric encryption | RSA or ECC | RSA 2048-bit, ECC 256-bit | Every 2 years |
| Hashing | SHA-2 | SHA-256 | N/A (algorithm, not key) |
| Digital signatures | RSA or ECDSA | RSA 2048-bit, ECDSA 256-bit | Every 2 years |
| TLS/SSL | TLS 1.2+ | 128-bit cipher minimum | Certificate: annually |

This encryption architecture satisfied PCI DSS, SOC 2, and internal security requirements while protecting sensitive financial and client data.
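As a small illustration of the hashing row in the standards policy, computing a SHA-256 digest with Python's standard library (the file contents here are made up):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Integrity digest per policy: SHA-2 family, SHA-256 minimum."""
    return hashlib.sha256(data).hexdigest()

digest = sha256_hex(b"quarterly-access-review.pdf contents")
assert len(digest) == 64  # 256 bits rendered as hex

# Any single-byte change yields a completely different digest,
# which is what makes hashes useful for tamper detection.
assert sha256_hex(b"report v1") != sha256_hex(b"report v2")
```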

Network Security

Domain 5 requires network segmentation, boundary protection, and traffic monitoring to prevent unauthorized access and detect attacks.

Network Security Architecture:

| Security Layer | Control Type | Implementation | Audit Evidence |
|---|---|---|---|
| Perimeter Defense | Firewall, IPS, WAF | Next-gen firewall, web application firewall | Firewall rules, IPS logs |
| Network Segmentation | VLANs, subnets, security zones | Separate networks for prod/dev/corp | Network diagrams, ACLs |
| Internal Firewalls | East-west traffic control | Micro-segmentation, zero trust | Internal firewall rules |
| Remote Access | VPN, secure gateway | Multi-factor VPN, Zscaler ZPA | VPN logs, access records |
| Wireless Security | WPA3, 802.1X | Enterprise wireless with RADIUS | Wireless configs, auth logs |
| DDoS Protection | Rate limiting, cloud scrubbing | Cloudflare DDoS protection | Traffic analysis, mitigation logs |

Pacific Financial Services' network was flat: trading systems, development environments, corporate workstations, and guest WiFi all shared the same subnet with minimal segmentation. This created a massive attack surface.

Post-remediation network architecture:

Segmented Network Design:

Network Zones:

1. DMZ Zone (Public-Facing) - Web servers, VPN gateway, email gateway - No direct connectivity to internal networks - All inbound traffic through WAF - Firewall rules: Explicit allow only
2. Production Zone (Critical Systems) - Trading platform, portfolio management, databases - No internet access (air-gapped) - Access only from jump servers - Firewall rules: Default deny, explicit allows documented
3. Application Zone (Internal Applications) - Client portal backend, reporting systems, HR system - Limited internet access via proxy - Access from corporate zone restricted by role - Firewall rules: Least privilege
4. Development Zone - Development servers, test environments, QA systems - Isolated from production (no connectivity) - Internet access allowed for tool downloads - Firewall rules: Separate from production
5. Corporate Zone - User workstations, printers, shared drives - Internet access via web proxy with content filtering - No direct access to production zone - Firewall rules: Restricted based on user role
6. Management Zone - IT administration systems, monitoring tools, backup servers - Jump servers for production access - Two-factor authentication required - Firewall rules: Highly restricted, logged
Firewall Rule Philosophy: - Default deny all traffic - Explicit allows based on business need - All rules documented with business justification - Quarterly rule review and cleanup - Change management required for all rule changes
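The default-deny philosophy can be sketched as a rule matcher: traffic is dropped unless an explicit allow rule covers it. The zones, networks, and rules below are hypothetical, purely to illustrate "explicit allow only":

```python
import ipaddress

# Hypothetical allow list: (source network, destination network, dest port).
ALLOW_RULES = [
    (ipaddress.ip_network("10.20.0.0/16"), ipaddress.ip_network("10.10.5.0/24"), 443),  # corp -> app zone HTTPS
    (ipaddress.ip_network("10.30.0.0/24"), ipaddress.ip_network("10.10.0.0/16"), 22),   # jump servers -> production SSH
]

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: traffic passes only if an explicit rule matches."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return any(s in src_net and d in dst_net and port == p
               for src_net, dst_net, p in ALLOW_RULES)

print(is_allowed("10.20.1.5", "10.10.5.9", 443))   # True: explicit allow
print(is_allowed("10.20.1.5", "10.10.9.9", 3389))  # False: no rule, denied
```

Note the direction of the logic: the function never enumerates what to block, only what to permit, which is exactly why every rule needs a documented business justification.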

This network segmentation dramatically reduced the attack surface: an attacker who breached the corporate zone could no longer pivot directly to trading systems.

Physical and Environmental Security

While often overlooked in favor of cyber controls, Domain 5 includes physical security—protecting facilities, equipment, and media from unauthorized access, theft, or environmental damage.

Physical Security Controls:

| Control Category | Requirements | Implementation | Audit Evidence |
|---|---|---|---|
| Access Control | Restrict facility access to authorized personnel | Badge readers, biometrics, visitor logs | Access logs, badge inventory |
| Surveillance | Monitor and record facility activity | CCTV with 90-day retention | Camera locations, recording logs |
| Environmental | Protect from fire, flood, power loss | Fire suppression, HVAC, UPS, generators | Inspection reports, test records |
| Equipment Security | Prevent theft or tampering | Locked server rooms, cable locks, asset tags | Asset inventory, security inspections |
| Media Handling | Secure storage and disposal | Locked cabinets, shredding, degaussing | Media logs, destruction certificates |

Pacific Financial Services had reasonable physical security (badge access, cameras) but poor environmental controls and no media handling procedures.

Post-remediation physical security:

Server Room Environmental Controls:

Fire Suppression: - FM-200 clean agent system (no water damage to electronics) - Tested annually - Alarm integrated with building management system - Evidence: Test reports, alarm test logs

HVAC: - Redundant cooling units - Temperature monitoring with alerts (65-75°F target) - Humidity monitoring (40-60% target) - Evidence: Monitoring logs, maintenance records
Power: - Dual power feeds from separate utility sources - UPS: 30-minute runtime at full load - Generator: 72-hour runtime with fuel contract - Monthly generator test - Evidence: Test logs, fuel delivery receipts
Water Detection: - Water sensors under raised floor - Alerts to 24/7 monitoring - Evidence: Sensor test logs, alert history
Access Control: - Biometric + badge required - Access logged and reviewed monthly - Maximum 12 authorized individuals - Evidence: Access logs, authorized user list
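The temperature and humidity targets above translate directly into alert thresholds. A hypothetical monitoring check, not the building management system itself:

```python
# Thresholds matching the stated targets: 65-75 F and 40-60% relative humidity.
TEMP_RANGE = (65.0, 75.0)
HUMIDITY_RANGE = (40.0, 60.0)

def environmental_alerts(temp_f: float, humidity_pct: float) -> list:
    """Return a list of alert strings; empty list means all readings in range."""
    alerts = []
    if not TEMP_RANGE[0] <= temp_f <= TEMP_RANGE[1]:
        alerts.append(f"temperature out of range: {temp_f} F")
    if not HUMIDITY_RANGE[0] <= humidity_pct <= HUMIDITY_RANGE[1]:
        alerts.append(f"humidity out of range: {humidity_pct}%")
    return alerts

print(environmental_alerts(72.0, 45.0))  # []  (normal conditions)
print(environmental_alerts(81.5, 35.0))  # both thresholds breached
```

The alert history this kind of check generates is itself the audit evidence: it proves the monitoring operated continuously, not just that sensors were installed.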

Media Handling Procedures:

Secure Storage:
- All backup tapes in locked cabinet
- Media inventory tracked
- Evidence: Media inventory logs
Offsite Storage: - Iron Mountain secure vault - Chain of custody maintained - Evidence: Transport logs, vault receipts
Media Disposal: - Hard drives: Physical destruction (crushing) + certificate - Tapes: Degaussing + shredding + certificate - Paper: Cross-cut shredding - No disposal without certificate of destruction - Evidence: Destruction certificates, disposal logs

Security Monitoring and Incident Response

Domain 5 requires not just preventive controls but also detective controls—monitoring for security events, detecting incidents, and responding effectively.

Note: While Domain 4 covered operational aspects of incident response, Domain 5 focuses on security-specific monitoring and response.

Security Monitoring Architecture:

Log Sources (sent to Splunk):
- Firewalls: All allow/deny decisions
- VPN: All connection attempts
- Authentication: All login attempts (success and failure)
- Privileged access: All administrative actions
- Database: All queries against sensitive tables
- Endpoints: Security events from CrowdStrike
- Cloud: Azure AD logs, Azure activity logs
SIEM Use Cases (Splunk correlation rules):
  1. Multiple failed logins → Potential brute force
  2. Login from anomalous location → Potential account compromise
  3. Privileged account used outside business hours → Unauthorized access
  4. Large data transfer to external IP → Potential exfiltration
  5. Endpoint malware detection → Infection
  6. Firewall deny spike → Potential scan or attack
  7. Account created outside approval process → Unauthorized account
  ... (34 use cases total)
Incident Response Tiers:
  • Tier 1 (SOC Analyst): Initial investigation, alert triage
  • Tier 2 (Senior SOC Analyst): Deep investigation, containment recommendations
  • Tier 3 (Incident Response Team): Major incidents, forensics, recovery
Incident Response Process:
  1. Detection (automated alert or manual report)
  2. Triage (assess severity and scope)
  3. Containment (isolate affected systems)
  4. Eradication (remove threat)
  5. Recovery (restore systems)
  6. Post-Incident Review (lessons learned)
All P1/P2 incidents documented with: - Timeline of events - Root cause analysis - Actions taken - Lessons learned - Preventive measures
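The first correlation rule, failed-login clustering, amounts to a sliding-window count. The threshold and window below are assumptions for illustration; real Splunk rules are tuned per environment:

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # assumed correlation window
THRESHOLD = 5                    # assumed failure count before alerting

def brute_force_alerts(events):
    """events: iterable of (timestamp, user, success). Alert on any user
    who accumulates THRESHOLD failed logins within WINDOW."""
    failures = defaultdict(list)
    alerts = set()
    for ts, user, success in sorted(events):
        if success:
            continue
        # Keep only failures still inside the window, then add this one.
        window = [t for t in failures[user] if ts - t < WINDOW] + [ts]
        failures[user] = window
        if len(window) >= THRESHOLD:
            alerts.add(user)
    return alerts

base = datetime(2024, 1, 1, 9, 0)
events = [(base + timedelta(minutes=i), "svc-trader", False) for i in range(5)]
events.append((base, "alice", False))  # one isolated failure: no alert
print(brute_force_alerts(events))  # {'svc-trader'}
```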

After implementing this comprehensive Domain 5 program, Pacific Financial Services went from 6 security-related audit findings to zero in their follow-up audit.

"Domain 5 was our weakest area. We thought having a firewall and antivirus was 'security.' The comprehensive approach—access controls, encryption, network segmentation, monitoring, incident response—transformed us from vulnerable to defensible." — Pacific Financial Services CISO

Preparing for CISA-Aligned Audits: Practical Implementation Roadmap

Now that we've covered all five domains in detail, let me give you a practical roadmap for preparing your organization for CISA-aligned audits. This is the approach I use with clients, refined over dozens of implementations.

Phase 1: Baseline Assessment (Weeks 1-4)

Start by understanding where you are today across all five domains:

Assessment Activities:

| Domain | Assessment Method | Key Questions | Output |
|---|---|---|---|
| Domain 1 | Document review | Do you have audit schedules? Methodologies? Prior audit reports? | Audit maturity assessment |
| Domain 2 | Governance review | Do you have IT strategy? Policies? Risk register? Steering committee? | Governance gaps |
| Domain 3 | SDLC evaluation | Do you have documented SDLC? Change management? Testing processes? | Development maturity |
| Domain 4 | Operations assessment | Do you have ITSM? SLAs? BC/DR tested? Capacity management? | Operational gaps |
| Domain 5 | Security controls review | Do you have access controls? Encryption? Network segmentation? Monitoring? | Security control gaps |

For Pacific Financial Services, this baseline revealed:

  • Domain 1: Minimal (no internal audit capability)

  • Domain 2: Weak (no governance structure)

  • Domain 3: Critical gaps (no SDLC, poor change management)

  • Domain 4: Weak (reactive operations, no BC/DR testing)

  • Domain 5: Critical gaps (poor access controls, minimal encryption, flat network)

Phase 2: Prioritization and Planning (Weeks 5-6)

You can't fix everything at once. Prioritize based on risk, compliance requirements, and audit timeline:

Prioritization Matrix:

| Initiative | Risk Reduction | Compliance Impact | Implementation Effort | Priority |
|---|---|---|---|---|
| Implement MFA | High | Critical (SOC 2, PCI) | Low (2 weeks) | P1 |
| Network segmentation | High | High (SOC 2) | High (12 weeks) | P1 |
| SDLC documentation | Medium | Critical (SOC 2) | Medium (6 weeks) | P1 |
| Access review process | High | Critical (SOC 2) | Low (4 weeks) | P1 |
| BC/DR testing | Medium | High (SOC 2) | Medium (8 weeks) | P2 |
| SIEM implementation | Medium | Medium (SOC 2) | High (16 weeks) | P2 |
| Encryption at rest | Medium | Medium (PCI) | Medium (8 weeks) | P2 |
| Capacity management | Low | Low | Medium (6 weeks) | P3 |

Phase 3: Quick Wins (Weeks 7-10)

Implement high-impact, low-effort improvements immediately:

Quick Win Initiatives:

  1. Multi-Factor Authentication (2 weeks, $15K)

    • Deploy Okta MFA

    • 100% user enrollment

    • Immediately reduces account compromise risk

  2. Access Review Process (4 weeks, $8K)

    • Generate access reports from directory

    • Manager review and attestation

    • Remove excessive access

  3. Password Policy (1 week, $0)

    • Enforce complexity requirements

    • Require 90-day rotation for privileged accounts

    • No shared passwords

  4. Vulnerability Scanning (2 weeks, $12K)

    • Deploy Tenable.io

    • Weekly scans

    • Track remediation

These quick wins demonstrate progress and build momentum for larger initiatives.

Phase 4: Foundation Building (Weeks 11-24)

Implement foundational controls and processes:

Foundation Initiatives:

  1. Governance Structure (8 weeks)

    • Establish IT steering committee

    • Document IT strategy

    • Create policy framework

    • Build risk register

  2. SDLC and Change Management (12 weeks)

    • Document SDLC methodology

    • Implement change advisory board

    • Deploy change management in ServiceNow

    • Train development teams

  3. ITSM Framework (10 weeks)

    • Deploy ServiceNow

    • Implement incident/problem management

    • Define SLAs

    • Train support teams

  4. Network Segmentation (12 weeks)

    • Design segmented architecture

    • Implement VLANs and firewalls

    • Migrate systems to appropriate zones

    • Document and test

Phase 5: Advanced Controls (Weeks 25-40)

Build sophisticated security and operational capabilities:

Advanced Initiatives:

  1. SIEM Implementation (16 weeks)

    • Deploy Splunk

    • Configure log sources

    • Build correlation rules

    • Train SOC analysts

  2. Privileged Access Management (12 weeks)

    • Deploy CyberArk

    • Migrate admin accounts

    • Implement session recording

    • Train administrators

  3. BC/DR Program (14 weeks)

    • Update BIA

    • Design DR architecture

    • Deploy Azure Site Recovery

    • Test recovery procedures

  4. Encryption Architecture (10 weeks)

    • Deploy database TDE

    • Implement Azure Key Vault

    • Encrypt backups

    • Enforce endpoint encryption

Phase 6: Testing and Refinement (Weeks 41-48)

Before the audit, test everything:

Pre-Audit Testing:

  1. Internal Audit (4 weeks)

    • Conduct mock audit using CISA framework

    • Identify remaining gaps

    • Remediate findings

  2. Control Testing (3 weeks)

    • Test samples of each control

    • Validate evidence availability

    • Fix documentation gaps

  3. Tabletop Exercises (1 week)

    • Walk through procedures with teams

    • Ensure everyone knows their role

    • Practice explaining controls to auditors

For Pacific Financial Services, this 48-week roadmap transformed them from failed audit to clean audit in 12 months.

The CISA Mindset: Thinking Like an Auditor

The single most valuable lesson I can share is this: auditors evaluate controls for three qualities—design, implementation, and operating effectiveness.

Control Evaluation Framework:

| Quality | Question | Evidence Required | Example |
|---|---|---|---|
| Design | Is the control adequate to address the risk? | Control documentation, policy, procedure | "We require manager approval for access" - good design |
| Implementation | Has the control actually been deployed? | Configuration evidence, system settings | Approval workflow configured in ServiceNow - implemented |
| Operating Effectiveness | Does the control work consistently over time? | Sample testing over the audit period | 25 access requests tested, all with manager approval - effective |

A control can have good design but poor implementation (documented but not deployed). Or good design and implementation but poor operating effectiveness (deployed but not consistently followed).

At Pacific Financial Services, many of their controls failed the operating effectiveness test. They had decent policies (design) and some deployed tools (implementation), but evidence showed controls weren't consistently operating. Access reviews were supposed to be quarterly—but hadn't occurred in 8 months. That's an operating effectiveness failure.

Thinking Like an Auditor Means:

  1. Documentation Discipline: If it's not documented, it doesn't exist. Auditors trust evidence, not explanations.

  2. Consistency: Controls must operate the same way every time. One-off successes don't demonstrate effectiveness.

  3. Independence: Controls are stronger when performed by independent parties (segregation of duties).

  4. Timeliness: Controls must operate within defined timeframes. Quarterly reviews on a 10-month schedule fail.

  5. Completeness: Partial coverage isn't sufficient. If access reviews cover 80% of users, the remaining 20% is a gap.

When you internalize this mindset, you stop just having policies and start having effective controls that auditors can validate.

Conclusion: From Audit Failure to IT Excellence

As I reflect on Pacific Financial Services' journey—from that embarrassing conference room moment when nobody knew what Domain 3 meant, through 12 months of intense remediation, to a clean audit and dramatically improved IT operations—I'm struck by how transformative the CISA framework proved to be.

The five domains aren't just audit categories—they're a comprehensive blueprint for IT excellence:

Domain 1 taught them to approach audits systematically, understanding what auditors look for and how to demonstrate control effectiveness.

Domain 2 gave them governance structure, aligning IT with business strategy and managing risks proactively rather than reactively.

Domain 3 brought discipline to how they build and change systems, preventing the deployment chaos that plagued their failed audit.

Domain 4 transformed their reactive firefighting into proactive operations management, dramatically improving reliability and resilience.

Domain 5 evolved their security from basic antivirus and firewall to comprehensive defense-in-depth with access controls, encryption, segmentation, and monitoring.

The financial impact was measurable:

  • Audit costs: Reduced from $340,000 (failed audit + remediation) to $85,000 (annual clean audits)

  • Client retention: Prevented loss of 2 clients worth $4.2M annually

  • Operational incidents: Reduced 74%, saving approximately $280,000 annually in incident costs

  • Insurance premiums: Cyber insurance reduced 23% due to improved controls

  • New business: Won 3 enterprise clients specifically due to clean SOC 2 report

But more importantly, their IT organization transformed from a cost center that caused problems to a strategic enabler that drives business value.

The IT Director who almost lost his job in the failed audit? He's now the CIO, promoted based on the transformation he led. The CFO who didn't know what Domain 3 was? He's now the biggest advocate for IT governance, regularly presenting the IT risk dashboard to the board.

And that lead auditor who shook her head in disappointment during the failed audit? Her firm now uses Pacific Financial Services as a reference example of how to remediate audit findings and build sustainable IT governance.

Your Next Steps: Building CISA-Aligned IT Governance

Whether you're facing an upcoming audit, pursuing CISA certification, or just trying to build better IT governance, the five domains provide your roadmap:

  1. Assess Your Current State: Honestly evaluate where you stand in each domain. Don't guess—gather evidence.

  2. Identify Critical Gaps: Prioritize based on risk, compliance requirements, and business impact. Focus on high-risk, high-impact gaps first.

  3. Build Foundations Before Advanced Controls: You can't implement effective security monitoring (Domain 5) without operational stability (Domain 4). You can't have operational stability without proper change management (Domain 3). Build in sequence.

  4. Document Everything: Auditors trust evidence, not explanations. If your controls aren't documented and evidenced, they don't exist from an audit perspective.

  5. Test Operating Effectiveness: Don't just deploy controls—verify they work consistently over time. Test samples, review logs, validate outcomes.

  6. Create Feedback Loops: Use audit findings, incident analysis, and control testing to continuously improve. CISA-aligned governance isn't a destination—it's an ongoing journey.

The CISA domains aren't just about passing audits—though they'll definitely help you do that. They're about building IT organizations that are governed strategically, built systematically, operated reliably, and secured comprehensively. Organizations that deliver business value while managing risk appropriately.

At PentesterWorld, we've guided dozens of organizations through this transformation—from failed audits to IT excellence, from reactive firefighting to proactive governance, from security theater to genuine resilience. We understand the CISA domains, the audit process, the common gaps, and most importantly—how to build sustainable programs that satisfy auditors while actually improving your operations.

Whether you're preparing for your first CISA-aligned audit or overhauling a program that's lost its way, the principles I've outlined here will serve you well. Don't wait for your own embarrassing conference room moment. Build your CISA-aligned IT governance today.


Want to discuss your audit preparation needs? Have questions about implementing CISA-aligned controls? Visit PentesterWorld where we transform audit gaps into IT governance excellence. Our team of CISA-certified practitioners has guided organizations from failed audits to industry-leading maturity. Let's build your audit-ready IT organization together.
