SDLC Phases and Security Activities
The following table summarizes security activities across each SDLC phase:

| Phase | Primary Security Activities | Key Outputs |
|---|---|---|
| Requirements | Risk assessment, abuse cases, compliance mapping | Security requirements with acceptance criteria |
| Design | Threat modeling, architecture review, ADRs | Security design patterns, compensating controls |
| Implementation | SAST, SCA, secure coding, code review | Scanned and reviewed code, security checklists |
| Verification | DAST, IAST, fuzzing, penetration testing | Security test results, policy compliance |
| Release | Artifact signing, provenance verification | Signed artifacts, deployment approval |
| Operations | Monitoring, incident response, vulnerability management | Security metrics, lessons learned |
Requirements Phase
Effective security requirements emerge from systematic risk analysis; a sketch of one requirement expressed as an executable test follows the list:
- Risk-driven requirements: Threat modeling and risk assessment outputs should directly inform security requirements, ensuring coverage of identified attack vectors
- Verifiable acceptance criteria: Each security requirement needs measurable acceptance criteria that enable automated or manual verification
- Prioritization parity: Security requirements must be prioritized alongside functional requirements to ensure appropriate resourcing and sprint allocation
- Abuse case documentation: Document how the system could be misused to systematically identify security requirements (see OWASP Abuse Case Cheat Sheet)
- Early compliance mapping: Identify applicable compliance frameworks (SOC 2, PCI DSS, HIPAA, GDPR) early to prevent late-stage surprises
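As a concrete illustration, here is a minimal sketch of a hypothetical requirement ("accounts lock after five consecutive failed logins") written as a pytest acceptance test. The in-memory `AuthService` class is an illustrative stand-in, not a real framework API:

```python
# Minimal sketch: a security requirement ("SEC-REQ-012: lock accounts after
# 5 consecutive failed logins") expressed as an executable acceptance test.
# AuthService is a hypothetical in-memory stand-in for the real service.


class AuthService:
    def __init__(self, lockout_threshold: int = 5):
        self.lockout_threshold = lockout_threshold
        self.failures: dict[str, int] = {}
        self.passwords = {"alice": "s3cret"}  # illustrative fixture data

    def is_locked(self, user: str) -> bool:
        return self.failures.get(user, 0) >= self.lockout_threshold

    def login(self, user: str, password: str) -> bool:
        if self.is_locked(user):
            return False  # locked accounts reject even valid credentials
        if self.passwords.get(user) == password:
            self.failures[user] = 0
            return True
        self.failures[user] = self.failures.get(user, 0) + 1
        return False


def test_account_locks_after_five_failed_logins():
    svc = AuthService(lockout_threshold=5)
    for _ in range(5):
        assert not svc.login("alice", "wrong-password")
    assert svc.is_locked("alice")
    # Even the correct password must now be rejected.
    assert not svc.login("alice", "s3cret")
```

The requirement, its acceptance criterion, and its verification now live in one reviewable artifact that runs in CI.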
Design Phase
Architecture decisions have outsized security impact. Address security during design through the following practices; a threat-model recording sketch follows the list:
- Architecture security reviews: Conduct reviews before implementation begins, focusing on trust boundaries, data flows, and authentication mechanisms
- Structured threat modeling: Use methodologies like STRIDE or PASTA to systematically identify threats and mitigations
- Abuse case integration: Design should explicitly prevent documented abuse cases with appropriate controls
- Compensating controls: For accepted risks, identify and document compensating controls that reduce residual risk to acceptable levels
- Security design patterns: Apply established patterns (defense in depth, least privilege, secure defaults) to prevent common vulnerability classes
- Architecture Decision Records: Document security trade-offs in ADRs to preserve decision rationale for future maintainers
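One lightweight way to make threat-model output reviewable and trackable is to record each finding as structured data. The sketch below assumes a simple dataclass schema of our own invention; dedicated threat-modeling tools serve the same purpose at scale:

```python
# Minimal sketch of recording STRIDE threat-model findings as structured
# data so they can be reviewed, tracked, and later converted into tests.
# The field names and values are illustrative, not a prescribed schema.
from dataclasses import dataclass


@dataclass
class Threat:
    id: str
    stride_category: str  # Spoofing, Tampering, Repudiation, Information
                          # disclosure, Denial of service, Elevation of privilege
    component: str
    description: str
    mitigation: str
    status: str           # e.g., "mitigated", "accepted", "open"


threats = [
    Threat(
        id="TM-001",
        stride_category="Spoofing",
        component="login endpoint",
        description="Attacker replays captured session tokens",
        mitigation="Short-lived tokens bound to the client session",
        status="mitigated",
    ),
]

open_threats = [t for t in threats if t.status == "open"]
print(f"{len(open_threats)} unmitigated threat(s)")
```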
Implementation Phase
Secure implementation requires both tooling and developer enablement:

| Tool Category | Purpose | Integration Point | Action on Finding |
|---|---|---|---|
| Secure frameworks (“paved roads”) | Secure-by-default libraries | Development | Prevention |
| Security linters | Catch common mistakes | Pre-commit, CI | Warning/Block |
| SAST | Find code vulnerabilities | CI pipeline | Block on high severity |
| SCA | Identify vulnerable dependencies | CI pipeline | Block on critical CVEs |
- Paved roads: Provide secure-by-default frameworks that make the secure path the easy path
- Security linters: Run on every commit using tools like Semgrep, ESLint security plugins, or Bandit for Python
- Static Application Security Testing (SAST): Tools like SonarQube, Checkmarx, or CodeQL should block builds on high-severity findings
- Software Composition Analysis (SCA): Use Snyk, Dependabot, or OWASP Dependency-Check to identify vulnerable dependencies
- Security-focused code review: Train reviewers on common vulnerability patterns and enforce security checklists via PR templates
- Pre-commit hooks: Enable local security checks for fast feedback before code leaves the developer machine (a minimal hook sketch follows this list)
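A minimal sketch of such a hook, written in Python: it scans staged files for a couple of illustrative secret patterns and blocks the commit on a match. Dedicated tools like gitleaks or detect-secrets are far more robust in practice:

```python
#!/usr/bin/env python3
# Minimal pre-commit hook sketch: block commits that contain obvious
# hardcoded secrets. The patterns are illustrative, not exhaustive.
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)(api_key|password|secret)\s*=\s*['\"][^'\"]+['\"]"),
]


def staged_files() -> list[str]:
    # Files added/copied/modified in the index.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()


def main() -> int:
    findings = []
    for path in staged_files():
        try:
            with open(path, encoding="utf-8", errors="ignore") as f:
                text = f.read()
        except OSError:
            continue  # unreadable paths are skipped
        for pattern in SECRET_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path}: matches /{pattern.pattern}/")
    if findings:
        print("Potential secrets detected; commit blocked:")
        print("\n".join(findings))
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Install it by copying to `.git/hooks/pre-commit` and marking it executable, or wire the equivalent check through the pre-commit framework.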
Verification Phase
Verification validates that security controls function as designed:
- Dynamic Application Security Testing (DAST): Tools like OWASP ZAP or Burp Suite test running applications for runtime vulnerabilities
- Interactive Application Security Testing (IAST): Combines SAST and DAST approaches with runtime instrumentation for improved accuracy and context
- Fuzzing: Use AFL, libFuzzer, or OSS-Fuzz to discover edge cases; prioritize coverage of parsers and input handlers
- Penetration testing: Engage skilled testers to simulate real attack scenarios, following methodologies like OWASP Testing Guide or PTES
- Policy enforcement: Validate compliance with security policies; violations should block deployment
- Negative testing: Verify security controls work by testing that unauthorized actions are properly denied (see the sketch after this list)
- Security acceptance tests: Automate verification of security requirements as executable specifications
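The negative-testing bullet above can be made concrete with a short pytest sketch against a hypothetical staging deployment; the base URL, endpoint, and token values are assumptions to adapt:

```python
# Minimal negative-test sketch: a protected endpoint must deny both
# unauthenticated and under-privileged requests. BASE_URL, the endpoint,
# and the bearer token are hypothetical placeholders.
import requests

BASE_URL = "https://staging.example.test"  # hypothetical test environment


def test_admin_endpoint_rejects_anonymous_requests():
    resp = requests.get(f"{BASE_URL}/admin/users", timeout=10)
    assert resp.status_code in (401, 403)


def test_admin_endpoint_rejects_non_admin_token():
    headers = {"Authorization": "Bearer token-for-regular-user"}  # hypothetical
    resp = requests.get(f"{BASE_URL}/admin/users", headers=headers, timeout=10)
    assert resp.status_code == 403
```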
Release Phase
Secure release practices protect the software supply chain; a small verification sketch follows the list:
- Artifact signing: Sign all release artifacts using Sigstore or GPG to prove authenticity
- Provenance verification: Verify artifacts originate from trusted builds using SLSA framework attestations
- Security-inclusive change management: Require security review as part of change approval processes
- Tested rollback plans: Validate rollback procedures before release to enable rapid recovery
- Gradual deployment: Use canary releases or progressive rollouts with monitoring to limit blast radius
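As a simplified stand-in for full signature and provenance checks (Sigstore, GPG, SLSA attestations), the sketch below verifies an artifact's SHA-256 digest against a value recorded at build time:

```python
# Minimal sketch: verify a release artifact's SHA-256 digest against an
# expected value recorded at build time. This is a simplified stand-in for
# full signature/provenance verification with tools like cosign or GPG.
import hashlib
import sys


def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(path: str, expected_digest: str) -> None:
    actual = sha256_of(path)
    if actual != expected_digest:
        sys.exit(f"DIGEST MISMATCH for {path}: {actual} != {expected_digest}")
    print(f"{path}: digest verified")


if __name__ == "__main__":
    # Usage: python verify_artifact.py app.tar.gz <expected-sha256>
    verify(sys.argv[1], sys.argv[2])
```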
Operations Phase
Operational security activities complete the feedback loop; an SLA-tracking sketch follows the list:
- Security event detection: Implement logging and monitoring capable of detecting security events and anomalies
- Incident response readiness: Conduct regular drills to validate incident response procedures
- Vulnerability management: Track and remediate vulnerabilities according to severity-based SLAs (see NIST vulnerability management guidance)
- Post-incident learning: Incorporate lessons learned into processes, requirements, and tests to prevent recurrence
- Security metrics reporting: Track and report metrics demonstrating program effectiveness to stakeholders
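A minimal sketch of severity-based SLA tracking; the SLA windows and finding records are illustrative placeholders to replace with your organization's policy and data source:

```python
# Minimal sketch of severity-based remediation SLA tracking.
# The SLA windows and the inline findings are illustrative.
from datetime import date, timedelta

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

findings = [  # hypothetical open vulnerability records
    {"id": "VULN-101", "severity": "critical", "discovered": date(2024, 5, 1)},
    {"id": "VULN-102", "severity": "medium", "discovered": date(2024, 3, 15)},
]

today = date.today()
for f in findings:
    deadline = f["discovered"] + timedelta(days=SLA_DAYS[f["severity"]])
    status = "OVERDUE" if today > deadline else "within SLA"
    print(f"{f['id']} ({f['severity']}): due {deadline} - {status}")
```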
Security Gates
Security gates define mandatory checkpoints where specific criteria must be met before proceeding to the next phase. Well-designed gates balance rigor with development velocity.
Gate Design Principles
- Objective and measurable criteria: Gate criteria should be automatable where possible, removing subjectivity and enabling consistent enforcement
- Blocking on failure: Gate failures must block progression to ensure issues are addressed before they propagate downstream
- Clear escalation paths: Define how exceptions are handled when legitimate business needs conflict with gate criteria
Common Security Gates
| Gate | Timing | Key Criteria | Prevents |
|---|---|---|---|
| Requirements | Before design | Security requirements defined with acceptance criteria | Building insecure features |
| Design | Before implementation | Architecture reviewed, threat model complete | Architectural security flaws |
| Code | Before merge | SAST clean, SCA clean, security review complete | Vulnerable code merging |
| Test | Before staging | Security tests pass, penetration test complete | Untested security controls |
| Release | Before production | Artifacts signed, final scans clean, approvals obtained | Vulnerable releases |
Gate Automation
Maximize gate automation to ensure consistency and reduce friction; a CI gate sketch follows the list:
- Automate objective checks: SAST, SCA, policy validation, and test execution should run automatically in CI/CD pipelines
- Define clear manual criteria: For gates requiring human judgment, document specific criteria and designate qualified approvers
- Track gate metrics: Monitor pass/fail rates, time-to-pass, and retry frequency to identify systemic issues
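A minimal sketch of an automated gate script a CI job could run: it aggregates scanner findings from a JSON report (a format assumed here for illustration) and exits nonzero to fail the pipeline when blocking criteria are met:

```python
# Minimal sketch of an automated code gate: aggregate scanner results and
# fail the pipeline when blocking criteria are met. The report format is
# hypothetical; adapt the parsing to what your SAST/SCA tools actually emit.
import json
import sys


def gate(results_path: str) -> int:
    with open(results_path) as f:
        findings = json.load(f)  # e.g. [{"tool": "sast", "severity": "high"}, ...]

    blocking = [
        fnd for fnd in findings
        if (fnd["tool"] == "sast" and fnd["severity"] in ("critical", "high"))
        or (fnd["tool"] == "sca" and fnd["severity"] == "critical")
    ]
    for fnd in blocking:
        print(f"BLOCKING: {fnd}")
    return 1 if blocking else 0  # nonzero exit fails the CI job


if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```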
Policy-as-Code and Test-Driven Security
Encoding security requirements as code enables automation, version control, and consistent enforcement across environments.
Policy-as-Code Implementation
Security policies expressed as code can be validated, tested, and enforced automatically; a policy-test sketch follows the list:
- Policy engines: Use Open Policy Agent (OPA) with Rego or Cedar for declarative policy definition
- Multi-stage enforcement: Enforce policies in CI/CD (pre-deployment) and at runtime (admission controllers, service mesh) for defense-in-depth
- Blocking violations: Policy violations should block merges and deployments, not just generate warnings
- Policy testing: Policies are code—test them with unit tests covering expected allow/deny scenarios
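To illustrate the policy-testing bullet, here is a plain-Python stand-in for a deployment policy plus unit tests covering allow and deny cases. A real deployment would express the policy in an engine such as OPA/Rego and test it with that engine's tooling; the rules below are illustrative:

```python
# Minimal sketch of "policies are code - test them": a toy deployment
# policy with allow/deny unit tests. A real policy would live in a policy
# engine (e.g., OPA/Rego); this plain-Python version is a stand-in.
def allow_deployment(manifest: dict) -> bool:
    """Deny containers that run as root or use unpinned 'latest' images."""
    for c in manifest.get("containers", []):
        if c.get("run_as_root", False):
            return False
        if c.get("image", "").endswith(":latest"):
            return False
    return True


def test_denies_root_containers():
    assert not allow_deployment(
        {"containers": [{"image": "app:1.2", "run_as_root": True}]}
    )


def test_denies_latest_tags():
    assert not allow_deployment({"containers": [{"image": "app:latest"}]})


def test_allows_compliant_manifest():
    assert allow_deployment({"containers": [{"image": "app:1.2.3"}]})
```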
Security Acceptance Tests
Transform security requirements into executable specifications:
- Codified requirements: Each security requirement should map to one or more automated tests
- CI/CD integration: Run security tests on every commit for fast feedback
- Coverage measurement: Track what percentage of security requirements have corresponding tests
- Continuous verification: Security tests should run not just at release, but continuously against production-like environments
Abuse Case Regression Tests
Prevent security regressions by converting abuse cases into permanent test fixtures; an example regression test follows the list:
- Regression test conversion: Each documented abuse case should become a regression test that verifies the attack is prevented
- Mitigation validation: Tests should verify that identified mitigations actually prevent the abuse scenario
- Ongoing maintenance: Maintain abuse case tests as the system evolves to ensure continued protection
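A minimal sketch of one such regression test: a documented SQL-injection abuse case pinned as a permanent pytest fixture, with a parameterized query as the mitigation under test. The schema and function names are illustrative:

```python
# Minimal sketch: a documented abuse case ("attacker injects SQL via the
# search box") converted into a permanent regression test. The table and
# query function are illustrative stand-ins for real application code.
import sqlite3


def search_users(conn: sqlite3.Connection, name: str):
    # Mitigation under test: parameterized query, not string concatenation.
    return conn.execute(
        "SELECT username FROM users WHERE username = ?", (name,)
    ).fetchall()


def test_abuse_case_sql_injection_is_neutralized():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

    payload = "alice' OR '1'='1"  # classic injection attempt
    rows = search_users(conn, payload)
    # The payload must be treated as a literal value and match nothing.
    assert rows == []
```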
Exception and Risk Acceptance Management
Not every security finding can be immediately remediated. A formal exception process ensures risks are explicitly accepted by appropriate stakeholders.
Exception Process Requirements
| Exception Severity | Required Approver | Max Duration | Review Frequency |
|---|---|---|---|
| Critical | CISO or delegate | 7 days | Daily |
| High | Security lead + Engineering director | 30 days | Weekly |
| Medium | Security team member | 90 days | Monthly |
| Low | Engineering lead | 180 days | Quarterly |
- Formal approval workflow: Exceptions require documented approval from stakeholders with appropriate authority
- Impact-based thresholds: Higher-impact exceptions require higher-level approval
- Designated approvers: Pre-identify who can approve exceptions at each threshold level
- Documented rationale: Record the business justification and risk analysis enabling future review
Compensating Controls
When accepting risk through exceptions, compensating controls reduce exposure:
- Mandatory compensating controls: Each exception should identify controls that reduce the risk to an acceptable level
- Documentation and verification: Document what compensating controls are in place and verify they function as intended
- Continuous monitoring: Monitor compensating controls to detect failures that would expose the accepted risk
Exception Expiration
Prevent exceptions from becoming permanent by enforcing time limits; a flagging sketch follows the list:
- Mandatory expiration dates: All exceptions must have defined expiration dates forcing periodic review
- Re-approval requirements: Renewal requires fresh approval, not automatic extension
- Automated flagging: Systems should automatically flag expired exceptions and alert responsible parties
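A minimal sketch of the flagging logic; in practice the exception records would come from a GRC tool or ticketing system rather than being inline, and the alert would open a ticket or notify the owner:

```python
# Minimal sketch of automated exception-expiration flagging.
# The inline records are illustrative placeholders.
from datetime import date

exceptions = [  # hypothetical risk-acceptance records
    {"id": "EXC-7", "severity": "high", "owner": "team-payments",
     "expires": date(2024, 6, 1)},
    {"id": "EXC-9", "severity": "low", "owner": "team-web",
     "expires": date(2025, 1, 15)},
]

expired = [e for e in exceptions if e["expires"] < date.today()]
for e in expired:
    # In a real system this would open a ticket or page the owner.
    print(f"EXPIRED: {e['id']} ({e['severity']}) owned by {e['owner']}")
```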
Secure SDLC Metrics
Metrics enable data-driven security program management. Track these key indicators to measure secure SDLC effectiveness.
Key Metrics Dashboard
| Metric Category | Key Measures | Target Trend | Alert Threshold |
|---|---|---|---|
| Security Test Coverage | % of security requirements with tests | Increasing | Below 80% |
| Gate Pass Rate | % of attempts passing on first try | Increasing | Below 70% |
| Defect Escape Rate | Vulnerabilities found in production | Decreasing | Above baseline |
| MTTR | Days from discovery to remediation | Decreasing | Exceeds SLA |
| Security Debt | Count of known unfixed vulnerabilities | Stable or decreasing | Increasing trend |
Security Test Coverage
- Coverage measurement: Track what percentage of documented security requirements have corresponding automated tests (a measurement sketch follows this list)
- Type-specific tracking: Break down coverage by requirement type (authentication, authorization, encryption, etc.) to identify gaps
- Trend monitoring: Coverage should increase over time as the security test suite matures
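A minimal sketch of per-type coverage measurement, assuming a hand-maintained traceability mapping from requirement IDs to tests; in practice the mapping might be derived from test markers or a traceability matrix:

```python
# Minimal sketch of security-test coverage measurement by requirement type.
# The requirement records and test mappings are illustrative.
from collections import defaultdict

requirements = [  # hypothetical security requirements
    {"id": "SEC-001", "type": "authentication", "tests": ["test_lockout"]},
    {"id": "SEC-002", "type": "authorization", "tests": []},
    {"id": "SEC-003", "type": "encryption", "tests": ["test_tls_only"]},
]

by_type: dict[str, list[bool]] = defaultdict(list)
for r in requirements:
    by_type[r["type"]].append(bool(r["tests"]))

for rtype, covered in sorted(by_type.items()):
    pct = 100 * sum(covered) / len(covered)
    print(f"{rtype}: {pct:.0f}% covered ({sum(covered)}/{len(covered)})")
```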
Gate Pass/Fail Rates
- First-pass rate: Measures development team security awareness; low rates indicate training needs
- Failure analysis: Categorize failures by root cause to identify systemic issues worth addressing
- Retry rate: High retry rates suggest unclear gate criteria or tooling issues
Defect Escape Rate
Defect escape rate is the ultimate measure of secure SDLC effectiveness:
- Production vulnerability tracking: Count vulnerabilities discovered in production that should have been caught earlier
- Phase attribution: Identify which phase should have caught each escaped defect to target process improvements
- Trend analysis: Escape rate should decrease over time as processes mature
Mean Time to Remediate (MTTR)
- Severity-stratified measurement: Track MTTR separately for critical, high, medium, and low severity findings
- SLA compliance: Compare actual MTTR against defined SLAs to measure operational effectiveness
- Trend analysis: Increasing MTTR trends indicate capacity or process issues requiring attention
Security Debt
- Debt quantification: Track known vulnerabilities awaiting remediation, weighted by severity and age (a scoring sketch follows this list)
- Remediation planning: Each debt item should have an assigned owner and remediation plan
- Sustainability monitoring: Consistently increasing security debt indicates unsustainable development pace
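A minimal sketch of one possible severity- and age-weighted debt score; the weights are arbitrary illustrations, and the useful signal is the trend rather than the absolute number:

```python
# Minimal sketch of a severity- and age-weighted security-debt score.
# Weights and records are illustrative; track the trend, not the value.
from datetime import date

SEVERITY_WEIGHT = {"critical": 10, "high": 5, "medium": 2, "low": 1}

debt = [  # hypothetical open vulnerabilities
    {"id": "VULN-201", "severity": "high", "opened": date(2024, 2, 1)},
    {"id": "VULN-202", "severity": "low", "opened": date(2023, 11, 20)},
]


def debt_score(items, as_of=None) -> float:
    as_of = as_of or date.today()
    total = 0.0
    for item in items:
        age_days = (as_of - item["opened"]).days
        # Older items weigh more: +10% per 30 days open.
        total += SEVERITY_WEIGHT[item["severity"]] * (1 + age_days / 300)
    return total


print(f"Security debt score: {debt_score(debt):.1f}")
```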
Secure SDLC Maturity Model
Organizations progress through maturity levels as secure SDLC practices become embedded. Use this model, based on OWASP SAMM and BSIMM, to assess and improve your security posture.

| Maturity Level | Characteristics | Process Quality | Typical Metrics |
|---|---|---|---|
| Level 1: Ad Hoc | No defined process, reactive security, inconsistent activities | Unpredictable | No formal tracking |
| Level 2: Defined | Documented processes, repeatable activities, basic tooling | Consistent | Pass/fail rates |
| Level 3: Measured | Security metrics tracked, data-driven decisions, KPIs defined | Quantified | Full dashboard |
| Level 4: Optimized | Continuous improvement, high automation, proactive security | Optimizing | Leading indicators |
Progression Indicators
Moving between maturity levels requires specific capabilities:
- Ad Hoc → Defined: Document security activities, implement basic security gates, train development teams
- Defined → Measured: Implement metrics collection, establish baselines, create security dashboards
- Measured → Optimized: Use data to drive improvement, maximize automation, implement predictive analytics
Conclusion
A secure software development lifecycle embeds security into every development phase through structured activities in requirements, design, implementation, verification, release, and operations. Security engineers encode security as tests and policies, implement automated security gates, and measure program effectiveness through key metrics.
Key success factors:
- Security activities integrated into each SDLC phase with clear ownership
- Automated security gates with objective, measurable criteria
- Policy-as-code and test-driven security for consistent enforcement
- Formal exception management with compensating controls and expiration dates
- Metrics tracking coverage, gate performance, escape rates, and remediation times
References
- BSIMM (Building Security In Maturity Model) - Industry benchmark for software security initiatives
- OWASP SAMM (Software Assurance Maturity Model) - Open framework for building security into development
- NIST SP 800-218: Secure Software Development Framework (SSDF) - Federal guidance on integrating security into the SDLC (supersedes the withdrawn SP 800-64)
- Microsoft Security Development Lifecycle (SDL) - Microsoft’s security development practices
- SAFECode Fundamental Practices for Secure Software Development - Industry consortium guidance
- CWE Top 25 Most Dangerous Software Weaknesses - Common vulnerability reference for requirements and testing