
Cloud Penetration Testing Across AWS, Azure, and GCP: What It Actually Covers and Why Traditional Pentesting Is Not Enough

Lorikeet Security Team · April 9, 2026 · 52 min read

TL;DR: Traditional penetration testing was designed for networks with clear boundaries — firewalls, servers, endpoints. Cloud environments have none of that. They are API-driven, identity-centric, and composed of hundreds of interconnected services where a single misconfigured IAM policy can expose more data than a compromised domain controller ever could. Cloud penetration testing evaluates provider-specific attack surfaces across AWS, Azure, and GCP — targeting IAM misconfigurations, storage exposure, metadata service abuse, serverless injection, privilege escalation paths, and lateral movement techniques that automated scanners cannot reliably detect. This guide breaks down exactly what a cloud pentest covers, how each major provider differs, and why your organization almost certainly has cloud misconfigurations right now.

Why Cloud Environments Need Specialized Penetration Testing

The migration to cloud infrastructure has fundamentally altered the attack surface that organizations must defend. In a traditional on-premises environment, the security perimeter is relatively well-defined: firewalls separate internal networks from the internet, servers sit in data centers with physical access controls, and the network topology is stable enough that a penetration tester can map it, probe it, and report on it using well-established methodologies. Cloud environments break every one of those assumptions.

In AWS, Azure, and GCP, infrastructure is provisioned through APIs. Resources are ephemeral — an EC2 instance might exist for hours, a Lambda function for milliseconds. Network boundaries are defined by security groups and virtual private clouds that can be modified programmatically by anyone with the right IAM permissions. Storage is accessible via URLs. Identity is the new perimeter, and identity in the cloud is staggeringly complex. AWS alone has over 17,000 distinct IAM actions across its services. Azure has thousands of RBAC permissions. GCP has its own parallel universe of IAM bindings and service accounts.

Traditional penetration testing methodologies — PTES, OWASP, NIST SP 800-115 — were designed for a world where you scan IP ranges, find open ports, exploit vulnerable services, and pivot through networks. They do not adequately address the question of whether your AssumeRole trust policy allows any AWS account to assume a privileged role, or whether your Azure service principal has Owner permissions on every subscription, or whether your GCP service account keys were committed to a public repository three years ago and never rotated.

This is not a theoretical problem. The Verizon Data Breach Investigations Report has consistently shown that cloud misconfiguration is one of the fastest-growing categories of data breaches. Capital One's 2019 breach, which exposed over 100 million customer records, began with a server-side request forgery (SSRF) vulnerability in a misconfigured web application firewall running on EC2. The SSRF reached the instance metadata service, which returned temporary credentials for an overprivileged IAM role with access to S3 buckets containing sensitive data. The attack exploited no missing patches and no known CVEs; it was entirely a configuration and architecture issue. A traditional pentest looking for SQL injection and cross-site scripting would never have found it.

Cloud penetration testing exists to fill this gap. It is a specialized discipline that combines deep knowledge of cloud provider APIs, IAM systems, service interactions, and provider-specific attack techniques with the adversarial mindset of traditional penetration testing. It answers the question that keeps CISOs awake at night: if an attacker gained initial access to our cloud environment — through a compromised developer credential, a vulnerable application, or a misconfigured service — how far could they go, and what could they reach?


The Shared Responsibility Model and Where Organizations Fail

Every major cloud provider operates under a shared responsibility model. The concept is straightforward: the cloud provider is responsible for security of the cloud (the physical infrastructure, hypervisors, network fabric, and managed service internals), while the customer is responsible for security in the cloud (their configurations, data, identity management, network controls, and application code). In practice, this delineation creates a massive gray area where organizations consistently fail.

The shared responsibility model shifts depending on the service tier. With Infrastructure as a Service (IaaS) — EC2 instances, Azure VMs, GCE instances — the customer is responsible for everything from the operating system up: patching, hardening, network configuration, identity management, and application security. With Platform as a Service (PaaS) — RDS, Azure App Service, Cloud SQL — the provider handles the OS and runtime, but the customer still owns data classification, access controls, and application-level configuration. With Software as a Service (SaaS) — Office 365, Google Workspace — the customer's responsibility narrows to identity management, data governance, and configuration settings. But "narrowing" does not mean "trivial." Misconfigured SaaS settings have led to some of the most significant data exposures in recent years.

Where organizations fail is almost always at the boundaries. They assume that because AWS "manages" an RDS instance, the database is secure. It is not, if the security group allows 0.0.0.0/0 on port 3306 and the master password is admin123. They assume that because Azure "manages" Key Vault, their secrets are protected. They are not, if every service principal in the tenant has Get and List permissions on every secret. They assume that because GCP "manages" Cloud Storage, their data is safe. It is not, if the bucket's IAM binding grants allUsers the storage.objectViewer role.

The most common failures we observe in cloud penetration testing engagements fall into predictable categories. First, identity overprivilege: organizations grant broad permissions during development ("just give it admin so it works") and never scope them down. Second, network overexposure: security groups and firewall rules that are far more permissive than necessary, often because the original developer left and nobody understands why port 8080 is open to the internet. Third, secrets mismanagement: API keys, database credentials, and service account keys stored in environment variables, code repositories, or configuration files instead of dedicated secrets management services. Fourth, logging and monitoring gaps: CloudTrail disabled in secondary regions, Azure Activity Log not forwarded to a SIEM, GCP audit logs not retained beyond 30 days. Fifth, encryption negligence: data at rest using provider-managed keys (or no encryption at all) instead of customer-managed keys, and data in transit over unencrypted channels between internal services.

A cloud penetration test specifically evaluates each of these failure modes. It validates whether the shared responsibility model is being upheld on the customer's side, and it does so from an attacker's perspective — not by checking boxes on a compliance framework, but by actively attempting to exploit misconfigurations to demonstrate real business impact.


AWS-Specific Attack Surfaces

Amazon Web Services is the most widely adopted cloud provider and, consequently, the one with the broadest and most thoroughly researched attack surface. AWS penetration testing must account for the sheer breadth of services (over 200 at last count) and the intricate ways they interact through IAM, networking, and service-level integrations. Below are the primary attack surfaces we target in every AWS cloud penetration test.

IAM Misconfigurations and Privilege Escalation

AWS Identity and Access Management (IAM) is the foundational control plane for every AWS account. Every API call — whether it creates an EC2 instance, reads from an S3 bucket, or invokes a Lambda function — is authorized through IAM. This makes IAM the single most critical attack surface in any AWS environment, and it is the one most frequently misconfigured.

IAM policies in AWS are JSON documents that specify which principals (users, roles, groups) can perform which actions on which resources under which conditions. The expressiveness of this policy language is both its strength and its weakness. A single wildcard in the wrong place can grant far more access than intended. Consider a policy that grants s3:* on Resource: "*" — this grants full S3 access to every bucket in the account, including buckets that might contain database backups, application logs with PII, or infrastructure-as-code templates with embedded credentials.
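
To make the wildcard risk concrete, here is a minimal sketch of the kind of offline check a tester might run over a policy document. The helper name and the sample policy are illustrative, not part of any AWS SDK; the policy shape follows the standard AWS policy JSON grammar.

```python
# Hypothetical helper: flag Allow statements that pair a wildcard action
# with Resource "*" -- the pattern described above (s3:* on all resources).

def find_wildcard_statements(policy: dict) -> list[dict]:
    """Return Allow statements whose Action contains '*' and whose Resource is '*'."""
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may appear as a bare object
        statements = [statements]
    findings = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if any("*" in a for a in actions) and any(r == "*" for r in resources):
            findings.append(stmt)
    return findings

# Illustrative policy: full S3 access on every bucket in the account.
risky_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
}
print(find_wildcard_statements(risky_policy))  # the s3:* statement is flagged
```

In a real engagement this logic runs over policies pulled via the IAM API; the point is that the check is a pure document analysis, not a network probe.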

Privilege escalation through IAM is a well-documented attack technique. Rhino Security Labs identified over 20 distinct privilege escalation paths in AWS IAM, and the number has grown since. The core principle is simple: if a user or role has the ability to modify IAM policies, create new roles, or attach policies, they can grant themselves additional permissions — potentially up to full administrator access. For example, a user with iam:CreatePolicyVersion permission can create a new version of an existing policy with expanded permissions and set it as the default. A user with iam:AttachUserPolicy can attach the AdministratorAccess managed policy to their own user. A user with iam:PassRole and lambda:CreateFunction can create a Lambda function that executes with a more privileged role, effectively borrowing that role's permissions.
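
The escalation paths above can be checked mechanically: given the set of actions a principal is allowed, look for known dangerous combinations. The sketch below encodes only the three examples from this section (the real catalogs cover twenty-plus paths), and the combo names are our own labels.

```python
# A small, non-exhaustive map of permission sets that enable IAM privilege
# escalation, based on the examples discussed above.
ESCALATION_COMBOS = {
    "CreatePolicyVersion": {"iam:CreatePolicyVersion"},
    "AttachUserPolicy": {"iam:AttachUserPolicy"},
    "PassRole+Lambda": {"iam:PassRole", "lambda:CreateFunction", "lambda:InvokeFunction"},
}

def escalation_paths(allowed_actions: set[str]) -> list[str]:
    """Return the names of escalation combos fully contained in the allowed set."""
    return [name for name, combo in ESCALATION_COMBOS.items() if combo <= allowed_actions]

# A principal that can pass roles and create/invoke Lambda functions
# matches the Lambda-based escalation path.
allowed = {"iam:PassRole", "lambda:CreateFunction", "lambda:InvokeFunction", "s3:GetObject"}
print(escalation_paths(allowed))  # ['PassRole+Lambda']
```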

During a cloud pentest, we systematically enumerate all IAM users, roles, groups, and policies. We map the effective permissions for each principal, identify overprivileged accounts, and test known privilege escalation paths. We look for users with programmatic access keys that have not been rotated, roles with trust policies that are too permissive, and inline policies that grant unnecessary permissions. We check for the use of iam:PassRole with wildcards, which allows a principal to pass any role to any service — a common escalation vector. We examine cross-account role assumptions to determine whether external accounts can assume roles in the target environment. And we validate whether multi-factor authentication is enforced for privileged operations, particularly for the root account.

S3 Bucket Policies and Object-Level Access

Amazon S3 remains one of the most commonly misconfigured AWS services. Despite years of publicity around S3 data breaches and Amazon's introduction of S3 Block Public Access as an account-level setting, we still find publicly accessible buckets in a significant percentage of engagements. The reasons are varied: legacy buckets created before Block Public Access existed, buckets in accounts where the account-level setting was never enabled, and buckets with complex policies that inadvertently grant more access than intended.

S3 access control is governed by multiple overlapping mechanisms: bucket policies, IAM policies, access control lists (ACLs), S3 Block Public Access settings, and S3 access points. The interaction between these mechanisms is not straightforward. A bucket policy might deny public access, but an ACL on a specific object might grant it. An IAM policy might restrict a user's S3 access, but a bucket policy with a broad principal might override that restriction. During a pentest, we evaluate all layers of S3 access control for every bucket we can discover.

Beyond public access, we look for buckets that are accessible to any authenticated AWS user (the AuthenticatedUsers group), buckets with cross-account access policies, and buckets that store sensitive data without server-side encryption. We check whether bucket versioning is enabled (which preserves previous versions of objects, potentially including sensitive data that was "deleted"), whether access logging is enabled, and whether the bucket has any event notifications configured that might reveal the bucket's purpose to an attacker. We also test for S3 bucket takeover vulnerabilities, where a bucket name referenced by DNS (such as a CNAME pointing to bucket.s3.amazonaws.com) no longer exists and can be claimed by an attacker.
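
The ACL layer of this evaluation can be sketched as a simple filter over a bucket's grant list. The grant shape below mirrors the S3 GetBucketAcl response, but the sample ACL is fabricated for illustration.

```python
# The two S3 "global" grantee groups: anyone on the internet, and any
# authenticated AWS user (the AuthenticatedUsers group discussed above).
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl: dict) -> list[dict]:
    """Return ACL grants made to the AllUsers or AuthenticatedUsers groups."""
    return [
        g for g in acl.get("Grants", [])
        if g.get("Grantee", {}).get("Type") == "Group"
        and g["Grantee"].get("URI") in PUBLIC_GROUPS
    ]

sample_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "abc"}, "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]
}
print(len(public_grants(sample_acl)))  # 1 public grant found
```

Remember that this is only one of the overlapping layers; a clean ACL says nothing about the bucket policy or access points.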

EC2 Instance Metadata Service (IMDS) Exploitation

The EC2 Instance Metadata Service is a local HTTP endpoint (http://169.254.169.254) accessible from within any EC2 instance. It provides instance-level information including the instance ID, availability zone, security group membership, and — critically — temporary IAM credentials for any IAM role attached to the instance. These credentials are valid for several hours and grant whatever permissions the instance role has.

The original metadata service (IMDSv1) responds to simple HTTP GET requests with no authentication. This design made it trivially exploitable through server-side request forgery (SSRF) vulnerabilities in applications running on EC2 instances. An attacker who discovers an SSRF vulnerability in a web application hosted on EC2 can direct the application to make requests to http://169.254.169.254/latest/meta-data/iam/security-credentials/, retrieve the role name, and then request http://169.254.169.254/latest/meta-data/iam/security-credentials/{role-name} to obtain temporary AccessKeyId, SecretAccessKey, and Token values. These credentials can then be used from outside the instance to interact with AWS APIs as the instance's role.

AWS introduced IMDSv2 to mitigate this attack. IMDSv2 requires a PUT request to obtain a session token, which must then be included as an X-aws-ec2-metadata-token header in subsequent GET requests. Because most SSRF vulnerabilities only allow GET requests (or do not allow custom headers), IMDSv2 makes SSRF-based metadata theft significantly more difficult. However, IMDSv2 enforcement is a per-instance launch setting rather than an account-wide default, and many organizations have not migrated existing instances. During a cloud pentest, we check whether IMDSv2 is enforced (HttpTokens: required) on all instances, test for SSRF vulnerabilities in hosted applications, and validate that instance roles follow the principle of least privilege.
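
The HttpTokens check can be expressed as a filter over instance descriptions. The data shape below mirrors the EC2 DescribeInstances response (Reservations containing Instances with MetadataOptions), but the sample instances are fabricated so the sketch runs offline.

```python
def instances_without_imdsv2(reservations: list[dict]) -> list[str]:
    """Return IDs of instances where HttpTokens is not 'required' (IMDSv1 still usable)."""
    flagged = []
    for res in reservations:
        for inst in res.get("Instances", []):
            opts = inst.get("MetadataOptions", {})
            if opts.get("HttpTokens") != "required":
                flagged.append(inst["InstanceId"])
    return flagged

sample = [{"Instances": [
    {"InstanceId": "i-aaa", "MetadataOptions": {"HttpTokens": "required"}},
    {"InstanceId": "i-bbb", "MetadataOptions": {"HttpTokens": "optional"}},
]}]
print(instances_without_imdsv2(sample))  # ['i-bbb']
```

Every instance in the flagged list will hand temporary role credentials to any SSRF that can issue a plain GET to 169.254.169.254.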

Lambda Function Injection and Serverless Attack Surface

AWS Lambda functions execute code in response to events — API Gateway requests, S3 object uploads, SQS messages, DynamoDB stream changes, and dozens of other triggers. Because Lambda abstracts away the underlying infrastructure, organizations often treat Lambda functions as inherently secure. They are not. Lambda functions are code, and code has vulnerabilities.

The primary attack vectors against Lambda functions include: event data injection, where untrusted input from the event source is used unsafely in the function's code (for example, inserting user-supplied values directly into SQL queries or OS commands); dependency vulnerabilities, where the Lambda deployment package includes libraries with known CVEs; environment variable exposure, where sensitive credentials are stored as Lambda environment variables and accessible to anyone with lambda:GetFunctionConfiguration permission; and overprivileged execution roles, where the Lambda function's IAM role has far more permissions than the function actually needs.

We test Lambda functions by reviewing their triggers (to understand what untrusted input they receive), examining their code and dependencies (if source code access is provided), checking their environment variables for secrets, analyzing their execution roles for overprivilege, and testing for injection vulnerabilities through their event sources. We also look for Lambda functions that execute in a VPC with access to internal resources — these functions can potentially be used as pivot points for lateral movement within the VPC's network.
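
The environment-variable review lends itself to a simple heuristic scan. The configuration shape below is simplified from a Lambda function-configuration listing, the variable names are fabricated, and the keyword list is deliberately crude; in practice this is a triage step, not a verdict.

```python
# Substrings that commonly appear in variable names holding secrets.
SUSPECT = ("SECRET", "PASSWORD", "TOKEN", "KEY")

def suspicious_env_vars(fn_config: dict) -> list[str]:
    """Return environment variable names that look like they hold credentials."""
    env = fn_config.get("Environment", {}).get("Variables", {})
    return [name for name in env if any(s in name.upper() for s in SUSPECT)]

cfg = {
    "FunctionName": "billing-export",  # fabricated example function
    "Environment": {"Variables": {"DB_PASSWORD": "hunter2", "REGION": "us-east-1"}},
}
print(suspicious_env_vars(cfg))  # ['DB_PASSWORD']
```

Anything flagged here is readable by every principal with lambda:GetFunctionConfiguration, which is the exposure described above.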

STS Token Abuse and Credential Chaining

The AWS Security Token Service (STS) is the mechanism that enables temporary credential issuance, cross-account role assumption, and identity federation. STS is what makes the AssumeRole, AssumeRoleWithSAML, and AssumeRoleWithWebIdentity API calls possible. It is also the mechanism that attackers abuse for credential chaining — using one set of compromised credentials to obtain additional credentials with different (and potentially broader) permissions.

The most common STS abuse pattern involves role chaining: an attacker compromises initial credentials (perhaps from a leaked .env file or a compromised CI/CD pipeline), uses those credentials to call sts:AssumeRole on a role with a permissive trust policy, and then uses the assumed role's permissions to escalate further. This can chain through multiple roles, potentially across multiple AWS accounts if cross-account trust is configured. Each hop in the chain might grant progressively broader access, and the credential rotation that happens at each step can make forensic analysis more difficult.

During a pentest, we map all role trust policies to identify which principals can assume which roles, including cross-account trust. We test whether external AWS accounts (including our own attacker accounts) can assume any roles. We check for the sts:AssumeRole permission with wildcard resources, which would allow a principal to assume any role in the account. We also examine SAML and OIDC federation configurations for weaknesses that could allow an attacker to forge assertions or tokens and assume federated roles.
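
Mapping trust policies is, again, document analysis. The sketch below flags wildcard principals and principals from accounts outside a known allow-list; the trust-policy shape follows the standard AssumeRolePolicyDocument grammar, and the account IDs are fabricated.

```python
def risky_trust_principals(trust_policy: dict, known_accounts: set[str]) -> list[str]:
    """Return trust-policy principals that are wildcards or unknown AWS accounts."""
    risky = []
    for stmt in trust_policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal", {})
        if principal == "*":  # anyone at all may assume this role
            risky.append("*")
            continue
        aws = principal.get("AWS", [])
        aws = [aws] if isinstance(aws, str) else aws
        for arn in aws:
            if arn == "*":
                risky.append(arn)
            elif arn.startswith("arn:aws:iam::"):
                account = arn.split(":")[4]  # account ID is the 5th colon-delimited field
                if account not in known_accounts:
                    risky.append(arn)
    return risky

trust = {"Statement": [{"Effect": "Allow", "Action": "sts:AssumeRole",
                        "Principal": {"AWS": "arn:aws:iam::999988887777:root"}}]}
print(risky_trust_principals(trust, known_accounts={"111122223333"}))
# ['arn:aws:iam::999988887777:root']
```

A hit here means an account the organization does not control can take the first hop in a role chain.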


Azure-Specific Attack Surfaces

Microsoft Azure presents a unique penetration testing challenge because of its deep integration with Azure Active Directory (now Entra ID), its hybrid identity model that bridges on-premises Active Directory with cloud resources, and its extensive enterprise service ecosystem. Azure's attack surface is fundamentally shaped by its identity architecture, and many of the most critical vulnerabilities we find are rooted in how Entra ID is configured and how its permissions cascade through the Azure resource hierarchy.

Azure AD / Entra ID Misconfigurations

Entra ID (formerly Azure Active Directory) is not just an authentication provider — it is the control plane for the entire Azure ecosystem and, in many organizations, for Microsoft 365, Dynamics 365, and hundreds of integrated SaaS applications. Misconfigurations in Entra ID can have blast radii that extend far beyond Azure resources.

The most critical Entra ID misconfigurations we test for fall into four groups. First, overprivileged directory roles, particularly the Global Administrator role, which grants unrestricted access to all Entra ID and Azure resources. Second, application registration misconfigurations, where enterprise applications are granted excessive Microsoft Graph API permissions (especially Application.ReadWrite.All or RoleManagement.ReadWrite.Directory, which allow an application to modify any application or assign any directory role). Third, consent grant attacks, where the tenant is configured to allow user consent for enterprise applications, enabling an attacker to create a malicious application and trick a user into granting it access to their mailbox, files, or other resources. Fourth, conditional access policy gaps, where MFA is not required for all scenarios, or where legacy authentication protocols bypass modern security controls.

We also examine the tenant's external collaboration settings, which control how guest users can interact with the directory. Many organizations have overly permissive guest settings that allow external users to enumerate the directory, read group memberships, or access applications that were not intended for external access. We check whether self-service password reset is configured securely, whether password protection policies prevent common passwords, and whether sign-in logs and audit logs are being collected and monitored.

A particularly dangerous misconfiguration involves the User.ReadWrite.All and Directory.ReadWrite.All Microsoft Graph permissions. Applications with these permissions can modify any user's properties, including their authentication methods. An attacker who compromises such an application can add their own authentication method to a Global Administrator account, effectively taking over the entire tenant. This is not a theoretical attack — it has been observed in real-world incidents and is a standard technique we test for in every Azure engagement.

Managed Identity Abuse

Azure Managed Identities provide an Azure AD identity for Azure resources, eliminating the need to manage credentials in code. There are two types: system-assigned (tied to the lifecycle of a specific resource) and user-assigned (independent lifecycle, can be attached to multiple resources). Managed identities authenticate to Azure services by requesting a token from the Azure Instance Metadata Service (IMDS) at http://169.254.169.254/metadata/identity/oauth2/token.

The attack surface mirrors AWS's instance metadata exploitation. If an attacker gains code execution on an Azure resource with a managed identity — through an application vulnerability, a compromised CI/CD pipeline, or a container escape — they can request a token for any Azure service that the managed identity has access to. The token is obtained by making an HTTP GET request to the metadata endpoint with the appropriate resource URI. Unlike AWS IMDSv2, Azure's IMDS does not have a session token mechanism, although it does require the Metadata: true header, which provides some SSRF mitigation.

We test managed identities by enumerating which Azure resources have identities assigned, what RBAC roles those identities hold, and whether those roles are scoped appropriately (at the resource, resource group, or subscription level). We look for managed identities with Contributor or Owner roles at the subscription level, which grant far more access than typically necessary. We test whether managed identities can access resources across resource groups or subscriptions that they should not have access to, and we check whether the managed identity's token can be used to authenticate to other Azure services like Key Vault, Storage, or SQL Database.
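
The scope check for managed identity role assignments can be sketched as follows. The assignment shape (role name plus scope string) is a simplification of what an Azure role-assignment listing returns, and the principals and subscription ID are fabricated.

```python
# Roles that grant broad write access; holding either at subscription
# scope is the overexposure pattern described above.
BROAD_ROLES = {"Owner", "Contributor"}

def broad_subscription_assignments(assignments: list[dict]) -> list[dict]:
    """Return assignments granting Owner/Contributor at the subscription level."""
    flagged = []
    for a in assignments:
        scope = a.get("scope", "")
        # A subscription-level scope is exactly /subscriptions/<id>,
        # with no resource-group or resource segments after it.
        at_subscription = scope.startswith("/subscriptions/") and scope.count("/") == 2
        if a.get("role") in BROAD_ROLES and at_subscription:
            flagged.append(a)
    return flagged

sample = [
    {"principal": "vm-identity-1", "role": "Contributor", "scope": "/subscriptions/1234"},
    {"principal": "vm-identity-2", "role": "Reader",
     "scope": "/subscriptions/1234/resourceGroups/rg-app"},
]
print(broad_subscription_assignments(sample))  # vm-identity-1 is flagged
```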

Storage Account Access and Configuration

Azure Storage accounts are the primary data storage mechanism in Azure, encompassing Blob storage (object storage, comparable to S3), File shares, Queue storage, and Table storage. Storage accounts have their own access control mechanisms that are separate from (but interact with) Azure RBAC, making them a common source of misconfigurations.

The main attack vectors we test fall into four categories. First, shared access signatures (SAS) with excessive permissions or no expiration: SAS tokens are bearer tokens that grant access to specific storage resources, so if a broadly scoped token leaks through logs, error messages, or source code, it provides direct access to the storage account without requiring Azure AD authentication. Second, storage account access keys, which provide root-level access to the entire storage account and are the equivalent of a superuser password; many organizations embed these keys in application code or configuration files. Third, public blob access, where containers are configured with "Blob" or "Container" level public access, exposing their contents to the internet. Fourth, shared key authorization left enabled, which means anyone with the storage account key can bypass all Azure AD-based access controls.

We also check whether storage accounts enforce HTTPS-only access, whether minimum TLS versions are set appropriately, whether soft delete is enabled (to prevent data destruction by attackers), and whether storage analytics and logging are configured to detect unauthorized access patterns.
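
A leaked SAS URL can be triaged offline by parsing its query string. The parameter names used here (se for expiry, sp for permissions) follow the documented SAS query format; the URL, account name, and the one-year threshold are illustrative choices, not Azure defaults.

```python
from datetime import datetime, timezone
from urllib.parse import urlparse, parse_qs

def sas_findings(sas_url: str, now: datetime) -> list[str]:
    """Flag a SAS token that is long-lived or grants write/delete/list access."""
    params = parse_qs(urlparse(sas_url).query)
    findings = []
    expiry = params.get("se", [None])[0]
    if expiry:
        exp = datetime.fromisoformat(expiry.replace("Z", "+00:00"))
        if (exp - now).days > 365:
            findings.append("expiry more than a year away")
    perms = params.get("sp", [""])[0]
    if set("wdl") & set(perms):  # write, delete, or list permissions present
        findings.append(f"write/delete/list permissions granted: {perms}")
    return findings

# Fabricated example: read+write+list token valid until 2030.
url = "https://acct.blob.core.windows.net/backups?sp=rwl&se=2030-01-01T00:00:00Z&sig=REDACTED"
print(sas_findings(url, now=datetime(2026, 4, 9, tzinfo=timezone.utc)))
```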

Key Vault Exposure

Azure Key Vault is the recommended service for storing secrets, encryption keys, and certificates. However, Key Vault is only as secure as its access policies and network configuration. We routinely find Key Vaults with overly permissive access policies that grant Get and List permissions to broad groups of users or service principals, network rules that allow access from any network (instead of being restricted to specific VNets or IP ranges), and secrets that have not been rotated in months or years.

Key Vault supports two permission models: the legacy access policy model and Azure RBAC. The access policy model has significant limitations — it does not support fine-grained conditional access, and its permissions cannot be scoped to individual secrets. Azure RBAC integration provides more granular control but requires careful configuration. We test both models, looking for misconfigurations that could allow an attacker to retrieve secrets they should not have access to. We also check whether Key Vault diagnostic logging is enabled, which is critical for detecting unauthorized access to secrets.

A common attack chain we test involves compromising a managed identity that has Key Vault access, retrieving database connection strings or API keys from the vault, and using those credentials to access additional resources. This chain demonstrates how a single misconfigured managed identity can cascade into a full environment compromise.
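
The access-policy review described above can be sketched as a pass over the vault's legacy access policies. The policy shape is simplified from the Key Vault accessPolicies property, and the object IDs are fabricated.

```python
def principals_with_secret_read(access_policies: list[dict]) -> list[str]:
    """Return object IDs of principals holding Get or List on secrets."""
    out = []
    for p in access_policies:
        secret_perms = {s.lower() for s in p.get("permissions", {}).get("secrets", [])}
        if {"get", "list"} & secret_perms:
            out.append(p["objectId"])
    return out

policies = [
    {"objectId": "sp-app-1", "permissions": {"secrets": ["Get", "List"]}},
    {"objectId": "sp-app-2", "permissions": {"secrets": []}},
]
print(principals_with_secret_read(policies))  # ['sp-app-1']
```

If the resulting list is long, every principal on it is a viable first hop in the managed-identity-to-Key-Vault chain described above.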

Runbook and Automation Abuse

Azure Automation accounts host runbooks — scripts that automate operational tasks such as VM patching, resource provisioning, and incident response. Automation accounts have their own identities (Run As accounts or managed identities) and often hold credentials for accessing other Azure resources and external systems. This makes them high-value targets.

The attack surfaces we test include: overprivileged Automation identities that have Contributor or Owner roles at high scope levels; credentials stored as Automation variables or certificates that can be retrieved by anyone with Contributor access to the Automation account; runbooks that execute with elevated permissions and can be modified by users with Contributor access to inject malicious code; and webhook-triggered runbooks that can be invoked by anyone who possesses the webhook URL, which may be exposed through logs, documentation, or source control.

We also examine Azure Logic Apps and Azure Functions in a similar light, looking for overprivileged managed identities, exposed HTTP triggers, and sensitive data handled insecurely within workflow definitions.


GCP-Specific Attack Surfaces

Google Cloud Platform has a smaller market share than AWS and Azure but is widely used by organizations that leverage Google's data analytics, machine learning, and container orchestration (GKE) services. GCP's security model has some fundamental differences from AWS and Azure — particularly in its IAM binding model, its default networking configuration, and its approach to service accounts — that create unique attack surfaces.

Service Account Key Sprawl

GCP service accounts are the primary mechanism for non-human identities. They are used by applications, VMs, Cloud Functions, and GKE workloads to authenticate to GCP APIs. Unlike AWS IAM roles (which use short-lived credentials) or Azure managed identities (which manage credentials transparently), GCP service accounts can have user-managed keys — JSON files containing a private key that never expires and provides persistent access to whatever the service account has access to.

Service account key sprawl is one of the most pervasive security issues in GCP environments. Keys are generated for development and testing, distributed to team members, embedded in CI/CD pipelines, and stored in code repositories. Once a key is generated, it remains valid until explicitly deleted from the service account. There is no built-in expiration mechanism. Google Cloud recommends using Workload Identity Federation, service account impersonation, or attached service accounts instead of user-managed keys, but many organizations have not migrated away from key-based authentication.

During a GCP pentest, we enumerate all service accounts and their keys. We check key creation dates and identify keys that have not been rotated (Google recommends rotation every 90 days). We look for service accounts with keys that have been downloaded but never used (suggesting they may be stored insecurely somewhere outside GCP). We search for service account keys in source code repositories, CI/CD configurations, and environment variables. And we test whether compromised keys can be used to access sensitive resources.

We also look for the default service accounts that GCP automatically creates for certain services (such as the Compute Engine default service account and the App Engine default service account). These default accounts often have the Editor role at the project level, which grants broad read-write access to most resources. Organizations that have not removed or restricted these default accounts have a significant privilege escalation risk, because any VM or App Engine service running with the default service account inherits these broad permissions.
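
The 90-day key rotation check runs entirely over key metadata. The key shape below (keyType, validAfterTime in RFC 3339) mirrors a service account key listing, but the key names and dates are fabricated.

```python
from datetime import datetime, timezone

def stale_keys(keys: list[dict], now: datetime, max_age_days: int = 90) -> list[str]:
    """Return names of user-managed keys older than max_age_days."""
    stale = []
    for key in keys:
        if key.get("keyType") != "USER_MANAGED":
            continue  # system-managed keys are rotated by Google
        created = datetime.fromisoformat(key["validAfterTime"].replace("Z", "+00:00"))
        if (now - created).days > max_age_days:
            stale.append(key["name"])
    return stale

keys = [
    {"name": "key-old", "keyType": "USER_MANAGED", "validAfterTime": "2023-01-15T00:00:00Z"},
    {"name": "key-new", "keyType": "USER_MANAGED", "validAfterTime": "2026-03-01T00:00:00Z"},
    {"name": "key-sys", "keyType": "SYSTEM_MANAGED", "validAfterTime": "2022-01-01T00:00:00Z"},
]
print(stale_keys(keys, now=datetime(2026, 4, 9, tzinfo=timezone.utc)))  # ['key-old']
```

Every name in that list represents a credential that never expires on its own and may be sitting in a repository or laptop somewhere.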

IAM Binding Flaws

GCP's IAM model differs from AWS and Azure in a fundamental way: permissions are granted through IAM bindings that attach roles to members at specific resource hierarchy levels (organization, folder, project, or individual resource). A binding consists of a role (a collection of permissions), a member (user, group, service account, or domain), and a resource (where the binding is applied). Bindings are inherited — a binding at the organization level applies to every folder, project, and resource below it.

This inheritance model creates subtle misconfiguration risks. A role granted at the organization level might have been intended for a specific use case but ends up granting access to every project in the organization. We see this frequently with roles/owner and roles/editor bindings at the organization or folder level, which grant sweeping permissions across all projects.

We also test for bindings that grant access to allUsers (anyone on the internet, authenticated or not) or allAuthenticatedUsers (any Google account). These special members are legitimate in certain contexts (for example, a public Cloud Storage bucket serving a static website), but they are frequently applied to resources that should not be publicly accessible — Cloud Functions, Cloud Run services, Pub/Sub topics, and BigQuery datasets.

Custom roles in GCP can include permissions that enable privilege escalation. For instance, a custom role with iam.serviceAccountKeys.create permission allows the holder to create a new key for any service account they have access to, effectively impersonating that service account. A custom role with iam.roles.update can modify existing custom roles to add additional permissions. We systematically analyze all custom roles for these escalation-enabling permissions.
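
The binding checks above can be combined into one pass over a resource's IAM policy. The binding shape mirrors a getIamPolicy response; the members and project names are fabricated, and the role list is deliberately limited to the primitive roles discussed in this section.

```python
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}
BROAD_ROLES = {"roles/owner", "roles/editor"}

def risky_bindings(bindings: list[dict]) -> list[tuple[str, str]]:
    """Return (role, member) pairs that are public or grant broad primitive roles."""
    risky = []
    for b in bindings:
        for member in b.get("members", []):
            if member in PUBLIC_MEMBERS or b.get("role") in BROAD_ROLES:
                risky.append((b["role"], member))
    return risky

bindings = [
    {"role": "roles/storage.objectViewer", "members": ["allUsers"]},
    {"role": "roles/editor",
     "members": ["serviceAccount:app@proj.iam.gserviceaccount.com"]},
    {"role": "roles/viewer", "members": ["user:dev@example.com"]},
]
print(risky_bindings(bindings))  # the allUsers grant and the editor grant
```

Run at the organization or folder level, the same scan surfaces the inherited bindings that quietly apply to every project below.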

Metadata Server Attacks

GCP Compute Engine instances, like AWS EC2 instances, expose a metadata server at http://metadata.google.internal (also accessible at http://169.254.169.254). The metadata server provides instance information, project-level metadata, and — most importantly — OAuth2 access tokens for the instance's service account. These tokens can be used to authenticate to any GCP API that the service account has access to.

GCP's metadata server requires a Metadata-Flavor: Google header on all requests, which provides some protection against SSRF attacks (similar to Azure's Metadata: true header). However, this protection is not foolproof — some SSRF vulnerabilities allow attackers to control request headers, and certain proxy configurations may add the header automatically. The access token obtained from the metadata server at http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token grants whatever permissions the instance's service account has, which — as discussed above — may include the overly broad default Editor role.

GCP also stores project-level metadata, including SSH keys and startup scripts, on the metadata server. If the project's metadata includes a project-wide SSH key, anyone with the corresponding private key can SSH into any instance in the project that has not disabled project-wide SSH keys. We check for this configuration, as well as for sensitive data stored in instance metadata attributes (such as database passwords or API keys passed as metadata during instance creation).

Cloud Functions Exploitation

Google Cloud Functions, like AWS Lambda, is a serverless compute platform that executes code in response to events — HTTP requests, Pub/Sub messages, Cloud Storage events, and Firestore triggers. The security considerations are similar to Lambda: event data injection, dependency vulnerabilities, environment variable exposure, and overprivileged service accounts.

GCP Cloud Functions have some additional attack surfaces specific to the platform. First, Cloud Functions can be invoked by anyone if their IAM binding includes allUsers with the cloudfunctions.invoker role — this is sometimes set intentionally for public APIs but is often applied accidentally. Second, Cloud Functions run with a service account whose credentials are available to the function code; if that service account has broad permissions, a code injection vulnerability in the function can be leveraged for privilege escalation. Third, Cloud Functions store their source code in a Cloud Storage bucket, and the bucket's permissions may not be adequately restricted — an attacker who can read the bucket can obtain the function's source code, and an attacker who can write to the bucket can modify the function's code.

We test Cloud Functions by enumerating all functions and their configurations, checking their IAM bindings for public access, reviewing their service account permissions, examining their environment variables for secrets, and testing their HTTP triggers for injection vulnerabilities. For functions that process events from other GCP services (Pub/Sub, Cloud Storage), we test whether we can inject malicious events into the function's trigger source.


Cross-Cloud Issues

Organizations that operate across multiple cloud providers — and most enterprises do — face an additional category of risk that exists in the gaps between providers. These cross-cloud issues are often invisible to provider-specific security tools and are frequently overlooked in penetration testing engagements that focus on a single provider.

Overprivileged Service Accounts

The principle of least privilege is universally acknowledged and almost universally violated. Across AWS, Azure, and GCP, we consistently find service accounts, roles, and service principals with permissions far exceeding what they need. The reasons are predictable: developers request broad permissions during the build phase and nobody scopes them down, permission creep accumulates as new features are added to existing services, and the complexity of each provider's IAM system makes it genuinely difficult to determine the minimum required permissions for a given workload.

In multi-cloud environments, the problem compounds. A team managing AWS and GCP might apply GCP's simpler IAM model to AWS (resulting in overly broad AWS policies), or apply AWS's granular policy model to GCP (resulting in confusion and workarounds that weaken security). There is no cross-provider IAM abstraction that allows organizations to define permissions consistently. Each provider requires its own access management approach, its own monitoring, and its own review process. Without dedicated cloud security expertise for each provider, overprivilege is almost inevitable.
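The most common overprivilege pattern is also the easiest to detect mechanically: an Allow statement with wildcard actions on wildcard resources. A minimal heuristic check over an AWS identity policy document follows; the sample policy is invented for illustration.

```python
# Heuristic check for overly broad AWS policy statements: an Allow with a
# wildcard action ("*" or "service:*") on Resource "*". Sample policy is
# invented for illustration.
def overly_broad_statements(policy_doc):
    flagged = []
    for stmt in policy_doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # Both fields may be a string or a list in the policy grammar.
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions) and "*" in resources:
            flagged.append(stmt)
    return flagged

policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        {"Effect": "Allow", "Action": ["s3:GetObject"],
         "Resource": ["arn:aws:s3:::app-logs/*"]},
    ]
}
print(len(overly_broad_statements(policy)))  # 1
```

A real least-privilege review goes much further (condition keys, NotAction, resource-level granularity), but this single check catches the policies that appear on the first line of most of our findings.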

Secrets in Environment Variables

Despite the availability of dedicated secrets management services in every major cloud provider (AWS Secrets Manager, Azure Key Vault, GCP Secret Manager), we routinely find sensitive credentials stored as environment variables in compute resources across all three platforms. Lambda function environment variables containing database passwords. Azure App Service application settings with third-party API keys. GCP Cloud Run environment variables with service account credentials for other cloud providers.

Environment variables are problematic for several reasons. They are visible to anyone with read access to the resource's configuration (which is often a broader group than those who need access to the secret). They appear in logs, deployment histories, and infrastructure-as-code templates. They are not automatically rotated. And they are frequently the first thing an attacker looks for after gaining access to a cloud resource, because they so often contain credentials that enable lateral movement.
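Attackers hunt for these credentials with simple heuristics, and the same heuristics work for defenders. The sketch below flags environment variables whose names or values look secret-like; the patterns are illustrative rather than exhaustive, and the sample variables are invented.

```python
import re

# Heuristic scan for secret-looking environment variables. Patterns are
# illustrative, not exhaustive; sample names and values are invented.
NAME_HINTS = re.compile(r"(PASSWORD|SECRET|TOKEN|API_?KEY|CONN(ECTION)?_?STR)",
                        re.IGNORECASE)
VALUE_HINTS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),  # embedded private key
]

def suspicious_env_vars(env):
    """Return names of variables that look like they hold secrets."""
    hits = []
    for name, value in env.items():
        if NAME_HINTS.search(name) or any(p.search(value) for p in VALUE_HINTS):
            hits.append(name)
    return sorted(hits)

env = {
    "DB_PASSWORD": "hunter2",
    "LOG_LEVEL": "info",
    "UPSTREAM_KEY": "AKIAABCDEFGHIJKLMNOP",
}
print(suspicious_env_vars(env))  # ['DB_PASSWORD', 'UPSTREAM_KEY']
```

Run against the output of a Lambda `get-function-configuration` call or a Cloud Run service description, this kind of scan surfaces the low-hanging fruit in minutes.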

In multi-cloud environments, the problem is worse because organizations may use one provider's secrets management for that provider's resources but fall back to environment variables for cross-provider secrets. For example, an AWS Lambda function that needs to access an Azure SQL Database might store the Azure connection string as an environment variable because it cannot natively access Azure Key Vault. This creates a credential that exists outside of both providers' secrets management and audit logging.

Insufficient Logging and Detection

Effective cloud security requires comprehensive logging across every provider. In AWS, this means CloudTrail for API activity, VPC Flow Logs for network traffic, GuardDuty for threat detection, and CloudWatch for application and system logs. In Azure, this means Activity Log for control plane operations, diagnostic logs for data plane operations, NSG flow logs for network traffic, and Microsoft Defender for Cloud for threat detection. In GCP, this means Cloud Audit Logs for API activity, VPC Flow Logs for network traffic, and Security Command Center for threat detection.

What we find in practice is that logging is incomplete. CloudTrail might be enabled in the primary region but not in secondary regions. Azure diagnostic logs might be collected for some resource types but not others. GCP audit logs might be retained for only 30 days. Flow logs, which are essential for detecting data exfiltration, are often not enabled at all because of their cost.
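The regional-coverage gap is a one-line check once the trail configurations have been exported. The sketch below assumes the trail-description shape returned by `aws cloudtrail describe-trails` (the trail itself is invented for illustration).

```python
# Check exported CloudTrail configurations for the regional-coverage gap
# described above: no trail with IsMultiRegionTrail enabled. Shape follows
# `aws cloudtrail describe-trails` output; the sample trail is invented.
def has_multi_region_trail(trails):
    return any(t.get("IsMultiRegionTrail") for t in trails)

trails = [{"Name": "primary-trail", "HomeRegion": "us-east-1",
           "IsMultiRegionTrail": False}]
print(has_multi_region_trail(trails))  # False -> secondary regions unlogged
```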

In multi-cloud environments, the additional challenge is log aggregation and correlation. An attack that spans multiple cloud providers — beginning with a compromised AWS access key, pivoting to Azure through a shared secret, and exfiltrating data from GCP — will generate log entries in three different logging systems with three different formats and three different retention policies. Without a centralized SIEM that ingests logs from all providers, this attack path may never be detected.

Network Segmentation Failures

Cloud networking is fundamentally different from on-premises networking, and the abstraction layers (VPCs, VNets, subnets, security groups, NSGs, firewall rules) create a false sense of security if they are not rigorously configured and tested. Common failures include: overly broad security group rules that allow all traffic between all instances in a VPC; peering connections between VPCs that should be isolated; VPN tunnels between cloud environments and on-premises networks that provide a bridge for lateral movement; and default network configurations that were never hardened.

In multi-cloud environments, network interconnections (such as AWS Transit Gateway connected to Azure ExpressRoute via a third-party SD-WAN solution) create additional attack paths that are difficult to map and test. These interconnections are often configured by network engineers who may not have security as their primary concern, and they frequently lack the logging and monitoring necessary to detect abuse.


Cloud Penetration Testing Methodology

A structured cloud penetration testing methodology adapts the adversarial mindset of traditional pentesting to the identity-centric, API-driven nature of cloud environments. While the specific techniques vary by provider, the overall methodology follows a consistent five-phase approach.

Phase 1: Reconnaissance

Cloud reconnaissance begins with identifying the target organization's cloud footprint. This includes discovering which cloud providers are in use, what services are exposed to the internet, and what information is publicly available about the cloud environment.

External reconnaissance techniques include: DNS enumeration to discover cloud-hosted services (looking for CNAME records pointing to .amazonaws.com, .azure.com, .googleapis.com, .cloudfront.net, and other cloud provider domains); certificate transparency log analysis to discover subdomains and services; S3/Blob/GCS bucket enumeration using common naming patterns based on the organization's name and known projects; public code repository searches for leaked credentials, infrastructure-as-code templates, and cloud configuration files; job listing analysis to identify which cloud services and technologies the organization uses; and cloud service enumeration through tools like nuclei templates that detect specific cloud services based on response headers and behaviors.
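The bucket enumeration step relies on permuting the organization's name with common environment and purpose words. A minimal candidate generator is sketched below; the word lists are illustrative defaults, and each candidate would then be probed against the provider's storage endpoints.

```python
# Generate candidate storage bucket names from an organization name plus
# common environment/purpose words -- the naming-pattern enumeration used
# in external reconnaissance. Word lists are illustrative defaults.
def bucket_candidates(org, envs=("prod", "dev", "staging"),
                      words=("backup", "logs", "assets", "data")):
    names = set()
    for w in words:
        for sep in ("-", "."):
            names.add(f"{org}{sep}{w}")            # e.g. acme-backup
            for e in envs:
                names.add(f"{org}{sep}{e}{sep}{w}")  # e.g. acme-prod-backup
    return sorted(names)

candidates = bucket_candidates("acme")
print(len(candidates), candidates[:3])
```

In practice the word lists are seeded with project names, product names, and terms harvested from DNS and code repositories, which is why this simple technique keeps finding exposed buckets.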

If the engagement includes internal testing (which is typical for cloud pentests), reconnaissance also involves enumerating the internal cloud environment using provided credentials or assumed roles. This includes listing all resources across all regions, mapping IAM principals and their permissions, identifying network topology and interconnections, and cataloging storage resources and their access controls. Tools like ScoutSuite, Prowler, and CloudFox automate much of this internal reconnaissance, providing a comprehensive baseline of the environment's security posture.

Phase 2: Identity and Access Enumeration

Identity enumeration is the most critical phase of a cloud penetration test because identity is the primary control plane in every cloud provider. This phase involves systematically mapping all principals (users, groups, roles, service accounts, service principals, managed identities), their permissions, and their trust relationships.

In AWS, this means using tools like enumerate-iam to discover permissions for compromised credentials, Pacu to enumerate IAM policies and identify privilege escalation paths, and manual analysis of trust policies and permission boundaries. In Azure, this means using AzureHound (the Azure equivalent of BloodHound) to map Entra ID relationships, enumerate RBAC role assignments, and identify attack paths from compromised accounts to high-value targets. In GCP, this means using gcloud iam commands and tools like CloudFox to enumerate IAM bindings, service accounts, and their keys.

The output of this phase is a comprehensive map of the identity landscape: who has access to what, through which mechanisms, and what trust relationships exist between principals. This map forms the basis for the subsequent exploitation phases.

Phase 3: Privilege Escalation

Armed with the identity map from Phase 2, the next step is to identify and exploit privilege escalation paths. These are sequences of legitimate actions that, when chained together, result in the attacker gaining more permissions than they started with.

Privilege escalation in the cloud is fundamentally different from privilege escalation on a traditional host. On a Linux server, privilege escalation typically involves exploiting a kernel vulnerability, a SUID binary, or a misconfigured sudo rule. In the cloud, privilege escalation involves exploiting IAM misconfigurations — permissions that, intentionally or not, allow a principal to modify their own (or others') access.

Common cloud privilege escalation techniques include: IAM policy manipulation (creating new policy versions, attaching additional policies); role assumption chains (using one role to assume another with broader permissions); service exploitation (using permissions on one service to gain access to another — for example, using lambda:CreateFunction and iam:PassRole to execute code with a more privileged role); resource policy modification (changing S3 bucket policies, KMS key policies, or SQS queue policies to grant additional access); and credential theft (accessing secrets managers, environment variables, or metadata services to obtain credentials for other accounts).
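Testing these paths starts by intersecting a principal's effective permissions with known-dangerous combinations. The sketch below encodes a small, illustrative subset of the escalation paths catalogued by tools like Pacu; each combination is sufficient on its own only when the associated resources are unrestricted.

```python
# Known-dangerous AWS permission combinations (a small illustrative subset
# of the escalation paths catalogued by tools like Pacu). Each set is only
# sufficient when the associated resources are unrestricted.
ESCALATION_COMBOS = {
    "NewPolicyVersion": {"iam:CreatePolicyVersion"},
    "AttachUserPolicy": {"iam:AttachUserPolicy"},
    "PassRoleToLambda": {"iam:PassRole", "lambda:CreateFunction",
                         "lambda:InvokeFunction"},
}

def escalation_paths(granted_actions):
    """Return the names of escalation combos covered by the granted actions."""
    granted = set(granted_actions)
    return sorted(name for name, needed in ESCALATION_COMBOS.items()
                  if needed <= granted)

print(escalation_paths({"iam:PassRole", "lambda:CreateFunction",
                        "lambda:InvokeFunction", "s3:GetObject"}))
# ['PassRoleToLambda']
```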

We systematically test all identified escalation paths, documenting each step and the resulting access level. The goal is to demonstrate the maximum access an attacker could achieve from a given starting point, which helps the organization understand the real-world impact of each misconfiguration.

Phase 4: Lateral Movement

Lateral movement in cloud environments takes different forms than in traditional networks. Instead of pivoting through compromised hosts using tools like PsExec or SSH, cloud lateral movement typically involves: assuming cross-account roles; retrieving credentials from secrets managers, environment variables, and metadata services; abusing managed identities and service accounts attached to compute resources; and pivoting through shared storage, snapshots, and other resources that multiple workloads can access.

During this phase, we document every resource we can access from each compromised identity, creating a comprehensive view of the blast radius. This helps the organization understand how far an attacker could reach from any given entry point and prioritize remediation accordingly.
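Computing a blast radius reduces to graph reachability over the identity map: an edge from A to B means a principal holding A can obtain access to B (via role assumption, a stored credential, or an attached identity). A minimal sketch using breadth-first search follows; the graph itself is invented for illustration.

```python
from collections import deque

# Model the identity map as a directed graph: an edge A -> B means a
# principal holding A can obtain access to B (role assumption, a stored
# credential, an attached identity). The graph below is invented.
EDGES = {
    "ci-user": ["deploy-role"],
    "deploy-role": ["lambda-exec-role", "s3://artifacts"],
    "lambda-exec-role": ["secrets/db-password"],
    "secrets/db-password": ["rds/customers"],
}

def blast_radius(start):
    """Return everything reachable from a compromised starting identity."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in EDGES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(blast_radius("ci-user")))
```

This is essentially what BloodHound and CloudFox do at scale: the hard part is building accurate edges from thousands of IAM relationships, not the traversal itself.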

Phase 5: Data Exfiltration Assessment

The final phase assesses the organization's ability to detect and prevent data exfiltration from cloud storage. This involves testing whether an attacker with access to cloud storage (S3, Blob Storage, GCS) can download large volumes of data without triggering alerts. We test egress controls, DLP policies, and monitoring capabilities.

Specific techniques include: testing whether VPC endpoints or private links restrict data egress to approved destinations; checking whether S3 bucket policies include deny statements for actions from outside the VPC; verifying that storage access logging captures download events; and testing whether network-level controls (such as VPC flow log analysis or cloud-native DLP services) detect bulk data transfers.
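The bucket-policy check can be sketched as a scan for a Deny statement conditioned on the request's VPC endpoint. The policy shape below follows S3's JSON policy language; the bucket name and endpoint ID are invented, and a production check would also handle `StringNotEqualsIfExists` and endpoint lists.

```python
# Check whether an S3 bucket policy contains a Deny statement that
# restricts access to a specific VPC endpoint -- one of the egress
# controls described above. Bucket name and endpoint ID are invented.
def has_vpce_deny(policy_doc):
    for stmt in policy_doc.get("Statement", []):
        if stmt.get("Effect") != "Deny":
            continue
        for operator, keys in stmt.get("Condition", {}).items():
            if operator.startswith("StringNotEquals") and "aws:SourceVpce" in keys:
                return True
    return False

policy = {
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::sensitive-bucket/*",
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0abc1234"}},
    }]
}
print(has_vpce_deny(policy))  # True
```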

We do not exfiltrate actual sensitive data during a pentest. Instead, we demonstrate the capability to access it and document the volume of data that could be exfiltrated, the lack of controls that would prevent it, and the absence of detection mechanisms that would alert on it. This provides the evidence the organization needs to justify investment in data loss prevention controls.


Cloud Penetration Testing Tools

Effective cloud penetration testing requires a combination of automated tools for broad coverage and manual testing for depth. The following tools form the core of our cloud pentesting toolkit, supplemented by custom scripts and provider-native CLI tools (aws-cli, az, gcloud).

ScoutSuite

ScoutSuite is an open-source multi-cloud security auditing tool developed by NCC Group. It supports AWS, Azure, GCP, Oracle Cloud, and Alibaba Cloud. ScoutSuite uses the cloud provider's APIs to collect configuration data for all resources in scope, then evaluates that data against a comprehensive ruleset that covers IAM, networking, storage, logging, encryption, and service-specific configurations.

ScoutSuite's primary strength is its breadth of coverage — it evaluates hundreds of configuration checks across all major cloud services and produces an interactive HTML report that makes it easy to navigate findings by service and severity. We use ScoutSuite as a baseline assessment tool at the beginning of every cloud pentest to quickly identify the highest-priority misconfigurations and to ensure that we have not overlooked any configuration issues that our manual testing might miss.

However, ScoutSuite is a configuration auditor, not a penetration testing tool. It identifies what could be exploited, but it does not exploit anything. It cannot chain misconfigurations together to demonstrate attack paths, and it cannot evaluate the business impact of a finding. That is where manual testing and more specialized tools come in.

Prowler

Prowler is an open-source security assessment tool focused primarily on AWS, with growing support for Azure and GCP. Prowler evaluates cloud environments against a wide range of security benchmarks, including CIS Benchmarks, PCI DSS, HIPAA, GDPR, NIST 800-53, and the AWS Well-Architected Framework. It produces detailed findings with severity ratings, remediation guidance, and compliance mapping.

Prowler is particularly useful for AWS engagements because of its depth of AWS-specific checks. It evaluates IAM policies with more granularity than most tools, checking for specific dangerous permission combinations and testing for known privilege escalation paths. It also checks for less obvious misconfigurations, such as EC2 instances with public IP addresses in private subnets, Lambda functions with deprecated runtimes, and S3 buckets with versioning disabled. We use Prowler alongside ScoutSuite for AWS engagements, as the two tools have complementary rulesets.

Pacu

Pacu is an open-source AWS exploitation framework developed by Rhino Security Labs. Unlike ScoutSuite and Prowler, which are assessment tools, Pacu is an offensive tool designed for active exploitation. It includes modules for reconnaissance, privilege escalation, credential theft, persistence, and data exfiltration in AWS environments.

Pacu's modules automate common attack techniques, such as: enumerating permissions for a given set of credentials; testing all known IAM privilege escalation paths; creating backdoor IAM users or roles for persistence; exfiltrating data from S3 buckets, EC2 snapshots, and RDS snapshots; and evading CloudTrail logging through various techniques. We use Pacu for the active exploitation phases of AWS engagements, complementing our manual testing with its automated privilege escalation discovery.

Pacu operates with a session-based model, where each session is associated with a set of AWS credentials. This allows us to test multiple starting positions (simulating different levels of initial access) and compare the results to understand how the blast radius changes based on the attacker's entry point.

AzureHound

AzureHound is the Azure data collector for the BloodHound attack path analysis platform. AzureHound collects data about Entra ID users, groups, applications, service principals, and their relationships, as well as Azure RBAC role assignments and resource group hierarchies. This data is imported into BloodHound Community Edition, which visualizes the data as a graph and can identify attack paths from any compromised account to high-value targets (such as Global Administrator or subscription Owner).

AzureHound is indispensable for Azure pentests because Entra ID's permission model is graph-based — a user might not have direct access to a resource, but they might be a member of a group that owns a service principal that has a role assignment on a subscription that contains the resource. Manually tracing these paths is impractical in any non-trivial environment. BloodHound automates this graph analysis, revealing attack paths that would take hours or days to discover manually.

We run AzureHound at the beginning of every Azure engagement and use the resulting graph throughout the assessment to identify targets, plan attack paths, and validate our findings. The visual representation of attack paths is also invaluable for communicating results to stakeholders who may not be familiar with Azure's permission model.

CloudFox

CloudFox is an open-source tool developed by Bishop Fox for identifying exploitable attack paths in cloud infrastructure. It supports AWS and GCP (with Azure support in development) and focuses specifically on the types of misconfigurations that lead to privilege escalation and lateral movement.

CloudFox's modules include: permissions (enumerating effective permissions for all principals), instances (identifying EC2/GCE instances with attached roles and their permissions), env-vars (extracting environment variables from Lambda functions, ECS tasks, and other compute resources), endpoints (discovering service endpoints that might be exploitable), and secrets (identifying secrets stored in Secrets Manager, SSM Parameter Store, and other locations). CloudFox presents its output in a format that is designed for offensive operators, making it easy to identify the most promising attack paths quickly.

We use CloudFox as a complement to ScoutSuite and Prowler, focusing specifically on the offensive perspective. Where ScoutSuite might flag an overprivileged role as a "high severity finding," CloudFox helps us understand exactly how that role could be exploited, what it could access, and what the real-world impact would be.


Common Cloud Misconfigurations Across AWS, Azure, and GCP

The following table compares the most common cloud misconfigurations we find during penetration testing engagements, showing how each manifests differently across the three major cloud providers. Understanding these provider-specific differences is essential for organizations operating in multi-cloud environments.

| Misconfiguration | AWS | Azure | GCP | Severity |
|---|---|---|---|---|
| Overprivileged Identity | IAM policies with *:* on Resource: *; AdministratorAccess attached to service roles | Service principals with Owner or Contributor at subscription level; Global Admin role over-assigned | Service accounts with roles/editor or roles/owner at organization or project level; default service accounts not restricted | Critical |
| Public Storage Exposure | S3 buckets with public ACLs or bucket policies allowing s3:GetObject to *; Block Public Access not enabled | Blob containers with public access level set to "Blob" or "Container"; storage account allowing anonymous access | Cloud Storage buckets with IAM binding granting allUsers the storage.objectViewer role | Critical |
| Metadata Service Exploitation | IMDSv1 enabled (no session token required); instance roles with excessive permissions | IMDS accessible without restriction; managed identities overprivileged at high scope | Metadata server accessible with Metadata-Flavor header; default compute service account has Editor role | High |
| Secrets in Environment Variables | Lambda environment variables with database passwords and API keys; ECS task definitions with embedded credentials | App Service application settings with connection strings; Function App settings with third-party keys | Cloud Functions environment variables with service account keys; Cloud Run env vars with database credentials | High |
| Privilege Escalation via IAM | iam:CreatePolicyVersion, iam:AttachUserPolicy, iam:PassRole with wildcard resources | Microsoft.Authorization/roleAssignments/write at high scope; ability to modify app registrations with sensitive API permissions | iam.serviceAccountKeys.create, iam.roles.update, iam.serviceAccounts.actAs on privileged service accounts | Critical |
| Insufficient Logging | CloudTrail not enabled in all regions; S3 access logging disabled; GuardDuty not active | Activity Log not forwarded to SIEM; diagnostic logs not enabled for data plane operations | Data Access audit logs disabled for non-critical services; VPC Flow Logs not enabled | High |
| Network Overexposure | Security groups allowing 0.0.0.0/0 inbound on management ports (22, 3389); default VPC in use | NSGs with Any/Any allow rules; public IP addresses on internal services | Firewall rules allowing 0.0.0.0/0 on all ports; default network with auto-mode subnets | High |
| Stale Credentials | IAM access keys not rotated in 90+ days; unused IAM users with active keys | App registration client secrets not rotated; service principal certificates expired but still valid | User-managed service account keys never rotated; default service account keys downloaded and unused | High |
| Missing Encryption | S3 buckets without default encryption; EBS volumes without encryption; RDS instances without encryption at rest | Storage accounts not enforcing HTTPS; SQL databases without TDE; Key Vault-backed encryption not used | Cloud Storage buckets using Google-managed keys instead of CMEK; Cloud SQL without encryption | Medium |
| Cross-Account/Tenant Trust | Role trust policies allowing sts:AssumeRole from external accounts; S3 bucket policies granting cross-account access | Guest user access overly permissive; B2B collaboration settings allowing full directory enumeration | Cross-project service account impersonation; IAM bindings granting access to external Google accounts | High |
| Serverless Misconfiguration | Lambda functions with AdministratorAccess execution role; API Gateway without authorization | Azure Functions with system-assigned identity having Contributor role; HTTP-triggered functions without auth | Cloud Functions with allUsers invoker binding; Functions running as default compute service account | High |

Cloud Versus Traditional Penetration Testing: What Is Different

Cloud penetration testing and traditional penetration testing share the same fundamental goal — identify security weaknesses before attackers do — but the techniques, targets, and methodologies diverge significantly. Understanding these differences is essential for organizations evaluating their penetration testing needs.

The Perimeter Has Dissolved

Traditional penetration testing begins by mapping the network perimeter: scanning IP ranges, identifying open ports, fingerprinting services, and looking for vulnerabilities in internet-facing systems. The perimeter is a defined boundary — a firewall, a DMZ, a set of public IP addresses. Everything inside the perimeter is "internal," and gaining access to the internal network is often the primary objective.

In cloud environments, the perimeter is not a network boundary — it is an identity boundary. Access to cloud resources is controlled by IAM policies, not by firewalls (although network controls exist as a supplementary layer). An attacker with valid cloud credentials can access resources from anywhere in the world, regardless of network controls. The "internal network" in a cloud environment is a VPC, but the control plane (the APIs that manage cloud resources) is accessible over the public internet. A traditional pentest that focuses on port scanning and network exploitation will miss the most critical cloud attack surface: identity.

Infrastructure Is Code and Configuration

In traditional environments, infrastructure is physical or virtual — servers that are provisioned once and run for years, with configurations that drift over time. Penetration testing involves interacting with running systems: scanning them, connecting to them, exploiting them.

In cloud environments, infrastructure is defined by code (Terraform, CloudFormation, ARM templates, Pulumi) and by API-driven configurations. The security posture of a cloud environment is determined more by its configuration than by its running software. An S3 bucket's security depends on its bucket policy, ACLs, and account-level Block Public Access settings — not on a running web server. A Lambda function's security depends on its execution role, event source mapping, and environment variable configuration — not on an OS that can be port-scanned. Cloud pentesting requires deep expertise in reading and evaluating these configurations, which is a fundamentally different skill from exploiting running services.

Ephemeral Resources and Dynamic Environments

Traditional infrastructure is relatively static. Servers run for months or years. IP addresses are stable. Network topology changes infrequently. A penetration test conducted today will produce results that are largely valid next month.

Cloud infrastructure is dynamic. Auto-scaling groups spin up and terminate instances based on demand. Serverless functions exist only during execution. Container orchestration platforms (EKS, AKS, GKE) continuously schedule and reschedule workloads. Infrastructure-as-code pipelines can redeploy the entire environment in minutes. A cloud penetration test must account for this dynamism — testing static configurations at a point in time, while also evaluating the processes and pipelines that create and modify those configurations.

Attack Paths Are Identity-Based

In a traditional pentest, attack paths follow network topology: compromise an external-facing web server, pivot to the internal network through a database connection, move laterally using stolen credentials or pass-the-hash techniques, escalate privileges on a domain controller. The path is network-based, and each hop involves a network connection.

In a cloud pentest, attack paths follow identity relationships: compromise a set of credentials, enumerate their permissions, escalate privileges through IAM misconfigurations, assume cross-account roles, access storage services, retrieve additional credentials from secrets managers or environment variables, and use those credentials to access further resources. The path is identity-based, and each hop involves an API call. An attacker might never touch the network layer — they might compromise an entire cloud environment using nothing but API calls from their laptop.

Compliance Implications Differ

Traditional penetration testing aligns well with compliance frameworks that were designed for on-premises infrastructure: PCI DSS Requirement 11.3, SOC 2 CC7.1, ISO 27001 Annex A.12.6. These frameworks mandate regular penetration testing of networks and applications, and traditional pentests satisfy these requirements directly.

Cloud penetration testing addresses a different set of compliance requirements — and often fills gaps that traditional pentesting leaves open. CIS Benchmarks for AWS, Azure, and GCP provide detailed configuration baselines. SOC 2's CC6.1 (logical access controls) is directly tested by cloud IAM assessments. PCI DSS 4.0's requirement for segmentation testing in cloud environments requires techniques specific to VPCs and security groups. ISO 27017, the cloud-specific extension to ISO 27001, mandates testing of cloud-specific controls. Organizations that rely solely on traditional penetration testing for compliance may find that their cloud-specific risks are not adequately addressed.

| Aspect | Traditional Pentesting | Cloud Pentesting |
|---|---|---|
| Primary Target | Networks, servers, applications | IAM policies, service configurations, identity relationships |
| Attack Surface | Open ports, vulnerable services, web applications | API endpoints, IAM permissions, storage policies, metadata services |
| Privilege Escalation | Kernel exploits, SUID binaries, misconfigured sudo | IAM policy manipulation, role chaining, service-to-service credential theft |
| Lateral Movement | Network pivoting, pass-the-hash, RDP/SSH | Cross-account role assumption, managed identity abuse, secrets retrieval |
| Perimeter | Firewall, DMZ, public IP addresses | Identity (IAM), API gateway, network controls as supplementary layer |
| Key Tools | Nmap, Burp Suite, Metasploit, BloodHound | ScoutSuite, Prowler, Pacu, AzureHound, CloudFox |
| Scope Duration | Point-in-time assessment of stable infrastructure | Point-in-time assessment of dynamic, ephemeral infrastructure |
| Provider Knowledge | General OS, network, and application expertise | Deep expertise in provider-specific services, APIs, and IAM models |

When You Need Cloud Penetration Testing

Not every organization needs a cloud penetration test right now, but most organizations that operate cloud workloads will need one sooner than they think. The following scenarios are strong indicators that a cloud penetration test should be prioritized.

You have never had a cloud-specific security assessment. If your organization has been running workloads in AWS, Azure, or GCP for more than a year without a dedicated cloud security assessment, you almost certainly have misconfigurations. Cloud environments accumulate technical debt faster than traditional infrastructure because of the speed at which resources are provisioned and the complexity of IAM systems. The longer you wait, the more misconfigurations accumulate, and the harder they become to remediate.

You are preparing for a compliance audit. SOC 2, PCI DSS, ISO 27001, HIPAA, and other frameworks increasingly require evidence that cloud-specific risks have been assessed. A cloud penetration test provides this evidence and also identifies issues that need to be remediated before the audit. Addressing cloud misconfigurations proactively is far less costly and disruptive than dealing with audit findings or (worse) a data breach.

You have recently migrated workloads to the cloud. Cloud migrations are high-risk events. Configurations that worked on-premises may not translate correctly to cloud-native equivalents. Permissions that were granular in Active Directory may become overly broad in Entra ID. Network segmentation that was enforced by physical firewalls may be weakened in a VPC. A post-migration cloud pentest validates that security controls were correctly implemented in the new environment.

You operate in a multi-cloud environment. Organizations running workloads across AWS, Azure, and GCP face unique cross-cloud risks that are invisible to single-provider assessments. Inconsistent IAM policies, shared credentials, and inter-cloud network connections create attack paths that span providers. A comprehensive cloud pentest across all active providers identifies these cross-cloud risks.

You have experienced a security incident involving cloud resources. Post-incident, a cloud penetration test helps identify whether the attack vector has been fully remediated, whether similar vulnerabilities exist elsewhere in the environment, and whether the attacker may have established persistence mechanisms that survived initial incident response.

Your development teams are deploying infrastructure-as-code without security review. If Terraform, CloudFormation, or ARM templates are being deployed to production without security review, misconfigurations are being introduced at the speed of CI/CD. A cloud pentest quantifies the current risk and provides the evidence needed to justify integrating security review into the deployment pipeline.
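To make this concrete, here is a hedged sketch of the kind of Terraform that sails through review-free pipelines. The bucket name and resource labels are hypothetical; the pattern — disabling all four S3 public access protections — is one of the most common findings in cloud assessments:

```hcl
# Hypothetical example: this plan applies cleanly, but leaves the bucket
# one permissive ACL or bucket policy away from public exposure.
resource "aws_s3_bucket" "reports" {
  bucket = "example-reports-bucket" # hypothetical name
}

resource "aws_s3_bucket_public_access_block" "reports" {
  bucket = aws_s3_bucket.reports.id

  # All four protections disabled -- any public object ACL or bucket
  # policy granting public access will take effect.
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}
```

Nothing in `terraform plan` or `terraform apply` flags this; only an IaC scanner or a human reviewer will.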


What a Cloud Pentest Report Should Include

A cloud penetration testing report should provide actionable intelligence — not just a list of findings, but a narrative that explains the attack paths, their business impact, and exactly what needs to change. Here is what to expect from a thorough cloud pentest report.

Executive summary: A non-technical overview of the engagement's scope, key findings, overall risk assessment, and top recommendations. This section is written for leadership and board members who need to understand the business impact without technical details.

Methodology: A description of the testing approach, tools used, and scope coverage. This establishes credibility and allows the technical team to understand what was tested and what was not.

Detailed findings: Each finding should include a clear title, severity rating (based on exploitability and business impact, not just CVSS), a description of the misconfiguration, step-by-step evidence of exploitation (with screenshots, API responses, and command output), the business impact of the finding, and specific remediation steps with code examples where appropriate (such as corrected IAM policies, Terraform snippets, or CLI commands).
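As an illustration of what "remediation with code examples" should look like, a finding for an over-broad policy might pair the offending wildcard grant (`"Action": "s3:*", "Resource": "*"`) with a scoped replacement. A minimal sketch — the Sid, bucket name, and ARNs below are placeholders, not a drop-in policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyAppBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-app-data",
        "arn:aws:s3:::example-app-data/*"
      ]
    }
  ]
}
```

A remediation written at this level of specificity can be applied the same day; "restrict the policy to least privilege" cannot.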

Attack path narratives: The most valuable part of a cloud pentest report is the attack path narrative — a step-by-step walkthrough of how individual findings chain together to enable a complete compromise scenario. For example: "Starting with a leaked access key (Finding #3), we enumerated IAM permissions and discovered privilege escalation via iam:CreatePolicyVersion (Finding #7), which granted access to Secrets Manager where we retrieved database credentials (Finding #12), allowing us to access the production RDS instance containing 2.3 million customer records (Finding #15)." These narratives translate technical findings into business risk that stakeholders can understand.
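The iam:CreatePolicyVersion step in that narrative deserves a closer look, because it illustrates how a single innocuous-sounding permission becomes full account compromise: whoever can create a new version of an attached policy and set it as the default effectively rewrites their own permissions. A minimal sketch in Python — the policy ARN is hypothetical, and the boto3 call is shown commented out since it requires live credentials:

```python
import json

def build_admin_policy_document() -> str:
    """Return the policy document an attacker would push as the new
    default version: a blanket allow on every action and resource."""
    doc = {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "*", "Resource": "*"}
        ],
    }
    return json.dumps(doc)

# With only iam:CreatePolicyVersion on a policy attached to their own
# principal, an attacker escalates like this (ARN is hypothetical):
#
# import boto3
# iam = boto3.client("iam")
# iam.create_policy_version(
#     PolicyArn="arn:aws:iam::123456789012:policy/ExamplePolicy",
#     PolicyDocument=build_admin_policy_document(),
#     SetAsDefaultVersion=True,  # new version takes effect immediately
# )

if __name__ == "__main__":
    print(build_admin_policy_document())
```

No new credentials are issued and no role is assumed, which is why this escalation is easy to miss in CloudTrail review unless you are specifically watching for `CreatePolicyVersion` events.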

Prioritized remediation plan: Findings should be grouped by priority, with quick wins (changes that can be made in hours with immediate risk reduction) separated from strategic improvements (architectural changes that require planning and implementation time). Each remediation should include an effort estimate and a recommended timeline.

Positive findings: A good report also documents what is working well. Security controls that are correctly implemented should be acknowledged, both to give credit to the teams that implemented them and to provide a baseline that the organization can build on.


Building a Cloud Security Testing Program

A single cloud penetration test provides a valuable snapshot, but cloud security requires ongoing attention. Organizations serious about cloud security should build a testing program that combines periodic deep assessments with continuous monitoring.

Annual or semi-annual cloud penetration testing provides the depth of manual analysis needed to identify complex attack paths, test for privilege escalation, and validate that security controls work as intended. These engagements should be scoped to cover all active cloud providers and should include both external (reconnaissance, publicly accessible resources) and internal (assumed breach, identity-based) testing.

Continuous cloud security posture management (CSPM) fills the gaps between formal penetration tests. Tools like AWS Security Hub, Microsoft Defender for Cloud, and GCP Security Command Center continuously evaluate cloud configurations against benchmarks and alert on misconfigurations as they are introduced. CSPM catches configuration drift and new misconfigurations but cannot replace the manual analysis and exploitation testing of a pentest.

Infrastructure-as-code security scanning shifts security left by evaluating Terraform, CloudFormation, and ARM templates before they are deployed. Tools like tfsec, checkov, and cfn-nag integrate into CI/CD pipelines and prevent misconfigurations from reaching production. This is a preventive control that reduces the number of findings in future penetration tests.
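One common way to wire this in is a pull-request gate. The sketch below uses GitHub Actions with the checkov action; the workflow name, trigger, and Terraform directory path are assumptions to adapt to your repository:

```yaml
# Hypothetical GitHub Actions job: fail the pull request when checkov
# finds misconfigurations in Terraform before anything is applied.
name: iac-scan
on: [pull_request]

jobs:
  checkov:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan Terraform with checkov
        uses: bridgecrewio/checkov-action@v12
        with:
          directory: terraform/   # hypothetical path to your IaC
          framework: terraform
```

Blocking merges on scanner findings is what actually changes behavior; a scanner that only posts advisory comments is routinely ignored.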

Cloud security training for engineering teams ensures that the people provisioning and configuring cloud resources understand the security implications of their decisions. Training should be provider-specific (because IAM works differently in AWS, Azure, and GCP) and should include hands-on exercises that demonstrate common misconfigurations and their consequences.

Together, these components create a defense-in-depth approach to cloud security that reduces risk continuously rather than at annual intervals.

Key Takeaway: Cloud penetration testing is not a replacement for traditional pentesting — it is a necessary complement. Organizations running workloads in AWS, Azure, or GCP need both: traditional pentesting for web applications, networks, and endpoints, and cloud-specific pentesting for IAM policies, service configurations, identity relationships, and provider-specific attack surfaces. Ignoring either leaves significant blind spots in your security posture.

Test Your Cloud Security Posture

Lorikeet Security's cloud penetration tests go beyond automated scanning — we manually test IAM policies, identity federation, network segmentation, and data exposure across AWS, Azure, and GCP.

Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.
