If your organization runs workloads in AWS, GCP, or Azure, there is a statistically overwhelming chance that something is misconfigured right now. This is not an exaggeration. Industry research consistently finds that over 90% of organizations have cloud misconfigurations, and that misconfiguration is the leading cause of cloud data breaches. The defaults are not secure. The shared responsibility model means your cloud provider secures the physical infrastructure, the hypervisor, and the network fabric. Everything above that line is on you.

Most teams don't have dedicated cloud security expertise. They have engineers who are very good at building things and shipping features, but who learned cloud infrastructure on the job and inherited configurations from whoever set up the account three years ago. A cloud security assessment exists to find the gaps before an attacker or an auditor does.


Your cloud is probably misconfigured (and that's normal)

The numbers are stark. Gartner has projected that through 2025, 99% of cloud security failures would be the customer's fault. Multiple breach analysis reports from the past two years confirm this trend is holding. The most common root causes are not sophisticated attacks. They are storage buckets left open to the public, IAM policies granting wildcard access, logging services that were never turned on, and security groups that allow the entire internet to reach sensitive ports.

The shared responsibility model is conceptually simple but operationally confusing. AWS secures the infrastructure. You secure what you deploy on it. Azure secures the physical data centers. You secure your virtual machines, identity configuration, and data. GCP secures the global network. You secure your project-level IAM, your Cloud Storage permissions, and your firewall rules.

The problem is that "you secure it" is doing an enormous amount of heavy lifting. Cloud providers offer hundreds of services, each with its own permission model, encryption options, and networking configuration. A single AWS account might use EC2, S3, RDS, Lambda, ECS, API Gateway, CloudFront, and a dozen other services. Each one has security-relevant settings. Each one can be misconfigured independently. And the defaults are almost always optimized for ease of use, not security.

The core problem: Cloud providers give you the tools to be secure. They do not make you secure by default. A cloud security assessment identifies the gap between what your provider offers and what your team has actually configured.


What a cloud security assessment actually covers

A cloud security assessment is not a traditional penetration test. A pentest focuses on exploitation: can an attacker break in, move laterally, and access sensitive data? A cloud security assessment is broader. It reviews your configuration, your architecture, and your operational practices against established benchmarks like CIS Benchmarks and cloud provider best practices.

The assessment typically covers eight major domains: identity and access management, encryption defaults, logging, security scanning, network security, secrets management, container security, and compliance tooling.

The output is not a list of exploitable vulnerabilities. It's a prioritized report of configuration weaknesses, architectural risks, and compliance gaps, with specific remediation guidance for each finding.


The IAM problem: over-permissioned everything

If there is one finding we see in virtually every cloud security assessment, it's this: IAM policies are too permissive. It is the single most common and most dangerous class of cloud misconfiguration, and it exists because granting broad access is easy while scoping precise permissions is tedious.

The patterns repeat across every provider and every engagement: wildcard actions and resources in AWS policies, primitive Owner and Editor roles in GCP projects, overly broad Azure AD role assignments, and service accounts with far more access than their workloads actually use.

The fix is not complicated in theory: implement least privilege. In practice, it requires auditing every IAM policy, analyzing actual usage patterns (AWS IAM Access Analyzer, GCP IAM Recommender, and Azure AD access reviews help here), and scoping permissions down to what each principal actually needs. It's tedious work, but it eliminates the most common path an attacker uses to escalate from initial access to full account compromise.
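A first pass over exported policies can be automated before reaching for the provider tools above. The sketch below is illustrative only (the function and its output shape are ours, not an AWS API); it follows the IAM JSON policy grammar and flags Allow statements whose actions or resources are wildcards:

```python
import json

def find_wildcard_statements(policy_doc):
    """Return Allow statements granting '*' (or 'service:*') access."""
    statements = policy_doc.get("Statement", [])
    if isinstance(statements, dict):  # "Statement" may be a single object
        statements = [statements]
    findings = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # Both fields may be a bare string or a list in the policy grammar.
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        wide = [a for a in actions if a == "*" or a.endswith(":*")]
        if wide or "*" in resources:
            findings.append({"actions": actions, "resources": resources})
    return findings

policy = json.loads("""
{"Version": "2012-10-17",
 "Statement": [
   {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
   {"Effect": "Allow", "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::app-assets/*"}]}
""")
print(find_wildcard_statements(policy))  # flags only the first statement
```

A script like this won't tell you what permissions a principal needs; that's what the usage analyzers are for. It will tell you where to look first.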


Storage misconfigurations that lead to breaches

Cloud storage breaches make headlines with predictable regularity. A major financial institution lost over 100 million customer records because of a misconfigured WAF and an overly permissive IAM role that allowed access to S3 buckets containing sensitive data. A major gaming and streaming platform had its entire source code and internal tools exposed through a misconfigured server that provided access to cloud storage. These are not obscure companies with underfunded security teams. They are well-resourced organizations that still got the basics wrong.

The storage misconfigurations we find most frequently include publicly accessible buckets, overly permissive bucket policies, missing default encryption, and access logging that was never enabled.

All three major providers now offer settings to block public access to storage (S3 Block Public Access, GCS public access prevention, and the Azure Storage account-level setting to disallow public blob access). These should be enabled at the account or organization level and only overridden for specific, documented use cases like public static asset hosting.
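Even with those account-level blocks in place, it is worth checking bucket policies directly. A minimal sketch, following the S3 bucket policy grammar (the helper itself is hypothetical): a statement whose principal is "*" with no Condition block applies to anonymous requests.

```python
def is_public(bucket_policy):
    """True if any Allow statement applies to anonymous principals."""
    for stmt in bucket_policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        # Principal "*" and {"AWS": "*"} both mean "everyone".
        anonymous = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if anonymous and "Condition" not in stmt:
            return True
    return False

leaky = {"Statement": [{"Effect": "Allow", "Principal": "*",
                        "Action": "s3:GetObject",
                        "Resource": "arn:aws:s3:::backups/*"}]}
print(is_public(leaky))  # True
```

Note the Condition caveat: a conditioned statement (for example, one restricted to a VPC endpoint) may still be safe even with a "*" principal, which is why this check skips it rather than flagging it.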


Network and VPC security

Cloud networking is where traditional infrastructure security concepts meet cloud-specific abstractions, and the result is a configuration surface that most teams don't fully understand. The most common network security findings in cloud assessments fall into predictable categories.

Security groups allowing 0.0.0.0/0 on sensitive ports. We find this in nearly every assessment. SSH (port 22), RDP (port 3389), database ports (3306, 5432, 27017), and sometimes even administrative interfaces wide open to the entire internet. The security group was created during development, the engineer added 0.0.0.0/0 to test connectivity, and it was never scoped down.

Default VPC usage. AWS creates a default VPC in every region with default subnets that auto-assign public IP addresses. Running production workloads in the default VPC is a common anti-pattern because the default configuration prioritizes convenience over security. Production environments should use custom VPCs with intentional subnet design, private subnets for backend services, and public subnets only for load balancers and bastion hosts.

No network segmentation. Development, staging, and production environments running in the same VPC or project with no network-level isolation. A compromised development workload can reach production databases because there's no boundary between them.

Public-facing resources that should be private. Databases, caches, and internal APIs with public IP addresses or internet-facing load balancers when they should only be accessible from within the VPC. This is often a result of "it works" engineering where the path of least resistance was to make the resource publicly accessible.

Missing VPC flow logs. Flow logs (called VPC Flow Logs on both AWS and GCP, and NSG Flow Logs on Azure) provide network-level visibility into traffic patterns. Without them, you cannot detect lateral movement, unusual traffic patterns, or data exfiltration. They are essential for incident response and compliance, but they are not enabled by default.

Access patterns for administration. Bastion hosts with SSH open to the internet instead of using AWS Systems Manager Session Manager, GCP Identity-Aware Proxy, or Azure Bastion for secure administrative access without exposing management ports.
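The first of those findings, sensitive ports open to the world, is easy to check mechanically. The sketch below mirrors the rule shape returned by EC2's DescribeSecurityGroups (IpPermissions with FromPort/ToPort/IpRanges); the checker itself is ours, not an AWS tool:

```python
SENSITIVE_PORTS = {22, 3389, 3306, 5432, 27017}  # SSH, RDP, MySQL, Postgres, MongoDB

def exposed_sensitive_ports(group):
    """Return sensitive ports reachable from 0.0.0.0/0 in one security group."""
    exposed = set()
    for rule in group.get("IpPermissions", []):
        if not any(r.get("CidrIp") == "0.0.0.0/0"
                   for r in rule.get("IpRanges", [])):
            continue
        if rule.get("IpProtocol") == "-1":  # all traffic, all ports
            return set(SENSITIVE_PORTS)
        lo, hi = rule.get("FromPort"), rule.get("ToPort")
        if lo is None or hi is None:
            continue
        exposed |= {p for p in SENSITIVE_PORTS if lo <= p <= hi}
    return exposed

group = {"IpPermissions": [
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]}
print(exposed_sensitive_ports(group))  # {22}
```

Port 443 open to the world is expected for a public load balancer; port 22 is the finding. A real audit would also cover IPv6 ranges (Ipv6Ranges with ::/0), which this sketch omits.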


Logging, monitoring, and the gaps nobody checks

Logging is the security control that everyone agrees is important and nobody verifies is working correctly. In a cloud security assessment, we consistently find that logging is only partially configured: enabled in some regions but not all, writing to a destination that can be tampered with, or collecting events that nobody monitors for suspicious activity.

CloudTrail not enabled in all regions. AWS CloudTrail can be configured as a single-region trail or an all-regions trail. A single-region trail only captures API calls made in that specific region. If an attacker creates resources in a region you're not logging, you won't see it. Multi-region trails should be the default, and they should be logging to a dedicated, restricted S3 bucket with object-level logging enabled.

Logs not sent to a tamper-proof destination. If CloudTrail logs are stored in an S3 bucket in the same account, an attacker who compromises that account can delete the logs. Best practice is to send logs to a separate logging account with a bucket policy that prevents deletion, or to use CloudTrail Lake or a SIEM integration that provides immutable storage.
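One concrete hardening step in the same-account case is an explicit deny on deletion in the log bucket's policy. The fragment below is a sketch (the bucket name is a placeholder); it raises the bar but is not a substitute for a separate logging account or S3 Object Lock, since an administrator in the bucket's own account can still rewrite the policy itself.

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyLogDeletion",
    "Effect": "Deny",
    "Principal": "*",
    "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion", "s3:DeleteBucket"],
    "Resource": [
      "arn:aws:s3:::example-cloudtrail-logs",
      "arn:aws:s3:::example-cloudtrail-logs/*"
    ]
  }]
}
```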

No alerting on critical events. Logging without alerting is forensics preparation, not security monitoring. At minimum, you should have alerts for root account usage, IAM policy changes, security group modifications, unauthorized API calls, and console logins from unusual locations. AWS CloudWatch Alarms, GCP Cloud Monitoring, and Azure Monitor can all be configured for these use cases, but they rarely are by default.

Logs not retained long enough for compliance. SOC 2 typically requires one year of audit log retention. ISO 27001 requires log retention appropriate to your risk assessment. Many cloud accounts have default log retention of 90 days or less. If an auditor asks for logs from eight months ago and they've been deleted, that's a compliance finding.

The logging checklist: Is it enabled? In all regions? Is it logging management events and data events? Is the destination tamper-proof? Are you alerting on critical events? Are logs retained long enough for your compliance requirements? If the answer to any of these is "I'm not sure," that's a finding.
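The checklist above can be run mechanically. A minimal sketch for one trail: IsMultiRegionTrail and LogFileValidationEnabled are real DescribeTrails fields, while S3BucketAccount and retention_days are assumptions standing in for your own account inventory and the log bucket's lifecycle configuration.

```python
def audit_trail(trail, home_account, retention_days):
    """Apply the logging checklist to one CloudTrail trail description."""
    findings = []
    if not trail.get("IsMultiRegionTrail"):
        findings.append("single-region trail")
    if not trail.get("LogFileValidationEnabled"):
        findings.append("log file validation disabled")
    # Assumed field: which account owns the destination bucket.
    if trail.get("S3BucketAccount", home_account) == home_account:
        findings.append("logs stored in the account they describe")
    if retention_days < 365:
        findings.append(f"retention {retention_days}d < 365d (SOC 2 baseline)")
    return findings

trail = {"Name": "main", "IsMultiRegionTrail": True,
         "LogFileValidationEnabled": False,
         "S3BucketAccount": "111111111111"}
print(audit_trail(trail, home_account="111111111111", retention_days=90))
# three findings: validation, same-account storage, short retention
```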


Cloud security by provider

Each cloud provider implements security controls differently. The following comparison maps equivalent services and highlights common pitfalls specific to each platform. Understanding these differences matters when your assessment covers multi-cloud environments or when you're evaluating which provider-native tools to enable.

| Capability | AWS | GCP | Azure |
| --- | --- | --- | --- |
| IAM model | Policy-based (JSON policies attached to users, roles, groups) | Role-based (IAM roles bound at org/project/resource level) | RBAC via Azure AD with role assignments at management group, subscription, or resource scope |
| Default encryption | SSE-S3 for S3; EBS encryption opt-in (can enforce via SCP) | All data encrypted at rest by default with Google-managed keys | Azure Storage Service Encryption enabled by default; disk encryption opt-in |
| Logging service | CloudTrail (API), VPC Flow Logs, S3 access logs | Cloud Audit Logs (Admin Activity, Data Access), VPC Flow Logs | Azure Monitor, Activity Log, NSG Flow Logs, Diagnostic Settings |
| Security scanning | AWS Security Hub, Inspector, GuardDuty, Config | Security Command Center (SCC), Security Health Analytics | Microsoft Defender for Cloud, Secure Score, Compliance Manager |
| Network firewall | Security Groups, NACLs, AWS Network Firewall, WAF | VPC Firewall Rules, Cloud Armor, Hierarchical Firewall Policies | NSGs, Azure Firewall, Application Gateway WAF |
| Secrets management | AWS Secrets Manager, Systems Manager Parameter Store | Secret Manager | Azure Key Vault |
| Container security | ECR image scanning, ECS/EKS with Fargate isolation | Artifact Registry scanning, GKE with Workload Identity | ACR image scanning, AKS with Azure AD pod identity |
| Compliance tools | AWS Audit Manager, Config Rules, Artifact | Assured Workloads, Organization Policy constraints | Microsoft Purview Compliance Manager, Azure Policy |
| Common pitfalls | Overly permissive S3 bucket policies, default VPC usage, single-region CloudTrail trails | Primitive roles (Owner/Editor) used instead of predefined roles, default service account over-use | Legacy Classic resources, overly broad Azure AD role assignments, NSG rules with Any/Any |

A cross-provider pattern worth noting: every cloud provider offers a built-in security posture assessment tool (Security Hub, Security Command Center, Defender for Cloud). These tools are often available at no additional cost for basic functionality, yet we frequently find them disabled or unreviewed. Enabling these tools and addressing their high-severity findings is one of the highest-value, lowest-effort improvements you can make.


When to run a cloud security assessment

A cloud security assessment is not a one-time exercise. Your cloud environment changes constantly. New services are deployed, new team members make configuration changes, and new features introduce new security surface. The right cadence depends on your rate of change and your compliance requirements, but there are specific inflection points where an assessment is especially valuable.

After initial cloud migration. You've moved workloads from on-premises or from another provider. The migration was focused on functionality: does everything work? Security configuration was secondary. An assessment immediately after migration catches the shortcuts and compromises that were necessary to hit the migration deadline.

Before a SOC 2 or ISO 27001 audit. Auditors will ask about your cloud security controls. An assessment gives you a chance to find and fix issues before the auditor finds them for you. A finding in your own assessment is a remediation item. A finding in an audit report is a qualification or exception that your customers will see.

After major architecture changes. You moved from EC2 to ECS. You added a new region. You implemented a multi-account strategy. You adopted Terraform or Pulumi for infrastructure as code. Any significant architectural change introduces new configuration that may not match your security baseline.

After a security incident. If you've experienced a breach, a near-miss, or even a suspicious event that turned out to be benign, an assessment validates that the root cause has been addressed and identifies other weaknesses that could lead to similar incidents.

Annually as a baseline. Even without a specific trigger, annual cloud security assessments establish a baseline and track your security posture over time. They're also a common requirement for compliance frameworks and investor due diligence.

After onboarding new infrastructure team members. A new DevOps engineer or platform team member will make configuration changes based on their experience and preferences. An assessment a few months after onboarding validates that those changes align with your security requirements, and it's a non-confrontational way to verify that institutional knowledge was transferred correctly.

A practical rule: If you can't confidently answer "who has access to what, and is everything encrypted and logged?" for every service in your cloud environment, it's time for an assessment.

Get Your Cloud Reviewed Before Auditors Do

We assess AWS, GCP, and Azure environments for the misconfigurations that lead to breaches and compliance failures. Reports formatted for SOC 2 and ISO 27001.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.