The General Data Protection Regulation is one of the most significant pieces of data protection legislation ever enacted, and it has teeth. Since it came into force in May 2018, European data protection authorities have imposed billions of euros in fines. Many of the largest penalties have been for security failures, not just privacy violations. If you process the personal data of EU residents, the security obligations in the GDPR are not optional, and regulators have made it very clear that vague policies and checkbox compliance are not enough.
This post breaks down what the GDPR actually requires from a technical security perspective, what regulators have fined companies for getting wrong, and what your engineering and security teams need to implement in practice.
GDPR Article 32: Security of Processing
Article 32 is the core of the GDPR's security requirements. It is titled "Security of processing" and it places a direct obligation on both data controllers and data processors to implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk. The full text is deliberately broad, but it specifically calls out four measures:
- The pseudonymisation and encryption of personal data.
- The ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services.
- The ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident.
- A process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.
That fourth point is particularly important for security teams. It is a direct mandate for ongoing security testing. Not a one-time audit. Not an annual checkbox. A process for regularly testing and evaluating your security measures. This is the article that gives penetration testing, vulnerability scanning, and security assessments their legal weight under GDPR.
Key principle: Article 32 uses the phrase "appropriate to the risk." This means the GDPR does not prescribe a single set of controls for every organisation. A startup processing email addresses for a newsletter has different obligations than a hospital processing health records. But "appropriate to the risk" is not a loophole. Regulators have consistently held that companies processing large volumes of personal data, sensitive data, or data at scale must implement robust security measures, and they have fined companies that failed to do so.
What "Appropriate Technical and Organisational Measures" Means in Practice
The phrase "appropriate technical and organisational measures" appears throughout the GDPR, and it is intentionally vague. The regulation was designed to be technology-neutral and to survive changes in the threat landscape. But that vagueness can be frustrating for technical teams who need to know what to actually build and deploy.
In practice, regulators and data protection authorities across Europe have established a clear pattern of expectations through their enforcement actions, guidance documents, and decisions. Here is what "appropriate" looks like for most organisations:
Technical Measures
- Encryption of personal data at rest and in transit
- Access controls enforcing the principle of least privilege
- Multi-factor authentication for systems that process personal data
- Network segmentation to isolate systems containing personal data
- Logging and monitoring of access to personal data
- Vulnerability management including regular scanning and patching
- Penetration testing to validate the effectiveness of security controls
- Backup and disaster recovery with tested restoration procedures
- Pseudonymisation and data minimisation where feasible
- Secure development practices including code review and security testing in the SDLC
Organisational Measures
- Information security policies that are documented, communicated, and enforced
- Staff training on data protection and security awareness
- Incident response procedures with defined roles and escalation paths
- Vendor management including security assessments of data processors
- Data Protection Impact Assessments for high-risk processing activities
- Appointed Data Protection Officer where required by Article 37
- Regular audits and reviews of security controls and policies
None of these are explicitly listed as mandatory in Article 32 itself. But enforcement actions have made it clear that regulators expect these measures as a baseline for any organisation processing personal data at any meaningful scale.
Encryption Requirements: At Rest and In Transit
Encryption is one of only two technical measures explicitly named in Article 32 (the other being pseudonymisation). While the GDPR does not mandate encryption in every case, it is one of the clearest and most defensible measures you can implement. Regulators have consistently cited the absence of encryption as a factor in imposing fines.
Encryption in Transit
All data transmitted over networks should be encrypted using TLS 1.2 or higher. This includes:
- All web traffic (HTTPS everywhere, no mixed content)
- API communications between services
- Database connections from application servers
- Email transmission where personal data is included (STARTTLS at minimum)
- File transfers and data synchronisation
- Internal service-to-service communication within your infrastructure
This is not controversial. TLS is table stakes. But regulators have still found companies transmitting personal data over unencrypted channels, including internal networks where "it is behind the firewall" was considered sufficient. It is not.
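Enforcing a TLS floor is a one-line configuration in most stacks. As a minimal sketch using Python's standard library (the URL in the commented usage is a placeholder), a client context that refuses anything below TLS 1.2 looks like this:

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses anything below TLS 1.2."""
    ctx = ssl.create_default_context()            # verifies certificates and hostnames
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3, TLS 1.0 and 1.1
    return ctx

# Usage: an HTTPS request that fails against servers offering only legacy TLS
# import urllib.request
# urllib.request.urlopen("https://example.com", context=strict_tls_context())
```

The same idea applies server-side and to internal service-to-service connections: set the minimum protocol version centrally rather than relying on library defaults, which vary by version.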
Encryption at Rest
Personal data stored in databases, file systems, backups, and archives should be encrypted at rest. The specifics depend on your infrastructure:
- Database encryption: Use transparent data encryption (TDE) or application-level encryption for sensitive fields. At minimum, enable volume-level encryption on your database servers.
- Object storage: Enable server-side encryption on S3 buckets, Azure Blob Storage, or equivalent. Use KMS-managed keys rather than default encryption where possible.
- Backups: Encrypted backups are not optional. If your production database is encrypted but your backups are not, you have a gap that regulators will find.
- Laptops and endpoints: Full-disk encryption (BitLocker, FileVault) on any device that may contain personal data. A lost or stolen unencrypted laptop holding personal data is a reportable breach; an encrypted one with uncompromised keys generally is not.
Practical note: Article 34(3)(a) provides a significant incentive for encryption. If a data breach occurs but the data was encrypted and the keys were not compromised, you may not be required to notify affected individuals. Encryption does not prevent breaches, but it significantly reduces the regulatory and reputational impact when they occur.
Pseudonymisation and Data Minimisation
Pseudonymisation is the second technical measure explicitly named in Article 32. It is the process of replacing directly identifying information with artificial identifiers, so that the data cannot be attributed to a specific individual without additional information that is stored separately.
In practice, this means:
- Tokenisation of personal identifiers in databases, where the mapping between tokens and real identifiers is stored in a separate, access-controlled system
- Hashing of identifiers where lookup is not required (for example, hashing email addresses for analytics matching)
- Separation of identifying and non-identifying data across different databases or schemas, so that a breach of one system does not expose complete personal profiles
- Use of synthetic or anonymised data in development, testing, and staging environments rather than copies of production data
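A common pitfall is pseudonymising with a plain unsalted hash, which attackers can reverse by brute-forcing common email addresses. A keyed hash avoids that. The sketch below assumes the key is fetched from a separate, access-controlled store (a KMS or secrets manager); the hard-coded value is a placeholder:

```python
import hmac
import hashlib

# The key must live in a separate, access-controlled system: whoever holds it
# can re-link tokens to identifiers, so it is the "additional information
# stored separately" that pseudonymisation depends on.
SECRET_KEY = b"replace-with-a-key-from-your-kms"  # placeholder for illustration

def pseudonymise(identifier: str) -> str:
    """Derive a stable token from an identifier using HMAC-SHA256.

    Unlike a bare hash, the keyed construction cannot be brute-forced
    against a dictionary of likely email addresses without the key.
    """
    return hmac.new(SECRET_KEY, identifier.lower().encode(), hashlib.sha256).hexdigest()

# The same input always maps to the same token, so analytics joins still work
token = pseudonymise("alice@example.com")
```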
Data minimisation is a broader GDPR principle (Article 5(1)(c)) that requires you to only collect and retain personal data that is necessary for your stated purpose. From a security perspective, data you do not have cannot be breached. Technical teams should implement:
- Retention policies enforced at the database level, with automated deletion of data that has exceeded its retention period
- Collection limits in forms and APIs that only accept fields necessary for the processing purpose
- Data purging from logs, caches, and temporary storage that might inadvertently retain personal data beyond its useful life
- Regular data audits to identify and remove unnecessary personal data that has accumulated over time
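Retention enforcement works best as a scheduled job against the database itself, not a manual process. A minimal sketch, assuming a `contacts` table with an ISO-8601 `created_at` column and an illustrative 365-day retention period (both are assumptions, not prescriptions):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 365  # illustrative; your figure comes from your retention policy

def purge_expired(conn: sqlite3.Connection) -> int:
    """Delete rows whose created_at has exceeded the retention period."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()
    cur = conn.execute("DELETE FROM contacts WHERE created_at < ?", (cutoff,))
    conn.commit()
    return cur.rowcount  # count of purged records, worth writing to an audit log

# Demo with an in-memory database: one expired row, one current row
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (email TEXT, created_at TEXT)")
old = (datetime.now(timezone.utc) - timedelta(days=400)).isoformat()
new = datetime.now(timezone.utc).isoformat()
conn.executemany("INSERT INTO contacts VALUES (?, ?)", [("old@x", old), ("new@x", new)])
purged = purge_expired(conn)  # removes only the expired row
```

Remember that the same retention logic has to reach backups, logs, and caches, or the deletion is only cosmetic.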
Access Controls and the Principle of Least Privilege
While access controls are not explicitly named in Article 32, they are implied by the requirement to ensure "ongoing confidentiality" of personal data. Regulators have repeatedly cited inadequate access controls as a basis for fines. The principle of least privilege means that every user, system, and process should have only the minimum access necessary to perform its function.
What Regulators Expect
- Role-based access control (RBAC) with clearly defined roles that map to job functions
- Multi-factor authentication on all systems that process personal data, especially for administrative and privileged access
- Regular access reviews to ensure that access rights are current and that former employees, departed contractors, and staff who have changed roles have had their access revoked or adjusted
- Privileged access management with separate administrative accounts, just-in-time access provisioning, and audit logging of all privileged operations
- Password policies that align with current best practices (NIST 800-63B), not the outdated complexity rules that lead to password reuse
- API authentication and authorisation that prevents horizontal and vertical privilege escalation
The Marriott breach is an instructive example. Attackers had access to the Starwood reservation database for nearly four years. The investigation revealed that compromised credentials were not detected because there were insufficient monitoring and access control measures in place. The ICO's fine of 18.4 million pounds reflected these failures directly.
Common gap: Many organisations implement access controls for their production applications but neglect administrative access to infrastructure, databases, and cloud consoles. If a developer has unrestricted read access to your production database via a direct connection, that is a least-privilege violation regardless of how well your application's RBAC works. Regulators evaluate the full chain of access, not just what end users see.
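The full chain of access is easiest to evaluate when every read of personal data funnels through one deny-by-default check covering both the vertical question (does this role carry the permission?) and the horizontal one (does this user belong to the record's tenant?). A minimal sketch with illustrative role and permission names:

```python
# Deny-by-default RBAC: roles map to explicit permissions. Role names,
# permission strings, and the tenant field are illustrative assumptions.
ROLE_PERMISSIONS = {
    "support_agent": {"customer:read"},
    "dba":           {"customer:read", "customer:write", "db:admin"},
    "marketing":     set(),  # no direct access to raw personal data
}

def can(role: str, permission: str) -> bool:
    """Vertical check: unknown roles and unlisted permissions are refused."""
    return permission in ROLE_PERMISSIONS.get(role, set())

def can_view_record(user: dict, record: dict) -> bool:
    """Vertical AND horizontal: role must permit the action, and the user
    must belong to the record's tenant, blocking horizontal escalation."""
    return can(user["role"], "customer:read") and user["tenant"] == record["tenant"]
```

The same check should guard direct database connections and admin consoles, not only the application's request path.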
Data Breach Notification: The 72-Hour Rule
Articles 33 and 34 of the GDPR establish the breach notification requirements. These are among the most operationally demanding aspects of the regulation, and getting them wrong can compound the penalty significantly.
Article 33: Notification to the Supervisory Authority
When a personal data breach occurs, you must notify the relevant supervisory authority (the data protection authority in the EU member state where you have your main establishment) within 72 hours of becoming aware of it. This notification must include:
- The nature of the breach, including the categories and approximate number of data subjects and personal data records affected
- The name and contact details of your Data Protection Officer or other contact point
- A description of the likely consequences of the breach
- A description of the measures taken or proposed to address the breach, including measures to mitigate its possible adverse effects
The 72-hour clock starts when you become "aware" of the breach. Regulators have interpreted this strictly. If your security monitoring should have detected the breach but did not because you lacked adequate logging or monitoring, regulators may consider the clock to have started when you should have become aware, not when you actually did.
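The arithmetic is trivial, but it is worth encoding in your incident tooling so nobody computes the deadline by hand during an incident. Note that the clock runs in wall-clock hours, not business days:

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Article 33 deadline: 72 hours from becoming aware of the breach.

    The clock runs over weekends and holidays, so a breach discovered
    on a Friday evening must be reported by Monday evening.
    """
    return aware_at + NOTIFICATION_WINDOW

aware = datetime(2024, 6, 14, 18, 30, tzinfo=timezone.utc)  # a Friday evening
deadline = notification_deadline(aware)                     # the following Monday, 18:30 UTC
```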
Article 34: Notification to Data Subjects
If the breach is likely to result in a "high risk to the rights and freedoms" of individuals, you must also notify the affected individuals directly, without undue delay. This notification must describe the breach in clear, plain language and include the same information provided to the supervisory authority, along with recommendations for how individuals can protect themselves.
There are three exemptions from individual notification:
- The data was encrypted or otherwise rendered unintelligible to anyone not authorised to access it
- You have taken subsequent measures that ensure the high risk is no longer likely to materialise
- Individual notification would involve disproportionate effort, in which case a public communication is acceptable
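The first two exemptions lend themselves to a simple decision structure in a breach-assessment runbook. This is a sketch of the logic only, not legal advice; a real decision needs counsel and your DPO, and the disproportionate-effort path (which substitutes a public communication) is deliberately left out:

```python
def must_notify_individuals(high_risk: bool, data_encrypted: bool,
                            keys_compromised: bool, risk_mitigated: bool) -> bool:
    """Encode the Article 34 structure for the 34(3)(a) and 34(3)(b) exemptions.

    No high risk means no duty to notify individuals at all. Encrypted data
    with uncompromised keys, or effective subsequent mitigation, lifts the duty.
    """
    if not high_risk:
        return False
    if data_encrypted and not keys_compromised:
        return False  # Article 34(3)(a): data unintelligible to the attacker
    if risk_mitigated:
        return False  # Article 34(3)(b): high risk no longer likely to materialise
    return True
```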
What This Means for Technical Teams
Seventy-two hours is not much time. Your incident response capabilities need to be mature enough to detect, assess, contain, and report a breach within that window. This requires:
- Real-time monitoring and alerting that can detect unauthorised access, data exfiltration, and anomalous behaviour
- Centralised logging with sufficient retention to support forensic investigation
- A documented and tested incident response plan with clear roles, escalation paths, and decision criteria for when to notify
- Breach assessment templates that allow you to quickly categorise the breach and determine the risk to data subjects
- Pre-established relationships with your supervisory authority and legal counsel so that you are not scrambling to figure out who to call at 2am on a Saturday
Enforcement reality: Several significant GDPR fines have included an uplift for delayed notification. If you discover a breach and take months to report it, the fine will reflect that failure on top of whatever security failings caused the breach in the first place. The 72-hour window is taken seriously.
Data Protection Impact Assessments (DPIAs)
Article 35 requires a Data Protection Impact Assessment before you begin any processing that is likely to result in a "high risk to the rights and freedoms" of individuals. A DPIA is a structured assessment that identifies the risks of a processing activity and documents the measures you will take to mitigate those risks.
When Is a DPIA Required?
A DPIA is mandatory when the processing involves:
- Systematic and extensive profiling with significant effects on individuals (for example, automated credit scoring or behavioural advertising at scale)
- Large-scale processing of special categories of data (health data, biometric data, racial or ethnic origin, political opinions, religious beliefs, trade union membership, genetic data, sex life or orientation)
- Systematic monitoring of a publicly accessible area on a large scale (for example, CCTV with facial recognition)
Data protection authorities have published additional lists of processing activities that require a DPIA. As a general rule, if you are processing sensitive personal data at scale, using new technologies, combining datasets in novel ways, or processing data about vulnerable individuals (children, employees, patients), you should conduct a DPIA.
What a DPIA Should Include
- A systematic description of the processing operations and their purposes
- An assessment of the necessity and proportionality of the processing
- An assessment of the risks to the rights and freedoms of data subjects
- The measures you will take to address those risks, including safeguards, security measures, and mechanisms to ensure compliance
For technical teams, the DPIA is where security architecture meets data protection. It forces you to document what data you are collecting, where it flows, how it is stored, who has access, and what controls protect it. If your engineering team is building a new feature that processes personal data, the DPIA should be part of the design process, not an afterthought.
Regularly Testing and Evaluating Security Measures
Article 32(1)(d) requires "a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing." This is the GDPR's direct mandate for ongoing security testing.
Regulators interpret this requirement broadly. It encompasses:
- Penetration testing: Manual, expert-led testing of your applications, APIs, infrastructure, and cloud environments to identify vulnerabilities that automated tools miss
- Vulnerability scanning: Regular automated scanning to identify known vulnerabilities, misconfigurations, and missing patches
- Security audits: Periodic reviews of your security policies, controls, and procedures against established frameworks
- Red team exercises: Simulated attacks that test your detection and response capabilities end-to-end
- Tabletop exercises: Scenario-based walkthroughs of your incident response plan to identify gaps before a real incident exposes them
- Code reviews and static analysis: Security-focused review of application code to identify vulnerabilities before they reach production
The key word in Article 32(1)(d) is "regularly." A single penetration test conducted three years ago does not satisfy this requirement. Regulators expect to see an ongoing programme of security testing with a cadence appropriate to the risk of your processing activities.
Practical recommendation: At minimum, conduct annual penetration testing of your systems that process personal data. Supplement this with quarterly vulnerability scanning and continuous monitoring. If you process sensitive data at scale, consider a more frequent testing cadence or a continuous penetration testing programme that provides ongoing assurance rather than point-in-time snapshots.
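"Regularly" is easy to assert and easy to let slip. A small compliance check that flags overdue testing activities makes the cadence auditable; the cadences below are illustrative defaults matching the recommendation above, not requirements from the regulation:

```python
from datetime import date, timedelta

# Illustrative cadences; derive yours from your own risk assessment.
CADENCE = {
    "pentest":   timedelta(days=365),  # at least annual
    "vuln_scan": timedelta(days=90),   # at least quarterly
}

def overdue(activity: str, last_run: date, today: date) -> bool:
    """Flag a testing activity whose last run exceeds its allowed cadence."""
    return today - last_run > CADENCE[activity]
```

Feeding this from your ticketing or GRC system turns "we test regularly" into a claim you can evidence.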
Cross-Border Data Transfers and Security Implications
Chapter V of the GDPR restricts transfers of personal data outside the European Economic Area (EEA) unless adequate safeguards are in place. While this is primarily a legal and governance issue, it has significant technical security implications.
Adequacy Decisions and Transfer Mechanisms
Personal data can be transferred outside the EEA if the destination country has been granted an adequacy decision by the European Commission, or if appropriate safeguards are in place. The most common safeguards are Standard Contractual Clauses (SCCs) and, for US transfers, the EU-US Data Privacy Framework.
Technical Implications
From a security perspective, cross-border transfers require technical teams to consider:
- Encryption of data in transit between jurisdictions, including point-to-point encryption for particularly sensitive transfers
- Data residency controls to ensure that personal data is stored and processed only in approved locations (for example, configuring your cloud provider to restrict data to EU regions)
- Access controls that restrict which personnel in which locations can access personal data, particularly where transfers are based on SCCs that include supplementary measures
- Transfer impact assessments that evaluate the legal framework in the destination country and determine whether additional technical measures (such as encryption where the keys are held only in the EEA) are necessary
- Audit logging of cross-border access so that you can demonstrate compliance with transfer restrictions
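Residency controls are worth verifying continuously, not just at design time. As a sketch, a check that compares resource regions against an EEA allowlist; the region names follow AWS naming purely for illustration, and the allowlist itself is an assumption you must derive from your own transfer assessments (note that a UK region such as eu-west-2 is not in the EEA):

```python
# Approved EEA regions (illustrative: Ireland, Frankfurt, Paris, Stockholm, Milan).
EEA_REGIONS = {"eu-west-1", "eu-central-1", "eu-west-3", "eu-north-1", "eu-south-1"}

def non_compliant(buckets: dict[str, str]) -> list[str]:
    """Return bucket names whose region falls outside the approved EEA set."""
    return sorted(name for name, region in buckets.items() if region not in EEA_REGIONS)

violations = non_compliant({
    "customer-exports": "eu-west-1",  # Ireland: approved
    "analytics-dump":   "us-east-1",  # outside the EEA: flagged
})
```

In practice you would feed this from your cloud provider's inventory API and alert on any non-empty result.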
The Schrems II decision invalidated the EU-US Privacy Shield and raised the bar for supplementary measures required for cross-border transfers. If your infrastructure spans multiple regions, your technical architecture needs to reflect the legal constraints on where personal data can be stored, accessed, and processed. This is an area where ISO 27001 certification can help demonstrate the security posture that regulators and business partners expect for cross-border data handling.
Notable GDPR Fines for Security Failures
The enforcement record tells you more about what regulators actually expect than the text of the regulation itself. Here are three of the most significant fines imposed for security failings under GDPR.
Meta (Facebook) - Ireland
While the headline fine related to cross-border data transfers to the US without adequate safeguards, the Irish Data Protection Commission's decision highlighted the inadequacy of Meta's technical measures to protect personal data transferred internationally. The case underscored that technical controls for cross-border data protection are not optional supplementary measures but core compliance obligations. Meta was also fined 265 million euros in a separate case for a data scraping breach that exposed personal data of over 533 million users, with the DPC citing insufficient technical measures to prevent unauthorised harvesting of publicly accessible data.
British Airways - United Kingdom
Attackers compromised British Airways' website and mobile application through a supply chain attack, injecting malicious code that skimmed payment card data from approximately 400,000 customers over a two-month period. The ICO found that BA had failed to implement appropriate security measures including multi-factor authentication, timely patching and updating of systems, rigorous testing of the payment environment, and adequate monitoring and logging to detect the attack. The fine was originally proposed at 183 million pounds but was reduced to 20 million pounds, in part reflecting the economic impact of COVID-19. The ICO specifically noted that the measures BA lacked were "not cutting-edge technology" but rather "basic security measures."
Marriott International - United Kingdom
The Starwood reservation database was compromised in 2014, two years before Marriott acquired Starwood. The breach was not discovered until 2018 and affected approximately 339 million guest records worldwide. The ICO found that Marriott failed to undertake sufficient due diligence during the acquisition, did not implement adequate security measures for the acquired systems, and failed to monitor the compromised systems effectively. The case established that acquiring companies inherit the data protection obligations and security liabilities of their targets, and that inadequate post-acquisition security integration is an enforcement risk.
The pattern across these cases is clear. Regulators are not looking for exotic technical failures. They are looking at whether organisations implemented basic, widely-accepted security measures: encryption, access controls, monitoring, patching, and testing. The companies that received the largest fines were not victims of sophisticated zero-day attacks. They failed to implement measures that were well within their technical and financial capacity.
GDPR vs SOC 2 vs ISO 27001: Overlap and Differences
Many organisations pursuing GDPR compliance are also dealing with SOC 2 and ISO 27001. Understanding where these frameworks overlap and diverge helps you avoid duplicating effort and identify gaps in your security programme.
GDPR
Type: Legal regulation
Scope: Personal data of EU residents
Focus: Data protection rights and security
Enforcement: Government regulators with fining power
Certification: No formal certification; compliance is demonstrated through documentation, DPIAs, and audit trails
SOC 2
Type: Audit framework
Scope: Service organisation controls
Focus: Trust Service Criteria (security, availability, processing integrity, confidentiality, privacy)
Enforcement: Market-driven; customers require it
Certification: Attestation report from a licensed CPA firm
ISO 27001
Type: International standard
Scope: Information security management system (ISMS)
Focus: Systematic management of information security risks
Enforcement: Market-driven; customers and partners require it
Certification: Formal certification from accredited certification body
Where They Overlap
All three frameworks require:
- Risk assessment processes
- Access controls and authentication
- Encryption of data
- Incident response and breach management
- Vendor and third-party risk management
- Regular security testing and monitoring
- Documented policies and procedures
- Employee training and awareness
Key Differences
- GDPR is law; SOC 2 and ISO 27001 are frameworks. Non-compliance with GDPR carries legal penalties. SOC 2 and ISO 27001 are voluntary (though often commercially required). You can lose a SOC 2 report or ISO 27001 certificate and face business consequences, but not regulatory fines for the framework itself.
- GDPR has specific data subject rights (access, erasure, portability, objection) that SOC 2 and ISO 27001 do not directly address. These are legal obligations that require both technical implementation and process support.
- SOC 2 is attestation-based and results in a report from a CPA firm. ISO 27001 results in a certificate from an accredited certification body. GDPR has no formal certification mechanism, though certifications under Article 42 are emerging.
- ISO 27001 Annex A provides 93 specific controls that map well to GDPR's technical requirements. Many organisations use ISO 27001 as the operational framework for demonstrating GDPR security compliance, particularly for SaaS companies expanding into European markets.
Strategic approach: If you are building a compliance programme from scratch, ISO 27001 provides the most structured foundation for satisfying GDPR's security requirements. Its Annex A controls map directly to most of what GDPR Article 32 requires. SOC 2 adds value for US-market sales. And GDPR compliance layers on top with the specific data subject rights, breach notification, and cross-border transfer requirements that the other two frameworks do not fully address.
How Penetration Testing Supports GDPR Compliance
Penetration testing directly addresses several GDPR requirements and provides evidence that regulators value during investigations and enforcement proceedings.
Direct Compliance Support
- Article 32(1)(d): Penetration testing is one of the most direct ways to satisfy the requirement for "regularly testing, assessing and evaluating the effectiveness of technical and organisational measures." A professional pentest report provides documented evidence that you are actively testing your security controls.
- Article 32(1)(b): Testing validates the "ongoing confidentiality, integrity, availability and resilience" of your systems by identifying vulnerabilities that could compromise these properties before an attacker does.
- Article 5(1)(f): The integrity and confidentiality principle requires appropriate security. Penetration testing demonstrates that you are proactively verifying your security rather than simply assuming it is adequate.
Indirect Compliance Benefits
- DPIA input: Penetration test findings inform your Data Protection Impact Assessments by identifying real security risks to personal data, not theoretical ones.
- Breach prevention: The most effective way to comply with breach notification requirements is to prevent breaches in the first place. Penetration testing identifies and helps remediate vulnerabilities before they are exploited.
- Enforcement mitigation: In enforcement proceedings, regulators consider whether an organisation took "appropriate" measures. A history of regular penetration testing and documented remediation demonstrates a proactive security posture that regulators view favourably. It will not prevent a fine, but it can significantly reduce one.
- Vendor assurance: If you are a data processor, your controllers are required to verify that you have adequate security measures. Penetration test reports are one of the most requested forms of evidence in vendor security assessments.
What to Test for GDPR Purposes
When scoping a penetration test with GDPR compliance in mind, ensure the scope covers:
- All systems that process, store, or transmit personal data
- Authentication and authorisation controls (can an attacker access personal data they should not?)
- API endpoints that handle personal data (input validation, access controls, rate limiting)
- Data export and deletion functionality (are data subject rights implemented securely?)
- Administrative interfaces and privileged access paths
- Cloud infrastructure and configuration (are storage buckets, databases, and services properly secured?)
- Third-party integrations that handle personal data
Practical Checklist for GDPR Security Compliance
Use this checklist to assess your organisation's readiness for GDPR's security requirements. This is not exhaustive, but it covers the controls that regulators have consistently cited in enforcement actions.
Encryption and Data Protection
- TLS 1.2 or higher enforced on all external and internal communications
- Personal data encrypted at rest in all databases, file storage, and backups
- Encryption keys managed through a dedicated key management service, rotated regularly
- Pseudonymisation applied where feasible, especially in analytics and non-production environments
- Full-disk encryption on all endpoints (laptops, workstations) that may contain personal data
Access Controls
- Role-based access control implemented across all systems processing personal data
- Multi-factor authentication enforced for all user and administrative access
- Access reviews conducted at least quarterly, with evidence of revocation and adjustment
- Privileged access managed through dedicated PAM solution with just-in-time provisioning
- API endpoints enforce authorisation checks that prevent horizontal and vertical privilege escalation
- Former employees and contractors have access revoked within 24 hours of departure
Monitoring and Incident Response
- Centralised logging of authentication events, data access, and administrative actions
- Real-time alerting on anomalous access patterns and potential data exfiltration
- Documented incident response plan with clear roles, escalation paths, and 72-hour notification workflow
- Incident response plan tested through tabletop exercises at least annually
- Breach assessment templates prepared for rapid classification and risk evaluation
- Contact details for relevant supervisory authorities and legal counsel documented and accessible
Security Testing
- Annual penetration testing of all systems that process personal data
- Quarterly vulnerability scanning with documented remediation timelines
- Critical and high-severity vulnerabilities remediated within defined SLAs
- Security testing integrated into the SDLC (code review, SAST, DAST)
- Third-party penetration testing by qualified, independent testers
- Remediation verified through retesting, not just ticket closure
Data Governance
- Data processing inventory (Article 30 Record of Processing Activities) maintained and current
- Data retention policies enforced at the technical level with automated deletion
- Data subject rights (access, erasure, portability) implemented and tested
- DPIAs conducted for high-risk processing activities before processing begins
- Data processor agreements in place with all third parties that process personal data
- Cross-border transfer mechanisms documented with supplementary technical measures where required
Organisational Measures
- Information security policies documented, approved by management, and communicated to all staff
- Security awareness training delivered at least annually, with records maintained
- Data Protection Officer appointed where required, with adequate resources and independence
- Vendor security assessments conducted before onboarding and reviewed periodically
- Business continuity and disaster recovery plans tested and documented
Get GDPR-Ready with Confidence
Our penetration testing and security assessments are designed to help you meet GDPR Article 32 requirements. Detailed findings, actionable remediation guidance, and reports that demonstrate your commitment to data protection.
View Our Services | Book a Consultation