If you build software that touches patient data, you have heard the word HIPAA more times than you can count. You probably know you need to be "HIPAA compliant." You may have even purchased a penetration test because someone told you it was required. But here is the uncomfortable truth: HIPAA compliance is far broader than a pentest, and most healthcare startups are leaving major gaps in their security programs because they do not understand what the Security Rule actually requires.
This guide breaks down the HIPAA Security Rule in terms your engineering team can act on. We will cover what the regulation actually says about security testing, why a penetration test alone is not sufficient, where healthcare companies consistently fail, and what you need to do to protect PHI and satisfy regulators.
The HIPAA Security Rule: What It Actually Says
The HIPAA Security Rule (45 CFR Part 164, Subparts A and C) establishes national standards for protecting electronic protected health information, or ePHI. Unlike PCI-DSS, which gives you a specific checklist of controls, HIPAA is deliberately flexible. It requires covered entities and business associates to implement safeguards that are "reasonable and appropriate" given the organization's size, complexity, capabilities, and the risks to ePHI.
The Security Rule organizes its requirements into three categories of safeguards: administrative, physical, and technical. Each category contains a mix of required and addressable implementation specifications. "Required" means you must implement the safeguard. "Addressable" is a separate, widely misunderstood category:
Critical distinction: "Addressable" in HIPAA does not mean "optional." It means you must evaluate the specification, determine if it applies to your environment, and either implement it, implement an equivalent measure, or document why neither is reasonable. Ignoring addressable specifications without documented rationale is a compliance violation.
Administrative Safeguards (164.308)
Administrative safeguards are the policies, procedures, and organizational measures that manage the security of ePHI. They include risk analysis, risk management, workforce security, information access management, security awareness training, security incident procedures, contingency planning, and evaluation. For most healthcare startups, this is where the biggest gaps exist. You may have strong encryption and access controls in your application, but if you have not conducted a formal risk assessment or do not have documented incident response procedures, you are out of compliance.
Physical Safeguards (164.310)
Physical safeguards address the physical access to systems and facilities where ePHI is stored or processed. This includes facility access controls, workstation use and security policies, and device and media controls. In a cloud-native world, you may assume physical safeguards are your cloud provider's problem. For the infrastructure layer, that is partially true if your cloud provider has a BAA in place. But physical safeguards also apply to your offices, employee workstations, and any devices that access ePHI. A developer laptop with unencrypted storage and no screen lock policy is a physical safeguard failure.
Technical Safeguards (164.312)
Technical safeguards are the technology controls that protect ePHI. These are where your engineering team will spend the most time and where security testing has the most direct impact. We will do a deep dive into these in a later section, but they cover access controls, audit controls, integrity controls, person or entity authentication, and transmission security.
What HIPAA Actually Requires for Security Testing
Here is where most healthcare companies get confused. HIPAA does not contain a line item that says "conduct an annual penetration test." What it does require, explicitly and without ambiguity, is a risk analysis and risk management process.
The Risk Analysis Requirement (164.308(a)(1)(ii)(A))
This is the single most important requirement in the Security Rule, and it is the one that OCR (the Office for Civil Rights, which enforces HIPAA) cites most frequently in enforcement actions. The regulation requires you to "conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the covered entity or business associate."
This is not a vulnerability scan. It is not a penetration test. It is a comprehensive risk assessment that examines your entire environment: your applications, infrastructure, processes, people, and vendors. It identifies threats, evaluates vulnerabilities, assesses the likelihood and impact of potential incidents, and determines what safeguards are in place to mitigate those risks.
The Evaluation Requirement (164.308(a)(8))
HIPAA also requires periodic "technical and nontechnical evaluation" of your security controls. This is where penetration testing and vulnerability assessments fit into the regulatory framework. The evaluation must be performed in response to environmental or operational changes and should assess whether your security policies and procedures meet the requirements of the Security Rule. A penetration test is one form of technical evaluation, but it is not the only form required.
What HIPAA Requires
A comprehensive risk analysis covering all ePHI, documented risk management plans, periodic technical and nontechnical evaluations, ongoing monitoring of security controls, and documented policies for every safeguard. This is a continuous program, not a point-in-time test.
What Most Companies Do
Run an annual penetration test, file the report, and call it compliant. No formal risk assessment. No documented risk management plan. No evaluation of administrative or physical safeguards. No ongoing monitoring. This approach leaves massive gaps that OCR will find.
HIPAA Compliance vs. Actual Security
This is a distinction that matters enormously in healthcare. Being HIPAA compliant means you have implemented the safeguards required by the Security Rule and can demonstrate compliance through documentation, risk assessments, and evidence of controls. Being actually secure means your systems, processes, and people can withstand real-world attacks and protect patient data from compromise.
These two things overlap, but they are not the same. We have seen organizations that pass HIPAA audits with flying colors but have critical vulnerabilities in their applications. We have also seen organizations with excellent security engineering practices that fail HIPAA evaluations because they lack the required documentation and formal processes.
The goal should be both. HIPAA provides a useful framework for thinking about security comprehensively. But if you treat compliance as a checkbox exercise, you will have the documentation without the protection. And when a breach happens, OCR will look not just at whether you had policies in place, but at whether your security measures were actually reasonable and appropriate given the risks.
The OCR test: After a breach, OCR does not just ask "were you compliant?" They ask "were your security measures reasonable and appropriate?" A stack of policies that nobody follows and a pentest report that nobody acts on will not help you. OCR looks at whether you actually implemented and maintained effective safeguards, not just whether you had the right paperwork.
Technical Safeguards Deep Dive
The technical safeguards under 164.312 are the controls that your engineering team is directly responsible for. Let us break down each one and what it means in practice for a modern healthcare application.
Access Controls (164.312(a)(1))
HIPAA requires that you implement technical policies and procedures for systems that maintain ePHI to allow access only to authorized persons or software programs. The implementation specifications include:
- Unique user identification (required): Every user must have a unique identifier. No shared accounts. No generic "admin" logins. Every action taken on ePHI must be traceable to a specific individual. This sounds basic, but we routinely find shared service accounts in healthcare applications that access patient data.
- Emergency access procedure (required): You must have procedures for obtaining access to ePHI during an emergency. This means break-glass accounts, emergency access protocols, and documented procedures for when your normal authentication systems are unavailable.
- Automatic logoff (addressable): Sessions that access ePHI should terminate after a period of inactivity. The timeout period should be based on risk. A clinician workstation in a shared area needs a shorter timeout than an admin portal accessed from a secured office.
- Encryption and decryption (addressable): ePHI at rest should be encrypted. While this is technically "addressable," it is nearly impossible to justify not encrypting ePHI in 2026. If you are storing patient data in plaintext, you need an extraordinarily compelling documented rationale, and you almost certainly do not have one.
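The automatic-logoff specification above reduces to a simple inactivity check. A minimal Python sketch, with an illustrative `SESSION_TIMEOUT` value and a hypothetical `Session` class (the real timeout should come from your documented risk assessment, not from a blog post):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative value only; derive the actual timeout from your risk assessment.
SESSION_TIMEOUT = timedelta(minutes=15)

@dataclass
class Session:
    user_id: str
    last_activity: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def is_expired(self, now: Optional[datetime] = None) -> bool:
        """True once the session has been idle longer than the timeout."""
        now = now or datetime.now(timezone.utc)
        return now - self.last_activity > SESSION_TIMEOUT

    def touch(self) -> None:
        """Record user activity, resetting the inactivity clock."""
        self.last_activity = datetime.now(timezone.utc)
```

In a real application the expiry check runs server-side on every request that touches ePHI; a client-side timer alone does not satisfy the safeguard.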
Audit Controls (164.312(b))
You must implement hardware, software, and procedural mechanisms that record and examine activity in systems that contain or use ePHI. In practice, this means comprehensive logging of who accessed what ePHI, when, and what they did with it. This is not just application-level logging. It includes database access logs, API access logs, administrative access to infrastructure, and file-level access to any system containing ePHI.
The audit controls requirement is where many healthcare startups fall short. They log application events but not database queries. They track user logins but not data exports. They monitor production but not staging environments that contain copies of real patient data. A penetration test can validate whether your audit controls detect unauthorized access, but the controls need to exist first.
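In practice, audit controls start with a structured event for every ePHI access. A hedged sketch using Python's standard `logging` module; the logger name, field names, and function signature here are illustrative, not a prescribed schema:

```python
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("phi.audit")

def record_phi_access(user_id: str, patient_id: str, resource: str,
                      action: str) -> dict:
    """Emit a structured audit event for an ePHI access.

    Captures who, what, when, and the action taken, so every access is
    traceable to a specific individual, per 164.312(b).
    """
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "patient_id": patient_id,  # identifier only; never log clinical data
        "resource": resource,
        "action": action,
    }
    audit_log.info(json.dumps(event))
    return event
```

The same event shape should be emitted from every layer that touches ePHI: the application, the database access path, and administrative tooling, not just the web tier.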
Integrity Controls (164.312(c)(1))
You must implement policies and procedures to protect ePHI from improper alteration or destruction. The addressable implementation specification calls for electronic mechanisms to corroborate that ePHI has not been altered or destroyed in an unauthorized manner. In practice, this means checksums or hashing for data integrity validation, database integrity constraints, backup verification procedures, and controls that prevent unauthorized modification of patient records.
Person or Entity Authentication (164.312(d))
You must implement procedures to verify that a person or entity seeking access to ePHI is the one claimed. This goes beyond username and password. For systems that access ePHI, you should be implementing multi-factor authentication, certificate-based authentication for system-to-system communication, and strong identity verification for password resets and account recovery. MFA is not technically "required" by the letter of the regulation, but try explaining to OCR after a breach that you decided single-factor authentication was reasonable and appropriate for accessing patient records.
Transmission Security (164.312(e)(1))
You must implement technical security measures to guard against unauthorized access to ePHI being transmitted over an electronic communications network. The addressable specifications include integrity controls for transmitted data and encryption. Again, while technically addressable, transmitting ePHI without encryption in 2026 is indefensible. TLS 1.2 or higher for data in transit is the baseline. API communications, email containing PHI, file transfers, and any other transmission of ePHI must be encrypted.
Penetration Testing for HIPAA: What to Scope
While a pentest alone does not satisfy HIPAA's security testing requirements, it is a critical component of your technical evaluation. Here is how to scope a penetration test that actually serves your HIPAA compliance needs.
Follow the PHI Data Flow
The most important scoping exercise for a HIPAA pentest is mapping your PHI data flows. Where does ePHI enter your system? Where is it stored? Where is it processed? Where is it transmitted? Who and what has access to it? Your pentest scope should cover every system, application, and network segment that touches ePHI at any point in its lifecycle.
This means testing:
- Patient-facing applications: Patient portals, telehealth platforms, mobile health apps, intake forms, and any interface where patients enter or view their health information.
- Clinical applications: EHR systems, clinical decision support tools, lab result systems, imaging systems, and any application used by healthcare providers to create, view, or modify patient records.
- APIs and integrations: HL7 FHIR endpoints, EHR integration APIs, third-party data exchanges, lab interfaces, pharmacy systems, and any API that transmits or receives ePHI.
- Infrastructure: Cloud environments (AWS, Azure, GCP) where ePHI is stored or processed, databases containing patient data, backup systems, and any infrastructure component in the ePHI data flow.
- Internal systems: Admin panels, billing systems, analytics platforms, and any internal tool that accesses ePHI. These are often less hardened than patient-facing applications and are where we find some of the most critical vulnerabilities.
BAA Requirements and Testing Boundaries
If you are a business associate (which most health tech companies are), your Business Associate Agreement with covered entities may include specific security testing requirements. Review your BAAs before scoping a pentest. Some covered entities require their business associates to conduct annual penetration testing, share results, and remediate findings within specific timeframes. Your pentest scope should satisfy any BAA obligations in addition to your own compliance needs.
Additionally, if your application integrates with a covered entity's systems, you need to coordinate testing boundaries. You cannot pentest a hospital's EHR system without their explicit authorization. Define clear rules of engagement that specify which systems are in scope for testing and which are off-limits.
Testing PHI-Specific Controls
A HIPAA-focused penetration test should specifically evaluate controls that are unique to healthcare environments:
- Role-based access to PHI: Can a standard user escalate privileges to access patient records they should not see? Can a nurse account access billing data? Can a billing user access clinical notes?
- PHI in logs and error messages: Does the application leak patient data in error messages, stack traces, debug output, or application logs? This is one of the most common findings in healthcare applications.
- Data segregation: In multi-tenant environments, can one organization's users access another organization's patient data? Tenant isolation failures in healthcare are breach-reportable events.
- PHI in non-production environments: Is real patient data being used in development, staging, or QA environments? Are those environments secured to the same standard as production?
- De-identification effectiveness: If you claim data is de-identified under the Safe Harbor or Expert Determination methods, can the pentest team re-identify patients from the available data?
- Audit trail integrity: Can a user modify or delete audit logs that record their access to ePHI? Can they access ePHI without generating a log entry?
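The audit-trail-integrity checks above are much easier to pass if the trail is tamper-evident by construction. One approach is hash chaining, sketched here as a simplified in-memory model (a production trail would live in append-only storage, not a Python list):

```python
import hashlib
import json

class AuditChain:
    """Append-only, tamper-evident audit trail.

    Each entry's hash covers the previous entry's hash, so altering or
    deleting any historical entry breaks verification of everything after it.
    """

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> dict:
        payload = json.dumps({"prev": self._last_hash, "event": event},
                             sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        entry = {"event": event, "prev": self._last_hash, "hash": entry_hash}
        self.entries.append(entry)
        self._last_hash = entry_hash
        return entry

    def verify(self) -> bool:
        """Recompute every hash; False if any entry was altered or re-ordered."""
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps({"prev": prev, "event": entry["event"]},
                                 sort_keys=True)
            if entry["prev"] != prev or \
                    hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```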
Common HIPAA Security Failures in Healthcare Startups
After conducting security assessments for dozens of healthcare startups and digital health companies, these are the failures we see most consistently. None of them are exotic. All of them are preventable. And every single one of them has resulted in OCR enforcement actions against other organizations.
- No formal risk assessment. This is the number one finding in OCR investigations. The organization has never conducted a comprehensive, documented risk assessment covering all ePHI. They may have run vulnerability scans or conducted a pentest, but they have not done the systematic risk analysis that HIPAA requires. Without a risk assessment, you cannot demonstrate that your safeguards are reasonable and appropriate because you have not formally identified what you are protecting against.
- PHI in application logs. The application logs patient names, dates of birth, medical record numbers, or other PHI in application logs, error messages, or debug output. These logs are often stored in systems with weaker access controls than the production database and may be accessible to developers, support staff, or third-party logging services without BAAs in place.
- Real PHI in non-production environments. Development, staging, and QA environments contain copies of real patient data. These environments typically have weaker access controls, no audit logging, and broader developer access. Using synthetic or properly de-identified data in non-production environments is a basic hygiene step that many startups skip.
- Missing or incomplete BAAs. The organization uses third-party services that process ePHI (cloud providers, analytics tools, email services, logging platforms) without Business Associate Agreements in place. Every vendor that creates, receives, maintains, or transmits ePHI on your behalf must have a signed BAA. No exceptions.
- Inadequate access controls for APIs. The application has a well-secured front end but exposes API endpoints that return ePHI without proper authorization checks. Broken object-level authorization (BOLA) in healthcare APIs means one patient can view another patient's records by manipulating request parameters. This is a breach.
- No encryption at rest. Patient data is stored in databases or file systems without encryption. While encryption at rest is technically an "addressable" specification, there is no reasonable justification for not encrypting ePHI in modern cloud environments where encryption is often a single configuration toggle.
- Session management failures. Sessions do not expire after reasonable inactivity periods. Session tokens are predictable or insufficiently random. Users remain authenticated after password changes. These failures allow unauthorized access to ePHI through abandoned sessions or stolen tokens.
- No incident response plan. The organization has no documented procedures for responding to a security incident involving ePHI. HIPAA requires security incident procedures (164.308(a)(6)), and the Breach Notification Rule requires notification within 60 days. Without a plan, the response is chaotic, slow, and likely to violate notification timelines.
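Several of these failures, especially PHI in logs, can be caught mechanically. As one illustration, a Python `logging` filter that masks PHI-like patterns before a record reaches any handler; the patterns here are examples and must be tuned to the identifiers your own systems actually emit:

```python
import logging
import re

# Illustrative patterns only; extend for the MRN formats, member IDs, and
# other identifiers that appear in your environment.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),       # US SSN
    (re.compile(r"\bMRN[:\s]*\d+\b", re.I), "[MRN]"),      # medical record no.
    (re.compile(r"\b\d{4}-\d{2}-\d{2}\b"), "[DATE]"),      # ISO dates (DOB)
]

class PHIScrubFilter(logging.Filter):
    """Logging filter that masks PHI-like patterns before a record is emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        for pattern, replacement in PHI_PATTERNS:
            msg = pattern.sub(replacement, msg)
        record.msg, record.args = msg, ()
        return True
```

Attach the filter to the root logger so third-party libraries are scrubbed too; regex scrubbing is a safety net, not a substitute for never passing PHI to loggers in the first place.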
Health Tech Specific Risks
The healthcare technology landscape has evolved rapidly, and each category of health tech brings its own security challenges. Here are the risk areas we focus on when assessing modern healthcare applications.
Telehealth Platforms
Telehealth exploded during the COVID-19 pandemic and has remained a standard delivery model. The security risks are significant. Video consultations transmit ePHI in real time and must be encrypted end-to-end. Session recordings, if retained, are ePHI and must be stored and protected accordingly. Chat functionality within telehealth sessions often logs PHI in plaintext. Screen sharing can inadvertently expose other patients' records. And the platforms themselves often use WebRTC or similar protocols that can leak IP addresses and other metadata.
We regularly find that telehealth platforms have strong encryption for the video stream itself but weak security around the supporting infrastructure: appointment scheduling, patient intake forms, chat logs, and session metadata.
EHR Integrations
If your product integrates with Electronic Health Record systems, you are handling some of the most sensitive data in healthcare. EHR integrations via HL7 FHIR, HL7 v2, or proprietary APIs create complex data flows that are difficult to secure. Common issues include overly broad FHIR scopes that request more patient data than necessary, insecure token storage for OAuth-based EHR connections, lack of data minimization in API responses, and insufficient logging of data access through integration endpoints.
The 21st Century Cures Act and its information blocking provisions add another layer of complexity. You need to support data interoperability while maintaining security. These goals are not in conflict, but they require careful implementation.
Patient Portals
Patient portals are the most common attack surface in healthcare applications. They are internet-facing, they handle authentication for non-technical users, and they provide direct access to ePHI. The most common vulnerabilities we find in patient portals include weak password policies (or no password requirements at all for initial registration), insecure account recovery flows that can be exploited to take over other patients' accounts, broken access controls that allow patients to view other patients' records, insufficient rate limiting on authentication endpoints, and PHI exposure in URL parameters that get logged by web servers and analytics tools.
Mobile Health Apps
Mobile health applications introduce risks that do not exist in web applications. Data stored on the device may persist after the user logs out. Push notifications can display PHI on lock screens. The application may cache ePHI in ways that survive app deletion. Biometric authentication may fall back to device passcode, which may be a four-digit PIN. And the app may communicate with backend APIs over networks that the user controls, making man-in-the-middle attacks a real concern if certificate pinning is not implemented.
For a detailed look at mobile application security testing, see our guide on mobile app security testing.
OCR Enforcement Trends and Recent Penalties
Understanding how the Office for Civil Rights enforces HIPAA helps you prioritize your security investments. OCR has become increasingly aggressive in recent years, and the penalties are substantial.
Enforcement by the Numbers
Since the HIPAA enforcement program began, OCR has collected over $140 million in penalties and settlements. The trend is clear: penalties are getting larger, investigations are getting more thorough, and OCR is targeting organizations of all sizes, not just large health systems. Small practices and health tech startups are not exempt.
The most common findings in OCR enforcement actions are:
- Failure to conduct a risk analysis: this appears in the vast majority of OCR settlements. It is the single most cited deficiency.
- Failure to manage identified risks: conducting a risk assessment but not implementing a risk management plan to address the findings.
- Insufficient access controls: failing to limit access to ePHI to only those who need it for their job functions.
- Lack of encryption: storing or transmitting ePHI without encryption, particularly on portable devices and in email.
- Missing BAAs: using vendors that access ePHI without signed Business Associate Agreements.

The Right of Access Initiative
OCR has also been aggressively enforcing patients' right to access their own health information. While this is not directly a security testing issue, it affects how your application handles patient data requests. Your systems must be able to provide patients with their ePHI in a timely manner and in the format they request. Failing to do so has resulted in penalties ranging from $15,000 to over $200,000.
State Attorney General Enforcement
In addition to OCR, state attorneys general have the authority to bring civil actions for HIPAA violations on behalf of state residents. Several states have exercised this authority, and state-level enforcement adds another layer of regulatory risk. Some states also have their own health data privacy laws that impose requirements beyond HIPAA, including state breach notification laws with shorter notification timelines than HIPAA's 60-day window.
The cost of non-compliance: HIPAA penalties range from $100 to $50,000 per violation, with annual caps up to $1.5 million per violation category. But the real cost is broader: breach notification expenses, credit monitoring for affected patients, legal fees, reputational damage, and lost business. For a health tech startup, a significant HIPAA breach can be an extinction-level event.
The Practical HIPAA Security Checklist
Here is a practical checklist for healthcare companies that want to move beyond checkbox compliance and build a security program that actually protects patient data. This is not an exhaustive list of every HIPAA requirement, but it covers the areas where we see the most failures and the highest risk.
Risk Assessment and Management
- Conduct a comprehensive risk assessment covering all systems, applications, and processes that handle ePHI. Document it thoroughly.
- Create a risk management plan that addresses each identified risk with specific safeguards, responsible owners, and implementation timelines.
- Review and update the risk assessment at least annually and whenever significant changes occur to your environment or operations.
- Maintain a complete inventory of all systems that create, receive, store, or transmit ePHI.
- Document your risk acceptance decisions with clear rationale for any risks you choose to accept rather than mitigate.
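A lightweight way to make the risk management items above actionable is a scored risk register. A sketch using a conventional likelihood-times-impact score; the 1-5 scales and example threats are illustrative, not a mandated methodology:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    threat: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (catastrophic)

    @property
    def score(self) -> int:
        """Simple likelihood x impact score for triage ordering."""
        return self.likelihood * self.impact

def prioritize(register: list[Risk]) -> list[Risk]:
    """Order the register so the highest-scoring risks are addressed first."""
    return sorted(register, key=lambda r: r.score, reverse=True)
```

Whatever scoring scheme you pick, what OCR looks for is the documented output: each risk, its score, the safeguard chosen, an owner, and a timeline.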
Technical Controls
- Encrypt all ePHI at rest using AES-256 or equivalent. This includes databases, file storage, backups, and any other persistent storage.
- Encrypt all ePHI in transit using TLS 1.2 or higher. No exceptions for internal network traffic.
- Implement multi-factor authentication for all systems that access ePHI. MFA should be enforced, not optional.
- Enforce role-based access controls with the principle of least privilege. Users should access only the minimum ePHI required for their function.
- Implement comprehensive audit logging that records all access to ePHI, including who accessed it, when, what they accessed, and what actions they took.
- Configure automatic session timeouts for all applications that access ePHI. Timeouts should be based on risk assessment.
- Implement unique user identifiers. Eliminate shared accounts and generic logins.
- Deploy integrity controls to detect unauthorized modification of ePHI.
Security Testing
- Conduct annual penetration testing covering all systems in the ePHI data flow. Scope should include application, API, and infrastructure testing.
- Perform quarterly vulnerability scans of all systems that process ePHI.
- Test PHI-specific controls: role-based access, data segregation, audit trail integrity, and de-identification effectiveness.
- Validate that security controls detect and alert on unauthorized access attempts.
- Remediate critical and high-severity findings within 30 days. Document remediation with evidence of retesting.
- Conduct security code reviews for applications that handle ePHI, particularly before major releases.
Policies and Procedures
- Maintain documented security policies covering all HIPAA Security Rule requirements.
- Implement an incident response plan specific to ePHI breaches, including breach determination procedures and notification timelines.
- Establish a contingency plan covering data backup, disaster recovery, and emergency mode operations.
- Create and enforce a sanctions policy for workforce members who violate security policies.
- Document all BAAs with vendors that access ePHI. Maintain a current vendor inventory with BAA status.
- Conduct regular security awareness training for all workforce members who handle ePHI.
Operational Security
- Use synthetic or properly de-identified data in development, staging, and QA environments. Never use real patient data outside of production.
- Implement data loss prevention controls to detect and prevent unauthorized exfiltration of ePHI.
- Review and revoke access promptly when workforce members change roles or leave the organization.
- Monitor for PHI in application logs, error messages, and debug output. Strip or mask PHI from all logging.
- Maintain an asset inventory of all devices that access ePHI, including mobile devices and laptops.
- Implement endpoint protection on all devices that access ePHI, including full-disk encryption and remote wipe capability.
Building a HIPAA Security Program That Works
If you have read this far, you understand that HIPAA security is not a single test or a stack of policies. It is a continuous program that requires ongoing attention from your engineering, operations, and leadership teams. Here is how to approach it practically.
Start with the Risk Assessment
Everything in HIPAA flows from the risk assessment. It determines what safeguards are reasonable and appropriate for your organization. It identifies the threats and vulnerabilities you need to address. And it provides the documented foundation that OCR will evaluate if something goes wrong. If you do nothing else, do this. A thorough risk assessment is the single most important step you can take for HIPAA compliance.
Align Your Security Testing with Compliance Needs
Your penetration test should be scoped specifically to your ePHI environment and should test the controls that HIPAA requires. Work with a testing firm that understands healthcare and can map findings to HIPAA requirements. A generic web application pentest is better than nothing, but it will not address the healthcare-specific risks that regulators care about.
For a broader perspective on how compliance frameworks intersect with security testing, see our guides on SOC 2 vs. ISO 27001 for startups and the NIST Cybersecurity Framework practical guide. Many healthcare companies pursue multiple frameworks simultaneously, and understanding the overlaps saves significant effort.
Treat Compliance as a Continuous Process
HIPAA compliance is not something you achieve and then forget about. The Security Rule requires ongoing risk management, periodic evaluation, and continuous monitoring. Build security into your development lifecycle. Conduct regular access reviews. Update your risk assessment when your environment changes. And test your controls regularly, not just when an audit or assessment is due.
Do Not Forget Your Vendors
Your security program is only as strong as your weakest vendor. Every third party that handles ePHI on your behalf must have a BAA in place, and you should be evaluating their security posture as part of your risk management program. This includes cloud providers, analytics tools, email services, logging platforms, payment processors, and any other vendor that may come into contact with patient data.
HIPAA Security Testing and Compliance Support
We help healthcare companies build security programs that protect patient data and satisfy regulators. From risk assessments to penetration testing to ongoing compliance support, we understand what healthcare companies actually need.