
Database Security Testing: The Risks Hiding in Your Data Layer

Lorikeet Security Team · March 2, 2026 · 10 min read

Injection, Access Controls, Encryption, and Platform-Specific Vulnerabilities

Your database is where the data lives. Customer records, financial transactions, authentication credentials, personal health information, intellectual property. Every other security control in your environment exists to protect what ultimately resides in a database. And yet, in the majority of penetration tests we conduct, the database layer is the least tested and most misconfigured component of the infrastructure.

The assumption is familiar: the database sits behind the application, behind the firewall, behind the VPN. It is not directly exposed to the internet, so it does not need the same scrutiny as a web application or a public API. This reasoning was flawed ten years ago, and it is dangerous today. Databases get compromised through application-layer injection, lateral movement from compromised hosts, credential reuse from breached services, and increasingly through direct internet exposure in cloud environments where security groups are misconfigured. The path to your data is rarely a straight line through the front door.

This article covers what we test in database security assessments, why SQL injection is still a problem despite decades of awareness, database-specific vulnerabilities across PostgreSQL, MySQL, MongoDB, and Redis, and the security controls that separate hardened databases from vulnerable ones. If you are interested in how database security fits into the broader application testing picture, our guide on the OWASP Top 10 in real-world penetration tests provides useful context.


Why database security gets overlooked

Database security falls into a gap between responsibilities. Application developers assume the DBA handles database hardening. The DBA assumes the network team restricts access. The network team assumes the application layer prevents injection. The security team scans the application and the network perimeter but rarely performs dedicated database security testing. The result is that nobody owns the security of the data layer comprehensively.

The "behind the firewall" myth. The traditional security model placed databases deep inside the network and assumed that perimeter defenses would keep attackers out. This model fails for several reasons. First, attackers who compromise a web application or an employee workstation are already inside the perimeter. Second, cloud databases are often accessible from broader network ranges than their owners realize. Third, many development and staging databases have weaker controls than production but contain the same data. We routinely find staging databases with production data snapshots, accessible with default credentials, sitting on the same network as the application servers.

Trust boundary confusion. Applications treat the database as a trusted component. If the application can reach the database with valid credentials, everything the application sends is assumed to be legitimate. This assumption is the foundation of every SQL injection vulnerability: the database cannot distinguish between a legitimate query and an injected one because it trusts the application connection implicitly. The same applies to stored procedures, views, and functions that execute with elevated privileges regardless of who triggered them.

Compliance-driven testing misses depth. Organizations that perform database security testing often do so to satisfy a compliance requirement such as PCI DSS, HIPAA, or SOC 2. These compliance-driven assessments typically check for a list of specific controls (encryption enabled, access controls configured, logging active) without testing whether those controls actually work against an adversary. A database can be "compliant" and still vulnerable if the encryption keys are stored alongside the encrypted data, if access controls grant excessive privileges, or if logging is configured but nobody monitors the logs.

What we see in practice: In over 70 percent of our internal penetration tests, the path from initial access to sensitive data passes through a database with at least one significant misconfiguration. The most common path is: compromised web application credentials lead to database access, database access with excessive privileges leads to data exfiltration, and the lack of monitoring means the breach goes undetected for days or weeks.


What we test in database security assessments

A comprehensive database security assessment evaluates the database from multiple angles: the configuration itself, the application layer that interacts with it, the network controls that restrict access, and the monitoring that detects abuse. Here is what each area covers.

Authentication and access controls

We test how the database authenticates connections and what privileges those connections receive. This includes testing for default credentials (sa/blank on SQL Server, root/empty on MySQL, postgres/postgres on PostgreSQL), brute-force susceptibility, password policy enforcement, and whether the database supports and requires encrypted authentication. We also evaluate whether application service accounts use shared credentials, whether individual user accounts exist for administrative access, and whether there is any form of certificate-based or IAM-integrated authentication.
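A default-credential check is mechanical enough to sketch in a few lines. In the sketch below, try_login is a hypothetical stand-in: in a real assessment it would attempt a driver-specific connection (psycopg2, pymysql, and so on) and return True on success. The credential pairs listed are the well-known defaults mentioned above.

```python
# Sketch of a default-credential sweep. `try_login` is a stand-in for a
# driver-specific connection attempt; the pairs below are well-known defaults.
DEFAULT_CREDENTIALS = {
    "postgresql": [("postgres", "postgres"), ("postgres", "")],
    "mysql":      [("root", ""), ("root", "root")],
    "mssql":      [("sa", ""), ("sa", "sa")],
}

def sweep(platform, try_login):
    """Return every default credential pair that successfully authenticates."""
    return [(user, pw) for user, pw in DEFAULT_CREDENTIALS.get(platform, [])
            if try_login(user, pw)]

# Simulated target that still accepts root with an empty password:
hits = sweep("mysql", lambda user, pw: (user, pw) == ("root", ""))
print(hits)  # [('root', '')]
```

Any non-empty result from a sweep like this is a critical finding on its own, regardless of what else the assessment turns up.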

Privilege analysis

We enumerate the privileges granted to every account that connects to the database. Application accounts should have the minimum privileges required to function: SELECT, INSERT, UPDATE, and DELETE on specific tables. They should not have CREATE, DROP, ALTER, GRANT, or any administrative privileges. We consistently find application accounts running with DBA or superuser privileges because "it was easier to set up that way" during development and nobody restricted it before deployment.

Encryption at rest and in transit

We verify whether data is encrypted at rest using transparent data encryption (TDE), filesystem-level encryption, or column-level encryption. We also test whether database connections require TLS and whether the TLS configuration is secure (protocol versions, cipher suites, certificate validation). Many databases accept both encrypted and unencrypted connections, which means an attacker who can intercept network traffic can downgrade the connection to plaintext.

Injection vectors

We test every point where application-supplied data reaches the database: query parameters, search functions, sorting and filtering, bulk operations, report generators, and administrative interfaces. We test for first-order injection (direct input to query), second-order injection (stored data used in subsequent queries), and blind injection techniques where the database response does not directly reveal data but can be inferred through timing or boolean conditions.
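The difference between the vulnerable and safe patterns, and the way blind injection infers data without ever seeing it, can be shown in a self-contained SQLite example (the table and data are invented for the demo; the technique is the same on any SQL database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # BAD: user input is concatenated straight into the query text.
    query = (f"SELECT COUNT(*) FROM users WHERE username = '{username}' "
             f"AND password = '{password}'")
    return conn.execute(query).fetchone()[0] > 0

def login_safe(username, password):
    # GOOD: placeholders keep the input in the data plane, never the query plane.
    query = "SELECT COUNT(*) FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone()[0] > 0

# First-order bypass: the comment sequence removes the password check entirely.
print(login_vulnerable("alice' --", "wrong"))  # True: authentication bypassed
print(login_safe("alice' --", "wrong"))        # False: treated as a literal name

# Boolean-based blind injection: no data is returned directly, but the attacker
# infers the password one true/false condition at a time.
probe = "alice' AND substr(password, 1, 1) = 's' --"
print(login_vulnerable(probe, "x"))  # True: first password character is 's'
```

The blind probe is the important part: even when an application never echoes query results, a yes/no difference in behavior is enough to extract data character by character.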

Stored procedures and functions

We audit stored procedures for dynamic SQL construction, excessive privileges, and input validation. Stored procedures that build queries by concatenating parameters are vulnerable to injection even when the calling application uses parameterized queries for its own statements. We also check for dangerous system procedures that are enabled by default, such as xp_cmdshell in SQL Server or COPY TO PROGRAM in PostgreSQL.

Audit logging and monitoring

We evaluate whether the database logs authentication attempts, privilege changes, schema modifications, data access patterns, and administrative operations. We also check whether those logs are stored in a location that is not writable by the database service account (an attacker who compromises the database should not be able to delete the audit trail) and whether anyone actually monitors the logs for suspicious activity.

Backup security

Database backups are often the weakest link. We check whether backups are encrypted, where they are stored, who has access to them, and whether they are tested for restoration. An unencrypted database backup stored on a network share with broad read access effectively negates every other security control on the production database. If an attacker can simply download the backup, none of the access controls, encryption, or monitoring on the live database matters.


SQL injection is not dead

The most common response we hear during database security assessments is: "We use parameterized queries, so we are not vulnerable to SQL injection." Parameterized queries are the correct primary defense against first-order SQL injection, and their adoption has meaningfully reduced the prevalence of trivial injection flaws. But they do not eliminate injection risk entirely. Here are the cases where parameterized queries are insufficient.

Second-order SQL injection

In second-order injection, the attacker submits malicious input that is safely stored in the database (the INSERT uses a parameterized query, so no injection occurs). Later, a different part of the application reads that stored value and uses it in a query without parameterization. The classic example is a username registration: the user registers with the username admin'--. The registration query is parameterized, so the username is stored safely. But when a password-change function retrieves the username and concatenates it into an UPDATE query, the injection executes. Testing for second-order injection requires understanding the full data flow through the application, not just testing individual inputs.
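The username scenario above can be reproduced end to end in SQLite (table names and values are invented for the demo). Note that the INSERT is correctly parameterized; the vulnerability lives entirely in the later function that trusts the stored value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (username TEXT, password TEXT);
    INSERT INTO users VALUES ('admin', 'old-admin-password');
""")

# Step 1: registration is parameterized, so the hostile username is stored
# safely -- no injection happens here.
hostile_name = "admin'--"
conn.execute("INSERT INTO users VALUES (?, ?)", (hostile_name, "whatever"))

# Step 2: a password-change function later reads the stored value back and
# concatenates it into a new query, treating stored data as trusted.
def change_password(username, new_password):
    stored = conn.execute(
        "SELECT username FROM users WHERE username = ?", (username,)
    ).fetchone()[0]
    # BAD: the WHERE clause becomes  username = 'admin'--' ...  and the
    # comment swallows the closing quote.
    conn.execute(f"UPDATE users SET password = '{new_password}' "
                 f"WHERE username = '{stored}'")

change_password(hostile_name, "attacker-chosen")

# The attacker changed their own password -- and took over the admin account.
print(conn.execute(
    "SELECT password FROM users WHERE username = 'admin'").fetchone()[0])
```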

Stored procedure injection

Stored procedures that build dynamic SQL by concatenating their input parameters are vulnerable to injection regardless of whether the calling application uses parameterized queries. The parameterized query safely passes the input to the stored procedure, but the stored procedure then concatenates that input into a new query and executes it. This creates a false sense of security because the application code looks correct while the vulnerability exists in the database layer that the application team may not have visibility into.

Dynamic query builders and ORMs

Object-relational mappers (ORMs) like Hibernate, SQLAlchemy, Django ORM, and ActiveRecord generate parameterized queries for standard operations. But they also provide escape hatches for raw SQL, dynamic ordering, complex filtering, and custom queries. Developers who are comfortable that the ORM handles injection may not apply the same rigor when they use these escape hatches. We frequently find injection in .extra() calls in Django, Arel.sql() in Rails, createNativeQuery() in Hibernate, and $queryRaw in Prisma.

Identifier injection

Parameterized queries protect values but cannot parameterize identifiers like table names, column names, and schema names. If an application allows users to select which column to sort by, which table to query, or which schema to use, and those identifiers are inserted directly into the query, injection is possible. The defense requires strict allowlisting of valid identifiers, but this is often overlooked because developers assume all injection is handled by parameterization.
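A minimal sketch of the allowlist defense, using a user-selectable sort column (the table and column names are invented for the example). The key point is that the identifier is validated against a closed set before it is ever interpolated:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (name TEXT, price REAL);
    INSERT INTO products VALUES ('widget', 9.99), ('gadget', 4.50);
""")

# Column names cannot be bound as parameters, so a user-chosen sort column
# must be checked against a strict allowlist before entering the query.
SORTABLE_COLUMNS = {"name", "price"}

def list_products(sort_by):
    if sort_by not in SORTABLE_COLUMNS:
        raise ValueError(f"unsupported sort column: {sort_by!r}")
    # Interpolation is safe only because sort_by was allowlisted above.
    return conn.execute(
        f"SELECT name FROM products ORDER BY {sort_by}").fetchall()

print(list_products("price"))  # [('gadget',), ('widget',)]

try:
    list_products("price; DROP TABLE products")
except ValueError as e:
    print("rejected:", e)
```

Mapping user-facing tokens (for example "price_asc") to hardcoded SQL fragments works equally well; the essential property is that no user-controlled string ever reaches the query text unchecked.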

Real-world prevalence: SQL injection remains in the OWASP Top 10 (under Injection, A03:2021) and appears in approximately 25 percent of our web application penetration tests. The difference from a decade ago is that the injection points are harder to find, require more sophisticated techniques to exploit, and are often in administrative or secondary functions rather than the main user-facing features. But when found, they are just as devastating as they have always been.


Database-specific vulnerabilities

Each database platform has its own set of features that, when misconfigured or left at defaults, create security vulnerabilities. Our assessments test for platform-specific risks that automated scanners typically miss.

PostgreSQL

PostgreSQL is a powerful database with features that are useful for administrators and dangerous in the hands of an attacker who has gained sufficient privileges. COPY TO PROGRAM and COPY FROM PROGRAM execute shell commands on the database host, turning superuser access into remote code execution. The large object functions lo_import and lo_export read and write arbitrary files on the server. We also review pg_hba.conf for trust authentication entries, which allow connections with no password at all, and flag applications that connect as the postgres superuser instead of a restricted role.

MySQL

MySQL's long history includes features that were designed for convenience and are now recognized as security risks. LOAD DATA LOCAL INFILE allows a malicious or compromised server to read files from the connecting client, and SELECT INTO OUTFILE allows an attacker with the FILE privilege to write files on the server, including webshells, when secure_file_priv is not set. We also test for the skip-grant-tables startup option, anonymous user accounts, and the default test database that older installations ship with.

MongoDB

MongoDB's history of insecure defaults and its JSON-based query language create a unique set of risks. For years the default packages bound MongoDB to all network interfaces with authentication disabled, a combination that fueled waves of ransom attacks against internet-exposed instances; modern versions bind to localhost by default, but authentication is still not enforced unless explicitly enabled. On the query side, applications that pass user-supplied objects directly into queries are vulnerable to operator injection (a password value of {"$ne": null} matches any password), and the $where operator executes attacker-influenced JavaScript on the server.

Redis

Redis occupies a unique position because it is often treated as a cache rather than a database, leading to weaker security controls despite frequently containing sensitive data like session tokens, API keys, and authentication state. Redis shipped for years with no authentication and no TLS (password auth was a single shared requirepass until ACLs arrived in Redis 6), and an unauthenticated attacker can abuse CONFIG SET dir and CONFIG SET dbfilename to write the RDB dump to an arbitrary path on the host, a well-known technique for planting SSH keys, cron jobs, or webshells. Protected mode, added in Redis 3.2, mitigates accidental exposure but stops applying once the instance is explicitly bound to a non-loopback interface, which is exactly how most production deployments run.


The principle of least privilege in practice

The most impactful finding in database security assessments is rarely an exotic vulnerability. It is excessive privileges. Application accounts that can DROP tables. Service accounts that can GRANT privileges. Development accounts with DBA access that persist into production. Every excessive privilege expands the blast radius of a compromise.

The following table compares the privileges we commonly find assigned to application database accounts versus what those accounts actually need.

| Access Pattern | Common Configuration | Recommended Configuration |
| --- | --- | --- |
| Application account | GRANT ALL PRIVILEGES on entire database | SELECT, INSERT, UPDATE, DELETE on specific tables only |
| Reporting queries | Same account as application with full access | Separate read-only account with SELECT on reporting views |
| Migration / schema changes | Application account with DDL privileges in production | Dedicated migration account used only during deployments, revoked after |
| Backup operations | DBA account shared with other operations | Dedicated backup account with SELECT and LOCK only |
| Admin / DBA access | Shared root/sa account with password in config files | Individual named accounts with MFA and session logging |
| Multi-tenant data | Application-enforced tenant isolation with shared DB user | Row-level security policies enforced at the database level |

The gap between the "Common" and "Recommended" columns in this table represents the difference between a database that can withstand a compromise at the application layer and one where application compromise equals total data breach. Closing this gap is consistently one of the highest-ROI security improvements we recommend.
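On a server database, least privilege is expressed with GRANT statements; SQLite's authorizer hook is a compact, self-contained way to illustrate the same idea of enforcing a policy inside the database layer rather than trusting the application. This sketch allows only data-plane operations and denies everything else:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.execute("INSERT INTO orders VALUES (1, 9.99)")

# Policy: the "application account" may read and modify rows, nothing else.
ALLOWED = {sqlite3.SQLITE_SELECT, sqlite3.SQLITE_READ, sqlite3.SQLITE_INSERT,
           sqlite3.SQLITE_UPDATE, sqlite3.SQLITE_DELETE}

def least_privilege(action, arg1, arg2, db_name, trigger):
    # Called for every operation a statement attempts; deny anything
    # outside the allowlist (DDL, GRANT-equivalents, pragma changes...).
    return sqlite3.SQLITE_OK if action in ALLOWED else sqlite3.SQLITE_DENY

conn.set_authorizer(least_privilege)

print(conn.execute("SELECT total FROM orders").fetchone())  # (9.99,)

try:
    conn.execute("DROP TABLE orders")  # schema change: outside the policy
except sqlite3.DatabaseError as e:
    print("blocked:", e)
```

The equivalent on PostgreSQL or MySQL is a role granted only SELECT, INSERT, UPDATE, and DELETE on the application's tables; the demo's point is that an injected or malicious DROP fails at the database even when the application layer has already been fooled.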


Database encryption: what to encrypt and how

Encryption in the database context is frequently misunderstood. Organizations often check the "encryption at rest" box and assume their data is protected. But encryption is only as strong as the key management that supports it, and different encryption approaches protect against different threat models.

Transparent data encryption (TDE)

TDE encrypts database files at the storage level. The encryption and decryption happen automatically when data is read from or written to disk. TDE protects against physical theft of disk media and unauthorized access to database files at the filesystem level. It does not protect against an attacker who has database credentials, because the data is decrypted transparently for any authenticated connection. TDE is the minimum baseline for compliance frameworks but should not be considered sufficient for protecting sensitive data.

Column-level encryption

Column-level encryption encrypts specific columns containing sensitive data (credit card numbers, SSNs, health records) with a key that is separate from the database access credentials. Even a DBA or an attacker with full database access sees only ciphertext in the encrypted columns. The decryption key is held by the application or a key management service, adding an additional access control layer. The trade-off is that encrypted columns cannot be indexed or searched efficiently, which impacts query performance.

Application-level encryption

The strongest approach encrypts data before it reaches the database. The database stores only ciphertext and never possesses the decryption keys. This protects against database compromise, DBA access, backup theft, and replication to unauthorized environments. The application manages encryption and decryption, and key management is entirely outside the database's control. This approach provides the best security but adds application complexity and eliminates the ability to perform server-side queries or aggregations on encrypted fields.

Key management

Regardless of the encryption approach, the keys must be managed securely. We find organizations storing encryption keys in application configuration files on the same server as the database, in environment variables accessible to the application runtime, or hardcoded in source code committed to version control. If the encryption key is compromised alongside the encrypted data, the encryption provides no protection. Keys should be stored in a dedicated key management service (AWS KMS, Azure Key Vault, HashiCorp Vault) with access controls, rotation policies, and audit logging independent of the database.


Monitoring and audit logging

Database monitoring serves two purposes: detecting active attacks and supporting forensic investigation after an incident. Without database-level logging, an organization that discovers a breach may not be able to determine what data was accessed, when, or by whom.

What to log

At a minimum, log authentication attempts (successful and failed), privilege and role changes, schema modifications, administrative operations, and, for sensitive data stores, query-level data access. Ship the logs to a location the database service account cannot modify, so a compromised database cannot erase its own audit trail.

Alerting on anomalies

Logging without monitoring is security theater. The logs must feed into a system that detects anomalies and generates alerts. Key patterns to monitor include: login attempts from new or unexpected source IPs, queries against tables that the application normally never accesses, sudden increases in data retrieval volume, privilege escalation events, and any direct administrative access outside of scheduled maintenance windows. Integration with your SIEM (Splunk, Elastic, Sentinel) and on-call rotation ensures that alerts reach someone who can act on them.
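Two of the rules above, logins from never-before-seen source IPs and data-retrieval volume spikes, can be sketched as a simple detection function. The event fields and baselines here are assumptions for the example, not any particular database's log format:

```python
# Toy anomaly rules over database audit events. Field names and baseline
# values are invented for illustration.
KNOWN_IPS = {"app_user": {"10.0.1.20", "10.0.1.21"}}
BASELINE_ROWS_PER_QUERY = {"app_user": 500}

def alerts_for(event):
    findings = []
    user, ip = event["user"], event["source_ip"]
    # Rule 1: login from a source IP never seen for this account.
    if ip not in KNOWN_IPS.get(user, set()):
        findings.append(f"login for {user} from new source IP {ip}")
    # Rule 2: retrieval volume far above the account's baseline.
    baseline = BASELINE_ROWS_PER_QUERY.get(user, 0)
    if event["rows_returned"] > 10 * baseline:
        findings.append(f"{user} retrieved {event['rows_returned']} rows "
                        f"(baseline ~{baseline})")
    return findings

event = {"user": "app_user", "source_ip": "203.0.113.7",
         "rows_returned": 250_000}
for finding in alerts_for(event):
    print("ALERT:", finding)
```

In production these rules would live in the SIEM, fed by the database's audit log, with baselines learned from historical activity rather than hardcoded.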

The logging gap we consistently find: Organizations enable basic authentication logging but do not log data access queries. This means they can tell you who logged in but not what data they accessed. When a breach occurs, the investigation hits a dead end at "the application account connected at these times," with no visibility into which tables were queried or how many records were returned. Enabling query-level audit logging has a performance cost, but the forensic value during an incident far outweighs that cost for sensitive data stores.
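To make the query-level point concrete: SQLite's trace callback, used below as a stand-in for server-side audit facilities such as pgAudit or SQL Server Audit, records the statements a connection executes, which is exactly the visibility that login-only logging lacks. The table and values are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, ssn TEXT)")

# Capture every statement this connection executes from here on. A real
# deployment would use the server's audit facility, not an in-process hook.
audit_log = []
conn.set_trace_callback(audit_log.append)

conn.execute("INSERT INTO patients VALUES (1, '000-00-0000')")
conn.execute("SELECT ssn FROM patients")

for statement in audit_log:
    print(statement)
```

With a record like this, an investigation can answer "which tables were queried and when" instead of dead-ending at "the application account connected at these times."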


Database security checklist

The following checklist covers the controls we evaluate in every database security assessment. Use it as a baseline for hardening your own database environments.

Authentication and access

- No default or blank credentials in any environment, including development and staging
- Individual named accounts for administrative access; no shared root/sa credentials
- Application accounts limited to SELECT, INSERT, UPDATE, and DELETE on the tables they need
- Certificate-based or IAM-integrated authentication where the platform supports it

Encryption

- TLS required for all connections, with unencrypted connections rejected
- Encryption at rest: TDE at minimum, column-level or application-level for the most sensitive fields
- Keys held in a dedicated key management service, never alongside the data they protect
- Backups encrypted and access to them restricted and audited

Injection prevention

- Parameterized queries for every application statement, with no string concatenation
- Stored procedures audited for dynamic SQL built from their parameters
- User-influenced identifiers (sort columns, table names) validated against strict allowlists
- ORM raw-SQL escape hatches reviewed with the same rigor as hand-written queries

Monitoring and operations

- Authentication attempts, privilege changes, schema modifications, and data access logged
- Logs shipped to a store the database service account cannot modify
- Anomaly alerts integrated with the SIEM and on-call rotation
- Backups tested for restoration on a regular schedule

Platform hardening

- Dangerous features disabled: xp_cmdshell, COPY TO PROGRAM, LOAD DATA LOCAL INFILE, server-side JavaScript
- Network exposure limited to known application hosts, with no direct internet access
- Authentication enabled and verified on MongoDB and Redis, not left at defaults
- Staging and development databases held to the same controls as production


Database security is not a one-time hardening exercise. It is an ongoing discipline that requires coordination between application developers, database administrators, infrastructure teams, and security. The controls described in this article are not exotic or expensive. They are foundational practices that protect the most valuable asset in your environment: the data itself.

The organizations that get database security right are the ones that treat the database as a security boundary rather than an implementation detail. They test it with the same rigor they apply to their web applications and network perimeter. They enforce least privilege even when it is inconvenient. And they monitor their data layer with the understanding that when an attacker reaches the database, everything else has already failed.

Secure Your Data Layer

Our database security assessments test authentication, privilege escalation, injection vectors, encryption, and platform-specific vulnerabilities across PostgreSQL, MySQL, MongoDB, Redis, and SQL Server. Find the risks hiding in your data before attackers do.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.
