Injection, Access Controls, Encryption, and Platform-Specific Vulnerabilities
Your database is where the data lives. Customer records, financial transactions, authentication credentials, personal health information, intellectual property. Every other security control in your environment exists to protect what ultimately resides in a database. And yet, in the majority of penetration tests we conduct, the database layer is the least tested and most misconfigured component of the infrastructure.
The assumption is familiar: the database sits behind the application, behind the firewall, behind the VPN. It is not directly exposed to the internet, so it does not need the same scrutiny as a web application or a public API. This reasoning was flawed ten years ago, and it is dangerous today. Databases get compromised through application-layer injection, lateral movement from compromised hosts, credential reuse from breached services, and increasingly through direct internet exposure in cloud environments where security groups are misconfigured. The path to your data is rarely a straight line through the front door.
This article covers what we test in database security assessments, why SQL injection is still a problem despite decades of awareness, database-specific vulnerabilities across PostgreSQL, MySQL, MongoDB, and Redis, and the security controls that separate hardened databases from vulnerable ones. If you are interested in how database security fits into the broader application testing picture, our guide on the OWASP Top 10 in real-world penetration tests provides useful context.
Why database security gets overlooked
Database security falls into a gap between responsibilities. Application developers assume the DBA handles database hardening. The DBA assumes the network team restricts access. The network team assumes the application layer prevents injection. The security team scans the application and the network perimeter but rarely performs dedicated database security testing. The result is that nobody owns the security of the data layer comprehensively.
The "behind the firewall" myth. The traditional security model placed databases deep inside the network and assumed that perimeter defenses would keep attackers out. This model fails for several reasons. First, attackers who compromise a web application or an employee workstation are already inside the perimeter. Second, cloud databases are often accessible from broader network ranges than their owners realize. Third, many development and staging databases have weaker controls than production but contain the same data. We routinely find staging databases with production data snapshots, accessible with default credentials, sitting on the same network as the application servers.
Trust boundary confusion. Applications treat the database as a trusted component. If the application can reach the database with valid credentials, everything the application sends is assumed to be legitimate. This assumption is the foundation of every SQL injection vulnerability: the database cannot distinguish between a legitimate query and an injected one because it trusts the application connection implicitly. The same applies to stored procedures, views, and functions that execute with elevated privileges regardless of who triggered them.
Compliance-driven testing misses depth. Organizations that perform database security testing often do so as part of a compliance requirement such as PCI DSS, HIPAA, or SOC 2. These compliance-driven assessments typically check for a list of specific controls (encryption enabled, access controls configured, logging active) without testing whether those controls actually work against an adversary. A database can be "compliant" and still vulnerable if the encryption keys are stored alongside the encrypted data, if access controls grant excessive privileges, or if logging is configured but nobody monitors the logs.
What we see in practice: In over 70 percent of our internal penetration tests, the path from initial access to sensitive data passes through a database with at least one significant misconfiguration. The most common path is: compromised web application credentials lead to database access, database access with excessive privileges leads to data exfiltration, and the lack of monitoring means the breach goes undetected for days or weeks.
What we test in database security assessments
A comprehensive database security assessment evaluates the database from multiple angles: the configuration itself, the application layer that interacts with it, the network controls that restrict access, and the monitoring that detects abuse. Here is what each area covers.
Authentication and access controls
We test how the database authenticates connections and what privileges those connections receive. This includes testing for default credentials (sa/blank on SQL Server, root/empty on MySQL, postgres/postgres on PostgreSQL), brute-force susceptibility, password policy enforcement, and whether the database supports and requires encrypted authentication. We also evaluate whether application service accounts use shared credentials, whether individual user accounts exist for administrative access, and whether there is any form of certificate-based or IAM-integrated authentication.
Privilege analysis
We enumerate the privileges granted to every account that connects to the database. Application accounts should have the minimum privileges required to function: SELECT, INSERT, UPDATE, and DELETE on specific tables. They should not have CREATE, DROP, ALTER, GRANT, or any administrative privileges. We consistently find application accounts running with DBA or superuser privileges because "it was easier to set up that way" during development and nobody restricted it before deployment.
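As an illustrative sketch of that recommended baseline (the role and table names here are hypothetical), a PostgreSQL-style grant set for an application account might look like:

```sql
-- Application role: DML on specific tables, nothing else.
CREATE ROLE app_user LOGIN;                    -- credential set separately
REVOKE ALL ON ALL TABLES IN SCHEMA public FROM app_user;
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE
  ON orders, customers, sessions TO app_user;  -- no CREATE/DROP/ALTER/GRANT
```

The explicit REVOKE before the GRANT matters: it clears any broad default privileges the role may have inherited before granting back only what the application needs.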
Encryption at rest and in transit
We verify whether data is encrypted at rest using transparent data encryption (TDE), filesystem-level encryption, or column-level encryption. We also test whether database connections require TLS and whether the TLS configuration is secure (protocol versions, cipher suites, certificate validation). Many databases accept both encrypted and unencrypted connections, which means an attacker who can intercept network traffic can downgrade the connection to plaintext.
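As a sketch of what "TLS required, no downgrade" looks like in PostgreSQL terms (parameter names vary by engine and version; `ssl_min_protocol_version` is available in PostgreSQL 12 and later):

```ini
# postgresql.conf -- enable TLS and floor the protocol version
ssl = on
ssl_min_protocol_version = 'TLSv1.2'

# pg_hba.conf -- "hostssl" rejects plaintext connections; a plain
# "host" line would accept either and permit a downgrade
hostssl  appdb  app_user  10.0.1.0/24  scram-sha-256
```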
Injection vectors
We test every point where application-supplied data reaches the database: query parameters, search functions, sorting and filtering, bulk operations, report generators, and administrative interfaces. We test for first-order injection (direct input to query), second-order injection (stored data used in subsequent queries), and blind injection techniques where the database response does not directly reveal data but can be inferred through timing or boolean conditions.
Stored procedures and functions
We audit stored procedures for dynamic SQL construction, excessive privileges, and input validation. Stored procedures that build queries by concatenating parameters are vulnerable to injection even when the calling application uses parameterized queries for its own statements. We also check for dangerous system procedures that are enabled by default, such as xp_cmdshell in SQL Server or COPY TO PROGRAM in PostgreSQL.
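On SQL Server, for example, checking that `xp_cmdshell` is off can be done through `sp_configure`; a sketch (requires sysadmin rights):

```sql
-- Disable xp_cmdshell unless a documented business need exists.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
```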
Audit logging and monitoring
We evaluate whether the database logs authentication attempts, privilege changes, schema modifications, data access patterns, and administrative operations. We also check whether those logs are stored in a location that is not writable by the database service account (an attacker who compromises the database should not be able to delete the audit trail) and whether anyone actually monitors the logs for suspicious activity.
Backup security
Database backups are often the weakest link. We check whether backups are encrypted, where they are stored, who has access to them, and whether they are tested for restoration. An unencrypted database backup stored on a network share with broad read access effectively negates every other security control on the production database. If an attacker can simply download the backup, none of the access controls, encryption, or monitoring on the live database matters.
SQL injection is not dead
The most common response we hear during database security assessments is: "We use parameterized queries, so we are not vulnerable to SQL injection." Parameterized queries are the correct primary defense against first-order SQL injection, and their adoption has meaningfully reduced the prevalence of trivial injection flaws. But they do not eliminate injection risk entirely. Here are the cases where parameterized queries are insufficient.
Second-order SQL injection
In second-order injection, the attacker submits malicious input that is safely stored in the database (the INSERT uses a parameterized query, so no injection occurs). Later, a different part of the application reads that stored value and uses it in a query without parameterization. The classic example is a username registration: the user registers with the username admin'--. The registration query is parameterized, so the username is stored safely. But when a password-change function retrieves the username and concatenates it into an UPDATE query, the injection executes. Testing for second-order injection requires understanding the full data flow through the application, not just testing individual inputs.
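A minimal sketch of that flow, using Python's sqlite3 with hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")

# Registration is parameterized, so the payload is stored harmlessly.
conn.execute("INSERT INTO users VALUES (?, ?)", ("admin", "s3cret"))
conn.execute("INSERT INTO users VALUES (?, ?)", ("admin'--", "x"))

# Later, a password-change function re-reads the stored name and
# concatenates it into a new query: the second-order flaw.
name = conn.execute("SELECT name FROM users WHERE pw = ?", ("x",)).fetchone()[0]
conn.execute("UPDATE users SET pw = 'pwned' WHERE name = '%s'" % name)

# The trailing -- comments out the closing quote, so the UPDATE matched
# name = 'admin' and reset the administrator's password.
print(conn.execute("SELECT pw FROM users WHERE name = 'admin'").fetchone()[0])
# → pwned
```

The fix is mechanical (parameterize the second query too), but finding the flaw requires tracing where stored values are reused, which is why it survives input-by-input testing.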
Stored procedure injection
Stored procedures that build dynamic SQL by concatenating their input parameters are vulnerable to injection regardless of whether the calling application uses parameterized queries. The parameterized query safely passes the input to the stored procedure, but the stored procedure then concatenates that input into a new query and executes it. This creates a false sense of security because the application code looks correct while the vulnerability exists in the database layer that the application team may not have visibility into.
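In PostgreSQL terms, the difference looks like this; a sketch against a hypothetical users table:

```sql
-- Vulnerable: the parameter is concatenated into dynamic SQL, so a
-- parameterized CALL from the application does not help.
CREATE FUNCTION find_user_bad(uname text) RETURNS SETOF users AS $$
BEGIN
  RETURN QUERY EXECUTE
    'SELECT * FROM users WHERE name = ''' || uname || '''';
END $$ LANGUAGE plpgsql;

-- Safer: the value is bound via USING, never concatenated.
CREATE FUNCTION find_user_ok(uname text) RETURNS SETOF users AS $$
BEGIN
  RETURN QUERY EXECUTE
    'SELECT * FROM users WHERE name = $1' USING uname;
END $$ LANGUAGE plpgsql;
```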
Dynamic query builders and ORMs
Object-relational mappers (ORMs) like Hibernate, SQLAlchemy, Django ORM, and ActiveRecord generate parameterized queries for standard operations. But they also provide escape hatches for raw SQL, dynamic ordering, complex filtering, and custom queries. Developers who are comfortable that the ORM handles injection may not apply the same rigor when they use these escape hatches. We frequently find injection in .extra() calls in Django, Arel.sql() in Rails, createNativeQuery() in Hibernate, and $queryRaw in Prisma.
Identifier injection
Parameterized queries protect values but cannot parameterize identifiers like table names, column names, and schema names. If an application allows users to select which column to sort by, which table to query, or which schema to use, and those identifiers are inserted directly into the query, injection is possible. The defense requires strict allowlisting of valid identifiers, but this is often overlooked because developers assume all injection is handled by parameterization.
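A minimal allowlist sketch in Python with sqlite3 (the column set and table are hypothetical):

```python
import sqlite3

# Identifiers cannot be bound as parameters, so they are validated
# against a fixed allowlist before interpolation.
SORTABLE_COLUMNS = {"name", "email", "created_at"}

def list_users(conn, sort_by):
    if sort_by not in SORTABLE_COLUMNS:
        raise ValueError(f"unsupported sort column: {sort_by!r}")
    # Safe only because sort_by passed the allowlist check above;
    # user-supplied *values* would still go through placeholders.
    return conn.execute(f"SELECT name FROM users ORDER BY {sort_by}").fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT, created_at TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [("bob", "b@example.com", "2024-01-02"),
                  ("alice", "a@example.com", "2024-01-01")])

print(list_users(conn, "created_at"))        # allowlisted: runs normally
try:
    list_users(conn, "name; DROP TABLE users")
except ValueError:
    print("rejected")                        # injection attempt blocked
```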
Real-world prevalence: SQL injection remains in the OWASP Top 10 (under Injection, A03:2021) and appears in approximately 25 percent of our web application penetration tests. The difference from a decade ago is that the injection points are harder to find, require more sophisticated techniques to exploit, and are often in administrative or secondary functions rather than the main user-facing features. But when found, they are just as devastating as they have always been.
Database-specific vulnerabilities
Each database platform has its own set of features that, when misconfigured or left at defaults, create security vulnerabilities. Our assessments test for platform-specific risks that automated scanners typically miss.
PostgreSQL
PostgreSQL is a powerful database with features that are useful for administrators and dangerous in the hands of an attacker who has gained sufficient privileges.
- `COPY TO/FROM PROGRAM`: The `COPY` command with `PROGRAM` allows executing arbitrary operating system commands from within PostgreSQL. If an attacker gains superuser access to PostgreSQL, they can immediately escalate to OS-level command execution. This is often the fastest path from SQL injection to full server compromise in PostgreSQL environments.
- Large objects: PostgreSQL's large object interface allows reading and writing files on the server's filesystem. An attacker with the right privileges can use `lo_import` and `lo_export` to read sensitive configuration files, write web shells, or exfiltrate data to locations accessible from the network.
- `pg_read_server_files` and `pg_execute_server_program` roles: PostgreSQL 11 introduced predefined roles that grant file read and program execution capabilities. If an application account is inadvertently granted one of these roles, the blast radius of a compromise expands significantly beyond database access.
- Trust authentication in pg_hba.conf: PostgreSQL's host-based authentication file can be configured to allow passwordless authentication from specific IP ranges. We find development and internal configurations that trust entire subnets, allowing any host on the network to connect as any user without a password.
MySQL
MySQL's long history includes features that were designed for convenience and are now recognized as security risks.
- UDF injection: MySQL supports User-Defined Functions loaded from shared libraries. An attacker with FILE privilege can write a malicious shared library to the plugin directory and load it as a UDF, gaining arbitrary code execution on the database server. This attack chain works because FILE privilege is granted more often than it should be.
- FILE privilege abuse: The FILE privilege allows reading any file the MySQL server process can access using `LOAD_FILE()` and writing to any writable location using `INTO OUTFILE` or `INTO DUMPFILE`. An attacker can read `/etc/shadow`, SSH keys, application configuration files, and other databases' data files. They can write web shells to the web root if the MySQL process has permission.
- Symlink race conditions: MySQL's handling of table symlinks has historically been vulnerable to race conditions that allow file overwriting. While recent versions have mitigated many of these issues, older MySQL installations and forks (MariaDB, Percona) may still be vulnerable.
- FEDERATED engine abuse: The FEDERATED storage engine allows a MySQL server to access tables on a remote MySQL server. An attacker with CREATE privilege can set up FEDERATED tables pointing to external servers they control, exfiltrating data through what appears to be normal database operations.
MongoDB
MongoDB's history of insecure defaults and its JSON-based query language create a unique set of risks.
- NoSQL injection via operator injection: MongoDB queries use JSON objects, and if user input is incorporated into query objects without validation, attackers can inject MongoDB operators. A classic example is bypassing authentication by injecting `{"$gt": ""}` into a password field, which matches any non-empty string. This is the MongoDB equivalent of `' OR 1=1--`.
- `$where` and `$regex` denial of service: The `$where` operator evaluates JavaScript expressions for every document in a collection. The `$regex` operator with certain patterns can cause catastrophic backtracking. Both can be exploited for denial-of-service attacks that consume all available CPU on the database server. If user input reaches these operators, the database becomes trivially crashable.
- No authentication by default (legacy versions): Older MongoDB versions shipped with authentication disabled by default. While this changed in MongoDB 3.6, we still encounter MongoDB instances running without authentication in production, either because they were deployed before the default changed or because authentication was disabled during development and never re-enabled.
- Server-side JavaScript execution: MongoDB's ability to execute JavaScript through `$where`, `mapReduce`, and `$accumulator` provides attackers with code execution capabilities within the database context. When combined with operator injection, this can escalate from data access to arbitrary code execution.
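One application-side defense against operator injection is to reject non-scalar values before they ever reach the driver. A minimal sketch (the helper and field names are hypothetical; sha256 keeps the example stdlib-only, where a real application would use a dedicated password-hashing scheme such as bcrypt or argon2):

```python
import hashlib

def login_filter(username, password):
    """Build a MongoDB-style query filter from untrusted credentials.

    Operator injection arrives as structured values (e.g. {"$gt": ""}
    parsed from a JSON request body) rather than plain strings, so
    enforcing scalar types before building the filter blocks it.
    """
    if not isinstance(username, str) or not isinstance(password, str):
        raise TypeError("credentials must be plain strings")
    pw_hash = hashlib.sha256(password.encode()).hexdigest()
    return {"username": username, "password_hash": pw_hash}

print(login_filter("alice", "hunter2")["username"])  # normal login

try:
    login_filter("admin", {"$gt": ""})  # classic bypass payload
except TypeError:
    print("rejected")
```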
Redis
Redis occupies a unique position because it is often treated as a cache rather than a database, leading to weaker security controls despite frequently containing sensitive data like session tokens, API keys, and authentication state.
- No authentication exposure: Many Redis instances run without any authentication, relying solely on network controls for access restriction. If network segmentation fails or the instance is accidentally exposed, anyone can read and write all data. A `requirepass` setting should be the minimum, but we find it missing in a significant percentage of assessments.
- EVAL RCE: Redis's `EVAL` command executes Lua scripts on the server. Combined with the ability to load modules and manipulate the server configuration, an attacker with access to an unauthenticated Redis instance can achieve remote code execution on the host. The well-known attack chain involves writing an SSH key to the authorized_keys file using `CONFIG SET dir` and `CONFIG SET dbfilename`.
- Data exfiltration through replication: An attacker can configure a victim Redis instance to replicate to an attacker-controlled server, effectively streaming all data out of the environment. The `SLAVEOF` (or `REPLICAOF`) command requires no special privileges beyond basic Redis access.
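A minimal redis.conf hardening sketch addressing the risks above (directive support varies by version, and Redis 6+ ACLs are the fuller replacement for command renaming; the bind address and password are placeholders):

```ini
bind 10.0.1.5                    # internal interface only, not 0.0.0.0
protected-mode yes
requirepass <long-random-secret> # placeholder: generate and store in a vault
rename-command CONFIG ""         # blocks the CONFIG-based RCE chain
rename-command SLAVEOF ""        # blocks replication-based exfiltration
rename-command REPLICAOF ""
```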
The principle of least privilege in practice
The most impactful finding in database security assessments is rarely an exotic vulnerability. It is excessive privileges. Application accounts that can DROP tables. Service accounts that can GRANT privileges. Development accounts with DBA access that persist into production. Every excessive privilege expands the blast radius of a compromise.
The following table compares the privileges we commonly find assigned to application database accounts versus what those accounts actually need.
| Access Pattern | Common Configuration | Recommended Configuration |
|---|---|---|
| Application account | GRANT ALL PRIVILEGES on entire database | SELECT, INSERT, UPDATE, DELETE on specific tables only |
| Reporting queries | Same account as application with full access | Separate read-only account with SELECT on reporting views |
| Migration / schema changes | Application account with DDL privileges in production | Dedicated migration account used only during deployments, revoked after |
| Backup operations | DBA account shared with other operations | Dedicated backup account with SELECT and LOCK only |
| Admin / DBA access | Shared root/sa account with password in config files | Individual named accounts with MFA and session logging |
| Multi-tenant data | Application-enforced tenant isolation with shared DB user | Row-level security policies enforced at the database level |
The gap between the "Common" and "Recommended" columns in this table represents the difference between a database that can withstand a compromise at the application layer and one where application compromise equals total data breach. Closing this gap is consistently one of the highest-ROI security improvements we recommend.
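For the multi-tenant row in particular, PostgreSQL's row-level security can move tenant isolation into the database itself. A sketch with hypothetical table, column, and setting names:

```sql
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON orders
  USING (tenant_id = current_setting('app.tenant_id')::int);

-- The application sets the tenant per connection or transaction:
--   SET app.tenant_id = '42';
-- Queries on orders are now filtered at the database level, even if
-- the application omits its own WHERE tenant_id clause.
```

Note that RLS policies do not apply to superusers or, by default, to the table owner, which is one more reason the application should not connect as either.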
Database encryption: what to encrypt and how
Encryption in the database context is frequently misunderstood. Organizations often check the "encryption at rest" box and assume their data is protected. But encryption is only as strong as the key management that supports it, and different encryption approaches protect against different threat models.
Transparent data encryption (TDE)
TDE encrypts database files at the storage level. The encryption and decryption happen automatically when data is read from or written to disk. TDE protects against physical theft of disk media and unauthorized access to database files at the filesystem level. It does not protect against an attacker who has database credentials, because the data is decrypted transparently for any authenticated connection. TDE is the minimum baseline for compliance frameworks but should not be considered sufficient for protecting sensitive data.
Column-level encryption
Column-level encryption encrypts specific columns containing sensitive data (credit card numbers, SSNs, health records) with a key that is separate from the database access credentials. Even a DBA or an attacker with full database access sees only ciphertext in the encrypted columns. The decryption key is held by the application or a key management service, adding an additional access control layer. The trade-off is that encrypted columns cannot be indexed or searched efficiently, which impacts query performance.
Application-level encryption
The strongest approach encrypts data before it reaches the database. The database stores only ciphertext and never possesses the decryption keys. This protects against database compromise, DBA access, backup theft, and replication to unauthorized environments. The application manages encryption and decryption, and key management is entirely outside the database's control. This approach provides the best security but adds application complexity and eliminates the ability to perform server-side queries or aggregations on encrypted fields.
Key management
Regardless of the encryption approach, the keys must be managed securely. We find organizations storing encryption keys in application configuration files on the same server as the database, in environment variables accessible to the application runtime, or hardcoded in source code committed to version control. If the encryption key is compromised alongside the encrypted data, the encryption provides no protection. Keys should be stored in a dedicated key management service (AWS KMS, Azure Key Vault, HashiCorp Vault) with access controls, rotation policies, and audit logging independent of the database.
Monitoring and audit logging
Database monitoring serves two purposes: detecting active attacks and supporting forensic investigation after an incident. Without database-level logging, an organization that discovers a breach may not be able to determine what data was accessed, when, or by whom.
What to log
- Authentication events: All successful and failed login attempts, including the source IP, user account, and authentication method. Failed logins are the earliest indicator of brute-force attacks and credential stuffing.
- Privilege changes: Any GRANT, REVOKE, CREATE USER, ALTER USER, or DROP USER operation. Changes to who can access what data are security-critical events that should trigger immediate review.
- Schema modifications: CREATE, ALTER, and DROP operations on tables, views, stored procedures, and triggers. Unauthorized schema changes can indicate an attacker establishing persistence or preparing for data exfiltration.
- Data access patterns: Queries against sensitive tables, bulk SELECT operations, and export operations. A query that selects all rows from a customer table at 3 AM from an application server that normally queries by individual customer ID is anomalous and should generate an alert.
- Configuration changes: Any modification to database configuration parameters, especially those related to authentication, networking, and logging itself. An attacker who gains administrative access will often disable logging as a first step.
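The "3 AM bulk query" example above can be reduced to a couple of simple rules. This sketch is purely illustrative: the thresholds, hours, and event fields are hypothetical and would be tuned against each environment's baseline traffic.

```python
from datetime import time

BULK_ROW_THRESHOLD = 10_000            # hypothetical tuning values
BUSINESS_HOURS = (time(7, 0), time(19, 0))

def is_anomalous(event):
    """Flag an audit-log event for alerting.

    `event` is a dict with `rows_returned` (int) and `at`
    (datetime.time); a real pipeline would parse these from audit logs.
    """
    if event["rows_returned"] > BULK_ROW_THRESHOLD:
        return True   # bulk retrieval: possible exfiltration
    start, end = BUSINESS_HOURS
    if not (start <= event["at"] <= end):
        return True   # off-hours access from an application account
    return False

# The 3 AM full-table dump trips both rules.
print(is_anomalous({"rows_returned": 250_000, "at": time(3, 0)}))  # True
```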
Alerting on anomalies
Logging without monitoring is security theater. The logs must feed into a system that detects anomalies and generates alerts. Key patterns to monitor include: login attempts from new or unexpected source IPs, queries against tables that the application normally never accesses, sudden increases in data retrieval volume, privilege escalation events, and any direct administrative access outside of scheduled maintenance windows. Integration with your SIEM (Splunk, Elastic, Sentinel) and on-call rotation ensures that alerts reach someone who can act on them.
The logging gap we consistently find: Organizations enable basic authentication logging but do not log data access queries. This means they can tell you who logged in but not what data they accessed. When a breach occurs, the investigation hits a dead end at "the application account connected at these times," with no visibility into which tables were queried or how many records were returned. Enabling query-level audit logging has a performance cost, but the forensic value during an incident far outweighs that cost for sensitive data stores.
Database security checklist
The following checklist covers the controls we evaluate in every database security assessment. Use it as a baseline for hardening your own database environments.
Authentication and access
- Remove or disable all default accounts and change all default passwords
- Enforce strong password policies for database accounts (minimum length, complexity, rotation)
- Use individual named accounts for administrative access, never shared credentials
- Implement the principle of least privilege for all application and service accounts
- Use certificate-based or IAM-integrated authentication where supported
- Disable remote root/sa/superuser login
- Restrict database network access to only the hosts that need it (firewall rules, security groups)
Encryption
- Enable TDE or filesystem-level encryption for data at rest
- Require TLS for all database connections and disable unencrypted access
- Use TLS 1.2 or higher with strong cipher suites
- Implement column-level encryption for sensitive data fields (PII, credentials, financial data)
- Store encryption keys in a dedicated key management service, not on the database server
- Encrypt database backups with separate keys from the production database
Injection prevention
- Use parameterized queries for all application-to-database interactions
- Audit stored procedures for dynamic SQL construction and concatenation
- Validate and allowlist all database identifiers (table names, column names) supplied by users
- Test for second-order injection in data flows where stored values are reused in queries
- Disable dangerous functions and procedures (xp_cmdshell, LOAD_FILE, COPY TO PROGRAM) unless specifically required
Monitoring and operations
- Enable audit logging for authentication, privilege changes, DDL operations, and data access
- Store audit logs in a location not writable by the database service account
- Forward database logs to your SIEM and configure alerting on anomalous patterns
- Test backup restoration regularly and verify backup encryption
- Maintain an inventory of all database instances, including development and staging
- Apply security patches within your defined SLA (critical patches within 72 hours)
- Conduct quarterly access reviews to remove stale accounts and unnecessary privileges
Platform hardening
- Disable or remove unused database features, extensions, and modules
- Run the database process as a dedicated, non-root service account
- Remove sample databases and demo schemas from production instances
- Configure network binding to listen only on required interfaces, not 0.0.0.0
- Enable connection rate limiting to mitigate brute-force and denial-of-service attacks
- Implement row-level security for multi-tenant databases rather than relying solely on application logic
Database security is not a one-time hardening exercise. It is an ongoing discipline that requires coordination between application developers, database administrators, infrastructure teams, and security. The controls described in this article are not exotic or expensive. They are foundational practices that protect the most valuable asset in your environment: the data itself.
The organizations that get database security right are the ones that treat the database as a security boundary rather than an implementation detail. They test it with the same rigor they apply to their web applications and network perimeter. They enforce least privilege even when it is inconvenient. And they monitor their data layer with the understanding that when an attacker reaches the database, everything else has already failed.
Secure Your Data Layer
Our database security assessments test authentication, privilege escalation, injection vectors, encryption, and platform-specific vulnerabilities across PostgreSQL, MySQL, MongoDB, Redis, and SQL Server. Find the risks hiding in your data before attackers do.