
Mobile App Penetration Testing: What We Find in iOS and Android Security Assessments

Lorikeet Security Team · April 7, 2026 · 11 min read

TL;DR: Mobile applications present a fundamentally different attack surface than web apps. The binary ships to the user's device, where it can be decompiled, instrumented, and manipulated at runtime. Hardcoded API keys, insecure local data storage, bypassable certificate pinning, and weak root/jailbreak detection are the findings we report most frequently. The backend APIs behind mobile apps are often less hardened than their web counterparts because developers assume the mobile client enforces security — it doesn't.

iOS vs Android: Common Findings Compared

| Finding Category | iOS | Android |
| --- | --- | --- |
| Insecure Data Storage | Keychain misuse, plist files, unencrypted Core Data | Plaintext SharedPreferences, unencrypted SQLite, external storage |
| Binary Analysis | Objective-C metadata rich; Swift harder to reverse | APK easily decompiled to Java/Smali; native libs harder |
| Certificate Pinning | NSURLSession pinning, ATS configuration | OkHttp CertificatePinner, network_security_config.xml |
| Root/Jailbreak Detection | File existence checks, Cydia URL scheme | SafetyNet/Play Integrity, su binary checks, Magisk detection |
| Runtime Manipulation | Frida + Objection on jailbroken device | Frida on rooted device or via Frida gadget injection |
| Hardcoded Secrets | Strings in Mach-O binary, Info.plist | strings.xml, BuildConfig, decompiled source |
| IPC Vulnerabilities | URL schemes, universal links misuse | Exported activities, broadcast receivers, content providers |

Insecure Data Storage: The Most Common Finding

Mobile applications routinely store sensitive data on the device in ways that do not survive scrutiny. On Android, we find authentication tokens in SharedPreferences as plaintext XML, user PII in unencrypted SQLite databases, and sensitive files written to external storage (world-readable on older Android versions). On iOS, the Keychain is available but frequently misused — developers store tokens in UserDefaults (a plist file) or in Core Data without encryption, both of which are trivially readable on a jailbroken device or from an unencrypted device backup.
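To make the Android case concrete, here is a minimal sketch of the kind of check we run against a SharedPreferences XML file pulled from the app sandbox. The key names and patterns are illustrative, not an exhaustive rule set:

```python
import re
import xml.etree.ElementTree as ET

# Patterns suggesting a stored value is a credential rather than a preference.
SENSITIVE_KEY = re.compile(r"(token|secret|password|session|auth)", re.I)
JWT_VALUE = re.compile(r"^eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]*$")

def scan_shared_prefs(xml_text: str) -> list[tuple[str, str]]:
    """Return (key, value) pairs in a SharedPreferences dump that
    look like plaintext credentials."""
    findings = []
    for elem in ET.fromstring(xml_text):
        key = elem.get("name", "")
        # <string> entries carry the value as text; other types use a "value" attr.
        value = elem.text or elem.get("value", "")
        if SENSITIVE_KEY.search(key) or JWT_VALUE.match(value):
            findings.append((key, value))
    return findings

# Example: a prefs file pulled from the app's shared_prefs directory.
sample = """<?xml version='1.0' encoding='utf-8'?>
<map>
    <string name="auth_token">eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxIn0.sig</string>
    <string name="theme">dark</string>
</map>"""

print(scan_shared_prefs(sample))
```

A session token sitting in that file as plaintext XML is exactly the finding described above: anything with device access can read it.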

The risk is not theoretical. A stolen or lost device, a malicious app with storage access on Android, or a forensic extraction all expose this data. More practically, any attacker who has compromised the device — through malware, a jailbreak exploit, or physical access — can extract cached credentials and session tokens to impersonate the user on the backend API.

What we look for: After running the application through its core functionality, we examine the application's sandbox on a rooted/jailbroken device. Every file created or modified by the app is inspected for sensitive data — tokens, credentials, PII, financial data, cached API responses. On iOS, we also pull device backups and check whether sensitive Keychain items are included (items whose accessibility class lacks the ThisDeviceOnly suffix — e.g. kSecAttrAccessibleAfterFirstUnlock — are migrated into backups; the ThisDeviceOnly variants are not).


Certificate Pinning: Implementation and Bypass

Certificate pinning is a transport security control that restricts which TLS certificates the app will accept when communicating with its backend. Instead of trusting any certificate signed by a trusted CA (which includes any CA a compromised device trusts), the app only accepts certificates matching a specific pin — either the server's leaf certificate, an intermediate CA, or a public key hash.
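The public-key-hash form is the most common in practice: OkHttp's CertificatePinner, for example, expects `sha256/` followed by the base64-encoded SHA-256 of the certificate's SubjectPublicKeyInfo. The computation itself is simple; a sketch (the extraction pipeline in the comment is one common way to obtain the SPKI bytes, not the only one):

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """Compute an OkHttp-style pin: base64(SHA-256(SubjectPublicKeyInfo DER))."""
    digest = hashlib.sha256(spki_der).digest()
    return "sha256/" + base64.b64encode(digest).decode("ascii")

# In practice the SPKI bytes come from the server's certificate, e.g.:
#   openssl s_client -connect api.example.com:443 </dev/null \
#     | openssl x509 -pubkey -noout | openssl pkey -pubin -outform DER \
#     > spki.der
# Here we pin a placeholder byte string just to show the output format.
print(spki_pin(b"example-spki-bytes"))
```

Pinning the public key rather than the leaf certificate survives routine certificate renewal, as long as the key pair is reused.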

From a testing perspective, bypassing pinning is a prerequisite for API-level testing. Without bypass, the proxy cannot intercept HTTPS traffic between the app and its backend. We use Frida scripts that hook the SSL/TLS validation functions at runtime, causing them to accept any certificate. On Android, the network_security_config.xml approach is common and can sometimes be bypassed by repackaging the APK with a modified config. On iOS, hooking SecTrustEvaluate or the URLSession delegate methods achieves the same result.
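Before reaching for Frida on Android, it is worth reading the app's network_security_config.xml directly — it often tells you whether interception will work at all. A small audit sketch (element names follow the Android config schema; the risk labels are our interpretation):

```python
import xml.etree.ElementTree as ET

def audit_network_security_config(xml_text: str) -> dict:
    """Flag notable settings in an Android network_security_config.xml:
    whether user-installed CAs are trusted (a proxy CA works with no
    bypass needed) and whether any certificate pins are declared."""
    root = ET.fromstring(xml_text)
    trusts_user_cas = any(
        cert.get("src") == "user" for cert in root.iter("certificates")
    )
    has_pins = root.find(".//pin-set") is not None
    return {"trusts_user_cas": trusts_user_cas, "has_pins": has_pins}

sample = """<network-security-config>
  <base-config>
    <trust-anchors>
      <certificates src="system"/>
      <certificates src="user"/>
    </trust-anchors>
  </base-config>
</network-security-config>"""

print(audit_network_security_config(sample))
```

A config that trusts user CAs and declares no pins means our proxy CA is accepted as-is — and that the app has no transport-layer pinning to report on either way.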

When Pinning Bypass Is the Finding

If the app implements pinning but a generic Frida script bypasses it in under a minute, the pinning implementation is not providing meaningful security against a motivated attacker. We report this as a finding when the bypass is trivial — when the app uses a popular library's default pinning that is well-documented in bypass scripts. Stronger implementations use multiple pinning checks, obfuscated pin values, and integrity checks that detect Frida injection — these take significantly more effort to bypass and represent a meaningful defense-in-depth layer.


Hardcoded Secrets and API Keys

Every mobile binary ships to the user's device, and anything embedded in that binary is extractable. Despite this fundamental reality, we consistently find hardcoded API keys, client secrets, encryption keys, and backend URLs with embedded credentials in mobile applications. On Android, a simple apktool decompilation followed by a grep for common patterns reveals secrets in strings.xml, BuildConfig.java, and inline string constants. On iOS, running strings on the Mach-O binary or examining embedded plist files yields similar results.
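The "grep for common patterns" step can be sketched as a few regexes over decompiled output. Real engagements use much larger rule sets (trufflehog-style); these three patterns are illustrative:

```python
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "google_api_key": re.compile(r"\bAIza[0-9A-Za-z_-]{35}\b"),
    "generic_secret": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[=:]\s*[\"']([^\"']{8,})[\"']"
    ),
}

def scan_source(text: str) -> list[tuple[str, str]]:
    """Scan decompiled source text for secret-like strings."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# A line as it might appear in apktool/jadx output (placeholder key).
decompiled = 'public static final String KEY = "AKIAABCDEFGHIJKLMNOP";'
print(scan_source(decompiled))
```

Run over an entire decompiled tree, plus the output of strings on native libraries, this takes minutes and reliably surfaces the findings described above.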

The most dangerous findings are hardcoded backend API keys with excessive permissions — AWS access keys, Firebase admin credentials, third-party payment API keys with production access. These keys often grant access far beyond what the mobile app legitimately needs, because the same key is used in the backend and was embedded in the client for convenience during development.

Remediation: Never embed secrets in mobile binaries. Use a backend-mediated authentication flow where the mobile app authenticates to your own backend, and the backend holds the third-party API keys. For keys that must exist on the client (like a Firebase config), ensure they are scoped to the minimum necessary permissions and cannot be used to access sensitive data or administrative functions.


Root and Jailbreak Detection Bypass

Many mobile applications — particularly in financial services, healthcare, and enterprise contexts — implement root/jailbreak detection to prevent execution on compromised devices. The logic typically checks for the presence of known files (/Applications/Cydia.app, /system/bin/su), attempts to write to protected paths, checks for known hooking frameworks, or queries Google's Play Integrity API (formerly SafetyNet).
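The file-existence and package checks reduce to logic like the following. This is a deliberately pure function — device state comes in as arguments — both because that is how we mock it during testing and because it shows how little there is to bypass; the path and package lists are a small illustrative subset:

```python
SU_PATHS = ["/system/bin/su", "/system/xbin/su", "/sbin/su"]
ROOT_PACKAGES = ["com.topjohnwu.magisk", "eu.chainfire.supersu"]

def detect_root(existing_paths: set[str], installed_packages: set[str]) -> list[str]:
    """Return the client-side root-detection signals that fired.
    Real apps pair checks like this with Play Integrity verdicts
    validated server-side."""
    signals = []
    if any(path in existing_paths for path in SU_PATHS):
        signals.append("su_binary_present")
    if any(pkg in installed_packages for pkg in ROOT_PACKAGES):
        signals.append("root_manager_installed")
    return signals

print(detect_root({"/system/bin/su"}, set()))
```

Every signal here is a boolean computed on the device — which is exactly why a runtime hook that forces the function to return an empty list defeats it.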

From a penetration testing perspective, we bypass these controls to establish our testing environment and then assess whether the controls themselves are robust. Most client-side detection can be bypassed with Frida by hooking the detection functions and forcing them to return "safe" results. On Android, Magisk provides systemless root that hides from many detection mechanisms. On iOS, modern jailbreaks include bypass modules for common detection libraries.

The key question is not whether detection can be bypassed — given sufficient effort, client-side controls can always be circumvented — but whether the detection makes exploitation meaningfully harder. A single boolean check that can be toggled with one Frida script is low-value. Multiple layered checks, server-side attestation validation, and behavioral analysis that detect hooking frameworks raise the bar significantly.
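One of the layered checks mentioned above — detecting a hooking framework — often amounts to scanning the process's own memory map for known library names. A sketch (marker strings are illustrative; real implementations use more, and obfuscate them):

```python
FRIDA_MARKERS = ("frida-agent", "frida-gadget", "linjector")

def maps_show_instrumentation(maps_text: str) -> bool:
    """Check a /proc/self/maps dump for loaded hooking-framework
    libraries. One signal among several — on its own, this is exactly
    the kind of boolean an attacker flips with a single hook."""
    return any(marker in line
               for line in maps_text.splitlines()
               for marker in FRIDA_MARKERS)

clean = "7f0000000000-7f0000001000 r-xp 00000000 fd:00 1 /system/lib64/libc.so"
hooked = clean + "\n7f00000aa000-7f00000ab000 r-xp 00000000 fd:00 2 /data/local/tmp/frida-agent-64.so"
print(maps_show_instrumentation(clean), maps_show_instrumentation(hooked))
```

The value comes from layering: several such checks, at unpredictable times, with results validated server-side rather than branched on locally.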


Runtime Manipulation with Frida

Frida is the cornerstone tool for mobile application penetration testing. It allows testers to inject a JavaScript runtime into a running application process, hook arbitrary functions, modify arguments and return values, and trace execution flow — all without modifying the application binary. On a rooted Android device or jailbroken iOS device, Frida attaches to the running process and provides complete control over the application's behavior.

Common Frida operations during a mobile pentest include: bypassing certificate pinning to enable traffic interception, disabling root/jailbreak detection, extracting encryption keys from memory during cryptographic operations, modifying function return values to bypass client-side authorization checks, tracing API calls to understand application flow, and dumping decrypted data from memory.

For applications that must run on non-rooted devices during testing, Frida can be injected as a shared library (the "Frida gadget") by repackaging the application — on Android, this involves decompiling the APK, adding the gadget library, and recompiling with a debug signing key.


Backend API Security: The Hidden Attack Surface

The backend APIs that mobile applications communicate with are frequently less hardened than their web-facing counterparts. Developers often assume the mobile client will enforce authorization logic, send only valid parameters, and follow the intended workflow. Once we bypass certificate pinning and have full visibility into the API traffic, we test every endpoint for the same vulnerabilities we would in a web application pentest — broken access control, injection, mass assignment, and insecure direct object references.

A recurring pattern: the mobile app only shows the current user's data, but the API endpoint accepts any user ID and returns the corresponding data without server-side authorization checks. The mobile client was the only thing preventing horizontal privilege escalation. Similarly, administrative API endpoints that the mobile UI never calls may still be reachable and functional — the mobile app simply does not render the button, but the endpoint exists and accepts requests from any authenticated user.
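The fix for that recurring pattern is an object-level authorization check on the server, not in the client. A minimal sketch with hypothetical handler and store names — the point is only where the ownership check lives:

```python
class Forbidden(Exception):
    pass

# Stand-in for the backend's data store.
RECORDS = {42: {"owner_id": 7, "balance": 100}}

def get_account(session_user_id: int, requested_account_id: int) -> dict:
    """Object-level authorization: verify the authenticated user owns
    the requested record before returning it."""
    record = RECORDS.get(requested_account_id)
    if record is None or record["owner_id"] != session_user_id:
        # Same response for "missing" and "not yours" avoids leaking
        # which IDs exist.
        raise Forbidden(f"user {session_user_id} may not read "
                        f"account {requested_account_id}")
    return record

print(get_account(7, 42))   # owner: succeeds
try:
    get_account(8, 42)      # another authenticated user: rejected
except Forbidden as exc:
    print("blocked:", exc)
```

With this check in place, it no longer matters that the mobile UI "only shows the current user's data" — the API enforces it regardless of which client, proxy, or script sends the request.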

Secure Your Mobile Applications

Lorikeet Security's mobile application penetration testing covers both iOS and Android — static analysis, dynamic instrumentation with Frida, API security testing, and data storage review. We test what automated scanners cannot: runtime manipulation, business logic, and real-world attack chains.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.
