Thick client applications are desktop software that performs significant processing locally rather than delegating everything to a server. Think trading platforms, healthcare record systems, engineering tools, ERP clients, point-of-sale terminals, and enterprise management consoles. Unlike thin clients (browsers), thick clients run compiled code on the user's machine, store data locally, communicate with backend services over custom protocols, and interact deeply with the operating system.

This makes them harder to test than web applications. A web application pentest requires a browser and a proxy. A thick client assessment requires reverse engineering tools, debuggers, protocol analyzers, system monitors, and a methodology that accounts for the fact that you are testing software that runs in an environment an attacker controls. The tester needs to intercept traffic that may not be HTTP, decompile binaries that may be obfuscated, analyze local data storage across filesystems and registries, and hook into running processes to observe and modify behavior at runtime.

This article presents the methodology we use during thick client security assessments, covering each phase of testing with the specific tools and techniques that produce results. We organize the approach by testing phase rather than by vulnerability class, because the testing process is inherently sequential: you must understand the application before you can break it.


Phase 1: Reconnaissance and architecture mapping

Before touching a debugger or decompiler, the first step is understanding what the application is, how it was built, and how it communicates. This reconnaissance phase determines which tools and techniques will be most effective for the rest of the assessment.

Identifying the technology stack

The technology stack determines the entire testing approach. A Java application requires different tools than a .NET application, which requires different tools than a native C++ application. We identify what we are working with from file extensions, binary headers, bundled runtime files, and the libraries the application loads at startup.
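A first pass can be scripted from file signatures alone. The sketch below uses rough heuristics; a production check would parse the PE optional header's CLR runtime directory rather than grep the image for mscoree.dll:

```python
# First-pass technology identification from raw file bytes.
# Heuristics only: a robust check parses the PE CLR runtime
# directory instead of searching for the "mscoree.dll" string.
def identify_stack(blob: bytes) -> str:
    if blob[:4] == b"PK\x03\x04":
        return "java"        # ZIP container: typical JAR packaging
    if blob[:4] == b"\x7fELF":
        return "native-elf"  # Linux/Unix native binary
    if blob[:2] == b"MZ":    # Windows PE image
        return "dotnet" if b"mscoree.dll" in blob else "native-pe"
    return "unknown"
```

Feed it the first chunk of the executable (or the whole file, to catch the .NET string) and branch your tooling from the result.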

Mapping communication channels

Thick clients communicate with backend services, and the communication protocol determines how we intercept traffic. Start by running the application while monitoring network activity with Wireshark to capture all traffic at the packet level.[1] This reveals whether the application uses HTTP/HTTPS (proxiable with Burp Suite), raw TCP/UDP sockets (requiring protocol-level analysis), named pipes or RPC for local inter-process communication, database protocols (connecting directly to a database server), or proprietary binary protocols over TCP.

Many thick clients use HTTP for their backend API communication, in which case standard API security testing techniques apply directly. But a significant number use custom TCP protocols, direct database connections, or message queue systems (RabbitMQ, MSMQ) that require specialized interception techniques.

File and registry footprint

Use Process Monitor (Procmon) from Sysinternals to capture all file system and registry operations during application startup and normal usage.[2] Filter by the application's process name and observe which files it reads (configuration, cached data, license files), which files it writes (logs, temporary data, local databases), which registry keys it accesses (settings, license information, cached credentials), and which other processes it spawns or communicates with. This footprint map becomes the guide for the local data storage analysis phase.

Practical tip: Run Procmon before launching the application, not after. Many applications perform their most security-relevant operations during startup: reading credentials from storage, validating licenses, establishing backend connections, and loading configuration. If you start monitoring after the application is already running, you miss the most important activity.


Phase 2: Traffic interception

Intercepting the communication between a thick client and its backend is the equivalent of proxying a web application through Burp Suite. It allows you to see what data is sent and received, identify API endpoints and parameters, test for injection vulnerabilities, and manipulate requests to test authorization and business logic.

HTTP/HTTPS traffic

If the application communicates over HTTP/HTTPS, configure a system-level proxy and route traffic through Burp Suite. For applications that respect the system proxy setting, this is straightforward. For applications that hardcode proxy settings or ignore the system configuration, you may need to modify the application's configuration files, use a tool like Proxifier to force traffic through the proxy at the network level, or patch the binary to add proxy support.

Certificate pinning is the main obstacle to HTTPS interception. Many thick clients implement certificate pinning, rejecting any TLS certificate that does not match a hardcoded fingerprint or certificate authority. Bypassing this requires runtime manipulation. For Java applications, tools like Frida can hook the SSLContext and TrustManager implementations to accept any certificate.[3] For .NET applications, you can hook the ServicePointManager.ServerCertificateValidationCallback. For native applications, hooking the Winsock or OpenSSL/Schannel TLS validation functions achieves the same result.

Non-HTTP protocols

When the application uses custom TCP protocols, standard web proxies do not work. Instead, we choose an approach based on the protocol structure, most commonly a man-in-the-middle TCP relay that logs, and lets us tamper with, each message between the client and its server.
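For opaque TCP protocols, the workhorse is a logging relay inserted between client and server. A minimal sketch (the client is pointed at the relay via a hosts-file entry or configuration change; host and port wiring is left to the caller):

```python
# Minimal intercepting TCP relay for opaque client/server protocols.
# It listens locally, forwards to the real backend, and routes every
# chunk through tamper(), where a tester can log or rewrite bytes.
import socket
import threading

def _pipe(src, dst, tamper):
    """Copy src -> dst until EOF, passing each chunk through tamper()."""
    while True:
        chunk = src.recv(4096)
        if not chunk:
            break
        dst.sendall(tamper(chunk))
    try:
        dst.shutdown(socket.SHUT_WR)  # propagate EOF to the other side
    except OSError:
        pass

def relay_once(listener, backend_addr, tamper=lambda b: b):
    """Accept one client connection and relay both directions."""
    client, _ = listener.accept()
    backend = socket.create_connection(backend_addr)
    t = threading.Thread(target=_pipe, args=(client, backend, tamper))
    t.start()
    _pipe(backend, client, tamper)
    t.join()
    client.close()
    backend.close()
```

Start by logging chunks unmodified to learn the message framing, then graduate to targeted tampering once field boundaries are clear.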

Database protocol interception

Some thick clients connect directly to a database server (SQL Server, Oracle, MySQL, PostgreSQL) without an intermediate application server. This is a significant architectural finding on its own, because it means the database credentials must be stored on the client, and the client has direct access to execute SQL queries against the database. For these applications, we intercept the database protocol using the database's own audit logging, a TCP proxy specific to the database protocol, or by extracting the connection credentials and connecting directly with a standard database client to explore what operations the application's credentials permit.
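When a recovered connection string needs triage, a few lines normalize the semicolon-delimited key=value format used by most ADO/ODBC drivers. The key names below are illustrative, since they vary by driver:

```python
# Triage helper for recovered connection strings, e.g.
# "Server=db01;Database=app;User Id=svc;Password=hunter2".
def parse_conn_string(raw: str) -> dict:
    pairs = (p.split("=", 1) for p in raw.split(";") if "=" in p)
    return {k.strip().lower(): v.strip() for k, v in pairs}

def flags_of_interest(conn: dict) -> dict:
    """Pull out the fields a tester checks first."""
    keys = ("password", "pwd", "user id", "uid", "integrated security")
    return {k: conn[k] for k in keys if k in conn}
```

A plaintext password here is a finding by itself; "Integrated Security=true" shifts the question to what the workstation's domain identity is permitted to do against the database.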


Phase 3: Reverse engineering

Reverse engineering transforms a black-box assessment into a white-box one. By decompiling or disassembling the application, we gain access to the logic, the algorithms, the hardcoded values, and the architecture that are invisible from the outside.

Java applications: JD-GUI, CFR, and Procyon

Java applications compile to bytecode that runs on the JVM, and this bytecode is highly decompilable. JD-GUI is the classic graphical decompiler that renders Java bytecode back to readable source code. CFR and Procyon are command-line decompilers that often produce better output for modern Java features (lambdas, switch expressions, records).[4]

For Java applications packaged as JAR files, the decompilation process is trivial: a JAR is a ZIP archive containing .class files, and decompiling them produces readable Java source. For applications that use obfuscation tools like ProGuard or Zelix KlassMaster, the decompiled code has mangled names (a, b, c instead of meaningful identifiers), but the logic remains readable. The obfuscation makes analysis slower but does not prevent it.
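Because a JAR is a ZIP, triage can be scripted: list the .class entries and estimate how aggressive the name mangling is. A sketch (the one-to-two-character heuristic is a rough indicator, not a guarantee):

```python
# List .class entries in a JAR and estimate ProGuard-style name
# mangling from the share of very short simple class names.
import io
import zipfile

def jar_class_names(jar_bytes: bytes):
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as z:
        return [n for n in z.namelist() if n.endswith(".class")]

def obfuscation_ratio(class_names) -> float:
    """Fraction of classes whose simple name is 1-2 characters long."""
    if not class_names:
        return 0.0
    short = [n for n in class_names
             if len(n.rsplit("/", 1)[-1].removesuffix(".class")) <= 2]
    return len(short) / len(class_names)
```

A high ratio tells you up front to budget extra time for renaming classes as you work through the decompiled output.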

What we extract from decompiled Java code: authentication and authorization logic, API endpoint URLs and request formats, encryption implementations and key management, database query construction, license validation logic, and any hardcoded credentials or tokens.

.NET applications: dnSpy and ILSpy

.NET applications compile to Common Intermediate Language (CIL), which decompiles cleanly back to C# or VB.NET source code. dnSpy is the preferred tool because it combines decompilation with runtime debugging: you can set breakpoints in the decompiled source, step through execution, inspect variables, and even modify the IL code and recompile the assembly.[5] ILSpy provides cleaner decompilation output and is useful as a complementary analysis tool.

.NET obfuscation tools (Dotfuscator, ConfuserEx, SmartAssembly) are more aggressive than Java obfuscators. Beyond name mangling, they can encrypt strings, add control flow obfuscation (inserting dead code and opaque predicates), and pack assemblies. The de4dot tool specifically targets .NET obfuscation and can automatically deobfuscate assemblies protected by many commercial and open-source obfuscators. Even with sophisticated obfuscation, the decompiled output is vastly more readable than native disassembly.

Native C/C++ applications: Ghidra and IDA Pro

Native applications compile to machine code, which is the most challenging to analyze. Unlike managed bytecode, machine code does not retain variable names, type information, or high-level control structures. Analysis requires a disassembler/decompiler that translates machine code into assembly language and, ideally, pseudocode.

Ghidra is the NSA's open-source reverse engineering framework. It provides disassembly, decompilation to C-like pseudocode, cross-referencing, scripting, and collaborative analysis features. For most thick client assessments, Ghidra is sufficient and produces good results.[6] IDA Pro is the commercial standard, with superior analysis engines, a larger plugin ecosystem, and better support for exotic architectures and file formats. IDA's Hex-Rays decompiler produces higher-quality pseudocode than Ghidra for complex functions, which matters when analyzing cryptographic implementations or complex business logic.[7]

Reverse engineering native code is time-intensive. We focus on specific areas rather than attempting to understand the entire binary: authentication functions (identified by cross-references to network APIs and string references like "login", "password", "token"), cryptographic operations (identified by constants like AES S-boxes, SHA magic numbers, or imports from cryptographic libraries), and license validation (identified by string references to license-related terms and conditional branches that determine feature access).
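For the constant-matching step, a simple signature scan over the raw binary is often enough to locate cryptographic code. The signatures below are standard published constants (the AES forward S-box prefix and the first SHA-256 initial hash word); byte order of multi-byte words depends on how the compiler emitted them, so both endiannesses are tried:

```python
# Locate well-known cryptographic constants in a raw binary image.
AES_SBOX_PREFIX = bytes([0x63, 0x7C, 0x77, 0x7B, 0xF2, 0x6B, 0x6F, 0xC5])
SHA256_H0_BE = (0x6A09E667).to_bytes(4, "big")
SHA256_H0_LE = (0x6A09E667).to_bytes(4, "little")

def find_crypto_signatures(blob: bytes) -> dict:
    """Return {signature_name: offset} for each signature found."""
    sigs = {"aes_sbox": AES_SBOX_PREFIX,
            "sha256_init_be": SHA256_H0_BE,
            "sha256_init_le": SHA256_H0_LE}
    return {name: blob.find(sig) for name, sig in sigs.items()
            if blob.find(sig) != -1}
```

Each hit gives a concrete address to anchor Ghidra/IDA analysis, instead of reading the binary front to back.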

The choice between Ghidra and IDA Pro is not about quality for most assessments. Ghidra is free, actively maintained, and produces results that are adequate for 90% of thick client assessments. IDA Pro's advantages become significant for heavily obfuscated binaries, custom packers, and architectures beyond x86/x64/ARM. For teams building a thick client testing capability, start with Ghidra.


Phase 4: Local data storage analysis

Thick clients store data locally in a variety of locations and formats. Every one of these storage locations is readable by anyone with access to the workstation: any local user of the machine, malware running in the user's context, and administrators.

Filesystem analysis

Using the Procmon footprint from Phase 1, examine every file the application reads and writes. Common locations include per-user application data directories (%APPDATA%, %LOCALAPPDATA%, %PROGRAMDATA%), the installation directory itself, and temporary folders. Common findings include plaintext configuration values, connection strings, local database files (often SQLite), and verbose logs that capture sensitive data.

Registry analysis

Windows Registry is a common storage location for thick client application settings, cached credentials, and licensing data. Use Registry Explorer from Eric Zimmerman's tools or the built-in regedit to examine the keys the application accesses (identified through Procmon). Look under HKEY_CURRENT_USER\Software\[AppName] and HKEY_LOCAL_MACHINE\SOFTWARE\[AppName] for stored credentials (sometimes Base64-encoded or weakly encrypted), connection strings, license keys and registration data, feature flags that control access to premium functionality, and cached authentication tokens.
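Registry values that look protected are often merely encoded. A small triage helper that tries the two most common reversible transforms before escalating to real cryptographic analysis in the reversing phase:

```python
# Try common reversible "protections" seen on registry-stored secrets.
# Base64 and hex cover a surprising share of real findings; anything
# that survives both goes to proper crypto analysis.
import base64
import binascii

def try_decode(value: str) -> dict:
    results = {}
    try:
        results["base64"] = base64.b64decode(value, validate=True)
    except (binascii.Error, ValueError):
        pass
    try:
        results["hex"] = bytes.fromhex(value)
    except ValueError:
        pass
    return results
```

Run every suspicious string value through it; if a decoded result is printable or starts with a recognizable header, you likely have a finding.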

A common pattern we encounter: the application stores a "licensed" registry value as 0 (unlicensed) or 1 (licensed). Changing the value from 0 to 1 in regedit unlocks the full application. This is the simplest form of client-side license enforcement, and it is still surprisingly common.

Credential storage assessment

For each location where the application stores credentials, we evaluate the protection mechanism: is the value stored in plaintext, merely encoded (Base64, hex), encrypted with a key that ships inside the application, or protected by an operating system facility such as DPAPI or the Windows Credential Manager? Each step up that ladder raises attacker effort, but anything the application can decrypt unattended, an attacker in the same context can decrypt too.


Phase 5: DLL injection and API hooking

DLL injection and API hooking are active manipulation techniques that allow the tester to modify the application's behavior at runtime without patching the binary on disk. These techniques are essential for testing thick clients because they enable bypassing client-side validation and security checks, intercepting function calls to observe parameters and return values, modifying data in transit between the application and its dependencies, and testing what happens when the application receives unexpected inputs from "trusted" sources.

DLL injection techniques

DLL injection places a custom DLL into the target application's process space, where it can execute arbitrary code in the application's context. The most common techniques are CreateRemoteThread combined with LoadLibrary, SetWindowsHookEx, QueueUserAPC, and reflective DLL loading.[8]

API hooking with Frida

Frida is the most versatile tool in the thick client tester's arsenal. It is a dynamic instrumentation framework that injects a JavaScript engine into the target process, allowing you to hook any function, read and modify parameters, change return values, and call arbitrary functions, all without modifying the binary on disk.[3]

Frida works across Java, .NET, and native applications. For a Java thick client, you can hook methods by class and method name. For a .NET application, you can hook CLR methods. For native applications, you can hook exported functions by name or any function by address (determined through Ghidra/IDA analysis).
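As an illustration (not tied to any real product), hooking a hypothetical native export looks like the sketch below using Frida's Python bindings. The process name, function name, and forced return value are all placeholders, and the frida package must be installed separately:

```python
# Sketch: hook a hypothetical native export with Frida.
# TARGET and FUNC are placeholders for the assessed application;
# API names follow Frida's long-documented forms (verify against
# the version you run).
TARGET = "victim_app.exe"   # hypothetical process name
FUNC = "ValidateLicense"    # hypothetical exported function

HOOK_JS = """
Interceptor.attach(Module.findExportByName(null, '%s'), {
    onEnter(args) {
        send('%s called, arg0=' + args[0]);  // observe parameters
    },
    onLeave(retval) {
        retval.replace(1);  // force a 'valid' result
    }
});
""" % (FUNC, FUNC)

def attach_and_hook():
    import frida  # deferred so the sketch loads without frida installed
    session = frida.attach(TARGET)
    script = session.create_script(HOOK_JS)
    script.on("message", lambda message, data: print(message))
    script.load()
    return session
```

The same pattern scales from observation (log every call) to active bypass (rewrite arguments or return values), which is exactly the progression most assessments follow.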

In practice, API hooking lets us bypass certificate pinning, force license and integrity checks to return success, dump plaintext buffers before they are encrypted for transmission, and log credentials as they pass through authentication functions.

API hooking with x64dbg

For native applications where Frida's JavaScript interface is insufficient (complex hooking scenarios, anti-tampering protections), x64dbg provides a full-featured debugger with conditional breakpoints, hardware breakpoints that are harder for anti-debug techniques to detect, scripting support, and plugin extensibility.[9] We use x64dbg to trace specific function calls, set breakpoints on cryptographic APIs (CryptEncrypt, CryptDecrypt, BCryptEncrypt), and analyze how the application handles authentication responses from the server.

Anti-tampering and anti-debug: Some thick clients implement protections against the techniques described here: checking for debugger attachment (IsDebuggerPresent), verifying binary integrity (checksum validation), detecting injected DLLs, and monitoring for hooking. These protections increase the effort required but do not prevent analysis. Each anti-debug technique has well-documented bypasses, and tools like ScyllaHide (an x64dbg plugin) automate the process of defeating common anti-debug checks.


Phase 6: Binary patching

Binary patching modifies the application's compiled code to change its behavior. This is the most direct way to bypass client-side controls, and it demonstrates to the application's developers exactly why client-side enforcement is insufficient.

Patching managed assemblies (.NET, Java)

For .NET applications, dnSpy allows editing IL code directly and recompiling the assembly. Common patches include changing conditional branches (turning "if not licensed" into "if licensed"), removing method calls (deleting the call to a validation function), and modifying constant values (changing a trial period from 30 to 999999 days). The patched assembly can be saved and the application restarted with the modified logic.

For Java applications, bytecode editors like Recaf or the ASM library allow similar modifications to .class files within JAR archives. The JAR can be modified in place or repacked with the patched classes.

Patching native binaries

Native binary patching requires more precision because you are modifying machine code. Use Ghidra or IDA to identify the target instruction, then use a hex editor (HxD, 010 Editor) or the tool's built-in patching functionality to modify the bytes. Common patches change JNZ (jump if not zero) to JZ (jump if zero) to invert a conditional check, NOP out (replace with 0x90 no-operation bytes) instructions that perform unwanted checks, or modify immediate values in comparison instructions.
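The byte-level edits described above are mechanical once the offset is known. A sketch over an in-memory copy of the image, with illustrative offsets (real ones come from Ghidra/IDA):

```python
# Invert a conditional short jump or NOP out a span of instructions
# in a binary image. 0x75/0x74 are the JNZ/JZ rel8 opcodes; 0x90 is NOP.
JNZ, JZ, NOP = 0x75, 0x74, 0x90

def invert_jump(image: bytes, offset: int) -> bytes:
    patched = bytearray(image)
    op = patched[offset]
    if op == JNZ:
        patched[offset] = JZ
    elif op == JZ:
        patched[offset] = JNZ
    else:
        raise ValueError(f"no short conditional jump at {offset:#x} (found {op:#04x})")
    return bytes(patched)

def nop_out(image: bytes, offset: int, length: int) -> bytes:
    """Replace `length` bytes (e.g. a CALL to a check) with NOPs."""
    patched = bytearray(image)
    patched[offset:offset + length] = bytes([NOP]) * length
    return bytes(patched)
```

Keeping the patch as a pure function over bytes makes it easy to diff the original and patched images before writing anything to disk.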

Code signing is the primary defense against binary patching. If the application or its libraries are digitally signed, modifications invalidate the signature. However, many thick clients do not verify their own signatures at runtime. They rely on the operating system to check the signature during installation (Authenticode verification) but never check it again during execution. For applications that do verify their signature at runtime, the verification code itself can be patched out, creating a bootstrap problem that demonstrates why tamper detection alone is not a security boundary.


Technology-specific testing considerations

Java thick clients

Java applications have specific characteristics that affect the testing approach. The JVM's serialization mechanism has been a persistent source of vulnerabilities. If the application deserializes untrusted data (from network communications, local files, or clipboard operations), it may be vulnerable to deserialization attacks using gadget chains from libraries like Apache Commons Collections, Spring, or Groovy. Tools like ysoserial generate exploitation payloads for known gadget chains.[10] Java's RMI (Remote Method Invocation) and JNDI (Java Naming and Directory Interface) have also been the basis for severe vulnerabilities, most notably CVE-2021-44228 (Log4Shell), which exploited JNDI lookups to achieve remote code execution.
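As a quick screen for deserialization exposure, captured traffic and local files can be scanned for the Java serialization stream header (STREAM_MAGIC 0xACED, version 0x0005). A minimal sketch:

```python
# Java serialized streams begin with magic 0xACED followed by
# stream version 0x0005. Hits in traffic captures or local files
# flag potential deserialization entry points worth deeper testing.
JAVA_SER_MAGIC = b"\xac\xed\x00\x05"

def find_java_serialization(blob: bytes):
    """Return offsets of Java serialization stream headers in a blob."""
    offsets, start = [], 0
    while (i := blob.find(JAVA_SER_MAGIC, start)) != -1:
        offsets.append(i)
        start = i + 1
    return offsets
```

Each hit marks a place where the application consumes a serialized object graph, which is where gadget-chain payloads from ysoserial would be aimed.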

.NET thick clients

Beyond the decompilation and patching covered above, .NET thick clients that use Windows Communication Foundation (WCF) for backend communication can expose a WSDL (Web Services Description Language) endpoint describing the entire service contract when metadata publishing is enabled. This is equivalent to enabling GraphQL introspection in production: it gives the tester a complete map of available operations, parameter types, and data structures. WCF services may also be vulnerable to XML external entity (XXE) injection if the XML parser is not properly configured.

C/C++ thick clients

Native thick clients are susceptible to memory corruption vulnerabilities (buffer overflows, use-after-free, format string bugs) that are not present in managed languages. While exploiting these vulnerabilities is complex due to modern mitigations (ASLR, DEP, CFI, stack canaries), identifying them during a pentest demonstrates serious code quality issues. We use AddressSanitizer (when source is available), fuzzing with tools like AFL or WinAFL, and manual analysis with x64dbg to identify memory safety issues.


Reporting thick client findings

Thick client findings require careful contextualization because the threat model is different from web applications. A vulnerability that requires local access to the workstation is less severe than one exploitable remotely, unless the application processes data from untrusted sources or the workstation is a shared environment. Each finding should clearly state the prerequisite access level, the exploitation complexity, and the business impact.

The most important message for development teams receiving a thick client assessment report is this: the client cannot be trusted. Any logic running on the client, whether compiled native code, obfuscated .NET, or packed Java, can be analyzed, modified, and bypassed. Business logic enforcement, licensing, authorization, and data validation must happen on the server. The client application should be treated as a user interface that any attacker can fully control.

For organizations that distribute thick client applications to customers, partners, or employees, regular security assessments of these applications are essential. The attack surface is large, the tools for exploitation are mature and freely available, and the impact of a compromised thick client often extends beyond the application itself to the backend infrastructure it connects to.

Sources

  1. Wireshark Foundation, "Wireshark User's Guide," Wireshark Documentation. https://www.wireshark.org/docs/wsug_html_chunked/
  2. Microsoft, "Process Monitor v4.0," Sysinternals. https://learn.microsoft.com/en-us/sysinternals/downloads/procmon
  3. Frida, "Frida - A world-class dynamic instrumentation toolkit," Frida Documentation. https://frida.re/docs/home/
  4. L. Benfield, "CFR - Another Java Decompiler," CFR Project. https://www.benf.org/other/cfr/
  5. 0xd4d, "dnSpy - .NET Debugger and Assembly Editor," GitHub. https://github.com/dnSpy/dnSpy
  6. NSA, "Ghidra Software Reverse Engineering Framework," GitHub. https://github.com/NationalSecurityAgency/ghidra
  7. Hex-Rays, "IDA Pro - The Interactive Disassembler," Hex-Rays. https://hex-rays.com/ida-pro/
  8. MITRE, "T1055.001 - Process Injection: Dynamic-link Library Injection," MITRE ATT&CK. https://attack.mitre.org/techniques/T1055/001/
  9. x64dbg, "x64dbg - An open-source x64/x32 debugger for Windows," x64dbg Documentation. https://x64dbg.com/
  10. C. Frohoff, "ysoserial - A proof-of-concept tool for generating payloads that exploit unsafe Java object deserialization," GitHub. https://github.com/frohoff/ysoserial

Ship a Desktop Application? We Should Test It.

Thick client testing requires reverse engineering, protocol analysis, and binary manipulation skills that standard web pentests do not cover. Our team has the tools and methodology to find what attackers will find first.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.