Desktop applications occupy a fundamentally different threat landscape than web applications. When you test a web app, the server controls the environment. The client is a browser with a well-defined sandbox, and your code runs in an environment you manage. Desktop applications invert that model entirely. Your code runs on the user's machine, in an environment the user (or an attacker) controls. Every binary, configuration file, local database, and network request is accessible to anyone with access to the workstation.

This distinction matters because the vulnerability classes are different. Web application pentests focus on injection, broken access control, and server-side misconfigurations. Desktop application pentests focus on reverse engineering, local privilege escalation, insecure data storage, binary manipulation, and trust boundary violations between the client and its backend services. The tools are different. The methodology is different. And the findings are often more severe, because a compromised desktop application can provide an attacker with access to the local operating system, not just the application itself.

This guide covers the specific vulnerability classes and testing techniques we use during desktop application penetration tests, organized by technology stack: Electron, .NET, and native (C/C++) applications.


Electron applications: web vulnerabilities with OS-level impact

Electron is the dominant framework for cross-platform desktop applications. Visual Studio Code, Slack, Discord, Microsoft Teams, Figma, Notion, and hundreds of enterprise applications are built on it. Electron bundles Chromium and Node.js into a single runtime, allowing developers to build desktop applications using HTML, CSS, and JavaScript. This is convenient for development, but it creates a unique security problem: web application vulnerabilities that would be sandboxed in a browser now have direct access to the operating system through Node.js APIs.[1]

nodeIntegration and contextIsolation

These two settings are the single most important security controls in any Electron application. nodeIntegration determines whether renderer processes (the Chromium windows that display the UI) can access Node.js APIs like require(), child_process, and fs. When nodeIntegration is enabled (set to true), any JavaScript running in the renderer, including injected scripts from XSS vulnerabilities, can execute arbitrary system commands, read and write files, and spawn processes. A simple cross-site scripting bug becomes remote code execution on the user's machine.

contextIsolation controls whether the preload script and the web page's JavaScript share the same global scope. When contextIsolation is disabled (set to false), the web page can access and modify objects from the preload script, which typically has elevated privileges. An attacker who achieves XSS can overwrite functions in the preload script's scope to intercept or modify privileged operations.[2]

Modern Electron versions (12+) default to nodeIntegration: false and contextIsolation: true, but we routinely find applications that override these defaults. Some do it intentionally because their architecture requires it. Others do it because the code was written years ago when the defaults were different and nobody has updated the configuration. Either way, the testing approach is the same: find any input that renders unsanitized HTML or JavaScript in the renderer process, then leverage that to access Node.js APIs.
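As a reference point when reviewing configurations, a hardened main-process window creation looks roughly like this. This is an illustrative fragment, not a complete main process; BrowserWindow and path come from the Electron main-process context, and making the safe defaults explicit prevents legacy copy-paste or a framework downgrade from silently reintroducing the risk.

```javascript
// Illustrative configuration fragment (Electron main process).
const win = new BrowserWindow({
  webPreferences: {
    nodeIntegration: false,  // renderer gets no Node.js APIs
    contextIsolation: true,  // preload scope is separate from page scope
    sandbox: true,           // Chromium OS-level sandbox for the renderer
    preload: path.join(__dirname, 'preload.js'),
  },
});
```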

Preload script vulnerabilities

Even with contextIsolation enabled and nodeIntegration disabled, Electron applications expose functionality to the renderer through preload scripts and the contextBridge API. The preload script runs with Node.js access and selectively exposes functions to the web page. The security of the entire application depends on what those exposed functions allow.

We frequently find preload scripts that expose overly broad APIs. A function intended to let the renderer read a specific configuration file might accept a file path parameter without validation, turning it into an arbitrary file read primitive. A function meant to open a specific URL in the system browser might accept any URL, enabling an attacker to launch arbitrary protocols. The principle of least privilege applies here: every function exposed through contextBridge should accept the minimum necessary parameters and validate every input rigorously.

Prototype pollution in Electron

Prototype pollution is a JavaScript vulnerability where an attacker modifies the prototype of a base object (like Object.prototype), causing the modification to propagate to all objects in the application. In a browser, this is typically limited to client-side impact. In Electron, prototype pollution can escalate to remote code execution.[3]

The attack chain works like this: the attacker finds a prototype pollution vector (often through a deep merge function, a JSON parser, or a query string parser). They pollute Object.prototype with properties that influence how Electron's internal APIs behave. For example, polluting the "shell" or "command" properties can cause child_process.spawn() calls to execute attacker-controlled commands. This is a well-documented attack path that has affected multiple Electron applications, and it highlights why web vulnerability classes cannot be dismissed as "just client-side" in the Electron context.
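The pollution half of that chain can be reproduced in plain Node.js. This sketch uses a deliberately naive recursive merge, the classic vector; the payload string stands in for any attacker-controlled JSON (an IPC message, an API response, a query string).

```javascript
// Naive recursive merge: the classic prototype pollution sink.
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === 'object' && source[key] !== null &&
        typeof target[key] === 'object' && target[key] !== null) {
      merge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-controlled JSON. JSON.parse creates "__proto__" as an own
// property, so the merge walks into Object.prototype and writes to it.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
merge({}, payload);

// Every object in the process now inherits the attacker's property.
console.log(({}).polluted); // true
```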

Extracting and analyzing Electron app source code

Electron applications package their JavaScript source code into an ASAR archive (app.asar), which is essentially an unencrypted archive format. Extracting it is trivial using the asar command-line tool: npx asar extract app.asar ./extracted. This gives the tester complete access to the application's source code, including API endpoints, authentication logic, hardcoded secrets, and the exact configuration of nodeIntegration, contextIsolation, and preload scripts.[4]

Some developers attempt to protect their source code by obfuscating the JavaScript or encrypting the ASAR file. These are speed bumps, not barriers. JavaScript obfuscation is reversible with tools like de4js or js-beautify. Custom ASAR encryption requires the decryption key to be bundled with the application (otherwise it could not run), so the key is always recoverable through binary analysis of the Electron main process.

Real-world finding: During an engagement, we extracted an Electron application's ASAR archive and found the application's entire backend API key, database connection strings, and an admin JWT hardcoded in a configuration module. The application was distributed to thousands of end users. Every user had the credentials needed to access the production database directly.


.NET application vulnerabilities

.NET desktop applications (Windows Forms, WPF, MAUI, and legacy .NET Framework apps) present a different set of opportunities. The most significant is that .NET assemblies compile to Common Intermediate Language (CIL), which is straightforwardly decompilable back to readable source code. Unlike native C/C++ binaries that compile to machine code and require laborious reverse engineering, .NET binaries can be decompiled to near-original C# source in seconds.

Decompilation with dnSpy and ILSpy

dnSpy is the primary tool for .NET reverse engineering during penetration tests. It decompiles .NET assemblies to readable C#, allows setting breakpoints and stepping through the code at runtime, and supports modifying the IL code and recompiling the assembly. For a penetration tester, dnSpy effectively gives you the application's source code and a fully featured debugger.[5]

ILSpy is an alternative decompiler that produces cleaner output for some code patterns and supports a wider range of .NET versions. We typically use both: dnSpy for runtime debugging and modification, ILSpy for clean source code analysis when dnSpy's decompilation is unclear.

What we look for in decompiled .NET code is specific. Hardcoded credentials (connection strings, API keys, encryption keys) embedded in the source. Client-side authorization logic that can be bypassed by modifying the binary. Cryptographic implementations using weak algorithms (MD5, SHA-1 for hashing; DES, 3DES for encryption; ECB mode for AES). SQL queries constructed through string concatenation instead of parameterized queries. Licensing or trial enforcement logic that runs entirely on the client. Every one of these is a common finding in .NET desktop application assessments.

Binary patching and logic bypass

Because .NET assemblies can be decompiled, modified, and recompiled, any logic that runs purely on the client side can be bypassed. License validation checks that return a boolean can be patched to always return true. Trial expiration logic can be removed entirely. Feature gates that disable premium functionality for free-tier users can be stripped out. Role-based UI restrictions (hiding admin buttons from regular users) can be undone.

This is not hypothetical. In a recent assessment, we decompiled a .NET desktop application, found the license validation method, changed the single IL instruction that compared the license status from brfalse (branch if false) to brtrue (branch if true), saved the modified assembly, and restarted the application. The trial license was now treated as a full enterprise license. The entire process took under five minutes.

The lesson for developers is clear: never enforce business logic, licensing, or access control purely on the client side. Every decision that matters must be validated on the server. The client application should be treated as a UI layer that can be fully compromised.

.NET configuration and secrets exposure

.NET applications frequently store configuration in app.config or appsettings.json files that ship alongside the binary. These files commonly contain database connection strings (including credentials), API endpoint URLs with embedded tokens, encryption keys used for local data protection, and SMTP server credentials for email functionality. Even when developers use .NET's built-in configuration encryption (DPAPI-based), the decryption key is tied to the machine or user context, and the application itself must be able to decrypt the values, meaning any tool running in the same context can also decrypt them.


DLL hijacking

DLL hijacking exploits the way Windows searches for Dynamic Link Libraries when an application loads them. When an application calls LoadLibrary() or references a DLL without specifying the full path, Windows searches a series of directories in a defined order: the application's directory, the system directories, the Windows directory, the current working directory, and the directories in the PATH environment variable.[6] If an attacker can place a malicious DLL with the expected name in a directory that is searched before the legitimate DLL's location, the application loads the attacker's code instead.

There are several practical variations of this attack that we test for during desktop application assessments.

Application directory hijacking

If the application is installed in a directory where the current user has write permissions (common with per-user installations, portable applications, or applications installed outside of Program Files), the attacker can drop a malicious DLL into the application's own directory. This DLL will be loaded before any system DLL because the application directory is searched first. We use Process Monitor (Procmon) from Sysinternals to identify which DLLs an application attempts to load and from which paths, looking specifically for "NAME NOT FOUND" results that indicate the application is searching for a DLL that does not exist in the expected location.[7]

PATH-based hijacking

If any directory in the system PATH is writable by unprivileged users, placing a DLL there can affect any application that loads DLLs by name without a full path. This is a system-wide privilege escalation vector. We check PATH directory permissions as a standard part of every desktop application assessment.

Side-loading through signed binaries

A more sophisticated technique involves using a legitimately signed application binary to load a malicious DLL. The signed binary passes application allowlisting controls (because it is trusted), but it loads the attacker's DLL because of the search order vulnerability. This technique is frequently used by advanced threat actors and is documented extensively in the MITRE ATT&CK framework under T1574.002 (DLL Side-Loading).[8]

Testing methodology: We use Procmon to monitor all DLL load operations during application startup and normal usage. We filter for results where the application searches for a DLL in a writable directory. For each candidate, we compile a test DLL that logs the hijack (writing to a file or making a network callback) without disrupting the application's functionality. If the test DLL is loaded and executed, we have confirmed the DLL hijacking vulnerability.


Insecure local data storage

Desktop applications store data locally in ways that web applications simply cannot. Browsers enforce a storage sandbox. Desktop applications have access to the full filesystem, the Windows Registry, SQLite databases, custom file formats, and the operating system's credential storage mechanisms. The security of this locally stored data is a major focus during desktop application penetration tests.

Plaintext credential storage

We find credentials stored in plaintext with alarming frequency. Application configuration files, SQLite databases in the user's AppData directory, Windows Registry keys, and custom log files all commonly contain passwords, API tokens, session tokens, or encryption keys in cleartext. The "remember me" functionality in desktop applications often stores the user's actual password (not a session token) so it can re-authenticate automatically. This means compromising the user's workstation gives the attacker the user's credentials for the backend service, not just a session.

Weak encryption of stored data

Applications that do encrypt locally stored data frequently use weak or custom encryption schemes. We have found applications using XOR with a static key, Base64 encoding presented as "encryption," AES with a hardcoded key embedded in the binary, and custom algorithms that provide no meaningful security. The encryption key is always recoverable through decompilation (.NET, Java) or reverse engineering (native binaries), so the question is not whether the encryption can be broken but how long it takes. If the key is hardcoded in the application binary, the answer is usually minutes.
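XOR with a static key is a representative example of why these schemes fail: the operation is symmetric, so a key lifted from the binary decrypts every stored value directly. A minimal sketch (the key and plaintext are fabricated for illustration):

```javascript
// XOR "encryption" with a static key: applying the same key a second
// time recovers the plaintext, so recovering the hardcoded key from the
// binary decrypts everything the application ever stored with it.
function xorWithKey(data, key) {
  const out = Buffer.alloc(data.length);
  for (let i = 0; i < data.length; i++) out[i] = data[i] ^ key[i % key.length];
  return out;
}

const key = Buffer.from('s3cr3t');                       // hardcoded in the binary
const stored = xorWithKey(Buffer.from('hunter2'), key);  // "encrypted" at rest
console.log(xorWithKey(stored, key).toString()); // hunter2
```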

Windows Credential Manager and DPAPI

Windows provides the Data Protection API (DPAPI) and the Credential Manager for secure local storage. DPAPI encrypts data using a key derived from the user's login credentials, meaning the data can only be decrypted by the user who encrypted it (or by an administrator). Applications that use DPAPI or Credential Manager correctly are significantly harder to attack than those using custom encryption. However, "harder" is not "impossible." Tools like Mimikatz can extract DPAPI master keys from memory, and any process running as the same user can call the DPAPI decryption functions. The protection is against other users on the same machine and against offline attacks on the disk, not against an attacker who has achieved code execution in the user's session.


Hardcoded credentials and secrets

Hardcoded credentials are one of the most consistently high-impact findings in desktop application assessments. Unlike web applications where server-side code is not accessible to end users, desktop application binaries are distributed to every user. Any secret embedded in the binary is, by definition, shared with every user who has the application installed.

The secrets we commonly extract from desktop applications include backend API keys, database connection strings, encryption keys used to protect local data, SMTP and service account credentials, and privileged tokens such as admin JWTs.

For .NET applications, finding these is as simple as decompiling the binary and searching for common patterns: "password", "secret", "apikey", "connectionstring", "bearer". For native binaries, we use the strings utility to extract printable strings, then filter for patterns that look like credentials. For obfuscated binaries, dynamic analysis with a debugger (x64dbg for native, dnSpy for .NET) reveals credentials at runtime when they are loaded into memory or passed to API calls.
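The strings pass is simple enough to sketch. This is a rough, minimal reimplementation (printable-ASCII runs only) combined with a credential filter; the embedded connection string and surrounding junk bytes are fabricated to stand in for a real binary.

```javascript
// Minimal re-implementation of the `strings` pass: pull printable ASCII
// runs out of a binary blob, then filter for credential-like patterns.
function extractStrings(buf, minLen = 6) {
  const out = [];
  let run = '';
  for (const b of buf) {
    if (b >= 0x20 && b <= 0x7e) { run += String.fromCharCode(b); }
    else { if (run.length >= minLen) out.push(run); run = ''; }
  }
  if (run.length >= minLen) out.push(run);
  return out;
}

// Simulated binary: junk bytes with an embedded connection string.
const blob = Buffer.concat([
  Buffer.from([0x00, 0x01, 0x7f]),
  Buffer.from('Server=db;User=sa;Password=hunter2;'),
  Buffer.from([0x02, 0xff]),
]);
const hits = extractStrings(blob).filter((s) => /password/i.test(s));
console.log(hits);
```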


Memory analysis

Desktop applications keep sensitive data in process memory during operation. Unlike web applications where the server controls memory management, desktop application memory is accessible to any process running with the same user privileges (or higher). This makes memory analysis a core technique in desktop application penetration testing.

What we find in memory: Cleartext passwords that the user entered, decrypted data that is encrypted at rest but must be decrypted for use, session tokens and authentication cookies, decrypted license information, API responses containing sensitive data, and cryptographic keys loaded for active encryption operations.

We use x64dbg for runtime debugging and memory inspection of native applications. For .NET applications, dnSpy provides integrated debugging. For general-purpose memory analysis, tools like Process Hacker and Volatility allow dumping and searching process memory. The specific test is: log into the application, perform sensitive operations, then search the process memory for the credentials and sensitive data that were handled. If credentials persist in memory after the authentication flow completes, that is a finding. If sensitive data remains in memory after the user navigates away from the view that displayed it, that is a finding.

Secure memory handling requires explicit effort from developers. In C#, the SecureString class (no longer recommended by Microsoft for new code, but still functional) attempts to protect string data in memory. In C/C++, developers should zero out buffers containing sensitive data after use with SecureZeroMemory() on Windows. In practice, most desktop applications make no effort to clear sensitive data from memory, and the garbage collector in managed languages (.NET, Java) makes it difficult even when developers try.


Insecure update mechanisms

How a desktop application updates itself is a critical security concern. An insecure update mechanism is a remote code execution vector: if an attacker can tamper with the update process, they can deliver malicious code that the application will execute with the application's privileges (often elevated privileges, since updates frequently require administrator access).

HTTP update channels

Applications that check for updates over HTTP (not HTTPS) allow any network-level attacker to inject a malicious update. The attacker intercepts the update check, responds with a spoofed update manifest pointing to their malicious binary, and the application downloads and executes it. This is a man-in-the-middle attack that is practical on public Wi-Fi networks, compromised routers, or any network where the attacker has positioning. Even on corporate networks, ARP spoofing or DHCP manipulation can achieve the necessary network position.

Missing signature verification on updates

Applications that download updates over HTTPS but do not verify the cryptographic signature of the downloaded binary are still vulnerable. If the update server is compromised (or if the attacker can redirect the DNS for the update domain), the application will accept any binary from the expected URL. Proper update security requires both transport security (HTTPS) and code signing verification. The application should verify that the downloaded update is signed by the expected publisher before executing it.[9]

Electron-specific update risks

Electron applications commonly use the electron-updater or Squirrel frameworks for auto-updates. These frameworks have had their own vulnerabilities. CVE-2024-39698 was a signature validation bypass in electron-updater on Windows, where the update mechanism could be tricked into accepting unsigned or incorrectly signed packages.[10] We test the specific update framework in use, check whether signature verification is enforced, verify the update channel uses HTTPS with proper certificate validation, and attempt to serve a modified update package from a controlled server.

The impact of a compromised update mechanism cannot be overstated. It turns every installed copy of the application into a potential backdoor. The SolarWinds attack (2020) demonstrated this at scale: attackers compromised the build system, injected malicious code into a legitimate update, and the signed update was distributed to 18,000 organizations through the normal update channel. Desktop application update mechanisms deserve the same scrutiny.


Network traffic interception

Desktop applications communicate with backend services, and that network traffic is a primary analysis target. Unlike browsers, which enforce strict TLS and display certificate warnings, desktop applications implement their own HTTP clients and TLS handling, which is frequently less rigorous.

Wireshark captures all network traffic at the interface level, revealing connections to unexpected hosts, unencrypted communications, DNS queries that expose the application's backend infrastructure, and the protocol-level behavior of custom (non-HTTP) communication channels. For HTTP/HTTPS traffic, we use Burp Suite with the application configured to proxy through it, allowing us to intercept, inspect, and modify API requests and responses. For applications that use certificate pinning to prevent proxying, we use tools like Frida to hook the TLS validation functions and bypass the pinning at runtime.

What we commonly find through traffic analysis: API endpoints that accept more parameters than the UI sends (hidden functionality), sensitive data transmitted without encryption, API keys and tokens visible in request headers, overly verbose error responses from the backend, and client-side validation that can be bypassed by modifying the request in transit.


Our desktop application testing methodology

Our assessment methodology for desktop applications follows a structured approach that addresses each of the vulnerability classes described above.

Phase 1: Static analysis. We extract and decompile the application source code. For .NET, we use dnSpy and ILSpy. For Electron, we extract the ASAR archive. For native binaries, we use Ghidra or IDA Pro for disassembly. We search for hardcoded credentials, review cryptographic implementations, map the application's architecture, and identify trust boundaries.

Phase 2: Dynamic analysis. We run the application under debugging and monitoring tools. Procmon captures file and registry access. Wireshark and Burp Suite capture network traffic. x64dbg or dnSpy provide runtime debugging. We exercise every feature of the application while monitoring for security-relevant behavior: credential storage, sensitive data handling, update mechanisms, and inter-process communication.

Phase 3: Exploitation. For each identified vulnerability, we develop a proof-of-concept that demonstrates the impact. DLL hijacking proofs show code execution. Credential extraction proofs show the recovered secrets and demonstrate their use. Binary patching proofs show the bypassed controls. This phase confirms that theoretical vulnerabilities have real-world impact.

Phase 4: Reporting. Each finding includes the vulnerability description, affected component, severity rating, step-by-step reproduction instructions, evidence (screenshots, captured data, tool output), and specific remediation guidance. For desktop applications, remediation often requires architectural changes (moving logic server-side) rather than simple patches, so our guidance addresses both short-term mitigations and long-term architectural improvements.

Sources

  1. Electron, "Security, Native Capabilities, and Your Responsibility," Electron Documentation. https://www.electronjs.org/docs/latest/tutorial/security
  2. Electron, "Context Isolation," Electron Documentation. https://www.electronjs.org/docs/latest/tutorial/context-isolation
  3. M. Pagnotta, "Prototype Pollution to RCE in Electron Desktop Apps," Doyensec Blog, 2022. https://blog.doyensec.com/2022/09/27/electron-universal-prototype-pollution.html
  4. Electron, "ASAR Archives," Electron Documentation. https://www.electronjs.org/docs/latest/tutorial/asar-archives
  5. 0xd4d, "dnSpy - .NET Debugger and Assembly Editor," GitHub. https://github.com/dnSpy/dnSpy
  6. Microsoft, "Dynamic-Link Library Search Order," Microsoft Learn. https://learn.microsoft.com/en-us/windows/win32/dlls/dynamic-link-library-search-order
  7. Microsoft, "Process Monitor v4.0," Sysinternals. https://learn.microsoft.com/en-us/sysinternals/downloads/procmon
  8. MITRE, "T1574.002 - Hijack Execution Flow: DLL Side-Loading," MITRE ATT&CK. https://attack.mitre.org/techniques/T1574/002/
  9. Microsoft, "Introduction to Code Signing," Microsoft Learn. https://learn.microsoft.com/en-us/windows/win32/seccrypto/cryptography-tools
  10. GitHub Advisory Database, "CVE-2024-39698 - electron-updater signature validation bypass," GitHub Security Advisories. https://github.com/advisories/GHSA-9jxc-qjr9-vjxq

Need Your Desktop Application Tested?

Desktop apps run on machines you do not control. Our testers know how to decompile, debug, hijack, and break Electron, .NET, and native applications. Let us find the vulnerabilities before your users do.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.