
ESXicape: VM Escape Attacks, VSOCKpuppet, and Why Hypervisor Security Is Under Siege

Lorikeet Security Team · March 2, 2026 · 13 min read
Hypervisor Exploit Analysis

Three chained zero-days. One VM escape. An invisible backdoor on VSOCK port 10000.

Virtualization is the foundation of modern infrastructure. Every major cloud provider, every enterprise data center, and most mid-market companies depend on hypervisors to isolate workloads. The fundamental promise is simple: what happens inside a virtual machine stays inside that virtual machine. The guest cannot reach the host. The host is the trusted boundary.

That promise was shattered in March 2025, when VMware disclosed VMSA-2025-0004, a trio of zero-day vulnerabilities that, when chained together, allow an attacker with local administrator access inside a guest VM to escape the virtual machine entirely, gain kernel-level code execution on the ESXi hypervisor, and install a persistent backdoor that is invisible to every firewall and IDS on your network.

The exploit toolkit, first observed in the wild by Huntress researchers and later analyzed in depth by the Microsoft Threat Intelligence Center, had been in active use by a sophisticated threat actor for at least a year before VMware issued patches. The tooling was polished, well-tested, and supported 155 different ESXi builds spanning versions 5.1 through 8.0.

TL;DR: Three chained VMware zero-days (CVE-2025-22226, CVE-2025-22224, CVE-2025-22225) enable a full VM-to-hypervisor escape. The attack ends with VSOCKpuppet, a backdoor that communicates over VSOCK, a channel invisible to network security tooling. The exploit was built at least a year before disclosure. Patch ESXi immediately. If you cannot patch, isolate your hypervisors and monitor for the IOCs listed below.


Three CVEs, One Exploit Chain

VMware's advisory VMSA-2025-0004, published March 4, 2025, disclosed three vulnerabilities. Individually, each is serious. Chained together, they form a complete VM escape that ends with hypervisor kernel compromise. Understanding each link in the chain is critical to understanding why this attack is so significant.

| CVE | CVSS | Type | Component | Role in Chain |
|---|---|---|---|---|
| CVE-2025-22226 | 7.1 (High) | Out-of-bounds read | HGFS (Host-Guest File Sharing) | Leaks VMX process memory to defeat ASLR |
| CVE-2025-22224 | 9.3 (Critical) | TOCTOU race condition | VMCI (Virtual Machine Communication Interface) | Out-of-bounds write into VMX process memory |
| CVE-2025-22225 | 8.2 (High) | Arbitrary write | ESXi kernel | Escapes VMX sandbox, writes to hypervisor kernel |

CVE-2025-22226: The information leak

The HGFS (Host-Guest File Sharing) subsystem handles shared folder functionality between a VM and its host. CVE-2025-22226 is an out-of-bounds read vulnerability in the HGFS driver that runs within the VMX (Virtual Machine Executable) process. The VMX process is the user-space component on the ESXi host that manages each individual VM. It handles device emulation, I/O, and the communication bridge between the guest and the hypervisor kernel.

By sending a specially crafted HGFS request from within the guest, an attacker can cause the VMX process to read beyond the bounds of a buffer and return the extra data to the guest. This leaked memory contains pointers from the VMX process address space, which the attacker uses to defeat Address Space Layout Randomization (ASLR). Without this leak, the subsequent exploitation stages would be unreliable because the attacker would not know where code and data structures are located in VMX memory.
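The arithmetic behind an ASLR defeat is simple once a single pointer leaks: subtracting a known, build-specific offset from the leaked pointer recovers the randomized image base, and every other address follows. A minimal sketch (all offset values here are invented for illustration, not real VMX values):

```python
# One leaked pointer to a known symbol is enough to undo ASLR: subtract
# the symbol's build-specific offset to recover the image base, then add
# other known offsets to locate gadgets and data structures.
# All offsets below are invented for illustration.

KNOWN_SYMBOL_OFFSET = 0x1A2B30  # hypothetical offset of the leaked symbol
GADGET_OFFSET = 0x0F00D0        # hypothetical offset of a useful gadget

def derive_addresses(leaked_ptr):
    base = leaked_ptr - KNOWN_SYMBOL_OFFSET  # undo the random slide
    return base, base + GADGET_OFFSET

# Example: a leak observed at some randomized base plus the symbol offset
base, gadget = derive_addresses(0x7F3400000000 + KNOWN_SYMBOL_OFFSET)
print(hex(base), hex(gadget))
```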

The exploit toolkit includes support for 155 distinct ESXi builds across versions 5.1 through 8.0, each with hardcoded offsets tailored to that specific build. This level of build coverage indicates extensive reverse engineering and testing by the developers.

CVE-2025-22224: The memory corruption

The VMCI (Virtual Machine Communication Interface) is a high-speed communication channel between guest VMs and the host. CVE-2025-22224 is a Time-of-Check to Time-of-Use (TOCTOU) race condition in how VMCI handles shared memory regions. During the processing of a VMCI datagram, the VMX process checks the size of a memory region and then, in a separate step, copies data based on that size. By racing the check against the use, the attacker can cause an out-of-bounds write into the VMX process heap.
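The check/use pattern can be illustrated with a toy model. This is pure Python with events standing in for the race window, not VMware's code; in the real vulnerability the "descriptor" lives in shared memory and the copy is a raw memory operation:

```python
import threading

# Toy model of a TOCTOU race: the "host" validates a guest-controlled size
# field, then re-reads it at use time; the "guest" flips the field in the
# window between the two reads. Events make the interleaving deterministic
# for this demo, where real exploitation wins a genuine timing race.

BUF_LEN = 16
check_done = threading.Event()
flip_done = threading.Event()

class SharedDescriptor:
    size = 8  # guest-controlled field in shared memory

def vulnerable_copy(desc):
    if desc.size <= BUF_LEN:      # time-of-check: 8 <= 16, passes
        check_done.set()
        flip_done.wait()          # widened race window for the demo
        return desc.size          # time-of-use: re-reads the shared field
    return None

def racing_guest(desc):
    check_done.wait()
    desc.size = 4096              # flip the field between check and use
    flip_done.set()

desc = SharedDescriptor()
t = threading.Thread(target=racing_guest, args=(desc,))
t.start()
copied = vulnerable_copy(desc)
t.join()
print(copied)  # 4096 -- far past the 16-byte bound the check enforced
```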

Armed with the leaked pointers from CVE-2025-22226, the attacker uses this write primitive to place stage 1 shellcode at a known location in VMX memory and corrupt function pointers to redirect execution. When the VMX process next invokes the corrupted function pointer, it jumps to the attacker's shellcode, giving them arbitrary code execution within the VMX process context.

CVE-2025-22225: The sandbox escape

Code execution inside the VMX process is significant but not the end goal. The VMX process runs in a sandboxed environment on ESXi, with limited access to the kernel. CVE-2025-22225 is an arbitrary write vulnerability in the ESXi hypervisor kernel that can be triggered from the VMX process. The stage 1 shellcode exploits this vulnerability to write stage 2 shellcode directly into the ESXi kernel, escaping the VMX sandbox entirely.

At this point, the attacker has kernel-level code execution on the hypervisor host. They can access every VM running on that host, read and modify host memory, and persist across VM reboots. The attack chain is complete.


How the Exploit Works Step by Step

The full exploitation sequence from initial guest compromise to hypervisor backdoor involves five distinct phases. The attack presumes the attacker already has local administrator access inside a Windows guest VM, which in the observed campaigns was achieved through compromised SonicWall VPN credentials followed by lateral movement through the internal network.

Phase 1: Disable VMCI drivers → Phase 2: Load malicious driver → Phase 3: HGFS memory leak → Phase 4: VMCI write primitive → Phase 5: Kernel escape + backdoor

Phase 1: Disabling VMCI drivers

The exploit begins by disabling the standard VMware VMCI drivers installed in the guest Windows VM. The attacker uses devcon.exe (the Windows Device Console utility) to disable the vmci and vsock drivers. This is necessary because the exploit needs direct hardware I/O access to the VMCI device, which the standard drivers would interfere with. Disabling the drivers frees the device for the attacker's own driver to claim.

Phase 2: Loading the unsigned malicious driver

With the standard VMCI drivers disabled, the attacker loads MyDriver.sys, a custom unsigned kernel driver that provides direct access to the VMCI hardware. Loading an unsigned driver on modern Windows requires bypassing Driver Signature Enforcement (DSE). The tooling accomplishes this through KDU (Kernel Driver Utility), which exploits a legitimate but vulnerable signed driver already present on the system to map the unsigned driver into kernel memory. This is a well-known technique: the attacker does not need to disable Secure Boot or modify BCD settings. They leverage an existing signed driver vulnerability to load their own code.

Phase 3: CVE-2025-22226 - Leaking VMX memory

With direct VMCI hardware access established, the exploit triggers CVE-2025-22226 through the HGFS subsystem. The malicious driver sends a crafted HGFS request that causes the VMX process on the host to perform an out-of-bounds read and return the leaked memory to the guest. The exploit parses this leaked data to extract VMX process pointers, effectively defeating ASLR.

The exploit then looks up the leaked pointers against its database of 155 supported ESXi builds (spanning ESXi 5.1 through 8.0) to identify the exact host version and calculate the correct offsets for the next stage. If the host version is not in the database, the exploit exits cleanly without crashing anything.
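The version-matching step amounts to a table lookup keyed on the host fingerprint, aborting cleanly when the build is unrecognized. A sketch of that structure (build identifiers and offset values are invented, not the toolkit's real data):

```python
# Hypothetical per-build offset table, mirroring the toolkit's approach of
# hardcoding offsets for each of its supported ESXi builds. The build IDs
# and offset values are invented for illustration.

BUILD_OFFSETS = {
    "esxi-7.0u3-19193900": {"symbol": 0x1A2B30, "gadget": 0x0F00D0},
    "esxi-8.0u2-22380479": {"symbol": 0x1B3C40, "gadget": 0x101230},
}

def offsets_for_build(fingerprint):
    """Return the offset set for a recognized build, or None so the caller
    can bail out cleanly rather than risk crashing an unknown host."""
    return BUILD_OFFSETS.get(fingerprint)

print(offsets_for_build("esxi-6.5-unknown"))  # None: exit without crashing
```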

Phase 4: CVE-2025-22224 - Writing shellcode into VMX

Using the pointers obtained in Phase 3, the exploit triggers CVE-2025-22224 through the VMCI interface. The TOCTOU race condition allows the attacker to write arbitrary data into the VMX process heap. The exploit writes stage 1 shellcode into a known memory region and corrupts one or more function pointers to redirect execution to the shellcode.

When the VMX process next calls the corrupted function, it executes the attacker's shellcode. At this point, the attacker has arbitrary code execution within the VMX user-space process on the ESXi host, but still inside the VMX sandbox.
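The effect of the corrupted pointer can be modeled with a dispatch table. This is a pure-Python stand-in; the real exploit overwrites a raw function pointer in the VMX heap rather than a dictionary entry:

```python
# Toy model of Phase 4's effect: a write primitive into the process that
# owns a dispatch table lets the attacker swap a handler for their payload.
# The next legitimate dispatch then runs attacker-controlled code.

def legitimate_handler(msg):
    return f"handled {msg}"

def stage1_shellcode(msg):
    return "attacker code executing in VMX context"

dispatch = {"vmci_datagram": legitimate_handler}

# The out-of-bounds write, modeled as a table-entry swap:
dispatch["vmci_datagram"] = stage1_shellcode

result = dispatch["vmci_datagram"]("ping")  # a normal-looking call...
print(result)
```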

Phase 5: CVE-2025-22225 - Escaping to the kernel and installing VSOCKpuppet

The stage 1 shellcode running in the VMX process exploits CVE-2025-22225 to perform an arbitrary write into the ESXi kernel. This write is used to inject and execute stage 2 shellcode in kernel context. The stage 2 shellcode performs the following actions on the ESXi host:

  1. Writes the VSOCKpuppet binary to /var/run/a on the ESXi filesystem
  2. Modifies /etc/inetd.conf to register VSOCKpuppet as a service on VSOCK port 10000, ensuring it starts on boot and persists across service restarts
  3. Restarts the inetd service to activate the backdoor immediately
  4. Cleans up forensic artifacts by deleting the stage 2 shellcode from memory and removing temporary files
Attack path: VPN compromise (SonicWall credentials) → lateral movement to the guest VM → VM escape (three chained CVEs) → VSOCKpuppet persistent backdoor

VSOCKpuppet: The Invisible Backdoor

The final payload deployed by the exploit chain is VSOCKpuppet, a lightweight Linux ELF binary written specifically for ESXi. What makes VSOCKpuppet remarkable is not its complexity but its communication channel.

What is VSOCK?

VSOCK (Virtual Sockets) is a communication mechanism designed for host-guest communication in virtualized environments. Unlike TCP/IP networking, VSOCK operates through the hypervisor's memory-mapped I/O interface. It does not use IP addresses, does not traverse network interfaces, and does not appear in any network traffic capture. VSOCK connections are identified by a Context ID (CID) and a port number, not by IP:port tuples.
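The addressing model is visible in a minimal client sketch, assuming a Linux Python build with AF_VSOCK support (available since Python 3.7). CID 2 is the well-known host context ID, and port 10000 is the VSOCKpuppet port from this analysis; note that no IP address appears anywhere:

```python
import socket

def try_vsock_connect(cid, port, timeout=1.0):
    """Return an open VSOCK stream socket, or None if the platform lacks
    VSOCK support or nothing is listening on (cid, port)."""
    if not hasattr(socket, "AF_VSOCK"):
        return None  # this Python/OS has no VSOCK address family
    try:
        s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    except OSError:
        return None  # kernel lacks a vsock transport
    s.settimeout(timeout)
    try:
        s.connect((cid, port))  # (CID, port), not (IP, port)
        return s
    except OSError:
        s.close()
        return None

conn = try_vsock_connect(2, 10000)
print("reachable" if conn else "no VSOCK listener on (2, 10000)")
```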

From a security monitoring perspective, VSOCK traffic is completely invisible to:

  - Firewalls and network ACLs
  - Network IDS and IPS sensors
  - Packet captures, taps, and SPAN ports
  - NetFlow and other flow-based monitoring

VSOCKpuppet capabilities

VSOCKpuppet listens on VSOCK port 10000 and supports three command types, giving the attacker remote control of the compromised host.

The attacker communicates with VSOCKpuppet from any compromised guest VM running on the same host. Since the backdoor is registered with inetd, it persists across service restarts and survives normal ESXi maintenance operations. It does not persist across a full ESXi reinstall, but it does survive patches that do not modify inetd configuration.

Why VSOCK matters for defenders: If your detection strategy relies on network monitoring (and most do), VSOCKpuppet will not trigger a single alert. You need host-level integrity monitoring on your ESXi hosts to detect modifications to inetd.conf, unexpected binaries in /var/run/, and unusual VSOCK listener activity. Most organizations do not have this level of visibility into their hypervisor layer.
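A host-side integrity check along those lines can be sketched in a few lines: flag inetd.conf entries that reference VSOCK or point at binaries in unusual locations. The sample backdoor line below is hypothetical; the real entry's exact format was not published:

```python
# Integrity-check sketch: flag inetd.conf entries that mention vsock or
# launch binaries from suspicious directories such as /var/run/.

SUSPICIOUS_DIRS = ("/var/run/", "/tmp/")

def suspicious_inetd_lines(conf_text):
    hits = []
    for line in conf_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "vsock" in line.lower() or any(d in line for d in SUSPICIOUS_DIRS):
            hits.append(line)
    return hits

sample = """# legitimate service
authd stream tcp nowait root /sbin/authd authd
# hypothetical backdoor registration on VSOCK port 10000
a stream vsock nowait root /var/run/a a
"""
print(suspicious_inetd_lines(sample))  # flags only the /var/run/a entry
```

In production this would read /etc/inetd.conf from each ESXi host and compare against a known-good baseline rather than pattern-matching alone.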


VMFUNC and the Future of VM Escape

While the ESXicape chain exploits software vulnerabilities in VMware's HGFS, VMCI, and kernel components, a parallel and arguably more concerning trend is emerging in hardware-level VM escape techniques. Intel VMFUNC represents the frontier of this threat.

What is VMFUNC?

VMFUNC (VM Functions) is an Intel VT-x feature introduced with Haswell processors. It allows a guest VM to invoke specific hypervisor-defined functions without triggering a VM exit. A VM exit is the mechanism by which the hypervisor regains control when a guest performs a privileged operation. VM exits are expensive (hundreds to thousands of CPU cycles), so VMFUNC was designed to optimize common operations by letting the guest handle them directly.

The most commonly implemented VM function is EPTP switching (Extended Page Table Pointer switching), which allows a guest to switch between different memory mapping configurations without hypervisor intervention. This is used legitimately for features like memory isolation and sandboxing within the guest.

The attack surface

Security researchers have demonstrated that VMFUNC can be abused in several ways.

Why this matters for the ESXi threat landscape

The ESXicape exploit chain targets software bugs that can be patched. VMFUNC attacks target hardware functionality that is working as designed. Mitigating VMFUNC-based attacks requires either disabling the feature (at a performance cost) or implementing additional hypervisor-level checks that add overhead to every memory access.

The broader trend is clear: as software-level hypervisor vulnerabilities become harder to find (VMware has invested heavily in code auditing and fuzzing), attackers are moving toward hardware-level attack surfaces. The combination of software exploits like ESXicape and emerging hardware techniques like VMFUNC represents a fundamental challenge for organizations that rely on virtualization as a security boundary.


Who Built This and When

Attribution for the ESXicape tooling is based on forensic artifacts recovered from the exploit binaries and supporting infrastructure. While attribution is always probabilistic, the evidence points to a sophisticated, well-resourced threat actor with ties to China.

Technical attribution indicators

The earliest compilation timestamps recovered from the exploit binaries date to February 2024, and the breadth of hardcoded ESXi build offsets points to a long, well-resourced development effort.

Timeline

| Date | Event |
|---|---|
| Feb 2024 | Earliest PDB timestamp in exploit binaries; development underway at least since this date |
| 2024 (various) | Active exploitation campaigns observed in the wild, primarily through compromised VPN appliances |
| Late 2024 | Huntress researchers discover exploit artifacts during incident response engagements |
| Mar 4, 2025 | VMware publishes VMSA-2025-0004, disclosing all three CVEs with patches |
| Mar 2025+ | MSTIC and other security firms publish detailed technical analyses of the exploit chain |

The year-long gap between the earliest known development date and the public disclosure is significant. It means the threat actor had exclusive access to a full VM escape capability for at least 12 months. During that window, any organization running ESXi was a potential target with no available patch and no public awareness of the threat.


Detection and Remediation

Immediate patching

Apply the patches from VMSA-2025-0004 to all affected VMware products. The advisory covers ESXi, VMware Workstation, VMware Fusion, VMware Cloud Foundation, and VMware Telco Cloud Platform. This is the definitive fix for the three software vulnerabilities.

Indicators of Compromise (IOCs)

If you suspect your environment may have been targeted before patches were available, look for these indicators:

  - The VSOCKpuppet binary at /var/run/a on ESXi hosts
  - Unauthorized modifications to /etc/inetd.conf, particularly any service registered on VSOCK port 10000
  - Unexpected VSOCK listeners on ESXi hosts
  - MyDriver.sys or other unsigned kernel drivers loaded in Windows guests
  - devcon.exe executions that disable the vmci or vsock drivers
  - Artifacts of KDU (Kernel Driver Utility) or similar DSE bypass tooling

Sigma rule for guest-side detection

Detection logic: Monitor for devcon.exe execution with arguments targeting vmci or vsock device classes. Monitor for unsigned driver loads via KDU or similar DSE bypass tools. Alert on any modification to the VMCI or VSOCK driver state in Windows guests. On the ESXi side, monitor inetd.conf for unauthorized modifications and alert on new VSOCK listeners.
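A sketch of how the guest-side portion of that logic could be expressed in Sigma's YAML format; the field selections are illustrative, written against standard Windows process-creation telemetry, and should be tuned and tested before deployment:

```yaml
title: devcon.exe Disabling VMware VMCI/VSOCK Drivers
status: experimental
description: >
  Detects devcon.exe being used to disable the vmci or vsock drivers in a
  Windows guest, a precursor step in the ESXicape exploit chain.
logsource:
  category: process_creation
  product: windows
detection:
  selection_img:
    Image|endswith: '\devcon.exe'
  selection_action:
    CommandLine|contains: 'disable'
  selection_target:
    CommandLine|contains:
      - 'vmci'
      - 'vsock'
  condition: all of selection_*
level: high
```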

YARA rule indicators

Security researchers have published YARA rules targeting the VSOCKpuppet binary, the stage 1 and stage 2 shellcode payloads, and the MyDriver.sys kernel driver.

Network and host monitoring

Because the backdoor channel never touches the network, monitoring must live on the host: watch /etc/inetd.conf for modification, alert on new executables in /var/run/, and audit ESXi hosts for unexpected VSOCK listeners. In guests, monitor for the driver-tampering behavior described above.


What This Means for Your Infrastructure

ESXicape forces a fundamental reassessment of how organizations think about virtualization security.

The hypervisor is now an attack target, not just a boundary

For years, security teams treated the hypervisor as a trusted component that did not require the same level of monitoring as guest workloads. ESXicape demonstrates that a compromised guest VM is a direct threat to the hypervisor and, by extension, every other VM on that host. Organizations need to extend their security monitoring to include the hypervisor layer, not just the workloads running on top of it.

Network monitoring has blind spots

VSOCKpuppet exploits a communication channel that is fundamentally invisible to network-based security tools. This is not a failure of those tools; it is a limitation of the detection approach. Organizations that rely exclusively on network monitoring for their virtualized infrastructure have a coverage gap that this class of attack exploits directly. Host-based integrity monitoring and hypervisor-level detection capabilities are necessary complements.

Patch cycles matter more at the infrastructure layer

The 12-month window between exploit development and patch availability underscores the importance of rapid patching at the infrastructure layer. When VMware releases a critical security advisory, the question is not whether to patch but how quickly you can do it. Organizations that took weeks or months to apply VMSA-2025-0004 were at risk the entire time.

VPN compromise is still the front door

The observed attack campaigns began with compromised SonicWall VPN credentials. The most sophisticated VM escape in the world still needs initial access, and that access came through the same vectors we see in every penetration test: stolen credentials, unpatched edge devices, and weak network segmentation. Hardening your perimeter remains the most effective way to prevent even the most advanced attacks from reaching your hypervisors.

Regular security assessments must include virtualization

If your penetration testing scope stops at the guest OS level, you are missing the infrastructure that underpins everything. Cloud and infrastructure security assessments should evaluate hypervisor patch levels, management interface exposure, network segmentation between management and workload traffic, and the monitoring capabilities you have at the hypervisor layer. A comprehensive penetration test evaluates the full stack, not just the application or the guest OS.

Is your virtualization infrastructure secure?

Lorikeet Security's infrastructure penetration testing and attack surface management evaluate your hypervisor layer, not just the workloads running on it. Find out what an attacker would see before they exploit it.
