Infrastructure Scenario Tests

We test Corax against real-world infrastructure failures across every vendor, platform, and scenario. Browse the results below.

Total Tests: 276 · Pass Rate: 100.0% · Passed: 276 · Failed: 0

DHCP Server Failover — Primary DHCP Down

PASS

The primary Windows DHCP server crashes and the DHCP failover partner does not transition to 'Partner Down' state due to a misconfigured maximum client lead time (MCLT). Clients with expiring leases cannot renew and start losing connectivity.

Server · Pattern: DHCP_EXHAUSTION · Severity: CRITICAL · Confidence: 92% · Remote Hands · 35 correlated

DHCP Scope Exhaustion — New Devices Can't Get IPs

PASS

The DHCP scope for the main user VLAN is 99% exhausted. New devices connecting to the network fail to obtain IP addresses, and users report 169.254.x.x APIPA addresses. The scope was sized for 200 devices, but 210 are now on the VLAN due to BYOD growth.

Server · Pattern: DHCP_EXHAUSTION · Severity: CRITICAL · Confidence: 92% · Auto-Heal · 25 correlated
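A minimal monitoring sketch of the two symptoms in this scenario — scope utilization near 100% and clients falling back to link-local addresses. The scope size, lease count, and addresses are hypothetical, not taken from the test itself:

```python
import ipaddress

# Link-local range Windows clients self-assign when DHCP is unreachable (APIPA).
APIPA_NET = ipaddress.ip_network("169.254.0.0/16")

def scope_utilization(leased: int, scope_size: int) -> float:
    """Percentage of the DHCP scope currently leased."""
    return 100.0 * leased / scope_size

def is_apipa(addr: str) -> bool:
    """True if a client has fallen back to an APIPA address."""
    return ipaddress.ip_address(addr) in APIPA_NET

# Hypothetical numbers matching the scenario: a 200-address scope, 198 leased.
util = scope_utilization(leased=198, scope_size=200)
```

Either signal alone is ambiguous; seeing both together is what distinguishes scope exhaustion from a single client's misconfiguration.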

Virtual Switch Misconfiguration — VMs Lose Network

PASS

An administrator changes the Hyper-V virtual switch binding from the production NIC team to a single disconnected NIC. All 25 VMs on that virtual switch lose network connectivity instantly. The admin is locked out of remote management.

Vendor · Pattern: HYPERV_EVENT · Severity: CRITICAL · Confidence: 95% · Remote Hands · 36 correlated

Hyper-V Host Cluster Node Failure

PASS

A Hyper-V host in a 3-node failover cluster experiences a blue screen of death. Failover clustering attempts to live-migrate VMs to surviving nodes but 4 highly available VMs fail to migrate due to anti-affinity rules and insufficient resources.

Vendor · Pattern: HYPERV_EVENT · Severity: CRITICAL · Confidence: 95% · Remote Hands · 41 correlated

vMotion Failure During DRS Rebalance

PASS

A DRS-triggered vMotion fails mid-migration due to a vMotion network MTU mismatch. The VM enters a stuck state — partially migrated, with the source host holding the memory pages and the destination unable to complete the switchover.

Vendor · Pattern: VMWARE_EVENT · Severity: CRITICAL · Confidence: 92% · Remote Hands · 27 correlated
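The root cause here is a configuration inconsistency, which is easy to detect before it stalls a migration. A hypothetical pre-flight check (host names and MTU values are invented for illustration) that flags host pairs whose vMotion network MTUs disagree:

```python
from itertools import combinations

def mtu_mismatches(host_mtus: dict[str, int]) -> list[tuple[str, str]]:
    """Pairs of hosts whose vMotion vmkernel MTUs differ.

    A migration between any such pair risks dropped jumbo frames
    mid-transfer, leaving the VM in a partially migrated state.
    """
    return [(a, b) for a, b in combinations(sorted(host_mtus), 2)
            if host_mtus[a] != host_mtus[b]]

# Hypothetical cluster: jumbo frames enabled on two hosts but not the third.
pairs = mtu_mismatches({"esx01": 9000, "esx02": 1500, "esx03": 9000})
```

In a real environment the MTU values would come from the hosts' vmkernel port configuration rather than a hard-coded dictionary.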

VMFS Datastore Inaccessible — All VMs on Store Frozen

PASS

A VMFS 6 datastore becomes inaccessible due to a storage path failure (an all-paths-down, or APD, condition). All 22 VMs on the datastore freeze with I/O errors. vSphere triggers APD timeout handling after 140 seconds.

Vendor · Pattern: VMWARE_EVENT · Severity: CRITICAL · Confidence: 90% · Remote Hands · 50 correlated

vCenter Server Unavailable — Management Plane Down

PASS

The vCenter Server Appliance (VCSA) becomes unresponsive due to database corruption in the embedded PostgreSQL instance. All management operations are impossible. VMs continue running, but no changes, migrations, or monitoring can occur.

Vendor · Pattern: VMWARE_EVENT · Severity: CRITICAL · Confidence: 92% · Remote Hands · 22 correlated

ESXi Host Purple Screen of Death

PASS

An ESXi 8.0 host experiences a Purple Screen of Death (PSOD) due to a faulty network driver. All 35 VMs on the host crash simultaneously. HA attempts to restart them on surviving hosts but resource contention causes slow recovery.

Vendor · Pattern: VMWARE_EVENT · Severity: CRITICAL · Confidence: 92% · Remote Hands · 37 correlated

Meraki License Expiry — Features Disabled Across Network

PASS

Meraki Enterprise licenses expire on a Saturday night. Dashboard access becomes read-only. Advanced features including Auto VPN, traffic shaping, and client analytics are disabled. APs continue broadcasting but without content filtering or group policies.

Vendor · Pattern: MERAKI_EVENT · Severity: CRITICAL · Confidence: 95% · Remote Hands · 23 correlated

Meraki VPN Hub Failure — Auto VPN Mesh Disrupted

PASS

The Meraki VPN concentrator hub at the data center fails, breaking all Auto VPN tunnels in the mesh. Eight branch sites lose connectivity to central resources, including file shares, ERP, and VoIP.

Vendor · Pattern: MERAKI_EVENT · Severity: CRITICAL · Confidence: 85% · Remote Hands · 56 correlated

Meraki AP Fleet Offline — 20 APs Lose Cloud Connectivity

PASS

An upstream switch reboot causes 20 Meraki MR46 access points to lose their uplink simultaneously. APs lose Meraki Dashboard cloud connectivity and fall into local management mode. SSIDs remain broadcasting but no new clients can authenticate via RADIUS.

Vendor · Pattern: MERAKI_EVENT · Severity: CRITICAL · Confidence: 85% · Remote Hands · 46 correlated

Meraki MX Appliance Failover — Warm Spare Takes Over

PASS

The primary Meraki MX450 appliance at a large campus fails due to a firmware crash. The warm spare MX450 assumes the primary role after a 45-second failover gap. All site-to-site VPN tunnels and client connections are disrupted during the transition.

Vendor · Pattern: MERAKI_EVENT · Severity: CRITICAL · Confidence: 95% · Remote Hands · 46 correlated

NIC Errors — Bad SFP Causing Packet Loss on Production Server

PASS

A production server's 10G SFP is failing, causing CRC errors and packet drops on the network interface. Applications experience intermittent connectivity and retransmissions.

Server · Pattern: NIC_ERRORS · Severity: CRITICAL · Confidence: 85% · Remote Hands · 5 correlated
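A failing SFP usually shows up first as a rising CRC error counter relative to received traffic. A hypothetical detection sketch — the counter values and the alerting threshold are invented, not vendor defaults:

```python
def crc_error_ratio(crc_errors: int, rx_packets: int) -> float:
    """Fraction of received frames that failed the CRC check."""
    return crc_errors / rx_packets if rx_packets else 0.0

# Assumed alerting threshold; a healthy link's ratio is effectively zero.
FAILING_SFP_THRESHOLD = 1e-5

# Hypothetical counter deltas sampled over one polling interval.
ratio = crc_error_ratio(crc_errors=4_200, rx_packets=9_000_000)
degraded = ratio > FAILING_SFP_THRESHOLD
```

In practice the counters would come from interface statistics (e.g. `ethtool -S` output) as deltas between polls, since absolute counters only grow.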

SNMP Authentication Failure Storm from Rogue Scanner

PASS

A vulnerability scanner on the network is using incorrect SNMP community strings, generating thousands of SNMP authentication failure traps from every managed device. NMS is overwhelmed.

Network · Pattern: SNMP_TRAP_ERROR · Severity: CRITICAL · Confidence: 85% · Remote Hands · 12 correlated

Ransomware Encryption Detected on File Server

PASS

A ransomware attack is actively encrypting files on the primary file server. Hundreds of files are being renamed with .encrypted extension. Multiple users report locked files. The attack originated from a phished employee workstation.

Security · Pattern: UNKNOWN · Severity: CRITICAL · Confidence: 95% · Remote Hands · 13 correlated
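Mass renames to a single suspicious extension within a short window are a common heuristic for in-progress encryption. A hypothetical detector sketch — the window length, threshold, extension, and event format are illustrative assumptions, not Corax's actual detection logic:

```python
def rename_burst(events: list[tuple[int, str]],
                 window_s: int = 60,
                 threshold: int = 50,
                 marker: str = ".encrypted") -> bool:
    """True if more than `threshold` files gained the marker extension
    within any `window_s`-second window. events: (unix_ts, filename)."""
    hits = sorted(ts for ts, name in events if name.endswith(marker))
    for i, start in enumerate(hits):
        # Count marker renames in the window beginning at this event.
        in_window = sum(1 for ts in hits[i:] if ts - start <= window_s)
        if in_window > threshold:
            return True
    return False

# Hypothetical event stream: 120 renames spread over about 40 seconds.
events = [(1000 + t // 3, f"report_{t}.xlsx.encrypted") for t in range(120)]
alarm = rename_burst(events)
```

A rate-based trigger like this is what lets a responder isolate the originating workstation before the share is fully encrypted.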

DNS Server Failure — Both Internal DNS Servers Down

PASS

Both internal DNS servers fail simultaneously (primary: disk full, secondary: expired DNSSEC key). All internal name resolution fails. Every service that depends on DNS stops working.

Cascade · Pattern: DNS_FAILURE · Severity: CRITICAL · Confidence: 85% · Auto-Heal · 75 correlated

Partial Power Failure — UPS Battery Exhaustion in Server Room

PASS

A utility power outage exceeds the UPS battery runtime. The UPS exhausts its battery and shuts down, and half the rack loses power: servers, switches, and storage go offline simultaneously. The backup generator fails to start due to a dead battery.

Cascade · Pattern: CONNECTION_REFUSED · Severity: CRITICAL · Confidence: 95% · Remote Hands · 75 correlated

Kubernetes Pod CrashLoopBackOff — Missing ConfigMap

PASS

A Kubernetes deployment enters CrashLoopBackOff because a ConfigMap was deleted during a cleanup script. The application cannot start without its configuration, and the backoff timer keeps increasing.

Server · Pattern: PROCESS_CRASH_LOOP · Severity: CRITICAL · Confidence: 92% · Remote Hands · 6 correlated
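The "backoff timer keeps increasing" behavior follows the kubelet's restart backoff, which is roughly exponential from 10 seconds up to a 5-minute cap. A small sketch of that schedule (an approximation of kubelet behavior, not its actual implementation):

```python
def crashloop_delay(restart_count: int,
                    base_s: int = 10,
                    cap_s: int = 300) -> int:
    """Approximate kubelet restart backoff: exponential from 10s,
    doubling per restart, capped at 5 minutes."""
    return min(base_s * 2 ** restart_count, cap_s)

# Delay before each of the first six restarts.
delays = [crashloop_delay(n) for n in range(6)]
```

This is why a quickly-fixed root cause (recreating the deleted ConfigMap) can still take minutes to take effect: the pod sits out the remainder of its current backoff before the next restart attempt.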

SAN Storage Latency Spike — Noisy Neighbor on Shared Storage

PASS

A dev team runs a massive data import job on a shared SAN, consuming all available IOPS. Production VMs on the same storage pool experience 10x latency increase, causing application timeouts.

Server · Pattern: STORAGE_IO_LATENCY · Severity: CRITICAL · Confidence: 92% · Remote Hands · 28 correlated
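The standard remedy for a noisy neighbor is fair-share I/O throttling. A sketch of max-min fair allocation, the idea behind storage I/O control schemes — the pool capacity and per-tenant demands below are hypothetical:

```python
def max_min_fair(capacity: int, demands: dict[str, int]) -> dict[str, int]:
    """Max-min fair allocation: satisfy small demands fully, then split
    the remaining capacity evenly among the larger claimants."""
    alloc: dict[str, int] = {}
    remaining = capacity
    for name in sorted(demands, key=demands.get):
        # Each unserved tenant gets an equal share of what is left.
        share = remaining // (len(demands) - len(alloc))
        alloc[name] = min(demands[name], share)
        remaining -= alloc[name]
    return alloc

# Hypothetical pool: 10k IOPS shared by production VMs and the dev import.
alloc = max_min_fair(10_000, {"prod_vms": 4_000, "dev_import": 50_000})
```

Under this policy production keeps its full 4,000 IOPS while the import job is throttled to the remainder, instead of the import consuming everything.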

RAID 5 Degradation — Two Drives Showing SMART Warnings

PASS

A RAID 5 array on a file server has one failed drive and a second drive showing SMART predictive failure warnings. The array is rebuilding, but if the second drive fails before rebuild completes, all data is lost.

Server · Pattern: RAID_DEGRADATION · Severity: CRITICAL · Confidence: 95% · Remote Hands · 9 correlated
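The urgency of this scenario comes down to the length of the exposure window: how long the rebuild runs while a second failure would be fatal. A back-of-envelope sketch (drive size and rebuild speed are illustrative assumptions; real rebuilds are slower under production load):

```python
def rebuild_hours(drive_tb: float, rebuild_mb_s: float) -> float:
    """Best-case rebuild time for one member drive, assuming a
    sequential rebuild at a sustained rate."""
    return drive_tb * 1_000_000 / rebuild_mb_s / 3600

# Hypothetical 8 TB member rebuilding at 100 MB/s: roughly a day of
# exposure during which a second-drive failure loses the array.
window_h = rebuild_hours(8, 100)
```

This window is why a second drive already throwing SMART predictive-failure warnings makes backup verification the immediate priority, ahead of the rebuild itself.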

Every scenario is tested against Corax's Neural Engine in a production environment with AI-powered root cause analysis.

Tests run continuously as new infrastructure patterns are added.