We test Corax against real-world infrastructure failures across every vendor, platform, and scenario. Browse the results below.
The primary network monitoring platform enters a crash loop after a database corruption event during a power fluctuation. All alerting stops, creating a blind spot where infrastructure failures go undetected. The secondary monitoring server was decommissioned last month.
After a firewall firmware upgrade, the MTU on the WAN interface drops from 1500 to 1400 without updating the MSS clamp. Full-size 1500-byte frames from the server VLAN hit the firewall and are silently dropped, causing intermittent failures for large file transfers and database replication.
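The arithmetic behind this failure can be sketched in a few lines. This is a minimal illustration assuming IPv4 and TCP headers with no options (20 bytes each); the function name is ours, not from any vendor tooling.

```python
# Sketch: why a stale MSS clamp drops full-size packets after an MTU change.
# Assumes IPv4 (20-byte header) and TCP (20-byte header) with no options.
IP_HDR, TCP_HDR = 20, 20

def mss_for_mtu(mtu: int) -> int:
    """Largest TCP payload that fits in a single packet at this MTU."""
    return mtu - IP_HDR - TCP_HDR

stale_clamp = mss_for_mtu(1500)   # 1460 -- still applied by the firewall
new_path_mtu = 1400               # WAN MTU after the firmware upgrade

# A segment built against the stale clamp yields a packet larger than the
# new path MTU, so it is silently dropped.
packet_size = stale_clamp + IP_HDR + TCP_HDR
print(packet_size, ">", new_path_mtu, "->", packet_size > new_path_mtu)

# The clamp should have been lowered along with the MTU:
print("correct clamp:", mss_for_mtu(new_path_mtu))  # 1360
```

Small transfers fit under the new MTU by accident, which is why the symptom is intermittent rather than a hard outage.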
A junior admin accidentally changes a trunk port to access mode on a distribution switch, pruning all VLANs except the native VLAN. The spanning tree topology reconverges, causing a 30-second outage across multiple VLANs and triggering TCN flooding.
A misconfigured route-map on the border router leaks internal BGP prefixes to the upstream ISP. The ISP begins routing external traffic into a blackhole. Customer-facing services become unreachable from the internet while internal connectivity remains functional.
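The core check the misconfigured route-map failed to make can be sketched with the standard library. This assumes the internal space is RFC 1918; the prefixes and the `leaked` helper are illustrative, not from any router configuration.

```python
# Sketch: screening an eBGP advertisement set for leaked internal prefixes.
# Assumes internal space is RFC 1918 -- substitute your own aggregates.
import ipaddress

INTERNAL = [ipaddress.ip_network(n) for n in
            ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def leaked(advertised):
    """Return advertised prefixes that fall inside internal address space."""
    nets = [ipaddress.ip_network(p) for p in advertised]
    return [str(n) for n in nets
            if any(n.subnet_of(i) for i in INTERNAL)]

# Hypothetical advertisement set: one public aggregate plus a leaked prefix.
print(leaked(["203.0.113.0/24", "10.20.0.0/16"]))
```

In practice this filtering belongs in an outbound prefix-list or route-map applied to the eBGP session, so internal routes never reach the ISP in the first place.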
The primary ISP circuit is experiencing intermittent packet loss (5-15%) due to a degraded fiber segment. Not a full outage — the circuit stays up but quality degrades. VoIP calls have choppy audio, video conferences freeze, and cloud app performance is poor. ISP ticket opened but ETA unknown.
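A back-of-envelope calculation shows why 5-15% loss cripples TCP even though the circuit stays up. This uses the well-known Mathis et al. approximation (throughput ≤ MSS / (RTT · √p)); the 30 ms RTT is an assumed figure for illustration.

```python
# Back-of-envelope: TCP throughput collapse under loss, via the Mathis
# approximation: throughput <= MSS / (RTT * sqrt(loss)).
from math import sqrt

def tcp_throughput_bps(mss_bytes: float, rtt_s: float, loss: float) -> float:
    """Upper bound on a single TCP flow's throughput in bits per second."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss))

mss, rtt = 1460, 0.030  # assumed 30 ms RTT
for loss in (0.0001, 0.05, 0.15):
    mbps = tcp_throughput_bps(mss, rtt, loss) / 1e6
    print(f"loss {loss:>6.2%}: ~{mbps:.1f} Mbit/s per flow")
```

At 0.01% loss a flow can sustain tens of Mbit/s; at 5-15% loss it drops to roughly 1-2 Mbit/s, which matches the observed symptoms: VoIP and video degrade badly while the link still reports "up".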
During a certificate renewal, the wrong certificate is applied to the load balancer's SSL offload profile. The certificate is for a different domain (staging.acmecorp.com instead of www.acmecorp.com). Browsers show certificate name mismatch warnings. HPKP pins do not match.
A WAF rule update on the F5 ASM introduces a false positive that matches a common HTTP header sent by the company's mobile app. All mobile API requests are blocked with 403 Forbidden. 60% of customer traffic comes from the mobile app.
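The failure mode can be illustrated abstractly: an over-broad signature matches a benign header the mobile app always sends. The rule pattern and header values below are hypothetical, not the actual ASM signature.

```python
# Sketch of a WAF false positive: a naive SQL-injection signature that
# matches a benign header. Rule and headers are hypothetical examples.
import re

BAD_RULE = re.compile(r"select.+from", re.I)  # over-broad SQLi pattern

headers = {
    "User-Agent": "AcmeMobile/4.2",
    "X-Device-Info": "model=Pixel; select=From Contacts",  # benign value
}

blocked = any(BAD_RULE.search(v) for v in headers.values())
print("403 Forbidden" if blocked else "allowed")  # -> 403 Forbidden
```

Because the header is sent on every mobile request, the single bad signature translates directly into a 100% block rate for that client population.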
An F5 BIG-IP load balancer's health check monitor becomes too aggressive after a config change (interval: 1s, timeout: 2s). A brief 3-second network blip causes all pool members to be marked DOWN simultaneously. The LB returns 503 to all clients.
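A simplified timing model shows why these settings turn a brief blip into a full-pool outage. The simulation below is our own sketch (probes on a fixed interval, DOWN when the gap since the last successful probe exceeds the timeout), not F5's exact monitor state machine.

```python
# Sketch: aggressive monitor settings vs. a 3-second network blip.
# Simplified model: probe every `interval` seconds; a probe sent during the
# blip gets no answer; DOWN once the gap since the last success > `timeout`.
def marked_down(interval: int, timeout: int, blip: range) -> bool:
    last_ok = 0
    for t in range(0, 60, interval):
        if t not in blip:              # probe answered
            last_ok = t
        elif t - last_ok > timeout:    # no success within the timeout window
            return True                # monitor declares the member DOWN
    return False

blip = range(10, 13)  # 3-second blip starting at t=10
print(marked_down(interval=1, timeout=2, blip=blip))   # aggressive config
print(marked_down(interval=5, timeout=16, blip=blip))  # conservative config
```

With interval 1 s / timeout 2 s every pool member crosses the threshold during the blip, so the pool empties and the LB has nothing to send traffic to. A conservative profile (e.g. timeout of several intervals) rides out the same blip.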
The WLC detects a rogue access point broadcasting a corporate SSID ('Corp-WiFi') in the parking lot. The rogue AP is performing an evil twin attack, capturing credentials from employees who auto-connect. WIDS alerts trigger but containment is not automatic.
The Cisco 9800 Wireless LAN Controller crashes, orphaning 60 managed access points. APs enter standalone mode with limited functionality. New client authentications fail because RADIUS proxy is unavailable. Existing clients remain associated but cannot roam.
The master switch in a 3-member stack reboots unexpectedly due to a firmware bug. A new master election occurs, causing a 90-second control plane outage. During the election, no configuration changes can be made, and STP reconverges, causing brief traffic interruption.
A Cisco 9300 4-member switch stack experiences a stack cable failure, splitting the stack into two independent 2-member stacks. Both halves claim the same management IP. MAC address tables conflict. Half the access ports become unreachable from management.
A sudden work-from-home mandate floods the SSL VPN concentrator with 500+ simultaneous connections. The device supports 250 concurrent sessions. Users see 'maximum sessions reached' errors. Split tunneling is not configured, so all traffic routes through the VPN, saturating the circuit.
The hub firewall's IKE daemon crashes, tearing down all 6 site-to-site IPSec VPN tunnels simultaneously. All branch offices lose connectivity to the data center. File shares, ERP, email, and VoIP between sites all fail.
The HA synchronization between a FortiGate firewall cluster pair fails due to a mismatched firmware version after one unit was updated. Session tables are out of sync. If the primary fails, the secondary has a stale configuration that will break VPN tunnels and NAT rules.
A junior admin pushes a firewall rule that blocks TCP port 443 outbound for the production server VLAN. All HTTPS-dependent services fail — API calls to payment gateways, cloud backups, software license checks, and update services all stop.
A vulnerability scanner on the network is using incorrect SNMP community strings, generating thousands of SNMP authentication-failure traps from every managed device and overwhelming the NMS.
Two devices on the same VLAN have been assigned the same IP address. Both are sending gratuitous ARPs, creating an ARP storm that degrades network performance for the entire subnet.
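Detecting this condition reduces to noticing one IP claimed by two MACs. The sketch below assumes we can collect (IP, MAC) pairs, e.g. from observed gratuitous ARPs; the helper name and addresses are illustrative.

```python
# Sketch: spotting a duplicate-IP conflict from observed ARP replies,
# assuming (ip, mac) pairs collected from gratuitous ARP traffic.
from collections import defaultdict

def ip_conflicts(arp_observations):
    """Map each IP to the MACs claiming it; more than one MAC = conflict."""
    seen = defaultdict(set)
    for ip, mac in arp_observations:
        seen[ip].add(mac)
    return {ip: macs for ip, macs in seen.items() if len(macs) > 1}

obs = [("10.1.1.50", "aa:bb:cc:00:00:01"),
       ("10.1.1.51", "aa:bb:cc:00:00:02"),
       ("10.1.1.50", "aa:bb:cc:00:00:99")]  # second host claims .50
print(ip_conflicts(obs))
```

The same check, run continuously, also explains the symptom: each gratuitous ARP flips the subnet's ARP caches between the two MACs, so traffic for that IP oscillates between hosts.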
A compromised workstation is flooding the network with spoofed MAC addresses, overflowing the switch CAM table and causing unknown unicast flooding across all VLANs.
A fiber SFP is failing on the uplink between access and distribution layer switches. The port flaps every 30-90 seconds, causing MAC table instability and intermittent connectivity for 200+ users.
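A flap at this cadence is easy to catch with a threshold of the form "N link-state transitions within M seconds". The detector below is a generic sketch assuming event timestamps parsed from syslog; the window and threshold values are illustrative defaults.

```python
# Sketch: flap detection over link up/down event timestamps (seconds),
# e.g. parsed from syslog. Window/threshold values are assumed defaults.
def is_flapping(event_times, window_s=300, threshold=4):
    """True if any sliding window of `window_s` seconds contains at least
    `threshold` link-state transitions."""
    times = sorted(event_times)
    lo = 0
    for hi in range(len(times)):
        while times[hi] - times[lo] > window_s:
            lo += 1
        if hi - lo + 1 >= threshold:
            return True
    return False

print(is_flapping([0, 45, 110, 170, 240]))  # flap every ~60 s -> True
print(is_flapping([0, 3600]))               # two events an hour apart -> False
```

On a port flapping every 30-90 seconds this fires within minutes, which is the window in which err-disabling the uplink (or failing over to a redundant path) beats letting 200+ users ride the instability.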
Every scenario is tested against Corax's Neural Engine in a production environment with AI-powered root cause analysis.
Tests run continuously as new infrastructure patterns are added.