We test Corax against real-world infrastructure failures across every vendor, platform, and scenario. Browse the results below.
A rogue DHCP server on the network begins handing out IP addresses that conflict with statically assigned servers and network equipment, causing widespread connectivity issues as duplicate addresses corrupt ARP tables across the segment.
The centralized syslog server is overwhelmed by a log storm from a network event, dropping 80% of incoming messages. Critical security and compliance logs are being lost during an active incident.
The NetFlow collector server runs out of disk space, causing it to stop ingesting flow data from all network devices. Network visibility is lost, and security analytics based on flow data become non-functional.
Both TACACS+ AAA servers become unreachable due to a VLAN misconfiguration, locking all network administrators out of switches, routers, and firewalls. Only console port access remains available.
A 48-port PoE+ switch reaches its PoE power budget after 12 new WiFi 6E APs are connected, causing the switch to cut power to lower-priority devices including phones and security cameras.
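The power math behind this scenario can be sketched with hypothetical numbers (the 740 W budget and the per-device wattages below are illustrative examples, not figures from any specific switch model):

```python
def poe_shortfall(budget_w: float, draws_w: list[float]) -> tuple[float, float]:
    """Total PoE draw and budget shortfall (0 if within budget), in watts."""
    total = sum(draws_w)
    return total, max(0.0, total - budget_w)

# Hypothetical load: a 740 W PoE+ budget, 36 phones at 7.5 W,
# 16 cameras at 13 W, plus 12 new WiFi 6E APs at ~25 W each.
draws = [7.5] * 36 + [13.0] * 16 + [25.0] * 12
total, shortfall = poe_shortfall(740.0, draws)
print(f"draw {total:.0f} W / budget 740 W, shortfall {shortfall:.0f} W")
```

Once the shortfall is nonzero, a switch typically sheds power from the lowest-priority ports first, which is how phones and cameras end up going dark.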
An LACP port channel between the core switch and server farm switch loses all member links after a switch firmware bug causes LACP PDU processing to fail, severing connectivity for 50 servers.
IGMP snooping is disabled on a distribution switch after a firmware upgrade, causing all multicast traffic (video surveillance, IPTV, software distribution) to flood to every port on the VLAN, saturating access links.
A managed client's secondary ISP failover fails to activate when the primary circuit goes down. The SD-WAN appliance detects the primary failure but the secondary circuit is disconnected due to an unpaid bill. The site is completely offline.
The network TAP aggregating traffic for the IDS/IPS and packet capture system becomes oversubscribed. The 10G TAP is receiving 14Gbps of traffic, causing 28% packet loss on the monitoring feed. The IDS misses attack signatures and the packet capture has gaps.
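The quoted loss figure follows directly from tail-dropping everything beyond link capacity; a minimal sketch of the arithmetic:

```python
def tap_loss_fraction(offered_gbps: float, capacity_gbps: float) -> float:
    """Fraction of monitored traffic dropped when a TAP feed is oversubscribed,
    assuming everything beyond link capacity is tail-dropped."""
    if offered_gbps <= capacity_gbps:
        return 0.0
    return (offered_gbps - capacity_gbps) / offered_gbps

print(f"{tap_loss_fraction(14.0, 10.0):.1%} of the monitoring feed is lost")
```

For 14 Gbps offered into a 10G feed this works out to 4/14, just under 29%, in line with the 28% packet loss described above.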
A switch firmware upgrade resets QoS policies on 12 access switches, removing DSCP marking and priority queuing for voice traffic. VoIP call quality degrades severely during business hours when data traffic competes with voice.
A RADIUS policy change breaks MAC Authentication Bypass (MAB) for IoT devices. Security cameras, badge readers, and building management sensors are all locked out of the network as they cannot perform 802.1X EAP authentication.
The IP Address Management system shows all subnets in the production VLAN are fully allocated. DHCP scopes have no available leases, and new devices cannot obtain IP addresses. Server provisioning and workstation deployment are both blocked.
The primary NTP server loses its upstream time source and begins drifting. Because it is the stratum 1 source for the internal network, all downstream servers inherit the drift, and Kerberos authentication begins failing once clock skew exceeds 5 minutes.
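The failure threshold can be sketched as follows (300 seconds is Kerberos' default clock-skew tolerance; the drift rate is a made-up example):

```python
MAX_SKEW_S = 300.0  # Kerberos' default clock-skew tolerance: 5 minutes

def skew_exceeded(local_epoch: float, reference_epoch: float,
                  max_skew_s: float = MAX_SKEW_S) -> bool:
    """True once the clock offset is large enough for Kerberos to reject tickets."""
    return abs(local_epoch - reference_epoch) > max_skew_s

# A fleet drifting at a hypothetical 2 seconds/hour crosses the threshold
# in 300 / 2 = 150 hours -- roughly six days of unnoticed drift.
hours_to_failure = MAX_SKEW_S / 2.0
print(f"{hours_to_failure:.0f} hours until Kerberos rejects tickets")
```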
The TFTP server used for automated network device configuration backups becomes unreachable after a server migration. Nightly configuration backups for 80 network devices have not run for 7 days, leaving no recent configuration recovery point.
The centralized syslog server cannot keep up with the volume of incoming UDP syslog messages during a network event. UDP packets are dropped at the kernel level, causing critical security and audit log data to be permanently lost.
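One place these kernel-level drops show up on a Linux collector is the `RcvbufErrors` counter in `/proc/net/snmp` (datagrams discarded because the socket receive buffer was full). A minimal parser sketch; the sample counter values are invented:

```python
def parse_udp_rcvbuf_errors(snmp_text: str) -> int:
    """Extract the UDP RcvbufErrors counter from /proc/net/snmp contents.

    The file has a header line naming the fields and a value line per
    protocol; both start with 'Udp:' for UDP.
    """
    lines = [l for l in snmp_text.splitlines() if l.startswith("Udp:")]
    header, values = lines[0].split()[1:], lines[1].split()[1:]
    return int(values[header.index("RcvbufErrors")])

sample = (
    "Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors\n"
    "Udp: 8921345 12 40312 553211 40298 0\n"
)
print(parse_udp_rcvbuf_errors(sample))  # 40298
```

A steadily climbing `RcvbufErrors` during a log storm is the kernel silently discarding messages the syslog daemon never saw.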
The RADIUS accounting server becomes unresponsive, leaving network access devices unable to deliver accounting records. ISP billing data is lost for 8 hours, and compliance logging for network access events stops.
A misconfigured ACL on the layer 3 switch allows traffic from the guest VLAN to reach the server VLAN, bypassing network segmentation. The IDS detects lateral scanning from a compromised guest device targeting internal servers.
Both RADIUS servers (backed by Active Directory) become unreachable after an AD domain controller crash. All 802.1X network authentication fails, preventing users from connecting to wired and wireless networks. Existing sessions remain active but no new authentications succeed.
The primary DNS server's zone transfer (AXFR) to the secondary fails after a firewall rule change blocks TCP port 53. The secondary continues serving increasingly stale records: as cached entries expire on resolvers, lookups return outdated data, and once the zone's SOA expire timer elapses the secondary stops answering for the zone entirely.
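How long the secondary keeps answering comes down to the zone's SOA expire timer; a minimal sketch (the 7-day expire value is a common choice, assumed here for illustration):

```python
def secondary_is_authoritative(seconds_since_last_axfr: int,
                               soa_expire_s: int) -> bool:
    """A secondary serves (increasingly stale) zone data until the SOA
    expire timer elapses, after which it must stop answering for the zone."""
    return seconds_since_last_axfr < soa_expire_s

SOA_EXPIRE_S = 604_800  # assumed expire value: 7 days, in seconds

print(secondary_is_authoritative(3 * 86_400, SOA_EXPIRE_S))  # stale but serving
print(secondary_is_authoritative(8 * 86_400, SOA_EXPIRE_S))  # expired: SERVFAIL
```

The dangerous window is the first few days: resolution keeps "working" on stale data, so the broken transfer often goes unnoticed until the zone expires outright.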
The primary power supply in the core switch stack fails, and the failover to the secondary PSU is not hitless: the affected switch reboots, the stack ring breaks, and a stack master re-election disrupts all traffic through the core for 90 seconds.
Every scenario is tested against Corax's Neural Engine in a production environment with AI-powered root cause analysis.
Tests run continuously as new infrastructure patterns are added.