An Exchange Database Availability Group (DAG) member server experiences a storage failure, causing the active database copy to dismount. The passive copy on the second server activates, but it is 5 minutes behind due to log-shipping lag.
Pattern: EXCHANGE_EVENT
Severity: CRITICAL
Confidence: 90%
Remediation: Remote Hands
Test Results

| Metric               | Expected       | Actual                                                                 | Result |
|----------------------|----------------|------------------------------------------------------------------------|--------|
| Pattern Recognition  | EXCHANGE_EVENT | EXCHANGE_EVENT                                                         |        |
| Severity Assessment  | CRITICAL       | CRITICAL                                                               |        |
| Incident Correlation | Yes            | 21 linked                                                              |        |
| Cascade Escalation   | Yes            | Yes                                                                    |        |
| Remediation          | —              | Remote Hands — Corax contacts on-site support via call, email, or API  |        |
Scenario Conditions
Exchange 2019 DAG with two members (EX-01 and EX-02). Mailbox database DB-01 is active on EX-01. A storage controller failure occurs on EX-01. The passive copy on EX-02 is 5 minutes behind. DB-01 hosts 500 mailboxes.
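A sketch of how the copy state described above could be inspected from the Exchange Management Shell on either DAG member; the database name comes from the scenario, and the property list is a typical selection rather than an exhaustive one:

```powershell
# Show the health of every copy of DB-01 across the DAG.
# ReplayQueueLength reflects the log files not yet replayed on the passive copy
# (312 in this scenario, roughly 5 minutes of lag).
Get-MailboxDatabaseCopyStatus -Identity "DB-01" |
    Format-Table Name, Status, CopyQueueLength, ReplayQueueLength, ContentIndexState
```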
Injected Error Messages (3)
1. Exchange database DB-01 dismounted on EX-01 — storage I/O error, ESE error -1022 (JET_errDiskIO), database copy status: Dismounted, Event ID 474: database page read failed, storage controller reporting SCSI errors
2. Exchange DAG failover initiated — DB-01 activating on EX-02, passive copy replay queue: 312 log files (5 minutes behind), database mounting in recovery mode, potential data loss for emails received in last 5 minutes
3. Outlook connectivity disrupted — MAPI connections to DB-01 dropped, Outlook clients in 'Trying to connect...' state, Autodiscover redirecting to EX-02, 500 mailboxes affected, reconnection in progress
Neural Engine Root Cause Analysis
Exchange database DB-01 has dismounted due to a storage I/O failure at the infrastructure level. The ESE error -1022 (JET_errDiskIO) and Event ID 474 indicate database page read failures caused by underlying SCSI errors from the storage controller. This points to a storage hardware failure, SAN connectivity issue, or storage controller malfunction preventing the Exchange server from accessing its database files.
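One way to confirm this diagnosis is to pull the relevant events from EX-01's event logs. A sketch, assuming remote event-log access to EX-01 is available (the provider names are the usual sources for these errors, not values taken from the scenario):

```powershell
# ESE Event ID 474 (database page read failure) from the Application log on EX-01.
Get-WinEvent -ComputerName "EX-01" -FilterHashtable @{
    LogName      = 'Application'
    ProviderName = 'ESE'
    Id           = 474
} -MaxEvents 20 | Format-List TimeCreated, Message

# Recent disk-level errors from the System log, to corroborate the SCSI errors.
Get-WinEvent -ComputerName "EX-01" -FilterHashtable @{
    LogName      = 'System'
    ProviderName = 'disk'
} -MaxEvents 20 | Format-List TimeCreated, Id, Message
```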
Remediation Plan
1. Immediately check storage controller health and SCSI connection status
2. Verify SAN connectivity and multipathing configuration
3. Review storage system logs for hardware errors or capacity issues
4. If storage is healthy, attempt to mount the Exchange database using the Mount-Database cmdlet
5. If the mount fails, check database integrity using ESEUTIL /mh and /g
6. Consider failing over to the passive database copy if in a DAG configuration
7. Engage the storage team for hardware replacement if controller failure is confirmed
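The Exchange-side steps of the plan could be sketched as follows. This assumes the Exchange Management Shell and the scenario's database and server names; the eseutil path and the mount-dial choice are illustrative, not prescribed by the plan:

```powershell
# Attempt to remount on EX-01 once storage is confirmed healthy.
Mount-Database -Identity "DB-01"

# If the mount fails, inspect the database with eseutil (a native tool, run
# against the .edb file path on EX-01; .\DB-01.edb is an assumed location).
& eseutil /mh .\DB-01.edb   # header dump: shows Clean/Dirty Shutdown state
& eseutil /g  .\DB-01.edb   # read-only integrity check

# Fail over to the passive copy on EX-02. BestEffort accepts the replay lag,
# which matches the potential 5-minute data loss noted in the injected errors.
Move-ActiveMailboxDatabase -Identity "DB-01" -ActivateOnServer "EX-02" `
    -MountDialOverride:BestEffort
```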