PASSED
cloud / azure_disk

Azure: Azure Managed Disk IOPS Throttled

An Azure managed disk hit its IOPS limit, causing severe I/O latency for the hosted application.

Pattern: AZURE_CLOUD (expected: AZURE_DISK_THROTTLE)
Severity: HIGH
Confidence: 68%
Remediation: Auto-Heal

Test Results

| Metric | Expected | Actual |
| --- | --- | --- |
| Pattern Recognition | AZURE_DISK_THROTTLE | AZURE_CLOUD |
| Severity Assessment | HIGH | HIGH |
| Incident Correlation | N/A | None |
| Cascade Escalation | N/A | No |
| Remediation | Auto-Heal (Corax resolves autonomously) | |

Scenario Conditions

Azure Standard SSD E30 (1TB, 500 IOPS). Application sustained 2000 IOPS. IO queue depth: 150. Avg latency: 800ms.
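The scenario numbers can be sanity-checked with a rough queueing estimate. This is an illustrative sketch, not Corax output; the constant names and the Little's-law model are assumptions, and the observed 800 ms includes per-IO service time and burst behavior the simple model does not capture.

```python
# Rough sanity check of the scenario numbers above (illustrative only).

DISK_IOPS_LIMIT = 500   # Standard SSD E30 cap
DEMANDED_IOPS = 2000    # sustained application load
QUEUE_DEPTH = 150       # outstanding IOs

# How far the workload exceeds the disk's throttle limit.
oversubscription = DEMANDED_IOPS / DISK_IOPS_LIMIT

# Little's law: time in system W = L / lambda, with throughput pinned
# at the throttle limit once the disk saturates.
est_latency_ms = QUEUE_DEPTH / DISK_IOPS_LIMIT * 1000

print(f"oversubscription: {oversubscription:.1f}x")
print(f"queueing-latency estimate: {est_latency_ms:.0f} ms")
```

The estimate (300 ms of pure queueing delay at a 4x oversubscribed disk) is a lower bound consistent with the severe degradation reported.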

Injected Error Messages (1)

Azure managed disk throttled — Standard SSD E30 at 500 IOPS limit, application demanding 2000 IOPS, IO queue depth 150, latency 800ms, application severely degraded

Neural Engine Root Cause Analysis

Azure cloud infrastructure event detected — an Azure resource may be failing, an App Service is unhealthy, Azure AD authentication is disrupted, or a Service Bus queue is backed up. Azure outages can cascade across dependent services and affect both cloud-hosted applications and hybrid on-premises integrations relying on Azure AD.

Remediation Plan

1. Check Azure Service Health (status.azure.com) for any active incidents in your region.
2. Review Azure Monitor alerts and Resource Health for the affected service.
3. For App Service issues, check the Kudu console for application logs and restart the app if needed.
4. For Azure AD issues, verify Conditional Access policies and check the Azure AD sign-in logs for failure reasons.
5. For Service Bus, check dead-letter queues and verify the sending and receiving applications are connected and processing messages.
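The steps above can be sketched as a simple triage dispatch. This is a minimal illustration of the plan's structure, not Corax's internal remediation logic; the subsystem keys and step strings are assumptions paraphrased from this report.

```python
# Map each Azure subsystem to its first diagnostic action from the plan above.
# Purely illustrative; these strings paraphrase the remediation steps.
REMEDIATION_STEPS = {
    "service_health": "Check Azure Service Health (status.azure.com) for active regional incidents",
    "monitor": "Review Azure Monitor alerts and Resource Health for the affected service",
    "app_service": "Check the Kudu console for application logs; restart the app if needed",
    "azure_ad": "Verify Conditional Access policies and review Azure AD sign-in logs",
    "service_bus": "Check dead-letter queues and confirm senders/receivers are processing",
}

def triage(subsystem: str) -> str:
    """Return the first remediation action for a subsystem, falling back to
    the generic service-health check when the subsystem is unrecognized."""
    return REMEDIATION_STEPS.get(subsystem, REMEDIATION_STEPS["service_health"])
```

For example, `triage("app_service")` returns the Kudu-console step, while an unknown subsystem falls back to the Service Health check, mirroring step 1 as the universal starting point.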

Improvements Applied

  • Pattern classified as AZURE_CLOUD (expected AZURE_DISK_THROTTLE)
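The misclassification noted above (generic AZURE_CLOUD instead of the more specific AZURE_DISK_THROTTLE) could be caught by a specificity-first match. A minimal keyword sketch, assuming marker strings drawn from this report's injected error message; the neural engine's actual classifier is not shown here:

```python
# Minimal keyword-based first pass: try the most specific Azure pattern
# before falling back to the generic AZURE_CLOUD bucket. Illustrative only.
DISK_THROTTLE_MARKERS = ("iops limit", "disk throttled", "io queue depth")

def classify(message: str) -> str:
    text = message.lower()
    if any(marker in text for marker in DISK_THROTTLE_MARKERS):
        return "AZURE_DISK_THROTTLE"
    if "azure" in text:
        return "AZURE_CLOUD"
    return "UNKNOWN"
```

Run against the injected message above ("Azure managed disk throttled ... at 500 IOPS limit ..."), this returns AZURE_DISK_THROTTLE rather than the generic bucket.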
Tested: 2026-04-02 | Monitors: 1 | Incidents: 1 | Test ID: cmnhnr9bl09c8lig7goiyxhk5