Audit Rubric
de-ae-6 · High · Detect / Adverse Event Analysis

Information on adverse events is provided to authorized staff and tools

Detection is only valuable if the findings reach the people and systems positioned to act on them. Alert routing failures, where an alert fires but nobody with authority to respond receives it, are a common contributor to security incidents escalating unnecessarily. Effective adverse event notification means defining who needs to know about which event types, ensuring that routing is reliable, and confirming that notification channels function under the conditions where they matter most.

Estimated effort: 3h
Tags: alerting, notification, incident-response, pagerduty, escalation

Implementation steps

  1. Define notification routing for security events by severity

    Document who receives alerts at each severity tier: a low-severity anomaly may go to a shared Slack channel, a high-severity event may page the on-call security engineer, and a critical event may simultaneously notify the CISO and trigger an incident management workflow. Define escalation paths for cases where the primary recipient does not acknowledge within the SLA. Verify routing by sending test events through the pipeline.

    Tools: pagerduty, opsgenie, splunk, datadog, slack
  2. Integrate security tools to share findings automatically

    Configure your detection tools to automatically share findings with other security systems: EDR alerts should flow to the SIEM for correlation, SIEM alerts above a threshold should auto-create tickets in your incident management system, and vulnerability findings should route to the responsible engineering team's backlog. Automated routing reduces the time between detection and human review.

    Tools: palo-alto-cortex-xsoar, splunk-phantom, jira, pagerduty
  3. Verify notification paths are reliable and tested

    Test your notification paths regularly: send a test alert through the full pipeline and confirm it reaches every intended recipient. Verify that on-call rotations are staffed, phone numbers in PagerDuty are current, and Slack webhooks have not expired. Notification failures discovered during an incident, rather than in a test, are preventable gaps.

    Tools: pagerduty, opsgenie, datadog
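
Steps 1 and 2 above amount to a routing table keyed by severity plus a threshold for auto-creating tickets. The sketch below illustrates that logic; the destination names and the `AUTO_TICKET_THRESHOLD` value are hypothetical placeholders for illustration, not a real PagerDuty or SIEM configuration, which would live in those tools themselves.

```python
# Minimal sketch of severity-based alert routing with auto-ticketing.
# Destination names ("#security-alerts", "oncall-security", etc.) are
# invented for illustration only.
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

SEVERITY_ROUTES = {
    "low": ["#security-alerts"],                      # shared Slack channel
    "medium": ["#security-alerts"],
    "high": ["#security-alerts", "oncall-security"],  # pages on-call engineer
    "critical": ["#security-alerts", "oncall-security",
                 "ciso", "incident-workflow"],
}

# Alerts at or above this tier also auto-create an incident ticket (step 2).
AUTO_TICKET_THRESHOLD = "high"


def route_alert(severity: str) -> list[str]:
    """Return every destination an alert of this severity should reach."""
    # Unknown severities escalate to the highest tier rather than being dropped.
    if severity not in SEVERITY_ORDER:
        severity = "critical"
    destinations = list(SEVERITY_ROUTES[severity])
    if SEVERITY_ORDER.index(severity) >= SEVERITY_ORDER.index(AUTO_TICKET_THRESHOLD):
        destinations.append("ticketing-system")
    return destinations
```

Note the fail-open choice: an unrecognized severity routes to the critical tier instead of silently disappearing, which matches the goal of never losing an alert to a routing gap.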

Evidence required

Alert routing and notification configuration

Documentation of who receives which alerts and through what channels.

  • PagerDuty or OpsGenie escalation policy configuration
  • SIEM alert routing rules showing notification destinations by severity
  • On-call schedule showing 24/7 coverage for critical security alerts

Notification testing records

Evidence that notification paths have been tested and verified.

  • Test alert delivery records showing end-to-end notification pipeline verification
  • Quarterly on-call notification test results
  • Runbook for verifying notification channel health
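
A channel-health runbook can be partially automated: send a test alert to each channel, record which deliveries succeed, and surface the failures before a real incident does. The sketch below assumes a caller-supplied `send_test` function; in practice that would call each provider's API (for example, a PagerDuty test event or a Slack webhook ping), and the channel names here are invented for illustration.

```python
def verify_channels(channels, send_test):
    """Send a test alert through each channel; return per-channel delivery status."""
    report = {}
    for name in channels:
        try:
            send_test(name)  # e.g. provider-specific test event or webhook ping
            report[name] = "delivered"
        except Exception as exc:
            report[name] = f"failed: {exc}"
    return report


# Example with a stubbed sender: one healthy channel, one expired webhook.
def fake_send(name):
    if name == "slack-webhook":
        raise RuntimeError("webhook expired")


print(verify_channels(["pagerduty", "slack-webhook"], fake_send))
```

Running the report on a schedule and archiving its output would directly produce the quarterly test records listed above.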

Related controls