Alert thresholds are established
Detection without calibration produces one of two failure modes: alert fatigue from too many low-confidence signals, or silent failure from thresholds set so high that real attacks never trigger. Alert thresholds define the sensitivity of your detection: the point at which an observation becomes noteworthy enough to page an analyst. Organizations must deliberately tune thresholds to their environment rather than relying on vendor defaults, which are calibrated for a generic organization, not yours.
Implementation steps
Step 1: Inventory and review all active detection rules and their thresholds
Audit every active alert rule in your SIEM, EDR, and other detection tools. For each rule, document: what it detects, the current threshold, the false positive rate, and the last time it was reviewed. Many organizations accumulate detection rules over time without ever reviewing whether the threshold is appropriate. This audit identifies over-sensitive rules generating noise and under-sensitive rules likely missing real events.
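This audit can be scripted once rule metadata has been exported. A minimal sketch, assuming the rules have already been pulled from your SIEM into a list of records (the field names, dates, and cutoffs below are illustrative, not any vendor's schema):

```python
from datetime import date, timedelta

# Hypothetical export of detection rules; in practice this would come from
# your SIEM's API or a detection-as-code repository.
rules = [
    {"name": "failed-login-burst", "threshold": 10, "fp_rate": 0.85,
     "last_reviewed": date(2023, 1, 15)},
    {"name": "new-admin-account", "threshold": 1, "fp_rate": 0.05,
     "last_reviewed": date(2024, 11, 2)},
]

REVIEW_MAX_AGE = timedelta(days=90)  # quarterly review cadence
FP_RATE_CEILING = 0.5                # illustrative cutoff for "mostly noise"

def audit(rules, today):
    """Flag rules that are stale (unreviewed) or over-sensitive (noisy)."""
    findings = []
    for r in rules:
        if today - r["last_reviewed"] > REVIEW_MAX_AGE:
            findings.append((r["name"], "stale: not reviewed this quarter"))
        if r["fp_rate"] > FP_RATE_CEILING:
            findings.append((r["name"], f"noisy: {r['fp_rate']:.0%} false positives"))
    return findings

for name, issue in audit(rules, date(2025, 1, 10)):
    print(f"{name}: {issue}")
```

The output of a script like this becomes the starting worksheet for the tuning work in the next step.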
Tools: Splunk, Microsoft Sentinel, Elastic, CrowdStrike, Datadog

Step 2: Tune thresholds based on your environment's baseline
Use your established baselines (from DE.AE-1) to set thresholds that are meaningful for your environment: if 500 failed logins per day is normal for a particular service, that service's failed-login threshold should sit substantially above the default. Adjust thresholds iteratively: lower them if you observe missed detections, raise them if false-positive volume is degrading analyst performance.
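One common starting point is a simple statistical bound derived from the baseline. A sketch, assuming you have daily event counts from the baseline period (the counts below are invented for illustration):

```python
from statistics import mean, stdev

# Illustrative daily failed-login counts for one service, taken from the
# DE.AE-1 baseline period. Real counts would come from your SIEM.
daily_failed_logins = [480, 510, 495, 530, 470, 505, 520, 490, 515, 500]

def suggest_threshold(baseline_counts, num_stdevs=3):
    """Alert only when a day's volume falls well outside the observed baseline."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts)
    return round(mu + num_stdevs * sigma)

threshold = suggest_threshold(daily_failed_logins)
print(f"baseline mean ~{mean(daily_failed_logins):.0f}/day, "
      f"suggested alert threshold: {threshold}/day")
```

Mean-plus-k-sigma is only a first approximation; it assumes roughly normal daily volumes, so treat the result as a candidate threshold to iterate on, not a final value.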
Tools: Splunk, Microsoft Sentinel, Elastic, Datadog

Step 3: Establish a regular threshold review cadence
Thresholds that were appropriate last quarter may be wrong today if traffic volumes have grown, new services have been deployed, or user behavior has shifted. Review alert thresholds at least quarterly and after any major infrastructure change. Track the false positive rate for each rule over time and use this trend data to justify threshold adjustments.
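The trend tracking described above can be as simple as comparing each rule's false-positive rate across review periods. A sketch, assuming quarterly FP rates are recorded per rule (the rule names and rates are invented):

```python
# Quarterly false-positive rates per rule (illustrative numbers); a rising
# trend is the evidence used to justify raising a rule's threshold at review.
fp_history = {
    "failed-login-burst": [0.30, 0.45, 0.60],   # rising: review candidate
    "new-admin-account":  [0.10, 0.08, 0.05],   # improving: leave as-is
}

def needs_adjustment(history, worsening_delta=0.10):
    """Flag rules whose FP rate grew by more than the delta since last review."""
    return [rule for rule, rates in history.items()
            if len(rates) >= 2 and rates[-1] - rates[-2] > worsening_delta]

print(needs_adjustment(fp_history))
```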
Tools: Splunk, Datadog, Elastic
Evidence required
Detection rule inventory with thresholds
A documented inventory of detection rules with their configured thresholds and review history.
- SIEM rule inventory spreadsheet with threshold values and last-reviewed dates
- Detection engineering wiki documenting tuning decisions and rationale
- Quarterly threshold review meeting notes
Alert volume and false positive rate metrics
Evidence of ongoing monitoring and tuning of alert quality.
- Dashboard showing alert volume by rule over time with false positive rate annotations
- Post-review summary showing which rules were adjusted and why
- SOC metrics report showing mean time to triage and false positive percentage
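The two headline numbers in that metrics report can be computed directly from triage records. A minimal sketch, assuming each alert record carries a fired time, a triage time, and an analyst disposition (field names are assumptions, not any tool's schema):

```python
from datetime import datetime

# Illustrative triage records exported from a ticketing or SOAR system.
alerts = [
    {"fired": datetime(2025, 1, 6, 9, 0),   "triaged": datetime(2025, 1, 6, 9, 20),
     "disposition": "false_positive"},
    {"fired": datetime(2025, 1, 6, 11, 0),  "triaged": datetime(2025, 1, 6, 11, 5),
     "disposition": "true_positive"},
    {"fired": datetime(2025, 1, 7, 14, 0),  "triaged": datetime(2025, 1, 7, 14, 35),
     "disposition": "false_positive"},
]

# Mean time to triage, in seconds, and false-positive percentage.
mean_triage = sum((a["triaged"] - a["fired"]).total_seconds() for a in alerts) / len(alerts)
fp_pct = 100 * sum(a["disposition"] == "false_positive" for a in alerts) / len(alerts)

print(f"mean time to triage: {mean_triage / 60:.1f} min, false positives: {fp_pct:.0f}%")
```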
Related controls
Information is correlated from multiple sources
Adverse Event Analysis
Networks and network services are monitored to detect adverse events
Continuous Monitoring
Computing hardware and software, runtime environments, and their data are monitored to find potentially adverse events
Continuous Monitoring
A baseline of network operations and expected data flows is established and managed
Adverse Event Analysis