Alerts are how AnomalyArmor notifies you when something needs attention. Whether it’s a schema change, stale data, or a failed discovery job, alerts ensure the right people know at the right time.
[Diagram: event detection, rule evaluation, routing, and delivery]

How Alerts Work

Alerts follow a four-stage pipeline from event detection to delivery:

1. Event Detection

AnomalyArmor detects events during discovery runs:
  • Schema Change: column added, removed, or type changed
  • Freshness Violation: data not updated within its SLA
  • Discovery Failed: connection or permission error
  • Asset Removed: table/view no longer exists
  • New Asset: table/view discovered for the first time

2. Rule Evaluation

Each event is checked against your alert rules:
  • Scope: Does the event match the rule’s filters? (data source, schema, asset)
  • Conditions: Does it meet additional criteria? (change type, etc.)
  • Active: Is the rule enabled?

3. Suppression Check

[Diagram: alert passing through schedule, blackout, cooldown, and daily-limit checks before delivery or suppression]
Before delivery, alerts pass through suppression checks:
  • Operating Schedules: Is the event within the rule’s active hours?
  • Blackout Windows: Is a company-wide blackout currently active?
  • Cooldown: Has this rule already fired recently?
  • Daily Limit: Has the rule exceeded its daily notification cap?
Suppressed alerts are still recorded in the alert log for auditing.

4. Routing & Delivery

Matching events are sent to configured destinations:
  • Rules can have multiple destinations
  • Each destination can receive from multiple rules
  • Deduplication prevents repeat alerts for the same event

Supported Destinations

Slack

Real-time channel notifications

Email

Individual or team distribution

Webhooks

Custom integrations

PagerDuty

On-call escalation

MS Teams

Teams channel notifications
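For webhook destinations, your endpoint receives an HTTP POST per alert. The payload fields below are hypothetical, used only for illustration; verify the real schema with "Send Test Alert" before building on it:

```python
import json

def handle_alert_webhook(body: bytes) -> str:
    """Parse an incoming alert payload and produce a one-line summary.
    Field names ('event_type', 'asset', 'rule') are assumptions, not
    AnomalyArmor's documented schema."""
    alert = json.loads(body)
    summary = f"[{alert['event_type']}] {alert['asset']} via rule {alert['rule']}"
    # A real integration would enqueue this for your ticketing,
    # paging, or chat system here.
    return summary

sample = json.dumps({
    "event_type": "schema_change",
    "asset": "production-postgres.public.orders",
    "rule": "Production Schema Changes",
}).encode()
print(handle_alert_webhook(sample))
```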

Alert Components

Rules

Rules define when alerts fire and where they go. Example rule: “Production Schema Changes”
  • Event Type: Schema Change Detected
  • Scope: Data source = production-postgres
  • Conditions: Change type = Column Removed
  • Destinations:
    • Slack (#data-alerts)
    • PagerDuty (on-call)
See Alert Rules for detailed configuration.

Destinations

Destinations are the channels where alerts are delivered. Example destination: “Slack - Data Alerts”
  • Type: Slack
  • Channel: #data-alerts
  • Workspace: your-company.slack.com
  • Status: Connected
Configure destinations before creating rules that use them.

Alert History

All alerts are logged for review:
  • View past alerts in Alerts → History
  • Filter by date, type, destination, or asset
  • See which rules triggered each alert
  • Track response times and patterns

Setting Up Alerts

Quick Start

  1. Add a destination: Connect Slack, email, or another channel
  2. Create a rule: Define what triggers alerts and where they go
  3. Test: Use “Send Test Alert” to verify delivery
  4. Monitor: Review alert history and adjust thresholds
Start with these three rules:
  • Schema Changes: on Schema Change, to Slack (catch breaking changes)
  • Stale Data: on Freshness Violation, to Slack (detect pipeline failures)
  • Connection Issues: on Discovery Failed, to Email (know when monitoring breaks)

Alert Deduplication

AnomalyArmor prevents alert storms:
  • Same event: Won’t re-alert for the same change until resolved
  • Cooldown period: Configurable delay between repeated alerts
  • Aggregation: Multiple changes can be grouped (coming soon)
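Deduplication usually boils down to deriving a key from each event and refusing to re-alert while that key is inside its cooldown window. A minimal sketch, where the key fields are assumptions about what makes two events “the same”:

```python
from datetime import datetime, timedelta

class Deduplicator:
    """Suppress repeat alerts for the same event within a cooldown window."""

    def __init__(self, cooldown: timedelta):
        self.cooldown = cooldown
        self._last_sent: dict[tuple, datetime] = {}

    def should_send(self, event: dict, now: datetime) -> bool:
        # Key fields are illustrative: type + asset + change kind.
        key = (event["type"], event["asset"], event.get("change_type"))
        last = self._last_sent.get(key)
        if last and now - last < self.cooldown:
            return False  # duplicate within cooldown: suppress
        self._last_sent[key] = now
        return True

d = Deduplicator(timedelta(hours=1))
e = {"type": "schema_change", "asset": "orders", "change_type": "column_removed"}
t0 = datetime(2024, 1, 8, 10, 0)
print(d.should_send(e, t0))                          # True  (first alert)
print(d.should_send(e, t0 + timedelta(minutes=10)))  # False (suppressed)
print(d.should_send(e, t0 + timedelta(hours=2)))     # True  (cooldown elapsed)
```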

Managing Alerts

Viewing Active Alerts

Go to Alerts → Active to see unresolved alerts:
  • Filter by asset or date
  • Click to view details and related changes
  • Mark as acknowledged or resolved

Disabling Rules

To temporarily stop alerts during maintenance:
  1. Go to Alerts → Rules
  2. Find the rule and toggle it OFF
  3. After maintenance, toggle it back ON

Reviewing History

Alerts → History shows all past alerts:
  • When each alert fired
  • Which rule triggered it
  • Where it was delivered
  • Alert details and context
Use history to:
  • Identify alert fatigue (too many alerts)
  • Find patterns (same asset always alerting)
  • Tune thresholds and conditions

Best Practices

Don’t alert on everything. Begin with your most important tables (revenue, users, orders) and expand from there.
Match the destination to the urgency:
  • PagerDuty: only for truly urgent issues requiring immediate response
  • Slack: team visibility, moderate urgency
  • Email: low urgency, informational, digests
If your data updates hourly, don’t set a 30-minute freshness SLA. Start lenient and tighten over time.
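A freshness check is just a comparison between time-since-update and the SLA, which is why the SLA must be looser than the real update cadence. For example:

```python
from datetime import datetime, timedelta

def is_stale(last_updated: datetime, sla: timedelta, now: datetime) -> bool:
    """Freshness violation: the data is older than the SLA allows."""
    return now - last_updated > sla

now = datetime(2024, 1, 8, 12, 0)
last = now - timedelta(minutes=45)  # hourly pipeline, mid-cycle

# A 30-minute SLA on hourly data alerts on every normal run:
print(is_stale(last, timedelta(minutes=30), now))  # True (false alarm)
# A 2-hour SLA only fires when the pipeline actually misses a run:
print(is_stale(last, timedelta(hours=2), now))     # False
```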
Check alert history weekly. If you’re getting too many alerts, adjust rules. If you’re missing issues, add coverage.
See Alert Best Practices for more detailed guidance.

Troubleshooting

Alerts not firing
  1. Check the rule is enabled (toggle ON)
  2. Verify the destination is connected (send a test alert)
  3. Confirm the rule’s scope matches the asset
  4. Ensure events are actually occurring (check that discovery is running)

Too many alerts
  1. Add conditions to filter events
  2. Exclude development/test schemas
  3. Increase thresholds (e.g., a longer freshness SLA)
  4. Route different event types to different destinations

Alerts going to the wrong destination
  1. Check the rule’s configuration
  2. Verify the destination is selected on the correct rule
  3. Check for duplicate rules with different destinations

Next Steps

Create Alert Rules

Configure when and where alerts fire

Set Up Slack

Connect your Slack workspace

Best Practices

Reduce alert fatigue

Freshness SLAs

Set up data freshness alerts