How Alerts Work
Alerts follow a four-stage pipeline:

1. Event Detection
AnomalyArmor detects events during discovery runs:

| Event Type | Description |
|---|---|
| Schema Change | Column added, removed, or type changed |
| Freshness Violation | Data not updated within SLA |
| Discovery Failed | Connection or permission error |
| Asset Removed | Table/view no longer exists |
| New Asset | Table/view discovered for first time |
2. Rule Evaluation
Each event is checked against your alert rules:

- Scope: Does the event match the rule’s filters? (data source, schema, asset)
- Conditions: Does it meet additional criteria? (change type, etc.)
- Active: Is the rule enabled?
3. Suppression Check
Matching events are then checked against suppression settings:
- Operating Schedules: Is the event within the rule’s active hours?
- Blackout Windows: Is a company-wide blackout currently active?
- Cooldown: Has this rule already fired recently?
- Daily Limit: Has the rule exceeded its daily notification cap?
4. Routing & Delivery
Matching events are sent to configured destinations:

- Rules can have multiple destinations
- Each destination can receive from multiple rules
- Deduplication prevents repeat alerts for the same event
Supported Destinations
Slack
Real-time channel notifications
Individual or team distribution
Webhooks
Custom integrations
PagerDuty
On-call escalation
MS Teams
Teams channel notifications
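For the webhook destination, any HTTP endpoint that accepts a POST can receive alerts. Below is a minimal receiver sketch using Python's standard library; the payload fields (`event_type`, `asset`) are hypothetical, not a documented AnomalyArmor schema:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON alert body
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # Forward to your own systems here; field names are assumed
        print(f"received: {payload.get('event_type')} on {payload.get('asset')}")
        self.send_response(200)    # a 2xx response acknowledges delivery
        self.end_headers()

    def log_message(self, *args):  # silence default per-request stderr logging
        pass
```

Run it with `HTTPServer(("", 8080), AlertHandler).serve_forever()` and point the webhook destination at the listening host and port.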
Alert Components
Rules
Rules define when alerts fire and where they go.

Example: “Production Schema Changes”

- Event Type: Schema Change Detected
- Scope: Data source = production-postgres
- Conditions: Change type = Column Removed
- Destinations:
- Slack (#data-alerts)
- PagerDuty (on-call)
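One way to picture this rule is as a single config object. This is a hypothetical representation for illustration; AnomalyArmor's actual rule format may differ:

```json
{
  "name": "Production Schema Changes",
  "event_type": "schema_change_detected",
  "scope": { "data_source": "production-postgres" },
  "conditions": { "change_type": "column_removed" },
  "destinations": ["slack:#data-alerts", "pagerduty:on-call"],
  "enabled": true
}
```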
Destinations
Destinations are the channels where alerts are delivered.

Example: “Slack - Data Alerts”

- Type: Slack
- Channel: #data-alerts
- Workspace: your-company.slack.com
- Status: Connected
Alert History
All alerts are logged for review:

- View past alerts in Alerts → History
- Filter by date, type, destination, or asset
- See which rules triggered each alert
- Track response times and patterns
Setting Up Alerts
Quick Start
1. Add a destination: Connect Slack, email, or another channel
2. Create a rule: Define what triggers alerts and where they go
3. Test: Use “Send Test Alert” to verify delivery
4. Monitor: Review alert history and adjust thresholds
Recommended First Rules
Start with these three rules:

| Rule | Event | Destination | Why |
|---|---|---|---|
| Schema Changes | Schema Change | Slack | Catch breaking changes |
| Stale Data | Freshness Violation | Slack | Detect pipeline failures |
| Connection Issues | Discovery Failed |  | Know when monitoring breaks |
Alert Deduplication
AnomalyArmor prevents alert storms:

- Same event: Won’t re-alert for the same change until resolved
- Cooldown period: Configurable delay between repeated alerts
- Aggregation: Multiple changes can be grouped (coming soon)
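The same-event and cooldown behavior can be sketched as a small keyed cache. This is a sketch of the general deduplication technique, not AnomalyArmor's implementation; the key shape `(rule_id, event_key)` is an assumption:

```python
from datetime import datetime, timedelta

class Deduplicator:
    """Suppress repeat notifications for the same (rule, event) pair."""

    def __init__(self, cooldown: timedelta):
        self.cooldown = cooldown
        self.last_sent: dict[tuple[str, str], datetime] = {}

    def should_send(self, rule_id: str, event_key: str, now: datetime) -> bool:
        key = (rule_id, event_key)
        last = self.last_sent.get(key)
        if last is not None and now - last < self.cooldown:
            return False          # same event inside the cooldown window
        self.last_sent[key] = now  # record delivery and allow the alert
        return True
```

Distinct rules or distinct events never suppress each other; only an identical pair seen within the cooldown window is dropped.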
Managing Alerts
Viewing Active Alerts
Go to Alerts → Active to see unresolved alerts:

- Filter by asset or date
- Click to view details and related changes
- Mark as acknowledged or resolved
Disabling Rules
To temporarily stop alerts during maintenance:

1. Go to Alerts → Rules
2. Find the rule and toggle it OFF
3. After maintenance, toggle it back ON
Reviewing History
Alerts → History shows all past alerts:

- When each alert fired
- Which rule triggered it
- Where it was delivered
- Alert details and context

Use this history to:

- Identify alert fatigue (too many alerts)
- Find patterns (same asset always alerting)
- Tune thresholds and conditions
Best Practices
Start with critical assets
Don’t alert on everything. Begin with your most important tables (revenue, users, orders) and expand from there.
Match channels to urgency
- PagerDuty: Only for truly urgent issues requiring immediate response
- Slack: Team visibility, moderate urgency
- Email: Low urgency, informational, digests
Set realistic thresholds
If your data updates hourly, don’t set a 30-minute freshness SLA. Start lenient and tighten over time.
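The threshold logic itself is simple; what matters is the slack you build in. A minimal sketch, assuming an asset's last-update timestamp is available (function and parameter names are illustrative):

```python
from datetime import datetime, timedelta

def freshness_violation(last_updated: datetime, sla: timedelta,
                        now: datetime) -> bool:
    """True if the asset has gone longer than its SLA without an update."""
    return now - last_updated > sla
```

For hourly data, an SLA of two hours tolerates one late run; a 30-minute SLA would flag a violation on every cycle.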
Review and tune regularly
Check alert history weekly. If you’re getting too many alerts, adjust rules. If you’re missing issues, add coverage.
Troubleshooting
Alerts not firing
- Check rule is enabled (toggle ON)
- Verify destination is connected (test it)
- Confirm scope matches the asset
- Ensure events are occurring (check discovery is running)
Too many alerts
- Add conditions to filter events
- Exclude development/test schemas
- Increase thresholds (e.g., longer freshness SLA)
- Route different event types to different destinations
Wrong destination receiving alerts
- Check rule configuration
- Verify destination is selected for the correct rule
- Check for duplicate rules with different destinations
Next Steps
Create Alert Rules
Configure when and where alerts fire
Set Up Slack
Connect your Slack workspace
Best Practices
Reduce alert fatigue
Freshness SLAs
Set up data freshness alerts
