The AnomalyArmor MCP server lets AI assistants like Claude Code and Cursor interact with your data observability platform through natural language.

Connection Methods

| Method | Auth | Best For |
| --- | --- | --- |
| Remote (Recommended) | Sign in with your AnomalyArmor account | Zero setup, always up to date |
| Local | API key | Air-gapped environments, custom setups |
Remote Server (Recommended)

Connect directly to AnomalyArmor's hosted MCP server. No installation is required; you sign in with your existing AnomalyArmor account through OAuth.

Claude Code

claude mcp add anomalyarmor --transport http https://mcp.anomalyarmor.ai/mcp
Claude Code will open a browser window to sign in the first time you use a tool.

Cursor

Add to your Cursor MCP configuration:

  macOS:   ~/Library/Application Support/Cursor/mcp.json
  Linux:   ~/.config/Cursor/mcp.json
  Windows: %APPDATA%\Cursor\mcp.json
{
  "mcpServers": {
    "anomalyarmor": {
      "url": "https://mcp.anomalyarmor.ai/mcp"
    }
  }
}
Restart Cursor after saving. You will be prompted to sign in when you first use a tool.

Other MCP Clients

Any MCP client that supports the streamable-http transport can connect:
URL: https://mcp.anomalyarmor.ai/mcp
Transport: streamable-http
Auth: OAuth 2.1 (automatic)
The server publishes standard OAuth 2.1 discovery metadata at https://mcp.anomalyarmor.ai/.well-known/oauth-protected-resource.
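A client can derive that discovery URL from the MCP endpoint itself. A minimal sketch (the helper name is illustrative, not part of any SDK):

```python
from urllib.parse import urlparse

def discovery_url(mcp_url: str) -> str:
    # OAuth 2.1 protected-resource metadata is published under
    # /.well-known/ on the resource's origin.
    parts = urlparse(mcp_url)
    return f"{parts.scheme}://{parts.netloc}/.well-known/oauth-protected-resource"

print(discovery_url("https://mcp.anomalyarmor.ai/mcp"))
```

MCP clients that support OAuth perform this discovery step for you; the sketch just shows where the metadata lives.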

Local Server (API Key)

For environments where you need a local MCP server (air-gapped networks, custom configurations), install and run the server locally with an API key.

Install

pip install armor-mcp
If the package is not yet on PyPI, install from source:
pip install git+https://github.com/anomalyarmor/agents.git#subdirectory=armor-mcp

Configure

Add to your AI tool’s MCP settings:
{
  "mcpServers": {
    "anomalyarmor": {
      "command": "armor-mcp",
      "env": {
        "ANOMALYARMOR_API_KEY": "your-api-key"
      }
    }
  }
}

Getting Your API Key

  1. Go to Settings > API Keys in the AnomalyArmor dashboard
  2. Click Create API Key
  3. Name it (e.g., “Claude Code MCP”)
  4. Copy the key (it won’t be shown again)
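The local server reads the key from the ANOMALYARMOR_API_KEY environment variable set in the config above. If you wrap the server in your own launcher script, a fail-fast check along these lines (a hypothetical helper, not part of armor-mcp) saves debugging opaque auth errors later:

```python
import os

def load_api_key(env=os.environ) -> str:
    # Fail fast with a clear message instead of letting the server
    # start and then reject every request.
    key = env.get("ANOMALYARMOR_API_KEY", "")
    if not key:
        raise RuntimeError("ANOMALYARMOR_API_KEY is not set")
    return key
```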

Available Tools

Once connected, your AI assistant has access to 40+ tools across these categories:
Overview

| Tool | Description |
| --- | --- |
| health_summary | Overall health status across all assets |
| get_todays_briefing | Daily digest with key insights and alerts |
| get_coverage_summary | Monitoring coverage analysis |
Alerts

| Tool | Description |
| --- | --- |
| list_alerts | Query alerts with date, severity, and status filters |
| get_alert_summary | Aggregate alert counts and trends |
| list_inbox_alerts | Actionable alerts (acknowledge, resolve, dismiss) |
| acknowledge_alert | Mark an alert as seen |
| resolve_alert | Close an alert with resolution metadata |
| dismiss_alert | Dismiss a non-actionable alert |
| snooze_alert | Snooze an alert for a set duration |
| preview_alerts | Dry-run alert rules against historical data |
Assets

| Tool | Description |
| --- | --- |
| list_assets | List data sources with filters |
| get_asset | Get details for a specific asset |
| explore | Navigate schemas, tables, and columns within an asset. Supports hierarchical browsing (explore(asset_id) for schemas, explore(asset_id, parent_path="schema") for tables) and full-text search (explore(asset_id, search="keyword")) |
| create_asset | Connect a new data source |
| test_asset_connection | Test connectivity to a source |
| trigger_asset_discovery | Start schema discovery (runs as a background job) |
Freshness

| Tool | Description |
| --- | --- |
| get_freshness_summary | Freshness overview. Supports an optional asset_id to filter to a single source |
| check_freshness | Check a specific table's freshness |
| list_stale_assets | Find currently stale tables |
| list_freshness_schedules | View monitoring schedules |
| create_freshness_schedule | Set up freshness monitoring. Use setup_freshness for bulk creation with table_paths and schema_name params |
| delete_freshness_schedule | Remove a schedule |
| dry_run_freshness | Preview what alerts a threshold would produce |
Schema

| Tool | Description |
| --- | --- |
| get_schema_summary | Schema drift overview. Supports an optional asset_id to filter to a single source |
| list_schema_changes | Recent schema changes with filters |
| create_schema_baseline | Capture the current schema as a baseline |
| enable_schema_monitoring | Start drift detection for one or more assets. Accepts asset_id (single) or asset_ids (bulk), interval strings ('5m', '1h', '6h', '1d'), or cron_expression for custom schedules |
| disable_schema_monitoring | Stop drift detection |
| dry_run_schema | Preview drift detection results |
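Based on the parameters listed for enable_schema_monitoring, an agent's call might carry arguments shaped like the following. The validation helper is a sketch, not part of the server:

```python
ALLOWED_INTERVALS = {"5m", "1h", "6h", "1d"}

def schema_monitoring_args(asset_ids, interval=None, cron_expression=None):
    # Exactly one scheduling option: a preset interval or a cron expression.
    if (interval is None) == (cron_expression is None):
        raise ValueError("pass exactly one of interval or cron_expression")
    if interval is not None and interval not in ALLOWED_INTERVALS:
        raise ValueError(f"interval must be one of {sorted(ALLOWED_INTERVALS)}")
    args = {"asset_ids": list(asset_ids)}
    if interval is not None:
        args["interval"] = interval
    else:
        args["cron_expression"] = cron_expression
    return args

print(schema_monitoring_args(["asset-1", "asset-2"], interval="1h"))
```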
Metrics

| Tool | Description |
| --- | --- |
| list_metrics | Quality metrics for an asset |
| get_metrics_summary | Metrics health overview |
| create_metric | Add a quality metric (row count, null rate, etc.) |
| delete_metric | Remove a metric |
| capture_metric | Trigger immediate metric capture |
| dry_run_metric | Preview metric alerting behavior |
Validity

| Tool | Description |
| --- | --- |
| list_validity_rules | Validity rules for an asset |
| get_validity_summary | Passing/failing rule overview |
| create_validity_rule | Add a rule (NOT_NULL, UNIQUE, etc.) |
| delete_validity_rule | Remove a rule |
| check_validity_rule | Run a check immediately |
Intelligence & Lineage

| Tool | Description |
| --- | --- |
| ask_question | Natural-language Q&A about your data |
| generate_intelligence | Trigger AI analysis for an asset |
| get_lineage | Explore upstream/downstream dependencies |
| job_status | Track progress of async operations |
Tags

| Tool | Description |
| --- | --- |
| list_tags | Tags for an asset |
| create_tag | Tag a table or column |
| apply_tags | Apply multiple tags to multiple objects |
| bulk_apply_tag | Apply a tag across multiple assets |
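Under the hood, every tool above is invoked through the standard MCP tools/call method, a JSON-RPC 2.0 request. A sketch of the envelope a client sends for health_summary:

```python
import json

def tools_call(name: str, arguments: dict, request_id: int = 1) -> str:
    # Standard MCP envelope: JSON-RPC 2.0 with method "tools/call"
    # and the tool name plus arguments in params.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

print(tools_call("health_summary", {}))
```

You never write this by hand when using Claude Code or Cursor; it is what the client emits on your behalf.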

Usage Examples

These examples show full terminal sessions using Claude Code with the AnomalyArmor MCP server. Each one demonstrates a real workflow from start to finish.

Morning Triage: Check Health and Act on Issues

~/warehouse $ claude

╭────────────────────────────────────────────────────────────────────╮
 Claude Code
╰────────────────────────────────────────────────────────────────────╯

You: What's the health of my data warehouse? Show me anything that
     needs attention.

Claude: I'll check your overall health and look for issues.

  ┌ health_summary
  │ Querying AnomalyArmor health status...
  └ Result: 47 assets monitored

  ┌ list_stale_assets
  │ Checking for freshness violations...
  └ Result: 2 stale tables found

  ┌ list_alerts
  │ Fetching unresolved alerts (last 24h)...
  └ Result: 5 alerts

Here's your morning summary:

OVERALL: NEEDS ATTENTION (2 issues)

Freshness:
  ✅ 45 tables fresh
  ❌ gold.fact_orders - 6h overdue (SLA: 2h)
  ❌ staging.orders_raw - 8h overdue (SLA: 1h)

Alerts (last 24h):
  🔴 1 critical - freshness SLA breach on gold.fact_orders
  🟡 2 warnings - schema drift on gold.fact_orders,
                    null rate spike on gold.dim_customer.email
  🔵 2 info - new schema baselines captured

Schema drift: 3 changes detected this week
Data quality: 1 anomaly (null rate)

The critical issue is the orders pipeline. staging.orders_raw
stopped updating 8 hours ago and gold.fact_orders is stale as a
result. Want me to dig into the root cause?

You: Yes, investigate the orders staleness and check if anything
     downstream is affected.

Claude: Let me trace the lineage and check downstream impact.

  ┌ check_freshness
  │ gold.fact_orders → STALE (6h overdue)
  └ Last update: yesterday 10:12 PM

  ┌ get_lineage
  │ Fetching upstream/downstream for gold.fact_orders...
  └ 1 upstream, 7 downstream

  ┌ check_freshness
  │ staging.orders_raw → STALE (8h overdue)
  └ Last update: yesterday 8:30 PM

  ┌ list_schema_changes
  │ staging.orders_raw - changes in last 7 days
  └ 1 change: discount_pct removed Tuesday 3:47 PM

Here's the full picture:

Pipeline trace:
  raw.events (✅ FRESH, updated 5m ago)
    ↓
  staging.orders_raw (❌ STALE, 8h overdue)
    ↓
  gold.fact_orders (❌ STALE, 6h overdue)

The pipeline breaks at staging.orders_raw. The upstream raw.events
table is fresh, so the issue is in the ETL between raw and staging.

Downstream impact (7 tables):
 gold.fact_payments        - depends on fact_orders
 gold.fact_refunds          - depends on fact_orders
 analytics.order_summary    - derived view
 analytics.order_funnel     - derived view
 reporting.daily_revenue    - aggregated report
 reporting.weekly_kpis      - aggregated report
 reporting.customer_ltv     - joins with dim_customer

Additional context: staging.orders_raw also had a schema change
this week (discount_pct column removed Tuesday). This may be
related to the staleness if the ETL is failing on the missing
column.

Recommended next steps:
1. Check the Airflow/dbt DAG for the staging.orders_raw job
2. Look for errors related to the discount_pct column removal
3. Once fixed, the 7 downstream tables should recover automatically

You: Acknowledge the freshness alert and resolve the schema drift
     alert with a note that the column removal was intentional.

Claude: I'll handle both alerts.

  ┌ acknowledge_alert
  │ Alert #4521 (freshness SLA breach) → acknowledged
  └ Done

  ┌ resolve_alert
  │ Alert #4518 (schema drift) → resolved
  │ Resolution: "Column removal was intentional per product change"
  └ Done

Done. The freshness alert is acknowledged (you'll still see it
until the pipeline recovers), and the schema drift alert is
resolved with your note.

Setting Up Monitoring for a New Schema

~/warehouse $ claude

You: I just added a new "analytics" schema with 8 tables. Set up
     full monitoring: freshness, schema drift, and null checks on
     any ID or email columns.

Claude: Let me discover what's in the analytics schema first.

  ┌ search_tables
  │ Searching for tables in analytics schema...
  └ Found 8 tables

  ┌ get_table_columns
  │ Reading columns for all 8 tables...
  └ 67 columns total

Found 8 tables in analytics with 67 columns. Here's what I'll set up:

Freshness monitoring (8 tables):
  analytics.order_summary      - hourly, 2h SLA
  analytics.order_funnel       - hourly, 2h SLA
  analytics.customer_cohorts   - daily, 26h SLA
  analytics.product_performance - daily, 26h SLA
  analytics.revenue_by_region  - daily, 26h SLA
  analytics.churn_analysis     - daily, 26h SLA
  analytics.funnel_conversion  - hourly, 2h SLA
  analytics.session_metrics    - hourly, 2h SLA

Schema drift monitoring: Enabled for all 8 tables

Null percentage metrics (12 columns with ID or email patterns):
  analytics.order_summary.order_id
  analytics.order_summary.customer_id
  analytics.order_funnel.user_id
  analytics.order_funnel.session_id
  analytics.customer_cohorts.customer_id
  analytics.customer_cohorts.email
  analytics.product_performance.product_id
  analytics.churn_analysis.customer_id
  analytics.churn_analysis.email
  analytics.funnel_conversion.user_id
  analytics.session_metrics.session_id
  analytics.session_metrics.user_id

I chose hourly checks for tables that look transactional and daily
for tables that look like aggregations. Want me to proceed?

You: Yes, do it all.

Claude: Setting everything up now.

  ┌ create_freshness_schedule (×8)
  │ Creating schedules for all analytics tables...
  └ 8 schedules created

  ┌ enable_schema_monitoring (×8)
  │ Enabling drift detection...
  └ 8 baselines captured

  ┌ create_metric (×12)
  │ Creating null_percentage metrics...
  └ 12 metrics created

All done:
  ✅ 8 freshness schedules (4 hourly, 4 daily)
  ✅ 8 schema drift monitors with baselines
  ✅ 12 null_percentage metrics on ID/email columns

You'll start receiving alerts if any table misses its SLA, any
schema changes are detected, or any null rates spike outside
normal ranges. The first data points will appear after the next
discovery run.

Investigating a Data Quality Issue

~/warehouse $ claude

You: The finance team says revenue numbers look wrong today.
     Something about missing discount amounts. Can you investigate?

Claude: Let me check the revenue-related tables for anomalies.

  ┌ search_tables
  │ Searching for tables matching "revenue", "order", "discount"...
  └ Found 5 relevant tables

  ┌ get_metrics_summary
  │ Checking data quality metrics on gold.fact_orders...
  └ 3 metrics tracked

  ┌ list_schema_changes
  │ gold.fact_orders - recent changes
  └ 1 change this week

  ┌ check_freshness
  │ gold.fact_orders → FRESH (updated 12m ago)
  └ No freshness issues

Found the issue. Here's what happened:

Root cause: The discount_pct column was removed from
gold.fact_orders on Tuesday at 4:12 PM.

Evidence:
  - Schema drift detected the removal Tuesday 4:12 PM
  - The column had a null_percentage metric that stopped reporting
    (no column to measure)
  - Freshness is fine - the table is still updating
  - The column was also removed from upstream staging.orders_raw
    at 3:47 PM

Impact on revenue:
  - analytics.order_summary uses SUM(discount_pct) to calculate
    discount totals - this query is now failing
  - reporting.daily_revenue references discount_pct in its
    net_revenue calculation

The table has data, but the discount column is gone. That's why
revenue looks wrong - discount amounts aren't being subtracted.

You: Was this intentional? Can you check if there's a related
     alert that someone already handled?

Claude: Let me check the alert history.

  ┌ list_alerts
  │ Filtering for schema drift alerts on gold.fact_orders...
  └ 1 alert found

  ┌ get_alert_details
  │ Alert #4518...
  └ Status: RESOLVED

There was a schema drift alert (#4518) that fired Tuesday at
4:12 PM for the discount_pct column removal. It was resolved
today at 9:15 AM with the note:

  "Column removal was intentional per product change"

So the column removal was intentional, but the downstream views
haven't been updated yet. The finance team's revenue calculations
are referencing a column that no longer exists.

Recommended:
1. Update analytics.order_summary to remove the discount_pct
   reference
2. Update reporting.daily_revenue net_revenue calculation
3. Consider adding a new discount column if discounts are still
   tracked differently

You: Tag the affected downstream tables so the data team knows.

Claude: I'll tag them.

  ┌ apply_tags
  │ Applying "needs-migration" tag...
  └ Applied to 2 tables

Tagged 2 tables with "needs-migration":
  - analytics.order_summary
  - reporting.daily_revenue

The data team can find these by asking "what tables are tagged
needs-migration?" in any AnomalyArmor agent.

Troubleshooting

Authentication Failed (Remote)

Error: Authentication failed
Solution:
  1. Clear your MCP auth cache and re-authenticate
  2. In Claude Code: run /mcp and select the reconnect option
  3. Verify you can sign in to app.anomalyarmor.ai

Authentication Failed (Local)

Error: Invalid API key
Solution: Verify your API key is correct in your MCP config. Generate a new key from Settings > API Keys if needed.

Server Not Found

Error: MCP server 'anomalyarmor' not found
Solution:
  1. Restart your AI tool after config changes
  2. For local mode, verify armor-mcp is installed: armor-mcp --version
  3. Check MCP config syntax (valid JSON)
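A quick way to catch the JSON-syntax case is to validate the file before restarting your tool. A minimal checker (a hypothetical helper; the mcpServers shape is taken from the examples above):

```python
import json
import pathlib

def check_mcp_config(path: str) -> list[str]:
    # Return a list of problems; an empty list means the config
    # parses and has the expected top-level shape.
    try:
        cfg = json.loads(pathlib.Path(path).read_text())
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    servers = cfg.get("mcpServers")
    if not isinstance(servers, dict):
        return ["missing top-level 'mcpServers' object"]
    problems = []
    for name, entry in servers.items():
        if not isinstance(entry, dict):
            problems.append(f"server '{name}' is not an object")
        elif "url" not in entry and "command" not in entry:
            problems.append(f"server '{name}' needs a 'url' or 'command'")
    return problems
```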

Empty Responses

If tools return empty results after successful authentication:
Solution:
  1. Verify you have data sources connected in AnomalyArmor
  2. Try health_summary as a basic connectivity test
  3. For local mode, ensure your API key has the correct scope

Rate Limiting

Error: Rate limit exceeded
Solution: Wait a few seconds and retry. The server handles rate limits automatically.
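If you are scripting against the API directly (outside an MCP client), exponential backoff is the usual pattern. A sketch; RateLimitError stands in for whatever exception your HTTP layer raises on a 429 response:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 (rate limit exceeded) response."""

def with_retry(fn, max_attempts=5, base_delay=1.0):
    # Retry fn with exponential backoff plus jitter; re-raise on the
    # final attempt so callers still see persistent failures.
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```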

Next Steps

  Claude Code Setup - detailed Claude Code configuration
  Cursor Setup      - Cursor IDE integration
  Web Agent         - use the same tools in your browser
  AI Skills         - browse available skills