Open Tools AI Status Monitor
Free · Open source · Single .exe

AI Status Monitor v1.0.0

A polished desktop console that watches the official status pages of every major AI provider, computes meaningful 7 / 30 / 90-day SLA numbers, and fires Slack / Teams / Discord webhooks when anything moves. WPF / .NET 10 single-file .exe — no installer, no telemetry, no AI API keys.

Version 1.0.0
Started 2026-05-10
Released 2026-05-12
Platform Windows 10 / 11 · x64
Size 72 MB (bundled .NET runtime)
Built with C# · .NET 10 · WPF
Providers 14 out of the box
  • GitHub github.com/3389ro/ai-status-monitor · Releases · Issues
  • SHA-256 c46de7f3b760d0195001a81c79bcd9bd3055f26dd6c448a84a4dffcd52ee8d02 .sha256
  • Source SHA-256 ae913048dcaf7bb768a19c1da6213b9321fcef8ca53c24c1a0cbb30fb2fa6b97 .sha256
  • Licence MIT · LICENSE.txt + NOTICE.txt bundled inside the source ZIP and the repo

The binary is 72 MB because it bundles the entire .NET 10 runtime — nothing else to install on the target machine. Verify the SHA-256 above with Get-FileHash AIStatusMonitor.exe -Algorithm SHA256. If your antivirus blocks single-file self-extraction (rare; usually AppLocker or aggressive heuristics), build the portable folder variant from source using publish-portable.bat — same code, different packaging.
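
If you would rather verify the hash from a script, or on a non-Windows machine before transferring the file, a minimal Python equivalent of Get-FileHash looks like this (the function name is ours, not part of the app):

```python
import hashlib

def file_sha256(path: str) -> str:
    """Stream a file through SHA-256 and return the lowercase hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

# Compare against the published digest (case-insensitive):
# expected = "c46de7f3b760d0195001a81c79bcd9bd3055f26dd6c448a84a4dffcd52ee8d02"
# assert file_sha256("AIStatusMonitor.exe").lower() == expected
```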

At a glance

Name
AI Status Monitor
Purpose
Desktop console that polls the official status pages of 14 AI providers, computes 7 / 30 / 90-day SLA, fires Slack / Teams / Discord webhooks on incidents and exposes a local REST API.
Target user
Engineers and SRE / on-call teams who depend on hosted AI APIs (OpenAI, Anthropic, Gemini, Azure OpenAI, Mistral, Groq, xAI, …) and want a single pane of glass for provider health, with their own alert routing.
Providers monitored
14 out of the box: OpenAI, Anthropic, Google Gemini, Azure OpenAI, AWS Bedrock, Mistral, Groq, xAI, Cohere, Perplexity, Together AI, Hugging Face, Replicate, OpenRouter.
Platform
Windows 10 / Windows 11, x64. Self-contained single-file executable; the .NET 10 runtime is bundled inside, no separate install.
Licence
MIT. LICENSE.txt and NOTICE.txt bundled in source ZIP and repo.
Telemetry
None. The app talks only to: (1) the providers' own status endpoints you enable, (2) the webhook URLs you configure, (3) localhost for its REST API. No analytics, no phone-home.
Admin rights
Not required. Runs as a normal user. The local REST API listens on 127.0.0.1 and does not register an HTTP URL ACL.
Network behavior
Outbound HTTPS to provider status endpoints on the schedule you configure (default 60 s). Optional outbound HTTPS to your Slack / Teams / Discord webhook URLs. No inbound connections beyond 127.0.0.1.
Download
AIStatusMonitor.exe — 72 MB self-contained executable with bundled .NET runtime.
Source code
github.com/3389ro/ai-status-monitor · source ZIP.
Verification
SHA-256 of binary: c46de7f3b760d0195001a81c79bcd9bd3055f26dd6c448a84a4dffcd52ee8d02. Verify with Get-FileHash AIStatusMonitor.exe -Algorithm SHA256.
What it does not do
Does not proxy or moderate AI API calls. Does not store API keys. Does not test provider quality, latency or model behavior — only reads the providers' public status pages.

Screenshots

Dashboard in both themes, the sticky widget, the Settings / API Configuration / About dialogs and the embedded Swagger UI.

Dashboard — dark NOC theme, 14 providers monitored. Hero “Overall AI Health” card (12 of 14 operational, 2 unknown), six tinted KPI tiles (Operational / Degraded / Partial / Major / Maintenance / Unknown counts), provider rows with 7 / 30 / 90-day SLA windows and a 90-day sparkline.
Dashboard — light palette, same view; easier on bright workstation monitors than the NOC dark.
Sticky widget — frameless, always-on-top, pinned to the top-right of the screen. 14 providers with coloured status dots, ~260 × 370 px.
Settings — polling cadence and HTTP backend port, “always on top”, “sticky mode at startup”, “start with Windows”, toast and sound notifications, outbound webhook configuration.
API Configuration — API on / off, bearer token (copy · regenerate), per-IP rate limit (60 req/min sliding-window default), link to Swagger UI.
About — version, 3389 logo and branded tagline, services teaser, “Discuss a project” and “Visit 3389.ro” CTAs.
Embedded Swagger UI at http://127.0.0.1:12100/docs — full OpenAPI 3.0 spec, Authorize button for the bearer token, grouped sections (Status / Providers / Config / History / SLA / Incidents / Components / Diagnostics). Try every endpoint live from the browser.

What it does

The full feature set in one page — everything below ships inside the single 72 MB executable.

Providers monitored out of the box

  • 14 providers, four parser implementations — one set of normalised status data, regardless of the upstream feed format:
    • Statuspage providers (Atlassian summary.json): OpenAI, Anthropic Claude, Groq, Perplexity, Cohere, DeepSeek*, Hugging Face, Replicate.
    • Google Cloud Status (incidents.json): Google Gemini / AI Studio, Google Vertex AI.
    • Azure status RSS: Azure OpenAI, Azure AI Foundry.
    • xAI custom RSS feed: xAI.
    • Mistral is listed but currently shows “Insufficient history” — they migrated off Atlassian Statuspage to a custom static page without a machine-readable feed.
  • * DeepSeek — the custom-domain status.deepseek.com sits behind Alibaba’s CDN in Beijing and resets the TCP connection at TLS handshake for Windows SChannel clients (a TLS-fingerprint anti-bot). We fetch the underlying deepseek.statuspage.io endpoint instead — identical data, reachable from anywhere.

Status, SLA and history

  • Six-class normalised Status. Operational, Degraded, Partial Outage, Major Outage, Maintenance, Unknown. Consistent regardless of upstream vocabulary.
  • Historical incident import on first launch. Statuspage /api/v2/incidents.json, Google Cloud incidents.json, Azure RSS, xAI feed — around 50 incidents per provider are back-filled so SLA percentages start from real data instead of needing weeks of local observation to accumulate.
  • SLA via interval merging, not event-walking. Overlapping component-level incidents are merged before computing downtime, so the same outage covered by two component reports does not double-count.
  • Coverage check before publishing a number. If the earliest recorded data point doesn’t cover at least 80% of the requested window, the field returns null (“Insufficient history” in the UI / null in the JSON API) rather than a misleading 100%.
  • SLA penalty applies to outages only. Only Partial Outage and Major Outage deduct from the headline percentage; Degraded and Maintenance are tracked but do not subtract. Matches the industry SLA reporting convention.
  • Per-window trend arrows. Each of the 7 / 30 / 90-day windows is compared against the same-length window immediately before it; an arrow (↑ improving, ↓ degrading, → stable within ±0.05 pp) and the delta in percentage points render next to the number.
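
The merging, coverage, and outage-only rules above can be sketched in a few lines. This is an illustrative reconstruction, not the app's internal code; the function names and class labels are ours:

```python
from datetime import datetime, timedelta

OUTAGE_CLASSES = {"partial_outage", "major_outage"}  # only these deduct from SLA

def merge_intervals(intervals):
    """Merge overlapping (start, end) intervals so the same outage reported
    under several component-level incidents is counted once."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def sla_percent(incidents, window_start, window_end, earliest_data, min_coverage=0.8):
    """Uptime % for one window, or None ("Insufficient history") when the
    recorded history covers less than min_coverage of the window."""
    window = (window_end - window_start).total_seconds()
    covered = (window_end - max(earliest_data, window_start)).total_seconds()
    if covered / window < min_coverage:
        return None
    # Clip each outage-class incident to the window, then merge and sum.
    outages = [(max(s, window_start), min(e, window_end))
               for s, e, cls in incidents
               if cls in OUTAGE_CLASSES and e > window_start and s < window_end]
    down = sum((e - s).total_seconds() for s, e in merge_intervals(outages))
    return 100.0 * (1.0 - down / window)
```

For example, two overlapping outage reports spanning a combined three hours inside a 30-day window yield 100 × (1 − 3/720) ≈ 99.58%, not the ~99.17% that naive event-walking over both reports would produce.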

UI

  • Polished dashboard. Hero “Overall AI Health” card, six tinted KPI tiles (Operational / Degraded / Partial / Major / Maintenance / Unknown counts), a premium provider row layout.
  • Per-provider row. 7-day uptime is the primary stat (larger card with accent border + DEFAULT badge); 30-day and 90-day sit alongside as secondaries. A 90-day daily-bucket sparkline underneath matches the visual convention every Statuspage installation uses.
  • Sub-component drilldown. Statuspage providers report sub-components (“claude.ai”, “Claude Code”, “console.anthropic.com” …) surfaced inside a collapsible Show details section on each row.
  • Sticky mode. Frameless, always-on-top widget pinned to the top-right of the primary display, showing the provider list with status dots. Drag handle at the top, close button.
  • System tray icon whose colour reflects the worst observed status across enabled providers.
  • Two themes. A polished light palette and a NOC-style dark palette, driven entirely by WPF design tokens.

Localisation

  • 8 UI languages with comprehensive coverage (~80 keys/language): English, Română, Español, Português, Français, Deutsch, Italiano, 中文.
  • Language switches at runtime — no restart needed.
  • The HTTP API and Swagger UI documentation stay in English (technical surface).

HTTP API + Swagger

  • Local HTTP server bound to 127.0.0.1:12100 with Swagger UI at /docs:
    • GET /api/health — liveness probe (public, no auth).
    • GET /api/status — current snapshot of every enabled provider.
    • GET /api/providers · POST /api/providers/enabled — full catalogue and bulk-enable.
    • GET/POST /api/config — full configuration document.
    • POST /api/refresh-now — trigger an immediate poll.
    • GET /api/history, /api/incidents, /api/sla, /api/components/{providerId}, /api/diagnostics.
  • Bearer-token authentication. Token auto-generated on first run, visible in the API Configuration window with copy / regenerate.
  • Rate limiting (default 60 req/min/IP, sliding window, returns HTTP 429 with Retry-After).
  • IP whitelist / blacklist on top of the listener bind.
  • API can be started / stopped at runtime without restarting the application.

Notifications & integration

  • Windows toast notifications on every status transition (severity-coloured icon).
  • Outbound webhook sender. JSON POST to a user-configured URL on every transition. Payload format is Slack / Discord / Teams / Zapier compatible (text field plus structured fields).
  • “Start with Windows” toggle — writes the registry value under HKCU\Software\Microsoft\Windows\CurrentVersion\Run.
  • System tray menu (Dashboard / Sticky widget / Refresh now / Settings / API Configuration / Quit).
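
A minimal sketch of a transition payload in that spirit (the text field is what Slack / Discord / Teams render; the remaining field names are illustrative, not the app's exact schema):

```python
import json

def build_webhook_payload(provider, old_status, new_status, when_utc):
    """Slack/Discord/Teams-compatible transition payload: a human-readable
    `text` plus structured fields for machine consumers.
    Field names other than `text` are illustrative, not the app's schema."""
    return {
        "text": f"{provider}: {old_status} -> {new_status} at {when_utc}Z",
        "provider": provider,
        "previous_status": old_status,
        "current_status": new_status,
        "timestamp": f"{when_utc}Z",
    }

payload = json.dumps(
    build_webhook_payload("OpenAI", "Operational", "Major Outage", "2026-05-12T08:00:00")
)
```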

Privacy

  • No AI API keys are accepted, stored or used. The tool reads only the public, anonymous status endpoints that providers themselves publish.
  • No prompts, responses, tokens or AI traffic are ever read or stored.
  • No telemetry. The application never phones home.
  • All configuration and history files live under %APPDATA%\AIStatusMonitor\ on the local machine. Plain JSON, easy to back up or move between machines.

Quick start

If you just want the binary, there is nothing to build:

1. Download AIStatusMonitor.exe (72 MB).
2. Double-click. No installer, no .NET runtime required (it’s bundled).
3. On first launch the application back-fills incident history for every provider that supports it.
4. Configuration lives under %APPDATA%\AIStatusMonitor\config.json — plain JSON, easy to edit or move.
5. Swagger UI: http://127.0.0.1:12100/docs

Release history

v1.0.0 is the first public release. Future entries will appear here as they ship.

v1.0.0 2026-05-12

First public release.

Desktop console for AI provider status monitoring. 14 providers across 4 parser families. Six-class normalised status, 7/30/90-day SLA with interval merging, per-window trend arrows, 90-day daily sparkline, sub-component drilldown. Sticky widget, system tray, light / NOC-dark themes, 8 UI languages. Local HTTP API on 127.0.0.1:12100 with Swagger UI, bearer-token auth, rate limit, IP whitelist. Outbound webhook for Slack / Teams / Discord. Windows toast notifications. No AI API keys, no telemetry, no prompts read or stored.

FAQ

Why doesn’t this just send a test request to each provider?

Because that approach is fundamentally noisy. Synthetic AI requests cost money, require an API key per provider, and conflate “the provider is healthy” with “my key has quota left, my network path is fine, my regional endpoint is up.” AI Status Monitor reads only what the provider itself publishes on its status page — the same source the provider’s own SRE team uses to communicate incidents. No keys, no costs, no false positives from your end of the wire.

How is the SLA computed?

Per provider, per window (7 / 30 / 90 days). We collect every incident interval, merge overlapping intervals (so the same outage reported under two component-level incidents doesn’t double-count), sum the merged outage duration, and divide by the window length. Only Partial Outage and Major Outage deduct from the percentage — Degraded and Maintenance are tracked but don’t subtract. If the earliest recorded data point doesn’t cover at least 80% of the window, we publish null / “Insufficient history” instead of a misleading 100%.

Can I add a provider that isn’t in the list?

If the new provider runs on Atlassian Statuspage, yes — add it to providers.json with the Statuspage page-id and the existing parser will pick it up. For non-Statuspage feeds (custom RSS, scraped HTML, …) you’d add a new parser; the sources are inside the ZIP under AiMonitor/Services/Parsers/ and the existing four (Statuspage, GoogleCloud, Azure, XaiRss) are a good template.
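
As an illustration only (the real providers.json schema is defined in the source; every field name below is an assumption, not the app's documented format), a Statuspage entry might look roughly like:

```json
{
  "id": "example-ai",
  "name": "Example AI",
  "parser": "Statuspage",
  "statuspageUrl": "https://exampleai.statuspage.io"
}
```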

Why is the binary 72 MB?

It bundles the entire .NET 10 runtime and the WPF native libraries inside a single self-contained .exe. The trade-off is binary size for “runs on any Windows 10/11 x64 machine, no runtime install, no version conflicts.” If size matters more than that to you, the source ZIP includes publish-portable.bat, which produces a zipped folder of separate DLLs (~266 files, ~72 MB total) — same code, different packaging.

Does the webhook support more than Slack / Teams / Discord?

Yes — the payload is plain JSON with a text field plus structured fields. Anything that accepts an HTTP POST and reads a JSON body works (Zapier, Make, n8n, a custom endpoint, a PagerDuty Events API webhook with a small relay, …).

Is the API safe to expose beyond 127.0.0.1?

The listener is bound to 127.0.0.1:12100, so by default the answer is “it’s only reachable from the local machine”. You can change the bind in config.json if you need it on the LAN; if you do, keep the bearer token on and consider an IP whitelist. Don’t expose the port to the public internet without a reverse proxy doing TLS termination and additional auth.

Need a tailored version — private providers, deeper integration, custom dashboards?

AI Status Monitor is a spin-off from real client work. If you need it adapted to a private status page, integrated into an existing monitoring stack, embedded inside a larger product, or extended with custom dashboards and reporting, tell us about it.