
AI Insights

Bring-your-own-LLM dashboard intelligence. Ask questions about your live data, reference specific components with @ mentions, and get streaming answers grounded in what your dashboard actually shows.

import { DashboardInsight } from "metricui";

See the live AI chat in action on any demo dashboard: Web Analytics and SaaS Analytics.

Overview

AI Insights is not a single component you import — it is a system that spans four layers of your dashboard:

  • Dashboard ai prop — configures the LLM connection, company context, and dashboard context.
  • AiContext provider — automatically wraps your dashboard, manages registered metrics, chat state, and the system prompt.
  • CardShell auto-registration — every KpiCard, chart, and DataTable automatically registers its live data with AiContext. No manual wiring.
  • DashboardInsight — the floating chat UI. Renders the button, sidebar panel, quick prompts, @ mention picker, and streaming responses.

The philosophy is BYO LLM. MetricUI never calls an API on your behalf. You provide an analyze function that takes messages and returns text (or an async iterable for streaming). Use OpenAI, Anthropic, a local model, or anything else. MetricUI handles the context assembly, UI, and streaming — your function handles the LLM call.
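The contract can be pictured as a pair of types. This is a sketch inferred from the examples in this guide — the role/content message shape matches what the example API routes expect, but the exact internal types are an assumption, not the library's source:

```typescript
// Sketch of the analyze contract, inferred from the examples in this guide.
// The exact internal types are an assumption.
type AiMessage = { role: "system" | "user" | "assistant"; content: string };

type Analyze = (
  messages: AiMessage[],
  options: { signal?: AbortSignal }
) => Promise<string> | AsyncIterable<string>;

// A trivial (non-LLM) implementation, useful for wiring tests:
const echoAnalyze: Analyze = async (messages) =>
  `Received ${messages.length} message(s).`;
```

Anything satisfying this shape works — a fetch to your own API route, a direct SDK call, or a stub like the one above.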

Setup

Pass an ai prop to Dashboard. At minimum, you need an analyze function. Add DashboardInsight anywhere inside the Dashboard to render the chat UI.

Minimal setup (3 lines)

import { Dashboard, DashboardInsight } from "metricui";

<Dashboard ai={{ analyze: (msgs) => fetch("/api/ai", { method: "POST", body: JSON.stringify(msgs) }).then(r => r.text()) }}>
  <DashboardInsight />
  {/* your cards and charts */}
</Dashboard>

Full setup with company + context + aiContext

import { Dashboard, DashboardInsight, KpiCard, BarChart } from "metricui";

async function analyze(messages, { signal }) {
  const res = await fetch("/api/ai", {
    method: "POST",
    body: JSON.stringify(messages),
    signal,
  });
  return res.text();
}

<Dashboard
  theme="emerald"
  ai={{
    analyze,
    company: "Acme Corp — B2B SaaS, Series B, selling to mid-market HR teams.",
    context: "This is the weekly growth dashboard. Targets: MRR > $500K, churn < 3%.",
    tone: "executive",
  }}
>
  <KpiCard
    title="MRR"
    value={487000}
    format="currency"
    aiContext="Monthly recurring revenue. Includes expansion but not one-time fees."
  />
  <BarChart
    data={channelData}
    index="channel"
    categories={["signups"]}
    title="Signups by Channel"
    aiContext="Organic includes SEO + content. Paid includes Google Ads + LinkedIn."
  />
  <DashboardInsight />
</Dashboard>

Here is an example API route (Next.js route handler) that calls Claude:

// app/api/ai/route.ts
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

export async function POST(req: Request) {
  const messages = await req.json();

  // Separate system message from conversation
  const system = messages.find((m: any) => m.role === "system")?.content ?? "";
  const conversation = messages.filter((m: any) => m.role !== "system");

  const response = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    system,
    messages: conversation,
  });

  const text = response.content
    .filter((b: any) => b.type === "text")
    .map((b: any) => b.text)
    .join("");

  return new Response(text);
}

Floating Chat

When AI is configured, DashboardInsight renders a floating button in the bottom-right corner (configurable via the position prop). Clicking it opens a slide-over sidebar panel with:

  • A header with the AI Insights title and a hint to use @ mentions.
  • Quick prompt buttons (shown when the chat is empty).
  • A scrollable message area with user bubbles and assistant responses.
  • An input bar with mention chip display, send button, clear button, and abort button during streaming.

The floating button shows a badge count of assistant messages when there is an active conversation. The sidebar renders via React portal so it is never clipped by parent overflow.

@ Mentions

Type @ in the chat input to open a dropdown of all registered dashboard components. The list filters as you type. Navigate with Arrow Up / Arrow Down, select with Enter, dismiss with Escape.

Selected mentions appear as chips above the input. You can select multiple components to scope a question. When you send a message with mentions, the mention titles are passed as triggerContext so the AI focuses its analysis on those specific components first, then connects to the broader dashboard.

// The user types: @MRR @Signups by Channel why is MRR flat when signups are up?
//
// MetricUI sends triggerContext: "MRR, Signups by Channel" and prepends
// a system message: "The user is asking specifically about: MRR, Signups by Channel.
// Start your analysis there but connect to other metrics as relevant."
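The dropdown's filtering behavior can be sketched as a case-insensitive substring match against registered component titles. The helper name is hypothetical — the real matching logic is internal to MetricUI:

```typescript
// Hypothetical sketch of @ mention filtering: case-insensitive
// substring match against registered component titles.
function filterMentions(titles: string[], query: string): string[] {
  const q = query.toLowerCase();
  return titles.filter((t) => t.toLowerCase().includes(q));
}
```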

Three-Level Context

The system prompt is assembled from three layers of context, each more specific than the last:

  • Company — ai.company. Example: "Acme Corp — B2B SaaS, Series B, mid-market HR." Who you are: industry, stage, ICP. Included in every prompt so the AI understands your business.
  • Dashboard — ai.context. Example: "Weekly growth dashboard. Target: MRR > $500K." What this dashboard measures: targets, recent changes, what matters right now.
  • Component — aiContext. Example: "MRR includes expansion, not one-time fees." Per-card business context: definitions, caveats, what makes this metric special.

Company and dashboard context go on the Dashboard ai prop. Component context goes on individual cards and charts via the aiContext prop. All three are stitched into the system prompt automatically.
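One way to picture the stitching — illustrative only, since the real prompt wording and assembly are internal to MetricUI, and the function name and output format here are assumptions:

```typescript
// Illustrative sketch: the three context layers joined into one
// system prompt. Name and output format are assumptions.
function buildSystemPrompt(
  ai: { company?: string; context?: string },
  components: { title: string; aiContext?: string }[]
): string {
  const lines = [
    ai.company ? `Company: ${ai.company}` : null,
    ai.context ? `Dashboard: ${ai.context}` : null,
    // One line per component that declared business context:
    ...components
      .filter((c) => c.aiContext)
      .map((c) => `${c.title}: ${c.aiContext}`),
  ];
  return lines.filter((l): l is string => l !== null).join("\n");
}
```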

Auto Data Collection

CardShell automatically registers every component's data with AiContext when it mounts, and unregisters when it unmounts. You do not need to manually pass data to the AI system — it reads what your components already display.

The data is live — when filters change and your components re-render with new data, the AI context updates automatically. The AI always sees what the user sees.

What gets sent to the AI (per component type)

  • KpiCard — title, value, comparison values, description.
  • Charts (BarChart, AreaChart, etc.) — title, full data array (up to 20 rows). For datasets over 20 rows: first 10 rows + column statistics (min, max, avg).
  • DataTable — title, full data array (up to 20 rows). For large tables: first 10 rows + column stats + total row count.

For large datasets, the system sends a sample (first 10 rows) plus per-column statistics (min, max, avg for numeric columns) so the AI can reason about distributions without the full payload.
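The sampling rule described above can be sketched as follows. The function name and exact output shape are assumptions — only the described behavior (20-row cutoff, 10-row sample, numeric min/max/avg) comes from the docs:

```typescript
// Hypothetical sketch of the described sampling: small datasets pass
// through whole; large ones are reduced to a 10-row sample plus
// per-column min/max/avg for numeric columns.
type Row = Record<string, unknown>;

function summarizeForAi(rows: Row[], limit = 20) {
  if (rows.length <= limit) return { rows };
  const sample = rows.slice(0, 10);
  const stats: Record<string, { min: number; max: number; avg: number }> = {};
  for (const key of Object.keys(rows[0] ?? {})) {
    const values = rows
      .map((r) => r[key])
      .filter((v): v is number => typeof v === "number");
    if (values.length === 0) continue; // non-numeric column: skip stats
    stats[key] = {
      min: Math.min(...values),
      max: Math.max(...values),
      avg: values.reduce((a, b) => a + b, 0) / values.length,
    };
  }
  return { sample, stats, totalRows: rows.length };
}
```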

Per-Card AI Icon

When AI is enabled, every card shows a small sparkle icon on hover (next to the export button, if present). Clicking it opens the AI chat sidebar with that component pre-selected as an @ mention.

This is powered by the openWith function on AiContext. CardShell calls ai.openWith(title) which triggers DashboardInsight to open with that title added to the selected mentions. The user can then type their question in the context of that specific component.

Quick Prompts

When the chat is empty, quick prompt buttons appear below the header. Clicking one immediately sends the prompt to the AI. Three defaults are provided: "What's notable?", "What's at risk?", and "Summarize".

Custom quick prompts

<DashboardInsight
  quickPrompts={[
    { label: "Growth check", prompt: "How is our growth trending? Focus on MRR and signup velocity." },
    { label: "Churn risk", prompt: "Which segments show early churn signals?" },
    { label: "Board summary", prompt: "Write a 3-sentence board update from this dashboard." },
  ]}
/>

Pass quickPrompts={false} to hide them entirely.

Streaming

If your analyze function returns an AsyncIterable<string> instead of a Promise<string>, tokens render incrementally as they arrive. A pulsing cursor indicates the stream is active. The user can click the abort button (X) to cancel mid-stream.

// Streaming analyze function
async function* analyze(messages, { signal }) {
  const res = await fetch("/api/ai", {
    method: "POST",
    body: JSON.stringify(messages),
    signal,
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // { stream: true } handles multi-byte characters split across chunks
    yield decoder.decode(value, { stream: true });
  }
}

<Dashboard ai={{ analyze }}>
  <DashboardInsight />
</Dashboard>

The abort signal is passed through to your function. When the user clicks abort, the signal fires and the stream stops. Partial text is discarded and no assistant message is saved.
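A minimal consumer of such a stream makes the abort behavior concrete. This helper is hypothetical, not a MetricUI export:

```typescript
// Hypothetical helper: accumulate an AsyncIterable<string> into one
// string, returning null if the AbortSignal fires mid-stream — partial
// text discarded, matching the behavior described above.
async function drainStream(
  stream: AsyncIterable<string>,
  signal?: AbortSignal
): Promise<string | null> {
  let text = "";
  for await (const token of stream) {
    if (signal?.aborted) return null;
    text += token;
  }
  return text;
}
```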

Chat Persistence

By default, chat messages are stored in React state — they persist across sidebar open/close but are lost on page refresh. For database persistence, use controlled mode with messages and onMessage.

"use client";
import { useState, useEffect } from "react";
import { Dashboard, DashboardInsight } from "metricui";
import type { AiMessage } from "metricui";

function MyDashboard() {
  const [messages, setMessages] = useState<AiMessage[]>([]);

  // Load from database on mount
  useEffect(() => {
    fetch("/api/chat-history").then(r => r.json()).then(setMessages);
  }, []);

  // Save each message to database
  const onMessage = async (msg: AiMessage) => {
    setMessages(prev => [...prev, msg]);
    await fetch("/api/chat-history", {
      method: "POST",
      body: JSON.stringify(msg),
    });
  };

  return (
    <Dashboard
      ai={{
        analyze,
        messages,
        onMessage,
      }}
    >
      <DashboardInsight />
    </Dashboard>
  );
}

In controlled mode, MetricUI does not manage message state internally. Your onMessage callback fires for both user and assistant messages. You own the state and persistence logic.

Props

  • quickPrompts — QuickPrompt[] | false. Quick prompt buttons shown when chat is empty. Pass false to hide.
  • placeholder — string. Placeholder text for the chat input.
  • position — "bottom-right" | "bottom-left". Position of the floating button.
  • className — string. Additional CSS classes on the floating button.

Notes

  • BYO LLM — MetricUI never calls an API. Your analyze function is the only thing that talks to a model.
  • All data stays client-side until your analyze function sends it.
  • Works with any model: OpenAI, Anthropic, Google, Mistral, local models via Ollama.
  • The aiContext prop (inherited from BaseComponentProps) is available on every component. Use it to add business definitions and caveats.
  • The built-in system prompt instructs the AI to cite sources using [[Component Title]] syntax.
  • Active filters from FilterContext are automatically included in the system prompt.
  • The analyze function receives an AbortSignal via options.signal. Forward it to your fetch call.
  • If analyze returns an AsyncIterable<string>, tokens render incrementally (streaming mode).
  • Quick prompts appear when chat is empty. Pass false to hide them entirely.