GMAN:·paisamaker:TRADING-D19·gex-advisor:POLLING·fetcher:429rps·npm:v0.1.8 (9rel)
$ cd ..

gman — ai agent observability

multi-tenant platform · npm SDK · live in developer preview

GMAN watches AI agents in production and captures what they send, how much they cost, and how long they take. One local SDK proxy, one multi-tenant gateway, one per-tenant cockpit. The hardest part of the product is operational: how do you make LLM traces legible to the engineer at 2am without leaking one tenant's prompts into another's dashboard?

── shipped vs designed ─ transparency matters more than taste
~/gman/status-matrix · 6 shipped · 4 frozen · 1 designed
feature                         status                  notes
@gman-ai/dev SDK                [SHIPPED]               v0.1.8 on npm · local proxy · 9 releases
Multi-tenant gateway            [SHIPPED]               tenant_id derived server-side from API key hash
Session viewer / cockpit        [SHIPPED]               Clerk auth · per-tenant isolation
API key management              [SHIPPED]               create, rotate, revoke · hashed storage
Stripe Connect billing          [SHIPPED]               webhook-driven · multi-account payout ready
Public demo endpoint (/demo)    [SHIPPED]               sanitized reads · paranoid redaction at output layer
Policy engine (Soft-Gate)       [DESIGNED · FROZEN]     designed Q4 2025 · frozen until R2+
TaskMeter evidence layer        [DESIGNED · FROZEN]     immutable evidence model · frozen
ProofPay payout rails           [DESIGNED · FROZEN]     spec-stage · contingent on policy engine ship
Sidecar proxy + SPIFFE auth     [DESIGNED · FROZEN]     k8s-native design · frozen, not prioritized
RAG 'ask the SDK'               [DESIGNED · NOT BUILT]  sketched · deliberately not started

The frozen designs are real — specs, schemas, migration paths. They're not shipped because R1.5 is tightly scoped to the observability core. Shipping more would dilute trust in what's already live.

── live demo ─ real traces, real telemetry
getmyagentnow.com/demo · open in new tab →

The demo tenant is a real tenant, not a mock. Every session you see comes from an autonomous social agent (it reviews commits, scores relevance, generates a tweet draft, and sends the result to Discord for approval), with every LLM call proxied through the GMAN SDK.
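The "paranoid redaction at output layer" behind /demo can be sketched in a few lines. This is an illustrative assumption, not the actual /demo schema: the field names (`tenant_id`, `api_key_hash`, `user_email`) and the function name `sanitizeForDemo` are invented for the example.

```typescript
// Hypothetical sketch of the /demo output-layer redaction: the read path
// serves one dedicated tenant and strips identifying fields before any
// record leaves the gateway. Field names are illustrative, not the real schema.
type SessionEvent = Record<string, unknown>;

const REDACTED_FIELDS = ["tenant_id", "api_key_hash", "user_email"];

function sanitizeForDemo(event: SessionEvent): SessionEvent {
  const out: SessionEvent = {};
  for (const [key, value] of Object.entries(event)) {
    // allow-nothing-identifying: copy only fields outside the redaction list
    if (!REDACTED_FIELDS.includes(key)) out[key] = value;
  }
  return out;
}

const sample = { tenant_id: "demo_tenant", model: "gpt-4o", cost_usd: 0.002 };
console.log(sanitizeForDemo(sample)); // { model: "gpt-4o", cost_usd: 0.002 }
```

Redacting at the output layer (rather than at write time) keeps the stored trace intact for the tenant's own cockpit while guaranteeing the public view never sees identifying fields.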

── architecture ─ sdk → gateway → cockpit
                          agent process (user's infra)                           gman cloud
            ┌──────────────────────────────────────────┐        ┌─────────────────────────────────────────┐
            │                                          │        │                                         │
            │  agent code                              │        │  gateway (Railway, Next.js API route)   │
            │     │                                    │        │     │                                   │
            │     ▼                                    │        │     ▼                                   │
            │  @gman-ai/dev SDK                        │        │  authenticate(apiKey) → tenant_id       │
            │     │   ┌─ local HTTP proxy              │  HTTPS │     │   (derived server-side, never     │
            │     │   │  intercepts LLM egress   ──────┼────────┼───▶ │    trusted from client)           │
            │     │   │  adds X-GMAN-Step headers      │ Bearer │     ▼                                   │
            │     │   └─ forwards to OpenAI et al      │  token │  insert into dev.dev_events             │
            │     │                                    │        │     │                                   │
            │     └─ sends DevEnvelope to gateway      │        │     ▼                                   │
            │                                          │        │  Supabase (Postgres · JSONB payload)    │
            └──────────────────────────────────────────┘        │     │                                   │
                                                                │     ▼                                   │
                                                                │  cockpit (Next.js · Clerk auth)         │
                                                                │  per-tenant session viewer              │
                                                                │                                         │
                                                                │  /demo — public sanitized view          │
                                                                │          (tenant_id = demo_tenant)      │
                                                                └─────────────────────────────────────────┘
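The DevEnvelope in the diagram is not spelled out on this page; a plausible TypeScript shape, under stated assumptions (every field name here is a guess inferred from the diagram and the telemetry the product describes), might look like:

```typescript
// Hypothetical DevEnvelope shape — field names are assumptions, not the
// published schema. The SDK would send one of these per intercepted LLM call.
interface DevEnvelope {
  step: string;             // from the X-GMAN-Step header
  model: string;            // upstream LLM model the proxy forwarded to
  promptTokens: number;
  completionTokens: number;
  latencyMs: number;        // wall-clock time of the proxied request
  costUsd: number;
  payload: unknown;         // stored as JSONB in dev.dev_events
}

const example: DevEnvelope = {
  step: "score-relevance",
  model: "gpt-4o-mini",
  promptTokens: 412,
  completionTokens: 96,
  latencyMs: 830,
  costUsd: 0.0004,
  payload: {},
};

console.log(example.promptTokens + example.completionTokens); // 508
```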

       ──── trust boundary ─────────────────────────────────────────────────────────────────────────────────────
       tenant_id is never accepted from the client. every write goes through the gateway auth path. /demo reads a
       single dedicated tenant and strips all identifying fields before returning. no tenant A data can
       leak into tenant B, and /demo cannot see either.
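
A minimal sketch of the gateway auth path described above, assuming a SHA-256 key index (the real lookup, storage, and key format are not published here; `keyIndex`, `registerKey`, and `authenticate` are illustrative names):

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: tenant_id is derived server-side from a hash of the
// presented API key. The raw key is never stored and the client-supplied
// tenant_id (if any) is never consulted.
const keyIndex = new Map<string, string>(); // sha256(apiKey) -> tenant_id

function registerKey(apiKey: string, tenantId: string): void {
  const digest = createHash("sha256").update(apiKey).digest("hex");
  keyIndex.set(digest, tenantId); // only the hash is persisted
}

function authenticate(apiKey: string): string | null {
  const digest = createHash("sha256").update(apiKey).digest("hex");
  return keyIndex.get(digest) ?? null; // unknown key -> reject the write
}

registerKey("sk_live_example", "tenant_a");
console.log(authenticate("sk_live_example")); // "tenant_a"
console.log(authenticate("sk_live_forged"));  // null
```

Because the tenant is a pure function of the secret key, a client cannot write into another tenant's namespace without possessing that tenant's key.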
── sdk ─ install + one import
@gman-ai/dev · v0.1.8 · view on npm →
install
$ npm install @gman-ai/dev
set env
$ export GMAN_API_KEY="sk_live_..."
wrap your agent
import { spawn } from "child_process";

// Start the GMAN proxy once — it listens on localhost:9000
// and forwards to the upstream LLM provider, emitting telemetry
// to the GMAN gateway on every request.
const proxy = spawn("npx", ["@gman-ai/dev", "start"]);
process.env.OPENAI_BASE_URL = "http://localhost:9000/v1";

// Now any LLM SDK that respects OPENAI_BASE_URL is traced.
// Every call appears in cockpit under your tenant in seconds.

The SDK is a local HTTP proxy, not a wrapper library. It works with any LLM client that respects OPENAI_BASE_URL, across languages, without modifying the agent code path.
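
Why this works across clients: the official OpenAI SDKs (and most OpenAI-compatible ones) resolve their base URL from the OPENAI_BASE_URL environment variable before falling back to the public endpoint. A minimal sketch of that resolution (not GMAN code, just the mechanism the proxy relies on):

```typescript
// Illustrative only: the env-var lookup that makes the proxy approach
// language-agnostic. Any client performing this resolution routes through
// the GMAN proxy once OPENAI_BASE_URL is exported.
function resolveBaseUrl(env: Record<string, string | undefined>): string {
  return env.OPENAI_BASE_URL ?? "https://api.openai.com/v1";
}

console.log(resolveBaseUrl({})); // "https://api.openai.com/v1"
console.log(resolveBaseUrl({ OPENAI_BASE_URL: "http://localhost:9000/v1" }));
// "http://localhost:9000/v1"
```

The agent's own code path never changes: exporting one environment variable redirects every provider call through the proxy, which records telemetry and forwards the request upstream.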

── planned ─ not in R1.5
── eof ─