AIGRaaS

Real-Time Evaluation API

Deterministic constitutional evaluation in under 10ms

A single REST endpoint evaluates every AI response against your ruleset. No LLM at runtime. Same input, same output, every time.
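Concretely, that single endpoint is one POST. Here is a minimal sketch of the raw HTTP shape; the endpoint URL and JSON field names are illustrative assumptions, not the documented API:

```typescript
// Sketch of the raw request behind an evaluation call.
// NOTE: the URL and field names below are assumptions for illustration.
function buildEvaluateRequest(apiKey: string, ruleset: string, response: string) {
  return {
    url: "https://api.aigraas.example/v1/evaluate", // hypothetical endpoint
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ ruleset, response }),
    },
  };
}

// At call time:
// const req = buildEvaluateRequest(key, "healthcare-intake-v1", text);
// const res = await fetch(req.url, req.init);
```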

<10ms

P99 latency

100%

Deterministic

0

LLMs at runtime

Voice AI can't wait for an LLM judge to decide. A 500ms guardrail call is a dead call. AIGRaaS runs in under 10ms because the evaluation engine is pure TypeScript — no model inference, no network hops to a judge, no probabilistic anything.
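As a toy illustration of why that speed is possible: a deterministic check is just ordinary synchronous code. This sketch is not the AIGRaaS engine — the rule list is invented — it only shows the shape of the idea: plain string logic, no model call, no randomness.

```typescript
// Toy deterministic rule check — illustrative only, not the real engine.
type ToyVerdict = { status: "approved" | "blocked"; firedRules: string[] };

// Hypothetical ruleset: phrases a healthcare-intake agent must never say.
const bannedPhrases = ["guaranteed cure", "skip your medication"];

function toyEvaluate(text: string): ToyVerdict {
  const lower = text.toLowerCase();
  const firedRules = bannedPhrases.filter((p) => lower.includes(p));
  // No sampling, no network: identical input always yields an identical
  // verdict, which is what makes audits reproducible.
  return { status: firedRules.length > 0 ? "blocked" : "approved", firedRules };
}
```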

One POST, one verdict

Send your ruleset ID and the AI's response. Get back an approved/blocked/redirected verdict plus a full decision trace in under 10ms — fast enough to inline before your TTS layer.

aigraas-client.ts
import { AIGRaaS } from "@aigraas/client";

const guard = new AIGRaaS({ apiKey: process.env.AIGRAAS_KEY });

// Inside your voice/chat agent's response handler:
const verdict = await guard.evaluate({
  ruleset: "healthcare-intake-v1",
  response: aiGeneratedText,
  mode: "pre-delivery",
});

if (verdict.status === "blocked") {
  return verdict.redirectTo ?? "I can't help with that directly.";
}
if (verdict.status === "redirected") {
  // Deliver the compliant redirect instead of the original text.
  return verdict.redirectTo ?? aiGeneratedText;
}
return aiGeneratedText;

Under 10ms, every time

Pure TypeScript evaluation. No LLM inference. No cold starts. P99 latency stays under 10ms even at peak load.

Fully deterministic

Same input, same output. No temperature, no sampling, no drift between runs. Audit trails you can actually reproduce.

Full decision trace

Every verdict includes the harm variables scored, the honesty components checked, and the tests that fired. Not a black box.
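For illustration, the verdict and its trace might be modeled like this on the client side; the field names here are assumptions, not the documented response schema:

```typescript
// Hypothetical client-side model of a verdict and its decision trace.
// Field names are illustrative assumptions, not the documented schema.
interface DecisionTrace {
  harmVariables: Record<string, number>;      // harm variables scored
  honestyComponents: Record<string, boolean>; // honesty components checked
  firedTests: string[];                       // tests that fired
}

interface EvaluationVerdict {
  status: "approved" | "blocked" | "redirected";
  redirectTo?: string;
  trace: DecisionTrace;
}

// Reading the trace instead of treating the verdict as a black box:
function summarize(v: EvaluationVerdict): string {
  return `${v.status}: ${v.trace.firedTests.join(", ") || "no tests fired"}`;
}
```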

One endpoint, all modes

The same API surfaces pre-delivery evaluation, post-delivery audit, and system-prompt injection. Pick your mode per request.
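Switching behavior is then just a different `mode` value on the same request body. `"pre-delivery"` appears in the client example above; the other two mode names here are guesses at the API's naming, shown only to sketch the idea:

```typescript
// Per-request mode selection on one request shape. "pre-delivery" is
// taken from the client example; the other names are assumptions.
type EvaluationMode = "pre-delivery" | "post-delivery" | "system-prompt";

interface EvaluateBody {
  ruleset: string;
  response: string;
  mode: EvaluationMode;
}

// Post-delivery audit: same endpoint and body shape, different mode.
function auditRequest(ruleset: string, response: string): EvaluateBody {
  return { ruleset, response, mode: "post-delivery" };
}
```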

Zero LLMs at runtime means zero prompt-injection attack surface against the guardrail itself.

<10ms

P99 evaluation latency


Ready to try it?

Evaluate any AI response in the playground — no signup required.