AIGRaaS
Constitutional Framework

How AIGRaaS evaluates AI behavior

A 5-phase constitutional evaluation pipeline. 4 decision trees. 8 harm variables. 7 honesty components. 3 validation tests. All in under 10ms, fully deterministic, with zero LLMs at runtime.

The 5-phase evaluation pipeline

Every AI response passes through five deterministic phases. No randomness. No LLM judges. Same input, same output, every time.

1. Input Classification

The incoming AI response is parsed and classified. Content type, intent signals, and domain context are extracted for evaluation.
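Phase 1 might look something like the sketch below. The heuristics, field names, and the `classify` function itself are illustrative assumptions, not the actual AIGRaaS classifier:

```python
def classify(response: str) -> dict:
    """Phase 1 sketch: extract content type, an intent signal, and a
    domain hint from the raw response. Purely deterministic: the same
    string always yields the same classification."""
    return {
        "content_type": "code" if "```" in response else "prose",
        "intent_signal": "instructional" if response.lstrip().lower().startswith(
            ("how to", "step 1", "first,")) else "informational",
        "domain": "medical" if "diagnos" in response.lower() else "general",
    }
```

Downstream phases would consume this dict rather than the raw text, so every later rule operates on the same normalized view of the response.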

2. Principal Hierarchy Resolution

Who overrides whom? Platform rules take precedence over operator rules, which take precedence over user preferences. Emergency escalation paths are checked.
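The precedence order described above (platform over operator over user) can be sketched as a simple comparison; the enum names and rule shape are illustrative, not the real AIGRaaS schema:

```python
from enum import IntEnum

class Principal(IntEnum):
    """Higher value overrides lower (illustrative precedence order)."""
    USER = 1      # end-user preferences
    OPERATOR = 2  # operator (deployer) rules
    PLATFORM = 3  # platform rules take precedence over everything

def resolve(rules: list[dict]) -> dict:
    """Given conflicting rules, keep the one set by the
    highest-precedence principal."""
    return max(rules, key=lambda r: r["principal"])

# A user preference conflicts with a platform rule: the platform wins.
rules = [
    {"principal": Principal.USER, "allow": True},
    {"principal": Principal.PLATFORM, "allow": False},
]
winner = resolve(rules)
assert winner["principal"] is Principal.PLATFORM
```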

3. Harm Assessment

8 harm variables are evaluated: probability, severity, breadth, whether the AI is the proximate cause of the harm, reversibility, whether the affected parties consented, their vulnerability, and the degree of moral responsibility.
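As a minimal sketch, the eight variables could be held in one record with a deterministic aggregate score. The field names, weights, and `score` formula are assumptions for illustration; the source does not specify how the variables combine:

```python
from dataclasses import dataclass

@dataclass
class HarmAssessment:
    """The 8 harm variables from phase 3 (field names are illustrative)."""
    probability: float           # likelihood the harm occurs, 0..1
    severity: float              # magnitude of the harm if it occurs, 0..1
    breadth: float               # share of parties affected, 0..1
    proximate_cause: bool        # is the AI the proximate cause?
    reversibility: float         # 0 = fully reversible, 1 = irreversible
    consent: bool                # did affected parties consent?
    vulnerability: float         # vulnerability of affected parties, 0..1
    moral_responsibility: float  # degree of moral responsibility, 0..1

    def score(self) -> float:
        """Hypothetical aggregate: expected harm (probability x severity)
        scaled up by aggravating factors. Same inputs, same output."""
        base = self.probability * self.severity
        aggravators = 1.0 + self.breadth + self.reversibility + self.vulnerability
        if self.proximate_cause:
            aggravators += 1.0
        if not self.consent:
            aggravators += 1.0
        return base * aggravators * (0.5 + 0.5 * self.moral_responsibility)
```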

4. Honesty Evaluation

7 honesty components are checked: truthfulness, calibration, transparency, forthrightness, non-deceptiveness, non-manipulation, and autonomy preservation.
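A phase like this can be sketched as seven named checks that must all report in; the function and return shape below are illustrative assumptions, not the real engine:

```python
HONESTY_COMPONENTS = (
    "truthfulness", "calibration", "transparency", "forthrightness",
    "non_deceptiveness", "non_manipulation", "autonomy_preservation",
)

def evaluate_honesty(checks: dict) -> dict:
    """Aggregate the 7 component results. `checks` maps component name ->
    bool result from that component's deterministic rule. Every component
    must be present; any False result is recorded as a failure."""
    missing = set(HONESTY_COMPONENTS) - set(checks)
    if missing:
        raise ValueError(f"missing components: {sorted(missing)}")
    failures = [c for c in HONESTY_COMPONENTS if not checks[c]]
    return {"passed": not failures, "failures": failures}
```

Keeping the component list explicit means a response cannot pass by simply omitting a check.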

5. Validation Tests

Three final tests confirm the verdict: the 1,000 Users Test, the Senior Employee Test, and the Dual Newspaper Test.

Three modes, one ruleset

Configure your guardrails once. Deploy them in the mode that fits your architecture, or run all three modes simultaneously for maximum coverage.

Mode 1 (0ms)

System Prompt Injection

AIGRaaS generates constitutional rules and injects them directly into your AI's system prompt. Your existing AI provider enforces the rules as part of its normal operation.

Best for: Quick setup, basic protection, no API integration needed

Mode 2 (<10ms)

Pre-Delivery Evaluation

Every AI response passes through the AIGRaaS evaluation engine before reaching the user. Responses are approved, blocked, or redirected in real-time.

Best for: Production voice AI, real-time chat, any latency-sensitive application
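In Mode 2, your delivery path branches on the engine's verdict before the user sees anything. A minimal sketch, assuming the three verdict strings above; the `deliver` function and the blocked-response wording are illustrative, not part of the AIGRaaS API:

```python
def deliver(verdict: str, original: str, fallback: str) -> str:
    """Act on a pre-delivery verdict (illustrative).
    approved   -> pass the AI's response through unchanged
    redirected -> substitute a safe fallback response
    blocked    -> return a refusal; the original never reaches the user"""
    if verdict == "approved":
        return original
    if verdict == "redirected":
        return fallback
    return "I can't help with that."  # blocked
```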

Mode 3 (async)

Post-Delivery Audit

Responses are delivered immediately and evaluated asynchronously. Violations are logged, flagged, and available in the audit trail for compliance review.

Best for: Compliance documentation, batch analysis, low-risk applications

The three validation tests

After harm assessment and honesty evaluation, every response must pass three final tests before approval.

The 1,000 Users Test

If 1,000 different users sent this exact message, would the response be appropriate for all of them? Catches responses that might be fine for one person but harmful at scale.

The Senior Employee Test

Would a thoughtful, senior employee at your company be comfortable with this response? Tests for professional standards and brand alignment.

The Dual Newspaper Test

Would this response be criticized in tomorrow's newspaper for being harmful? Would refusing to give this response be criticized for being unhelpful? Balances safety with usefulness.
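The gate these three tests form can be sketched as an all-must-pass predicate. The test names and the example lambdas are illustrative stand-ins for the real deterministic checks:

```python
def passes_validation(response: str, tests: dict) -> bool:
    """A response is approved only if all three final tests pass.
    `tests` maps test name -> deterministic predicate (illustrative)."""
    required = ("thousand_users", "senior_employee", "dual_newspaper")
    return all(tests[name](response) for name in required)
```

Because `all` short-circuits, a failure on the 1,000 Users Test would end evaluation without running the remaining tests.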

Simple architecture, powerful protection

One API call between your AI and your users. That's it.

Your AI Agent

VAPI, ElevenLabs, ChatGPT, etc.

AIGRaaS API

Constitutional evaluation <10ms

Safe Output

Approved, blocked, or redirected
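The whole flow above is a single hop between agent and user. A minimal wiring sketch, where `agent` and `evaluate` stand in for your AI provider and the AIGRaaS API call (both illustrative assumptions):

```python
from typing import Callable, Optional

def guarded_reply(user_msg: str,
                  agent: Callable[[str], str],
                  evaluate: Callable[[str], str]) -> Optional[str]:
    """One call between your AI and your users (illustrative wiring).
    agent(msg) produces the raw response; evaluate(resp) returns the
    verdict: "approved", "blocked", or "redirected"."""
    raw = agent(user_msg)
    verdict = evaluate(raw)
    if verdict == "approved":
        return raw
    if verdict == "redirected":
        return "Let me connect you with a human for that."
    return None  # blocked: nothing is delivered
```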

Ready to protect your AI?

Start with the playground, explore the features, or jump straight to pricing.