AIGRaaS
Use Case

Customer Service

Guardrails for chatbots, support automation, and AI assistants

Customer service AI must stay on-topic, provide accurate information, and never make unauthorized promises. AIGRaaS prevents the next Air Canada incident.

$650

Air Canada chatbot ruling

$12K

Cost of wrong refund policy

85%

Enterprises deploying AI agents

The problem

Chatbot told customers about return policies that don't exist

Support AI promised discounts it wasn't authorized to give

No way to systematically test AI responses before deployment

Legal team asking about AI compliance documentation

The AIGRaaS solution

Constitutional evaluation catches unauthorized promises and fabricated policies

Principal hierarchy defines who can override what — platform > operator > user

Pre-delivery evaluation blocks bad responses before customers see them

Analytics dashboard shows hallucination rate over time
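The principal hierarchy mentioned above can be pictured as a simple precedence lookup. The sketch below is illustrative only, not the AIGRaaS API: the function name and the rule shape are assumptions.

```python
# Illustrative sketch of principal-hierarchy resolution (not the AIGRaaS API).
# A setting defined by a higher-ranked principal wins over lower-ranked ones.

HIERARCHY = ["platform", "operator", "user"]  # highest precedence first

def resolve_setting(settings_by_principal: dict) -> bool:
    """Return the effective value of a setting: the highest-ranked
    principal that defines it wins; lower ranks cannot override it."""
    for principal in HIERARCHY:
        if principal in settings_by_principal:
            return settings_by_principal[principal]
    return False  # conservative default: disallowed unless someone grants it

# A user cannot re-enable something the platform has blocked:
assert resolve_setting({"platform": False, "user": True}) is False
# An operator grant stands when the platform is silent:
assert resolve_setting({"operator": True, "user": False}) is True
```

The conservative default matters: when no principal has spoken, the safest answer for a customer-facing bot is "not permitted."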

How AIGRaaS fits your stack

Step 1

Support bot responds

Your customer service chatbot or AI agent generates a reply.

Step 2

AIGRaaS policy check

Ruleset blocks unauthorized promises, fabricated policies, and scope drift.

Step 3

Customer gets safe reply

Valid responses flow through; violations get redirected to verified policy text.

Step 4

Analytics dashboard

Track hallucination rate, unauthorized-promise attempts, and policy drift over time.
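The four steps above can be sketched end to end. This is a minimal sketch under stated assumptions: Step 2's policy check is reduced to a hypothetical phrase blocklist, and Step 4's dashboard to an in-memory counter; only the redirect text is taken verbatim from the ruleset below.

```python
# Minimal end-to-end sketch of the four-step flow (illustrative, not the
# AIGRaaS engine). Step 2's policy check is reduced to a phrase blocklist.

BLOCKED_PHRASES = ["full refund", "lifetime discount"]  # hypothetical triggers
REDIRECT = "Let me check our current policy on that — one moment."

analytics = {"checked": 0, "blocked": 0}  # Step 4: dashboard counters

def guard(bot_reply: str) -> str:
    """Steps 2-3: evaluate the reply pre-delivery; pass it through if
    clean, otherwise swap in the safe redirect template."""
    analytics["checked"] += 1
    if any(phrase in bot_reply.lower() for phrase in BLOCKED_PHRASES):
        analytics["blocked"] += 1
        return REDIRECT
    return bot_reply

# Step 1: the support bot responds; the guard decides what the customer sees.
safe = guard("Our returns window is 30 days from delivery.")
risky = guard("Sure, you'll get a full refund plus a lifetime discount!")
```

In this sketch `safe` passes through unchanged while `risky` becomes the redirect text, and `analytics` records one block out of two checks.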

The ruleset we recommend

Start with this configuration — refine for your specific requirements.

customer-service.json
{
  "name": "support-chatbot-v1",
  "mode": "pre-delivery",
  "principal_hierarchy": ["platform", "operator", "user"],
  "harm": {
    "unauthorized_promises": { "block": true, "severity": "high" },
    "fabricated_policies": { "block": true, "severity": "critical" },
    "unauthorized_discounts": { "block": true, "severity": "high" },
    "refund_commitments": { "block": true, "severity": "high" }
  },
  "approved_policies_source": "https://your-company.example.com/policies.json",
  "redirect_template": "Let me check our current policy on that — one moment."
}
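One way to consume the ruleset is to load it and group the blocking rules by severity, for example to alert on criticals first. This is a hypothetical sketch, not an official AIGRaaS SDK; the field names match the JSON above, trimmed to the fields the sketch uses.

```python
import json

# The recommended ruleset, inlined so the sketch is self-contained
# (in practice you would read customer-service.json from disk).
RULESET_JSON = """
{
  "name": "support-chatbot-v1",
  "mode": "pre-delivery",
  "principal_hierarchy": ["platform", "operator", "user"],
  "harm": {
    "unauthorized_promises": { "block": true, "severity": "high" },
    "fabricated_policies": { "block": true, "severity": "critical" },
    "unauthorized_discounts": { "block": true, "severity": "high" },
    "refund_commitments": { "block": true, "severity": "high" }
  }
}
"""

ruleset = json.loads(RULESET_JSON)

# Group the blocking rules by severity.
by_severity = {}
for name, rule in ruleset["harm"].items():
    if rule["block"]:
        by_severity.setdefault(rule["severity"], []).append(name)

# by_severity["critical"] == ["fabricated_policies"]
```

Grouping by severity makes it easy to page on `critical` violations while only logging `high` ones.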

Compliance mapping

Regulation | Requirement | AIGRaaS module
Air Canada precedent (2024) | Company liability for AI-made promises | Unauthorized-promises blocker
CCPA §1798.140 | Automated decision-making disclosure | Decision trace export
FTC Section 5 | Deceptive practices prohibition | Fabricated-policy blocker + audit trail

Learn more about the capability that powers this use case:

Three Evaluation Modes

Ready to protect your AI?

Try AIGRaaS in the playground — no signup required.