# Customer Service

Guardrails for chatbots, support automation, and AI assistants.
Customer service AI must stay on-topic, provide accurate information, and never make unauthorized promises. AIGRaaS prevents the next Air Canada incident.
- **$650**: Air Canada chatbot ruling
- **$12K**: Cost of a wrong refund policy
- **85%**: Enterprises deploying AI agents
## The problem

- The chatbot told customers about return policies that don't exist
- The support AI promised discounts it wasn't authorized to give
- No way to systematically test AI responses before deployment
- The legal team is asking about AI compliance documentation
## The AIGRaaS solution

- Constitutional evaluation catches unauthorized promises and fabricated policies
- Principal hierarchy defines who can override what: platform > operator > user
- Pre-delivery evaluation blocks bad responses before customers see them
- Analytics dashboard shows hallucination rate over time
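The principal hierarchy can be pictured as a simple conflict-resolution rule: when principals disagree on a policy setting, the highest-ranked one wins. The sketch below is illustrative only; `resolve` and the rule shape are hypothetical, not the AIGRaaS API.

```python
# Minimal sketch of principal-hierarchy conflict resolution.
# Higher-ranked principals override lower ones: platform > operator > user.
# This is an illustration, not the actual AIGRaaS API.

HIERARCHY = ["platform", "operator", "user"]  # highest authority first

def resolve(rules):
    """Given {principal: setting} for one policy knob, return the
    setting chosen by the highest-ranked principal that set one."""
    for principal in HIERARCHY:
        if principal in rules:
            return rules[principal]
    return None

# The operator forbids discount offers; the user asks to allow them.
# Operator outranks user, so discounts stay blocked.
print(resolve({"user": "allow_discounts", "operator": "block_discounts"}))
# -> block_discounts
```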
## How AIGRaaS fits your stack

1. **Support bot responds.** Your customer service chatbot or AI agent generates a reply.
2. **AIGRaaS policy check.** The ruleset blocks unauthorized promises, fabricated policies, and scope drift.
3. **Customer gets a safe reply.** Valid responses flow through; violations get redirected to verified policy text.
4. **Analytics dashboard.** Track hallucination rate, unauthorized-promise attempts, and policy drift over time.
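The flow above can be sketched end to end. Everything in this snippet is a stand-in: the phrase lists, `check_reply`, and the redirect handling illustrate the pre-delivery pattern, not the real AIGRaaS service or its response shape.

```python
# Sketch of a pre-delivery guardrail check in front of a support bot.
# `check_reply` is a toy evaluator standing in for the AIGRaaS policy
# check; the real service and its API are not shown here.

BLOCKED_PHRASES = {
    "unauthorized_promises": ["we guarantee", "i promise"],
    "unauthorized_discounts": ["50% off", "free upgrade"],
}
REDIRECT = "Let me check our current policy on that — one moment."

def check_reply(reply: str) -> dict:
    """Flag replies containing blocked phrases (toy rule matching)."""
    text = reply.lower()
    for harm, phrases in BLOCKED_PHRASES.items():
        for phrase in phrases:
            if phrase in text:
                return {"verdict": "block", "harm": harm}
    return {"verdict": "allow"}

def deliver(reply: str) -> str:
    """Run the pre-delivery check; redirect violations to safe text."""
    result = check_reply(reply)
    return reply if result["verdict"] == "allow" else REDIRECT

print(deliver("Your order ships tomorrow."))         # passes through
print(deliver("We guarantee a full refund today!"))  # redirected
```

The key property is that the check runs *before* the customer sees anything: a blocked reply never leaves the pipeline, and the redirect template keeps the conversation going.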
## The ruleset we recommend

Start with this configuration — refine it for your specific requirements.
```json
{
  "name": "support-chatbot-v1",
  "mode": "pre-delivery",
  "principal_hierarchy": ["platform", "operator", "user"],
  "harm": {
    "unauthorized_promises": { "block": true, "severity": "high" },
    "fabricated_policies": { "block": true, "severity": "critical" },
    "unauthorized_discounts": { "block": true, "severity": "high" },
    "refund_commitments": { "block": true, "severity": "high" }
  },
  "approved_policies_source": "https://your-company.example.com/policies.json",
  "redirect_template": "Let me check our current policy on that — one moment."
}
```

## Compliance mapping
| Regulation | Requirement | AIGRaaS module |
|---|---|---|
| Air Canada precedent (2024) | Company liability for AI-made promises | Unauthorized-promises blocker |
| CCPA §1798.140 | Automated decision-making disclosure | Decision trace export |
| FTC Section 5 | Deceptive practices prohibition | Fabricated-policy blocker + audit trail |
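Before wiring a ruleset like the one above into production, it helps to sanity-check its shape. The validator below infers its schema from the sample JSON on this page; the required keys and the allowed severity values (`low` and `medium` included here) are assumptions, not an official AIGRaaS schema.

```python
import json

# Minimal shape check for a ruleset like the sample above. The required
# keys and severity values are inferred from the example JSON on this
# page, not taken from an official AIGRaaS schema.

ALLOWED_SEVERITIES = {"low", "medium", "high", "critical"}  # assumption

def validate_ruleset(raw: str) -> list[str]:
    """Return a list of problems; an empty list means the ruleset looks sane."""
    problems = []
    cfg = json.loads(raw)
    for key in ("name", "mode", "principal_hierarchy", "harm"):
        if key not in cfg:
            problems.append(f"missing key: {key}")
    for harm, rule in cfg.get("harm", {}).items():
        if rule.get("severity") not in ALLOWED_SEVERITIES:
            problems.append(f"{harm}: bad severity {rule.get('severity')!r}")
    return problems

SAMPLE = """{
  "name": "support-chatbot-v1",
  "mode": "pre-delivery",
  "principal_hierarchy": ["platform", "operator", "user"],
  "harm": {"fabricated_policies": {"block": true, "severity": "critical"}}
}"""
print(validate_ruleset(SAMPLE))  # -> []
```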
Learn more about the capability that powers this use case: Three Evaluation Modes.

## Ready to protect your AI?
Try AIGRaaS in the playground — no signup required.