

AI Platform — Frequently Asked Questions

From AI governance and bias testing to deployment models and pre-trained capabilities — answers to the most common questions about responsible municipal AI infrastructure.

How does the platform detect and mitigate bias?

Every model undergoes mandatory pre-deployment bias testing across all protected characteristics — race, gender, age, income, geography, disability, and more. The Bias Detection & Mitigation system (Module 5) evaluates demographic parity, equalized odds, and predictive parity. No model enters production without passing governance review. In production, continuous monitoring tracks prediction distributions across demographic groups, with automated alerts when statistical disparities exceed thresholds. Mitigation strategies include re-sampling, re-weighting, adversarial debiasing, and post-processing calibration. A Year 1 success metric is zero bias incidents.
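As a minimal sketch of the kind of demographic-parity check described above — the function name and the 0.10 alert threshold are illustrative assumptions, not the platform's actual API:

```python
# Hypothetical demographic-parity check; names and thresholds are
# illustrative, not the platform's real interface.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in favorable-outcome rate across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is 1 (favorable) or 0 (unfavorable)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# A gap above the monitoring threshold (assumed 0.10 here) would raise
# the automated disparity alert described above.
gap = demographic_parity_gap([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
alert = gap > 0.10
```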
Does the platform comply with the Treasury Board's Algorithmic Impact Assessment requirements?

Yes, full compliance. Every model is classified using the Treasury Board's Type I–IV Algorithmic Impact Assessment framework. Type I (administrative) requires basic documentation. Type II (operational) adds bias testing and explainability requirements. Type III (significant) mandates formal review and approval workflows. Type IV (critical) requires the highest governance — full impact assessment, independent audit, and public transparency. Proportional governance ensures appropriate oversight without bureaucratic overhead for low-risk models.
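The cumulative tiering above can be sketched as a simple lookup; the data structure and control names are assumptions for illustration only:

```python
# Illustrative mapping of Type I-IV classifications to governance controls.
# Controls accumulate: higher types inherit everything below them.
REQUIREMENTS = {
    1: ["basic documentation"],
    2: ["bias testing", "explainability"],
    3: ["formal review", "approval workflow"],
    4: ["full impact assessment", "independent audit", "public transparency"],
}

def required_controls(impact_type: int) -> list[str]:
    """Return every control a model of the given type must satisfy."""
    controls = []
    for tier in range(1, impact_type + 1):
        controls.extend(REQUIREMENTS[tier])
    return controls
```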
Can AI-assisted decisions be explained to staff and citizens?

The Explainability Engine provides human-readable explanations for every AI-assisted decision. SHAP (SHapley Additive exPlanations) shows how each input factor contributed to the model's output. LIME (Local Interpretable Model-agnostic Explanations) provides locally faithful approximations. Counterfactual explanations show what would need to change for a different outcome. For citizen-facing decisions, plain-language explanation templates translate technical outputs into understandable narratives. Citizens have the right to request a human review of any AI-assisted decision affecting them.
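A counterfactual explanation can be sketched as a search for the smallest input change that flips the outcome. The toy eligibility model, feature names, and candidate grid below are assumptions for illustration, not the engine's implementation:

```python
# Minimal counterfactual sketch: find the nearest value of one feature
# that changes the model's decision. All names here are hypothetical.
def find_counterfactual(model, instance, feature, candidates):
    """Return the instance with the closest feature value that flips the outcome."""
    original = model(instance)
    for value in sorted(candidates, key=lambda v: abs(v - instance[feature])):
        trial = {**instance, feature: value}
        if model(trial) != original:
            return trial
    return None

# Toy model: approve when income >= 30000 (purely illustrative).
approve = lambda x: x["income"] >= 30000
cf = find_counterfactual(approve, {"income": 25000}, "income", range(0, 60001, 5000))
# cf shows what would need to change for a different outcome.
```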
What approvals are required before a model reaches production?

Approval workflows are proportional to model risk classification. Type I models require department manager sign-off. Type II models add IT technical review confirming performance and security standards. Type III models require Clerk or privacy officer review for privacy impact and CAO sign-off. Type IV models require all previous reviews plus independent assessment. The governance dashboard tracks every approval, rejection, escalation, and override with a complete audit trail. No bypass pathway exists — governance gates are enforced in the deployment pipeline.
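The "no bypass" gate above amounts to a check that every required sign-off is present before deployment. A minimal sketch, assuming role names not specified in the source:

```python
# Hypothetical deployment gate: refuse deploys missing any required sign-off.
# Role identifiers are assumptions; the cumulative structure mirrors the text.
APPROVALS_BY_TYPE = {
    1: {"department_manager"},
    2: {"department_manager", "it_technical_review"},
    3: {"department_manager", "it_technical_review", "privacy_review", "cao"},
    4: {"department_manager", "it_technical_review", "privacy_review", "cao",
        "independent_assessment"},
}

def may_deploy(impact_type: int, signoffs: set[str]) -> bool:
    """Enforced in the pipeline: all required approvals must be recorded."""
    return APPROVALS_BY_TYPE[impact_type] <= signoffs
```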
What transparency reporting does the platform provide to council and the public?

The AI Governance Dashboard auto-generates quarterly and annual transparency reports compiling: complete model inventory with risk classifications, fairness assessment results across all models, bias testing methodology and outcomes, incident summary with root cause analysis and corrective actions, automation impact metrics, and improvement plans. Reports are formatted for council agendas and designed for public publication. The annual AI transparency report is auto-compiled from quarterly data for public accountability.
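Rolling quarterly summaries into the annual report might look like the following sketch; the field names are assumptions, chosen only to mirror the metrics listed above:

```python
# Illustrative annual roll-up from quarterly summaries. Field names are
# hypothetical; counts sum across quarters, inventory takes the latest peak.
def compile_annual(quarters: list[dict]) -> dict:
    """Aggregate four quarterly transparency summaries into annual figures."""
    return {
        "models_in_inventory": max(q["models_in_inventory"] for q in quarters),
        "incidents": sum(q["incidents"] for q in quarters),
        "bias_alerts": sum(q["bias_alerts"] for q in quarters),
    }
```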

Still Have Questions?

Have a Question Not Listed Here?

Our municipal solutions team is available to answer technical, procurement, and implementation questions specific to your organization.