
Technical Specifications

Built for municipal AI at scale.

A microservices architecture purpose-built for shared AI infrastructure — from model training and governance to real-time inference and edge deployment. Every component is designed for Canadian data residency, sub-500ms inference, and responsible AI governance.

99.9%

Uptime SLA

5,000+

Concurrent Users

<200ms

API Response

3

Platform Modules

Architecture Overview

The Civic AI Platform follows a modular microservices architecture with clear separation of concerns across the machine learning lifecycle. The Architecture layer provides GPU-accelerated training infrastructure and auto-scaling inference serving. The Intelligence layer encompasses NLP, computer vision, predictive analytics, and pre-trained municipal models. The Governance layer embeds bias detection, explainability, and compliance controls into the model lifecycle. The Data layer manages the feature store, data pipelines, and real-time feature computation. An Edge Runtime enables mobile and field deployment with offline-capable inference. All services communicate through event-driven messaging (Apache Kafka) and synchronous APIs (REST/gRPC), with a centralized API Gateway providing authentication, rate limiting, and request routing.
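To make the event-driven side of this concrete, the following is a minimal sketch of an event envelope a service might publish to the Kafka bus. The topic name, payload fields, and service name are illustrative assumptions, not the platform's actual schema; a real producer would hand the serialized bytes to a Kafka client.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class PlatformEvent:
    """Illustrative envelope for events published on the event bus."""
    topic: str          # e.g. "model.inference.completed" (name assumed)
    payload: dict
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    source_service: str = "inference-serving"  # emitting service (assumed)

    def serialize(self) -> bytes:
        """Encode for the bus; a real Kafka producer would send these bytes."""
        return json.dumps(asdict(self)).encode("utf-8")

# Example: an inference service announcing a completed prediction.
event = PlatformEvent(
    topic="model.inference.completed",
    payload={"model": "permit-triage-v2", "latency_ms": 142},
)
decoded = json.loads(event.serialize())
```

Carrying a unique `event_id` and UTC timestamp in every envelope is what lets downstream consumers deduplicate and order events without coordinating with the producer.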

Platform Modules

Core Services

Eleven core microservices organized into four architectural tiers — each independently deployable, horizontally scalable, and monitored through distributed tracing and structured logging.
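As a sketch of the structured logging mentioned above: each service can emit one JSON object per log line that carries a trace ID, so log lines from different microservices can be joined on the same request. The service name and field layout here are assumptions for illustration; in production the trace ID would come from the tracing context (e.g. OpenTelemetry) rather than being generated locally.

```python
import json
import logging
import uuid

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line, carrying a trace ID for correlation."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "service": record.name,
            "message": record.getMessage(),
            "trace_id": getattr(record, "trace_id", None),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("feature-store")  # service name is illustrative
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# In production the trace ID would be propagated from the incoming request.
trace_id = str(uuid.uuid4())
logger.info("feature vector served", extra={"trace_id": trace_id})
```

Passing the trace ID through `extra` keeps it a first-class field on the record, so the formatter can surface it without parsing the message text.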

Architecture diagram: API Gateway, Event Bus, Platform Infrastructure

Total Modules

3

Protocol

REST / gRPC

Bus

Async Events

Container

Kubernetes

Database

PostgreSQL 16

Specifications

Technical Details

Browse specifications by category. All values reflect current production configuration.

GPU Compute

NVIDIA A100/T4 or CPU-only option

Training Orchestration

Apache Airflow + Kubernetes Jobs

Model Serving

TensorFlow Serving + Triton Inference Server

Feature Store

Online (Redis) + Offline (PostgreSQL/Parquet)

ML Frameworks

TensorFlow, PyTorch, scikit-learn, XGBoost, ONNX

Experiment Tracking

MLflow with artifact versioning
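The online/offline feature store split above can be sketched as a simple read path: serve from the low-latency online store when the key is hot, fall back to the offline store otherwise, and backfill the cache. Plain dicts stand in for Redis (online) and PostgreSQL/Parquet (offline) here, and the key layout and feature names are assumptions, not the platform's schema.

```python
# Online store stand-in (Redis): low-latency, recently served features.
online_store = {}

# Offline store stand-in (PostgreSQL/Parquet): full history, batch-computed.
offline_store = {
    "entity:permit:1042": {"open_complaints": 3, "avg_response_days": 4.5},
}

def get_features(entity_key: str) -> dict:
    """Serve from the online store when hot; fall back to offline and backfill."""
    features = online_store.get(entity_key)
    if features is None:
        features = offline_store.get(entity_key, {})
        online_store[entity_key] = features  # warm the cache for the next read
    return features

first = get_features("entity:permit:1042")   # offline hit, backfills online
second = get_features("entity:permit:1042")  # now served from the online store
```

The design point this illustrates is that inference-time reads never touch the slow offline store twice for the same entity, which is what keeps the serving path within the latency budget.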

Availability SLA

99.9% for inference endpoints, 99.5% for training infrastructure

Distributed tracing (Jaeger/OpenTelemetry) across all microservices, Prometheus metrics for resource utilization, Grafana dashboards for real-time AI platform health, and PagerDuty integration for automated alerting on model performance degradation, inference failures, and data pipeline anomalies.
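A toy version of the latency-degradation alerting described above: compare the mean of a recent window of inference latencies against a baseline and flag the regression. The 1.5x threshold and window size are illustrative assumptions; a real deployment would evaluate this as a Prometheus alert rule rather than in application code.

```python
from statistics import mean

def latency_degraded(samples_ms, baseline_ms, ratio=1.5, window=5):
    """Alert when the mean of the last `window` samples exceeds ratio x baseline."""
    recent = samples_ms[-window:]
    return len(recent) == window and mean(recent) > ratio * baseline_ms

# Simulated latency streams (milliseconds), baseline of 130 ms.
healthy = [120, 135, 128, 140, 131]
degraded = [120, 135, 410, 460, 455, 430, 480]
```

Windowing over several samples rather than alerting on a single slow request is what keeps one-off spikes from paging anyone.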

Uptime SLA

99.953%

30-Day Avg

1

Incidents

3× DC

Redundancy

< 15 min

Recovery

30-Day Uptime History

All Systems Operational


Deployment

Deployment Options

On-premises private cloud or municipal-hosted Kubernetes cluster. All services containerized as OCI-compliant images. Helm charts for Kubernetes deployment with configurable resource profiles. GPU node pools for training and inference workloads.

01

On-premises private cloud deployment

02

Municipal-hosted Kubernetes cluster

03

Hybrid cloud with Canadian data residency