Responsible AI

6 principles for AI in physical security.

Built on principles, not hype. Every principle has a measurable commitment we hold ourselves to. If we break one, we ship a postmortem.

1. Transparency

Every production AI model has a public model card.

How
Model cards live at /ai/model-card and include training data sources, evaluation methodology, intended use, known limitations, and bias evaluation results. They are updated with every model release. No NDA required to read.
If we breach this
If we ship a model without a public card, that's a P0 bug. We block deploys that fail the model-card-exists check.
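
To make the deploy gate concrete, here is a minimal sketch of what a model-card-exists check could look like in CI. The card URL scheme, the release_manifest.json format, and the function names are illustrative assumptions, not our actual pipeline.

```python
# Sketch of a "model-card-exists" deploy gate. All names are illustrative.
import json
import sys
import urllib.error
import urllib.request

CARD_URL = "https://example.com/ai/model-card/{model_id}"  # assumed URL scheme

def card_is_public(model_id: str) -> bool:
    """True if the card resolves without auth: no NDA, no login wall."""
    req = urllib.request.Request(CARD_URL.format(model_id=model_id), method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False

def main() -> int:
    # Assumed manifest shape: {"models": ["anomaly-v4", "classifier-v9"]}
    with open("release_manifest.json") as f:
        models = json.load(f)["models"]
    missing = [m for m in models if not card_is_public(m)]
    if missing:
        print(f"P0: no public model card for {missing}; blocking deploy")
        return 1  # non-zero exit fails the CI job
    return 0

if __name__ == "__main__":
    sys.exit(main())
```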

2. Human-in-the-loop

AI suggests. Humans decide on any operational action.

How
No automated decisions with operational impact. Audit logs record the AI suggestion and the human action as separate, linked entries (sketched below). GDPR Article 22 compliance by design.
If we breach this
If we ship a feature that auto-actions without human review, we disable it with an immediate hotfix and publish a postmortem within 5 days.
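
A minimal sketch of what "suggestion and action as separate records" could mean in practice, assuming hypothetical record shapes (the real schema may differ):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

def _now() -> datetime:
    return datetime.now(timezone.utc)

@dataclass(frozen=True)
class AiSuggestion:
    event_id: str
    suggested_action: str      # e.g. "dispatch_guard"; never executed directly
    confidence: float
    created_at: datetime = field(default_factory=_now)

@dataclass(frozen=True)
class HumanDecision:
    event_id: str              # links to the suggestion, never overwrites it
    operator_id: str           # a named human is always on record (Article 22)
    action_taken: str          # may accept, modify, or reject the suggestion
    decided_at: datetime = field(default_factory=_now)

# The audit log keeps both entries, so reviews can see exactly who decided what.
audit_log = [
    AiSuggestion("evt-42", "dispatch_guard", 0.91),
    HumanDecision("evt-42", "op-7", "dispatch_guard"),
]
```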

3. Customer data sovereignty

Your data trains your models, never others'.

How
Per-tenant model isolation. Anomaly Detection trains on your site data only. Base classification models are trained on curated public + synthetic data, documented in the model card.
If we breach this
We would treat any cross-customer training as a security incident, with notification within 72 hours per GDPR Article 33.
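
At the code level, per-tenant isolation can be enforced with a hard guard before any training step; this sketch uses assumed names and a simplified data shape:

```python
# Illustrative guard: a training batch may contain exactly one tenant's data.
def assert_single_tenant(samples: list[dict], tenant_id: str) -> None:
    foreign = {s["tenant_id"] for s in samples} - {tenant_id}
    if foreign:
        # Cross-tenant data in training is a security incident, not a warning:
        # abort before any training step touches foreign data.
        raise RuntimeError(
            f"cross-tenant data {foreign} in training set for {tenant_id}"
        )

assert_single_tenant([{"tenant_id": "acme"}], "acme")  # passes; mixed batches raise
```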

4. Visible confidence

No 'trust me' outputs. Every prediction ships with a confidence score.

How
Per-alert confidence scores. Per-factor breakdown. Tunable thresholds per site. False-positive rate for the past 30 days exposed in the dashboard.
If we breach this
Any UI surface that shows AI output without a confidence score is a UX bug; we patch it within 7 days.
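
For illustration, a per-alert payload carrying the confidence breakdown might look like this (schema and field names are hypothetical):

```python
alert = {
    "alert_id": "alr-1093",
    "prediction": "perimeter_intrusion",
    "confidence": 0.87,              # overall score, shown on every UI surface
    "factors": [                     # per-factor breakdown behind the score
        {"name": "motion_pattern", "weight": 0.52},
        {"name": "time_of_day", "weight": 0.23},
        {"name": "thermal_signal", "weight": 0.12},
    ],
    "site_threshold": 0.80,          # tunable per site
}

# An alert only fires when its confidence clears the site's threshold.
assert alert["confidence"] >= alert["site_threshold"]
```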

5. Honesty about limits

Published precision/recall on every release.

How
Each model card lists production precision, recall, and F1 by site type and deployment month. The changelog at /changelog records every metric change.
If we breach this
If actual precision diverges from the published number by more than 5 percentage points on the same data slice, we ship a model card update within 7 days.
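
The divergence rule itself is simple enough to show in full. Assume published and actual precision are measured on the same slice (site type by deployment month); the function name is ours, not a real API:

```python
def precision_diverged(published: float, actual: float, max_pp: float = 5.0) -> bool:
    """True when measured precision differs from the published figure by > max_pp."""
    return abs(published - actual) * 100 > max_pp

# Published 0.93 but measuring 0.86 on the same slice is 7pp apart:
# a model card update is due within 7 days.
assert precision_diverged(0.93, 0.86)
assert not precision_diverged(0.93, 0.90)  # 3pp is within tolerance
```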

6. Right to explanation

Every flagged event is explainable to operations and auditors.

How
Per-alert explanations show contributing factors and 3-5 comparable past cases. Audit-ready exports include the AI reasoning chain. No 'the model said so' in audit reports.
If we breach this
If an explanation surface ships without contributing-factors visibility, we treat it as a P1 bug.
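
An audit-ready explanation export could be shaped like the sketch below; every field name here is an assumption for illustration:

```python
explanation = {
    "alert_id": "alr-1093",
    "contributing_factors": [
        {"factor": "motion_pattern", "weight": 0.52},
        {"factor": "time_of_day", "weight": 0.23},
        {"factor": "thermal_signal", "weight": 0.12},
    ],
    "comparable_cases": ["alr-0871", "alr-0552", "alr-0119"],  # 3-5 past cases
    "reasoning_chain": [
        "motion pattern matched intrusion profile",
        "event occurred outside staffed hours",
        "thermal signature consistent with a single person",
    ],
}

# Shipping an export with no contributing factors would be the P1 bug named above.
assert explanation["contributing_factors"]
```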

AI governance

AI Ethics Committee

Quarterly review of model performance, bias evaluation, and customer-reported concerns. Composition: AI lead engineer, security operations lead, data protection officer, and an external advisor on AI ethics. Decisions are documented and shared with customers on the Complete plan.

Public AI Issues tracker

Anyone, customer or not, can submit observed errors, ethical concerns, or feature gaps. We respond within 5 business days. Critical bias or safety issues get priority triage within 24 hours.

Model retirement policy

When a model is deprecated, we give at least 60 days' notice. Customer-trained variants are exportable for analysis. The replacement model ships with a comparison report against the deprecated version.

Customer rights re: AI outputs

You can dispute any AI output that affects your operations. We provide the full reasoning chain, the contributing factors, and the comparable past cases. You can submit a correction that retrains your per-site model.
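
A sketch of the dispute-and-correct flow, under assumed names: corrections are queued per site, so a fix feeds only that site's model and never another customer's.

```python
from collections import defaultdict

# Illustrative per-site correction queues (not the real API).
retraining_queues: dict[str, list[dict]] = defaultdict(list)

def submit_correction(site_id: str, alert_id: str, correct_label: str) -> None:
    """Record a disputed output; only site_id's own model retrains on it."""
    retraining_queues[site_id].append({"alert_id": alert_id, "label": correct_label})

submit_correction("site-berlin-01", "alr-1093", "false_positive")
```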

Data Protection Impact Assessment

We provide a DPIA template for customers deploying AI features in zones with elevated privacy considerations (healthcare, financial, public sector). Template available under NDA. Includes Article 35 GDPR mapping, DPO sign-off checklist, and standard mitigations.

See it on your data.

Drop your work email and we'll send the technical walkthrough link plus a sandbox model card.

We respect your privacy.

No credit card · GDPR-compliant · No cross-customer training · Unsubscribe in one click.