Public model cards

Read before you buy.

Every production AI model on guardtourai.com has a public card with training data, evaluation methodology, intended use, known limitations and bias evaluation. We don't think you should buy AI you can't inspect.

What's a model card?

A model card is a concise (typically one- to three-page) summary of how an AI model was trained, evaluated, and is intended to be used. It's a procurement-friendly document: your DPO and your security team can read it without a deep ML background. Our format is inspired by Anthropic's and Hugging Face's model card practices.

Anomaly Detection v2.1

Production · v2.1
Intended use
Flag patrols that deviate significantly from learned per-site patterns. Examples: missed checkpoints, unusual checkpoint sequence, abnormal time-on-site, GPS path deviation.
Out of scope
NOT intended for: scoring individual guards, predicting fatigue, identifying suspicious individuals on camera, replacing human supervision.
Training data
Per-site model trained on the customer's own patrol data. Minimum 4 weeks (≥ 200 patrols). Architecture: Isolation Forest (numerical features) + temporal LSTM (sequence features). Base architecture pre-trained on synthetic patrol data.
Precision / impact
Precision: 74-89%, depending on site type and length of training history. The false-positive rate is shown in the customer dashboard.
Known limitations
Performance degrades when site operations change suddenly (new shift pattern, new checkpoints). The model takes ~2 weeks to re-baseline. Customers should suppress alerts during major operational changes.
Bias evaluation
Per-site isolation prevents demographic bias from cross-customer training. Within a site, the model can overfit to specific guards' patterns; this is mitigated by anonymizing controller IDs in features when site staff turnover exceeds 30%/quarter.
Update cadence
Quarterly re-training of base architecture. Per-site re-baselining triggered on customer request or after 4 weeks of degraded performance.
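The Isolation Forest component of the card above can be illustrated with a minimal pure-Python sketch. The feature names, data, and parameters below are hypothetical; the production model's actual features and its temporal LSTM component are not shown. The core idea is that anomalous patrols are isolated by random splits in fewer steps than typical ones:

```python
import random

def build_tree(points, depth=0, max_depth=10):
    """Recursively isolate points with random axis-aligned splits."""
    if len(points) <= 1 or depth >= max_depth:
        return depth  # leaf: record the depth at which isolation stopped
    dim = random.randrange(len(points[0]))
    lo = min(p[dim] for p in points)
    hi = max(p[dim] for p in points)
    if lo == hi:
        return depth
    split = random.uniform(lo, hi)
    left = [p for p in points if p[dim] < split]
    right = [p for p in points if p[dim] >= split]
    return (dim, split, build_tree(left, depth + 1, max_depth),
            build_tree(right, depth + 1, max_depth))

def path_length(tree, point):
    """Depth at which a point is isolated; anomalies isolate early."""
    if not isinstance(tree, tuple):
        return tree
    dim, split, left, right = tree
    return path_length(left if point[dim] < split else right, point)

def anomaly_score(forest, point):
    """Average isolation depth across trees; LOWER means more anomalous."""
    return sum(path_length(t, point) for t in forest) / len(forest)

random.seed(7)
# Features per patrol: (time_on_site_min, checkpoints_hit) -- hypothetical
normal = [(random.gauss(45, 3), random.gauss(12, 1)) for _ in range(200)]
forest = [build_tree(random.sample(normal, 64)) for _ in range(50)]

typical = (45.0, 12.0)
deviant = (5.0, 0.0)   # abnormally short patrol, checkpoints missed
print(anomaly_score(forest, typical), anomaly_score(forest, deviant))
```

The deviant patrol gets a clearly lower average isolation depth than the typical one, which is what triggers an alert in this family of models.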

Incident Classification v1.3

Production · v1.3
Intended use
Auto-categorize free-text incident reports into 47 standard categories (security, safety, maintenance, medical, environmental, suspicious activity, access violation, etc.).
Out of scope
NOT intended for: auto-submitting incidents without human review, analyzing CCTV footage, identifying individuals.
Training data
Fine-tuned RoBERTa on a curated public + synthetic dataset of 80k incident reports. No customer data used in base model. Custom fine-tuning on customer historical data available on the Complete plan (1,000+ labeled examples required).
Precision / impact
Precision: 91% (multi-label, top-3 categories). Languages supported in production: EN, ES, FR, PT, DE, IT.
Known limitations
Reports containing domain-specific jargon (industrial chemical names, healthcare procedures) may be classified into the wrong category. Custom fine-tuning is recommended for sites with specialized terminology.
Bias evaluation
Base training data audited for geographic bias (US, EU, LATAM each ≥ 25% representation) and category balance (no category < 1% of training set). Bias evaluation report appended to each release.
Update cadence
Quarterly re-training of base model. Custom fine-tunes regenerated when customer adds 500+ new labeled examples.
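The top-3 output contract of this card can be sketched in a few lines. To stay self-contained, the snippet below substitutes a hypothetical keyword scorer for the fine-tuned RoBERTa; the category list is abridged from the 47 standard categories, and the keyword sets are invented for illustration only:

```python
# Hypothetical keyword weights standing in for model scores;
# the production classifier is a fine-tuned RoBERTa, not shown here.
CATEGORY_KEYWORDS = {
    "security":            {"intruder", "breach", "forced", "alarm"},
    "safety":              {"slip", "spill", "hazard", "injury"},
    "maintenance":         {"leak", "broken", "flicker", "hvac"},
    "medical":             {"injury", "unconscious", "bleeding"},
    "suspicious activity": {"loitering", "intruder", "unattended"},
    "access violation":    {"tailgating", "badge", "forced"},
}

def classify_top3(report: str) -> list[tuple[str, float]]:
    """Score each category against a free-text report, return the top 3."""
    tokens = set(report.lower().split())
    scores = {
        cat: len(tokens & kws) / len(kws)   # fraction of keywords matched
        for cat, kws in CATEGORY_KEYWORDS.items()
    }
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:3]

top3 = classify_top3("intruder forced rear door badge reader broken")
print(top3)
```

As in production, the operator sees the three highest-scoring categories and confirms or corrects the suggestion; nothing is auto-submitted.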

Predictive Routing v1.0

Production beta · v1.0
Intended use
Suggest patrol checkpoint sequences that balance time efficiency, risk coverage, and operator fatigue. Outputs 3 alternative routes with scoring.
Out of scope
NOT intended for: replacing operator judgment, enforcing routes against operational decisions, optimizing for surveillance density without privacy review.
Training data
Constrained optimization with weights learned from 6 months of customer historical incident-by-checkpoint data. Per-site model. No cross-customer training.
Precision / impact
Reduces average patrol time by 8-14% across 12 customer sites. High-risk checkpoint coverage increased 22%. Operator-acceptance rate of suggested routes: 67%.
Known limitations
Does not account for real-time site changes (alarm zones, weather restrictions). Operator must override when site conditions diverge. Mandatory checkpoints (audit-required) are always preserved regardless of optimization.
Bias evaluation
Risk weights are learned from the customer's own incident history. If past data is biased (e.g., over-patrolling a specific zone for non-incident reasons), the model perpetuates that bias. Customers should review weight sources quarterly.
Update cadence
Re-training triggered after 3 months of new incident data, or on customer request.
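The two hard constraints on this card, a time budget and mandatory (audit-required) checkpoints that are always preserved, can be sketched as a small enumeration. The checkpoint names, durations, risk weights, and scoring function below are all hypothetical stand-ins for the learned per-site weights:

```python
from itertools import combinations

# Hypothetical per-checkpoint data: (minutes to patrol, learned risk weight).
CHECKPOINTS = {
    "gate":      (4, 0.9),
    "lobby":     (3, 0.4),
    "warehouse": (7, 0.8),
    "parking":   (5, 0.3),
    "roof":      (6, 0.5),
}
MANDATORY = ("gate", "lobby")   # audit-required; never dropped

def score(route, risk_weight=10.0):
    """Lower is better: total minutes minus a bonus for risk coverage."""
    minutes = sum(CHECKPOINTS[c][0] for c in route)
    coverage = sum(CHECKPOINTS[c][1] for c in route)
    return minutes - risk_weight * coverage

def suggest_routes(time_budget_min=20, n_alternatives=3):
    """Enumerate checkpoint sets within the time budget, keep the best 3.
    Every candidate includes all mandatory checkpoints."""
    optional = [c for c in CHECKPOINTS if c not in MANDATORY]
    candidates = []
    for r in range(len(optional) + 1):
        for extra in combinations(optional, r):
            route = MANDATORY + extra
            if sum(CHECKPOINTS[c][0] for c in route) <= time_budget_min:
                candidates.append(route)
    return sorted(candidates, key=score)[:n_alternatives]

for route in suggest_routes():
    print(route, round(score(route), 1))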

In development

Photo Analysis v0.8

Q3 2026

Flag photos with anomalies (fire, water intrusion, intruders). Currently in beta with 3 design partners. Public model card will be published before GA.

Speech-to-Incident v0.5

Q4 2026

Voice-to-text incident logging. Whisper-based, self-hosted. No third-party APIs. Public model card with bias evaluation will be published before GA.

Concerns or corrections?

Submit observed errors, ethical concerns, or feature gaps to our public AI Issues tracker. We respond within 5 business days. Critical bias/safety issues get priority triage within 24h.

See it on your data.

Drop your work email and we'll send the technical walkthrough link plus a sandbox model card.

We respect your privacy. No cross-customer training. GDPR-compliant.

No credit card · Unsubscribe in one click.