Fairness & Transparency
We publish audits, model documentation, and human review policies so that every candidate understands how AI is used and every employer can build trust on that basis.
What We Publish
- Model cards describing intended use, limitations, and known risks
- Bias audits covering disparate impact checks and the mitigation steps taken
- Clear retention and deletion policies for video and transcripts
- Updates when AI models or evaluation criteria change
Candidate Rights
- Explicit consent before recording or processing
- Access to information about how recommendations are made
- Simple appeal paths reviewed by a human within two business days
- Ability to request deletion of recordings after the hiring cycle
How We Audit
Every quarter we evaluate AI outcomes by demographic cohort (where data is available), review false positive and false negative patterns, and re-tune the models used for communication scoring. We also test prompts against red-team scenarios to reduce harmful or biased outputs.
- Disparate impact tests using workforce fairness metrics (see the first sketch after this list)
- Human spot-checks of transcripts and summaries
- Continuous monitoring of scoring drift with alerts (see the second sketch after this list)
- Independent review with partner organizations when possible
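To make the first bullet concrete, here is a minimal sketch of a disparate impact check using the widely cited four-fifths (adverse impact ratio) guideline. The function names, data layout, and threshold are illustrative assumptions for this page, not a description of our production pipeline.

```python
from collections import defaultdict

# Minimal sketch of a four-fifths (adverse impact ratio) check.
# `records` pairs each candidate's cohort label with whether the AI
# recommendation was favorable. Names and threshold are illustrative.

FOUR_FIFTHS_THRESHOLD = 0.8  # common guideline value for flagging review

def selection_rates(records):
    """Compute the favorable-outcome rate per cohort."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for cohort, selected in records:
        totals[cohort] += 1
        favorable[cohort] += int(selected)
    return {c: favorable[c] / totals[c] for c in totals}

def adverse_impact_ratios(records):
    """Ratio of each cohort's selection rate to the highest rate.

    A ratio below the four-fifths threshold flags the cohort for
    human review; it does not by itself prove bias.
    """
    rates = selection_rates(records)
    top = max(rates.values())
    return {c: (r / top, r / top < FOUR_FIFTHS_THRESHOLD)
            for c, r in rates.items()}

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    for cohort, (ratio, flagged) in adverse_impact_ratios(sample).items():
        print(f"{cohort}: ratio={ratio:.2f} flagged={flagged}")
```

A flagged ratio triggers the human spot-checks described above rather than any automatic model change.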
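The drift-monitoring bullet can be pictured the same way. The second sketch compares recent score distributions against a baseline using a population stability index (PSI) and raises an alert above a conventional cutoff; the bucket count, threshold, and alerting hook are assumptions for illustration only.

```python
import math

# Minimal sketch of scoring-drift monitoring via the population
# stability index (PSI). Thresholds and bucketing are illustrative.

PSI_ALERT_THRESHOLD = 0.2  # conventional cutoff for a significant shift

def histogram(scores, buckets=10, lo=0.0, hi=1.0):
    """Bucket scores into fixed-width bins, returned as proportions."""
    counts = [0] * buckets
    for s in scores:
        idx = min(int((s - lo) / (hi - lo) * buckets), buckets - 1)
        counts[idx] += 1
    total = max(len(scores), 1)
    # Small floor avoids log-of-zero for empty bins.
    return [max(c / total, 1e-6) for c in counts]

def psi(baseline_scores, recent_scores):
    """Population stability index between two score samples."""
    base = histogram(baseline_scores)
    recent = histogram(recent_scores)
    return sum((r - b) * math.log(r / b) for b, r in zip(base, recent))

def check_drift(baseline_scores, recent_scores):
    """Print an alert and return the PSI value for logging."""
    value = psi(baseline_scores, recent_scores)
    if value > PSI_ALERT_THRESHOLD:
        print(f"ALERT: scoring drift detected (PSI={value:.3f})")
    return value
```

In this picture, an alert prompts the quarterly review process rather than replacing it.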
Employer Responsibilities
Employers agree to use AI feedback as advisory input alongside interviews, references, and work samples. Any final hiring decision must be made by a human hiring manager.
- Share fairness commitments in candidate communication
- Offer accommodations when candidates request alternatives
- Track outcomes and report suspected bias to AptiaWork
Continuous Improvement
This document is a living record. We update it as models improve, regulations evolve, and we learn from employers and candidates.
Last reviewed: January 2025
