Our principles

No customer data in training.
We don't use any customer data in model training. All training data is sourced separately, through a combination of external annotators and synthetic data generation.

Human in the loop.
Every AI evaluation that impacts a hiring decision requires a human in the loop to review and act on it. There are no fully automated workflows for selection or rejection.

Ensure fairness.
AI features undergo bias audits every six months to ensure they are not biased against any race, ethnicity, or gender.

Keep things interpretable.
Every AI output is traceable, explainable, and editable. AI assists, but humans stay in control.
