Curve & Compass Analytics provides AI/ML expert witness and litigation support services for attorneys handling matters involving algorithmic decision-making, regulated AI, and data science methodology.

What We Do

AI/ML systems are making consequential decisions at scale. When those decisions are challenged, courts need experts who can evaluate not just what the model did, but whether it was built, validated, and deployed responsibly.

Curve & Compass Analytics offers technically rigorous expert analysis grounded in direct experience evaluating AI/ML systems in regulatory contexts.

Case Types We Support

— AI/ML discrimination and proxy discrimination claims

— Algorithmic fairness disputes across industries

— AI governance and model validation failures

— Bad faith claims involving AI-assisted claims handling

— Broader ML/data science methodology disputes

We accept plaintiff, defense, and neutral engagements. Every matter is screened for conflicts prior to retention.

What We Evaluate

Model Validity
Does the model actually perform its stated function? We examine performance metrics, out-of-sample generalization, and temporal validity — identifying overfitting, model drift, and validation failures.

Feature Appropriateness
What inputs is the model actually using? We use SHAP values, permutation importance, and partial dependence plots to identify which variables are driving predictions — and whether any are functioning as proxies for protected characteristics.
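To illustrate the core idea behind permutation importance, here is a minimal, self-contained sketch. The "model" is a hand-coded toy rule and the data is synthetic — both are illustrative assumptions, not a real deployed artifact — and a fixed reversal stands in for the random shuffles a real analysis would average over:

```python
# Minimal sketch of permutation importance: permute one feature column
# and measure the drop in accuracy. A feature the model ignores shows
# zero drop; a feature driving predictions shows a large drop.
# The model, data, and fixed permutation below are all illustrative.

def model(row):
    # Toy scoring rule that depends only on feature 0
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature_idx):
    """Drop in accuracy after permuting one feature column.
    Uses a deterministic reversal for reproducibility; real analyses
    average over many random shuffles."""
    baseline = accuracy(rows, labels)
    col = [r[feature_idx] for r in rows][::-1]
    permuted = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                for r, v in zip(rows, col)]
    return baseline - accuracy(permuted, labels)

rows = [(0.9, 0.1), (0.8, 0.7), (0.2, 0.9), (0.1, 0.3)]
labels = [1, 1, 0, 0]

imp0 = permutation_importance(rows, labels, 0)
imp1 = permutation_importance(rows, labels, 1)
print(f"feature 0 importance: {imp0:.2f}")  # 1.00 — feature 0 drives every prediction
print(f"feature 1 importance: {imp1:.2f}")  # 0.00 — feature 1 is ignored by the rule
```

In an actual engagement, this same logic is applied to the deployed model and production-representative data, which is why the trained model artifact and feature dictionaries matter in discovery.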

Fairness Analysis
We apply multiple fairness metrics — demographic parity, equalized odds, calibration, and individual fairness — and explain the legal significance of any divergence among them.
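Two of these metrics can be computed directly from predictions, labels, and group membership. The sketch below uses synthetic data to show a demographic parity gap alongside one component of equalized odds (the true positive rate gap) — and how the two can tell different stories:

```python
# Minimal sketch: demographic parity gap and TPR gap (one component of
# equalized odds) from binary predictions. All data is synthetic.

def selection_rate(preds, group, g):
    """Fraction of group g receiving a positive prediction."""
    rows = [p for p, grp in zip(preds, group) if grp == g]
    return sum(rows) / len(rows)

def true_positive_rate(preds, labels, group, g):
    """TPR within group g: of those who truly qualified, how many
    did the model approve?"""
    pos = [(p, y) for p, y, grp in zip(preds, labels, group)
           if grp == g and y == 1]
    return sum(p for p, _ in pos) / len(pos)

# Synthetic binary predictions, ground-truth labels, group membership
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

dp_gap  = abs(selection_rate(preds, group, "A")
              - selection_rate(preds, group, "B"))
tpr_gap = abs(true_positive_rate(preds, labels, group, "A")
              - true_positive_rate(preds, labels, group, "B"))

print(f"demographic parity gap: {dp_gap:.2f}")   # 0.20
print(f"TPR gap: {tpr_gap:.2f}")                 # 0.33
```

Note that the two gaps differ here — a model can be close to parity on selection rates while still missing qualified members of one group, which is exactly the kind of divergence whose legal significance needs expert explanation.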

Robustness Testing
We conduct sensitivity analysis and adversarial stress testing to determine whether the model behaves consistently across demographic subgroups and whether it can be gamed by modest input changes.
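A basic sensitivity check asks: does a small, plausible change to one input flip the decision? The toy credit-style scoring rule, cutoff, and perturbation size below are illustrative assumptions, but the pattern — fragile near the decision boundary, stable far from it — is what the analysis looks for:

```python
# Minimal sketch of a sensitivity check: nudge one input by a modest
# amount and see whether the decision flips. The scoring rule, cutoff,
# and delta are illustrative, not any real deployed model.

def score(income, debt_ratio):
    # Toy linear credit-style score
    return 0.6 * (income / 100_000) - 0.4 * debt_ratio

def decision(income, debt_ratio, cutoff=0.25):
    return score(income, debt_ratio) >= cutoff

def flips_under_perturbation(income, debt_ratio, income_delta):
    """True if shifting income by +/- income_delta changes the decision."""
    base = decision(income, debt_ratio)
    return any(decision(income + d, debt_ratio) != base
               for d in (-income_delta, income_delta))

# An applicant near the decision boundary is fragile:
near_boundary = flips_under_perturbation(62_000, 0.30, income_delta=2_000)
# An applicant far from the boundary is stable:
far_from_boundary = flips_under_perturbation(120_000, 0.10, income_delta=2_000)
print(near_boundary, far_from_boundary)  # True False
```

Run systematically across demographic subgroups, the same check reveals whether decision fragility is concentrated in one population — and whether modest, gameable input changes can flip outcomes.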

Governance and Documentation
We evaluate pre-deployment validation, ongoing monitoring practices, change management records, and third-party vendor model oversight — identifying governance failures that create legal exposure.

For Counsel: Typical Discovery Items

Attorneys handling AI/ML matters are often unsure what to request. Beyond source code — which is usually insufficient on its own — a comprehensive discovery request should include:

— The trained model artifact at the deployed version

— Training data and preprocessing pipelines

— Pre-deployment validation reports

— Feature dictionaries

— Model monitoring logs

— Internal communications about model performance or known limitations

— Third-party vendor contracts and model documentation

Engagement Process

  1. Initial consultation (no charge) — we review the matter and assess fit

  2. Conflict screening — all parties are screened before engagement

  3. Retention agreement — standard terms

  4. Analysis and reporting — empirical model evaluation, written expert report

  5. Deposition and trial support as needed

Hourly rates and retainer requirements provided upon inquiry.
