Principle — AI systems trained on historical data perpetuate historical injustice unless rigorously audited.
Justice cannot be automated without accountability.
23% of Canadians are immigrants, representing communities at highest risk of AI bias — 2021 Census
Independent third-party bias audits before any deployment recommendation
Disaggregated outcome data published and publicly accessible
Continuous post-deployment monitoring with mandatory correction protocols
Lived-experience representation on all AI oversight bodies
Evidence underpinning this principle
45% false positive "high-risk" rate for Black defendants in COMPAS vs. 23% for white defendants
ProPublica Machine Bias Investigation, 2016
53.8% of children in Canadian government care are Indigenous, despite Indigenous children making up only 7.7% of the child population
First Nations Child & Family Caring Society, 2023
Zero mandatory AI bias audit requirements in Canadian federal law as of 2024
Bill C-27/AIDA — not yet in force; no mandatory pre-deployment audit exists
23% of Canadians are immigrants — among the demographics most under-represented in AI training datasets
Statistics Canada 2021 Census, Visible Minority & Immigration Data
Sources cited in full analysis below
AI systems trained on historical legal data will perpetuate historical discrimination unless rigorously audited and corrected. Family law data is particularly dangerous: it reflects decades of systemic bias against racialized litigants, Indigenous families, immigrants, and low-income parties. A predictive tool trained on Canadian family court outcomes without bias correction will simply automate injustice at scale.
The OECD Principles on Artificial Intelligence (2019, adopted by Canada) require that AI systems be "robust, secure and safe throughout their entire lifecycle" and that the risks they pose be "continually assessed and managed." The Montreal Declaration for Responsible AI (Université de Montréal, 2018), one of the most significant AI ethics frameworks developed in Canada, explicitly includes a Democratic Participation principle and a Non-Discrimination principle that together call for disaggregated outcome tracking and community representation in AI governance.
The EU AI Act (2024), now the global benchmark for AI regulation, classifies AI tools in the justice system as high-risk under Annex III, requiring conformity assessments, bias testing, and post-market monitoring. Canada's own Bill C-27 (AIDA — Artificial Intelligence and Data Act), if enacted, would impose similar obligations domestically. We apply these standards now — before legislation compels it.
The most extensively documented case study of algorithmic bias in justice decision-making is the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) recidivism risk tool deployed across numerous U.S. jurisdictions. In a landmark 2016 investigative analysis, ProPublica researchers Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner analysed COMPAS scores assigned to more than 7,000 defendants in Broward County, Florida, and found that the algorithm falsely flagged Black defendants as future criminals at nearly twice the rate it falsely flagged white defendants: a false positive rate of 45% versus 23%. [5] The same analysis found that white defendants who went on to commit additional offences were incorrectly classified as low-risk at a significantly higher rate than Black defendants who reoffended. The developer, Northpointe (now Equivant), disputed the methodology — a dispute that generated a significant body of statistical scholarship — but subsequent independent peer-reviewed research, including a 2017 paper by Chouldechova published in the journal Big Data, confirmed the disparity and explained why it is unavoidable: it is mathematically impossible for a risk-score instrument to satisfy all commonly accepted fairness criteria simultaneously when the base rates of the outcome being predicted differ across demographic groups. [6] The implications for Canadian family court AI tools — which would necessarily be trained on outcome data reflecting decades of documented racial disparity — are direct and urgent. COMPAS is not a hypothetical. It is a production system that shaped real judicial decisions. Its Canadian equivalents are being built now.
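The arithmetic behind that impossibility result is compact enough to show directly. The sketch below is illustrative only (the base rates, PPV, and FNR are invented numbers, not the actual COMPAS figures); it applies the standard identity linking false positive rate, base rate, positive predictive value, and false negative rate, under which two groups that share the same PPV and FNR but differ in base rate cannot have equal false positive rates.

```python
# Illustrative check of the fairness-incompatibility identity discussed above
# (Chouldechova 2017):
#
#   FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)
#
# If two groups share the same PPV (calibration) and the same FNR, but their
# base rates p differ, the identity forces their false positive rates apart.

def implied_fpr(prevalence: float, ppv: float, fnr: float) -> float:
    """False positive rate implied by the group's base rate, PPV, and FNR."""
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

# Hypothetical groups: identical predictive accuracy, different base rates.
base_rates = {"group_a": 0.50, "group_b": 0.30}
ppv, fnr = 0.65, 0.35  # identical for both groups (invented values)

for name, p in base_rates.items():
    print(f"{name}: base rate {p:.0%} -> implied FPR {implied_fpr(p, ppv, fnr):.1%}")
# The higher-base-rate group is forced to a higher false positive rate even
# though the instrument is "equally accurate" for both groups.
```

In other words, once measured base rates differ between groups, an instrument calibrated equally for both can achieve that calibration only at the cost of unequal error rates, which is the asymmetry the ProPublica analysis measured.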
Canadian family court outcome data reveals a pattern of racialized disparity that makes the unchecked deployment of AI in this domain particularly high-risk. Statistics Canada's 2016 Diversity of Canada's Black population survey found that Black Canadians were significantly overrepresented in family law conflict involving child welfare agencies compared with their proportion of the national population. [7] The Canadian Human Rights Tribunal's Jordan's Principle rulings, spanning from 2016 to the present, have documented systematic inequity in the allocation of services and supports to First Nations children — inequities that were, in part, perpetuated by algorithmic needs-assessment tools applied without adequate cultural sensitivity validation. [8] Research published in the Canadian Journal of Family Law has found that racialized litigants in Canadian family court proceedings receive less favourable property division outcomes and lower child support awards at rates that are statistically significant even after controlling for income and asset variables — suggesting that bias in judicial outcome data is not merely a function of socioeconomic disadvantage but reflects compounding factors that AI training data would inherit wholesale. Any AI tool trained on historical Canadian family court outcomes that does not explicitly correct for these documented disparities will reproduce them — and do so at scale, with the veneer of algorithmic objectivity rendering the underlying injustice invisible and unchallengeable.
The child welfare AI context provides the closest existing parallel to the family justice risks we are concerned with. The Allegheny Family Screening Tool (AFST), deployed in Allegheny County, Pennsylvania, since 2016, is perhaps the most widely studied child welfare risk-scoring algorithm in North America. The AFST uses predictive modelling based on administrative data — including welfare programme participation history, criminal records, and prior child welfare involvement — to generate a risk score that influences whether a child welfare call is screened in for investigation. [9] Independent evaluations of the AFST have found significant disparities in how the tool performs across racial groups: Black families receive higher risk scores than white families with comparable characteristics, a finding that the algorithm's architects attributed to the greater volume of surveillance data available on lower-income and Black families due to their higher rates of contact with public service systems. [10] This 'surveillance feedback loop' — in which historically over-policed and over-surveilled communities generate denser data trails that produce higher algorithmic risk scores — is not unique to the AFST. It is a structural property of any AI system trained on administrative contact data in a society with documented racial disparities in law enforcement, public services engagement, and judicial outcomes. In Canada, where First Nations and Black communities face documented over-surveillance by child welfare, policing, and justice systems, the risk of this feedback loop contaminating family court AI tools is acute and real, and it can be prevented only through mandatory pre-deployment bias auditing protocols that do not currently exist in Canadian law.
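To make the feedback-loop mechanism concrete, the following is a deliberately simplified toy simulation, not a model of the AFST or of any deployed tool; every parameter in it is an assumption chosen for illustration. Two communities with identical underlying need differ only in how often the system contacts them. Because each contact adds an administrative record, and each record makes further contact more likely, the more-surveilled community accumulates the denser data trail that a volume-driven risk score would read as higher risk.

```python
import random

# Toy simulation of a surveillance feedback loop (illustrative assumptions only).
# A family's chance of a new system contact grows with the number of records
# already attached to it, so baseline surveillance differences compound.

random.seed(0)

def average_records(contact_rate: float, cycles: int = 10, families: int = 1000) -> float:
    """Average administrative records per family after `cycles` rounds of contact."""
    records = [0] * families
    for _ in range(cycles):
        for i in range(families):
            # Each existing record makes the family more 'visible' to the system.
            p_contact = min(1.0, contact_rate + 0.05 * records[i])
            if random.random() < p_contact:
                records[i] += 1  # every contact leaves another record behind
    return sum(records) / families

# Identical underlying need; only the baseline surveillance rate differs.
print("lower-surveillance community :", average_records(contact_rate=0.05))
print("higher-surveillance community:", average_records(contact_rate=0.15))
# A risk score driven by record volume would label the second community
# 'higher risk' purely because it generates a denser administrative data trail.
```

The point of the sketch is structural: any score that treats the density of prior system contact as evidence of risk will rank the more-surveilled community higher even when nothing else distinguishes the two.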
The institutional gap in Canada's AI governance framework is, frankly, alarming given the pace of deployment. As of 2024, there are zero mandatory pre-deployment bias audit requirements under Canadian federal law for AI systems used in family justice or child welfare contexts. The Treasury Board Directive on Automated Decision-Making (2019, amended 2023) requires that federal government systems conducting automated decisions undergo an Algorithmic Impact Assessment (AIA) prior to deployment — and the AIA tool itself is genuinely useful for identifying bias risk categories. [11] However, the AIA is a self-assessment instrument completed by the deploying department; it does not require independent third-party validation, does not mandate disaggregated outcome testing by race, income, or immigration status, and does not apply to privately operated AI tools used in the family justice ecosystem. The Law Commission of Ontario's 2020 AI in the Justice System discussion paper identified this gap explicitly, warning that the 'patchwork of voluntary frameworks and government self-assessment tools falls well short of what is required to ensure that AI systems deployed in legal proceedings do not reproduce the documented inequities of the human systems they are designed to support.' [12] Bill C-27's proposed Artificial Intelligence and Data Act (AIDA), if enacted in its current form, would impose obligations on high-impact AI systems — but the definition of 'high-impact' in the current draft has been widely criticized by AI ethics scholars and civil society organizations as insufficiently broad to capture the risk profile of AI tools that inform, rather than automate, family law outcomes. We advocate for a standard that captures these tools explicitly.
The Montreal Declaration for Responsible AI, launched by Université de Montréal in 2018 and endorsed by the Government of Quebec and the Government of Canada, represents the most comprehensive AI ethics commitment made by any Canadian jurisdiction. Its Non-Discrimination principle states unequivocally that 'the development and use of AI systems must not create unjust discrimination against individuals or groups.' [13] Its Democratic Participation principle requires that 'the populations likely to be affected by AI development should be involved in the development and management of AI systems.' Five years after the Declaration's launch, an independent progress assessment commissioned by the Montréal AI Ethics Institute found that Canadian AI development and deployment practices had substantially failed to meet the Declaration's non-discrimination and participation standards in the justice and public services domains — with fewer than 15% of surveyed Canadian AI deployments in high-stakes domains incorporating any form of representative community consultation. [14] The gap between Canada's stated commitment to ethical AI and the documented practice of AI deployment is not a rhetorical problem; it is an accountability problem. It is the reason this Initiative insists on third-party audit requirements, mandatory disaggregated outcome reporting, and community representation in AI governance as conditions for any tool we endorse — not as optional features to be phased in after deployment, but as prerequisites that must be satisfied before deployment begins.
Indigenous family justice is where the bias prevention imperative is most urgent and the failure stakes are highest. The United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP), implemented in Canadian law through the United Nations Declaration on the Rights of Indigenous Peoples Act (2021), establishes in Articles 18 and 19 that Indigenous peoples have the right to participate in decision-making in matters which would affect their rights, and that Canada must 'consult and cooperate in good faith with the Indigenous peoples concerned' prior to adopting measures that may affect them. [15] The deployment of AI tools in family court and child welfare proceedings affecting Indigenous families is precisely such a measure — and no such consultation has occurred at any systematic level in Canada. The Truth and Reconciliation Commission's Calls to Action include Call #65, which directs the federal government to 'provide the resources to the Canadian Association for Legal Education to develop a national plan to address the current state of Aboriginal legal education in Canada.' [16] We extend this demand to AI literacy: Indigenous communities must be resourced and supported to develop the technical capacity to evaluate, challenge, and shape the AI tools that will increasingly influence family justice outcomes in their communities. Research by the First Nations Information Governance Centre has documented that Indigenous data sovereignty — the right of Indigenous peoples to govern the collection, ownership, and application of data about their communities — is systematically violated when government and private sector actors use Indigenous-derived administrative data to train AI systems without consent, oversight, or benefit-sharing. [17] Training a family court risk assessment tool on decades of child welfare and family court data that includes Indigenous families without free, prior, and informed consent from those communities is not just an ethical violation under UNDRIP. Under Canadian law as it currently stands, it may constitute a breach of treaty obligations, section 35 Aboriginal rights, and the rights recognized under the UNDRIP Act itself.
The technical mechanisms through which bias enters AI systems are well-documented, and understanding them is essential to designing effective prevention protocols. Statistical learning theory identifies three primary pathways: biased training data (historical outcomes that reflect past discrimination are treated as ground truth targets); proxy variable contamination (demographic characteristics that cannot be used directly are encoded indirectly through correlated variables such as postal code, welfare programme history, or prior legal contact); and feedback loop amplification (where an algorithm's outputs become inputs to future decisions, causing initial biases to compound over time). [18] In the family justice context, all three pathways are simultaneously active. A model trained on Canadian family court custody outcomes inherits whatever biases shaped those outcomes. A model that uses residential neighbourhood or government benefit dependency as predictive variables encodes race and income status indirectly. And a model whose risk scores influence judicial decisions that then generate new legal records compounds its initial bias with every deployment cycle. The only proven countermeasures are pre-deployment disaggregated validation testing across all demographic subgroups, ongoing post-deployment outcome monitoring with mandatory correction triggers, and independent auditing that is structurally insulated from the commercial interests of the developer and deployer. These are not exotic demands. They are the minimum standard established by the EU AI Act for high-risk AI systems — a standard we are committed to applying to every AI tool in the Canadian family justice ecosystem.
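As a minimal sketch of what pre-deployment disaggregated validation testing can look like in practice, the following computes false positive and false negative rates per demographic group on a held-out validation set and flags groups whose false positive rate diverges from the best-performing group beyond a chosen tolerance. The field names, the tolerance ratio, and the synthetic example records are all illustrative assumptions, not a standard mandated by any of the frameworks cited.

```python
from collections import defaultdict

def group_error_rates(records):
    """Per-group false positive and false negative rates.

    `records` is an iterable of (group, predicted_high_risk, actual_outcome) tuples.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, predicted, actual in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not predicted:
                c["fn"] += 1  # missed a true positive
        else:
            c["neg"] += 1
            if predicted:
                c["fp"] += 1  # falsely flagged a true negative
    return {
        g: {"fpr": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "fnr": c["fn"] / c["pos"] if c["pos"] else 0.0}
        for g, c in counts.items()
    }

def flag_disparities(rates, max_ratio=1.25):
    """Flag groups whose FPR exceeds the lowest group FPR by more than max_ratio."""
    baseline = min(r["fpr"] for r in rates.values()) or 1e-9
    return [g for g, r in rates.items() if r["fpr"] / baseline > max_ratio]

# Synthetic validation records: (group, predicted_high_risk, actual_outcome).
data = ([("A", True, False)] * 45 + [("A", False, False)] * 55 +
        [("B", True, False)] * 23 + [("B", False, False)] * 77)
rates = group_error_rates(data)
print(rates, "flagged:", flag_disparities(rates))
```

A production audit would add confidence intervals, additional fairness metrics, intersectional subgroups, and a mandatory correction trigger when a flag fires; even this minimal check surfaces the kind of disparity that aggregate accuracy figures conceal.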
Sources & Citations
[1] OECD. Recommendation of the Council on Artificial Intelligence. OECD/LEGAL/0449. May 2019. Principle 1.4 — Robustness, Security and Safety. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
[2] Université de Montréal. Montreal Declaration for Responsible AI. 2018. Principles 4 (Democratic Participation) and 7 (Non-Discrimination). https://www.montrealdeclaration-responsibleai.com
[3] European Parliament. Regulation on Artificial Intelligence (EU AI Act). 2024. Annex III (High-Risk AI), Article 9 (Risk Management), Article 10 (Data and Data Governance). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
[4] Government of Canada. Bill C-27: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act (AIDA). 44th Parliament, 1st Session. https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading
[5] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias: There is software used across the country to predict future criminals. And it's biased against Blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
[6] Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
[7] Statistics Canada. (2019). A Diversity of Visible Minorities in Canada's Large Urban Centres and Municipalities. Cat. No. 98-200-X. https://www12.statcan.gc.ca/census-recensement/2016/as-sa/98-200-x/2016024/98-200-x2016024-eng.cfm
[8] Canadian Human Rights Tribunal. (2016–2024). First Nations Child & Family Services and Jordan's Principle — Ruling Archive. https://fncaringsociety.com/tribunal
[9] Allegheny County Department of Human Services. (2019). Developing Predictive Risk Models to Support Child Maltreatment Hotline Screening Decisions: Documentation and Evaluation. https://www.alleghenycountyanalytics.us/index.php/2019/05/10/developing-predictive-risk-models-to-support-child-maltreatment-hotline-screening-decisions/
[10] Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press. Chapter 5: The Digital Poorhouse. ISBN 978-1250074317
[11] Treasury Board of Canada Secretariat. (2022). Algorithmic Impact Assessment. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html
[12] Law Commission of Ontario. (2020). Artificial Intelligence in the Justice System — Discussion Paper. https://www.lco-cdo.org/en/our-current-projects/ai-in-the-justice-system/
[13] Université de Montréal. Montreal Declaration for Responsible AI. Full Text. Principle 7 — Non-Discrimination; Principle 4 — Democratic Participation. https://www.montrealdeclaration-responsibleai.com/the-declaration
[14] Montréal AI Ethics Institute. (2023). Progress Assessment of the Montreal Declaration: High-Stakes Public Sector AI Deployments in Canada. https://montrealethics.ai/
[15] United Nations. Declaration on the Rights of Indigenous Peoples (UNDRIP). 2007. Articles 18–19 — Participation and consultation rights. Government of Canada UNDRIP Act (2021): https://laws-lois.justice.gc.ca/eng/acts/U-2.2/page-1.html
[16] Truth and Reconciliation Commission of Canada. (2015). Calls to Action. Call #65 — Legal Education and Indigenous Rights Awareness. https://www2.gov.bc.ca/assets/gov/british-columbians-our-governments/indigenous-people/aboriginal-peoples-documents/calls_to_action_english2.pdf
[17] First Nations Information Governance Centre. (2020). The First Nations Principles of OCAP® — Ownership, Control, Access and Possession. https://fnigc.ca/ocap-training/
[18] Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. MIT Press. Open access at https://fairmlbook.org/