Radical Transparency
Principle 03 of 4

Principle — Black-box decision-making is incompatible with natural justice.


Families deserve to understand the systems that affect them.

Section 7 of the Canadian Charter — right to procedural fairness in AI-influenced decisions


Explainability requirements as a condition of deployment

Right to human review of any AI-influenced determination

Public access to algorithm design documentation

Annual transparency reports on system performance and failures

Key Statistics

Evidence underpinning this principle

1. 62% of self-represented litigants could not explain the legal basis of the decision in their family court case immediately after the hearing (NSRLP — Macfarlane, Law Foundation of Ontario, 2013).

2. Fewer than 30% of family court decisions are ever published in a searchable legal database (CanLII case count vs. Statistics Canada family court filing data, 2022).

3. 58% of parties to resolved family disputes did not understand the basis for the parenting orders made (Australian Institute of Family Studies, Research Report No. 31, 2022).

4. 73% of Canadian family law matters involve at least one unrepresented party (Canadian Forum on Civil Justice, 2022).

Sources cited in full analysis below

astraea-pro.com/community/balanced-family-justice

Full Analysis

Any AI tool influencing a family law outcome must be explainable to the parties it affects. When an algorithm informs a judge's assessment of parenting risk, the affected parent has both a legal and moral right to understand how that assessment was produced. When an AI predicts child support amounts, both parties deserve visibility into the assumptions and data driving that output.
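
To make that demand concrete, the sketch below shows what a minimally explainable support calculation could disclose to both parties: every input, every weight, and each factor's contribution to the final figure. The feature names, coefficients, and dollar amounts are invented for this illustration only; they do not describe any deployed tool or actual guideline.

    # Hypothetical sketch of an explainable support prediction.
    # All feature names, weights, and amounts are invented for illustration;
    # they do not describe any real tool or guideline.

    FEATURE_WEIGHTS = {
        "payor_annual_income": 0.011,    # monthly dollars per dollar of annual income
        "number_of_children": 185.0,     # flat monthly amount per child
        "parenting_time_share": -320.0,  # reduction as the payor's share of time rises (0 to 1)
    }
    BASE_AMOUNT = 50.0

    def predict_with_explanation(inputs):
        """Return the predicted amount plus each factor's contribution, so nothing is hidden."""
        contributions = {name: FEATURE_WEIGHTS[name] * value for name, value in inputs.items()}
        total = BASE_AMOUNT + sum(contributions.values())
        return {
            "predicted_monthly_support": round(total, 2),
            "base_amount": BASE_AMOUNT,
            "contributions": {k: round(v, 2) for k, v in contributions.items()},
            "inputs_used": inputs,
        }

    print(predict_with_explanation({
        "payor_annual_income": 62_000,
        "number_of_children": 2,
        "parenting_time_share": 0.40,
    }))

The model itself is beside the point; what matters is that every assumption driving the number is visible to, and contestable by, the people it affects.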

The Government of Canada's own Directive on Automated Decision-Making requires that, for the highest-impact decisions, departments provide 'meaningful explanations' of automated systems — a standard we apply to every AI tool in the family justice context regardless of whether the operator is a government entity. The EU AI Act (2024) goes further: Article 13 (Transparency) mandates that high-risk AI systems enable users 'to correctly interpret the system's output and use it appropriately,' while Article 14 (Human Oversight) requires that humans remain able to override system outputs.

Canada's proposed Artificial Intelligence and Data Act (AIDA) establishes transparency obligations for high-impact AI systems. We advocate for these obligations to extend to any AI system informing family justice outcomes, including privately operated ODR platforms, predictive analytics tools, and decision-support software. The right to an explanation is not a technical nicety — it is a requirement of procedural fairness enshrined in section 7 of the Canadian Charter.
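
A minimal sketch of the oversight pattern these obligations point toward is shown below. The field names and workflow are hypothetical, invented for illustration rather than drawn from the EU AI Act, the Directive, or any deployed system: the AI output carries a plain-language explanation and has no effect until a named human reviewer confirms or overrides it.

    # Hypothetical sketch of a human-oversight gate for an AI-influenced determination.
    # Field names and workflow are invented for illustration only.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AIDetermination:
        case_id: str
        ai_output: float                          # e.g. a predicted amount or a risk score
        plain_language_explanation: str           # disclosed to the parties before any use
        reviewer: Optional[str] = None            # named human reviewer
        reviewer_decision: Optional[str] = None   # "confirmed" or "overridden"
        override_value: Optional[float] = None    # substituted value if overridden

        def finalize(self) -> float:
            # The output has no legal effect until a human has reviewed the
            # explanation and explicitly confirmed or overridden it.
            if self.reviewer is None or self.reviewer_decision not in ("confirmed", "overridden"):
                raise RuntimeError("AI output cannot take effect without documented human review.")
            if self.reviewer_decision == "overridden":
                if self.override_value is None:
                    raise RuntimeError("An override must record the substituted determination.")
                return self.override_value
            return self.ai_output

    d = AIDetermination(
        case_id="FAM-2024-0001",
        ai_output=974.0,
        plain_language_explanation="Amount reflects income, number of children, and parenting time.",
    )
    d.reviewer, d.reviewer_decision = "Reviewer J. Example", "confirmed"
    print(d.finalize())   # 974.0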

The scale of this accountability gap in Canada's family courts is staggering — and largely invisible to the public. Research conducted by the National Self-Represented Litigants Project (NSRLP) at Windsor Law found that in Ontario, British Columbia, and Alberta, self-represented litigants — who now comprise the majority of family court participants — consistently reported receiving oral decisions from the bench with no written reasons provided, no opportunity to ask questions about the rationale, and no clear statement of what legal test had been applied. [5] Professor Julie Macfarlane's landmark study, The National Self-Represented Litigants Project: Identifying and Meeting the Needs of Self-Represented Litigants (Law Foundation of Ontario, 2013), documented that 62% of self-represented litigants who attended family court proceedings could not explain the legal basis for the decision made in their case — even immediately after leaving the courtroom. [6] This is not a marginal failure in an otherwise functioning system. The Canadian Forum on Civil Justice estimates that approximately 73% of family law matters in Canada involve at least one unrepresented party, meaning that the overwhelming majority of Canadian family court decisions are rendered to litigants who lack both the legal training to demand a fuller explanation and the financial resources to pursue an appeal of outcomes they cannot understand. [7]

The published decision rate compounds this crisis to a degree that most Canadians would find shocking. CanLII — the Canadian Legal Information Institute's freely accessible repository of Canadian case law — contained approximately 53,000 Canadian family law decisions across all provinces and territories as of 2023. [8] Yet Statistics Canada data indicates that Canadian family courts process well over 180,000 new family law applications annually across provincial and superior courts combined. [9] This means that fewer than 30% of family court decisions are published at all — and a published written decision represents a small fraction of the procedurally significant family court interactions that affect custody, parenting time, child support, and protection orders. The vast majority of decisions affecting children and families are made orally in courtrooms, recorded imperfectly if at all, and never entered into any searchable accountability record. When AI systems trained on CanLII case data claim to predict family court outcomes, they are building their predictive models on fewer than one-third of actual decisions — a statistically biased sample composed almost entirely of contested, fully adjudicated matters, completely excluding the enormous universe of informal bench dispositions, case management endorsements, and oral orders that constitute the daily lived reality of family justice for ordinary Canadians.
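
The arithmetic behind the 'fewer than 30%' figure, and its consequence for any predictive model trained only on published decisions, can be restated in a few lines using the rounded estimates cited above; the calculation simply reproduces the comparison made in the text.

    # Coverage arithmetic using the rounded figures cited above.
    published_decisions = 53_000   # family law decisions on CanLII as of 2023 (per the text)
    annual_filings = 180_000       # new family law applications filed per year (per the text)

    coverage = published_decisions / annual_filings
    print(f"Published share: {coverage:.1%}")   # roughly 29.4%, i.e. fewer than 30%

    # A model trained only on these published cases never sees the oral rulings,
    # endorsements, and consent outcomes that make up the remaining 70%+ of matters,
    # so its predictions describe the minority of fully contested, written-reasons cases.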

The problem extends globally, and the international evidence base is equally damning. The World Justice Project's Rule of Law Index 2023, which surveys over 149,000 households and 3,400 expert practitioners in 142 countries, measures 'Civil Justice — Access and Affordability' as a core dimension. Canada ranks 13th overall but scores meaningfully lower on the sub-dimension of equitable access to civil justice — reflecting the documented reality that access to explained, reasoned judicial decisions is deeply income-stratified. [10] In the United Kingdom, the Ministry of Justice's Family Court Statistics Quarterly (2023) documented that approximately 45% of private law family proceedings concluded at first instance without a judge producing written reasons — a figure that rises to over 60% for hearings in lower-tier family courts and magistrates' courts. [11] Australian research is equally troubling: the Australian Institute of Family Studies' 2022 review of the Federal Circuit and Family Court of Australia found that 58% of parties to resolved family disputes reported not understanding the basis on which parenting orders had been made, with First Nations and Torres Strait Islander families disproportionately represented in that figure — a pattern that precisely mirrors the inequities documented in the Canadian context. [12] These are not coincidences. They are structural features of adversarial family justice systems that were never designed around the comprehension needs of the people who must live with their outcomes.

The emerging infrastructure of Online Dispute Resolution (ODR) platforms introduces a new and more dangerous layer of opacity: algorithmic black-box decision-making dressed in the procedural clothing of neutral adjudication. British Columbia's Civil Resolution Tribunal (CRT), the first online tribunal in Canada, processes thousands of claims annually through an assisted negotiation and adjudication platform widely regarded as a positive access-to-justice innovation. However, researchers have documented that parties to CRT proceedings frequently report being unable to understand how the 'Solution Explorer' — an AI-assisted triage and guidance tool — reached its preliminary assessment of their dispute. [13] When preliminary assessments influence negotiated settlements, as they routinely do in the shadow of adjudication, the opacity of the underlying tool is not merely academic: it shapes real legal outcomes for real people who believe they are receiving neutral, accurate legal guidance when no such assurance has ever been made. The Law Commission of Ontario's work on AI and access to justice warned explicitly that the deployment of AI tools in legal dispute resolution without robust explainability requirements risks creating a two-tier justice system in which explanation is available to those who can afford legal counsel and functionally invisible to everyone else — a division that falls overwhelmingly along lines of income, race, immigration status, and Indigenous identity. [14]

Canada's domestic legal framework for explanation rights remains fragmented and constitutionally insufficient. The Supreme Court of Canada's landmark decision in Baker v. Canada (Minister of Citizenship and Immigration) [1999] 2 SCR 817 established that the duty of procedural fairness includes a right to written reasons in certain administrative decision-making contexts — but Baker has been applied inconsistently to family court proceedings, and courts have repeatedly held that oral reasons delivered from the bench satisfy the duty of fairness even in cases with profound and lasting consequences for children and parents. [15] The Federal Court of Appeal's decision in VIA Rail Canada Inc. v. National Transportation Agency [2001] 2 FC 25 clarified that reasons must be 'adequate' — but 'adequate' in the family court context has been interpreted so permissively that bench endorsements consisting of two or three sentences have survived appellate review. This constitutional gap — between the Charter's guarantee of procedural fairness under section 7 and the practical minimum standard required for family court reasons in existing jurisprudence — is precisely the space into which algorithmic opacity now flows unchallenged. The introduction of AI tools that produce risk scores, support predictions, or parenting assessments without any obligation to explain their outputs does not create a new problem; it catastrophically amplifies a pre-existing one that has long harmed families least equipped to fight back.

Statistical data on the populations actually harmed by opacity deficits reveals a pattern of compounding and intersecting disadvantage. Statistics Canada's Canadian Legal Problems Survey (CLPS, 2021) found that Canadians in the lowest income quintile were three times more likely to experience a serious family law legal problem and four times less likely to receive any professional legal help with that problem — meaning that the people least equipped to understand and challenge unexplained judicial decisions are the very people most likely to receive them. [16] The data on Indigenous families is especially urgent: the First Nations Child and Family Caring Society reports that while Indigenous children represent approximately 7.7% of Canada's child population, they account for more than 53% of children in government care — a figure that reflects cumulative decades of child welfare and family court decisions, many of which involved no adequate explanation of the legal reasoning applied and no meaningful space for Indigenous cultural frameworks or customary law to be articulated by the parties, heard by the court, or reflected in the outcome. [17] Research into Indigenous family court experiences has documented that Indigenous litigants in family proceedings regularly receive decisions that fail to address, let alone explain, the relevance of Indigenous customary law and cultural parenting practices to the determination before the court — a compounded violation of the UNDRIP framework, the Calls to Action of the Truth and Reconciliation Commission, and the foundational right of every person to understand the legal system that governs their family's life. [18]

When AI tools are integrated into an already opacity-prone system without mandatory explainability requirements, they do not simply add risk — they systematically remove what little accountability already exists. A predictive risk-assessment tool that rates a parent's likelihood of future harm on a numerical scale — without explaining which inputs drove the score, which variables were weighted most heavily, or why the algorithm classified a particular pattern of behaviour as a risk indicator — produces an outcome that is structurally more opaque than the imperfect oral reasons of a human judge. At least a judge can be questioned, challenged on appeal, and held to account by the parties before them; an algorithm cannot be cross-examined. The European Commission's High-Level Expert Group on AI (AI HLEG) has formally identified 'automation bias' — the well-documented tendency of human decision-makers, including judges and legal professionals, to defer uncritically to algorithmic outputs even when they lack the technical capacity to scrutinize or challenge them — as a primary governance risk in the deployment of AI in the administration of justice. [19] A landmark study by Dressel and Farid, published in Science Advances (2018), found that the widely used COMPAS recidivism risk tool predicted reoffending no more accurately than people with no criminal justice training, underscoring how little the deference routinely extended to algorithmic scores is actually earned. [20] The implications for family court use of AI-generated parenting risk assessments, child support predictions, or custody-outcome probabilities are direct, serious, and demand immediate governance action. Radical Transparency is not an aspirational standard or a future policy goal. It is the minimum threshold of accountability that families, children, and a functioning rule of law are entitled to demand — and that this Initiative is committed to enforcing through every advocacy, research, and reform channel available to us.

Governing Frameworks

EU AI Act Articles 13 & 14 (2024)
Treasury Board Directive on ADM — Appendix C
Canadian Charter s.7
Bill C-27/AIDA Division 3
Baker v. Canada [1999] 2 SCR 817
NSRLP Research Report (2013)
WJP Rule of Law Index 2023

Sources & Citations