Principle 01 of 4

Human-Centred Design

Technology in service of people, not the reverse.

AI must serve families, not systems.

57% of Canadians with a serious legal problem received no professional help (CLPS 2021)


User experience tested with real family justice system participants

Accessibility built in from design stage, not retrofitted

Survivor-led advisory on tools touching violence or trauma

Plain-language by default, legal precision on demand

Key Statistics

Evidence underpinning this principle

1. 57% of Canadians with a serious legal problem received no professional help (Statistics Canada, Canadian Legal Problems Survey, 2021).

2. 73% of family law matters involve at least one unrepresented party (Canadian Forum on Civil Justice, 2022).

3. The lowest-income quintile is 3× more likely to face a family legal problem without any help (CLPS 2021, income-stratified access data).

4. A fully contested family court application in Canada costs an average of $38,000 (Law Commission of Ontario, Costs of Family Proceedings Report, 2020).

Sources cited in full analysis below


Full Analysis

Every AI tool we research, develop, or advocate for must be designed with the end user at its centre — the parent navigating custody disputes alone at midnight, the domestic violence survivor afraid to attend court, the child caught between two households. Technology that does not reduce human suffering has failed its purpose, regardless of its technical sophistication.

The Canadian Directive on Automated Decision-Making (Treasury Board Secretariat, 2019, amended 2023) establishes that any government AI system affecting Canadians must be designed around the needs of those it serves, with particular attention to vulnerable populations. The UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) reinforces this: "AI systems should contribute to the achievement of sustainable development goals and human well-being." These frameworks demand more than user testing — they require that lived experience drive design decisions.[1][2]

We operationalize this principle by requiring that every tool we endorse or develop undergo user testing with real family justice system participants — not legal professionals, but the self-represented litigants, survivors, and children who are the actual end users of these systems.

The scale of the access-to-justice crisis that human-centred design must address is staggering. Statistics Canada's Canadian Legal Problems Survey (CLPS, 2021) found that 57% of Canadians who experienced a serious legal problem — including family law problems — received no professional help of any kind. For the lowest income quintile, the probability of confronting a family law problem without legal representation was three times higher than for the highest income quintile.[3] The Law Commission of Ontario's 2020 report on the costs of family proceedings documented that a fully contested family court application in Canada costs an average of $38,000 — a figure that rises to over $100,000 for trials lasting more than two days.[4] These are not numbers that describe a justice system in modest need of improvement. They describe a system that has catastrophically failed the majority of the Canadians it is constitutionally obligated to serve. Any AI tool that does not begin its design process by confronting this reality — and asking how it concretely reduces these barriers for specific, named populations — is engaging in design theatre, not human-centred design.

The National Self-Represented Litigants Project (NSRLP), founded by Professor Julie Macfarlane at Windsor Law, has produced the most comprehensive Canadian research base on what unrepresented family court litigants actually need from legal technology tools. Macfarlane's foundational 2013 study, based on interviews with over 280 self-represented litigants (SRLs) across Ontario, British Columbia, and Alberta, identified a consistent pattern of technology failure: legal information websites and tools designed without SRL user input consistently failed to address the actual questions SRLs were trying to answer, used legal terminology that SRLs could not decode without counsel, and buried critical practical information — filing deadlines, form requirements, procedural sequences — in structures that assumed the reader had legal training.[5] In follow-up research published in 2020, Macfarlane documented that the adoption of online dispute resolution tools during COVID-19 had significantly worsened access-to-justice outcomes for SRLs who lacked the digital literacy, reliable internet access, or plain-language comprehension skills to navigate the new digital interfaces — outcomes that affected women, racialized communities, Indigenous litigants, seniors, and low-income Canadians at disproportionate rates.[6] The consistent lesson from the NSRLP corpus is that technology built for family justice without meaningful co-design with SRLs will reproduce — and in some respects amplify — the inaccessibility of the systems it claims to improve.

Human-centred design in the family justice context is not a stylistic preference; it is a constitutional requirement. Section 15 of the Canadian Charter of Rights and Freedoms guarantees equality before and under the law and equal protection and benefit of the law without discrimination on grounds including disability, age, national or ethnic origin, and sex.[7] Where an AI tool's design choices — interface language, assumed digital literacy, incompatibility with screen readers, absence of trauma-informed UX protocols — effectively exclude specific protected groups from benefiting from the tool, those design failures are not merely poor practice. They are potential violations of the equality guarantee. The Law Commission of Ontario's 2020 report on AI and the justice system warned explicitly that "accessibility by design is a legal obligation, not an add-on" — and cited multiple examples of Canadian legal technology deployments that had achieved formal regulatory compliance while functionally excluding Deaf, visually impaired, and non-English-speaking users through design decisions made without input from those communities.[8] We hold every tool we endorse to WCAG 2.2 Level AA accessibility compliance as a minimum standard, and require evidence of actual user testing with people who have disabilities — not merely technical compliance audits.
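Automated conformance scanning can serve as the floor beneath that minimum standard. The sketch below is one illustrative way to gate a build on WCAG A/AA rule violations, using Playwright with the open-source axe-core engine; the target URL is a placeholder, and automated rules catch only a subset of WCAG 2.2 AA failures, which is exactly why the paragraph above insists on lived-experience testing as well.

```typescript
// A minimal CI gate for WCAG A/AA conformance using Playwright and axe-core.
// Automated rules cover only part of WCAG 2.2 AA; this is a floor, not a
// substitute for testing with users who have disabilities.
import { chromium } from 'playwright';
import AxeBuilder from '@axe-core/playwright';

async function scanForWcagViolations(url: string): Promise<number> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Restrict the scan to WCAG A/AA rule tags, the minimum standard above.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21aa', 'wcag22aa'])
    .analyze();

  for (const violation of results.violations) {
    console.log(`${violation.id} (${violation.impact}): ${violation.help}`);
  }

  await browser.close();
  return results.violations.length;
}

// The URL is a placeholder for the tool under review.
scanForWcagViolations('https://example.org/intake-form').then((count) => {
  if (count > 0) process.exitCode = 1; // fail the build on any violation
});
```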

The trauma dimension of human-centred design in family justice is frequently underestimated by technology developers who approach the problem as primarily a legal information challenge. Research on the neurobiology of trauma conducted by Bessel van der Kolk (The Body Keeps the Score, 2014) and applied to legal contexts by trauma-informed lawyering scholars including Sarah Katz and Deeya Haldar has established that individuals experiencing acute trauma — which characterizes a significant proportion of family law litigants, particularly domestic violence survivors — process information differently, exhibit distinct decision-making patterns, and are vulnerable to re-traumatization through design elements that would be considered neutral in non-trauma contexts.[9] A form that asks a domestic violence survivor to describe their partner's behaviour using checkbox categories designed for non-adversarial disputes is not a neutral design choice; it is an experience that may trigger re-traumatization, produce inaccurate or incomplete disclosure, and deter the user from completing the task — thereby denying them access to protections they legally qualify for. Human-centred design for family justice AI requires trauma-informed UX protocols developed in consultation with clinical trauma specialists and survivor advocates — not merely legal subject-matter experts. Our commitment includes mandatory survivor advisory representation on design teams for any tool touching violence, abuse, or protection proceedings.

Children are both the population most profoundly affected by family law outcomes and the population most systematically excluded from the design of family justice technology. The United Nations Convention on the Rights of the Child (UNCRC), to which Canada is a signatory, establishes in Article 12 the right of children who are capable of forming views to express those views in matters affecting them, and to have those views given due weight.[10] In the family justice context, this principle has been operationalized through mechanisms like the Office of the Children's Lawyer in Ontario and legal representation of children in contested custody proceedings — but these mechanisms are resource-constrained and reach only a fraction of family court proceedings involving children annually. The design of AI tools that inform or facilitate family law processes involving children has, to our knowledge, been conducted without any systematic child-participant testing or co-design methodology in the Canadian context. This is not a neutral default. When an AI tool produces a parenting plan recommendation, a custody outcome prediction, or a risk assessment without any design input from children or child development specialists, it encodes adult-centric assumptions into processes that will directly shape children's lives. We require documented child-development consultation and age-appropriate co-design methodologies for every tool operating in the parenting and custody domain.

The digital divide — the stratified gap in digital access, skills, and confidence across income, age, geography, and disability status — transforms the theoretical promise of AI-assisted legal access into a practical mechanism of exclusion for the very populations most in need. The Canadian Internet Use Survey (Statistics Canada, 2022) found that 16% of Canadians aged 25–54 lacked broadband internet access at home; for rural communities, the figure rose to 36%; for communities in Northern Canada, reliable broadband access remained unavailable to the majority of residents in many jurisdictions.[11] Design choices that assume reliable high-speed internet access, current-generation hardware, and high baseline digital literacy effectively exclude these populations from the benefits of any AI tool operating through a digital interface. The implications are particularly acute for Indigenous communities: research by the First Nations Technology Council (2021) documented that over 40% of First Nations communities in British Columbia lacked the connectivity infrastructure required to reliably access cloud-based applications — precisely the category of infrastructure underpinning most contemporary AI-assisted legal tools.[12] Human-centred design for family justice AI must include low-bandwidth fallback interfaces, offline functionality for core features, and design testing that explicitly includes users with low digital literacy and users accessing tools on older, lower-specification devices. Designing exclusively for the median Canadian technology experience will systematically fail the populations at the bottom of the access gap.
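One concrete form the low-bandwidth and offline requirements can take is a cache-first service worker that keeps a tool's core guides and forms usable when the connection drops. The sketch below is illustrative only: the file paths and cache name are assumptions, and a production deployment would also need cache versioning and an update strategy.

```typescript
/// <reference lib="webworker" />
// Illustrative service worker: cache-first delivery of core content so the
// tool stays usable on slow or intermittent connections. Paths and the
// cache name are placeholders, not an existing deployment.
declare const self: ServiceWorkerGlobalScope;

const CACHE_NAME = 'family-justice-core-v1';
const CORE_ASSETS = [
  '/',
  '/offline.html',
  '/guides/parenting-plan.html',
  '/forms/protection-order.html',
];

self.addEventListener('install', (event) => {
  // Pre-cache core guides and forms at install time.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(CORE_ASSETS))
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then(async (cached) => {
      if (cached) return cached; // cache-first: no round trip on slow links
      try {
        return await fetch(event.request);
      } catch {
        // Network unavailable: fall back to the pre-cached offline page.
        return (await caches.match('/offline.html')) ?? Response.error();
      }
    })
  );
});
```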

Plain language is not a communication style; it is a justice mechanism. The Canadian Literacy and Learning Network's research on reading competency in Canada (2021) found that approximately 48% of adult Canadians read at or below a Grade 8 literacy level — meaning that nearly half of adult Canadians are functionally excluded from legal documents, court forms, and technology interfaces written at the post-secondary literacy level assumed by most legal technology developers.[13] The Plain Language Association International and the Centre for Plain Language have developed rigorous standards for plain language legal drafting, requiring active voice, short sentences, common vocabulary, and visual clarity — standards that are fully compatible with accuracy and completeness of legal content but that require deliberate design effort and user testing to achieve. Human-centred design for family justice AI must embed plain language standards at the design stage, not as a post-hoc simplification pass. Every information element that a user is asked to read, understand, or respond to must be tested for comprehension with users drawn from the actual literacy distribution of the population it will serve — not tested by legal professionals evaluating whether the information is accurate. Accuracy and comprehension are not the same thing, and in the family justice context, comprehension is the variable that determines whether a litigant can actually use the tool to protect their rights.
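Because the Grade 8 target is quantifiable, it can also be enforced mechanically during development. The sketch below applies the standard Flesch-Kincaid grade-level formula as a screening heuristic; it is a crude, English-only proxy, and as the paragraph above stresses, no automated score replaces comprehension testing with real users.

```typescript
// Flesch-Kincaid grade-level screen for interface copy. A rough CI check
// that flags text drifting above the Grade 8 plain-language target; it is
// not a substitute for comprehension testing with actual users.
function countSyllables(word: string): number {
  const w = word.toLowerCase().replace(/[^a-z]/g, '');
  if (w.length <= 3) return 1;
  // Count vowel groups, ignoring a common silent trailing 'e'.
  const groups = w.replace(/e$/, '').match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 1);
}

function fleschKincaidGrade(text: string): number {
  const sentences = text.split(/[.!?]+/).filter((s) => s.trim().length > 0);
  const words = text.split(/\s+/).filter((w) => /[a-zA-Z]/.test(w));
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
  // Standard formula: 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
  return (
    0.39 * (words.length / sentences.length) +
    11.8 * (syllables / words.length) -
    15.59
  );
}

// Flag any user-facing string above the Grade 8 target.
const copy = 'You must file Form 8 before your first court date.';
const grade = fleschKincaidGrade(copy);
if (grade > 8) {
  console.warn(`Reading level ${grade.toFixed(1)} exceeds Grade 8 target`);
}
```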

The international evidence base for human-centred design in legal technology provides both compelling models and sobering cautionary examples. The United Kingdom's HM Courts and Tribunals Service (HMCTS) Reform Programme — a £1 billion technology modernisation initiative launched in 2016 — represents the largest common-law jurisdiction experiment in AI-assisted court administration. Independent evaluation by the Public Law Project and the Law Society of England and Wales documented that the Reform Programme's online divorce application and probate systems, while successfully increasing completion rates for technically straightforward cases, consistently failed users with cognitive impairments, non-English-speaking users, and users without reliable internet access — despite official accessibility compliance assessments that rated the systems as adequate.[14] The lesson the HMCTS experience teaches is not that technology cannot help; it is that compliance-based accessibility assessments are a poor substitute for lived-experience user testing with populations at the margins of the access gap. In contrast, the Hague Institute for Innovation of Law's 2022 comparative study of access-to-justice AI tools across twelve jurisdictions found that tools designed using iterative co-design methodologies with representative end-user panels — including self-represented litigants, legal aid clients, and community legal education participants — achieved measurably better outcomes on comprehension, task completion, and user satisfaction across all demographics, including the most vulnerable and digitally excluded populations.[15] The methodology works. The commitment required to apply it consistently is what separates genuine human-centred design from its simulation.

The operationalization of this principle requires institutional accountability mechanisms that outlast individual project cycles. Human-centred design is not a one-time deliverable; it is an ongoing commitment to monitor how tools perform in practice against the needs of the populations they were designed to serve, and to update design based on what that monitoring reveals. We require that every tool we endorse maintain a publicly accessible user experience log that tracks completion rates, drop-off points, and accessibility incident reports disaggregated by user demographic; that this log be reviewed quarterly by a standing advisory panel that includes at minimum two self-represented litigant advocates, one Indigenous community liaison, one disability rights specialist, and one domestic violence survivor advocate; and that design iterations addressing identified failures be completed on a documented timeline. The requirement for survivor-led advisory representation is non-negotiable: domestic violence organizations, women's shelters, and victim services agencies have expertise in the technological harm patterns that affect their clients that no legal technology developer has accumulated through any other channel. Their presence at design tables is not a symbolic gesture — it is the mechanism through which survivor experience becomes design reality.
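To make the logging requirement concrete, the sketch below shows one possible shape for such a user experience log, along with the kind of equity check a quarterly advisory review might run. All field and segment names are illustrative assumptions, not a published schema; the substantive requirements are disaggregation by demographic and a documented remediation timeline.

```typescript
// One possible shape for the public UX accountability log described above.
// Field and segment names are assumptions for illustration only.
type DemographicSegment =
  | 'self-represented'
  | 'indigenous'
  | 'disability'
  | 'dv-survivor'
  | 'low-bandwidth'
  | 'all-users';

interface UxLogEntry {
  quarter: string;                // e.g. "2024-Q3"
  toolId: string;
  segment: DemographicSegment;
  taskCompletionRate: number;     // 0..1, disaggregated per segment
  topDropOffPoints: string[];     // interface steps with highest abandonment
  accessibilityIncidents: number; // user-reported, not audit findings
  remediationDeadline?: string;   // ISO date for fixes the panel ordered
}

// Quarterly review: flag any segment falling well below the overall rate.
function flagEquityGaps(entries: UxLogEntry[], threshold = 0.1): UxLogEntry[] {
  const overall = entries.find((e) => e.segment === 'all-users');
  if (!overall) return [];
  return entries.filter(
    (e) =>
      e.segment !== 'all-users' &&
      overall.taskCompletionRate - e.taskCompletionRate > threshold
  );
}
```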

Governing Frameworks

Treasury Board Directive on ADM (2023)
UNESCO AI Ethics Recommendation (2021)
ISO 9241-210:2019
WCAG 2.2 (W3C, 2023)
Bill C-27/AIDA Human-Centred Provisions
TRC Calls to Action #65 & #72
OECD AI Principles — Value 3 (2019)
National Self-Represented Litigants Project (2013)

Sources & Citations