Terry Carney
Introduction
Applying automated machine-learning software, robo-debt is an illegal, immoral and ill-constructed scheme for recovering mainly non-existent or seriously inflated debt amounts. These supposed ‘debts’ are raised against working-age Newstart or Youth Allowance recipients who are often vulnerable and overwhelmed by the process.[1] Robo-debt is slated to raise between $2.1 billion and $4.5 billion of dubious revenue.[2] Yet, despite unlawfully placing the onus of disproof of debts on clients and applying the wrong legal test (an extrapolated average rather than precise fortnightly earnings), for nearly two years Australia’s vaunted system of legal and administrative review has failed to expose and rectify the emperor’s lack of legal clothes.[3] And the situation continues to deteriorate: cost-cutting and the lack of adequate numbers or merit-based selection of members[4] have seen a rise in serious legal oversights in AAT reasoning, such as the tacit acceptance of averaging and the failure to insist on Centrelink or other proof of fortnightly income in Re Thurling,[5] presenting a rare opportunity for a Federal Court robo-debt test case should the opportunity ever be seized. Disturbing as it is, all this is now old news.
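To make the averaging objection concrete, the following minimal Python sketch uses hypothetical round figures: the base rate, income ‘free area’ and taper are invented, and the means test is deliberately simplified rather than the actual social security formula. It shows how smearing lumpy earnings evenly across fortnights manufactures an apparent overpayment where none exists.

# Illustrative only: the base rate, income free area and taper below are
# hypothetical round figures, not the actual social security parameters.

BASE_RATE = 550.0   # hypothetical maximum fortnightly payment ($)
FREE_AREA = 100.0   # hypothetical income 'free area' per fortnight ($)
TAPER = 0.5         # hypothetical reduction per dollar earned above the free area

def entitlement(fortnightly_income: float) -> float:
    """Payment for one fortnight under a simplified means test."""
    excess = max(0.0, fortnightly_income - FREE_AREA)
    return max(0.0, BASE_RATE - TAPER * excess)

# A claimant who earned $2,000 in each of 6 fortnights, then nothing for 20.
earnings = [2000.0] * 6 + [0.0] * 20

# Correct legal test: assess each fortnight on its actual earnings.
correctly_paid = sum(entitlement(e) for e in earnings)              # $11,000

# The averaging shortcut: smear total income evenly across all fortnights.
average = sum(earnings) / len(earnings)                             # ~$461.54
averaged_entitlement = sum(entitlement(average) for _ in earnings)  # $9,600

# The difference becomes a 'debt', although every fortnight was paid correctly.
phantom_debt = correctly_paid - averaged_entitlement                # $1,400
print(f"Phantom 'overpayment' generated by averaging: ${phantom_debt:,.2f}")

On these invented figures the claimant was correctly paid $11,000 fortnight by fortnight, yet averaging treats $1,400 of it as an ‘overpayment’ to be recovered. That, in miniature, is the wrong-legal-test objection.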
What is perhaps less well known is that the robo-debt episode is both unrepresentative (a botched job in an area where the law otherwise offers good protections, when they are able to be enforced) and a glimpse of the massive challenges machine-learning decision-making algorithms pose for human rights and the justice system more generally.[6] For instance, unlike social security, tax does effectively shift the burden of disproof of errors onto the taxpayer, but remains bedevilled by difficulties on review, such as determining whether automated actions are final and reviewable ‘decisions’,[7] or handling situations where the ATO breaches its model litigant obligations.[8] Likewise with the NDIS, as I will explain.
A Very ‘Hole-ey’ Trinity of Machine-learning Challenges?
The systemic challenges posed by machine learning arise in a number of ways, three of which will now be lightly sketched. All three challenges, I suggest, risk punching serious holes in the hull of the good ship justice and human rights.
1. Accountability in the machine-learning age
The first of the systemic challenges is the way such technology ramps up age-old litigation strategies of delay, avoidance or ‘look away’ tactics, long deployed by better resourced or more powerful parties to disputes. Australia’s widely applauded system of administrative and judicial review was supposed to counter this by levelling the playing field, ensuring that ordinary citizens could obtain merits review, and any necessary correction of flaws or oversights in government decisions, irrespective of the might and power of the bureaucracy.
However, robo-debt has shown that such scrutiny can be avoided in two ways. The first is the simple departmental tactic of keeping quiet about non-public AAT1 rulings overturning individual debts for lack of legal foundation (by not appealing adverse rulings).[9] The second is ‘settling’ (and thus also keeping out of the public domain) any cases which do reach, or are taken straight to, the publicly reported General Division of the AAT (as with NDIS reviews being settled to avoid setting generalisable but costly precedents[10]). Such ‘gaming’ of the system not only undermines the individual rights-vindication function of administrative review, but also its critical systemic normative role of promoting better primary decisions across the board, a crucial preventive function of AAT review. This, I have argued, cannot be set to rights without reforms to the funding, powers and Parliamentary oversight of bodies such as the AAT.[11]
2. Transparency protections in the machine-learning age
The second challenge is to the empowerment of citizens, and the nourishment of the public polity, that flow from transparency about the process and reasoning behind government decisions. Much of the new administrative law was about enabling citizens to know (or be able to press for on review) the answers to the ‘who’, ‘how’ and ‘why’ questions. Robo-debt, however, exemplifies the way machine learning replaces decision-makers with automated outcomes (such as ‘decision letters’) and uses often very complex algorithms to generate those outcomes.
In light of this, how does anyone – social security clients, advocates, the media or the public – comprehend the otherwise hidden complexity of machine-learning algorithms? This is the challenge of how to re-establish or otherwise realise the disinfectant objectives served by transparency. The challenge is universally acknowledged, but commentators are deeply divided about possible answers, with some favouring reform of existing law-making and review systems and others looking to technology itself to restore the checks and balances.[12] Either way, the path is unlikely to prove a quick or easy one. Social security clients, like other citizens, are likely to find themselves ‘in the dark’ about the real meaning and basis of automated decisions for some time yet.
3. Overcoming cost-cutting and confounding of administration in the machine-learning age
Robo-debt illustrated the way in which layers of primary decision-making and internal review[13] could leave citizens so confused, and so bereft of assistance in rectifying mistakes, that many simply abandoned further engagement. They ‘cut their losses’ (in a real-dollar rather than merely metaphorical sense). This hollowing out of administrative capacity for traditional enquiry and assessment of claims, along with pressures to limit eligibility, is also evident in DSP claim and review processing.[14] It is also a phenomenon that infects claims and review administration in Britain.[15]
How can access to public entitlements and to justice be secured in the face of such ‘efficiency’ reforms of public administration, and how can the inertia or exhaustion effects on citizens be countered? For the digitally literate, part of the answer may lie in sophisticated ‘smart systems’ that proactively adapt information-collection pathways to accurately identify all entitlements from a single point of contact, without any need for claimants to know about their possible entitlements (the sketch below gives the flavour of the idea).[16] However, only recently has the Australian government begun to revive interest in deploying machine learning to such ends. In any event, it is no answer to the highest priority: tackling the harm done to the most vulnerable applicants,[17] who lack access to, competence in, or the ability to take advantage of such ‘technological fixes’ due to factors such as age, literacy, location or cognitive impairment.
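To give that ‘smart system’ idea some shape, here is a minimal sketch, again in Python and again entirely hypothetical: the payment names and qualifying rules below are invented placeholders rather than real eligibility criteria, and a genuine system would encode the legislation itself and adapt its questions to earlier answers.

# Hypothetical payments and invented qualifying tests, for illustration only.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Claimant:
    age: int
    fortnightly_income: float
    has_disability: bool
    studying_full_time: bool

RULES: Dict[str, Callable[[Claimant], bool]] = {
    "Student allowance":     lambda c: 16 <= c.age <= 24 and c.studying_full_time,
    "Working-age allowance": lambda c: 22 <= c.age < 66 and c.fortnightly_income < 1200,
    "Disability support":    lambda c: c.has_disability,
    "Age pension":           lambda c: c.age >= 66,
}

def screen(claimant: Claimant) -> List[str]:
    """Return every payment the claimant may qualify for, from one data set."""
    return [name for name, test in RULES.items() if test(claimant)]

# One point of contact; no prior knowledge of entitlements required.
print(screen(Claimant(age=19, fortnightly_income=300.0,
                      has_disability=False, studying_full_time=True)))
# -> ['Student allowance']

The claimant states their circumstances once and every candidate entitlement is surfaced for them, inverting the robo-debt model of pushing the burden of knowledge and proof onto the individual.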
Since it is unlikely that the pace of automation will slow significantly, much less that the replacement of public servants by automation in the press for fiscal savings will be reversed, this frankly is the challenge which worries me most, because at present I can see no obvious alternative path to reform.
Conclusion
The human rights challenges of machine-learning systems are becoming quite well known (as, if less so, is their potential positive contribution to addressing the needs of the vulnerable, such as people with disabilities).[18] The risks to justice and fairness from the rushed design of machine-learning measures promising big savings to government revenue, such as robo-debt, have also started to penetrate public consciousness (if not yet sufficiently to reverse what one commentator described as balancing budgets by way of extortion[19]). But what I suggest is least well understood are the challenges machine learning poses to traditional systems of public accountability, administrative review, and judicial guarantees of the rule of law.
These challenges need to be recognised more broadly, for the reasons already touched on. As indicated, however, they are not challenges that admit of easy solutions. So it is time to pay them very serious attention.
[1] For details, see Terry Carney, ‘The New Digital Future for Welfare: Debts without legal proofs or moral authority?’ (2018) UNSW Law Journal Forum (March) 1–16 <http://www.unswlawjournal.unsw.edu.au/the-forum/>.
[2] Cameron Houston and Chris Vedelago, ‘Top QC slams Centrelink’s robo-debt program as “elaborate sham”’, Sydney Morning Herald (Sydney), 2 December 2018 <https://theworldnews.net/au-news/top-qc-slams-centrelink-s-robo-debt-program-as-elaborate-sham>.
[3] Terry Carney, ‘Robo-debt Illegality: The seven veils of failed guarantees of the rule of law?’ (2019) 44(1) Alternative Law Journal (advance online) <https://doi.org/10.1177/1037969X18815913>.
[4] James Morgan, ‘Securing the Administrative Appeals Tribunal’s Independence: Tenure and mechanisms of appointment’ (2018) 43(4) Alternative Law Journal 302–308.
[5] Thurling and Secretary, Department of Social Services (Social services second review) [2019] AATA 3.
[6] As recognised by the 2018 Human Rights Commission project on ‘Human Rights and Technology’: see AHRC, ‘Human Rights and Technology Issues Paper’ (Sydney: Australian Human Rights Commission, July 2018) <https://tech.humanrights.gov.au/sites/default/files/2018-07/Human%20Rights%20and%20Technology%20Issues%20Paper%20FINAL.pdf>. For Canada see Law Commission of Ontario, ‘LCO is leading the way for a new digital rights agenda for Ontario’ <https://www.lco-cdo.org/en/learn-about-us/publications-papers/liaison-fall-winter-2018/uncharted-legal-territory/>.
[7] See Kerr J in dissent in Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79.
[8] See Logan J in Shord v Commissioner of Taxation [2017] FCAFC 167.
[9] Carney, above n 3.
[10] Rick Morton, ‘NDIS agency avoids tribunal hearings with big settlements’, The Australian, 15 November 2018, 1 <https://www.theaustralian.com.au/national-affairs/health/ndis-agency-avoids-tribunal-hearings-with-big-settlements/news-story/420896181b5b57d3e585993ce9556f66>.
[11] Carney, above n 3.
[12] Cary Coglianese and David Lehr, ‘Regulating by Robot: Administrative Decision Making in the Machine-Learning Era’ (2017) 105(5) Georgetown Law Journal 1147–1224; Cary Coglianese and David Lehr, ‘Transparency and Algorithmic Governance’ (2018) Administrative Law Review (forthcoming) <https://ssrn.com/abstract=3293008>.
[13] Commonwealth Ombudsman, ‘Centrelink’s Automated Debt Raising and Recovery System’ (Canberra: Commonwealth Ombudsman, April 2017) <http://www.ombudsman.gov.au/__data/assets/pdf_file/0022/43528/Report-Centrelinks-automated-debt-raising-and-recovery-system-April-2017.pdf> at 9–22, 33–38.
[14] ANAO, ‘Disability Support Pension—Follow-on Audit’ (Canberra: Australian National Audit Office, 2018) <https://www.anao.gov.au/work/performance-audit/disability-support-pension-follow-audit> at 24–25, 37, 39–40, 63.
[15] Robert Thomas and Joe Tomlinson, ‘A Different Tale of Judicial Power: Administrative Review as a Problematic Response to the Judicialisation of Tribunals’ (2018) Public Law (forthcoming) <https://ssrn.com/abstract=3254119>.
[16] A few decades ago, work on such systems was commissioned from a firm called Softlaw, but it proved ill-fated. For brief reference to the idea, see Erich Schweighofer, Legal Knowledge Representation (Kluwer Law International, 1999) 82.
[17] Terry Carney, ‘Vulnerability: False hope for vulnerable social security clients?’ (2018) 41(3) University of New South Wales Law Journal 783–817 <http://www.unswlawjournal.unsw.edu.au/wp-content/uploads/2018/2009/CARNEY-Advance-Access.pdf>.
[18] AHRC, above n 6.
[19] Peter Martin, ‘Extortion is no Way to Fix the Budget’, Sydney Morning Herald (Sydney), 12 April 2018.