How Automated Benefits Determinations Violate Disabled People’s Constitutional Due Process Rights and Increase Risk of Institutionalization

If you blinked, you might have missed it.

At the end of 2020, the Sixth Circuit Court of Appeals published an important decision in Waskul, et al. v. Washtenaw County Community Mental Health. In the case, five people with developmental disabilities sued the state of Michigan and Washtenaw County after the county was allowed to implement a new budget methodology for their services and health care. The state tried to stop the case from moving to trial, but the Sixth Circuit decided that the plaintiffs’ facts were compelling enough for their arguments to be heard. The court’s decision offers important language and potential strategies for other advocates.

The individual plaintiffs in Waskul (Derek Waskul, Cory Schneider, Kevin Wiesner, Lindsay Trabue, and Hannah Ernst) all rely on state-funded services because of their disabilities. The problem? An algorithm that was narrowly designed to determine the number of care hours each person would receive was repurposed to set each person’s entire services budget, not just their care hours.

Washtenaw County’s budget methodology reimburses each person based on the number of care hours the algorithm assigns: the county multiplies a fixed payment rate by the number of care hours. Before 2015, the resulting number was used to calculate only one budget line item: paying workers for hours of care provided. Disabled people receiving services could bill the state separately for other costs like transportation and workers’ compensation for their care workers. In 2015, without changing the algorithm, the county began using this dollar amount as a single line item to cover all costs of care, not just hourly pay. As a result, the purpose of the algorithm no longer matched the county’s use of the tool. Other than appealing, the only way people can get a budget increase is if the algorithm produces a higher level-of-care determination.

Technically, the budget changes were not hours reductions, but they still resulted in people receiving fewer hours of care because they had to use their funds to cover ancillary costs in addition to care itself. People receiving services couldn’t pay professional support workers the same rates they had been able to offer before, leaving them without consistent or reliable access to services.
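To make the arithmetic of this functional cut concrete, here is a minimal sketch in Python. Every number in it (the reimbursement rate, assessed hours, worker wage, and ancillary costs) is hypothetical and purely illustrative, not a figure from the case record; the point is only that when ancillary costs must come out of the same fixed budget, the hours of care that budget can actually purchase shrink, even though the algorithm's hours determination never changed.

```python
# Illustrative sketch of the budget methodology described above.
# All figures are hypothetical, not drawn from the case record.

ASSESSED_HOURS = 80      # weekly care hours the algorithm assigns (hypothetical)
FIXED_RATE = 15.00       # dollars per hour the county reimburses (hypothetical)
WORKER_WAGE = 15.00      # what a participant actually pays a support worker (hypothetical)

budget = ASSESSED_HOURS * FIXED_RATE  # the single budget line item

# Pre-2015 methodology: the budget paid only for care hours;
# ancillary costs (transportation, workers' compensation, etc.)
# could be billed to the state separately.
hours_before = budget / WORKER_WAGE

# Post-2015 methodology: the same budget must also absorb ancillary costs,
# so fewer dollars remain to pay for actual hours of care.
ancillary_costs = 200.00  # hypothetical weekly transportation, insurance, etc.
hours_after = (budget - ancillary_costs) / WORKER_WAGE

print(f"Weekly budget: ${budget:.2f}")
print(f"Care hours covered before 2015: {hours_before:.1f}")
print(f"Care hours covered after 2015:  {hours_after:.1f}")
```

With these made-up numbers, the same $1,200 weekly budget drops from covering 80 hours of care to roughly 67 hours: a cut in practice, even though no hours were formally reduced.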

For example, the new methodology forced Cory Schneider to rely on his aging grandparents to provide 75 hours of his care each week, which means he risks losing services when they are no longer physically capable of supporting him. It also forced Kevin Wiesner’s legal guardian to stay with him at least 40 hours a week and to pay for all of his community activity and transportation needs out of pocket. This, in turn, had a troubling spiral effect that hurt everyone involved: Kevin could not afford as many support workers as he needed, and Kevin’s guardian could not work outside the home, could no longer afford her property taxes, and risked going into foreclosure.

According to the Sixth Circuit, these facts supported the plaintiffs’ claims that the resulting cuts to plaintiffs’ services budgets put them at risk of institutionalization, in violation of the Americans with Disabilities Act and Olmstead v. L.C. ex rel. Zimring, the controlling case law establishing the right to community integration. The court cites the state’s Medicaid Provider Manual, which indicates that the new budget methodology relied only on applying an insufficient rate to the level of care algorithm’s results, leaving additional needs unaccounted for.

The Waskul decision has two important lessons for advocates who are tracking the increasing use of automated systems to allocate Medicaid and other benefits programs. First, the new budget methodology adopted by the county provides an important example of a methodological change that in fact masked a damaging policy decision. A change that was presented as a simple adjustment in how plaintiffs’ budgets were allocated led to a widespread decrease in the number of care hours that program participants actually received. This wasn’t a calculation error: it was a systematic reduction in hours that significantly decreased people’s independence and diminished their quality of life. Advocates and state decision makers alike must watch out for such policy decisions and ensure that these decisions are instead made through proper decision-making channels with public oversight and accountability.

Second, the Sixth Circuit took the important step of recognizing the functional cuts to people’s services budgets, finding them so severe that they placed plaintiffs at serious risk of institutionalization in violation of their legal rights. This marks one of the first times that a U.S. appellate court has applied the precedent of Olmstead v. L.C. ex rel. Zimring in the context of states using algorithm-informed benefits tools. It serves as an important warning to state decision makers that new algorithmic assessments which cut participants’ benefits to the point of risking institutionalization may also violate the law.

The Waskul decision is part of a growing series of cases challenging the outcomes of automated systems used to inform public benefits determinations. Recently, my team at the Center for Democracy & Technology published a new report assessing these cases, Challenging the Use of Algorithm-driven Decision-making in Benefits Determinations Affecting People with Disabilities. Our report examines the main types of arguments advocates and lawyers have used when taking states to court over an increasing array of algorithm-driven decision-making tools that have the effect of reducing or terminating people’s benefits.

Advocates have primarily raised three important arguments when challenging algorithm-driven decision-making tools:

  1. Algorithm-driven benefits determinations can violate disabled people’s constitutional and statutory rights to due process, for example by failing to provide notice and explanation for the cuts or making changes to a person’s benefits without ascertainable standards.
  2. States may violate statutory requirements for new rulemaking when they start using new algorithm-driven decision-making tools without engaging in a public notice-and-comment process.
  3. Algorithm-driven decision-making tools can violate the community integration mandate of the Americans with Disabilities Act (ADA) by cutting people’s benefits so severely that people risk either entering an institution to receive the care they need or forgoing necessary care in order to stay at home.

The Sixth Circuit’s ruling in Waskul supports the third argument, by considering whether a state’s policy changes place disabled people at unlawful risk of institutionalization and isolation at home. Previously, plaintiffs in Idaho and Oregon brought similar claims under Olmstead about benefits cuts caused by their states’ use of algorithmic tools, but their claims have not been ruled on because the courts ruled against the states on different grounds. In a separate case in Florida, Brandy C. v. Palmer, the court held that the plaintiffs’ Olmstead claim over cuts caused by the state’s new Medicaid budget algorithm was too speculative because the plaintiffs had not yet lost their funding.

In contrast, the Sixth Circuit found that the Waskul plaintiffs had plausibly stated a claim that Michigan had engaged in unlawful discrimination when its new methodology placed them at serious risk of institutionalization and unduly isolated them at home. Being forced to choose between being confined in order to receive all necessary care and giving up necessary care in order to participate in community life is not a real choice at all. It’s the disability equivalent of Hobson’s choice. The state might as well ask: Which poison do you prefer? Whether it happens in an institution or at home, being confined can unjustifiably isolate disabled people and fail to meet the “most integrated setting possible” standard for services. Whether states use algorithms that cut people’s benefits or make budgeting changes that result in reduced hours of care, they must take care not to place people with disabilities at greater risk of institutionalization.

In other states, plaintiffs have won important victories against harmful algorithm-driven decision-making through procedural and substantive due process challenges, but they have not always succeeded in obtaining long-term relief. There remains much work to be done: as more and more states turn to increasingly automated decision-making, people with disabilities remain at risk of losing vital services, facing institutionalization, and being deprived of the meaningful ability to challenge adverse decisions. Our report aims to offer important insight to state policymakers who may be considering adopting such tools. It aims to inform and better equip litigators, advocates, and community members to develop effective strategies for challenging algorithm-driven decision-making in future cases. It also urges advocates to support more humane policies in collaboration with disabled people who know how best to meet our specific needs.

Lydia X. Z. Brown is a Policy Counsel with CDT’s Privacy and Data Project, focused on disability rights and algorithmic fairness and justice.

Outside of their work at CDT, Lydia is an adjunct lecturer in disability studies at Georgetown University’s Department of English, and the founding director of the Fund for Community Reparations for Autistic People of Color’s Interdependence, Survival, and Empowerment. They serve on the American Bar Association’s Commission on Disability Rights, and co-chair the Section on Civil Rights and Social Justice’s Disability Rights and Elder Affairs Committee. They are also lead editor of All the Weight of Our Dreams: On Living Racialized Autism, a groundbreaking anthology on autism and race published by the Autistic Women and Nonbinary Network. Lydia is a founding board member of the Alliance for Citizen-Directed Supports, and serves on several advisory committees, including for the Mozilla Foundation project on the Law and Politics of Digital Mental Health Technology, the Lurie Institute for Disability Policy at Brandeis University, and the Coelho Center for Disability Law, Policy, and Innovation at Loyola Law School.

Before joining CDT, Lydia worked on disability rights and algorithmic fairness at Georgetown Law’s Institute for Tech Law and Policy. Prior to that, Lydia was Justice Catalyst Fellow at the Bazelon Center for Mental Health Law, where they advocated for disabled students’ civil rights in schools, and an adjunct professor of disability policy and social movements at Tufts University. Lydia has spoken internationally and throughout the U.S. on a range of topics related to disability rights and disability justice, especially at the intersections of race, class, gender, and sexuality, and has published in numerous scholarly and community publications. Among others, they have received honors from the Obama White House, the Society for Disability Studies, the American Association of People with Disabilities, the National Disability Mentoring Coalition, and the Disability Policy Consortium. In 2015, Pacific Standard named Lydia to its list of Top 30 Thinkers in the Social Sciences Under 30, and Mic named Lydia to its inaugural list of 50 impactful leaders, cultural influencers, and breakthrough innovators for the next generation. In 2018, NBC named Lydia to its list of Asian Pacific American breakthrough leaders, and Amplifier featured them in the We The Future campaign honoring youth activism. Most recently, Gold House Foundation named Lydia to its A100 list of America’s most impactful Asians for 2020.

Lydia holds a bachelor’s degree in Arabic from Georgetown University, and a J.D. with joint concentrations in Criminal Law and Justice and in International Law and Human Rights from Northeastern University School of Law.
