January 2023
States, health plans, and health care providers increasingly are using algorithms and devices driven by artificial intelligence (AI) to support clinical decision making and establish clinical care standards. These tools are also widely used to support and inform population health management. Even as this evolving technology holds great promise for improving health care and health outcomes, it also can contribute to discrimination and amplify certain structural barriers and inequities that affect marginalized groups, including people with disabilities. Important work has been done that identifies how disability bias in algorithms negatively affects, for instance, employment decisions,[1] determination of the need for Medicaid personal care services in the home,[2] and the ability of autonomous vehicles to recognize pedestrian wheelchair users.[3] Race and ethnicity bias in certain algorithmic tools and AI also has been well documented.[4] Yet very little work has been done to understand how bias in algorithms and AI affects people with disabilities in health care, even as it has the potential to profoundly affect health care decisions, services, and outcomes for this large population. Moreover, when disability intersects with other marginalized identities, algorithmic and AI bias can further stigmatize patients, misdirect resources, and reinforce or ignore barriers to care rather than serving as a pathway to improving treatment and health outcomes.
Most advocacy organizations, including DREDF, lack the technical capacity to discern when a covered entity is using AI or to recognize the presence of algorithmic bias in health care decision-making. Covered entities must be required to disclose their use of algorithms, and they must do so before those algorithms are placed into operation. This is especially true when the application of predictive data is literally a life-and-death matter, as in the case of Crisis Standards of Care, which rose to public attention and discussion only during the COVID-19 pandemic, as surge conditions in healthcare utilization prompted hospitals and health systems to review and prepare such standards for use. People with disabilities and their families were caught with no opportunity for input and little recourse as states adopted standards for COVID-19 hospital care that explicitly and implicitly devalued the lives of people with disabilities, chronic conditions, and specific health conditions,[5] and called for deprioritizing people with disabilities for life-saving care, ventilator use, and even a bed in the hospital. Older persons were similarly devalued, as were people of color, given the higher incidence of chronic health conditions among Black, Hispanic, and American Indian/Alaska Native populations, who have long endured barriers to equal health care and adverse social drivers of health. Many Crisis Standards of Care relied substantially on both stereotyped assumptions about the value of people with significant disabilities and medical algorithms for estimating a patient’s survivability. These algorithms assessed an individual’s potential response to life-saving care without making an individualized assessment of the patient’s health and without accounting for how an individual’s disability could affect the assessment factors used in the algorithm or the time needed for the individual to respond to treatment. In short, many Crisis Standards of Care were discriminatory,[6] and many may continue to be so, given that they are once again out of the public eye.
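Many of these survivability estimates were built on acute-physiology scores such as the Sequential Organ Failure Assessment (SOFA), discussed further below. The following sketch is purely illustrative: it uses only two of SOFA’s six organ systems, approximate thresholds, invented patient values, and a hypothetical “baseline adjustment” that does not correspond to any state’s adopted standard. Its sole purpose is to show how a score that measures distance from a population norm can label a patient with a stable disability as far sicker than a nondisabled patient with the same acute change, unless the score is individualized to the patient’s own baseline.

```python
# Illustrative sketch only: a simplified, SOFA-like severity score using two of
# the six SOFA organ systems (renal and neurological). Thresholds are
# approximations for illustration, not a clinical tool, and the baseline
# adjustment is a hypothetical fix, not any state's adopted policy.

def renal_points(creatinine_mg_dl: float) -> int:
    """Higher creatinine -> more points (worse presumed prognosis)."""
    if creatinine_mg_dl < 1.2:
        return 0
    if creatinine_mg_dl < 2.0:
        return 1
    if creatinine_mg_dl < 3.5:
        return 2
    if creatinine_mg_dl < 5.0:
        return 3
    return 4

def cns_points(glasgow_coma_scale: int) -> int:
    """Lower GCS -> more points. GCS partly depends on verbal and motor
    responses, so a stable communication or motor disability can inflate
    this component even when it says nothing about acute prognosis."""
    if glasgow_coma_scale == 15:
        return 0
    if glasgow_coma_scale >= 13:
        return 1
    if glasgow_coma_scale >= 10:
        return 2
    if glasgow_coma_scale >= 6:
        return 3
    return 4

def raw_score(creatinine: float, gcs: int) -> int:
    return renal_points(creatinine) + cns_points(gcs)

def baseline_adjusted_score(creatinine: float, gcs: int,
                            baseline_creatinine: float, baseline_gcs: int) -> int:
    """Hypothetical individualized assessment: count only the change from the
    patient's own stable baseline, not distance from a population norm."""
    acute = raw_score(creatinine, gcs)
    chronic = raw_score(baseline_creatinine, baseline_gcs)
    return max(acute - chronic, 0)

# Two invented patients with the same degree of acute deterioration.
# Patient B has stable chronic kidney disease and a speech disability that
# lowers their scored verbal response, neither of which reflects their
# likelihood of surviving this admission.
patient_a = dict(creatinine=1.8, gcs=15, baseline_creatinine=0.9, baseline_gcs=15)
patient_b = dict(creatinine=3.6, gcs=11, baseline_creatinine=2.4, baseline_gcs=11)

for name, p in [("A", patient_a), ("B", patient_b)]:
    raw = raw_score(p["creatinine"], p["gcs"])
    adj = baseline_adjusted_score(p["creatinine"], p["gcs"],
                                  p["baseline_creatinine"], p["baseline_gcs"])
    print(f"Patient {name}: raw score = {raw}, baseline-adjusted score = {adj}")

# Under the raw score, Patient B appears far "sicker" (5 vs. 1) and may be
# deprioritized; once each patient's stable baseline is accounted for, the
# acute change is identical (1 point for both).
```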
The disability discrimination that was commonly present in Crisis Standards of Care used during the pandemic shows how accepted and omnipresent ableism is in health care decision-making. For purposes of this discussion, DREDF proposes a short working definition of algorithms. Algorithms used for decision-making in the health care context can be distinguished from other tools that may employ some element of artificial intelligence as a way of sorting and evaluating large amounts of potentially predictive data, for example, to create scoring guidelines. DREDF considers algorithms to be “those sets of instructions fed to a computer to solve particular problems.”[7] Algorithmic bias can be defined as “…the application of an algorithm that compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation and amplifies inequities in health systems.”[8]
While DREDF is concerned with how algorithms are created and how developers evaluate the fairness of the formulas and data inputs used, the crux of our concern with computer-mediated tools is that the human decision-makers who bear ethical and professional obligations as health care providers and entities have changed their decision-making process. Furthermore, they may choose to do so without giving any notice of the change. In essence, they may believe they have fully delegated their decision-making authority and should no longer be held accountable for the discriminatory outcomes because computers cannot “intend” discrimination. Once algorithms are involved and assigned a role within decision-making, there is a human tendency to give primary weight to the algorithmic output, decision, or recommendation, even in the face of conflict with human expertise, knowledge, and judgment. Examples of this deference to algorithms can be found in decisions made by pilots who defer to automatic flight control systems, as well as by physicians making treatment decisions in critical care units; the higher the stakes and, some might say, the greater the need for a human grappling with ethics, life values, and implicit bias, the greater the pressure to abdicate responsibility to an “objective” algorithm. Another example that is playing out in real time involves tools such as NarxCare, which assign risk scores for opioid-addiction screening using such factors as visits with multiple healthcare providers, a history of post-traumatic stress disorder (PTSD), or involvement with the criminal justice system.[9] The use of such factors disregards the realities faced by people with disabilities and others who experience discrimination: people with complex disabilities may legitimately need to see a number of specialist providers; disabled persons are more likely to experience trauma, such as sexual abuse, that can lead to PTSD; and persons of color with developmental, intellectual, or hearing disabilities disproportionately experience negative interactions with the criminal justice system compared with their non-disabled white peers. For prescribers and pharmacists, there is a clear upside to using a commercial algorithm that promises to flag potential opioid abusers rather than going to the prescription drug registries that hold direct information on controlled substance use:
Doctors, however, are also judged by algorithms—and can be prosecuted if they write more prescriptions than their peers, or prescribe to patients deemed high risk. . . . A couple of academic surveys have found that physicians appreciate prescription drug registries, as they truly want to be able to identify patients who are misusing opioids. But doctors have also said that some registries can take too much time to access and digest. NarxCare is partly a solution to that problem—it speeds everything up. It distills.[10]
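NarxCare’s actual scoring method is proprietary and not public. The sketch below is therefore a hypothetical illustration, with invented weights, field names, and a made-up flagging threshold; it uses only the factors named above (multiple providers and pharmacies, a PTSD history, criminal-legal involvement) to show how inputs that track disability-related need and experiences of discrimination can inflate a “risk” score for a patient who is not misusing medication at all.

```python
# Hypothetical illustration only. NarxCare's actual scoring is proprietary; the
# weights, threshold, and records below are invented to show how the factors
# named in the text above can act as proxies for disability and for
# experiences of discrimination.

from dataclasses import dataclass

@dataclass
class PatientRecord:
    prescriber_count: int          # distinct prescribers in the past year
    pharmacy_count: int            # distinct pharmacies in the past year
    ptsd_diagnosis: bool
    criminal_legal_involvement: bool

# Invented weights for a toy linear risk score.
WEIGHTS = {
    "prescriber": 8,
    "pharmacy": 6,
    "ptsd": 20,
    "criminal_legal": 25,
}

FLAG_THRESHOLD = 70  # invented cut-off above which a patient is "flagged"

def toy_risk_score(p: PatientRecord) -> int:
    """Sum weighted factors into the single 'risk' number a prescriber might see."""
    score = WEIGHTS["prescriber"] * p.prescriber_count
    score += WEIGHTS["pharmacy"] * p.pharmacy_count
    score += WEIGHTS["ptsd"] * int(p.ptsd_diagnosis)
    score += WEIGHTS["criminal_legal"] * int(p.criminal_legal_involvement)
    return score

# A patient with a complex disability: several legitimate specialists, a PTSD
# history linked to past abuse, and no misuse of medication.
complex_disability_patient = PatientRecord(
    prescriber_count=5, pharmacy_count=2, ptsd_diagnosis=True,
    criminal_legal_involvement=False)

# A patient with a single prescriber and none of the flagged characteristics.
single_prescriber_patient = PatientRecord(
    prescriber_count=1, pharmacy_count=1, ptsd_diagnosis=False,
    criminal_legal_involvement=False)

for label, patient in [("complex disability", complex_disability_patient),
                       ("single prescriber", single_prescriber_patient)]:
    score = toy_risk_score(patient)
    print(f"{label}: score={score}, flagged={score >= FLAG_THRESHOLD}")

# The first patient scores 72 (52 from prescriber and pharmacy counts plus 20
# for the PTSD diagnosis) and is flagged, even though every factor reflects
# disability-related need or past victimization; the second scores 14.
```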
Healthcare organizations are driving market demand for the use of algorithms and AI. They are paying for the development of these tools and have the resources to check them for bias. We recommend that healthcare entities adopt the following principles to ensure they understand their responsibility for interrogating the algorithmic and AI tools they choose to use. The goal is to increase equity and fairness and to avoid discrimination and inappropriate care decisions for people with disabilities, including people of all races, ethnicities, and ages, and those with diverse sexual orientations, gender identities, or gender expressions.
- Covered entities must be transparent about the areas in which they adopt algorithmic and AI tools, the populations with which the tools are used, what the tools determine, when the tools are used, and any instances in which the tools’ outcomes are mitigated or altered through human intervention.
- Covered entities that choose to use algorithmic and AI tools must bear a proactive burden to document the steps they took to select unbiased, open-source algorithmic or AI tools; to establish that the tools they use are free of bias toward any protected ground; to assess the tools’ impact on clinical decision-making; and to describe the steps undertaken to avoid bias and unfair outcomes for consumers on protected bases.
- Adoption of algorithmic and AI tools must go hand in hand with a healthcare organization’s ongoing commitment to improving its databases and collecting granular disability demographic information from members/beneficiaries who voluntarily provide it; without improved disability data, it will be impossible to identify whether and how the use of algorithmic and AI tools is driving the health care inequalities experienced by people/members with disabilities (see the sketch following this list).
- Healthcare organizations must establish standards for ongoing external oversight and evaluation for as long as algorithmic and AI tools remain in use.
- Healthcare organizations must develop disability-inclusive ethics and an ethics review process that recognize the equal worth of people with disabilities and their rights to treatment without bias, to the full benefits of their health insurance coverage, to nondiscrimination, and to effective communication and policy modifications, including when clinical algorithms such as the Sequential Organ Failure Assessment (SOFA) are used. People with disabilities must be equal stakeholders in the ethics process.
- All patients and members of healthcare organizations must receive clear, plain-language notice in any benefits denial of the fact that algorithms or AI were used in the assessment process, and they must have access to an accessible, readily available appeal process that includes review of the use of the algorithmic or AI tool involved.
- Any decision-makers who may deal with algorithms, such as individuals involved in health plan or benefits management, state Medicaid agencies, and administrative adjudicators or arbitrators, must receive basic training in how algorithms are used in healthcare and how implicit and systemic bias can be embedded in algorithmic design.
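The sketch below illustrates the kind of outcome audit these principles contemplate, and why the disability demographic data called for in the third principle is indispensable. It assumes, hypothetically, that a healthcare organization holds voluntarily reported disability status alongside algorithm-assisted benefits decisions; the records, field names, and the single fairness check shown (a denial-rate ratio) are invented for illustration, and a real audit would cover many more metrics, intersectional groups, and statistical testing.

```python
# Minimal sketch of an outcome audit, assuming voluntarily reported disability
# status is on file. Records, field names, and the chosen fairness check
# (a denial-rate ratio) are illustrative only.

from collections import defaultdict

# Each record: did an algorithm-assisted review deny the requested benefit,
# and does the member report a disability? (Toy data.)
decisions = [
    {"disability": True,  "denied": True},
    {"disability": True,  "denied": True},
    {"disability": True,  "denied": False},
    {"disability": False, "denied": False},
    {"disability": False, "denied": True},
    {"disability": False, "denied": False},
    {"disability": False, "denied": False},
]

def denial_rates(records):
    """Denial rate for members with and without a reported disability."""
    totals, denials = defaultdict(int), defaultdict(int)
    for r in records:
        group = "disability" if r["disability"] else "no_disability"
        totals[group] += 1
        denials[group] += int(r["denied"])
    return {g: denials[g] / totals[g] for g in totals}

rates = denial_rates(decisions)
ratio = rates["disability"] / rates["no_disability"]
print(f"Denial rates: {rates}")
print(f"Disparity ratio (disability / no disability): {ratio:.2f}")

# A ratio well above 1.0 does not prove discrimination on its own, but it is
# the kind of signal that should trigger the human review, documentation, and
# external oversight described above; and it cannot be computed at all without
# the disability demographic data those principles call for.
```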
[1] Ridhi Shetti & Matt Sherer, Five Key Takeaways from New EEOC and DOJ Guidance on Disability Discrimination in Algorithm-Driven Hiring, June 3, 2022, https://cdt.org/insights/five-key-takeaways-from-new-eeoc-and-doj-guidance-on-disability-discrimination-in-algorithm-driven-hiring/.
[2] Lydia X. Z. Brown, et al., Report: Challenging the Use of Algorithm-driven Decision-making in Benefits Determinations Affecting People with Disabilities, October 21, 2020, https://cdt.org/insights/report-challenging-the-use-of-algorithm-driven-decision-making-in-benefits-determinations-affecting-people-with-disabilities/.
[3] Henry Claypool, et al., CDT and AAPD Report – Centering Disability in Technology Policy: Issue Landscape and Potential Opportunities for Action, December 13, 2021, https://cdt.org/insights/cdt-and-aapd-report-centering-disability-in-technology-policy-issue-landscape-and-potential-opportunities-for-action/.
[4] Ziad Obermeyer, et al., Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations, Science, Vol. 366 (October 25, 2019), https://www.science.org/doi/10.1126/science.aax2342.
[5] The Arc, Bazelon Center for Mental Health Law, Center for Public Representation, and Autistic Self-Advocacy Network, Ari Ne’eman, & Sam Bagenstos, Evaluation Framework for Crisis Standards of Care Plans (April 8, 2020), https://autisticadvocacy.org/wp-content/uploads/2020/04/Evaluation-framework-for-crisis-standards-of-care-plans-4.9.20-final.pdf.
[6] DREDF, Preventing Discrimination in the Treatment of COVID-19 Patients: The Illegality of Medical Rationing on the Basis of Disability (March 25, 2020), https://dredf.org/the-illegality-of-medical-rationing-on-the-basis-of-disability/.
[7] M.A. Wieringa, What to Account for When Accounting for Algorithms: A Systematic Literature Review on Algorithmic Accountability, ACM [Association for Computing Machinery] Conference on Fairness, Accountability, and Transparency (FAT* ’20), January 27–30, 2020, Barcelona, Spain, https://dl.acm.org/doi/abs/10.1145/3351095.3372833.
[8] Trishan Panch, Heather Mattie & Rifat Atun, Artificial Intelligence and Algorithmic Bias: Implications for Health Systems, Journal of Global Health (Viewpoints), Vol. 9, No. 2 (December 1, 2019), https://www.jogh.org/documents/issue201902/jogh-09-020318.pdf.
[9] Maia Szalavitz, The Pain Was Unbearable. So Why Did Doctors Turn Her Away?, Wired (August 11, 2021), https://www.wired.com/story/opioid-drug-addiction-algorithm-chronic-pain/.
[10] Szalavitz, id.