DREDF Recommendations to US Access Board on AI in Healthcare and Transportation


October 31, 2024
via Online Portal (www.regulations.gov)

Sachin Dev Pavithran
Executive Director
U.S. Access Board
1331 F St NW, Suite 1000
Washington, DC 20004-1111

RE:       Artificial Intelligence (AI) for the disability community and AI practitioners

Docket No. ATBCB-2024-0005-0001

Dear Executive Director Pavithran,

Thank you for the opportunity to provide written comment on DREDF’s views regarding artificial intelligence (AI), including its use in clinical settings and in autonomous vehicles (AVs). DREDF is a national cross-disability law and policy center that protects and advances the civil and human rights of people with disabilities through legal advocacy, training, education, and development of legislation and public policy. We seek a just world where all people, with and without disabilities, live full and independent lives free of discrimination. We are committed to increasing access to health care and transportation for people with disabilities and eliminating persistent barriers and disparities that affect the length and quality of disabled people’s lives. DREDF’s work is based on the knowledge that people with disabilities of varying racial and ethnic backgrounds, ages, genders, and sexual orientations are fully capable of achieving self-sufficiency and contributing to their communities with access to needed services and supports and the reasonable accommodations and modifications enshrined in U.S. law.

AI in Healthcare

States, health plans, and health care providers are using AI and automated decision-making tools (ADTs) for clinical care standards, utilization management, service eligibility determinations, and other purposes. As has been noted, AI and ADTs are trained using data and outcomes that can be rife with assumptions and bias that devalue the lives of people with disabilities and health conditions. When disability intersects with other marginalized identities, AI bias can further stigmatize patients, misdirect resources, and reinforce or ignore barriers to care.

DREDF is concerned that civil rights enforcement mechanisms, policies, and agencies are ill-equipped to deal with decisions made, and ethical obligations met, in part or entirely by ADTs. Generative AI uses in healthcare can also leave gaps for people with disabilities: people with specific disabilities and care needs will usually make up only a small share of the healthcare records and information used for training, and existing patient records contain little or no mention of the reasonable accommodations or policy modifications that people with disabilities need for effective healthcare.

DREDF recommends that when using AI and algorithmic tools:

  • Covered healthcare and other service entities must be transparent about their use of AI, the populations with which it is used, what the tools determine, when they are used, and any outcomes altered through human intervention.
  • Covered healthcare and other service entities must bear a proactive burden to document the steps they took to choose unbiased and open-source algorithmic or AI tools, including the underlying databases and populations used to train the AI.
  • Healthcare organizations must commit to improving their databases and collecting granular disability demographic information from members/beneficiaries who voluntarily provide the information.
  • Healthcare organizations must establish standards for ongoing external oversight and evaluation of AI use for as long as algorithmic and AI tools are used, including their interactions and contracts with third party AI service providers.
  • Healthcare organizations must develop disability-inclusive ethics and an ethics review process. People with disabilities must be equal stakeholders in the ethics process.
  • All patients and members must receive clear notice in any benefits denial that algorithms or AI were used, and must be provided an appeal process that is conversant in the impact of AI.
  • Any decision-makers who may rely on algorithms must receive basic training in how algorithms are used in healthcare and how implicit and systemic bias can be embedded in algorithmic design.

AI in Transportation

In addition, DREDF seeks to ensure AVs live up to the promise of increased safe and equitable mobility. There is a high likelihood that people with disabilities and critical accessible infrastructure are underrepresented in the AI datasets used for AVs, and a potential for devaluing of disabled people’s lives in algorithmic scenarios. Bias in AV detection and algorithms can lead to serious harm to people or their assistive devices. When harm occurs in real time there is no opportunity to appeal, and liability remains uncertain.

DREDF recommendations include transparency, standard setting for inclusive datasets and scenarios at the highest levels, and disability community involvement at every stage of development of AI use in AVs. DREDF also recommends that the Access Board and public entities consider AI’s potential positive and negative impacts in other transportation modes. For example, algorithms are currently being used to increase the efficiency of paratransit eligibility determinations. A wrongful denial could lead to decreased overall quality of life, loss of access to education, employment, or healthcare, and even serious illness. Transit agencies and public officials are using AI to address the state of good repair of infrastructure and vehicles. The needs of disabled travelers must be taken into account, including using the technology to address elevator maintenance and platform gap issues. Agencies and officials are also increasingly relying on surveillance technology in transportation facilities and vehicles. Disabled travelers’ privacy and safety must be prioritized, especially for multiply marginalized disabled people, who are more likely to be misunderstood or harmed by law enforcement.

For additional detail, please refer to DREDF comments on the impacts of AI in Medicare Advantage Plans, as well as impacts of and recommendations to mitigate algorithmic bias in healthcare and AVs.[1],[2],[3]

We look forward to working with fellow disability advocates, industry and government stakeholders, and the U.S. Access Board to ensure inclusive, equitable use of AI. If you have any questions about the above comments or the materials we cite, please contact Silvia Yee at syee@dredf.org.

Sincerely,

Silvia Yee
Policy Director

[1] DREDF, Comments submitted re: RFI on Medicare Advantage (Docket No. CMS-2022-0123-0001), August 31, 2022.  https://dredf.org/wp-content/uploads/2022/09/FINAL3-DREDF-comments-CMS-MA-RFI-8-31-22-letterhead.pdf

[2] DREDF Brief, Disability Bias in Clinical Algorithms: Recommendations for Healthcare Organizations, January 2023.  https://dredf.org/disability-bias-in-clinical-algorithms-recommendations-for-healthcare-organizations/

[3] DREDF Brief, Addressing Disability and Ableist Bias in Autonomous Vehicles: Ensuring Safety, Equity and Accessibility in Detection, Collision Algorithms and Data Collection, November 2022. https://dredf.org/addressing-disability-and-ableist-bias-in-autonomous-vehicles-ensuring-safety-equity-and-accessibility-in-detection-collision-algorithms-and-data-collection/
