AI Bias Through the Lens of Antidiscrimination Law
By Cassidy Tshimbalanga; Photo Credit: Phonlamai Photo/Shutterstock
Over the last few years, artificial intelligence (AI) algorithms have become increasingly significant in the medical world.[1] AI algorithms can facilitate clinical tasks such as risk prediction and disease screening.[2] These models have also been shown to improve patient outcomes, speed up clinical trials, curate personalized treatment plans, and reduce cost burdens.[3] Whereas traditional medical software follows predefined, fixed instructions, AI systems accumulate large amounts of data, learn patterns from it, and base their decisions on those patterns.[4] However, the data AI algorithms synthesize is often tainted by bias.[5] Insufficient sample sizes for certain groups of patients can result in “suboptimal performance, algorithm underestimation, and clinically unmeaningful predictions.”[6] This bias has appeared, for example, when AI algorithms predicted future breast cancer risk based on past data that incorrectly classified Black patients as low risk.[7] Thus, algorithms trained on such data will eventually underrepresent and discriminate against people of color and other marginalized groups.[8] Solving this problem has become even more crucial as doctors begin to rely heavily on these AI algorithms.[9] This phenomenon, known as AI paternalism, occurs when doctors become inclined to trust AI over their own clinical judgment, positioning AI as the highest authority in treating patients.[10]
More recently, to address AI bias, the US Department of Health and Human Services Office for Civil Rights (“OCR”) issued a final rule in May 2024 holding AI users legally accountable for mitigating the risk of discrimination.[11] The rule clarifies that Section 1557 of the Affordable Care Act, which prohibits discrimination in health programs, applies to AI-based discrimination, and it imposes two requirements.[12] First, Section 92.210(b) requires reasonable efforts to identify patient care decision support tools used in health programs and activities that employ variables or factors measuring race, color, national origin, sex, age, or disability.[13] Second, Section 92.210(c) requires reasonable efforts to mitigate the risk of discrimination resulting from the use of such tools.[14] Beginning in May of this year, violations of the rule and any resulting private actions will be addressed, with health care organizations and insurers as the core targets of the regulation.[15] The regulation applies to entities that receive federal funds, rather than to the creators of the AI tools and algorithms themselves.[16] To comply, hospitals must conduct regular evaluations, take corrective action, establish internal standards, and provide comprehensive staff training on AI literacy and bias recognition.[17] Additionally, the OCR will provide technical assistance and guidance on how to comply with the rule.[18]
In a world where AI paternalism poses an increasing threat, there is a need to ensure that these algorithms are accurate and nondiscriminatory. OCR’s final rule confronts the underlying bias in these systems and provides a path to mitigate that discrimination, in hopes of promoting more equitable outcomes for all patients.
Cassidy Tshimbalanga is a 2L at Vanderbilt Law and hails from Danville, California. She graduated from the University of California, Los Angeles in 2022 and plans on moving to New York City after graduation to focus on corporate transactional law.
[1] James Cross et al., Bias in Medical AI: Implications for Clinical Decision-Making, 3 PLOS Digital Health 1, 2 (Nov. 7, 2024), https://pdfs.semanticscholar.org/beac/bfeb85d8c954fafa4d0f3a405b2c12c3ae03.pdf.
[2] Id. at 2.
[3] Id. at 3.
[4] Id.
[5] Olga Akselrod, How Artificial Intelligence Can Deepen Racial and Economic Inequities, ACLU (July 13, 2021), https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities.
[6] Mike Miliard, Yale Study Shows How AI Bias Worsens Healthcare Disparities, Healthcare IT News (Nov. 25, 2024), https://www.healthcareitnews.com/news/yale-study-shows-how-ai-bias-worsens-healthcare-disparities.
[7] See Mirja Mittermaier et al., Bias in AI-Based Models for Medical Applications: Challenges and Mitigation Strategies, npj Digital Medicine (2023), https://pmc.ncbi.nlm.nih.gov/articles/PMC10264403/pdf/41746_2023_Article_858.pdf.
[8] See Akselrod, supra note 5.
[9] See Jessica Hamzelou, Artificial Intelligence Is Infiltrating Health Care. We Shouldn’t Let It Make All the Decisions, MIT Tech. Rev. (Apr. 21, 2023), https://www.technologyreview.com/2023/04/21/1071921/ai-is-infiltrating-health-care-we-shouldnt-let-it-make-decisions/.
[10] Id.; Melissa McCradden, Patient Wisdom Should Be Incorporated into Health AI to Avoid Algorithmic Paternalism, Nature Medicine (Feb. 23, 2023), https://www.nature.com/articles/s41591-023-02224-8.pdf.
[11] Michelle Mello, Antidiscrimination Law Meets Artificial Intelligence: New Requirements for Health Care Organizations and Insurers, JAMA Health Forum (Aug. 29, 2024), https://jamanetwork.com/journals/jama-health-forum/fullarticle/2823255.
[12] See id.
[13] See 45 C.F.R. § 92.210(b) (2024).
[14] See 45 C.F.R. § 92.210(c).
[15] See Mello, supra note 11.
[16] Katie Adams, Navigating AI in Health Care: HHS’s Nondiscrimination Final Rule Is in Effect, Bipartisan Policy Center (July 19, 2024), https://bipartisanpolicy.org/blog/navigating-ai-in-health-care-hhss-nondiscrimination-final-rule-is-in-effect/.
[17] Id.
[18] Id.