AI in healthcare needs to be regulated, but don’t forget about algorithms, researchers say


Credit: Pixabay/CC0 public domain

You could argue that one of the most important tasks of a doctor is to constantly evaluate and re-evaluate the odds: what are the chances of a medical procedure being successful? Is the patient at risk of developing serious symptoms? When should the patient return for further examination?

Amid these critical considerations, the rise of artificial intelligence promises to reduce risk in clinical settings and help physicians prioritize care for high-risk patients.

Despite its potential, researchers from the MIT Department of Electrical Engineering and Computer Science (EECS), Equality AI, and Boston University are calling on regulators worldwide for greater oversight of AI in a commentary published in the New England Journal of Medicine AI (NEJM AI), after the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) issued a new rule under the Affordable Care Act (ACA).

In May, the OCR published a final rule under the ACA that prohibits discrimination based on race, color, national origin, age, disability, or sex in “patient care decision support tools,” a newly created term that covers both AI and non-automated tools used in medicine.

The final rule, developed in response to President Joe Biden’s 2023 Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, builds on the Biden-Harris Administration’s commitment to advancing health equity by focusing on preventing discrimination.

According to senior author and EECS associate professor Marzyeh Ghassemi, “the rule is an important step forward.”

Ghassemi, who is affiliated with the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), the Computer Science and Artificial Intelligence Laboratory (CSAIL), and the Institute for Medical Engineering and Science (IMES), adds that the rule “should dictate equity-based improvements to the non-AI algorithms and clinical decision support tools already in use across clinical subspecialties.”

The number of AI-based devices approved by the U.S. Food and Drug Administration has increased dramatically over the past decade since the approval of the first AI-based device in 1995 (PAPNET Testing System, a cervical cancer screening tool).

As of October, the FDA has approved nearly 1,000 AI-enabled devices, many of which are designed to support clinical decision-making.

However, the researchers point out that no regulatory body oversees the clinical risk scores produced by clinical decision support tools, despite the fact that a majority of U.S. physicians (65%) use these tools monthly to determine the next steps in patient care.

To address this shortcoming, the Jameel Clinic will again host a regulatory conference in March 2025. Last year’s conference sparked a series of discussions and debates among faculty, regulators from around the world, and industry experts, focused on the regulation of AI in healthcare.

“Clinical risk scores are less opaque than AI algorithms because they typically include only a handful of variables linked in a simple model,” said Isaac Kohane, chairman of the Department of Biomedical Informatics at Harvard Medical School and editor-in-chief of NEJM AI.

“Nonetheless, even these scores are only as good as the data sets used to train them and as the variables experts have chosen to select or study in a given cohort. If they influence clinical decision-making, they should be held to the same standards as their more recent and much more complex AI cousins.”

Additionally, while many decision support tools do not use AI, researchers note that these tools are just as guilty of perpetuating biases in healthcare and require oversight.

“Regulating clinical risk scores poses significant challenges due to the proliferation of clinical decision support tools embedded in electronic health records and their widespread use in clinical practice,” said co-author Maia Hightower, CEO of Equality AI. “Such regulations remain necessary to guarantee transparency and non-discrimination.”

However, Hightower adds that under the incoming administration, regulation of clinical risk scores “may prove particularly challenging given the emphasis on deregulation and opposition to the Affordable Care Act and certain nondiscrimination policies.”

More information:
Marzyeh Ghassemi et al., Settling the Score on Algorithmic Discrimination in Healthcare, NEJM AI (2024). DOI: 10.1056/AIp2400583

Provided by the Massachusetts Institute of Technology


Citation: AI in healthcare needs to be regulated, but don’t forget the algorithms, say researchers (2024, December 23), retrieved December 24, 2024 from https://medicalxpress.com/news/2024-12-ai-health-dont-algorithms.html

This document is copyrighted. Except for fair dealing purposes for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.
