As tools powered by artificial intelligence increasingly find their way into healthcare, the latest research from UC Santa Cruz Politics Department PhD candidate Lucia Vitale takes stock of the current landscape of promises and concerns.
AI proponents see the technology helping manage healthcare supply chains, monitor disease outbreaks, make diagnoses, interpret medical images, and reduce disparities in access to care by compensating for healthcare worker shortages. But others are sounding the alarm on issues like privacy rights, racial and gender bias in models, lack of transparency in AI decision-making processes that could lead to errors in patient care, and even the potential for insurance companies to use AI to discriminate against people with poor health.
The types of impact these tools ultimately have will depend on how they are developed and deployed. In an article for the journal Social Science & Medicine, Vitale and her co-author, Leah Shipton, a PhD student at the University of British Columbia, conducted an extensive literature review of the current trajectory of AI in healthcare.
They argue that AI is positioned to become the latest in a long line of technological developments that ultimately have limited impact because they engage in a “politics of avoidance”: diverting attention away from, or even worsening, more fundamental structural problems in global public health.
For example, like many technological interventions of the past, most AI developed for health focuses on treating disease while ignoring the underlying determinants of health. Vitale and Shipton fear that the hype over unproven AI tools could distract from the urgent need to implement low-tech but evidence-based holistic interventions, such as community health workers and harm reduction programs.
“We’ve seen this pattern before,” Vitale said. “We continue to invest in these technical silver bullets that are failing to actually change public health because they don’t address the deep-seated political and social determinants of health, which can range from things like health policy priorities to access to healthy food and a safe place to live.”
AI is also likely to continue or exacerbate patterns of harm and exploitation that have traditionally been common in the biopharmaceutical industry.
One example discussed in the article is that ownership of and profits from AI are currently concentrated in high-income countries, while low- and middle-income countries with weak regulations can be targeted for data extraction or for deployment experiments with potentially risky new technologies.
The article also predicts that lax regulatory approaches to AI will ensure that intellectual property rights and industry incentives continue to be prioritized over equitable and affordable public access to new treatments and tools. And because corporate profit motives will continue to drive product development, AI companies are likely to follow the health technology sector’s long-running trend of overlooking the needs of the world’s poorest people when deciding which problems to target for research and development investment.
However, Vitale and Shipton did identify a bright spot. AI could potentially break the mold and create a deeper impact by focusing on improving the healthcare system itself. AI could be used to distribute resources more efficiently across hospitals and for more effective patient triage.
Diagnostic tools could improve efficiency and expand the capabilities of primary care physicians in small rural hospitals without specialists. AI could even provide some basic but essential healthcare services to fill labor and specialization gaps, such as providing prenatal checkups in areas where maternity care is scarce.
All of these applications could potentially result in fairer access to care. But that outcome is far from guaranteed.
Depending on how and where these technologies are deployed, they could successfully fill gaps in care where there are real shortages of healthcare workers, or they could lead to unemployment or precarious gig work for existing healthcare workers. And unless the root causes of healthcare workforce shortages are addressed – including burnout and “brain drain” to high-income countries – AI tools could end up diagnosing diseases or detecting outbreaks to little effect, because communities will still lack the capacity to respond.
To maximize benefits and minimize harm, Vitale and Shipton argue that regulations must be put in place before AI expands further into the health sector. The right safeguards can help stop AI from following harmful patterns from the past and instead chart a new path that ensures future projects will be aligned with the public interest.
“With AI we have the opportunity to correct the way we interact with new technologies,” Shipton said.
“But we need a clear agenda and framework for the ethical governance of AI health technologies through the World Health Organization, the major public-private partnerships that finance and deliver health interventions, and countries like the United States, India and China that host technology companies. Implementing that will require continued advocacy from civil society.”
More information:
Leah Shipton et al., Artificial Intelligence and the Politics of Avoidance in Global Healthcare, Social Science & Medicine (2024). DOI: 10.1016/j.socscimed.2024.117274
Citation: Will AI tools revolutionize public health? Not if they keep following old patterns, researchers argue (2024, October 9), retrieved October 14, 2024 from https://medicalxpress.com/news/2024-10-ai-tools-revolutionize-health-patterns.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.