
AI and healthcare: should we worry?

A Pew Research Center poll found that six in 10 American adults would feel uncomfortable if their own healthcare provider relied on artificial intelligence (AI) to diagnose diseases and recommend treatments. But the reality is that AI has entered the health and wellness world, with some doctors already harnessing its power and potential.

Yahoo News spoke with Marzyeh Ghassemi, assistant professor at MIT’s Institute for Medical Engineering and Science, and James Zou, assistant professor of biomedical data science at Stanford University, to learn more about the intersection of AI and healthcare: what’s currently possible, what’s on the horizon and what the downsides could be.


What is currently possible?

Here are some examples of what AI can do now.

  • Make diagnoses and conduct assessments. “[There are] more than 500 medical AI algorithms and devices cleared by the US FDA and now available for use on patients. And many of these algorithms actually help doctors make better diagnoses and better assess patients,” Zou said. By using AI in tasks like evaluating medical images, doctors can eliminate some of the more labor-intensive manual work.

  • Make forecasts. While many current AI models focus on helping diagnose patients, Ghassemi said she has also seen some models being developed that can help predict the progression of a disease or the development of possible complications of a disease.

  • Simplify medical information for patients. “A lot of medical terminology and concepts can be quite complicated,” Zou said. “One of the projects we’ve done is to use ChatGPT to basically simplify the medical consent forms, which are terribly difficult to read, so that someone at an eighth-grade reading level can read it.” (A rough sketch of that approach appears after this list.)
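Zou did not share the project’s code, but a minimal sketch of the general idea, prompting a large language model to rewrite a form at a target reading level, might look like the Python below. The model name, prompt wording and helper function are illustrative assumptions, not details from the Stanford project.

# Illustrative sketch: ask an LLM to rewrite a consent form in plain language.
# The model name, prompt and reading-level target are assumptions, not
# details of the project Zou described.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify_consent_form(form_text: str, grade_level: int = 8) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        messages=[
            {"role": "system",
             "content": f"Rewrite medical consent forms in plain language at "
                        f"a grade-{grade_level} reading level. Preserve all "
                        f"risks, procedures and patient rights."},
            {"role": "user", "content": form_text},
        ],
    )
    return response.choices[0].message.content

# Example use with a hypothetical snippet of consent-form language:
print(simplify_consent_form("The investigational agent may cause hepatotoxicity."))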



What could AI do in the future?

And there may be more applications on the horizon. Here is what AI could do in the future.

  • Organize healthcare data. Zou said a major challenge is that data from different hospitals, including electronic medical records, cannot be easily exchanged. AI could help with that. “If you are a patient and go to different hospitals, the hospitals often don’t talk to each other very well. And this is an area where these AI algorithms, possibly the language models, could make it much easier.”

  • Predict bad outcomes. AI could also help identify at-risk patients early so they get the care they need, which could help combat maternal morbidity and mortality in the US by flagging pregnancies at risk of poor outcomes for women. “Then we might be able to point out to health care teams that they are making poor choices in women’s health care, or we can direct additional resources to pregnant women when they need them most,” Ghassemi said.

  • Improve treatment response predictions. For chronic conditions such as depression, a doctor often has to make an educated guess about which drug or treatment will work best for an individual patient. Ghassemi said AI can help doctors make better decisions by taking into account factors such as body weight or sex, which can affect how a patient metabolizes certain medications.

  • Develop new medicines. “There is a whole pipeline where AI can be used in the early stages to help us discover new drugs, new molecules, new antibiotics,” Zou said.



The scary side of AI in healthcare

“I don’t think the danger is that it becomes a killer robot and comes after you. The danger is that it will repeat or worsen the poor care you are already receiving,” Ghassemi said.

“We actually train machine learning systems to do what we do — not what we think we do or hope we would do. And what happens in healthcare is, if you naively train machine learning models to do what we do now, you get models that work much, much worse for women and minorities.”

For example, one AI-powered device overestimated blood oxygen levels in patients with darker skin, resulting in undertreatment of hypoxia (oxygen deficiency). A 2019 study found that an algorithm used to predict the health care needs of more than 100 million people was biased against Black patients. “The algorithm relied on healthcare spending to predict future health needs. But because there has historically been less access to care, Black patients often spent less. As a result, Black patients had to be much sicker to be recommended for additional care under the algorithm,” NPR reported.
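The failure mode NPR describes, using spending as a stand-in for need, is easy to reproduce on synthetic data. The short Python sketch below is purely illustrative; the numbers and the top-quarter referral rule are invented for the demo, not taken from the 2019 study.

import numpy as np

# Purely illustrative: synthetic demo of the bias NPR describes, where an
# algorithm ranks patients by spending instead of true need.
rng = np.random.default_rng(0)
n = 100_000
severity = rng.uniform(0, 10, n)   # true health need (unseen by the algorithm)
barrier = rng.random(n) < 0.5      # patients with less access to care

# Access barriers suppress spending at every level of true need.
spending = severity * np.where(barrier, 0.7, 1.0) + rng.normal(0, 0.3, n)

# The algorithm refers the top quarter of patients by spending for extra care.
referred = spending >= np.quantile(spending, 0.75)

print("mean severity of referred patients, full access:",
      round(severity[referred & ~barrier].mean(), 1))
print("mean severity of referred patients, less access:",
      round(severity[referred & barrier].mean(), 1))
# The less-access group must be sicker to clear the same spending bar.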

The National Eating Disorders Association also made headlines after its new AI chatbot, Tessa, advised users to count calories and measure body fat — forcing the organization to take the chatbot down just months after laying off its human helpline staff.


“I think the problem is that if you naively try to replace humans with AI in healthcare, you get very bad results,” Ghassemi said. “You have to see it as a tool for augmentation, not a replacement.”


How can we limit the potential harms of AI in healthcare?

Technology industry leaders released a one-sentence statement in May saying that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” In the healthcare field, Ghassemi and Zou offered some suggestions for steps that could be taken to limit the potential harms of AI.

  • Be transparent. Zou said a big first step would be more openness about what data is used to train AI models such as chatbots and how those models are evaluated.

  • Evaluate algorithms carefully before allowing patients to interact with them. Patients already risk encountering misinformation online, but if hundreds of thousands of them rely on a single flawed source, such as a problematic chatbot, the potential harm is far greater, Zou said.

  • Keep AI systems up to date. “You need a plan to keep the AI system up to date and relevant with current medical advice, because medical advice changes,” Ghassemi said. “If you let a model become outdated and start giving incorrect recommendations to doctors, that can also lead to patient harm.”

  • Establish rules. Ghassemi suggested the Department of Health and Human Services Office for Civil Rights could play a role. “They could enforce this rule that prohibits discrimination in healthcare and say, ‘Hey, that goes for algorithms too.’”
