ChatGPT Can Help Doctors—and Hurt Patients


“Medical knowledge and practices change and evolve over time, and there’s no telling where in the timeline of medicine ChatGPT pulls its information from when stating a typical treatment,” she says. “Is that information recent or is it dated?”

Users also need to be wary of how ChatGPT-style bots can present fabricated, or “hallucinated,” information in a superficially fluent way, potentially leading to serious errors if a person doesn’t fact-check an algorithm’s responses. And AI-generated text can influence humans in subtle ways. A study published in January, which has not been peer reviewed, posed ethical teasers to ChatGPT and concluded that the chatbot makes for an inconsistent moral adviser that can influence human decisionmaking even when people know that the advice is coming from AI software.

Being a doctor is about much more than regurgitating encyclopedic medical knowledge. While many physicians are enthusiastic about using ChatGPT for low-risk tasks like text summarization, some bioethicists worry that doctors will turn to the bot for advice about whether surgery is the right choice for a patient with a low likelihood of survival or recovery.

“You can’t outsource or automate that kind of process to a generative AI model,” says Jamie Webb, a bioethicist at the Center for Technomoral Futures at the University of Edinburgh.

Last year, Webb and a team of moral psychologists explored what it would take to build an AI-powered “moral adviser” for use in medicine, inspired by previous research that suggested the idea. Webb and his coauthors concluded that it would be tricky for such systems to reliably balance different ethical principles and that doctors and other staff might suffer “moral de-skilling” if they were to grow overly reliant on a bot instead of thinking through tricky decisions themselves.

Webb points out that doctors have been told before that AI that processes language will revolutionize their work, only to be disappointed. After its Jeopardy! wins in 2010 and 2011, the Watson division at IBM turned to oncology and made claims about the effectiveness of fighting cancer with AI. But that solution, initially dubbed Memorial Sloan Kettering in a box, wasn’t as successful in clinical settings as the hype would suggest, and in 2020 IBM shut down the project.

When hype rings hollow, there could be lasting consequences. During a discussion panel at Harvard on the potential for AI in medicine in February, primary care physician Trishan Panch recalled seeing a colleague post on Twitter to share the results of asking ChatGPT to diagnose an illness, soon after the chatbot’s release.

Excited clinicians quickly responded with pledges to use the tech in their own practices, Panch recalled, but by around the 20th reply, another doctor chimed in and said every reference generated by the model was fake. “It only takes one or two things like that to erode trust in the whole thing,” said Panch, who is cofounder of health care software startup Wellframe.

Despite AI’s sometimes glaring mistakes, Robert Pearl, formerly of Kaiser Permanente, remains extremely bullish on language models like ChatGPT. He believes that in the years ahead, language models in health care will become more like the iPhone, packed with features and power that can augment doctors and help patients manage chronic disease. He even suspects language models like ChatGPT can help reduce the more than 250,000 deaths that occur annually in the US as a result of medical errors.

Pearl does consider some things off-limits for AI. Helping people cope with grief and loss, end-of-life conversations with families, and discussions about procedures involving a high risk of complications should not involve a bot, he says, because every patient’s needs are so variable that you have to have those conversations to get there.

“Those are human-to-human conversations,” Pearl says, predicting that what’s available today is just a small percentage of the potential. “If I’m wrong, it’s because I’m overestimating the pace of improvement in the technology. But every time I look, it’s moving faster than even I thought.”

For now, he likens ChatGPT to a medical student: capable of providing care to patients and pitching in, but everything it does must be reviewed by an attending physician.
