The World Health Organization says it’s “enthusiastic” about using new artificial intelligence technologies in health care, but it has called for caution and raised concerns about bias and misinformation seeping into the tools.
“While WHO is enthusiastic about the appropriate use of technologies, including LLMs, to support health-care professionals, patients, researchers and scientists, there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with LLMs,” a WHO statement reads, referring to artificial intelligence (AI) tools known as “large language models” (LLMs).
Emerging AI technology — including the chatbot ChatGPT — has been floated for possible use in health care, from assisting providers to answering patient questions.
But WHO lists a number of concerns about the “meteoric public diffusion and growing experimental use” of the tech for health-related purposes, including worries that AI could be trained on biased data, thereby “generating misleading or inaccurate information.”
LLMs produce responses that “can appear authoritative and plausible,” even if the responses are incorrect, WHO warns. Content produced by AI — whether text, audio or video — that contains disinformation can be “difficult for the public to differentiate” from reliable material.
The tech could also be trained on data “for which consent may not have been previously provided for such use,” raising concerns about AI’s use of sensitive data.
A poll released earlier this year found that a majority of Americans said they’d be uncomfortable with their health care provider relying on AI as part of their medical care. The WHO’s statement comes amid ongoing debate over the new and advanced tech and its place in medicine, education and elsewhere.