ChatGPT and Health Advice: The Future of AI-Generated Health Content
Co-Author: Shivani Mishra
ChatGPT has become one of the most talked-about and rapidly growing advancements in technology. An app that gained a million users in under a week, it can generate content that is almost indistinguishable from text produced by humans. This innovation has opened a range of possibilities in the medical field, including the rapid production of AI-generated health articles. While many believe that ChatGPT is poised to revolutionise the way content is produced in the health sector, companies are already putting this technology to use.
Earlier this month, publishers of widely read magazines like Sports Illustrated and Men’s Journal announced that they would start publishing AI-generated articles, assuring their readers that this practice would not reduce the quality of their writing. Several popular tech outlets like CNET have already begun publishing AI-generated articles, but using AI to inform readers about consequential topics like health and medicine could turn out to be rather perilous.
The space of health content is already filled with conflicting information. Clinicians, medical experts and public health researchers have long been concerned about the effects of irresponsible health messaging in the broader public information environment. We have written two separate articles on this blog problematizing irresponsible, click-bait style health messaging. Wouldn’t a further lack of monitoring and regulation in AI-generated health content make matters worse? Take, for example, the very first article written by a bot in Men’s Journal, titled “What All Men Should Know About Low Testosterone”. It is telling that the byline credits some ambiguous “Men’s Fitness Editors” rather than ChatGPT. The article is filled with medical claims, nutrition and lifestyle advice, and even goes on to suggest a specific medical treatment in the form of testosterone replacement therapy! There seems to be no consideration for the fact that the target audience for an article like this is genuinely concerned laypeople looking for guidance on a serious health issue.
What makes matters worse is the apparent authority and expertise with which these articles are written. On the face of it, the article reads fairly academic thanks to its well-positioned citations; on closer inspection, however, errors begin to surface. Bradley Anawalt, the chief of medicine at the University of Washington Medical Center who has held several leadership positions at the Endocrine Society, reviewed the article and expressed grave concerns about its persistent factual mistakes and mischaracterizations of medical science, which could leave lay readers with a profoundly warped understanding of serious health issues. Unsurprisingly, the article also lacks critical nuance, an already persistent problem in journalistic health reporting.
"There is just enough proximity to the scientific evidence and literature to have the ring of truth," Anawalt said, "but there are many false and misleading notes." Examples of such inaccuracies include describing testosterone replacement therapy as using ‘synthetic hormones’ and stating poor nutrition as one of the most common causes of low testosterone.
This kind of unreliability is baked into how ChatGPT and its competitor LLMs are built. They work by learning the statistical patterns of language in enormous corpora of online text, including any untruths, biases or outmoded knowledge those corpora contain. When LLMs are then given prompts, they simply spit out, word by word, whatever continuation of the conversation seems stylistically plausible. The result is easily produced errors and misleading information, all of which is baked into “content” that is produced for “free”. The other notable issue is that LLMs cannot show the origins of their information, so the citations they produce may be entirely fictitious. “The tool cannot be trusted to get facts right or produce reliable references,” noted a January editorial on ChatGPT in the journal Nature Machine Intelligence.
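To make that mechanism concrete, here is a minimal sketch of the word-by-word generation loop, using the openly available GPT-2 model via the Hugging Face transformers library. The prompt and generation length are illustrative assumptions for this example, not details of how any publisher's system works:

```python
# A minimal sketch of how an LLM "continues" text: it repeatedly samples
# the next token from a probability distribution learned from its training
# corpus. Nothing in this loop checks facts; only plausibility is scored.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Illustrative prompt (an assumption for this sketch).
prompt = "The most common cause of low testosterone is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

for _ in range(20):  # generate 20 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits  # a score for every vocabulary token
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    # Sample a continuation that is stylistically plausible, not verified true.
    next_token = torch.multinomial(next_token_probs, num_samples=1)
    input_ids = torch.cat([input_ids, next_token.unsqueeze(0)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Notice that nothing in this loop consults a source or verifies a claim; fluency is the only quantity being optimised, which is precisely why the output can sound authoritative while being wrong.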
Given these caveats, it is now more important than ever to consume health content with extra caution. There is no doubt that publishers will continue to harness the power of AI to produce all forms of content, including health content. The onus, then, lies on readers to carefully filter what they consume, and to simply disregard health advice that comes floating up in their newsfeeds or internet searches!