
HNU Healthcare Management Insights #2

21.07.2023, Dialogues:

In this interview series, Prof. Dr. Patrick Da-Cruz talks with a rotating roster of experts about current topics in the health sector. This time, Michael Ullmann, MD, is the guest and answers questions on generative AI in healthcare.

The interlocutors

What is generative AI and how does it work?

Dr. Ullmann: Generative artificial intelligence (AI) produces text in response to an input request (a so-called prompt) on the basis of statistical probabilities. It makes language, and thus text, computable. The texts generated this way are therefore based not on factual knowledge or an intelligent reasoning process, but purely on statistical probabilities. For that reason, the term "artificial intelligence" is unfortunate for these trained algorithms.

Generative AI is based on neural networks with a transformer architecture and deep-learning algorithms and is able to understand, process, and generate natural language. It is thus a machine-learning model from the AI subfield of natural language processing (NLP).
Generative AI builds on so-called Large Language Models (LLMs). These are trained on huge amounts of text and fine-tuned as needed. The acronym GPT, as in ChatGPT, one of the best-known LLM representatives, already describes how it works: it stands for "Generative Pre-trained Transformer", a "pre-trained" artificial neural network.
Training is usually unsupervised: the training texts are not labeled or annotated with special comments. Hundreds of gigabytes of publicly available text in different languages are used, for example Wikipedia articles, books, scientific articles, news texts, forum posts, social-network posts, and online comments.

Put simply, generative AI or LLMs could be called "text prediction machines."
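The "text prediction machine" idea can be illustrated with a deliberately tiny sketch: a bigram model that, like an LLM at a vastly smaller scale, learns from training text which word tends to follow which, and then samples each next word from that distribution. The corpus and function names here are illustrative, not any real system.

```python
import random

# Toy "training": count which word follows which in a tiny corpus.
corpus = (
    "the patient was admitted with chest pain . "
    "the patient was discharged in stable condition . "
    "the patient was treated and discharged home ."
).split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no known continuation
        words.append(random.choice(options))  # statistical, not factual
    return " ".join(words)

print(generate("the"))
```

The output is fluent-looking but driven purely by observed word frequencies; an LLM does the same with billions of parameters over subword tokens rather than a lookup table of word pairs.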

How will generative AI impact healthcare, which areas can benefit from it and in what way?

Dr. Ullmann: According to broad expert consensus, LLMs will play a central role in healthcare in the future. They are expected to take over the entire spectrum of text-based clinical documentation:

  • Medical and nursing (progress) documentation
  • Physician letters (admission, transfer, discharge, reports on findings, etc.)
  • Treatment plans
  • Therapy recommendations

In addition, they can already be used in training for nursing as well as in medical studies. It is also conceivable that they could be used in patient education and training.

An essential application will be the time-saving summarization of inpatient hospital stays on the basis of the data documented in the medical record. Even complicated clinical courses can be summarized very quickly and rendered in plain language that patients can also understand. This principle of extracting, summarizing, and clearly presenting essential information can also be applied in medical science and research, where large volumes of scientific text can be condensed quickly and clearly.
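In practice, such a summarization step amounts to assembling the dated record entries into a prompt for an LLM. The sketch below shows only that assembly step; the record fields and the wording of the instruction are hypothetical assumptions, not a real clinical API.

```python
# Hypothetical example record: (date, note) entries from a hospital stay.
record = [
    ("2023-07-01", "Admission: chest pain, ECG without ST elevation."),
    ("2023-07-02", "Troponin negative; stress test scheduled."),
    ("2023-07-03", "Stress test unremarkable; discharged in stable condition."),
]

def build_summary_prompt(entries):
    """Turn dated record entries into a plain-language summarization prompt."""
    lines = "\n".join(f"{date}: {note}" for date, note in entries)
    return (
        "Summarize the following hospital stay in simple language "
        "that a patient can understand:\n" + lines
    )

prompt = build_summary_prompt(record)
print(prompt)
```

The resulting prompt string would then be sent to whichever LLM the facility uses; as the interview stresses, the generated summary must still be proofread by a clinician.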

Another important aspect is the promise of supporting physicians in developing therapy plans. Especially in oncology, where new scientific findings and therapeutic approaches are published almost daily, this can be very helpful and significantly improve patient care.

Can generative AI in healthcare be a solution approach for the shortage of specialists?

Dr. Ullmann: No. Generative AI cannot compensate for a shortage of medical or nursing staff, and neither, incidentally, can the other AI solutions used in healthcare. AI is designed to support healthcare professionals, not to replace them.

However, generative AI in healthcare can bring significantly greater efficiency and productivity. The time saved is expected to free up more time for patient care. This could partially offset the existing shortage of healthcare professionals, which is expected to worsen in the future, and also have a positive impact on the quality of patient care.

What risks are associated with the use of generative AI in healthcare and how can these be minimized?

Dr. Ullmann: If you know what AI, including generative AI, is based on, you also know the risks that come with it. Generative AI, as already mentioned, is trained on large amounts of data that have usually not been checked for accuracy. The freely available data comes from text posts on the Internet, social networks, and other unverified sources, and thus naturally contains false information and untrue claims. This flawed training data introduces bias into the model, which can affect the output, i.e. the generated text, without the user immediately recognizing it. In this context, one also speaks of hallucinations when the AI, or the LLM, freely invents alleged facts. This can have serious consequences, especially in healthcare, if these hallucinations are not identified as such by the user and are uncritically adopted as facts.

The texts generated by LLMs usually make a very accurate and sensible impression and can therefore easily deceive inexperienced and uncritical users in particular. To minimize these risks, experts have long been calling for strict regulation of AI, especially for use in healthcare. Initial drafts have already been introduced at EU level. One possible approach, for example, would be to label texts generated with LLMs as such.

At medical universities in the USA, there are already plans to participate actively in the development of generative AI for medicine, instead of leaving it solely to industry and its software developers, who are ultimately medical laymen. Another sensible demand is that AI and its medical applications become part of the medical curriculum, so that future physicians become familiar with these technologies during their studies and are aware of their advantages and disadvantages.

What requirements for competence profiles will arise for health professionals and the management of healthcare facilities as a result of generative AI?

Dr. Ullmann: In principle, the same competences that should already apply to the now taken-for-granted daily use of social networks and to the assessment of their freely accessible and uncontrolled texts and posts.

Using generative AI or LLMs does not require any special competence per se, but evaluating the generated text does. Critical thinking is required, with constant questioning of presented claims for their correctness and truthfulness. All documents generated with AI, whether discharge letters or radiological findings, must be critically proofread, checked for accuracy, and corrected if necessary.

However, it is precisely the detection of AI hallucinations that can sometimes pose a challenge, especially in view of the time pressure prevailing in the healthcare sector. The recognition of untruths and freely invented "facts" must be practiced and applied consistently, even under time pressure.

Thank you very much for the interview!