Dr. AI: How will artificial intelligence and other emerging technologies make for a better healthcare experience?


Any business looking to automate manual processes now has the option of turning to artificial intelligence (AI), both in the cloud and running on local machines. It’s clear that the technology – and all of its subsets – holds powerful potential benefits whether in identifying trends, streamlining the user experience, or helping train better systems. 

AI can be particularly transformative for healthcare, where it can help save lives by speeding up critical decisions, or by freeing up healthcare workers to focus on the patient experience rather than bureaucracy.

It’s important at the outset to distinguish between generative AI – typically text or image generation models – and ‘traditional’ forms of AI. Generative AI has dominated the limelight for several years now, thanks to its ability to produce text and images or respond to users in a manner reminiscent of conversation with another human. However, the two have different use cases and offer different value in healthcare.

Many benefits on offer

Healthcare organizations can particularly benefit from predictive AI, which draws on large volumes of data to project future outcomes, as well as from analytical AI, which can help leaders make data-driven decisions.
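As a loose illustration of the predictive side, the sketch below fits a simple trend to weekly attendance figures and projects a few weeks ahead. Every number here is invented for demonstration; real systems draw on far richer models and real hospital data.

```python
# Minimal illustration of "predictive AI" in the loosest sense: fitting a
# linear trend to historical weekly attendance figures and projecting ahead.
# The figures are made up for demonstration purposes only.

def fit_linear_trend(values):
    """Ordinary least-squares fit of y = a + b*x for x = 0, 1, 2, ..."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical weekly emergency-department attendances
history = [812, 840, 855, 870, 902, 915, 940, 968]
intercept, slope = fit_linear_trend(history)

# Project the next four weeks so staffing can be planned in advance
for week_ahead in range(1, 5):
    x = len(history) - 1 + week_ahead
    print(f"Week +{week_ahead}: ~{intercept + slope * x:.0f} attendances expected")
```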

For example, a recent digital transformation project undertaken by Bradford Teaching Hospitals NHS Foundation Trust cut down the time healthcare workers had to spend on unnecessary phone calls through a new direct digital platform for patients. In all, the trust saved 2.5 days of nursing time over eight months, just in reduced phone time.

Generative AI chatbots can handle non-critical patient inquiries and refer them to a human if the topic clears a specific threshold for seriousness. These can be integrated into existing digital communication platforms healthcare organizations may have and can free up staff time, while also reducing the amount of time that potential patients have to spend on hold or searching for answers to frequently asked questions.
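The escalation logic is simple to picture. The sketch below shows the shape of it, with invented keywords and a hypothetical threshold standing in for whatever triage classifier or LLM a real deployment would use: score each inquiry for seriousness, and hand anything above the threshold to a human.

```python
# Illustrative sketch of threshold-based escalation: a chatbot scores each
# inquiry for seriousness and routes anything above a cut-off to a human.
# The keyword scorer is a crude stand-in for a real model; all names and
# values here are hypothetical.

URGENT_TERMS = {"chest pain": 0.9, "bleeding": 0.8, "overdose": 0.95,
                "suicidal": 1.0, "breathing": 0.85}
ESCALATION_THRESHOLD = 0.7  # would be tuned clinically in a real system

def seriousness_score(message: str) -> float:
    """Crude stand-in for a proper triage model."""
    text = message.lower()
    return max((score for term, score in URGENT_TERMS.items() if term in text),
               default=0.1)

def route_inquiry(message: str) -> str:
    if seriousness_score(message) >= ESCALATION_THRESHOLD:
        return "ESCALATE: transfer to a human clinician or call handler"
    return "BOT: answer from approved FAQ content and offer self-service options"

print(route_inquiry("How do I change my appointment time?"))
print(route_inquiry("My father has chest pain and is struggling with breathing"))
```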

The Guardian reported on a new AI tool that was trialed in UK hospitals between 2021 and 2022 and helped improve the rate at which cancer was identified in patients, increasing it from 58.7% to 66%. The tool, dubbed ‘C the Signs’, assesses a patient’s data to determine their risk of developing cancer and flags those at high risk.

Multimodal AI models have taken these capabilities further, as they can process images such as handwritten notes or schedules, as well as audio such as a medical professional’s verbal observations of a patient. Theoretically, a future multimodal model could provide an analysis of a patient’s scan and respond to a medical professional’s verbal questions about the scan in real time.

AI can also be used to model optimal staff schedules, which can help make healthcare organizations more efficient.
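At its simplest, that kind of scheduling is an assignment problem. The toy sketch below greedily matches invented staff availability against required cover; production rostering tools use constraint solvers and many more rules, but the principle is similar.

```python
# A toy version of the scheduling idea: given required cover per shift and each
# staff member's availability and shift cap, assign shifts greedily. All names
# and numbers below are invented for illustration.

required_cover = {"Mon-day": 2, "Mon-night": 1, "Tue-day": 2, "Tue-night": 1}

staff = {
    "Asha":  {"available": {"Mon-day", "Tue-day", "Tue-night"}, "max_shifts": 2},
    "Ben":   {"available": {"Mon-day", "Mon-night", "Tue-day"}, "max_shifts": 2},
    "Chloe": {"available": {"Mon-night", "Tue-day", "Tue-night"}, "max_shifts": 2},
}

assigned = {name: [] for name in staff}

for shift, needed in required_cover.items():
    for name, info in staff.items():
        if needed == 0:
            break
        if shift in info["available"] and len(assigned[name]) < info["max_shifts"]:
            assigned[name].append(shift)
            needed -= 1
    if needed:
        print(f"Unfilled cover on {shift}: {needed} slot(s) - flag to a manager")

for name, shifts in assigned.items():
    print(f"{name}: {', '.join(shifts) or 'no shifts'}")
```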

In the future, small language models running on a handheld device such as a tablet could help carry out all of these tasks with next to no latency. This would provide healthcare professionals with an even more seamless AI experience, in which they could transcribe notes, cross-check a patient’s symptoms against trusted and up-to-date sources of information, and use this to inform their diagnosis and advice for the patient.

Outside the frontline of healthcare, AI also holds the potential to speed up fields such as drug discovery or to provide a helping hand in vaccine development. This could have seismic impacts around the world, as common conditions are fought with higher survival rates and diseases are prevented from taking root in the first place.

For example, Pfizer used an in-house machine learning tool to sort its Covid vaccine trial data into a legible format for researchers, ahead of the vaccine being approved for use by medical regulators. In all, it stated the tool saved the firm a month of manual data management.

Researchers at Imperial College London have applied deep learning methods to produce digital representations of drug molecules. Their open source tool can identify drug compounds based on target molecules, efficiently providing a starting point for drugs that could be suited to any given condition.

Finally, generative AI models can produce synthetic data that mimics real patient or healthcare data. This can come in useful where there are gaps in data repositories that could prevent analytical AI models from projecting outcomes or giving healthcare leaders data insights – for example, if a hospital is still undergoing digital transformation and doesn’t have real staff schedules to provide to a model yet.
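A minimal sketch of that idea might look like the following: generating plausible placeholder shift records until real rosters exist to feed an analytics model. The fields, roles, and distributions here are assumptions for illustration, not real hospital data.

```python
# Generating synthetic roster records as stand-in data for analytics work.
# Every field, role, and distribution below is an invented assumption.

import random
from datetime import date, timedelta

random.seed(42)  # reproducible sample

ROLES = ["nurse", "doctor", "healthcare assistant"]
SHIFTS = ["early", "late", "night"]

def synthetic_roster(start: date, days: int, staff_per_shift: int = 3):
    records = []
    for offset in range(days):
        day = start + timedelta(days=offset)
        for shift in SHIFTS:
            for _ in range(staff_per_shift):
                records.append({
                    "date": day.isoformat(),
                    "shift": shift,
                    "staff_id": f"S{random.randint(1000, 9999)}",
                    "role": random.choice(ROLES),
                    "hours": random.choice([7.5, 8, 12]),
                })
    return records

sample = synthetic_roster(date(2025, 1, 6), days=2)
for row in sample[:5]:
    print(row)
```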

In this way, it can help improve the readiness of a healthcare organization to expand its use of AI, giving IT leaders breathing room to further train and refine models until real data can be sourced.

AI risks can be especially problematic in healthcare

Although it’s clear that AI holds transformational potential for healthcare, there are also major ethical and practical concerns that leaders must bear in mind. Hallucinations, for example, are a flaw inherent to LLMs and have to be accounted for whenever an AI model is used as a source of truth.

Using an AI model to summarize long conversations or meeting minutes is a use case that’s ripe for hallucinations.

AP News recently reported on major errors found in AI transcripts and summaries made in healthcare settings, which can provide patients with misleading information about their own health. In some cases, AI transcripts are all that remains of audio recordings, with the originals wiped after the text version or summary is produced, which only further separates patients and healthcare professionals from accurate insights.

It’s for these reasons that leaders must only use AI where it’s appropriate within healthcare, and put suitable guardrails in place to prevent unwanted outputs from compromising the patient experience.

Generative AI platforms also carry the potential to leak information, which could be particularly catastrophic when handling sensitive healthcare data. The risk of using machine learning and non-generative AI models to analyze images such as PET scans for anomalies could be lower than that of passing a detailed digital twin of a patient through a generative model to produce personalized responses.

In all, AI holds tangible potential for the field of healthcare. Across medicine, healthcare staffing, and patient experience, it can make systems more efficient and better informed, and help save lives. Once paired with a tight data protection and ethics strategy, it’s a technology that can go a long way toward making healthcare better for everyone.

Rory Bathgate
Features and Multimedia Editor

Rory Bathgate is Features and Multimedia Editor at ITPro, overseeing all in-depth content and case studies. He can also be found co-hosting the ITPro Podcast with Jane McCallion, swapping a keyboard for a microphone to discuss the latest learnings with thought leaders from across the tech sector.

In his free time, Rory enjoys photography, video editing, and good science fiction. After graduating from the University of Kent with a BA in English and American Literature, Rory undertook an MA in Eighteenth-Century Studies at King’s College London. He joined ITPro in 2022 as a graduate, following four years in student journalism. You can contact Rory at rory.bathgate@futurenet.com or on LinkedIn.