How can AI improve healthcare?
With an ever-growing, ageing population, healthcare systems across the world have never been under such strain, a fact made evident during the COVID-19 pandemic. Under increasing pressure to improve efficiency and lower costs, the healthcare sector sees in AI a chance to automate and innovate. AI tools could provide the answer; however, because health data is sensitive in nature, many citizens may be sceptical of this inevitable transition, as trust in AI is low.
There are a number of ways in which AI has already begun shaping the future of healthcare systems. For example, AI summaries of doctors' notes save doctors precious time, allowing them to see more patients and focus on the quality of their assessments rather than paperwork. Furthermore, AI systems with access to medical histories and practice-wide data can make better-informed assessments of cases. The benefits of AI can be seen throughout the medical profession, from dosage error reduction to record keeping and even fraud prevention. AI could also be beneficial in settings that lack medical specialists, for example by interpreting retinal scans and radiology images, easing specialists' workloads while improving precision.
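As a concrete illustration of the note-summarisation use case, the sketch below runs a clinical note through a generic pre-trained summarisation model via the Hugging Face Transformers library. The model name and the note are illustrative placeholders, not a clinically validated tool.

```python
# Minimal sketch: summarising a clinical note with a generic
# pre-trained summarisation model. The model and the note below
# are illustrative placeholders, not a validated medical system.
from transformers import pipeline

summariser = pipeline("summarization", model="facebook/bart-large-cnn")

note = (
    "Patient presents with a three-day history of productive cough, "
    "fever of 38.4C and mild dyspnoea. History of type 2 diabetes, "
    "managed with metformin. Chest auscultation reveals crackles in "
    "the right lower lobe. Plan: chest X-ray, sputum culture, and "
    "empirical amoxicillin pending results."
)

summary = summariser(note, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```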
There is no doubt that AI, combined with the experience of expert medical professionals, will improve the efficiency of systems globally. However, trust in AI systems is low, and many users do not understand how to use them effectively, which can lead to misuse. Additionally, a lack of transparency between the creators of AI systems and other stakeholders means that issues such as bias and discrimination can emerge, and go unchecked, through use of the system itself.
Challenges:
In June 2021, the World Health Organisation (WHO) released a publication listing key regulatory considerations on AI for health, emphasising the importance of establishing AI systems' safety and effectiveness. Upon its release, the WHO Director-General stated, 'AI holds great promise for health, but also comes with serious challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation'. Despite such guidance, a recent study by the Stanford School of Medicine found that popular chatbots spread misinformation when patients asked about their diagnoses, and perpetuated race-based bias. This is not the first time predictive AI models have exhibited bias. In 2019, an algorithm used to predict the healthcare needs of 70 million patients across the United States was found to systematically privilege white patients, largely because it used past healthcare spending as a proxy for medical need, and historically less is spent on Black patients with the same level of illness.
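To make the proxy-bias problem concrete, here is a minimal sketch of the kind of group-fairness audit that exposed the 2019 case: checking whether equally sick patients in different groups are flagged for extra care at the same rate. All data, group labels and thresholds below are synthetic and illustrative.

```python
# Minimal sketch of a group-fairness audit on a risk score.
# Data is synthetic; the scenario mirrors a score that proxies
# medical need via past spending, which is systematically lower
# for one group at the same level of need.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),
    # True medical need, e.g. number of active chronic conditions.
    "need": rng.poisson(3, size=n),
})

# Biased score: group B's spending (and hence score) runs lower
# than group A's for the same underlying need.
spend_factor = np.where(df["group"] == "B", 0.7, 1.0)
df["risk_score"] = df["need"] * spend_factor + rng.normal(0, 0.5, n)

# Flag the top 20% of scores for extra care.
threshold = df["risk_score"].quantile(0.8)
df["flagged"] = df["risk_score"] >= threshold

# Audit: among equally sick patients, what fraction of each group
# is flagged? A large gap signals disparate treatment.
sick = df[df["need"] >= 5]
print(sick.groupby("group")["flagged"].mean())
```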
On 19 October 2023, the WHO called for further regulation of the use and potential misuse of AI in the healthcare industry, indicating that the earlier guidance was not enough and reiterating the 'serious challenges' posed by 'unethical data collection, cybersecurity threats and amplifying biases or misinformation'. Whilst some steps have been taken to support the implementation of AI in healthcare systems, far more must be done to win the trust of professionals and the public.
Image: Consumer Technology Association’s Standard for Healthcare Products That Use AI
How can we mitigate these challenges?
The increased use of AI in medicine is inevitable; therefore, trust in these systems is vital. Professor Faisal, one of the UK's leading experts in AI healthcare at Imperial College London, is currently installing an AI system at St Mary's Hospital in London. Professor Faisal has urged citizens not to fear the new medical tech wave, saying: 'we've put a lot of thought into how AI deals with clinicians as people – interacting with them and making recommendations. At the same time, we've worked on how to make this a regulatable, acceptable way of interacting'. By dividing AI healthcare into two categories, perception and intervention, we can better understand what each kind of system requires. Perception systems must be able to communicate with medical professionals, whereas intervention systems must be accessible to all citizens. Both must therefore be developed with the user's level of proficiency and understanding in mind.
THEMIS aims to develop a human-centred, trustworthy AI ecosystem which ensures that decisions made by hybrid-AI systems can be trusted by their users. One of the use-case partners involved in the project is MUP Hospital, the largest hospital in Bulgaria. Access to a dataset of this scale means that AI risk-assessment models, such as the one for cancer risk identification, can be trained accurately. This training will be married with explainability and fairness criteria defined through co-creation activities in THEMIS. Outcomes will be optimised to ensure they are transparent and understandable for all users, thus empowering healthcare professionals to make decisions they trust.
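As an illustration of the explainability component, the sketch below trains a risk classifier and reports which features drive its predictions using permutation importance. The data and feature names are synthetic and illustrative; this shows the general technique, not the actual THEMIS pipeline.

```python
# Minimal sketch of explainability for a risk classifier.
# Synthetic data and feature names are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(
    n_samples=2000, n_features=6, n_informative=4, random_state=0
)
feature_names = [
    "age", "bmi", "smoking", "family_history", "biomarker_1", "biomarker_2"
]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature
# degrade held-out accuracy? Larger drops mean the model relies
# on that feature more, giving clinicians a readable rationale.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
ranked = sorted(
    zip(feature_names, result.importances_mean), key=lambda p: -p[1]
)
for name, score in ranked:
    print(f"{name:15s} {score:.3f}")
```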