
Themis 5.0 Co-Creation Series: 4) AI and Trustworthiness in Healthcare

Writer: THEMIS 5.0

Updated: Oct 17, 2024

Last week, we explored the findings of the Co-Creation Healthcare Workshop’s first session. This week, we shall explore the second session, which focused specifically on the role of trustworthiness for Artificial Intelligence (AI) in the healthcare sector. The Co-Creation Workshop identified three critical features for trustworthiness in AI and polled participants on which should have the greatest priority. These were:


  1. Robustness (49%)

  2. Accuracy (32%)

  3. Fairness (25%)


Participants having a roundtable discussion during the workshop in the healthcare sector in Denmark (DBT/Sofie Kirkegaard Jansen)

Robustness

Robustness is about ensuring that AI systems are adequately secure against faults and attacks, so that there can be no doubt as to the integrity of the technology. The Co-Creation Workshop proposed 13 factors of robustness, which were classified into four themes:


  1. Security Measures for Data and Functionality

  2. Performance

  3. Legislation and Protocols

  4. Adaptability to Users’ Needs


Security Measures for Data and Functionality

AI functions through both data and algorithms, and is only as effective as these underlying foundations. AI systems are at risk from unwarranted alterations to their underlying datasets and algorithms, which would compromise the technology’s efficacy. This risk can arise from malicious alteration such as hacking, but it can also stem from other issues such as human error, fire, explosions, or accidental deletion of information. Therefore, to ensure the integrity of AI, it is vital that these components have proper security measures in place to prevent unwanted alterations.
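
To make this point more concrete, below is a minimal sketch (our illustration, not a workshop output) of one common way such unwanted alterations can be detected: recording cryptographic checksums of the datasets and model files an AI tool depends on, and verifying them before the tool is used. The file names and digests are purely illustrative.

```python
# Illustrative sketch: detect unwanted alterations to an AI tool's data and
# model files by comparing their current SHA-256 checksums against the values
# recorded when the system was last validated. All names/digests are examples.
import hashlib
from pathlib import Path

def file_checksum(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Checksums recorded at the last validation (placeholder values).
EXPECTED = {
    Path("data/training_set.csv"): "<digest recorded at validation>",
    Path("models/triage_model.bin"): "<digest recorded at validation>",
}

def altered_files() -> list[Path]:
    """Return the monitored files whose contents no longer match the record."""
    return [path for path, digest in EXPECTED.items()
            if file_checksum(path) != digest]

if __name__ == "__main__":
    changed = altered_files()
    if changed:
        print("Integrity check failed for:", ", ".join(str(p) for p in changed))
    else:
        print("All monitored files match their recorded checksums.")
```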


Performance

Participants raised concerns regarding practical issues affecting the utilisation of AI tools. The use of AI tools can be hindered by accessibility requirements such as internet connectivity or battery usage. Furthermore, the effectiveness of AI tools needs to be evaluated within their “real-world” context: they can only be regarded as performing well if they are able to account for the specific external and environmental factors that may alter their findings.


Legislation and Protocols

It is important that AI tools are beholden to appropriate laws and do not become a means to circumvent critical healthcare regulation. It is also vital that AI systems regularly undergo inspection to ensure that they remain compliant with relevant knowledge and protocols.


Adaptability to Users’ Needs

AI tools will rarely be used by a single individual, and it is important that multiple users have equitable access to them. This can be facilitated by tailoring the programming so that the tools are straightforward to use, and by implementing preventative measures to help if a device is used incorrectly. Additionally, AI tools can be used across different specialities in healthcare, and it is important that AI systems can function according to the particularities of each speciality.



Accuracy

Accuracy means ensuring that the results proposed can be trusted and do not fail to account for relevant inferences or data points. A total of 13 accuracy user requirements were generated by the participants, which were clustered into five themes:


  1. Responsibility 

  2. Balance and Limitations in Data 

  3. Data Quality 

  4. Adaptability to Complex Patients 

  5. Sustainability of AI Tools


Responsibility

As discussed in last week's blog, participants agreed that AI systems would enhance, rather than hinder, the quality of care only if ultimate responsibility were entrusted to the human actors who use AI for counsel. For this safeguard to be effective, human actors must also be held accountable for the decisions they make if patients are harmed on the basis of inadequate AI recommendations. Furthermore, users of AI tools should be able to detect systematic errors, which requires adequate training in the proper use of AI and the utmost transparency from developers about their systems.


Balance and Limitations in Data

The accuracy of AI systems is contingent upon the quality of patient data, but gathering the necessary data can be difficult because patients may lie, be unaware of their condition, be unconscious, or have mental health issues. Therefore, it is important that AI tools are not hampered by hyper-rigidity when assessing situations, and are able to account for limitations in their datasets which may impact their recommendations. AI tools should be able to provide a certainty estimate based upon the information they have been given, and suggest alternative treatments or diagnoses if an uncertainty threshold is crossed.
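
As a simple illustration of this last point (our sketch, not a workshop output), the logic of such a certainty threshold might look like the following; the names and the 0.7 cut-off are assumptions made for the example.

```python
# Illustrative sketch: attach a certainty estimate to each recommendation and,
# when certainty falls below a threshold, surface alternative treatments or
# diagnoses as well. The threshold and names are invented for this example.
from dataclasses import dataclass

@dataclass
class Recommendation:
    treatment: str
    certainty: float  # system's confidence in this suggestion, between 0 and 1

CERTAINTY_THRESHOLD = 0.7  # illustrative cut-off chosen by the deploying clinic

def present_to_clinician(primary: Recommendation,
                         alternatives: list[Recommendation]) -> list[Recommendation]:
    """Show only the primary suggestion when certainty is high enough;
    otherwise include the alternatives so the clinician sees the full picture."""
    if primary.certainty >= CERTAINTY_THRESHOLD:
        return [primary]
    return [primary, *alternatives]

# Example: incomplete patient information lowers certainty, so alternatives appear.
shown = present_to_clinician(
    Recommendation("Treatment A", certainty=0.55),
    [Recommendation("Treatment B", certainty=0.40),
     Recommendation("Further diagnostic imaging", certainty=0.35)],
)
print([r.treatment for r in shown])
# ['Treatment A', 'Treatment B', 'Further diagnostic imaging']
```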


Data Quality

As well as implementing safeguards in AI systems, ensuring that data is of the highest calibre will also secure the accuracy of AI. In addition to being transparent and grounded in real-world information, it is important that data comprehensively covers all patient groups. Therefore, future AI development will also need to account for how healthcare data is collected at present, and how this will impact the results of AI systems.


Adaptability to Complex Patients

Patients are rarely living with a single, isolated disease; more often they have a range of ailments, which makes analysing symptoms and treatment more complex. AI’s effectiveness is contingent on its ability to assess the overall health of the patient. In cases where multiple potential treatments are identified, or when treatments for different ailments have mutually exclusive effects, all possibilities should be recommended to healthcare professionals, who can then decide on the optimal course of action.


Sustainability of AI Tools

Participants advocated that developers must enhance the sustainability and longevity of their AI tools. Throughout the lifetime of an AI tool, it must be under regular supervision to ensure that it continues to perform effectively, which may require periodically retraining the tool and updating its datasets.
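
A minimal sketch of what such regular supervision could look like in practice is given below; it is our illustration rather than a workshop requirement, and the baseline accuracy, alert margin and figures are invented.

```python
# Illustrative sketch: track a deployed tool's recent accuracy and flag when it
# drifts below the level measured at validation, signalling that retraining or
# a dataset update may be due. All figures are invented for this example.
VALIDATED_ACCURACY = 0.91   # accuracy measured when the tool was approved
ALERT_MARGIN = 0.05         # tolerated drop before an alert is raised

def needs_review(recent_correct: int, recent_total: int) -> bool:
    """Return True if recent performance has drifted enough to warrant review."""
    recent_accuracy = recent_correct / recent_total
    return recent_accuracy < VALIDATED_ACCURACY - ALERT_MARGIN

# Example: 830 adequate recommendations out of the last 1,000 cases (83%).
if needs_review(830, 1000):
    print("Performance drift detected: schedule retraining / dataset update.")
```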


Fairness

Fairness requires that access to AI is equitable amongst all people and that AI systems do not inadvertently discriminate against any groups. A total of 11 user requirements related to fairness were generated by the participants, which were clustered into three themes:


  1. Bias in Data

  2. Equal Access

  3. Influence of the Political Landscape of Each Country


Bias in Data

In healthcare, patients are treated fairly when they are understood and treated with reference to their social and biological context. When using AI tools, participants suggested that it is important to consider which groups have been taken into account in the underlying datasets. A more diverse dataset will better match the characteristics of the wider population and ensure the most effective and equitable healthcare outcomes.
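
One simple check behind this point, sketched below as our own illustration (the group names and proportions are invented), is to compare how patient groups are represented in a training dataset against their share of the wider population.

```python
# Illustrative sketch: flag patient groups whose share of the training data
# deviates markedly from their share of the wider population. All figures
# below are invented for the example.
population_share = {"group_a": 0.48, "group_b": 0.32, "group_c": 0.20}
dataset_share    = {"group_a": 0.70, "group_b": 0.25, "group_c": 0.05}

def representation_gaps(dataset: dict, population: dict, tolerance: float = 0.10) -> dict:
    """Return groups whose dataset share differs from their population share
    by more than the tolerance (positive = over-represented)."""
    return {group: round(dataset.get(group, 0.0) - share, 2)
            for group, share in population.items()
            if abs(dataset.get(group, 0.0) - share) > tolerance}

print(representation_gaps(dataset_share, population_share))
# {'group_a': 0.22, 'group_c': -0.15}: group_c is under-represented, which could
# translate into less accurate recommendations for those patients.
```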


Equal Access

Participants argued that to ensure fairness, it is important that AI tools are accessible to all patients irrespective of their geographic location, disease, cultural background, socio-economic status or literacy level. This can in part be ensured by making AI tools as user-friendly as possible, with options catering to people with physical and cognitive disabilities. Furthermore, concerns were raised that certain hospitals may not have the necessary equipment to implement AI tools.


Influence of the Political Landscape of Each Country

Fairness does not just apply between individuals, but also between countries. Participants argued that AI tools will need to consider the socio-technical environment of the respective countries in which they are employed, so that they can take each nation's political and social differences into account when operating. Many countries have their own initiatives for disease prevention and other healthcare policies, and AI tools can be tailored to these in order to ensure fairness between nations.


We have now fully explored the findings of our Co-Creation Workshops on the role of AI in the healthcare sector, with specific emphasis in this blog on the relationship between AI and trustworthiness. We look forward to next week, when we shall look at the findings on Artificial Intelligence in port management.


