By Bella Callaway

How Risk Management is Being Transformed by AI

Updated: Apr 18

Organisations of every size are investing time and money in understanding AI tools and strategies that can be leveraged to optimise all aspects of business and gain an edge over competitors. Effective use of AI in the workplace has become essential. In some cases, entire industries are transitioning to a reliance on AI, and that shift carries heightened risks that must be managed. As AI technology continues to advance rapidly, organisations must be prepared to respond to the significant changes taking place globally as this AI 'revolution' unfolds. It is essential to understand the risks associated with such a shift in workplace technology, and the best practices that ensure a successful implementation of AI.


AI Generated Image via Canva


AI Risk Management is the process of identifying, assessing, and managing risks associated with using AI technologies. This includes addressing both technical risks (such as security vulnerabilities and algorithmic bias) and non-technical risks (such as ethical considerations and regulatory compliance). It involves understanding the potential risks and benefits of AI, developing strategies and policies to mitigate potential risks, and monitoring and responding to changes in the AI environment. It also includes creating processes and systems to ensure that ethical and legal standards are met.


The "Artificial Intelligence Risk Management Framework" was released in January 2023 by the National Institute of Standards and Technology (NIST), a pioneer in creating international standards for artificial intelligence. "The composite measure of an event's probability of occurring and the magnitude of its consequences" is how this paradigm defines risk.NIST defines AI risks as the possible negative effects that creating and implementing AI systems may have on individuals, groups, or systems. Anything from discriminatory hiring practices to unmanageable trading algorithms that have the potential to bring about market collapses are examples of harm. The AI system itself (i.e., the computational model), how it is employed, the data used to train and test it, or even interactions with people can all pose hazards.


Given AI systems' many potential dangers, proactively monitoring AI-based products and services is essential. One way to achieve control and help ensure safety and security is by adopting a risk management solution to triage, verify, and mitigate AI risks. An effective AI governance, risk, and compliance process enables organisations to identify and manage risks. At a high level, AI governance can be broken down into three main approaches:


  • Principles – using guidelines that inform and direct the use and development of AI, such as legislative standards and norms.

  • Processes – to address risk and harm resulting from design issues and a lack of appropriate governance.

  • Ethical Consciousness – actions motivated by a moral awareness or desire to do the right thing. It encompasses the integration of codes of conduct and compliance, consideration of reputational issues, (corporate) social responsibility, and concerns for institutional philosophy and culture.

Ensuring that harmful consequences are minimised, or avoided entirely, during the lifespan of AI projects requires a comprehensive understanding of the role of responsible principles in the design, implementation, and maintenance of AI applications.




When assessing a system, there are five key risk considerations:


Robustness risk is the risk of an algorithm failing in unexpected circumstances or under attack. It is essential to address when failure could result in financial losses or harm to human well-being. It can be measured by assessing performance on unseen data and by testing the system's ability to withstand targeted or adversarial attacks; a minimal sketch of such a test follows the list below. Mitigation strategies for robustness risks include:


-        Improving model generalisation

-        Retraining the model on new data

-        Using adversarial training and continual monitoring
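
As an illustration of measuring robustness, a simple stress test might compare accuracy on clean test inputs against inputs perturbed with random noise; a steep drop is a warning sign. This is only an assumed, simplified probe (scikit-learn, a synthetic dataset, and Gaussian noise standing in for a real model, real data, and a genuine adversarial attack):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in setup: any fitted classifier and held-out test set would do.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

clean_acc = accuracy_score(y_test, model.predict(X_test))

# Crude robustness probe: add increasing Gaussian noise to the inputs and
# watch how quickly accuracy degrades relative to the clean baseline.
rng = np.random.default_rng(0)
for noise_scale in (0.1, 0.5, 1.0):
    X_noisy = X_test + rng.normal(scale=noise_scale, size=X_test.shape)
    noisy_acc = accuracy_score(y_test, model.predict(X_noisy))
    print(f"noise={noise_scale:.1f}  accuracy drop={clean_acc - noisy_acc:.3f}")
```

Random noise is only a weak proxy for a deliberate adversarial attack, but the same comparison structure applies when stronger attack methods are substituted in.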


Bias risk is the risk that an algorithm mistreats individuals or groups and is particularly important for applications that significantly impact people’s lives. Measuring bias involves looking at performance across different groups based on characteristics such as gender, ethnicity, and age. Data debiasing, model amendment, and output amendment can be used to reduce bias, depending on the source of the bias.
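
As a small illustration of what "performance across different groups" can mean in practice, the sketch below computes per-group accuracy and positive-prediction rates; the group_metrics helper and the toy labels are purely hypothetical, and real fairness audits use richer metrics:

```python
import numpy as np

def group_metrics(y_true, y_pred, group):
    """Per-group accuracy and positive-prediction rate for a binary classifier.

    `group` is a hypothetical array of protected-attribute labels
    (e.g. gender or age bands) aligned with the predictions.
    """
    results = {}
    for g in np.unique(group):
        mask = group == g
        results[g] = {
            "accuracy": float(np.mean(y_true[mask] == y_pred[mask])),
            "positive_rate": float(np.mean(y_pred[mask])),
        }
    return results

# Toy example with made-up labels: a large gap in positive_rate between
# groups (the demographic parity difference) is one common warning sign.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(group_metrics(y_true, y_pred, group))
```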


Privacy risk refers to the potential for an algorithm to leak sensitive or personal data. It is an important consideration for applications that process personal and sensitive data, as it can lead to data breaches and unlawful processing. Assessing privacy risk involves looking at the data type, the amount of data stored, and whether data minimisation techniques were applied. These risks can be addressed by reducing the training data, anonymising or pseudonymising it, or using decentralised/federated models.
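
As one small illustration of pseudonymisation, direct identifiers can be replaced with keyed hashes before data is stored or used for training. The sketch below uses Python's standard hmac module; the key name and handling are illustrative only, and keyed hashing alone is not full anonymisation, since quasi-identifiers left in the data can still re-identify people:

```python
import hashlib
import hmac

# Hypothetical secret; in practice this would come from a key-management
# system, never from source code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256).

    The mapping is repeatable (the same input yields the same token, so
    records can still be joined) but not reversible without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

print(pseudonymise("jane.doe@example.com"))
```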


Explainability risk is the risk that the system or its decisions are not sufficiently understandable to users and developers. It is a key risk to consider when developing critical applications affecting many users. To reduce this risk, it is essential to examine the documentation and communication processes concerning models and data, and how easy it is to interpret the model's decisions. Better documentation procedures can be developed, and tools can be used to better interpret the model's decisions, including how different features are weighted.
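
As a minimal sketch of one such interpretability check, the example below fits an inherently interpretable linear model on a public dataset and ranks features by coefficient magnitude. This is an assumed, simplified approach; complex models typically need dedicated explanation tooling instead:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Fit a simple, inherently interpretable model.
data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# With standardised inputs, coefficient magnitude is a rough proxy for how
# heavily each feature is weighted in the model's decisions.
coefs = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(data.feature_names, coefs),
                key=lambda t: abs(t[1]), reverse=True)
for name, weight in ranked[:5]:
    print(f"{name:25s} {weight:+.2f}")
```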


Efficacy is the risk that the system does not perform well relative to its business case. It is a key risk to consider when working on projects where failure would have major consequences, such as a large financial loss. To reduce efficacy risks, it is important to measure the performance of the system using metrics such as accuracy, precision, and recall. Steps to improve model efficacy include improving model generalisation, regularly monitoring performance, and collecting additional training and test data.
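
For instance, a minimal sketch of computing those metrics with scikit-learn on hypothetical held-out predictions might look as follows; which metric matters most depends on the business case (e.g., recall when missed positives are costly):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical held-out labels and model predictions for a binary task.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

print(f"accuracy:  {accuracy_score(y_true, y_pred):.2f}")
print(f"precision: {precision_score(y_true, y_pred):.2f}")
print(f"recall:    {recall_score(y_true, y_pred):.2f}")
```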


The THEMIS 5.0 trustworthiness evaluation methodology will employ a dynamic and transparent risk management approach, based on the AI Risk Management Framework proposed by NIST and on the specific standards used in the lifecycle of AI systems as defined by ENISA, in order to manage socio-technical threats. The planned evaluation will consider weaknesses relating to the technical accuracy, robustness, and fairness of AI systems, as well as the legislative environment for trusted AI in the EU. When evaluating a level of trustworthiness based on risk management, the THEMIS trustworthiness evaluation ecosystem will innovate by considering both the human perspective and the broader socio-technical systems perspective.
