Reference Materials
To support actuaries in understanding Artificial Intelligence and its implications, the IAA AI Task Force has developed and curated a collection of reference materials. The materials are designed to foster informed discussion, build capability within the actuarial profession, and contribute to global conversations on AI.
Artificial Intelligence Governance Framework
This paper aims to provide educational material that helps actuaries safeguard responsible artificial intelligence (AI), while raising awareness of the risks that need to be managed when designing, developing, implementing and using AI models and AI systems. Actuaries have long been at the forefront of managing uncertainty, drawing on a combination of skills in areas such as probability theory, advanced mathematics, statistics, economics, finance and, importantly, professionalism. As key players in decision-making within the financial industry and the field of social protection, actuaries’ concerns about managing risks appropriately and contributing to societal well-being have become even more relevant with the rise of AI.

AI is no longer an emerging topic; it has already found a place within the actuarial profession, particularly in data analytics, predictive modelling and risk management, and its impact on actuarial practice is only bound to grow. Given the distinctive nature of the actuarial profession – encompassing technical skills, ethical standards and professionalism – actuaries are well positioned to contribute to the development of AI systems and to oversee their principal’s overall approach to AI.
Documentation of Artificial Intelligence Models or Systems
Documentation is a vital component of any artificial intelligence (AI) model or system, forming the foundation for a robust governance framework. Comprehensive documentation is essential not only for compliance and regulatory purposes, but also for transparency, accountability of the AI systems and continuity of operations.
While this paper aims to assist actuaries in creating effective documentation for their AI models or systems, it makes no claim to be exhaustive. It outlines key elements that may be considered good practice for documenting an AI model or system. The paper covers the elements of documentation at all stages of the model lifecycle, from Data, through Model Development, to Model Deployment. The level of detail included in AI model documentation should reflect proportionality and could vary depending on a number of factors, such as the significance, risk and complexity of the AI models. Actuaries are encouraged to use professional judgement and tailor the documentation to these factors, with the goal of meeting the needs of the documentation’s end users. In this document, the terms “AI system” and “AI model” align with those presented in the Artificial Intelligence Governance Framework paper.1 In summary, an AI system is an overall machine-based system that infers from inputs how to generate outputs, and an AI model is a core component within an AI system used to make such inferences.
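To make the lifecycle stages concrete, the structure described above (Data, Model Development, Model Deployment) could be sketched as a simple documentation record. This is purely illustrative: the field names and contents below are hypothetical assumptions, not a template prescribed by the paper.

```python
# Hypothetical sketch of lifecycle-stage documentation for an AI model,
# loosely following the Data -> Model Development -> Model Deployment
# stages. All field names and values are illustrative assumptions.

model_documentation = {
    "data": {
        "sources": ["policy administration system extract"],
        "preprocessing": "missing values imputed; outliers capped",
        "limitations": "underrepresents policies issued before 2015",
    },
    "model_development": {
        "model_type": "gradient boosted trees",
        "target": "lapse within 12 months",
        "validation": "5-fold cross-validation on recent experience data",
    },
    "model_deployment": {
        "owner": "pricing team",
        "monitoring": "quarterly drift checks on input distributions",
        "fallback": "revert to prior GLM if performance degrades",
    },
}

# The level of detail in each stage can then be scaled to the model's
# significance, risk and complexity, in line with proportionality.
for stage, fields in model_documentation.items():
    print(stage, "->", sorted(fields))
```

In practice, the depth of each section would be tailored to the end users of the documentation, as the paper recommends.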
Testing of Artificial Intelligence Models or Systems
In recent years, the integration of artificial intelligence (AI) and machine learning (ML) into actuarial practice has been reshaping the landscape of risk assessment and decision-making. As these technologies become increasingly prevalent, it is important to understand the principles and methodologies involved in creating, testing and using AI models to ensure their reliability, accuracy and fairness. Actuaries play a critical role in this process, leveraging their expertise in statistical analysis, risk assessment and regulatory compliance. Through comprehensive testing of AI models, actuaries help mitigate model risk and enhance the overall integrity of the decision-making process. This paper aims to provide educational material regarding good practices for testing AI models within the context of actuarial work, emphasizing a principle-based approach that allows for flexibility in application. By presenting suggested approaches and tools for effective model testing, this paper aims to foster confidence in the use of AI technologies while promoting ethical and responsible practices.
The paper is structured to offer an overview of the key considerations and methodologies involved in testing AI models. The structure of the paper is set out below. Some points may appear in more than one section where they are relevant in different contexts. Additionally, the paper addresses the importance of model governance and risk management, highlighting the actuary’s role in the testing and validation process.
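The kind of testing discussed above can be sketched in miniature: checking a model's accuracy against a tolerance and comparing its outcomes across groups. The toy model, data and thresholds below are entirely hypothetical assumptions, intended only to show the shape of such checks, not the paper's methodology.

```python
# Illustrative sketch only: a toy "model" and two checks an actuary
# might run during testing -- an accuracy threshold and a simple
# acceptance-rate comparison across groups. All names, data and
# thresholds here are hypothetical.

def toy_model(income: float) -> int:
    """Predict 1 (accept) if income exceeds a fixed cutoff."""
    return 1 if income > 50_000 else 0

# Hypothetical hold-out data: (income, group, true_label)
holdout = [
    (60_000, "A", 1), (40_000, "A", 0), (55_000, "A", 1),
    (62_000, "B", 1), (45_000, "B", 0), (52_000, "B", 1),
]

def accuracy(rows):
    """Share of hold-out rows the toy model classifies correctly."""
    return sum(toy_model(x) == y for x, _, y in rows) / len(rows)

def acceptance_rate(rows, group):
    """Share of a group's rows the model predicts as 'accept'."""
    preds = [toy_model(x) for x, g, _ in rows if g == group]
    return sum(preds) / len(preds)

acc = accuracy(holdout)
gap = abs(acceptance_rate(holdout, "A") - acceptance_rate(holdout, "B"))

# Governance-style checks (tolerances are assumptions):
assert acc >= 0.8, "accuracy below tolerance"
assert gap <= 0.1, "acceptance-rate gap across groups too large"
```

Real testing would of course be far broader, covering robustness, stability and data quality, but even this minimal pattern shows how quantitative tolerances can make reliability and fairness expectations explicit and repeatable.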
