The market context in which insurance companies operate is fundamentally changing. The use of data and Artificial Intelligence (AI) algorithms is growing significantly and is expected to be a key currency of future success. With the huge quantities of data created across the insurance value chain, AI provides tremendous opportunities for further automation of processes, the development of new, more customer-centric products, and the assessment of insurance risks. These new possibilities, however, make processes more complex and introduce new risks that must be managed. AI algorithms can have a direct impact on people, raising ethical and privacy questions; this in turn draws regulators and industry bodies into the discussion, aiming to avoid adverse effects without stifling the innovation and potential of AI.
Insurance companies must strike the right balance between improving their operations with the new solutions AI makes possible and managing the corresponding risks. This requires rigorous risk assessment and management of the development, implementation and use of AI. The importance is reflected in various pieces of legislation currently under development across the world, including the European Union’s AI Act, which provides for penalties of up to 6% of total worldwide annual turnover. Given these regulatory requirements and the potential reputational implications, AI risk cannot simply be diversified away or managed merely in proportion to company size: no matter how large or small the insurance company, harm to customers caused by AI can be catastrophic for its reputation and business. That is why Internal Audit should play a role in providing assurance and advice on mitigating the risks arising from implementing AI.
The Internal Audit function can, in line with its mandate, help organizations with the balancing act between risk mitigation and business innovation. This could include developing assurance strategies for AI governance, data privacy and security; reviewing processes for potential bias; and ensuring compliance with relevant laws and regulations. In addition, internal auditors can provide insights and advice that help companies understand and mitigate the risks associated with AI adoption and use.
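To illustrate what a review for potential bias might involve in practice, the sketch below computes a simple disparate impact ratio over a set of automated decisions. The data, column names and the 0.8 threshold (the widely cited "four-fifths rule") are illustrative assumptions, not part of a prescribed audit procedure.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame,
                           group_col: str,
                           outcome_col: str) -> pd.Series:
    """Favourable-outcome rate of each group divided by the rate of
    the most-favoured group; values below ~0.8 are a common red flag
    under the 'four-fifths rule'."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical sample of automated underwriting decisions:
# 1 = application accepted, 0 = declined.
decisions = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-60", "31-60", "61+", "61+"],
    "accepted": [1, 0, 1, 1, 1, 0],
})

ratios = disparate_impact_ratio(decisions, "age_band", "accepted")
print(ratios[ratios < 0.8])  # groups an auditor might flag for review
```

In a real audit, such a metric would be computed on production decision logs and interpreted alongside legal and actuarial considerations; a low ratio is a prompt for further investigation, not in itself proof of unlawful discrimination.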
Internal Audit should be involved from the start of new AI implementations to provide advice on how to implement AI securely and in accordance with policies and regulation. A top-down approach is advisable: first audit the AI strategy and governance, then test individual instances, algorithms and models, beginning with high-risk AI. This helps ensure that development is conducted in an efficient and effective manner and that controls tailored to the risks of the specific AI implementation are in place.
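As a concrete illustration of this risk-based prioritisation, the sketch below scores a hypothetical AI inventory so that high-risk use cases are audited first. The scoring dimensions, equal weights and tier cut-off are illustrative assumptions and would need to be calibrated to the company's own risk taxonomy.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    customer_impact: int   # 1 (internal only) .. 5 (direct impact on policyholders)
    autonomy: int          # 1 (human-in-the-loop) .. 5 (fully automated decisions)
    data_sensitivity: int  # 1 (public data) .. 5 (special-category personal data)

def risk_score(use_case: AIUseCase) -> int:
    # Equal weights for simplicity; a real taxonomy would calibrate them.
    return use_case.customer_impact + use_case.autonomy + use_case.data_sensitivity

inventory = [
    AIUseCase("Claims fraud detection", 4, 3, 4),
    AIUseCase("Marketing content generation", 2, 2, 1),
    AIUseCase("Automated underwriting", 5, 5, 5),
]

# Audit the highest-scoring use cases first (top-down, risk-based).
for uc in sorted(inventory, key=risk_score, reverse=True):
    tier = "high risk" if risk_score(uc) >= 10 else "standard"
    print(f"{risk_score(uc):>2}  {tier:<9}  {uc.name}")
```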
Internal Audit should not only provide assurance over the process of developing AI, but also perform risk-based deep dives to ensure each AI implementation is compliant and working effectively. Auditing AI covers technical aspects, data governance and quality, ethical themes and business application. A multidisciplinary audit team should therefore be formed, with representatives from IT audit, data science and business audit, as well as specialist expertise such as actuarial and ethics knowledge, to ensure each aspect is thoroughly assessed. Internal Audit departments should hence upskill their staff where needed, so that they stay ahead of key new developments and can independently assess the risks and plan and execute audits as required. Our research has shown that most Internal Audit departments are at an early stage of establishing the required skills and processes, and often do not keep up with the rapid growth in the use of AI in the insurance industry. For these reasons, this paper proposes an AI Audit Program that identifies the most important AI-related risks, their possible root causes, and corresponding testing strategies.