Artificial Intelligence: Emerging Risks and Challenges for the Financial Sector
Artificial Intelligence (AI) is fundamentally transforming Singapore's financial services industry. AI technologies, from machine learning algorithms to natural language processing, are enhancing the efficiency and accuracy of financial services. For the banking sector alone, AI-driven innovation could lead to an annual revenue boost of between US$200 billion and US$340 billion.
However, alongside these opportunities come pressing risks. As AI adoption accelerates, the associated risks, from algorithmic bias and cybersecurity vulnerabilities to regulatory non-compliance, are becoming increasingly critical. Financial institutions must adopt robust governance frameworks and risk mitigation strategies to leverage AI responsibly and protect against potential setbacks. This article explores the top emerging AI risks for the financial sector and the strategies organizations can employ to address these challenges.
Algorithmic Bias: A Growing Concern
Algorithmic bias is one of the most pressing risks tied to AI in financial services. AI models, trained on historical data, can inadvertently inherit biases, resulting in discriminatory or unfair outcomes. This bias often remains hidden, only surfacing when AI-driven decisions are analyzed post-deployment.
For instance, in Indonesia, an AI-powered job recommendation system unintentionally excluded female candidates from certain opportunities because of biases embedded in its historical training data. Amazon encountered a similar problem with an AI recruitment tool that favored male applicants, having learned from historical hiring data dominated by men. Such biases raise ethical concerns and can expose organizations to legal repercussions and reputational damage.
In the financial services sector, algorithmic bias can impact lending practices, pricing models, and risk assessments, disproportionately affecting vulnerable populations. A biased lending algorithm, for example, might unfairly deny loans to certain demographics or assign higher interest rates based on biased data points, which could lead to regulatory action or litigation.
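To make this concrete, the minimal Python sketch below computes a disparate impact ratio, the lowest group approval rate divided by the highest, over a toy set of loan decisions. The data, group labels, and the 0.8 threshold (the "four-fifths rule" of thumb from US fair-lending practice) are illustrative assumptions, not requirements drawn from MAS guidance.

```python
# Illustrative only: a toy fairness check on loan-approval outcomes.
# Group names, data, and the 0.8 threshold are assumptions for demonstration.
from collections import defaultdict

# Hypothetical model outputs: (applicant_group, approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

rates = {g: approvals[g] / totals[g] for g in totals}
print("Approval rates:", rates)

# Disparate impact ratio: lowest approval rate over highest.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: review the model and its training data.")
```

Simple checks like this are only a starting point, but running them before and after deployment is one way biased outcomes surface earlier than they otherwise would.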
One challenge in addressing bias is the complexity of many AI models, especially deep learning systems. These models function as "black boxes," making it difficult to understand how they reach decisions. This lack of transparency not only hinders accountability but also complicates compliance with fairness and transparency regulations. The Monetary Authority of Singapore (MAS) has issued principles to promote fairness, ethics, accountability and transparency (FEAT) in the use of AI within the financial services sector.
To mitigate these risks, financial institutions should prioritize the development of explainable AI models. Additionally, insurance products like Directors and Officers Liability (D&O) and Professional Indemnity (PI) are critical to protecting organizations against the potential financial and reputational fallout from biased AI models. These policies can shield institutions from the costs associated with legal disputes, regulatory fines, and reputational damage due to biased AI decisions.
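As a sketch of what "explainable" can mean in practice, the example below uses scikit-learn's permutation importance to measure how much each input feature drives a simple credit-scoring classifier. The synthetic dataset and feature names are assumptions for illustration; a production model-risk review would go considerably further.

```python
# Illustrative sketch: quantifying feature influence on a toy credit model.
# The synthetic data and feature names are assumptions, not real lending data.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Toy dataset standing in for credit applications.
X, y = make_classification(n_samples=1000, n_features=4, random_state=42)
feature_names = ["income", "debt_ratio", "tenure_months", "num_accounts"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Permutation importance: how much does accuracy drop when one feature
# is randomly shuffled? Larger drops mean the model leans on it more.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>15}: {score:.3f}")
```

Outputs like these give compliance and audit teams something concrete to interrogate, which is precisely what opaque "black box" models deny them.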
Cybersecurity Threats to AI Systems
As AI systems process vast amounts of sensitive financial and personal data, they have become prime targets for cybercriminals. Over the past two years, data breaches have reportedly risen by 319%, underscoring the critical need for secure AI systems. When compromised, AI systems can be manipulated to produce false outcomes, leading to data breaches, financial loss, and reputational damage.
Financial institutions are especially vulnerable, as they handle high volumes of sensitive data, from customer identification numbers to transaction histories. A breach of this data could result in severe financial and reputational repercussions, as well as regulatory consequences under data protection laws. Additionally, the growing dependence on AI systems in essential business operations means malfunctions and cyberattacks can also disrupt critical services and lead to financial losses and reputational damage.
To address these threats, AI systems should be "secure by design," with cybersecurity measures incorporated from the outset. The Cyber Security Agency of Singapore (CSA) has published guidelines on securing AI systems, advising institutions to integrate strong encryption protocols, access controls, and regular security assessments.
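As one small piece of a "secure by design" posture, the sketch below encrypts a sensitive record before it is stored or fed into an AI pipeline, using the widely adopted Python cryptography library. Key handling is deliberately simplified here; in production the key would come from a managed key service or hardware security module, not be generated inline.

```python
# Minimal sketch: encrypting a sensitive record at rest.
# Requires: pip install cryptography
# Key handling is simplified for illustration; use a KMS/HSM in production.
from cryptography.fernet import Fernet

# In production, load the key from a secrets manager; never hard-code it.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"customer_id": "C-1042", "txn_amount": 2500.00}'

token = cipher.encrypt(record)    # ciphertext that is safe to store
restored = cipher.decrypt(token)  # only key holders can recover the record

assert restored == record
print("Encrypted record length:", len(token), "bytes")
```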
However, no system is entirely immune to threats, so financial institutions are encouraged to consider comprehensive insurance coverage to protect against financial losses from cyber incidents. Cyber insurance can provide a safety net in the event of data breaches, disruptions, or other AI-related incidents, helping to mitigate potential impacts on business operations and corporate reputation.
Navigating the Regulatory Landscape: A Complex Challenge
AI governance is still a nascent space, and regulatory frameworks around AI continue to evolve, creating challenges for financial institutions aiming to stay compliant. Beyond the MAS and CSA guidelines mentioned above, the Model AI Governance Framework developed by the Infocomm Media Development Authority (IMDA) also provides a guide for financial institutions adopting responsible AI practices. The framework offers detailed, readily implementable guidance for private sector organizations on key ethical and governance issues when deploying AI solutions.
Clearly, the government is keeping a close eye on the rapidly evolving AI landscape, and financial institutions must prioritize compliance with regulations governing AI to avoid legal and reputational risks. This requires a comprehensive understanding of the regulatory landscape, ongoing monitoring of changes, and the implementation of robust AI governance practices.
Protecting Against AI-Related Risks and Liabilities
The risk landscape for the financial sector is evolving rapidly with the rise of AI-related threats. While robust and secure AI practices are a critical line of defense, they aren't enough on their own. Now more than ever, insurance solutions like Directors and Officers (D&O) Liability, Professional Indemnity (PI), Crime, and Cyber insurance are essential for protecting businesses against these emerging risks.
For example, D&O Liability and PI insurance can offer coverage for claims arising from AI-driven decisions, such as biases in lending or pricing models. Cyber insurance, on the other hand, protects against data breaches, financial losses, and reputational damage from cyberattacks on AI systems. Cyber insurance policies now often include extensions specifically for AI-related risks, such as:
- Data Poisoning: Manipulation of training data to compromise AI models, which can lead to expensive data cleanup and litigation risks (a toy illustration of this attack follows the list below).
- Accidental Infringement: Coverage for unintentional infringement of data, media, or software usage rights, reducing exposure to infringement claims and legal disputes.
- Regulatory Violations: Protection against liabilities related to non-compliance with emerging regulations, such as the European Union’s AI Act.
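For readers unfamiliar with the data poisoning item above, the toy sketch below flips a fraction of training labels before fitting a classifier and measures the resulting accuracy loss, a crude stand-in for how corrupted training data degrades an AI model. The dataset and poisoning rates are illustrative assumptions.

```python
# Toy illustration of data poisoning: flipping a fraction of training
# labels degrades the resulting model. Data and rates are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_rate: float) -> float:
    """Train on labels with `flip_rate` of them flipped; return test accuracy."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    n_flips = int(flip_rate * len(y_tr))
    idx = rng.choice(len(y_tr), size=n_flips, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the selected labels
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for rate in (0.0, 0.1, 0.3):
    print(f"poison rate {rate:.0%}: test accuracy {accuracy_with_poisoning(rate):.3f}")
```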
Working with an experienced insurance broker, like Howden, can equip financial institutions with tailored solutions to address specific AI-related risks. Our brokers bring expertise in assessing risk exposure and providing appropriate insurance coverage, helping institutions mitigate liability while supporting resilience against evolving threats.
AI holds tremendous potential to reshape financial services, but only those institutions that take a proactive, well-rounded approach to risk management will succeed in capitalizing on these benefits while safeguarding against emerging threats.

Have questions about your insurance cover?
Reach out to us for a chat and we'll answer all your questions.