How financial institutions can navigate the promise and pitfalls of artificial intelligence to unlock its full potential.
The finance industry’s embrace of artificial intelligence (AI) represents both a step-change in efficiency and a complex puzzle of risk management. Two years after generative AI burst into the mainstream, financial institutions have taken strides to explore its potential, from enhancing customer interactions to transforming backend processes. Yet, as these organisations navigate the opportunities, they also confront critical challenges—data integrity, governance, and ethical considerations, to name a few.
At the FinTech Connect 2024 event, a panel of industry leaders examined AI’s evolving role in finance. Speakers included Joshua Lloyd-Lyons (Fidelity Ventures), Søren Rode Jain Andreasen (Nordea), Mariano Giralt (BNY Mellon), and Tom Cahn (Eigen Technologies), with Ryan Browne, CNBC’s Technology Correspondent, moderating the discussion. The panel explored how AI is reshaping financial services and the steps institutions must take to unlock its full potential while mitigating risks.
Artificial intelligence is already driving measurable change across financial services, with its applications spanning both customer-facing and backend operations. In customer engagement, AI-powered tools like chatbots have gained traction, offering scalable solutions to handle inquiries and streamline service delivery. Some institutions are now managing more customer interactions through AI-driven platforms than traditional call centres, illustrating the technology’s potential to enhance accessibility and efficiency.
Beyond customer service, AI is transforming internal processes. Financial institutions are leveraging machine learning and generative AI to automate workflows, improve compliance, and extract actionable insights from unstructured data. For example, AI is being used to process and analyse contracts, enabling institutions to meet stringent regulatory requirements with precision. Additionally, advanced analytics are helping to identify potential tax evasion, reducing the need for manual intervention.
Despite the hype surrounding generative AI, its most effective implementations often combine it with traditional AI. This hybrid approach enhances data structuring and workflow optimisation, illustrating how financial organisations can integrate the two technologies to create robust, high-value systems.
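To make that hybrid pattern concrete, the sketch below pairs a simple, deterministic document classifier (standing in for traditional AI) with a stubbed generative extraction step and a validation layer that flags incomplete output for human review. It is a minimal illustration under assumed conventions: the function names, document types, and required fields are invented for the example and do not describe any panelist's system or a specific vendor API.

```python
# Minimal sketch of the hybrid pattern: a traditional, deterministic classifier
# routes each document, and a generative model (stubbed here) extracts structured
# fields from the unstructured text. All names are illustrative assumptions.
import re
from dataclasses import dataclass

REQUIRED_FIELDS = {"counterparty", "effective_date", "notional"}  # assumed schema


@dataclass
class ExtractionResult:
    doc_type: str
    fields: dict
    needs_review: bool  # True when output fails validation and needs a human check


def classify_document(text: str) -> str:
    """Traditional AI stand-in: a keyword classifier (in practice, a trained model)."""
    if re.search(r"\bISDA\b|\bswap\b", text, re.IGNORECASE):
        return "derivatives_contract"
    if re.search(r"\bloan\b|\bfacility\b", text, re.IGNORECASE):
        return "loan_agreement"
    return "other"


def generative_extract(text: str, doc_type: str) -> dict:
    """Placeholder for a generative-model call; returns whatever fields it can find."""
    # A real implementation would prompt an LLM with a schema for doc_type.
    return {"counterparty": "Example Bank plc", "effective_date": "2024-01-15"}


def process_document(text: str) -> ExtractionResult:
    doc_type = classify_document(text)           # cheap, deterministic routing
    fields = generative_extract(text, doc_type)  # flexible extraction from free text
    # Deterministic validation layer: flag incomplete output for human review.
    needs_review = not REQUIRED_FIELDS.issubset(fields)
    return ExtractionResult(doc_type, fields, needs_review)


if __name__ == "__main__":
    result = process_document("This ISDA Master Agreement is made between ...")
    print(result)  # needs_review is True here because "notional" was not extracted
```

The design point is the layering: the generative step handles the messy, free-text extraction, while classification and validation stay deterministic and auditable, which is where traditional techniques remain stronger.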
While AI offers transformative potential, its adoption in finance is fraught with challenges that require careful management. Chief among these is data quality—the old adage “garbage in, garbage out” is particularly relevant here. AI models are only as reliable as the data they are trained on, and inconsistencies or inaccuracies can lead to flawed outputs, undermining trust and operational effectiveness.
Equally critical is the issue of governance. Financial institutions operate in a highly regulated environment, making robust oversight of AI systems indispensable. Organisations must implement governance frameworks that involve compliance, risk management, and legal teams, alongside ethics committees to address AI-specific dilemmas. Establishing such structures ensures decisions align with regulatory requirements and organisational values.
Another concern lies in AI’s current limitations, particularly generative AI’s struggles with numerical data and reasoning. For an industry where precision is paramount, these weaknesses can have far-reaching implications. Furthermore, risks such as hallucinated outputs, bias, and model collapse highlight the need for human oversight to validate AI-driven insights.
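One way that oversight can be operationalised is sketched below: numeric figures produced by a generative model are reconciled against a trusted system of record, and anything outside tolerance is escalated to a reviewer. The tolerance, queue, and function names are assumptions made for the example rather than a prescribed control framework.

```python
# Minimal sketch of a human-in-the-loop check for AI-produced figures: values are
# reconciled against a system of record and mismatches are escalated for review.
# The tolerance and review queue are illustrative assumptions.
from decimal import Decimal

TOLERANCE = Decimal("0.01")  # assumed acceptable absolute discrepancy


def reconcile(ai_value: Decimal, source_value: Decimal) -> bool:
    """Return True if the AI-reported figure matches the system of record."""
    return abs(ai_value - source_value) <= TOLERANCE


def route(ai_value: Decimal, source_value: Decimal, review_queue: list) -> Decimal:
    """Accept matching figures automatically; escalate mismatches for human review."""
    if reconcile(ai_value, source_value):
        return ai_value
    review_queue.append({"ai": ai_value, "source": source_value})
    return source_value  # fall back to the trusted value until a human signs off


if __name__ == "__main__":
    queue: list = []
    print(route(Decimal("1052.50"), Decimal("1052.50"), queue))  # auto-accepted
    print(route(Decimal("1520.50"), Decimal("1052.50"), queue))  # escalated
    print("items awaiting review:", len(queue))
```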
Despite being only two years into widespread AI adoption, financial organisations must prioritise long-term strategies. Investments in AI infrastructure and workforce training are crucial for ensuring that these technologies are deployed responsibly and effectively, paving the way for scalable, ethical growth.
To harness the full potential of artificial intelligence, financial institutions must approach its deployment strategically. A key focus is integrating generative AI with traditional AI, creating hybrid systems that combine the strengths of both technologies. This layered approach enables institutions to automate complex workflows, structure unorganised data, and deliver actionable insights, all while mitigating the shortcomings of generative AI alone.
Building internal AI capabilities is another critical step. Organisations are realising the importance of embedding AI across departments, not as a standalone tool but as an integral part of solving specific business problems. For example, using AI to streamline compliance processes, enhance portfolio monitoring, or optimise customer onboarding highlights the technology’s role in delivering measurable value.
Additionally, protecting and leveraging high-quality legacy data is becoming a strategic priority. Datasets created before generative AI’s rapid evolution are largely free of synthetic, machine-generated content, making them especially valuable for training robust, reliable models. Institutions are recognising this and taking steps to safeguard these assets as a long-term advantage.
Effective implementation also requires cross-functional collaboration. By encouraging synergy between AI teams, data scientists, and other departments such as compliance and blockchain, organisations can break down silos and innovate more effectively. This collaborative approach ensures that AI is not just a tool but a driver of holistic growth and transformation.
While artificial intelligence brings significant efficiency gains, its adoption must not come at the cost of human connection. Financial services thrive on trust, and many customers still prefer personal interaction when dealing with complex or sensitive issues. For this reason, the human touch remains an indispensable complement to AI-driven solutions.
AI-powered chatbots, for example, have proven effective for handling routine queries and directing customers to appropriate resources. However, these tools must operate as part of a broader strategy that includes seamless access to human advisors when needed. Institutions that fail to strike this balance risk alienating customers through impersonal or frustrating experiences.
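As a rough illustration of that balance, the sketch below routes routine, high-confidence queries to self-service while handing sensitive or uncertain ones to a human advisor. The intents, keywords, and thresholds are assumptions for the example, not a description of any institution’s production chatbot.

```python
# Minimal sketch of chatbot-to-human routing: routine intents are self-served,
# while sensitive or low-confidence queries are handed to a human advisor.
# Intents, keywords, and thresholds are illustrative assumptions.
ROUTINE_INTENTS = {"balance_enquiry", "card_activation", "branch_hours"}
SENSITIVE_INTENTS = {"bereavement", "fraud_report", "complaint", "financial_hardship"}


def detect_intent(message: str) -> tuple[str, float]:
    """Stand-in for an intent model: returns (intent, confidence)."""
    text = message.lower()
    if "fraud" in text or "stolen" in text:
        return "fraud_report", 0.95
    if "balance" in text:
        return "balance_enquiry", 0.90
    return "unknown", 0.30


def route_message(message: str) -> str:
    intent, confidence = detect_intent(message)
    if intent in SENSITIVE_INTENTS or confidence < 0.6:
        return "handover_to_human_advisor"  # preserve the human touch where it matters
    if intent in ROUTINE_INTENTS:
        return f"self_serve:{intent}"       # chatbot resolves the routine query
    return "handover_to_human_advisor"      # anything unrecognised also goes to a person


if __name__ == "__main__":
    print(route_message("What's my current balance?"))  # self_serve:balance_enquiry
    print(route_message("I think my card was stolen"))  # handover_to_human_advisor
```

The essential choice here is that escalation is the default: the bot only keeps a conversation it can confidently classify as routine, which is one way to avoid the impersonal or frustrating experiences the panel warned against.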
The solution lies in viewing AI as an enabler of better service, not a replacement for human expertise. By using AI to manage repetitive tasks, financial professionals can focus on high-value activities that strengthen customer relationships and build trust. Moreover, ensuring transparency in AI-driven interactions helps foster confidence among users, mitigating concerns about data misuse or algorithmic bias.
Ultimately, the financial sector must adopt AI responsibly, recognising its role as one piece of a broader service ecosystem. Institutions that prioritise ethical implementation and maintain strong customer connections will be better positioned to leverage AI’s full potential while preserving the trust that underpins their success.