AI’s promise in finance is vast, but so are its risks. ING’s Bahadir Yilmaz shares how strategy and responsibility can outshine the hype.
When it comes to artificial intelligence (AI) in financial services, we’re in the midst of both an awakening and a reckoning. McKinsey’s State of AI 2024 report shows that AI adoption in finance has surged, with 72% of institutions using AI to drive value, almost double the figure from just a few years ago. But as finance leaders increasingly turn to AI, the challenge lies in balancing rapid innovation with responsible governance and tangible impact.
To better understand AI’s impact on financial services, Bobsguide sat down with Bahadir Yilmaz, Chief Analytics Officer at ING. With over a decade of experience in digital transformation, Bahadir argues that while AI offers unmatched potential, deploying it responsibly is no simple switch.
“AI’s promise is vast, but so are the risks. Leaders need to approach AI not as a ‘magic sauce’ for every challenge but as a tool best applied strategically,” he explains. By prioritising well-defined use cases, finance leaders can leverage AI effectively while ensuring its deployment aligns with ethical and regulatory expectations.
In financial services, AI’s potential isn’t limited to flashy applications; rather, its real power lies in its ability to tackle high-cost, repetitive tasks that drain resources and hamper client satisfaction. As Bahadir states, “AI should be applied with intent. It’s not a solution for every challenge, but where it’s a fit, the results can be transformative.”
Key areas like customer support and compliance are where AI-driven solutions truly shine. Financial institutions often struggle with the volume and complexity of client queries, especially in contact centres. By employing AI-driven chatbots, institutions can streamline responses, handle basic inquiries, and allow human agents to focus on more complex needs.
“Contact centres are often a pain point for clients. AI-driven chatbots, when secure and well-monitored, improve response times and boost client satisfaction,” Bahadir points out, illustrating AI’s potential to enhance user experience without compromising data security.
Financial services are highly regulated, and compliance is both a necessity and a burden, especially in processes like Know Your Customer (KYC) and Anti-Money Laundering (AML). Bahadir sees these areas as prime candidates for AI integration, enabling institutions to automate document checks and flag risks early in the process. By handling these data-heavy, repetitive tasks, AI frees up human resources for more complex compliance matters.
However, AI’s dual potential for innovation and risk underscores the need for responsible governance. “Responsible AI isn’t optional; it’s foundational,” says Bahadir. He calls for rigorous internal controls to “monitor for biases, avoid discriminatory language, and ensure AI performs as intended.”
In regions with strong regulatory frameworks, such as the EU with GDPR, institutions have clear standards to guide the responsible governance of AI systems. Bahadir advocates for these frameworks to be emulated globally, arguing that they provide a strict but beneficial layer of security.
His advice to finance leaders is to take a proactive approach to regulation, integrating compliance from the outset rather than waiting for penalties to force the issue. “Regulatory compliance isn’t just about avoiding fines; it’s about building trust and aligning AI’s deployment with the organisation’s values,” Bahadir notes. By embedding oversight into AI processes, leaders can navigate the regulatory landscape effectively and maintain client confidence.
While AI’s transformative potential is often discussed, its current value in finance is largely tactical. AI-driven automation offers practical benefits in customer support, compliance, and even lending, streamlining operations that were traditionally resource-intensive.
In customer-facing roles, AI also enables personalisation. Institutions can leverage AI to analyse customer data, tailoring communications and recommendations based on client profiles. This capability has proven valuable, especially in GDPR-compliant regions, where consent-based marketing is mandatory. Bahadir’s team observed that AI-driven personalised emails significantly boost client engagement and conversion rates. “Personalisation isn’t just a trend—it’s a critical element of client experience. AI enables us to scale this approach effectively,” Bahadir notes.
In compliance-heavy domains like KYC and AML, AI can also reduce the human workload by identifying suspicious patterns and flagging potential issues. This approach not only improves efficiency but also allows compliance officers to focus on high-risk cases. “AI’s ability to tackle labour-intensive tasks makes it invaluable in compliance. It doesn’t replace professionals but enhances their work, allowing them to focus on what matters most,” Bahadir says.
Despite AI’s rapid evolution, Bahadir believes that human expertise remains indispensable in financial services. “There’s a misconception that AI can replace skilled professionals, but this isn’t the case,” he explains. AI may handle routine functions, but critical tasks such as risk assessment, fraud detection, and client advisories still require human judgement.
In lending, for instance, AI can streamline data processing, but human professionals must validate and interpret these results to ensure accuracy. Bahadir elaborates, “Checking what AI does is not easier than doing it yourself. It requires critical thinking, technical skills, and a thorough understanding of both the technology and the industry.”
As AI takes on more operational tasks, Bahadir emphasises the need for upskilling. Financial professionals need to develop competencies that blend technical expertise with industry-specific knowledge. Leaders in the financial sector are investing in training programmes to prepare their teams for this new environment, where AI enhances but does not replace human roles.
For Bahadir, the current AI landscape in finance is similar to the early days of personal computing: promising but still maturing. He anticipates a breakthrough—an “Apple moment”—where a defining AI application reshapes the industry. Until that moment arrives, he advises leaders to focus on incremental, high-impact applications rather than speculative projects.
“AI is a powerful tool, but its full potential hasn’t yet been realised in finance,” Bahadir cautions, warning against the temptation to launch broad, untested AI initiatives and recommending instead a focus on areas where AI can deliver measurable value today. Risk management, customer journeys, and operational efficiency are just a few domains where AI is already proving its worth.
Bahadir is sceptical of industry hype around AI’s transformative potential, noting that a recent Gartner report predicts that at least 30% of AI projects will be abandoned by the end of 2025. “Many leaders want to implement AI without a clear strategy, but AI’s value is in solving specific problems, not as a catch-all solution,” he states.
Bahadir’s insights distil the essence of AI strategy in finance into three guiding principles: strategic foresight, caution, and a commitment to transparency. He argues that AI’s potential to reshape financial services is significant, but only if it is deployed thoughtfully. Leaders should avoid the pressure to embrace AI for AI’s sake and instead focus on projects that align with their organisation’s long-term goals.
The future of AI in finance, according to Bahadir, lies in a balanced, client-focused approach. “For AI to truly benefit finance, it must be deployed responsibly, with a clear focus on client needs and ethical standards,” he concludes. His advice to finance leaders is clear: treat AI as a tool, not a revolution. By doing so, institutions can harness AI’s strengths, mitigate its risks, and maintain the trust of their clients.