London Tech Week dissects AI’s cybersecurity risks

The escalating cybersecurity battle with AI at its core was a dominant theme at London Tech Week. For fintech and financial leaders, the message is clear: mastering AI’s dual nature, coupled with robust cybersecurity fundamentals, is key to securing our digital future.

  • Nikita Alexander
  • June 12, 2025
  • 5 minutes

As Artificial Intelligence rapidly reshapes our digital world, a critical question emerges: Who will control the future of cybersecurity – the good guys or the bad guys?

This was the central debate at a compelling London Tech Week panel, where experts grappled with the dual nature of AI. From leveraging its power for real-time threat detection to confronting sophisticated AI-driven attacks and the inherent risks of AI-generated code, the discussion underscored the urgent need for strategic implementation. We dive into the insights from Darktrace, the National Cyber Security Centre, the UK government, and webAi on how tech leaders can harness AI as a powerful ally, not a rogue disruptor, in securing the financial sector.

The panel, moderated by Joe Tidy, Cyber Correspondent at the BBC, brought together a formidable line-up: Tim Bazalgette, Chief AI Officer at Darktrace; Richard Horne, CEO of the National Cyber Security Centre; Feryal Clark, Parliamentary Under-Secretary of State for AI and Digital Government; and David Stout, Founder & CEO of webAi.

AI’s dual role

Richard Horne of the NCSC wasted no time in setting the stage: “The criminals are just way out, and they’re learning just like we are… They’re generally taking existing techniques and making them more efficient and more effective.” This resonates particularly in the financial sector, where attack efficacy directly correlates with financial loss and reputational damage. While AI isn’t creating entirely new attack vectors yet, its ability to supercharge existing ones is undeniable.

Feryal Clark highlighted the escalating sophistication of attacks, noting that phishing attempts are “a lot more sophisticated, but more of them are generated.” This isn’t just about volume; it’s about precision. AI can craft highly convincing spear-phishing emails targeting specific individuals, overcoming language barriers and making social engineering far more potent. For financial institutions, this means a heightened risk to employees who are often the first line of defense against cyber fraud.

The hidden risks of AI-generated code

Perhaps the most troubling long-term concern raised by David Stout of webAi revolved around the unacknowledged vulnerabilities being baked into software by AI. “You hear CEOs of publicly traded companies in the U.S. saying 30% of our code is generated by AI. What does that mean?” Stout questioned. His concern lies in the potential for AI models, trained on universal (and potentially flawed) data corpora, to inadvertently embed security vulnerabilities into new applications.

If numerous companies rely on the same AI models for code generation, a single identified vulnerability could lead to widespread, systemic compromises across a sector – a nightmare scenario for interconnected financial systems. “That’s a moment where we’re gonna face real market impact, because they’re going to have companies and enterprises that power real services, caught essentially in this trap of… reducing costs,” Stout warned. The drive for automation, if not coupled with rigorous security oversight, could lead to unforeseen consequences.

The crucial role of human expertise and fundamental security

Despite the advanced nature of AI threats, a consistent theme throughout the panel was the enduring importance of cybersecurity fundamentals. Richard Horne passionately reiterated, “It’s because our businesses are not doing the basics of cybersecurity. And that’s the biggest problem… it’s not as if it’s completely changed.” He stressed that 61% of attacks still exploit basic vulnerabilities. This is particularly salient for fintechs and banks handling sensitive customer data and large financial transactions – neglecting the basics can be catastrophic.

Tim Bazalgette from Darktrace emphasized that while AI tools can assist, they aren’t a silver bullet. “We’re sceptical of just, you know, naively using whatever you can.” He advocated for AI systems that are “fully interpretable” and can “explain what they’re doing, why they’re doing it.” This ensures that human experts remain in control, augmenting their capabilities rather than ceding critical decision-making to opaque algorithms. The NCSC’s AI cybersecurity code of practice, now a global standard, further guides businesses in safely introducing AI into their systems.

Preparing for the future

Looking ahead, the panelists agreed that preparedness and proactive defense are paramount. Clark highlighted government initiatives, such as the proposed cyber security and resilience legislation, designed to be dynamic and rapidly adaptable to evolving threats.

The “tennis racket” analogy offered by Richard Horne was particularly insightful: “We’ve got a new racket, and our opponents have a new racket too… we have to adopt them and use AI in our defenses, but we have to learn as we go.” While attackers only need one success, defenders must be right every time.

David Stout underscored the opportunity for companies to partner with specialized AI cybersecurity firms like Darktrace, leveraging their domain expertise to tackle specific problems rather than relying on general-purpose AI models that “pretend to know things they don’t know.” This targeted approach ensures that AI is a tool to enhance human capabilities, not replace expert judgment.

The consensus was clear: AI is not a panacea, nor is it a complete replacement for robust cybersecurity hygiene. For financial institutions, the future of cybersecurity will be controlled by those who judiciously integrate AI into their defenses, underpinned by unwavering commitment to fundamental security practices, continuous learning, and strategic collaboration. The race is on, and the good guys must be ready not just to react, but to anticipate and innovate.