For Financial Services Firms, High Performance Computing Capabilities are a Matter of Competitive Advantage


  • Jeffrey Davis
  • November 7, 2014
  • 6 minutes

High performance computing (HPC), sometimes referred to as “supercomputing”, is the use of aggregated computing power (e.g., computing servers and clusters) to solve problems that are computationally intensive or data intensive. And it’s on the rise: According to research firm MarketsandMarkets, the global HPC market was worth £15 billion in 2013 and is expected to grow to £20 billion by 2018.

The HPC market is growing fast because more companies need it (to leverage Big Data, for example), and they recognise that HPC capabilities can mean competitive advantage. As the saying goes: To out-compute is to out-compete. In financial services, for example, investment banks use high-frequency trading applications to execute trades milliseconds before their competitors – which can mean millions in additional returns.

In one worldwide IDC study, 97 percent of companies that had adopted high performance computing said they could no longer compete or survive without it. Suzy Tichenor and Albert Reuther explain: “More-capable HPC resources often translate into faster time-to-market (in some cases more than 50 percent faster), reduced costs, and superior product quality.”

High performance computing capability is a lynchpin for competitiveness in financial services

The financial services industry was one of the first, and remains one of the heaviest, users of high performance computing. Beyond its use in high-frequency trading applications, HPC is used by financial services firms for risk modelling, fraud detection, bitcoin mining, derivatives trading, pricing, regulatory compliance, and Big Data applications like customer profiling – among other uses.

Yet financial services firms often find themselves in a tough spot: the technology to process a lot of data, fast, exists, but it’s not compatible with legacy software, hardware, and infrastructure. As research firm IDC tells the story, that was the challenge PayPal faced before the online payments company implemented a high performance computing data analysis program. The company, which at the time managed 13 million financial transactions per day, needed to:

  • Detect fraud in real time as millions of transactions were processed between disparate systems
  • Find suspicious patterns in related data sets
  • Create and deploy new fraud models into event flows quickly and with minimal effort

PayPal built a high performance computing environment and reports first-year savings of $710 million in fraud that they wouldn’t have been able to detect before.
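To make the real-time requirement a little more concrete, the sketch below applies a deliberately simplified, rule-based check to a stream of transaction events. The field names, thresholds, and rules are invented for illustration only; production systems such as PayPal’s rely on far richer statistical models and HPC-scale event processing, not a handful of hand-written rules.

    # Toy illustration only: the fields, thresholds, and rules below are
    # assumptions made up for this sketch, not anyone's production logic.
    from collections import defaultdict, deque
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=10)      # assumed velocity window
    MAX_TXNS_PER_WINDOW = 5             # assumed velocity threshold
    LARGE_AMOUNT = 5000.00              # assumed "large transaction" threshold

    recent = defaultdict(deque)         # account_id -> recent transaction times

    def score(txn):
        """Return the names of any rules this transaction triggers."""
        flags = []
        history = recent[txn["account_id"]]
        # Discard timestamps that have slid out of the window.
        while history and txn["ts"] - history[0] > WINDOW:
            history.popleft()
        history.append(txn["ts"])

        if len(history) > MAX_TXNS_PER_WINDOW:
            flags.append("velocity")            # too many transactions, too fast
        if txn["amount"] >= LARGE_AMOUNT:
            flags.append("large_amount")
        if txn["country"] != txn["card_country"]:
            flags.append("country_mismatch")
        return flags

    # A tiny example event flow (real systems handle millions of events per day).
    stream = [
        {"account_id": "a1", "ts": datetime(2014, 11, 7, 9, 0), "amount": 120.0,
         "country": "GB", "card_country": "GB"},
        {"account_id": "a1", "ts": datetime(2014, 11, 7, 9, 1), "amount": 7200.0,
         "country": "RU", "card_country": "GB"},
    ]
    for txn in stream:
        hits = score(txn)
        if hits:
            print(txn["account_id"], "flagged:", hits)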

Beyond financial services, high performance computing is also leveraged for scientific and medical research applications such as computational fluid dynamics and genomic sequencing. The ways in which research applications and financial services applications rely on HPC are in many cases quite similar. As such, we expect to see a lot of crossover in both technology (software, hardware, and infrastructure) and people resources between HPC for research and HPC for financial services in the coming years.

Yet most legacy data centre environments can’t adequately support high performance computing

HPC applications can be run in high density data centre environments, where individual server racks consume 25kW of power or more, or in low density environments where per-rack consumption is closer to 5-7kW. In other words, a company can achieve the same computing power with fewer, denser racks in a smaller data centre footprint, or with more, lower-density racks spread across a larger data centre space.
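As a rough sketch of that trade-off – using an assumed total compute requirement and an assumed per-rack floor area (including aisle space), neither of which comes from the article – the difference in rack count and footprint looks something like this:

    # Rough sketch of the density trade-off. The total IT load and the
    # per-rack floor area (including aisles) are assumptions for illustration.
    import math

    TOTAL_IT_LOAD_KW = 600          # assumed total compute requirement
    AREA_PER_RACK_SQ_M = 2.5        # assumed footprint per rack, incl. aisles

    def footprint(kw_per_rack):
        racks = math.ceil(TOTAL_IT_LOAD_KW / kw_per_rack)
        return racks, racks * AREA_PER_RACK_SQ_M

    for label, density in [("low density (6kW/rack)", 6),
                           ("high density (25kW/rack)", 25)]:
        racks, area = footprint(density)
        print(f"{label}: {racks} racks, ~{area:.0f} sq. metres")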

So the high density v. low density decision depends largely on real estate (though there are other benefits of a high density environment, including greater efficiencies in cooling, significant capital and operational cost savings, and increased sustainability). As one analyst explained to my colleagues and me, “Whether a firm needs to pull computing capacity into high density really depends on real estate cost.” Where real estate is in short supply – and expensive – then “to keep up with competitors’ increases in computing capacity, firms are likely to go denser.”

Yet increasing density in a legacy data centre can be extremely expensive, if feasible at all. Most legacy data centres are simply not set up to accommodate higher densities. Analyst firm Forrester explains, “[IT infrastructure and operations] pros have told Forrester on several occasions that converged infrastructure offerings are more power-intensive and require more cooling than their data centre is prepared to handle.”

However, a high density environment architected specifically for HPC can be a high-security, high-efficiency alternative to “tons of pizza-box servers.” In particular, a prefabricated modular (PFM) data centre can suit high densities. Analyst firm 451 Research explains: “Prefabrication has inherent design and scalability advantages in supporting high rack densities cost-effectively and in an energy-efficient fashion. Investments in advanced aerodynamic optimisations, granular scalability and dynamically regulated multimode cooling systems – all integrated into the design – help PFM data centres to gain an edge over traditional builds.”

Energy efficiency – and the resulting cost savings – is a significant advantage of a modular data centre built to suit high densities. In a yearlong, side-by-side comparison of a traditional raised-floor data centre and a modular data centre, the modular data centre was found to have a PUE (power usage effectiveness) of 1.41, significantly better than the 1.73 PUE of the traditional data centre. For a 1 MW facility, that PUE difference yielded a 19 percent reduction in energy costs.
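Since PUE is total facility power divided by IT power, the saving can be sanity-checked with back-of-the-envelope arithmetic. The electricity price below is an assumption for illustration only; the result lands close to the roughly 19 percent figure cited above:

    # PUE = total facility power / IT power, so for a fixed 1 MW IT load the
    # facility draw scales directly with PUE. The price per MWh is assumed.
    IT_LOAD_MW = 1.0
    PUE_TRADITIONAL = 1.73
    PUE_MODULAR = 1.41
    PRICE_PER_MWH = 100.0           # assumed £/MWh
    HOURS_PER_YEAR = 8760

    def annual_energy_cost(pue):
        return IT_LOAD_MW * pue * HOURS_PER_YEAR * PRICE_PER_MWH

    traditional = annual_energy_cost(PUE_TRADITIONAL)
    modular = annual_energy_cost(PUE_MODULAR)
    saving = (traditional - modular) / traditional
    print(f"traditional: £{traditional:,.0f}/yr, modular: £{modular:,.0f}/yr")
    print(f"energy cost reduction: {saving:.1%}")   # ~18.5%, close to the cited 19%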

Consider an example: a financial services firm needed to add more compute capacity in 30 days or less. Their complex computing needs were outstripping the capability of their current environment, and the firm risked being out-competed. The company, wanting to push rack density to 25-30kW so that each rack could deliver more compute, found that legacy data centre providers were: a) unable to support such high densities; and b) unable to meet demand in less than a month.

Turning to a modular data centre colocation environment, the financial services firm was able to increase rack density to its target and do it in less than 30 days. Among the benefits the firm has realised are:

  • Energy efficiency – 120 racks consume approximately 25-30kW per rack in a modular deployment with no additional rack cooling
  • Space savings – Where a traditional data centre might require 279 sq. metres or more, the modular data centre can support the firm’s HPC requirements in 43 sq. metres (see the quick arithmetic sketch after this list)
  • Security – The firm maintains privacy and control of their applications and devices inside a secure data centre
  • Competitive advantage – The company remains ahead of the competition with massive compute capability with room to grow
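A quick bit of arithmetic on the figures above puts the scale in perspective; the rack count, per-rack draw, and floor areas come from the case described, and the rest is simple multiplication:

    # Simple arithmetic on the case figures listed above.
    RACKS = 120
    KW_PER_RACK_LOW, KW_PER_RACK_HIGH = 25, 30
    MODULAR_SQ_M = 43
    TRADITIONAL_SQ_M = 279

    it_load_low = RACKS * KW_PER_RACK_LOW / 1000      # MW
    it_load_high = RACKS * KW_PER_RACK_HIGH / 1000    # MW
    space_saving = 1 - MODULAR_SQ_M / TRADITIONAL_SQ_M

    print(f"IT load: {it_load_low:.1f}-{it_load_high:.1f} MW across {RACKS} racks")
    print(f"footprint reduction vs. the traditional build: {space_saving:.0%}")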

Bottom line

In an environment in which high performance computing capabilities are a matter of competitive advantage for financial services firms, the capacity of the data centre to support such applications is no longer “just” an IT issue. It’s a competitiveness issue. Where space is constrained, or retrofitting the legacy data centre is cost-prohibitive, a prefabricated modular data centre built to suit high density server racks can be just the environment the firm needs to out-compute, and out-compete.

 

By Jeffrey Davis, Senior Vice President of Market Development, IO