Platform Computing, the leader in cluster, grid and cloud management software, today announced that the company is making it easy for high performance computing (HPC) customers to take immediate advantage of the performance provided by graphics processing units (GPUs) by extending its products to provide superior management of GPU clusters and GPU-aware applications. Platform's flagship workload scheduler products, Platform HPC, Platform LSF and Platform Symphony, now incorporate GPU-aware scheduling features for more efficient workload management. In addition, Platform is enabling administrators to better manage and plan GPU cluster capacity by providing access to monitoring, reporting and analysis tools. This includes new monitoring functions within the Platform HPC and Platform Symphony web-based interfaces, as well as in Platform RTM for large Platform LSF data centres.
"GPUs power the world's fastest supercomputer today and are powering thousands of other clusters across hundreds of industry segments. The combination of Tesla GPUs and Platform's HPC products offers a powerful solution for building the clusters that enterprises and academic institutions need today to tackle their challenging problems," said Sanford Russell, general manager, GPU computing software, NVIDIA.
"Platform has a long history of supporting special-purpose HPC architectures. By providing enhanced support for the latest generation of Tesla GPUs, Platform is firmly establishing itself as a provider of premier management products for HPC data centres, offering the most complete solutions for managing GPU-based application services. By enhancing Platform HPC, Platform LSF and Platform Symphony to support GPU clusters out of the box, we're making cluster management and monitoring simpler, easier to use and more efficient," said Ken Hertzler, vice president, product management, Platform Computing.
GPU computing involves harnessing the advanced capabilities of graphics processing units to run the parallel portions of applications many times faster than is possible on standard CPUs. GPU acceleration has gained traction in HPC data centres because computational problems in fields such as biology, physics, seismic processing, finance and other disciplines run much faster on GPUs, making a more cost-efficient solution available for performing these complex calculations.
Platform provides superior cluster and workload management on resources using GPUs. Platform's users can utilise Platform HPC management software to deploy CUDA software to compute nodes, as well as to intelligently allocate jobs only to resources that have GPUs. Customers gain improved productivity and reduced cluster costs, enabling them to do more with less.
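The idea of allocating jobs only to resources that have GPUs can be sketched conceptually as follows. This is not Platform's actual scheduler logic; the node and job attributes (`gpus_free`, `slots_free`, etc.) are hypothetical names chosen for illustration.

```python
# Conceptual sketch of GPU-aware placement: jobs that request GPUs are
# dispatched only to nodes reporting free GPUs; CPU-only jobs may land
# anywhere with a free slot. All names and data are illustrative.

def schedule(jobs, nodes):
    """Assign each job to the first node satisfying its GPU requirement."""
    placements = {}
    for job in jobs:
        needs_gpu = job.get("gpus", 0) > 0
        for node in nodes:
            if node["slots_free"] > 0 and (
                not needs_gpu or node["gpus_free"] >= job["gpus"]
            ):
                node["slots_free"] -= 1
                if needs_gpu:
                    node["gpus_free"] -= job["gpus"]
                placements[job["name"]] = node["name"]
                break
    return placements

nodes = [
    {"name": "cpu-node01", "slots_free": 8, "gpus_free": 0},
    {"name": "gpu-node01", "slots_free": 8, "gpus_free": 2},
]
jobs = [
    {"name": "cuda-sim", "gpus": 1},   # GPU job: must land on gpu-node01
    {"name": "post-proc", "gpus": 0},  # CPU job: first node with a free slot
]
print(schedule(jobs, nodes))
# → {'cuda-sim': 'gpu-node01', 'post-proc': 'cpu-node01'}
```

The key design point is that GPU availability is just another schedulable resource: a job's requirement is matched against per-node counters before dispatch, so GPU-less nodes never receive GPU work.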
Platform provides rich support by exposing GPU-specific scheduling metrics for NVIDIA Tesla GPUs in Platform HPC, Platform LSF and Platform Symphony. In addition, Platform includes administrator monitoring support covering:
• GPU-related management metrics such as the number of available GPUs, GPU temperature, GPU operating mode and ECC error counts
• For NVIDIA Tesla 20-series GPUs, in addition to GPU-level information, Platform also supports chassis-level metrics such as fan speeds, PSU voltage and electrical current, and LED states
Utilising these metrics, administrators can manage GPU devices in a manner similar to traditional computational devices. By monitoring and tracking these measures, Platform RTM and Platform Analytics provide the industry's strongest end-to-end solution for managing GPU clusters and grids.
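A monitoring tool consuming metrics like those above might be sketched as below. This is an illustrative sketch only, not Platform RTM's implementation: the sample text mimics CSV output of the kind a GPU query tool (e.g. `nvidia-smi` in CSV mode) can produce, and the column layout, values and temperature threshold are hypothetical.

```python
import csv
import io

# Hypothetical per-GPU health report: index, model, temperature (C),
# ECC error count. Values are invented for illustration.
SAMPLE = """\
0, Tesla M2050, 61, 0
1, Tesla M2050, 88, 3
"""

def parse_gpu_report(text, temp_limit=80):
    """Parse a per-GPU metrics report and flag any GPU that exceeds the
    temperature limit or reports ECC errors."""
    gpus, alerts = [], []
    for idx, name, temp, ecc in csv.reader(io.StringIO(text)):
        rec = {
            "index": int(idx),
            "name": name.strip(),
            "temp_c": int(temp),
            "ecc_errors": int(ecc),
        }
        gpus.append(rec)
        if rec["temp_c"] > temp_limit or rec["ecc_errors"] > 0:
            alerts.append(rec["index"])
    return gpus, alerts

gpus, alerts = parse_gpu_report(SAMPLE)
print(len(gpus), alerts)  # → 2 [1]  (two GPUs found, GPU 1 flagged)
```

Tracking such records over time is what enables the capacity planning and alerting described above: an administrator can treat a GPU crossing a temperature or ECC threshold the same way as any other failing compute resource.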
GPU support will be included as a standard feature in Platform HPC, Platform LSF and Platform Symphony. Platform products currently support NVIDIA Tesla GPUs, with plans to support future GPU accelerators as they become available.