The DSS 8440 is a 2-socket, 4U server specifically designed to reduce time to insight for machine learning training by providing substantially increased horsepower. Analyze massive amounts of data, recognize patterns, and determine follow-on actions with up to 10 industry-leading NVIDIA® Tesla® V100 Tensor Core GPUs.
Its high-speed, switched PCIe fabric and extensive local storage (NVMe and SATA) help deliver rapid results. The NVIDIA V100 is the most advanced GPU ever built, and in the DSS 8440 it delivers competitive training performance at lower cost. With 25% more accelerators in a single chassis, the DSS 8440 provides 10% more Tensor FLOPS at the same rack density.
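The density claim above is straightforward arithmetic: 10 accelerators versus a typical 8-GPU competing chassis is 25% more cards. A minimal sketch of the aggregate-throughput math, assuming a hypothetical round figure of ~112 TFLOPS peak mixed-precision tensor performance per V100 PCIe card (check NVIDIA's datasheet for your exact SKU):

```python
# Back-of-envelope aggregate tensor throughput for a fully populated DSS 8440.
# TFLOPS_PER_GPU is an assumed round figure for a V100 PCIe card, not a
# guaranteed spec; substitute the value from the datasheet for your SKU.
TFLOPS_PER_GPU = 112

dss_8440_gpus = 10          # fully populated DSS 8440
competing_chassis_gpus = 8  # typical 8-GPU competing chassis

extra_accelerators = dss_8440_gpus / competing_chassis_gpus - 1
total_tflops = dss_8440_gpus * TFLOPS_PER_GPU

print(f"Accelerator advantage: {extra_accelerators:.0%}")   # 25%
print(f"Peak tensor throughput: ~{total_tflops} TFLOPS")    # ~1120 TFLOPS
```

Real-world training throughput scales with interconnect topology and model characteristics, not just peak FLOPS, so treat this as an upper bound.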
Furthermore, it provides higher training efficiency (performance/watt) than competing offerings when running the most common frameworks and popular convolutional neural network models.
Scaling up to 10 GPUs delivers more processing capacity than previous single-chassis offerings, boosting performance for compute-intensive applications such as simulation, modeling, and predictive analysis in scientific and engineering environments. It also enables exceptional multi-tenant, multi-workload capability for departmental and hosting environments where a single host's resources must be shared among different workloads.
The DSS 8440 can be populated with 4, 8, or 10 GPUs, so you can buy only the resources you need today, then easily scale accelerator capacity as your processing demands grow. Get faster results from today's immense data sources by leveraging the extensive low-latency local storage (SATA and NVMe – 32TB max.) and abundant throughput (9 x16 IO channels) available in the DSS 8440.
The DSS 8440 delivers leading training performance and efficiency (performance/watt) with the most common frameworks and popular convolutional neural network models.