Cray® CS-Storm™ Accelerated GPU Cluster System
Cray® CS-Storm™ cluster supercomputers tackle the most demanding HPC and artificial intelligence (AI) workloads. Designed for speed, architected for scale and integrated for production use, the Cray CS-Storm GPU supercomputer is your path to exploiting the full performance of the latest NVIDIA® Tesla® GPUs.
Designed for Speed
Powered by NVIDIA® Volta™ or Pascal™ architecture GPUs, each CS-Storm server node and rack system is integrated by Cray to deliver maximum accelerated computing performance across a broad range of HPC and AI applications.
Architected for Scale
While others focus on individual nodes or small clusters, Cray architects and delivers accelerated computing systems built for production-level HPC and AI applications that must scale beyond a single accelerator or node. Whether it’s deep learning training, signal processing, reservoir simulation, high-performance microscopy or medical image processing, the Cray CS-Storm system is architected with scaling in mind.
Integrated for Production Use
Designing, implementing and using a GPU cluster system doesn’t have to be difficult. You can rely on Cray, the expert in production supercomputing, to simplify your environment. Our accelerated computing systems are built using open, industry-standard technologies. We deliver a complete solution for HPC and AI including high-performance networking, storage, a comprehensive software environment for cluster management and developer productivity, and the latest machine and deep learning frameworks.
Cray CS-Storm 500GT System
The Cray CS-Storm 500GT configuration scales up to 10 NVIDIA Tesla V100 (Volta), P40 or P100 (Pascal) GPUs or Nallatech FPGAs by leveraging the flexibility and economics of PCI Express as well as the power of today’s Intel® Xeon® Scalable processor family.
Cray CS-Storm 500NX System
The Cray CS-Storm 500NX configuration scales up to eight NVIDIA Tesla V100 (Volta) or P100 (Pascal) GPUs using NVIDIA® NVLink™ to reduce latency and increase bandwidth for GPU-to-GPU communication, enabling larger models and faster results for AI and deep learning neural network training.