Cray® CS-Storm™ Accelerated Cluster Supercomputers
The purpose-built Cray® CS-Storm™ cluster supercomputer adds to the industry’s broadest range of integrated systems, ready to tackle artificial intelligence (AI) problems at production scale. As machine learning and deep learning become integral to production workloads, Cray’s CS-Storm GPU supercomputer is your fastest path to value and the foundation for future innovation through new discoveries.
Designed for Speed
Powered by NVIDIA, Intel and Nallatech accelerators and processors, each CS-Storm server node and rack system is integrated by Cray to deliver maximum performance across the broadest range of machine learning and deep learning environments.
Designed for Scale
As machine learning and deep learning become core requirements for business and scientific discovery, the need for consistent and timely processing is driving the use of supercomputer-scale systems, which are designed to handle the largest data sets and perform the most complex calculations.
Designed for Simplicity
Designing, implementing and using an AI system doesn’t have to be difficult. You can rely on Cray, the expert in production supercomputing, to simplify your environment. Our systems are built on open, industry-standard technologies. We deliver a complete solution for AI including high-performance storage, system management and developer tools.
Cray CS-Storm 500GT System
The Cray CS-Storm 500GT configuration scales up to ten NVIDIA® Tesla® P40 or P100 (Pascal architecture) GPUs or Nallatech FPGAs by leveraging the flexibility and economics of PCI Express as well as the power of today’s Intel® Xeon® Scalable processor family.
Cray CS-Storm 500NX System
The Cray CS-Storm 500NX configuration scales up to eight NVIDIA Tesla P100 SXM2 (Pascal architecture) GPUs, using NVIDIA® NVLink™ to reduce latency and increase bandwidth for GPU-to-GPU communication, enabling larger models and faster results for AI and deep learning neural network training.
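On an NVLink-connected system such as the 500NX, applications can discover which GPU pairs support direct peer-to-peer transfers through the standard CUDA runtime API. A minimal sketch, assuming a CUDA toolkit and NVIDIA driver are installed (the reported topology depends on the actual hardware):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Query every GPU pair for peer-to-peer access capability.
// On SXM2 systems these direct paths run over NVLink; on PCIe
// systems they run over the PCIe fabric where supported.
int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int a = 0; a < n; ++a) {
        for (int b = 0; b < n; ++b) {
            if (a == b) continue;
            int ok = 0;
            cudaDeviceCanAccessPeer(&ok, a, b);
            printf("GPU %d -> GPU %d : P2P %s\n", a, b, ok ? "yes" : "no");
        }
    }
    return 0;
}
```

Pairs that report peer access can then be enabled with `cudaDeviceEnablePeerAccess`, letting `cudaMemcpyPeer` move data between GPUs without staging through host memory.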
|  | CS-Storm 500GT | CS-Storm 500NX |
| --- | --- | --- |
| Processors and accelerators | Up to 8 450W or 10 400W accelerators; NVIDIA® Tesla® P40 or P100 PCIe GPU accelerators or Nallatech FPGA accelerators; two Intel® Xeon® Scalable processors | Up to 8 NVIDIA Tesla P100 SXM2 GPU accelerators; two Intel Xeon E5-2600 v4 “Broadwell” processors |
| Interconnect | PCIe with balanced configuration; InfiniBand® or Intel® Omni-Path Architecture | NVIDIA® NVLink™ for 8-way GPU-to-GPU communication; InfiniBand or Intel Omni-Path Architecture |
| Chassis and storage | 19" 3U or 4U rackmount chassis; 16 hot-swap 2.5" drives (up to 8 NVMe) | 19" 4U rackmount chassis; 12 2.5" drives (up to 4 NVMe) |