Machine Learning

Cray Solutions for Machine Learning and Deep Learning

Accelerate Your Machine Learning Projects


Computers are still not as sophisticated as the human brain, but there’s been great progress in artificial intelligence (AI) over the last 50 years. In fact, computers can do many things better than humans. Machine learning and the related field of deep learning are the most pragmatic approaches to AI — and both offer striking benefits.

Even as machine learning datasets continue to grow, deep learning requires orders of magnitude more data to train models, which in turn places heavy demands on compute systems and adds significant data movement overhead.

Cray’s unique history in supercomputing and analytics has given us front-line experience in pushing the limits of CPU and GPU integration, network scale, tuning for analytics, and optimizing for both model and data parallelization. Particularly important to machine learning is our holistic approach to parallelism and performance, which includes extremely scalable compute, storage and analytics.
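Data parallelism, mentioned above, splits each training batch across workers, which compute gradients locally and then average them before a single shared model update; that averaging step is exactly the communication a fast interconnect accelerates. A minimal pure-Python sketch of the idea (all names hypothetical, illustrative only, not Cray code):

```python
# Illustrative sketch of data-parallel training: each "worker" computes
# gradients on its shard of a batch, and the gradients are averaged
# before one shared model update.

def gradient(weights, example):
    """Gradient of squared error for a 1-D linear model y = w * x."""
    x, y = example
    return 2 * (weights * x - y) * x

def data_parallel_step(weights, batch, n_workers, lr=0.01):
    # Shard the batch across workers (round-robin).
    shards = [batch[i::n_workers] for i in range(n_workers)]
    # Each worker computes the mean gradient on its own shard.
    worker_grads = [
        sum(gradient(weights, ex) for ex in shard) / len(shard)
        for shard in shards if shard
    ]
    # Average the workers' gradients (the communication step),
    # then apply a single update to the shared model.
    avg_grad = sum(worker_grads) / len(worker_grads)
    return weights - lr * avg_grad

# Toy usage: fit y = 3x from a small batch using 4 workers.
batch = [(x, 3.0 * x) for x in range(1, 9)]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, batch, n_workers=4)
print(round(w, 2))  # prints 3.0
```

Model parallelism, by contrast, splits the model itself across workers; both strategies stress the interconnect rather than any single node.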

Cray systems, software and toolkits help organizations accelerate their machine learning and deep learning projects.

Featured Resources

Forrester TechRadar: Artificial Intelligence Technologies, Q1 2017

Artificial intelligence may be an idea that’s been around for years, but recent advances in big data, computing and the cloud are enabling a new world of possibilities.

Cray Systems Power Deep Learning in Supercomputing at Scale

With validated deep learning toolkits and the most scalable supercomputing systems in the industry, Cray customers can run deep learning workloads at their fullest potential.

Machine Learning: Develop Predictive Models

Cray® Urika®-GX platform

As machine learning becomes a must-have for business models and forecasts, it’s important to consider frameworks that can be integrated into your big data analytics environments and workflows. The Cray Urika-GX platform is a pre-integrated hardware-software platform that includes Apache Spark™ and its machine learning library, MLlib, designed for simplicity, scalability and easy integration with other tools.

The Cray Urika-GX platform is designed to accelerate Spark machine learning tasks in a very accessible format, with large in-memory compute capacity (up to 22 TB of DRAM) and dense cores in a standard enterprise rack.

Also included in the Urika-GX system are Hadoop® and the Cray Graph Engine so you can run all your big data analytics workflows on one system and avoid the common overhead of data movement. We also leverage open standards like OpenStack, Mesos and Docker so you can be ready for data in days and then customize for future needs.

Deep Learning: Expand Your Insight

Cray® CS-Storm™ system

The Cray CS-Storm accelerated GPU system is ideal for deep learning problems that require precision and performance in an accessible format. Cray has now integrated NVIDIA’s most powerful deep learning and HPC GPUs (NVIDIA® Tesla® M40 and P100) on a dense platform that provides eight GPUs within a single server for up to 250 GPU teraflops in a single rack.

In addition to supporting other NVIDIA GPU accelerators, the CS-Storm system offers the Tesla M40 deep learning training accelerator, which provides up to 7 teraflops (TF) of single-precision performance with 3,072 cores and 24 GB of GDDR5 memory. It is optimized for emerging applications such as machine learning and deep learning.

The Tesla P100 GPU accelerator for PCIe — powered by the NVIDIA Pascal™ architecture — offers 3,584 cores and up to 16 GB of HBM2 memory to deliver up to 9.4 TF single precision or 4.7 TF double precision performance with GPU boost technology. The Tesla P100 is well suited for the most demanding research and scientific applications.
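The rack-level figure quoted above follows from simple multiplication. A small illustrative calculation using the per-GPU peak numbers from the text; the server count per rack here is a hypothetical assumption, not a Cray specification:

```python
# Back-of-the-envelope check of rack-level GPU teraflops.
# Per-GPU peaks are taken from the text above; the rack density
# (servers_per_rack) is a hypothetical assumption for illustration.

TF_P100_SP = 9.4        # Tesla P100, single precision (per the text)
TF_M40_SP = 7.0         # Tesla M40, single precision (per the text)
GPUS_PER_SERVER = 8     # CS-Storm GPU density (per the text)

def rack_teraflops(servers_per_rack, tf_per_gpu):
    """Aggregate peak single-precision teraflops for one rack."""
    return servers_per_rack * GPUS_PER_SERVER * tf_per_gpu

# Hypothetical example: four 8-GPU servers in one rack.
print(rack_teraflops(4, TF_P100_SP))  # prints 300.8
print(rack_teraflops(4, TF_M40_SP))   # prints 224.0
```

Note these are peak figures; sustained training throughput depends on the model, precision and data pipeline.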

Deep learning toolkits

So customers can immediately take advantage of their high-performance CS-Storm systems without altering deep learning software models, Cray offers several validated deep learning toolkits using NVIDIA Docker images for more robust performance:

  • Microsoft Cognitive Toolkit (previously CNTK)
  • TensorFlow™
  • NVIDIA DIGITS
  • Caffe
  • Torch
  • MXNet

Cray® XC50™ supercomputer

For deep neural network training, the XC50 system is the world’s most scalable deep learning supercomputer. The XC50 supercomputer features the Tesla P100 GPU accelerator for PCIe, which enables lightning-fast nodes to deliver the highest absolute performance for high performance computing and deep learning workloads. And optimizations such as partitioned global address space (PGAS) and Cray’s Aries™ supercomputing interconnect, with its all-to-all communications, support more parallel analytics at scale with higher throughput.

To simplify the building and deploying of deep learning environments in supercomputing, Cray provides XC customers with directive files for common deep learning tools, such as TensorFlow and Microsoft Cognitive Toolkit (previously CNTK).
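At scale, distributed training over such an interconnect typically combines gradients with a collective operation such as allreduce, after which every node holds the element-wise sum of all nodes' values. A minimal ring-allreduce sketch in plain Python (illustrative only; real systems use MPI or similar communication libraries over the interconnect, not code like this):

```python
# Ring allreduce sketch: n workers each hold a vector; after a
# reduce-scatter phase and an allgather phase, every worker holds the
# element-wise sum. Real implementations overlap communication with
# compute; this sequential version only shows the data movement pattern.

def ring_allreduce(vectors):
    n = len(vectors)                     # number of workers
    size = len(vectors[0])
    assert size % n == 0, "vector length must divide evenly in this sketch"
    c = size // n                        # elements per chunk
    chunks = [list(v) for v in vectors]  # each worker's local buffer

    def span(k):                         # index range covered by chunk k
        k %= n
        return range(k * c, (k + 1) * c)

    # Phase 1: reduce-scatter. In step s, worker i passes its running
    # sum of chunk (i - s) to worker i + 1, which accumulates it.
    for s in range(n - 1):
        for i in range(n):
            for j in span(i - s):
                chunks[(i + 1) % n][j] += chunks[i][j]

    # Phase 2: allgather. In step s, worker i forwards the completed
    # chunk (i + 1 - s) to worker i + 1, which overwrites its copy.
    for s in range(n - 1):
        for i in range(n):
            for j in span(i + 1 - s):
                chunks[(i + 1) % n][j] = chunks[i][j]

    return chunks                        # every worker: element-wise sum

# Toy usage: 4 workers, each holding 8 gradient values.
grads = [[w] * 8 for w in range(4)]      # worker w holds the all-w vector
out = ring_allreduce(grads)
print(out[0])                            # prints [6, 6, 6, 6, 6, 6, 6, 6]
```

Each worker sends and receives only one chunk per step, so the pattern keeps link utilization balanced, which is why ring-style collectives map well onto high-bandwidth interconnects.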

News & Articles

Accelerating Cancer Research with Deep Learning

A team from Oak Ridge National Laboratory is using the Cray "Titan" supercomputer to automate text extraction from cancer reports.

Machine Learning at Scale for Full Waveform Inversion at PGS

Cray's Geert Wenes blogs about Norwegian seismic exploration company PGS, which is using the Cray "Abel" supercomputer to process one of the largest offshore seismic surveys ever.

ALCF selects projects for new data science program

The Argonne Leadership Computing Facility (ALCF) has selected four projects to kick off the ALCF Data Science Program (ADSP). The new initiative, targeted at big data problems that require the scale and performance of leadership-class supercomputers, will enable new science and novel usage modalities.