Urika-CS AI and Analytics Technology
Bringing AI to the Cray CS series
The Urika®-CS AI and Analytics software suite is a set of powerful data science tools and frameworks integrated and supported on Cray® CS-Storm™ GPU-accelerated systems and CS500™ cluster supercomputing systems. Selected to address the end-to-end data science workflows associated with AI, the Urika-CS suite allows organizations using machine learning and deep learning AI approaches to focus on data and models.
The Right Tool for the Job: Open-Source Analytics at Supercomputer Scale
The Urika-CS suite includes the latest open-source technologies for AI workflows. Integrated with standard HPC workload managers, the Urika-CS suite brings the power of distributed AI and analytics to Cray CS series CPU and GPU systems.
The Cray Distributed Training Framework
The Urika-CS AI and Analytics suite includes the Cray Distributed Training Framework, a collection of libraries designed to simplify and speed up the distributed training of large and complex neural network models. Fully supported by Cray, the Distributed Training Framework includes:
- The Cray PE ML plugin, leveraging supercomputer-class MPI to provide superior distributed training scalability and performance on CPU-based cluster nodes. It also eliminates time-consuming administrative tasks: for example, it automatically defines the nodes to use, simplifying the configuration and infrastructure burden that distributed training places on the data scientist.
- The open-source Horovod distributed training library for enhanced distributed training and scale on dense-GPU nodes.
- The most popular frameworks for machine learning (TensorFlow, PyTorch, Keras)
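At the heart of the distributed training that both the Cray PE ML plugin and Horovod provide is an allreduce step: each worker computes gradients on its own shard of the data, then all workers average those gradients so every node applies the same update. A minimal pure-Python sketch of that averaging (illustrative only; it stands in for the MPI- or NCCL-based allreduce the actual libraries use):

```python
def allreduce_average(worker_grads):
    """Average per-worker gradient vectors, as a data-parallel
    allreduce does (illustrative stand-in for MPI_Allreduce)."""
    n_workers = len(worker_grads)
    n_params = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n_workers
            for i in range(n_params)]

# Each simulated worker computed gradients on its own data shard.
grads = [
    [0.2, -0.4, 1.0],   # worker 0
    [0.4, -0.2, 0.6],   # worker 1
    [0.6,  0.0, 0.2],   # worker 2
]
avg = allreduce_average(grads)
# After the allreduce, every worker applies the same averaged update,
# keeping the model replicas in sync.
```

In practice the frameworks above wrap this step behind an optimizer hook (for example, Horovod's `DistributedOptimizer`), so the training script stays close to its single-node form.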
Easy Deployment Using Containers
The Urika-CS AI suite leverages Singularity containers to deliver a fully tested, integrated, and supported AI environment. Cray has removed the complexity of downloading, integrating, configuring, and running AI frameworks and toolkits.
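Running a containerized framework with Singularity typically takes a single command. A hypothetical sketch (the image name `urika-cs.sif` and the script `train.py` are placeholders, not actual Urika-CS file names):

```shell
# Run a training script inside the container image;
# --nv exposes the host's NVIDIA GPU driver to the container.
singularity exec --nv urika-cs.sif python train.py
```

Because the frameworks live inside the container image, the same command works unchanged across cluster nodes, with no per-node framework installation.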