“Titan” Supercomputer: More than an Engineering Marvel


Like Cray, the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL) is dedicated to solving the world’s toughest problems using supercomputers. Perhaps that’s why Cray and ORNL have a relationship dating back three decades to when the facility’s first Cray system, a Cray® X-MP™, came to East Tennessee in 1985. After four years of administering Cray systems for the U.S. Air Force in the early ‘80s, I came to Oak Ridge to help run the X-MP system. Since then, I’ve seen a long line of Cray supercomputers cycle through ORNL. Each new generation of systems has improved upon its predecessors in small — and oftentimes big — ways that helped advance ORNL’s broad science and energy mission. The most recent ... [ Read More ]

The Cray Shasta System


You may have seen Cray’s recent announcement regarding our next-generation supercomputer (code-named “Shasta”) that we anticipate delivering to the Department of Energy's Argonne National Laboratory (ANL) in the future. We don't normally talk about a new system architecture this far in advance, but since it was included in the announcement, I thought I'd provide a brief overview of the Shasta system. Shasta will be the successor to both our Cray® XC™ line of supercomputers (previously code-named “Cascade”) and our Cray® CS™ line of standards-based cluster systems. As such, Shasta is the most flexible system we've ever designed and the full embodiment of our adaptive supercomputing vision. We've been out talking with customers from HPC ... [ Read More ]

Cray CEO Reflects on Bill Blake’s Legacy


I’d like to take a moment to celebrate our friend and a truly great supercomputing visionary, William “Bill” Blake. Bill passed away on March 31, and his departure leaves a huge hole in the HPC community. We at Cray know we’re fortunate to have been part of Bill’s long history of accomplishments. Before we had the honor of working with him, Bill had already spent 30 years in the HPC industry leading the way on many transformative technologies in computing, data warehousing and analytics, hardware and software. He held executive roles ranging from head of engineering to CEO at great companies such as DEC, Compaq, Netezza, Interactive Supercomputing and Microsoft. Bill was a personal friend and mentor to me for many years. I have ... [ Read More ]

High Performance Computing – The Last Rewrite

Hybrid multicore

Cray’s John Levesque has been optimizing scientific applications for high performance computers for 47 years and is currently a member of Cray’s CTO office and director of Cray’s Supercomputing Center of Excellence for the Trinity system based in Los Alamos, N.M. He has authored and co-authored several books including “HPC: Programming and Applications.” His next book, titled “Programming for Hybrid Many/Multi-core MPP Systems,” will present the fundamentals of optimizing for the target systems, specifically parallelization, vectorization, and the use of Cray’s programming tools to analyze and optimize numerous open-source applications. While the book will not be released until early 2016, John will be writing a series of blog posts here ... [ Read More ]

Realizing GPU Computation at Scale

GTC 2015

A “perfect storm” is brewing: Data volumes are increasing exponentially, mathematical models are growing ever more sophisticated and computationally intensive, and power and cooling constraints are limiting the systems that can be deployed. These considerations are driving the use of GPUs as accelerators for computationally intense workloads. Accelerators are commonly deployed in a 1:1 configuration with conventional CPUs. Applications typically run across the two types of processors, with 95 percent of the application running on the conventional CPU and the computationally intense 5 percent running on the accelerator. However, in many applications this is still not sufficient. Many applications in financial services, the oil and gas sector, signal ... [ Read More ]