The Cray Shasta System

You may have seen Cray’s recent announcement regarding our next-generation supercomputer (code-named “Shasta”), which we anticipate delivering to the Department of Energy's Argonne National Laboratory (ANL) in the future. We don't normally talk about a new system architecture this far in advance, but since it's now public, I thought I'd provide a brief overview of the Shasta system. Shasta will be the successor to both our Cray® XC™ line of supercomputers (previously code-named “Cascade”) and our Cray® CS™ line of standards-based cluster systems. As such, Shasta is the most flexible system we've ever designed and the full embodiment of our adaptive supercomputing vision. We've been out talking with customers from HPC ... [ Read More ]

Cray CEO Reflects on Bill Blake’s Legacy

I’d like to take a moment to celebrate our friend and a truly great supercomputing visionary, William “Bill” Blake. Bill passed away on March 31, and his passing leaves a huge hole in the HPC community. We at Cray know we’re fortunate to have been part of Bill’s long history of accomplishments. Before we had the honor of working with him, Bill had already spent 30 years in the HPC industry leading the way on many transformative technologies in computing, data warehousing and analytics, hardware and software. He held executive roles ranging from head of engineering to CEO at great companies such as DEC, Compaq, Netezza, Interactive Supercomputing and Microsoft. Bill was a personal friend and mentor to me for quite a while. I have ... [ Read More ]

High Performance Computing – The Last Rewrite

Cray’s John Levesque has been optimizing scientific applications for high-performance computers for 47 years and is currently a member of Cray’s CTO office and director of Cray’s Supercomputing Center of Excellence for the Trinity system based in Los Alamos, N.M. He has authored and co-authored several books, including “HPC: Programming and Applications.” His next book, titled “Programming for Hybrid Many/Multi-core MPP Systems,” will present the fundamentals of optimizing for these target systems, specifically parallelization and vectorization, and the use of Cray’s programming tools to analyze and optimize numerous open-source applications. While the book will not be released until early 2016, John will be writing a series of blog posts here ... [ Read More ]

Realizing GPU Computation at Scale

A “perfect storm” is brewing: data volumes are increasing exponentially, mathematical models are growing ever more sophisticated and computationally intensive, and power and cooling constraints limit the systems that can be deployed. These considerations are driving the use of GPUs as accelerators for computationally intense workloads. Accelerators are commonly deployed in a 1:1 configuration with conventional CPUs. Applications typically run across the two types of processors, with 95 percent of the application running on the conventional CPU and the computationally intense 5 percent running on the accelerator. In many applications, however, even this is not sufficient. Many applications in financial services, the oil and gas sector, signal ... [ Read More ]
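To make that offload model concrete, here is a minimal CUDA sketch of the CPU/GPU split. It is purely illustrative and not code from any Cray application: the kernel, array names and sizes are hypothetical stand-ins for the computationally intense “5 percent,” while the setup and everything after the kernel launch stand in for the CPU-side “95 percent.”

```cuda
// Minimal sketch of the CPU/GPU offload model described above.
// All names and sizes (saxpy, n, a, x, y) are hypothetical.
#include <cstdio>
#include <cuda_runtime.h>

// The compute-intense hotspot (the "5 percent") runs on the accelerator.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;

    // Unified (managed) memory keeps the host-side code simple.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));

    // CPU-side setup: part of the "95 percent" that stays on the CPU.
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Offload only the hotspot to the GPU.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    // Back on the CPU for the remainder of the application.
    printf("y[0] = %f\n", y[0]);

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

In a real application the hotspot would be identified by profiling, and the 1:1 configuration mentioned above would pair each conventional CPU with one such accelerator.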

Supercomputing Matters: SC14 in Retrospect

SC14, the premier event in the HPC industry, has wound to a close in New Orleans. It’s interesting to look back and contemplate the validity of this year’s theme, “HPC Matters,” the success of the conference and the vitality of the industry. Our industry has seen the ebb and flow of the relevance of supercomputing, and the fortunes of Cray have often paralleled those of this event (or vice versa). This year was no exception, and SC14 was a resounding success for Cray and for the industry. With more than 10,000 attendees and 356 exhibitors, it was a full show and the work never stopped. As I walked the show floor — logging about 5 miles a day — I was struck by the number of Cray customers. They were everywhere, either with booths of their own ... [ Read More ]