High Performance Computing – The Last Rewrite


Cray’s John Levesque has been optimizing scientific applications for high performance computers for 47 years and is currently a member of Cray’s CTO office and director of Cray’s Supercomputing Center of Excellence for the Trinity system based in Los Alamos, N.M. He has authored and co-authored several books, including “HPC: Programming and Applications.” His next book, titled “Programming for Hybrid Many/Multi-core MPP Systems,” will present the fundamentals of optimizing for the target systems, specifically parallelization and vectorization, along with the use of Cray’s programming tools to analyze and optimize numerous open-source applications. While the book will not be released until early 2016, John will be writing a series of blog posts here ... [ Read More ]
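For readers new to these topics, a minimal sketch of the kind of loop-level vectorization the book discusses is shown below. The kernel, array sizes and OpenMP SIMD directive are illustrative assumptions, not material from the book.

    /* Minimal sketch of loop-level vectorization with an OpenMP SIMD hint.
       Kernel and sizes are hypothetical, for illustration only.
       Build with, e.g., cc -O2 -fopenmp-simd triad.c */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000

    /* Simple triad kernel; the directive asks the compiler to vectorize
       the loop, the kind of transformation profiling tools can confirm. */
    void triad(double *a, const double *b, const double *c, double s, int n)
    {
        #pragma omp simd
        for (int i = 0; i < n; i++)
            a[i] = b[i] + s * c[i];
    }

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return EXIT_FAILURE;

        for (int i = 0; i < N; i++) { b[i] = i; c[i] = 2.0 * i; }
        triad(a, b, c, 3.0, N);
        printf("a[42] = %f\n", a[42]);

        free(a); free(b); free(c);
        return EXIT_SUCCESS;
    }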

Realizing GPU Computation at Scale


A “perfect storm” is brewing: data volumes are increasing exponentially and mathematical models are growing ever more sophisticated and computationally intensive, yet power and cooling limit the systems that can be deployed. These pressures are driving the use of GPUs as accelerators for computationally intensive workloads. Accelerators are commonly deployed in a 1:1 configuration with conventional CPUs. Applications typically run across the two types of processors, with roughly 95 percent of the application running on the conventional CPU and the computationally intensive 5 percent running on the accelerator. However, in many applications this is still not sufficient. Many applications in financial services, the oil and gas sector, signal ... [ Read More ]
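As a rough illustration of that split, here is a minimal, hypothetical C sketch using an OpenACC directive: setup and control flow stay on the CPU, and only the computationally intensive loop is offloaded to the accelerator. The kernel, array sizes and data clauses are assumptions for illustration, not taken from any of the applications mentioned above.

    /* Hedged sketch of the CPU/accelerator split: the bulk of the program
       runs on the host, and one intensive kernel is offloaded via OpenACC.
       Without an OpenACC compiler the pragma is ignored and the loop simply
       runs on the CPU. */
    #include <stdio.h>

    #define N (1 << 20)

    static float x[N], y[N];

    int main(void)
    {
        /* The "95 percent": setup, I/O and control flow on the CPU. */
        for (int i = 0; i < N; i++) { x[i] = i * 0.5f; y[i] = 1.0f; }

        /* The computationally intensive "5 percent": offloaded to the GPU. */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (int i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f, y[N-1] = %f\n", y[0], y[N - 1]);
        return 0;
    }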

Supercomputing Matters: SC14 in Retrospect


SC14, the premier event in the HPC industry, has wound to a close in New Orleans. It’s interesting to look back and contemplate the validity of this year’s theme, “HPC Matters,” the success of the conference and the vitality of the industry. Our industry has seen the ebb and flow of the relevance of supercomputing, and the fortunes of Cray have often paralleled those of this event (or vice versa). This year was no exception, and SC14 was a resounding success for Cray and for the industry. With more than 10,000 attendees and 356 exhibitors it was a full show, and the work never stopped. As I walked the show floor — logging about 5 miles a day — I was struck by the number of Cray customers. They were everywhere, either with booths of their own ... [ Read More ]

Data is More Valuable than the Hardware it Runs On


It’s been a while since my last post earlier this year, but we’ve been very busy working on some exciting new technologies that we can finally talk about in public. As many of you know, our original product, the Urika graph analytics appliance (now renamed Urika-GD®), gave us a great opportunity to introduce a unique technology to the enterprise analytics market. Not only did Urika-GD start solving some fairly intractable problems where commodity machines faltered, but it also gave us the opportunity to better understand some of the use cases that were beginning to emerge – how our clients and users actually deployed some of the big data analytics solutions, the workflow from start to finish, as well as the other systems that were needed ... [ Read More ]

Back to Warp Speed, Scotty!


Aren’t supercomputers already supposed to be moving at “warp” speed? Well, multi-petaflops anyway? With computing challenges getting more demanding and datasets getting more massive, it is not as easy as Captain James Kirk asking Scotty for more power. Supercomputers and their associated storage systems are getting larger each year, but a performance gap is emerging between the compute nodes and the disk systems that host the parallel file systems. The overall performance of applications on supercomputing systems hinges on how efficiently data moves between the tier of spinning disks and the memory where the codes are actually executing. Cray has been collaborating with some of the most ... [ Read More ]
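To make the idea of a faster tier between memory and spinning disk concrete, here is a purely conceptual C sketch that stages an input file from a parallel-file-system path to a hypothetical fast-tier path before the application reads it. The paths, block size and staging routine are assumptions for illustration; this is not the actual DataWarp interface.

    /* Conceptual staging sketch: copy data from the slow parallel file
       system to a fast local tier so the compute phase never waits on
       spinning disk. Paths are hypothetical. */
    #include <stdio.h>
    #include <stdlib.h>

    /* Copy src to dst in blocks; returns 0 on success, -1 on failure. */
    static int stage_file(const char *src, const char *dst)
    {
        FILE *in = fopen(src, "rb");
        if (!in) return -1;
        FILE *out = fopen(dst, "wb");
        if (!out) { fclose(in); return -1; }

        char buf[1 << 16];              /* 64 KiB staging buffer */
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, in)) > 0)
            fwrite(buf, 1, n, out);

        fclose(in);
        fclose(out);
        return 0;
    }

    int main(void)
    {
        /* Hypothetical mount points for the two tiers. */
        const char *pfs_path  = "/lus/scratch/input.dat";
        const char *fast_path = "/fast-tier/input.dat";

        if (stage_file(pfs_path, fast_path) != 0) {
            fprintf(stderr, "staging failed\n");
            return EXIT_FAILURE;
        }

        /* The application would now read its working set from fast_path,
           keeping the spinning-disk tier out of the critical path. */
        return EXIT_SUCCESS;
    }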