What’s Hiding in Your Performance Toolkit?

As the old adage goes, “you cannot judge a book by its cover” — and you cannot estimate the performance of an application on thousands of nodes of a massively parallel computer by investigating its performance on a single node or, for that matter, 100 nodes. The performance of an application depends on distinct operations, some that scale and others that don’t. As a thought experiment, say one operation takes 99% of the time when running 32 MPI tasks on one node, and it scales as the number of MPI tasks increases. Another operation takes 1% of the time on one node; however, it doesn’t scale at all. On 1,000 nodes, or 32,000 MPI tasks, the first operation would take only 0.099% of the original runtime and the second operation ... [ Read More ]
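The thought experiment above is Amdahl’s law in action, and can be sketched in a few lines of Python. The function name and the printed breakdown are ours, not from the post; the fractions and task counts mirror the example.

```python
def runtime_fraction(scaling_frac, fixed_frac, speedup):
    """Runtime relative to the single-node run when the scaling part
    speeds up by `speedup` and the fixed part does not (Amdahl's law)."""
    return scaling_frac / speedup + fixed_frac

base_tasks = 32            # 32 MPI tasks on one node
big_tasks = 32_000         # 1,000 nodes at 32 tasks each
speedup = big_tasks / base_tasks   # 1,000x more tasks

# 99% of the time scales, 1% does not.
scaled = runtime_fraction(0.99, 0.01, speedup)
share_scaling = (0.99 / speedup) / scaled

print(f"total runtime vs. one node: {scaled:.5f}")       # ~0.01099
print(f"scaling op's share of runtime: {share_scaling:.1%}")  # ~9.0%
```

The once-dominant 99% operation shrinks to roughly 9% of the runtime at scale, while the “negligible” 1% operation balloons to about 91% — which is exactly why single-node measurements mislead.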

Altair Receives Cray’s 2017 Supplier of the Year Award

This Tuesday at SC17, we announced the recipient of our Supplier of the Year award. The honor goes to Altair, a testament to its continued collaboration with Cray — a relationship that goes back over 15 years — and its commitment to integrating PBS Professional®, Altair’s market-leading HPC software solution, with Cray supercomputers to optimize performance for compute-intensive environments. Altair is the first software provider to receive the award. “We’re excited to honor Altair as an outstanding supplier,” said Cray CFO Brian Henry. “This award represents excellence in the supercomputing industry and Altair has demonstrated their commitment to providing leading solutions to the HPC industry.” Altair’s PBS Professional is an ... [ Read More ]

How to Program a Supercomputer

We felt it was necessary to write a book reintroducing programming techniques that should be used by application developers targeting current- and future-generation supercomputers. While the techniques have been around for a long time, many of today’s developers are not aware of them. Let us explain. The supercomputer has been a shifting target for application programmers ever since its inception in the form of Seymour Cray’s CDC 6600 in the mid-1960s, forcing developers to adapt to new approaches along with the ever-changing hardware and software systems. This necessity of developer adaptation is especially conspicuous in the field of high-performance computing (HPC), where developers typically optimize for the target node ... [ Read More ]

Bio-IT World 2017: Life Sciences Embrace Cloud

Hope you had a chance to attend the 16th annual, three-day Boston Bio-IT World Conference & Expo this year. Competitions included the Best Practices awards and the Benjamin Franklin Award, and the show planners also named the 2017 Best of Show winners. Tracks for the conference included data and storage management, cloud computing, networking hardware, bioinformatics, next-gen sequencing informatics, data security and more.

Face to face

As many times as I’ve walked Bio-IT’s exhibit hall floor, it still pumps me up to finally meet in person someone I’ve been interacting with via email and phone — in some cases for years — and hear their thoughts. True, social media has diminished the impact of conferences, but the ... [ Read More ]

Cray’s New Urika-XC Suite: the Convergence of Supercomputing and Analytics

Cray today announced the launch of the Cray® Urika®-XC analytics software suite, which brings graph analytics, deep learning and robust big data analytics tools to the company’s flagship line of Cray® XC™ supercomputers. The Cray Urika-XC analytics software suite empowers data scientists to make breakthrough discoveries previously hidden within massive datasets, and achieve faster time to insight while leveraging the scale and performance of Cray XC supercomputers. With the Urika-XC software suite, analytics and artificial intelligence (AI) workloads can run alongside scientific modeling and simulations on XC supercomputers, eliminating costly and time-consuming movement of data between systems. Cray XC customers will be able to run ... [ Read More ]