Why Artificial Intelligence Needs HPC

As we work with our customers on their artificial intelligence (AI) journeys, we are finding that different organizations take a range of approaches to implementing AI. We recently had an opportunity to speak with Brian Dolan, chief scientist and co-founder of an AI startup — Deep 6 AI — to understand their interest in the Chapel parallel programming language, as well as why they view AI as a high-performance computing (HPC) problem. First, a bit of background on Deep 6 AI and Chapel. Deep 6 AI is an artificial intelligence company whose primary mission is to find more patients faster for clinical trials — an important part of drug development and discovery. In 2016, Deep 6 AI was chosen by the ... [ Read More ]

What’s Hiding in Your Performance Toolkit?

Similar to the old adage that you cannot judge a book by its cover, you cannot estimate the performance of an application on thousands of nodes of a massively parallel computer by investigating its performance on a single node or, for that matter, on 100 nodes. The performance of an application depends on distinct operations, some that scale and others that don’t. As an experiment, let’s say one operation takes 99% of the time when running 32 MPI tasks on one node, and it scales as the number of MPI tasks increases. Another operation takes 1% of the time on one node; however, it doesn’t scale at all. On 1,000 nodes running 320,000 MPI tasks (10,000 times as many), the first operation would take just 0.0099% of the original run time, and the second operation ... [ Read More ]
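
The arithmetic in this experiment is just Amdahl's law. Here is a minimal sketch of that accounting in Python; the function and variable names are ours, purely for illustration:

    # Amdahl-style accounting for the two-operation experiment above.
    # One operation scales perfectly with task count; the other does
    # not scale at all.

    def scaled_shares(scaling_frac, base_tasks, new_tasks):
        """Return each operation's share of total run time after scaling up.

        scaling_frac: fraction of the base run time spent in the
        perfectly scaling operation (0.99 in the example).
        """
        speedup = new_tasks / base_tasks
        scaling_time = scaling_frac / speedup   # shrinks with more tasks
        serial_time = 1.0 - scaling_frac        # stays constant
        total = scaling_time + serial_time
        return scaling_time / total, serial_time / total

    scaling, serial = scaled_shares(0.99, base_tasks=32, new_tasks=320_000)
    print(f"scaling op: {scaling:.2%} of run time, serial op: {serial:.2%}")
    # The 1% operation now consumes roughly 99% of the total run time.

Running the numbers this way makes the point of the experiment concrete: at 320,000 tasks, the operation that took 1% of the time on one node dominates the run.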

Cray and Microsoft Partner to Bring Cray Supercomputers to Azure

This week we announced an exciting new strategic alliance with Microsoft that will make Cray systems available in Microsoft Azure, giving users access to all the supercomputing power they need without the overhead of owning and maintaining a datacenter. Cray in Azure will open up the power of supercomputing to a broad new cross-section of businesses and organizations with growing mission-critical, scalable application needs. The dramatic growth in AI, machine and deep learning, and data analytics is driving the need for scalable simulation capability — and vice versa — in a virtuous cycle where companies and organizations vie for competitive advantage. Our CEO, Pete Ungaro, stated it well: “Our partnership with Microsoft will ... [ Read More ]

Getting the Most From the New Intel® Xeon® Scalable Processors on HPC Workloads

July brought news of the launch of the new Intel® Xeon® Scalable Processors, previously known by the codename Skylake, and with it new features and capabilities that can bring performance enhancements to both legacy codes and new applications. This may leave you wondering about the best approaches to getting the most out of these new Intel processor workhorses. In this article, we’ll share a couple of the key insights we’ve gained from running the benchmarks we’ve collected and optimized over the last four decades on the latest in the line of processors that are ubiquitous in high-performance computing (HPC).

HPC codes require balance

Parallel programming is about the balance between computation and data movement. Many ... [ Read More ]
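
As a concrete illustration of that balance, here is a back-of-the-envelope arithmetic-intensity calculation in Python, using the STREAM triad kernel as a stand-in; the kernel choice, byte counts, and bandwidth figure are our assumptions, not from the article:

    # Arithmetic intensity (flops per byte moved) for the STREAM triad,
    # a[i] = b[i] + scalar * c[i], as a rough compute/data-movement gauge.

    def arithmetic_intensity(flops_per_elem, bytes_per_elem):
        """Flops performed per byte of memory traffic."""
        return flops_per_elem / bytes_per_elem

    # Triad: 2 flops per element (one multiply, one add); 24 bytes of
    # traffic per element (read b and c, write a, with 8-byte doubles).
    ai = arithmetic_intensity(flops_per_elem=2, bytes_per_elem=24)
    print(f"arithmetic intensity: {ai:.3f} flops/byte")

    # In the memory-bound regime of the roofline model, attainable
    # performance is capped at bandwidth * intensity.
    bandwidth_gb_s = 100.0  # assumed per-socket bandwidth, illustrative
    print(f"memory-bound ceiling: {bandwidth_gb_s * ai:.1f} Gflop/s")

A kernel this far below a processor's peak flop rate is limited by data movement rather than compute, which is why balance, not raw flops, determines HPC performance.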

How HPC Can Help Tap the Power of Ocean Waves

“What’s amazing about ocean wave energy is the enormity of the resource sitting there,” says Ashkan Rafiee. “Whoever solves this riddle will make a huge impact on the world.” Dr. Rafiee is the hydrodynamics team leader for Carnegie Clean Energy — an Australian wave, solar and battery energy company well on its way to making wave power a reality. For the last decade, Carnegie has been developing a wave energy device that converts ocean swell into zero-emission, renewable power and desalinated freshwater. Dubbed “CETO,” the device is already in use off Western Australia’s Garden Island, helping power the country’s largest naval base. But deploying wave energy technology at scale is another matter. “The potential is phenomenal,” says ... [ Read More ]