Getting the Most From the New Intel® Xeon® Scalable Processors on HPC Workloads

July brought news of the launch of the new Intel® Xeon® Scalable Processors, previously known by the codename Skylake, and with it new features and capabilities that can bring performance enhancements to both legacy codes and new applications. This may leave you wondering how best to get the most out of these new workhorses. In this article, we’ll share a couple of the key insights we’ve gained from running the benchmarks we’ve collected and optimized over the last four decades on the latest in the line of processors that are ubiquitous in high-performance computing (HPC).

HPC codes require balance

Parallel programming is about the balance between computation and data movement. Many of the ... [ Read More ]
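The balance between computation and data movement that the excerpt mentions is often quantified as arithmetic intensity (FLOPs per byte moved) and compared against a machine's balance point, roofline-style. Here is a minimal sketch of that reasoning; the DAXPY figures are standard textbook values, while the peak-FLOP and bandwidth numbers are placeholders, not measured Xeon Scalable figures.

```python
def arithmetic_intensity(flops: float, bytes_moved: float) -> float:
    """FLOPs performed per byte of data moved to/from memory."""
    return flops / bytes_moved

def is_memory_bound(intensity: float, peak_flops: float, peak_bw: float) -> bool:
    """Roofline-style test: below the machine balance point,
    memory bandwidth (not compute) limits performance."""
    machine_balance = peak_flops / peak_bw  # FLOPs/byte the machine can sustain
    return intensity < machine_balance

# DAXPY: y[i] = a*x[i] + y[i] over n doubles.
# 2 FLOPs per element; 3 doubles (24 bytes) moved per element (read x, read+write y).
n = 1_000_000
daxpy_ai = arithmetic_intensity(2 * n, 24 * n)  # ~0.083 FLOP/byte

# Placeholder machine: 1 TFLOP/s peak compute, 100 GB/s memory bandwidth.
print(daxpy_ai)
print(is_memory_bound(daxpy_ai, 1e12, 100e9))  # True: DAXPY is bandwidth-limited
```

Because DAXPY's intensity sits far below the hypothetical machine's balance point of 10 FLOPs/byte, adding compute throughput alone would not speed it up; reducing data movement would.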

How HPC Can Help Tap the Power of Ocean Waves

“What’s amazing about ocean wave energy is the enormity of the resource sitting there,” says Ashkan Rafiee. “Whoever solves this riddle will make a huge impact on the world.” Dr. Rafiee is the hydrodynamics team leader for Carnegie Clean Energy — an Australian wave, solar and battery energy company well on its way to making wave power a reality. For the last decade, Carnegie has been developing a wave energy device that converts ocean swell into zero-emission, renewable power and desalinated freshwater. Dubbed “CETO,” the device is already in use off Western Australia’s Garden Island, helping power the country’s largest naval base. But deploying wave energy technology at scale is another matter. “The potential is phenomenal,” says ... [ Read More ]

CHIUW 2017: Surveying the Chapel Landscape

CHIUW 2017 — the 4th Annual Chapel Implementers and Users Workshop — was held last month in Orlando, Fla., in conjunction with IEEE IPDPS 2017. Right out of the gate, attendees heard about a number of positive trends in the annual “state of the project” talk, which summarizes Chapel progress over the past year. This year’s highlights included:

- Chapel performance is competitive with hand-coded C+OpenMP for single-node workloads, as demonstrated by benchmarks like the Livermore Compiler Analysis Loop Suite (LCALS).
- For key communication benchmarks like ISx and HPCC RA, Chapel performance is increasingly competitive with the MPI/SHMEM reference versions, and occasionally beats them (see Figure 1 below).
- In May, Chapel became the ... [ Read More ]

Data-Intensive Computing to Simulate the Brain

Understanding how the human brain works will take more than brains. Along with the planet’s smartest scientific minds, it will take never-before-achieved computing capabilities. The science and technology required to decode the human brain is a scientific final frontier … and Professor Dr. Dirk Pleiter is on the front lines. The theoretical physics professor and research group leader at the Jülich Supercomputing Centre (JSC) in Jülich, Germany, is part of the Human Brain Project (HBP), a 10-year-long European research initiative tasked with creating a working simulation of the brain. “Understanding the human brain is one of the greatest challenges facing 21st century science,” states the HBP’s report to the European Commission. “If ... [ Read More ]

Cray’s New Urika-XC Suite: The Convergence of Supercomputing and Analytics

Cray today announced the launch of the Cray® Urika®-XC analytics software suite, which brings graph analytics, deep learning and robust big data analytics tools to the company’s flagship line of Cray® XC™ supercomputers. The Cray Urika-XC analytics software suite empowers data scientists to surface breakthrough insights previously hidden within massive datasets, and achieve faster time to insight while leveraging the scale and performance of Cray XC supercomputers. With the Urika-XC software suite, analytics and artificial intelligence (AI) workloads can run alongside scientific modeling and simulations on XC supercomputers, eliminating costly and time-consuming movement of data between systems. Cray XC customers will be able to run ... [ Read More ]