3 Reasons Why CosmoFlow on a Cray System is a Big Deal

Today, Cray, NERSC (the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory) and Intel announced the results of a three-way collaboration around CosmoFlow, a deep learning 3D convolutional neural network (CNN) that can predict cosmological parameters with unprecedented accuracy using the Intel-powered Cray® XC™ series “Cori” supercomputer at NERSC. Supercomputers are unique in their ability to be instruments of discovery for problems on the smallest and largest of scales — from the subatomic scale to the cosmos. Cosmologists who study the origin, evolution and eventual fate of the universe use a combination of empirical observations and theoretical computer simulations to define and refine a ... [ Read More ]
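To make the idea concrete, here is a minimal sketch of the kind of model the article describes: a 3D convolutional network that regresses a few scalar cosmological parameters from a volumetric input. The grid size, layer widths, and three-parameter output below are illustrative assumptions, not the actual CosmoFlow architecture or its implementation.

```python
# Minimal sketch of a 3D CNN that regresses a few scalar parameters from a
# 3D volume, in the spirit of CosmoFlow. The 64^3 grid, layer sizes, and
# three-parameter output are illustrative assumptions, not the real model.
import tensorflow as tf
from tensorflow.keras import layers

def build_model(grid=64, n_params=3):
    inputs = tf.keras.Input(shape=(grid, grid, grid, 1))  # density field on a 3D grid
    x = layers.Conv3D(16, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling3D(2)(x)
    x = layers.Conv3D(32, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling3D(2)(x)
    x = layers.Conv3D(64, 3, activation="relu", padding="same")(x)
    x = layers.GlobalAveragePooling3D()(x)
    x = layers.Dense(128, activation="relu")(x)
    outputs = layers.Dense(n_params)(x)                   # e.g. Omega_m, sigma_8, n_s
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")           # regression, not classification
    return model

model = build_model()
model.summary()
```

The linear output layer and mean-squared-error loss are what distinguish parameter regression from the classification tasks most CNNs are built for.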

Boost Your HPC & AI Knowledge with Fall Learning Series Webinars

Summer is almost over and fall is the perfect time to refocus, reengage, and reinvest in learning how to overcome some of your biggest HPC challenges. To help get you on track, Cray is offering a September Learning Series of webinars designed to address four of the most frequently asked questions we get in areas ranging from artificial intelligence to storage and compute to software. Join us for any — or all — of the sessions to learn, ask questions and engage with industry thought leaders. Tuesday, 9/11, 9 a.m. PT: “The Three Steps: Focusing on Workflow for Successful AI Projects.” As artificial intelligence (AI) has gained mainstream acceptance, there's been a lot of focus on the systems used to develop and train models. But ... [ Read More ]

Can LS-DYNA Scale Any Higher?

Processing and memory bottlenecks can run but they can’t hide. Not indefinitely, at least. And especially not when four technology leaders combine efforts against them. Cray, Livermore Software Technology Corporation (LSTC), the National Center for Supercomputing Applications (NCSA) and Rolls-Royce are partnering on an ongoing project to explore the future of implicit finite element analyses of large-scale models using LS-DYNA, a multiphysics simulation software package, and Cray supercomputing technology. As the scale of finite element models — and the systems they run on — increases, so do scaling issues and the amount of time it takes to run a model. Understanding that, ultimately, only time and resource constraints limit the size ... [ Read More ]

The Peloton Project: Largest-Ever Sports Simulation Yields Surprising Results

Aerodynamics in a cycling peloton aren’t what you might expect. A peloton is the main field or group of riders in a road bicycle race like the Tour de France. While it can take different shapes, the peloton’s overall purpose is to take advantage of the effects of slipstreaming, or drafting, behind other riders in the group. Air resistance is the largest mechanical force holding cyclists back on flat roads, and slipstreaming can save riders up to 50 percent of their energy. Or that was the assumption, at least. The reality is that almost no information exists on aerodynamic resistance for riders in cycling pelotons. Systematic computer simulations or measurements have never been reported before. Professor Bert ... [ Read More ]
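For a sense of scale, here is a back-of-the-envelope sketch of why drafting pays off: aerodynamic power grows with the cube of speed, so halving a rider's effective drag area roughly halves the power needed to hold the same pace. All of the numbers below (air density, drag area, speed, and the 50 percent drag reduction) are illustrative assumptions, not results from the peloton simulations.

```python
# Back-of-the-envelope sketch of why drafting matters. All numbers here
# (air density, CdA, speed, the 50% drag reduction) are illustrative
# assumptions, not results from the Peloton Project.
rho = 1.2          # air density, kg/m^3
cda_solo = 0.30    # drag area of a solo rider, m^2
cda_draft = 0.15   # assumed drag area while drafting (50% reduction), m^2
v = 12.0           # speed, m/s (about 43 km/h)

def aero_power(cda, speed, density=rho):
    """Power (W) to overcome aerodynamic drag: P = 0.5 * rho * CdA * v^3."""
    return 0.5 * density * cda * speed ** 3

p_solo = aero_power(cda_solo, v)
p_draft = aero_power(cda_draft, v)
print(f"solo: {p_solo:.0f} W, drafting: {p_draft:.0f} W, "
      f"saving: {100 * (1 - p_draft / p_solo):.0f}%")
```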

Is Your HPC System Fully Optimized? + Live Chat

Let’s talk operating systems… Take a step back and look at the systems you have in place to run applications and workloads that require high core counts to get the job done. (You know, jobs like fraud detection, seismic analysis, cancer research, patient data analysis, weather prediction, and more — jobs that can take your organization to the next level by tackling seriously tough challenges.) You’re likely running a cutting-edge processor, looking to take advantage of the latest and greatest innovations in compute processing and floating-point calculations. The models you’re running are complex, pushing the bounds of math and science, so you want the best you can get when it comes to processor power. You’re probably ... [ Read More ]