When It Comes to AI, Automation Is Better, Choice Is Good, and Simplicity Is King

This week at the SC18 supercomputing event, one of the exciting areas we’re highlighting is how Cray continues to expand its artificial intelligence portfolio to support researchers, data scientists and IT teams as machine and deep learning become core to their everyday missions. As we’ve spoken to organizations starting out with AI, developing models, doing real research, or implementing production systems, we’ve noticed a few themes:

• Machine learning is as much an art as it is a science, and a time-consuming endeavor at that. Automation tools that help the data scientist get to the optimal solution faster are much appreciated.

• No single “right” tool set exists. Some teams prefer TensorFlow, some PyTorch, some Apache Spark™ ... [ Read More ]

3 Reasons Why CosmoFlow on a Cray System is a Big Deal

Today, Cray, NERSC (the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory) and Intel announced the results of a three-way collaboration around CosmoFlow, a deep learning 3D convolutional neural network (CNN) that can predict cosmological parameters with unprecedented accuracy using the Intel-powered Cray® XC™ series “Cori” supercomputer at NERSC. Supercomputers are unique in their ability to be instruments of discovery for problems on the smallest and largest of scales — from subatomic scale to the cosmos. Cosmologists who study the origin, evolution and eventual fate of the universe use a combination of empirical observations and theoretical computer simulations to define and refine a ... [ Read More ]
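The teaser above describes CosmoFlow as a 3D convolutional neural network. As an illustration only (not CosmoFlow's actual implementation, which trains at scale on the "Cori" supercomputer), the core operation of a 3D CNN can be sketched in plain NumPy: sliding a small 3D kernel across a volumetric input, such as a simulated matter-density grid, to produce a feature map. The `conv3d` helper, the 8×8×8 volume, and the averaging kernel are all hypothetical choices for this sketch.

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3D convolution (strictly, cross-correlation, as used in
    CNNs) of a single volumetric input with a single kernel."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Weighted sum over the local 3D neighborhood.
                out[i, j, k] = np.sum(volume[i:i+d, j:j+h, k:k+w] * kernel)
    return out

# A tiny 8^3 stand-in for a volumetric "density" grid, filtered with a
# 3^3 local-mean kernel; a real network would learn the kernel weights.
rng = np.random.default_rng(0)
volume = rng.random((8, 8, 8))
kernel = np.ones((3, 3, 3)) / 27.0
features = conv3d(volume, kernel)
print(features.shape)  # (6, 6, 6): valid convolution shrinks each axis by 2
```

Frameworks such as TensorFlow and PyTorch provide this as a single optimized layer; the loop form here just makes the sliding-window structure explicit.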

Boost Your HPC & AI Knowledge with Fall Learning Series Webinars

Summer is almost over, and fall is the perfect time to refocus, reengage, and reinvest in learning how to overcome some of your biggest HPC challenges. To help get you on track, Cray is offering a September Learning Series of webinars designed to address four of the most frequently asked questions we get, in areas ranging from artificial intelligence to storage, compute and software. Join us for any — or all — of the sessions to learn, ask questions and engage with industry thought leaders.

Tuesday, 9/11, 9 a.m. PT: “The Three Steps: Focusing on Workflow for Successful AI Projects.” As artificial intelligence (AI) has gained mainstream acceptance, there's been a lot of focus on the systems used to develop and train models. But ... [ Read More ]

Can LS-DYNA Scale Any Higher?

Processing and memory bottlenecks can run but they can’t hide. Not indefinitely, at least. And especially not when four technology leaders combine efforts against them. Cray, Livermore Software Technology Corporation (LSTC), the National Center for Supercomputing Applications (NCSA) and Rolls-Royce are partnering on an ongoing project to explore the future of implicit finite element analyses of large-scale models using LS-DYNA, a multiphysics simulation software package, and Cray supercomputing technology. As the scale of finite element models — and the systems they run on — increases, so do scaling issues and the amount of time it takes to run a model. Understanding that, ultimately, only time and resource constraints limit the size ... [ Read More ]

The Peloton Project: Largest-Ever Sports Simulation Yields Surprising Results

Aerodynamics in a cycling peloton aren’t what you might expect. A peloton is the main field or group of riders in a road bicycle race like the Tour de France. While it can take different shapes, the peloton’s overall purpose is to take advantage of the effects of slipstreaming, or drafting, behind other riders in the group. Air resistance is the biggest mechanical force preventing cyclists from going faster on flat roads, and slipstreaming can save riders up to 50 percent of their energy. Or that was the assumption, at least. The reality is that almost no information exists on aerodynamic resistance for riders in cycling pelotons. No systematic computer simulations or measurements had ever been reported before. Professor Bert ... [ Read More ]
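To put the "up to 50 percent" drafting claim in rough quantitative terms, the power needed to overcome aerodynamic drag grows with the cube of speed: P = ½ρ·CdA·v³. The sketch below uses assumed, illustrative numbers (a solo CdA of 0.30 m², sea-level air density, and a flat 50 percent drag reduction for a sheltered rider); it is a back-of-the-envelope model, not a result from the simulation described above.

```python
def drag_power(cda, v, rho=1.225):
    """Power in watts to overcome air resistance.

    cda -- drag area CdA in m^2 (drag coefficient times frontal area)
    v   -- ground speed in m/s (still air assumed)
    rho -- air density in kg/m^3 (1.225 is standard sea level)
    """
    return 0.5 * rho * cda * v ** 3

v = 40 / 3.6  # 40 km/h expressed in m/s

# Assumed values for illustration: ~0.30 m^2 is a plausible solo road
# position; the sheltered rider is modeled as a 50% drag reduction.
solo = drag_power(cda=0.30, v=v)
sheltered = drag_power(cda=0.30 * 0.5, v=v)

print(f"solo: {solo:.0f} W, sheltered: {sheltered:.0f} W")
```

Because drag power scales linearly with CdA, halving the effective drag area halves the aerodynamic power demand at a given speed — which is why even small reductions in drag matter so much in a race.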