Six Ways to Say “Hello” in Chapel | Part 2


This article continues the introduction to Chapel via simple “Hello world” programs that I started in part one of this series, picking up where we left off. Distributed Parallel Hello World: My last post ended with the following parallel, distributed-memory Chapel program, sans explanation. Here’s how this program works: As in previous examples, the first line declares a configuration constant, n, indicating how many messages to print. The next line is a use statement, which makes a module’s contents available to the current scope. In this case, we are “use”-ing a standard library module, CyclicDist, which supports the cyclic distribution of rectangular index sets across compute nodes (or locales, in Chapel terminology). The ... [ Read More ]
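The excerpt describes the program without reproducing it; a minimal sketch of what such a program could look like follows. The domain name MessageSpace and the message text are illustrative, and the exact Cyclic-distribution syntax varies across Chapel versions.

    // Sketch of a distributed-memory "Hello world" in Chapel, per the description above.
    config const n = 100;          // number of messages; settable at runtime with --n=<value>

    use CyclicDist;                // standard module providing cyclic distributions

    // Map the indices 1..n cyclically across the locales (compute nodes) the program runs on.
    const MessageSpace = {1..n} dmapped Cyclic(startIdx=1);

    // Each iteration of the forall executes on the locale that owns its index.
    forall msg in MessageSpace do
      writeln("Hello from message ", msg, " of ", n, " on locale ", here.id);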

Why Do Better Weather Forecasts Matter?


Cray, NVIDIA, the Swiss National Supercomputing Centre (CSCS) and MeteoSwiss recently announced the acceptance of MeteoSwiss’ new supercomputing platform for operational weather forecasting, a Cray CS-Storm system with NVIDIA® Tesla® K80 GPUs. It is the world’s first operational weather forecasting system using GPGPUs as the primary computational engine, and it represents a successful return on years of effort by MeteoSwiss, C2SM/ETH and CSCS in porting the COSMO weather model to GPUs. This system is the latest in a long series of investments in Cray supercomputers by weather forecasting and climate research organizations, which over the past two years have included the United Kingdom’s Met Office; Danish Meteorological Institute; ... [ Read More ]

Six Ways to Say “Hello” in Chapel | Part 1


When learning a new programming language, users often start by studying “Hello world” programs: those that output simple messages to the console. Though such programs are trivial by nature, they can be an illuminating way to get familiar with a new language in a short amount of time. In this series of articles, I’ll show several “Hello world” programs in Chapel, Cray’s open-source programming language for productive parallel programming. I’ll start with a pair of traditional (serial) “Hello world” programs and then move on to parallel versions that take advantage of Chapel’s features for shared- and distributed-memory execution. Simple Hello World: Writing a traditional “Hello world” program in Chapel is the one-liner you’d hope ... [ Read More ]
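For reference, the serial one-liner the excerpt alludes to is simply a call to writeln (the exact message text in the article may differ):

    // The traditional serial "Hello world" in Chapel is a single statement:
    writeln("Hello, world!");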

Machine Learning for Critical Workflows | Part III


(This is the third and last blog entry in a series of three. The first one introduced big data analytics and critical workflows. The second post discussed critical workflows in oil and gas and in life sciences. This one will speculate about machine learning techniques to optimize such workflows.) Machine learning (ML), at first blush, has to be one of the lesser Olympians in today’s IT pantheon: a quick Google search reveals “big data” to be Zeus (863 million entries), closely followed by “analytics” (632 million), while “cloud computing” trails at 161 million entries and “machine learning” comes in a distant fourth at 59 million. ML is typically defined as those algorithms (and the study thereof) that can discover patterns in data ... [ Read More ]

From Grand Challenge Applications to Critical Workflows for Exascale | Part II


(This is the second in a series of three blog entries. The first post introduced big data analytics and critical workflows; this one will discuss critical workflows in oil and gas and in the life sciences; and the last will speculate about machine learning techniques to optimize such workflows.) In the first post, I defined workflows in terms of a low attack surface, which implied four characteristics: many user input fields, combined with mixed protocols and interfaces, and blocks of software functionality organized as services to each other. In addition, critical workflows are those without which R&D or engineering doesn’t get done. I also gave an example of big data analytics (BDA) and noted an article on the analysis of ... [ Read More ]