Why Chapel?


In previous articles for this blog, I’ve provided a high-level overview of Chapel, the parallel programming language that we’re developing at Cray Inc., and I’ve introduced Chapel’s iterators as a sample productivity-oriented feature. In this article (the first in a series), I’ll address some recurring questions about why we’re pursuing Chapel. Why develop a new language? My short answer to this question is that, quite simply, I believe programmers have never had a decent programming language for large-scale parallel computing. By “decent,” I mean one that contains sufficient concepts for expressing the parallelism and locality control required to leverage supercomputers, while also being as general, effective, and feature-rich as … [Read more...]

Video Blog: Cray Application Performance Team Poised to Get the Most from HPC Systems

Supercomputing solutions are more than the hardware they run on. It is supercomputing applications that transform ideas into reality and answer previously unanswerable questions. With a wider variety of customer sectors demanding parallel computing and other sophisticated functions, Cray is committed to finding ways to get the most value and efficiency from our systems. As part of that effort, we have a team dedicated to working closely with customers and numerous independent software vendors (ISVs). Our application performance team is devoted to optimizing application solutions that help our customers make the most of their investments. Here’s a brief look at their work. What is the Cray Application team’s area of expertise? Like … [Read more...]

Hadoop for Scientific Big Data – Maslow’s Hammer or Swiss Army Knife?


The Law of the Instrument (aka Maslow’s Hammer) is an over-reliance on a familiar tool — or, as my grandfather used to say, “when the only tool you have is a hammer, every problem begins to resemble a nail.” I’m sure this principle comes to mind as many professionals within the scientific computing community are being asked to investigate the appropriateness of Hadoop® — today’s most familiar Big Data tool. When used inappropriately, with technologies not suited for scientific Big Data, Hadoop may indeed feel like wielding a cumbersome hammer. But when used appropriately, with a technology stack that’s specifically suited to the realities of scientific Big Data, Hadoop can feel like a Swiss Army knife — a … [Read more...]

Another Exciting Year


As we near the end of 2013, Cray is closing out one of the most transformative years in our long and proud history. In just this year alone, we successfully integrated two strategic acquisitions, won a number of large supercomputing contracts at customer sites around the world, significantly expanded our business and portfolio of solutions, continued to build the financial strength of the company, and added a host of highly talented individuals to our team. It’s been quite a year! Since the early days of our company, starting with our iconic founder Seymour Cray, we have been a trusted partner to customers around the world with a commitment to do two things really well: be focused on their success and help them to see around the next … [Read more...]

OpenACC 2.0 Elucidated


In a recent blog post, David Wallace discussed the importance of OpenACC and how it is impacting the industry as a whole. Here, I will focus on the recently released OpenACC 2.0: its features and benefits, its primary purpose, and discussions on the next version. Why release a second version of the OpenACC specification? Whenever something is done in a committee, compromises are made. In the case of OpenACC 1.0, most of the compromises were related to what the technical committee felt could reasonably be designed in the given timeframe. There were already four different mechanisms for programming accelerators provided by the initial four members of OpenACC. So the technical committee took the pieces that it liked from the two … [Read more...]