Why Chapel?


In previous articles for this blog, I’ve provided a high-level overview of Chapel, the parallel programming language that we’re developing at Cray Inc., and I’ve introduced Chapel’s iterators as a sample productivity-oriented feature. In this article (the first in a series), I’ll address some recurring questions about why we’re pursuing Chapel.

Why develop a new language?

My short answer to this question is that, quite simply, I believe programmers have never had a decent programming language for large-scale parallel computing. By “decent,” I mean one that contains sufficient concepts for expressing the parallelism and locality control required to leverage supercomputers, while also being as general, effective, and feature-rich as languages like Fortran, C, C++, or Java. Ideally, such a language would strive to be more than simply “decent” and feel as attractive and productive to programmers as Python or MATLAB are. Libraries and pragma-based notations are very reasonable and effective alternatives to creating a language. Yet, given the choice among the three, languages are almost always going to be preferable from the perspectives of:

  • providing good, clear notation;
  • supporting semantic checks on the programmer’s operations; and
  • enabling optimization opportunities by expressing the programmer’s intent most clearly to the compiler, runtime, and system, as the sketch below illustrates.
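
To make that last point concrete, here is a small sketch of a distributed data-parallel loop in Chapel. It is illustrative rather than definitive (the names `n`, `D`, and `A` are arbitrary choices of mine), but it shows how parallelism and locality can be stated directly in the source:

```chapel
use BlockDist;

config const n = 1000000;

// A block-distributed domain and array spanning the nodes we run on.
const D = {1..n} dmapped Block(boundingBox={1..n});
var A: [D] real;

// A data-parallel loop: the iterations' independence and the data's
// distribution are expressed directly rather than buried in library
// calls, giving the compiler and runtime something concrete to optimize.
forall i in D do
  A[i] = 2.0 * i;
```

An equivalent message-passing program would spell out the data decomposition, communication, and synchronization by hand; here those details are carried by the Block distribution, leaving the programmer’s intent in plain view.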

To support this assertion, consider a hypothetical scenario in which some colleagues or students who are facile at programming desktop computers using Java or Python consider writing their first distributed-memory parallel programs. Because you are a member of the supercomputing community who works in this space routinely, they approach you for recommendations. Speaking for myself, it would be difficult for me to recommend conventional HPC technologies to them without feeling a bit embarrassed or apologetic about it. It’s not that we don’t have effective, proven, and mature scalable computing technologies to offer — we do, MPI and OpenMP chief among them. It’s simply that our adopted technologies tend to be lower-level, less productive, more tied to specific hardware capabilities, and (frankly) old-fashioned compared to what modern programmers are accustomed to.

But don’t HPC workflows necessitate lower-level techniques?

Some may react to my scenario with disdain — believing that while our notations may not match a modern programmer’s expectations, they’re good enough for the HPC community where performance is king and nothing else matters; that we simply can’t afford to have anything sitting between the programmer and the metal. And to be clear, I should emphasize that if you are a programmer who is completely satisfied with current HPC programming notations, I’ve got no problem with that. Our goal with Chapel is not to supplant current technologies, but to provide an alternative for those HPC programmers (and prospective HPC programmers) who have expressed dissatisfaction with the status quo. In my experience, such users are not difficult to find, particularly if you talk to the scientists and programmers who write the code, rather than simply their managers or sources of funding.

At the same time, I want to push back on the characterization that HPC programmers have no alternative but to program to the metal. In my experience, most HPC programmers use OpenMP rather than managing threading and tasking themselves; they use MPI rather than calling directly into a network’s native interfaces. Each of these is an opportunity to work closer to the metal and squeeze out more performance, yet we typically pass it up because the programmability and portability benefits trump raw performance. Productivity-versus-control tradeoffs like this are made all the time, particularly when taking the 90/10 rule into account (the observation that a program tends to spend roughly 90% of its execution time in about 10% of its code). And the hunger for better abstractions seems clear, as exhibited by the domain-specific libraries and template metaprogramming notations being pursued by groups who are tired of chasing the shifting notations necessitated by recent, rapid architectural changes.

Taking this argument further, productivity is not inherently at odds with performance. With good design, not only can raising the level of abstraction improve programmability and portability, it can also help a compiler — to say nothing of subsequent programmers — better understand and optimize a piece of code. As a technology matures, well-designed higher-level notations will tend to outperform lower-level ones for the typical programmer, while also supporting the expansion of the user base. As an example, consider that there are more programmers today than ever, yet most are unable to write assembly that competes with the best compilers — our processors have become sufficiently complex that compilers do a better job than most humans. And our parallel systems are following that same trend of complexity.

Finally, it should be noted that raising the level of abstraction does not necessarily preclude the ability to call out to traditional, lower-level notations as needed. In the same way that a C-level programmer can inline or call out to an assembly-level routine, a well-designed higher-level language should let programmers leverage code written in more traditional HPC notations.
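
As a quick sketch, here is what that can look like in Chapel using its extern declarations for C interoperability; the saxpy routine, its signature, and the file names below are hypothetical stand-ins for existing tuned code:

```chapel
// Sketch: calling existing C code from Chapel. The files "saxpy.h" and
// "saxpy.c", and the saxpy() signature, are made up for illustration.
require "saxpy.h", "saxpy.c";
use CTypes;

// Declare the external C routine so Chapel code can call it directly.
extern proc saxpy(n: c_int, a: c_float,
                  x: c_ptr(c_float), y: c_ptr(c_float));

config const n = 1000;
var x, y: [0..#n] c_float;
x = 1.0 : c_float;
y = 2.0 : c_float;

// Hand pointers to the arrays' underlying data to the C routine,
// which computes y = a*x + y in place.
saxpy(n : c_int, 0.5 : c_float, c_ptrTo(x), c_ptrTo(y));
```

The same mechanism permits incremental adoption: tuned kernels can remain in C while the logic around them is written at a higher level.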

The Chapel team is working in all of these areas: supporting interoperability with traditional techniques; improving our implementation’s ability to automatically optimize code; addressing the portability and programmability issues that hamper conventional techniques; and, of course, raising the level of abstraction in parallel programming. In doing so, we are striving to make parallel programming more accessible, which we believe will significantly benefit and broaden the community — from computational scientists, to labs struggling to attract raw computer science talent, to the HPC industry, and even mainstream and open-source developers wrestling with newfound opportunities for parallelism, from multicore to the cloud.

With so many languages trying and failing… is this an intractable problem?

That’s an excellent question, and I’m glad you asked it, but my editor tells me that I’m out of space this time around. I’ll pick up here again in Part 2 of “Why Chapel?” in an upcoming post. In the meantime, if you have questions about Chapel that you’d like to see addressed in this series, or a future one, please send them to blog@cray.com.

Brad Chamberlain, Principal Engineer 
