Supercomputer, Cluster, Cellphone – What’s the Difference?

Let’s admit it. Supercomputing is an interesting industry to work in, and this line of employment often generates some peculiar questions and thoughts from people trying to gain a better understanding of what it is that we actually design, develop, build and sell.

I often hear the saying: “my cellphone is a supercomputer because it’s just as fast, or faster, than a supercomputer was 30 years ago.”

Believe it or not, you still cannot predict a tornado on a cellphone; it just won’t work. You need a supercomputer for that type of scientific prognostication. While today’s smartphones are, well, smart, they’re not that smart.

The saying does hold some deep truth for us about commoditization and technological evolution, but it misses the mark in a key dimension – that products should fit the needs of the users.  It also leaves out the fact that in those 30 years, the user has evolved just as much as the technology.  Technical and scientific challenges have also increased in complexity over the last three decades, and thus the sophistication of supercomputing solutions has had to evolve at an even quicker rate.

So, I am sorry to say that your cellphone is NOT a supercomputer.  That being said, the supercomputer of 30 years ago would have made a terrible cellphone.

Over the past couple of months I have participated in a number of press interviews to promote our new Cray XC30-AC™ and Cray CS300™ supercomputers.  These systems are geared toward technical enterprise users – organizations that have a real need for a supercomputer, just not on the scale of a Blue Waters or a Titan, two of the largest Cray supercomputers in the world.  Instead of a supercomputer that’s the size of a basketball court, maybe you want a supercomputer with the same technologies and features that takes up about the same space as a Ping-Pong table or two.

In a lot of the interviews, I kept getting the same question: “What is the difference between a supercomputer and a cluster?”  This is a great question, because it strikes at the heart of our own industry messaging and forces us to justify the terminology we hold so dear. The answer is similar to the cellphone discussion above, but not in the obvious way you might expect.

For clusters, I like to change the term slightly and use “cloud-cluster”.  Cloud computing is primarily a sales/delivery model. Within that sales model, one can provide capacity cluster technologies (i.e. “cloud-clusters”) that are economical at delivering compute capacity for irregular workloads.  Supercomputers are designed for continuous usage with production applications and are most economical for continuous-production users. Cloud-clusters and supercomputers can both have a similar number of processors and similar power utilization.  In fact, given their similarities, one might be tempted to say that they are one and the same.

But they are not the same, and again, just like with the cellphone, it boils down to having the right tool for the job.

Let’s use transportation to drive home a couple of analogies.  Cloud computing is analogous to the Zipcar® delivery model for cars.  If you drive every day, it doesn’t make much sense to use Zipcar, but if you are an occasional user, then it makes a lot of sense.  You can also view the cloud as analogous to taking the bus: it is inexpensive and high capacity.

If you are a heavy automobile user and need power and flexibility in transportation, then neither the bus, nor Zipcar work for you – you need to find an automobile that best fits your needs.  Take the Prius and the Tundra for example. Both are sold by Toyota, but their intended uses are very different. Toyota has specifically designed the Prius (an economy commuter car) and the Tundra (a full-size truck) to excel at very different things.

Cray sells supercomputers in several shapes and sizes (Cray XC30™, Cray XC30-AC™, and Cray CS300™ supercomputers), and we target these systems at heavy-duty workloads characteristic of productivity-driven, time-critical applications.  What makes them supercomputers is that they can do work that cloud-clusters either cannot do, or are clumsy and inefficient for.  Weather prediction, turbulent airflow in aircraft engines and high resolution seismic modeling are supercomputer workloads.  Grinding out the most accurate forecast every few hours, simulating accurate combustion and investing in the right oil/gas drilling site are all critical, production workloads.  This is where Cray’s computing solutions shine.

Fit the right computing model to the right application and you will be happy.  Try to fit a production supercomputing application into a cloud-cluster and you will be sorely disappointed.

Remember, the supercomputer of 30 years ago is a terrible cellphone and the cloud-cluster of today is not a supercomputer.


    QuantaCosta says

    I would like to ask you if you can envision a time when MPP can be considered viable on the cloud, or do you see proximity being a limiting factor? That is, given finite communication speed (c) and quantum processing, will local computing (via bus) always trump parallel computing (via networking)? Is there a future scenario where Cray would need to employ some aspect of a “cloud” strategy, for instance, a binary supercomputer system?

    Good question, QuantaCosta. I can envision it. Some conditions need to be met for a viable “MPP-cloud” or “supercomputing-cloud”, and those may defeat some of the value propositions of the cloud model.
    a) The application should not require large data sets to be imported to or exported from the “supercomputing-cloud”. This avoids the data bottleneck for clouds. This condition can be met today for things like ab initio quantum mechanics, where you need only a geometry and a basis set (small input) to begin a large simulation, and you can limit your outputs to wavefunction coefficients (small output) and in the end get useful supercomputing work done.
    b) The user would need to specifically request time on the “supercomputing-cloud” vs. the normal cloud. This kind of defeats the cloud model of being hardware agnostic (i.e. inexpensive hardware). Some cloud sites are trying to implement this “supercomputing-cloud”, and it may work at modest scale, but at high scale the economics will probably not work. This usage model fits better with our national data center model, where users requiring dedicated supercomputer resources request them and submit jobs to a sophisticated scheduler; those national resources serve both academics and industry very well.

    As to a “cloud strategy”, Cray is focusing today on a Fast-Data, Big-Data strategy to complement our core supercomputing expertise. We are seeing this fusion of Big Data and supercomputing occurring at an ever-increasing pace. The virtuous cycle of data creation (or aggregation) and vast computing needs is where we are putting our energy today… but never say never – cloud may evolve in a way that fits Cray’s core value of building tools that help solve the world’s most challenging problems.
