YarcData Urika Offers Unique Storage Functionality


Storage is an integral component of any high-performance computing strategy, especially as more organizations begin to explore big data analytics. Companies trying to figure out the best way to handle big data may want to take a close look at the YarcData Urika® solution, a storage model that establishes a non-partitioned storage unit with 512 terabytes of shared memory to maximize analytics capabilities. A recent blog post from YarcData’s Alyssa Jarrett explores this functionality and explains its benefits through a few key comparisons. She also points out that 512 terabytes of unpartitioned storage equates to seven times more books than the entire catalog of the U.S. Library of Congress, 48 seconds’ worth of the entire internet or … [Read more...]

Cray’s Past Instrumental in Guiding its Future


This past weekend, we used the occasion of Mr. Seymour Cray’s birthday to pay tribute to the visionary who started it all and is affectionately known as the “Father of Supercomputing.” With a childhood interest in electronics and electrical devices, he began an early quest to create some of the world’s most powerful supercomputers, and there is no doubt he has shaped and influenced today’s HPC industry. As a company, Cray was among the early information technology giants to emerge in the United States and has long striven to provide leadership in the sector. History often shapes the present and guides the future, and this is evident in Cray's direction. Our longstanding place in the HPC sector has given us the experience and expertise … [Read more...]

Big or Big Fast Data?


You say HPC & Hadoop®, I say Big Fast Data. While the “Big” part of the Big Data nomenclature is the characteristic that garnered so many headlines, organizations using Big Data technologies like Hadoop in meaningful ways are coming to discover that size alone isn’t enough. Though I’ll use the term Hadoop in this discussion, I’m really referring to anything in the MapReduce ecosystem, as well as a host of alternative “Big Data” technologies. There are a few reasons why organizations need to think about being fast before they get big. Reason 1: Getting big is the easy part. Go big, or stay relational … or even flat, for that matter. Technologies like Hadoop aren’t interesting for their ease of use, as they’re arguably prohibitively … [Read more...]
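For readers unfamiliar with the MapReduce ecosystem the excerpt refers to, here is a minimal, illustrative word-count sketch of the map and reduce phases in plain Python. This is a toy single-process analogy, not Hadoop itself; a real Hadoop job would shuffle and distribute these phases across a cluster, and the document names and data below are hypothetical.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reduce: sum the emitted counts for each distinct word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Hypothetical input data for illustration only.
docs = ["big data", "big fast data"]
print(reduce_phase(map_phase(docs)))  # {'big': 2, 'data': 2, 'fast': 1}
```

The “fast” argument in the post is precisely about what happens between these two phases at scale: moving and shuffling the intermediate pairs is where size alone stops being the whole story.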

Cray, this is not your father’s supercomputing company


Cray, the stalwart of supercomputing, is evolving and changing the face of Big Data with it. For us ardent HPC followers, we’ve known Cray for precisely what it is: the Ferrari of computers, high-performance machines that have been instrumental in pushing the boundaries of what’s computationally possible with bits and bytes. Cray, the company founded by Seymour Cray over three decades ago, has had an interesting evolution but has always excelled at one thing: helping researchers solve some of the world’s most challenging problems. Cray is no different today; however, the computing needs of society have drastically changed. What’s also interesting is how our need for computing has evolved, from simulating massive galaxies to molecular … [Read more...]

Supercomputer, Cluster, Cellphone – What’s the difference?

Clusters vs. Supercomputers

Let’s admit it: supercomputing is an interesting industry to work in, and this line of employment often generates some peculiar questions and thoughts from people trying to gain a better understanding of what it is that we actually design, develop, build and sell. I often hear the saying: “My cellphone is a supercomputer because it’s just as fast as, or faster than, a supercomputer was 30 years ago.” Believe it or not, you still cannot predict a tornado on a cellphone; it just won’t work. You need a supercomputer for that type of scientific prognostication. While today’s smartphones are, well, smart, they’re not that smart. The saying does hold some deep truth for us about commoditization and technological evolution, but it misses the … [Read more...]