The Dawn of Big-Fast-Data

In computing today we’re surrounded by the term Big Data. In fact it’s even beginning to invade some areas of popular culture. Whether it’s through Microsoft’s SkyDrive, Adobe’s Creative Cloud, Steam’s gaming portal or the video industry with Netflix, Hulu or Amazon, Big Data and its relationship to cloud storage is something that impacts people worldwide.

Big Data is commonly talked about in terms of the “three Vs” — Volume: the ability to process large quantities of data; Velocity: the rate at which the data is growing; and Variety: the forms, shapes and relationships of the data. Today I will be discussing what Cray is doing to enhance the movement of huge quantities of data for processing to meet the growing analysis demands around the “Volume” aspect of Big Data. In many cases Big Data Volume requires storing and retrieving information at reasonable rates. Reasonable rates vary, but on average may mean moving 1-5 MB/sec/file and are governed by the “wait time” the user perceives. How much can be stored is often relatively more important than how fast the data can be accessed.
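To get a rough sense of what those per-file rates mean for perceived wait time, here is a minimal back-of-the-envelope sketch in Python. It assumes the illustrative 1-5 MB/sec/file figures above and a few hypothetical file sizes of my own choosing:

    # Rough wait-time estimates at the per-file rates mentioned above.
    # The rates (1 and 5 MB/sec/file) come from the text; the file sizes are
    # illustrative assumptions, not figures from the original post.

    def wait_minutes(file_size_mb, rate_mb_per_sec):
        """Minutes a user waits to retrieve one file at a given per-file rate."""
        return file_size_mb / rate_mb_per_sec / 60

    for rate in (1, 5):                       # MB/sec per file
        for size_mb in (100, 1_000, 10_000):  # 100 MB, 1 GB, 10 GB files
            print(f"{size_mb:>6} MB at {rate} MB/s -> {wait_minutes(size_mb, rate):6.1f} min")

Even modest files translate into waits of minutes or more at these rates, which is exactly the “wait time” the user perceives.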

Cray and Big Data have a long history, but for Cray, Big Data has often been synonymous with very fast data access. As the world becomes more computationally integrated, the needs for both big and fast data are being driven together at an amazing clip.

Big-Fast-Data

Let me first talk about the recent growth of what I like to call “Big-Fast-Data” (BFD for short and my homage to Roald Dahl and the BFG) and then relate this growth back to what it means for the future of Big Data.

From 2008 to 2011 fast parallel file systems hovered in the 50-200 GB/sec range. Most of these used either Lustre (from CFS, then Sun, then Oracle) or IBM’s General Parallel File System (GPFS). Then, in 2011, Lustre passed into the hands of the open source community in the form of OpenSFS, and a new confidence in the roadmap and resiliency of this community emerged.

Last year was a milestone for BFD with two landmark Lustre installations – one at Lawrence Livermore National Laboratory (LLNL) and one at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. Both installations had the goal of driving BFD to 1000 GB/sec – a whopping five-fold increase over previous installations.

At NCSA, Cray installed its highly integrated Sonexion 1600 Lustre appliance, which uses a completely factory-integrated software and hardware approach. LLNL took a more modular approach, using components from a number of vendors which were then integrated onsite by staff and partners. The two architectures differed in their integration approach and in their underlying software and hardware, but both were built on Lustre as the driving file system for BFD. And both had tremendous scaling hurdles to surmount in order to come even close to reaching the 1000 GB/sec goal.

NCSA Installation by the Numbers

The NCSA installation comprises three filesystems – the largest has almost 15,000 2 TB disk drives. In total, this Lustre installation has more than 17,000 disks, more than 430 servers pre-installed with Linux and Lustre, 36 InfiniBand switches and 479 InfiniBand cables. What does all this get you for your fast-data needs? Since this system uses RAID for data protection, about 13,200 disks’ worth of usable space are available to store a total of 26.5 petabytes of data.
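A quick sanity check of that capacity figure, sketched in Python under the simplifying assumption that every usable disk is a 2 TB drive (the RAID overhead is already reflected in the usable-disk count quoted above):

    # Sanity check of the usable-capacity figure quoted above.
    # usable_disks and drive_size_tb come from the text; applying the 2 TB drive
    # size uniformly across all usable disks is a simplifying assumption.

    usable_disks = 13_200
    drive_size_tb = 2

    usable_pb = usable_disks * drive_size_tb / 1000   # 1 PB = 1000 TB (decimal units)
    print(f"Usable capacity: ~{usable_pb:.1f} PB")    # ~26.4 PB, in line with the 26.5 PB quoted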

This installation had multiple challenges to face. Not only did a staggering number of components have to work together, but the Sonexion 1600 – a fully integrated solution with software and hardware – was a new product in the marketplace. It was also unique in promising both an appliance-like design and Lustre-level scalability and resiliency to failure – uncharted territory for storage. In addition, the software needed to be stable enough to keep this filesystem up and running in production mode to meet the demands of hundreds of NCSA users.

Success was achieved on Oct. 7, 2012 when Cray and NCSA measured 1.037 TB/sec of performance to a single Lustre filesystem and 1.137 TB/sec simultaneously to all three Lustre filesystems. To put this achievement into mass media perspective, 1 TB/sec is equivalent to downloading about 125 copies of Dead Space 3 per second, or about 250,000 songs through Spotify per second. This is a milestone for parallel filesystems, a milestone for Lustre, a milestone for Cray and a milestone for our customer at NCSA. Best of all, this technology is being put to good use by a cadre of National Science Foundation users in solving some of the most challenging scientific problems facing our world today.
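Working backward from those comparisons gives a feel for the item sizes involved; the sketch below assumes roughly an 8 GB game install and a 4 MB song file, which are illustrative guesses on my part rather than figures from the measurements themselves:

    # Reverse-engineering the 1 TB/sec comparisons in the text.
    # The per-item sizes are rough illustrative assumptions.

    throughput_mb_per_sec = 1_000_000   # 1 TB/sec expressed in decimal MB

    game_size_mb = 8_000                # assume ~8 GB per game copy
    song_size_mb = 4                    # assume ~4 MB per song

    print(f"Game copies per second: {throughput_mb_per_sec / game_size_mb:.0f}")   # ~125
    print(f"Songs per second:       {throughput_mb_per_sec / song_size_mb:,.0f}")  # ~250,000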

This five-fold increase in performance over previous parallel file system installations is only the beginning. In fact, one could argue that these installations at NCSA and LLNL are puny in comparison to what will be required to drive the BFD needs of this decade.

In my next blog, I will discuss the increasing need for Big-Fast-Data in the coming years by looking at some of the scientific endeavors like the Square Kilometer Array project that are driving this need and by looking at some of the innovative technologies that will be needed to push the BFD frontiers.

Barry Bolding, Vice President of Storage & Data Management and Corporate Marketing
