Unlocking the Full Potential of Health Data

The health care industry is in the midst of a difficult transition as it works to use data more effectively. Gathering, sharing and analyzing patient information is a demanding process, and it is driving a technological revolution across the sector. One of the biggest obstacles is the patient data itself: it comes in a variety of formats, both structured and unstructured, and often in volumes too large for many data centers to handle. This is where supercomputing comes into play, and Oak Ridge National Laboratory (ORNL) is leading a project that is taking key steps to drive that innovation. According to a recent … [Read more...]

RTM: Essential Imaging for the Oil and Gas Sector

Reverse time migration (RTM) modeling is a critical component of the seismic processing workflow in oil and gas exploration. RTM produces more accurate images in areas of complex structure and velocity by propagating seismic data with the two-way acoustic wave equation rather than a one-way approximation. That accuracy comes at a cost: each seismic model creates and analyzes far more data, which can be daunting when traditional simulations are already complex and resource-intensive. RTM comes in many flavors and is rapidly evolving, and its growth is driving up geophysical processing requirements: vertical transverse isotropy, tilted transversely isotropic media, least-squares/residual-shot RTM (aka … [Read more...]
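
To make the one-way/two-way distinction concrete, here is a minimal sketch in Python with NumPy of the two-way acoustic wave equation that RTM propagates, d2p/dt2 = v^2 * laplacian(p). This is not anyone's production RTM code: the grid sizes, velocity model and source parameters are illustrative assumptions, and a real RTM would also back-propagate the recorded receiver data and cross-correlate the two wavefields to form the image. The sketch shows only the forward two-way propagation step.

# Two-way acoustic forward modeling sketch (illustrative parameters only).
# Unlike a one-way extrapolator, this stencil propagates energy in all
# directions, which is what lets RTM image steep and overturned structures.
import numpy as np

nx, nz = 201, 201          # grid points (assumed)
dx = 10.0                  # grid spacing (m), assumed
dt = 0.001                 # time step (s), chosen to satisfy CFL for v_max
nt = 500                   # number of time steps

v = np.full((nz, nx), 2000.0)   # velocity model (m/s); toy two-layer model
v[nz // 2:, :] = 3000.0         # a single flat interface

p_prev = np.zeros((nz, nx))     # pressure at t - dt
p_curr = np.zeros((nz, nx))     # pressure at t

# Ricker wavelet source injected near the surface center
f0 = 15.0                                   # dominant frequency (Hz)
t = np.arange(nt) * dt
arg = (np.pi * f0 * (t - 1.0 / f0)) ** 2
src = (1.0 - 2.0 * arg) * np.exp(-arg)
sz, sx = 1, nx // 2

for it in range(nt):
    # Second-order Laplacian on the interior of the grid
    lap = np.zeros_like(p_curr)
    lap[1:-1, 1:-1] = (
        p_curr[2:, 1:-1] + p_curr[:-2, 1:-1] +
        p_curr[1:-1, 2:] + p_curr[1:-1, :-2] -
        4.0 * p_curr[1:-1, 1:-1]
    ) / dx ** 2

    # Two-way time update: wavefronts move both up-going and down-going
    p_next = 2.0 * p_curr - p_prev + (v * dt) ** 2 * lap
    p_next[sz, sx] += src[it] * dt ** 2     # inject the source term
    p_prev, p_curr = p_curr, p_next

print("peak |p| after forward modeling:", np.abs(p_curr).max())

Even this toy version hints at the data problem the post describes: every time step yields a full wavefield snapshot that the imaging condition must later correlate against, which is where the storage and compute demands multiply.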

Data is More Valuable than the Hardware it Runs On

It’s been a while since my last post earlier this year, but we’ve been busy working on some exciting new technologies that we can finally talk about in public. As many of you know, our original product, the Urika graph analytics appliance (now renamed Urika-GD®), introduced a unique technology to the enterprise analytics market. Not only did Urika-GD start solving some fairly intractable problems where commodity machines faltered, it also helped us better understand the use cases that were beginning to emerge: how our clients and users actually deployed big data analytics solutions, the workflow from start to finish, as well as the other systems that were needed … [Read more...]

Why Chapel? (Part 3)

This article wraps up the series begun in my previous two blog articles (here and here), whose goal is to explain why we are pursuing the development of the Chapel parallel programming language at Cray Inc. In those articles, I made the case for pursuing productive new languages like Chapel. In this article, I’ll cover the reasons for developing a new language rather than extending an existing one. Why create a new language? Why not extend C, C++, Fortran, … ? People who ask this question typically have one or more of the following perceived benefits in mind. By extending a language, you don’t have to reinvent the wheel and can build on what’s come before. While this is true, conventional languages also tend to carry baggage that … [Read more...]

Back to Warp Speed, Scotty!

Aren’t supercomputers already supposed to be moving at “warp” speed? Well, multi-petaflops anyway? With computing challenges getting more demanding and datasets getting more massive, it isn’t as simple as Captain James Kirk asking Scotty for more power. Supercomputers and their associated storage systems grow larger every year, but a performance gap is emerging between the compute nodes and the disk systems that host the parallel file systems. The overall performance of an application on a supercomputing system is limited by how efficiently data moves between that tier of spinning disks and the memory where the code actually executes. Cray has been collaborating with some of the most … [Read more...]
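
The pattern behind that kind of tiering is simple to sketch. The Python below is not the DataWarp API, just a generic, hypothetical stand-in for the staging idea: copy the working set from the slow shared parallel file system to a fast tier near the compute in one bulk transfer, do all fine-grained I/O there, and stage results back out. The paths and the process_chunk() helper are assumptions made up for illustration.

# Burst-buffer-style staging sketch; paths and kernel are hypothetical.
import shutil
import time
from pathlib import Path

PFS_IN = Path("/pfs/project/input.dat")    # slow shared tier (assumed path)
PFS_OUT = Path("/pfs/project/output.dat")
FAST = Path("/local/ssd")                  # fast near-compute tier (assumed)

def process_chunk(chunk: bytes) -> bytes:
    """Hypothetical stand-in for the application's real compute kernel."""
    return bytes(b ^ 0xFF for b in chunk)

def run_with_staging() -> None:
    FAST.mkdir(parents=True, exist_ok=True)
    staged_in = FAST / PFS_IN.name
    staged_out = FAST / PFS_OUT.name

    t0 = time.perf_counter()
    shutil.copy2(PFS_IN, staged_in)        # stage in: one bulk transfer
    print(f"stage-in took {time.perf_counter() - t0:.2f}s")

    # All fine-grained I/O during compute hits the fast tier, not the PFS.
    with staged_in.open("rb") as src, staged_out.open("wb") as dst:
        while chunk := src.read(1 << 20):
            dst.write(process_chunk(chunk))

    shutil.copy2(staged_out, PFS_OUT)      # stage out: one bulk transfer

if __name__ == "__main__":
    run_with_staging()

The win in this pattern comes from converting many small, latency-bound operations against spinning disks into two large sequential transfers, which is exactly the compute-to-storage inefficiency the post describes.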