Credit Valuation Adjustment: Solve Computational Challenges and Reduce Grid TCO


Financial institutions have long used credit valuation adjustment (CVA) to monitor and manage counterparty credit risk, meet regulatory and reporting requirements, and even price and hedge CVA itself. In the post-2008 credit crisis world, however, this evaluation activity has steadily grown in demand and complexity, creating tough computational challenges for firms. Where large banks once computed CVA monthly, they now perform these calculations at daily to real-time (sub-second) intervals. The active, continuous management that counterparty credit risk (CCR) demands raises two computational problems in particular: latency sensitivity and vast scale. Either one alone would challenge traditional scale-out grid infrastructures. …
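The computational weight behind that trend comes from the structure of the CVA calculation itself: a common unilateral form is CVA ≈ (1 − R) × Σᵢ DF(tᵢ) × EE(tᵢ) × PD(tᵢ₋₁, tᵢ), where each expected-exposure term EE(tᵢ) requires revaluing the whole netting set across many Monte Carlo paths. Below is a minimal, hypothetical sketch of that structure for a single toy exposure; the driftless Brownian mark-to-market model, the flat hazard and discount curves, and every parameter value are illustrative assumptions, not Cray's or any bank's production method.

```python
import numpy as np

# Hypothetical toy CVA sketch: one counterparty, one netted exposure,
# flat credit and discount curves. All numbers are made up.
rng = np.random.default_rng(42)

n_paths, n_steps, horizon = 100_000, 40, 10.0   # 10y horizon, quarterly grid
dt = horizon / n_steps
times = np.linspace(dt, horizon, n_steps)

recovery = 0.40        # assumed recovery rate R
hazard = 0.02          # flat hazard rate, 2%/yr
r = 0.03               # flat risk-free rate for discounting

# Toy mark-to-market: a driftless Brownian MtM standing in for full
# revaluation of the netting set on each path and time step.
vol = 1_000_000.0      # MtM volatility in $/sqrt(yr)
mtm = np.cumsum(rng.standard_normal((n_paths, n_steps)) * vol * np.sqrt(dt),
                axis=1)

ee = np.maximum(mtm, 0.0).mean(axis=0)          # expected positive exposure
df = np.exp(-r * times)                         # discount factors

surv = np.exp(-hazard * times)                  # survival probabilities
pd_marginal = np.concatenate(([1.0], surv[:-1])) - surv  # P(default in bucket)

cva = (1.0 - recovery) * np.sum(df * ee * pd_marginal)
print(f"CVA estimate: ${cva:,.0f}")
```

Multiply this by thousands of counterparties, full trade-level revaluation in place of the one-line MtM model, and daily-to-intraday refresh, and the latency and scale pressures described above follow directly.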

[Infographic] Risk Management & HPC


In the post-2008 credit crisis world, risk evaluation by financial institutions has steadily increased in demand and complexity. Where large banks once performed risk analysis on a monthly basis, they’re now performing these calculations at least daily, if not in near-real-time. Cray makes this possible with the embedded Aries network, which eliminates the performance and latency issues of more traditional “fat tree” approaches, as well as the DataWarp™ applications I/O accelerator, which effectively removes data access bottlenecks. See the infographic below for a look at Cray’s approach compared to legacy approaches – and to understand the difference in value. …

5 Predictions of Where Supercomputing is Heading in 2016


From new processor technologies to quantum computing, 2016 promises to be another exciting year for supercomputing. Here are five predictions as to how the industry will push ahead in 2016:

1. The year of progress in new processor technologies

There are a number of exciting technologies we should see in 2016, and a leader will be Intel’s next-generation Xeon Phi processor – a hybrid between an accelerator and a general-purpose processor. This new class of processors will have a large impact on the industry with its innovative design that combines a many-core architecture with general-purpose productivity. Cray, for example, will be delivering Intel Xeon Phi processors with some of our largest systems, including those going to Los …

Top 5 Blog Posts From 2015


Happy New Year! Check out some of the top blog posts from 2015:

Why Do Better Forecasts Matter?

Meteorologists have been striving to increase the accuracy of their work since the first weather forecasts were issued in the mid-19th century by pioneers such as Robert FitzRoy. In the 1950s and 1960s, the development of computers and observational technologies enabled forecast generation using the numerical weather prediction theories of Vilhelm Bjerknes and Lewis Fry Richardson. Today the numerical models that support our weather and climate forecasts are true “grand challenge” problems, requiring on the order of 10,000 trillion (10^16) mathematical operations and the exchange of 150 trillion (1.5 × 10^14) bytes of information to generate a …
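As a quick check on those magnitudes (the arithmetic is ours, not from the original post):

10,000 trillion = 10^4 × 10^12 = 10^16 operations
150 trillion bytes = 1.5 × 10^14 bytes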

Can We Make Petabytes Useful?


A recent article in Nature, “The Power of Petabytes,” by Michael Eisenstein reviews how exponentially increasing life science data exceeds our present ability to process and make sense of it. Yet even as they continue to grow unbounded, data sets are often still not large enough to draw convincing conclusions. Computation is one obvious problem. “The computation scales linearly with respect to the number of people,” says Marylyn Ritchie, a genomics researcher at Pennsylvania State University in State College. “But as you add more variables, it becomes exponential as you start to look at different combinations.” To efficiently harness growing computing resources, researchers will need to leverage scalable algorithmic approaches and …
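Ritchie’s combinatorial point is easy to make concrete: among p candidate variables there are C(p, k) possible k-way interactions, and 2^p variable subsets overall, so the search space explodes long before the data does. A tiny illustrative sketch (the variable counts are made up):

```python
from math import comb

# Count pairwise and three-way variable interactions for p candidates.
for p in (10, 100, 1_000):
    print(f"p={p:>5}: pairs={comb(p, 2):>9,}  triples={comb(p, 3):>13,}")
```

For p = 1,000 variables that is already 499,500 pairs and roughly 166 million triples, which is why scalable algorithms matter at least as much as raw core counts.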