

16 Dec 2016

2016: The Year of High-Performance Computing

High-performance computing is becoming increasingly important in a world where problems are growing more complex and data sets are growing exponentially larger. As 2016 draws to a close, we look back on some of the breakthroughs, challenges, and future prospects in the world of high-performance computing.

For many, 2016 feels like the year that truly efficient processing, and perhaps even quantum computing, came within our grasp. Here's a look at some of the signs that practical high-performance computing is quickly becoming a reality.

The World’s Most Efficient Supercomputers

With ever-increasing computational power and performance, energy efficiency is becoming a critical factor in high-performance computing. The trend in the 90s was "performance at any cost", leading to systems that consumed massive amounts of electricity and produced large quantities of heat.
With more energy-efficient supercomputers, data centers will consume less power and produce less heat. Power consumption can be, and has been, a limiting factor for businesses and research institutions because the cost of powering these systems can become excessive.

NVIDIA just announced its DGX SATURNV supercomputer, which ranks 28th on the TOP500 list of the world's fastest supercomputers. It also takes the title of the world's most efficient supercomputer at 9.46 gigaflops/watt, roughly 40% better than the previous power-efficiency record. The DGX SATURNV's efficiency can be credited to its DGX-1 server nodes, which use Tesla P100 data-center GPUs.

The NVIDIA DGX SATURNV Supercomputer. Image courtesy of NVIDIA.

Piz Daint took second place on the TOP500 energy-efficiency list at 7.45 gigaflops/watt. It is worth noting that Piz Daint uses the same Tesla P100 processor technology as the DGX SATURNV; a quick sketch of how these gigaflops-per-watt figures are computed follows below.
More affordable and accessible high-performance computing has benefits across many sectors. Supercomputers like the NVIDIA DGX SATURNV can be used in a variety of applications such as cancer research, artificial intelligence, and deep learning.
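As a quick sanity check on the figures above, here is a minimal Python sketch of how a gigaflops-per-watt rating is derived from a system's sustained performance and average power draw, and how the two published numbers compare. The Rmax and power values in the example are illustrative placeholders, not official measurements for either machine.

```python
# Minimal sketch: deriving a Green500-style efficiency rating.
# The example Rmax and power values are illustrative placeholders,
# not the official measurements for any real system.

def gflops_per_watt(rmax_teraflops: float, power_kilowatts: float) -> float:
    """Sustained performance divided by average power draw.

    1 TFLOPS per kW equals 1 GFLOPS per W, so the units cancel neatly.
    """
    return rmax_teraflops / power_kilowatts

# Hypothetical system: ~170 TFLOPS sustained at ~18 kW gives ~9.4 GFLOPS/W.
print(f"{gflops_per_watt(170.0, 18.0):.2f} GFLOPS/W")

# Relative advantage between the two published figures quoted above.
saturnv, piz_daint = 9.46, 7.45
print(f"SATURNV delivers {(saturnv / piz_daint - 1) * 100:.0f}% more work per watt than Piz Daint")
```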

But high-performance computing is not only important because of its potential applications. As chip density begins to plateau and the applicability of Moore’s law begins to wane, supercomputers will be one of the ways we continue to expand our computational capabilities.
The importance of supercomputing is not going unnoticed. Last week, SC16 (the International Conference for High-Performance Computing, Networking, Storage, and Analysis) drew record-breaking attendance in Salt Lake City, Utah. More than 11,000 delegates from industry, academia, and the general public attended to discuss and share the latest developments in high-performance computing, and a reported 349 companies and institutions took part.

Image courtesy of SC16.


The Advent of Quantum Computing

Microsoft, too, is throwing its hat into the high-performance computing ring. The company has spent the last decade researching quantum computing at its Station Q research center, and it has reportedly hired top industry and academic names to begin developing hardware and software.
Its goal is to create a universal quantum computer that can tackle a wide variety of applications and tasks. Quantum computing promises to expand computational capability far beyond what today's best supercomputers can achieve.
The challenges ahead of Microsoft include designing quantum circuitry and resolving issues of fault tolerance and error correction. It is not yet known when we might see one of the first fully fledged quantum computers from the company.
With increasing interest from some of the biggest names in the computer industry and a shift toward the mainstream market, the high-performance computing market is expected to be worth over $31 billion by 2019.
