Saturday, December 29, 2012

Intel’s Exascale HPC Revolution and Xeon Phi


Last month, Intel brought us out to the Texas Advanced Computing Center (TACC) in Austin to brief us on their latest and greatest foray into high-performance computing (HPC) and exascale-level processing.

There are mountains of problems to be solved and myriad insights to be gained, in fields ranging from the sciences to national security, that are most effectively and efficiently tackled with HPC and highly parallel processing. Parallel processing is what the HPC space is all about: when large amounts of data can be processed and complex problems solved quickly, researchers can move from the concept phase to the results phase sooner.

Intel’s John Hengeveld likened Big Data to drilling in an oil field--it digs through data and extracts records--while HPC is the pump that brings it to the surface. Ideally, Big Data and HPC create streams of information that can be used to accomplish great things. HPC can turn Big Data into insight through data aggregation, data analytics, data visualization, and interactive visualization and simulation, among a multitude of other techniques. And Hengeveld noted that these are all highly parallel problems that require parallel processing to solve in a timely manner.

Intel’s Xeon E5 architecture established the company’s foundation for parallel processing and can realistically deliver performance at the petascale level, but the giant leap forward to exascale computing was not looking like a near-term reality; with the processing paradigm inherent in the Xeon line, the power demands and processing capabilities simply wouldn’t allow it.

To address this problem, Intel spent years developing its Many Integrated Core (MIC) technology. In a nutshell, the MIC architecture uses many smaller, low-power cores instead of a few full-blown, high-powered cores to accomplish processing tasks, thus enabling parallel processing on a much greater scale. The fruit of that labor is the new line of Intel Xeon Phi Coprocessors.

Image credit: HotHardware
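To make the many-core idea concrete, here is a minimal sketch (ours, not Intel's) of the kind of data-parallel loop the MIC approach is built for: plain C with OpenMP, where throughput comes from spreading independent iterations across however many cores the chip provides.

/* Illustrative only: a triad-style loop whose iterations are
 * independent, so OpenMP can spread them across many small cores
 * just as easily as across a few big ones. */
#include <stdio.h>
#include <omp.h>

#define N 10000000

static double a[N], b[N], c[N];

int main(void)
{
    const double scale = 3.0;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    #pragma omp parallel for              /* each thread handles a slice */
    for (long i = 0; i < N; i++)
        a[i] = b[i] + scale * c[i];

    printf("a[0] = %.1f using up to %d threads\n", a[0], omp_get_max_threads());
    return 0;
}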


The Future of HPC For Intel: Xeon Phi
Working in tandem with Xeon E5 processors, Xeon Phi coprocessors are how Intel intends to deliver on the promise of exascale computing. According to Intel, the most efficient path to exascale is Xeon + Xeon Phi.

The Xeon Phi coprocessors are designed to deliver all the advantages of Intel’s architecture--including familiar programming environments, performance-tuning tools, and advanced power management technology--while also offering the performance of an add-in accelerator. To be clear, though, Xeon Phi is not an accelerator; it’s an actual many-core CPU. It can (theoretically) even run an operating system, although it looks more like a cluster of computers on a chip.

In practice, different users will use different combinations of Xeon/Xeon Phi depending on the problems that need to be solved--for example, one customer might run one Xeon and two Xeon Phis, while another would have two Xeons and a lone Xeon Phi (or any other possible combination). The flexibility of offering the Xeon Phi as a PCI Express add-in board gives customers the ability to configure their servers to best suit the needs of their parallel computing workloads.


For Intel, years of heady talk about parallelism and exascale computing have finally come to fruition. Intel is bringing to market a pair of Xeon Phi coprocessor offerings in 2013, the 3100 family and the 5110p.

Image credit: HotHardware

There are currently two Intel Xeon Phi Coprocessors in the 3100 family; the primary difference between them is that one is actively cooled with a built-in fan while the other is passively cooled. Both are PCI Express add-in boards offering more than 1 teraflop of peak double-precision performance, 28.5MB of cache, and 6GB of GDDR5 memory running at 5GT/s for 240GB/s of bandwidth. Both are built on Intel's 22nm process and carry a 300W board TDP. The maximum core counts and clock speeds of the 3100 series Xeon Phi coprocessors are unknown at this time.

Intel believes the 3100 coprocessors are an ideal parallel computing solution for compute-bound workloads in areas such as life sciences, linear algebra, banking, and more.


Like the 3100 coprocessors, the passively cooled 5110p is a PCIe add-in card. It delivers 1,011 gigaflops of peak double-precision performance from a maximum of 60 cores clocked at 1.053GHz. The coprocessor has 30MB of cache and is paired with 8GB of GDDR5 memory running at 5GT/s for 320GB/s of bandwidth. Board power is 225W.
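As a sanity check, those headline figures fall out of simple arithmetic: each Xeon Phi core has a 512-bit vector unit good for 16 double-precision FLOPs per cycle (8 fused multiply-adds), and GDDR5 bandwidth is transfer rate times bus width. Here is a quick back-of-the-envelope check in C; note that the 512-bit and 384-bit bus widths are inferred from the quoted bandwidths rather than taken from a spec sheet.

/* Back-of-the-envelope check of the quoted peak numbers. Assumes 16
 * double-precision FLOPs per core per cycle (512-bit vectors with
 * fused multiply-add); bus widths are inferred from the quoted
 * bandwidths, not taken from a spec sheet. */
#include <stdio.h>

int main(void)
{
    double gflops_5110p = 60 * 1.053 * 16;  /* ~1,011 GFLOPS double precision */
    double bw_5110p = 5.0 * (512 / 8);      /* 5 GT/s x 64 bytes = 320 GB/s   */
    double bw_3100  = 5.0 * (384 / 8);      /* 5 GT/s x 48 bytes = 240 GB/s   */

    printf("5110p peak: %.0f GFLOPS, %.0f GB/s\n", gflops_5110p, bw_5110p);
    printf("3100 family: %.0f GB/s\n", bw_3100);
    return 0;
}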

Intel positions the 5110p as optimized for workloads bound by memory bandwidth and memory capacity, making it ideal for applications such as STREAM and digital content creation.

Because Intel wanted users to be able to stick with common tools and programming languages on the Xeon Phi coprocessors, both the 3100 family and the 5110p can be programmed in C, C++, and Fortran using familiar Intel and third-party tools.
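For a feel of what that looks like in practice, here is a hedged sketch of the compiler-assisted offload style Intel's tools supported for Xeon Phi at the time: an ordinary C loop with OpenMP, plus a pragma asking the Intel compiler to run it on the coprocessor and shuttle the arrays over PCI Express. It assumes the Intel Composer XE toolchain and is illustrative rather than lifted from Intel's documentation.

/* Illustrative offload sketch (assumes Intel's compiler and its
 * "offload" pragma support for Xeon Phi). The loop itself is plain
 * C + OpenMP; the pragma copies x and y to the card, runs the loop
 * across its cores, and copies y back afterward. */
#include <stdio.h>
#include <stdlib.h>

#define N 1000000

int main(void)
{
    float *x = malloc(N * sizeof(float));
    float *y = malloc(N * sizeof(float));
    const float a = 2.5f;

    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    #pragma offload target(mic) in(x : length(N)) inout(y : length(N))
    {
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];       /* a classic saxpy */
    }

    printf("y[0] = %.1f\n", y[0]);        /* expect 4.5 */
    free(x);
    free(y);
    return 0;
}

In a box with more than one card, a device index such as target(mic:0) picks a specific coprocessor, and if no card is present the offload runtime can fall back to running the region on the host--part of the appeal over rewriting code in an accelerator-specific language.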

The 5110p is available to order now and will be shipping by January 28th, while the 3100 family is coming sometime in the first half of 2013. Exact pricing for the 3100 coprocessors is still undisclosed, but they will reportedly come in under $2,000; the 5110p, on the other hand, will run $2,649.

Of note is that those prices match up well with some of the competition, landing at price points one could expect to pay for some last-generation NVIDIA Tesla cards. Speaking of NVIDIA, the company also announced its latest HPC offerings today with the K20 and K20X GPUs, which sport specifications similar to those of the Intel Xeon Phi coprocessors. Obviously, Intel is approaching supercomputing from the CPU side while NVIDIA is tackling things with its GPUs, but both are focusing on many-core architectures designed for highly parallel computing workloads.



Source: HotHardware
