Imagine your morning commute if your driveway fed into a superhighway with an uncapped speed limit that went directly to your office parking lot. For you city dwellers, imagine an express train that stops only at your apartment and your office, and always runs on your schedule. Also, there is no chewing gum stuck to the floor. Now, imagine that you and thousands of your co-workers all had that same commute. A dream, you say? Something you ponder while stuck in traffic, or waiting on the train platform scraping the gum off your shoe? Unfortunately, yes. (Sorry!) But Intel has applied those concepts to data moving through the data center with its Intel® Omni-Path Architecture (Intel® OPA), and it's big news for HPC customers. Lenovo has already begun shipping OPA-based networking cards and switches to customers worldwide.
Lenovo has been an early and enthusiastic supporter of Intel's Scalable System Framework (SSF), of which OPA is a critical component. We actually demonstrated OPA up and running in our Stuttgart, Germany Enterprise Innovation Center on the day Intel announced it last year. And now we are pleased to showcase our first deployment using this framework – the "MARCONI" supercomputer for CINECA, an inter-university computing consortium based in Casalecchio di Reno, Italy. MARCONI will be listed on the June 2016 TOP500 list, and we believe it will be the largest OPA installation in the world!
As we move into new realms of computing like machine learning, customers, beginning with HPC customers, will require an advanced fabric like OPA to handle the massive amounts of data. OPA supports line speeds of up to 100 Gb/s and up to 25 GB/s of bi-directional bandwidth per port. But OPA isn't just a faster fabric. It is a re-architecting of how data is moved throughout a system to minimize the bottlenecks associated with I/O and increase overall system performance. While most customers won't need it today, the fabric can scale to over 10,000 nodes, paving the way toward exascale computing in the future.
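If you're wondering how a 100 Gb/s line rate relates to 25 GB/s of bi-directional bandwidth per port, the back-of-envelope arithmetic is simple. Here is a minimal sketch of that conversion; it assumes 8 bits per byte and a full-duplex link, and ignores encoding and protocol overhead, so real-world throughput will be somewhat lower:

```python
# Back-of-envelope conversion from a 100 Gb/s line rate to
# per-port bi-directional bandwidth. Assumes 8 bits per byte and a
# full-duplex link; encoding/protocol overhead is ignored.
LINE_RATE_GBPS = 100  # gigabits per second, one direction

per_direction_GBps = LINE_RATE_GBPS / 8      # 12.5 GB/s each way
bidirectional_GBps = per_direction_GBps * 2  # 25.0 GB/s per port

print(per_direction_GBps, bidirectional_GBps)  # 12.5 25.0
```

In other words, the headline 25 GB/s figure is simply both directions of a full-duplex 100 Gb/s link added together.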
Tomorrow, OPA will deliver additional game-changing technologies that will revamp data flow within a cluster. This includes integrating the fabric controller directly with the CPU. Future Intel® Xeon® processors and Intel® Xeon Phi™ accelerators will include this integration, and it will change the game once again on solution design, cost, and capabilities.
So while commute time improvement for you may not be imminent, your data’s commute throughout your cluster is about to get its own express train, without the gum.