PathScale Intros Highly Scalable Linux Clustering
Reducing Barriers to Linux Clustering
PathScale has unveiled a trio of new products touted as overcoming scalability and other lingering barriers to Linux clusters, so that scientific and technical users can migrate SMP applications from pricey RISC hardware and Unix while still maintaining strong performance.
Until now, PathScale's big claim to fame has been EKOPath, a Linux compiler suite that already has more than 1,000 high performance computing (HPC) customers, including NASA, Boeing, Volvo, Los Alamos National Laboratory, the US Air Force, and the University of Chicago.
EKOPath was honored at last week's SC2004 Supercomputing Conference with a Reader's Choice Award for "Greatest Price Performance in a Software Application" from HPCwire.
But also last week, Sunnyvale, CA-based PathScale introduced EKOPath 2.0, which adds support for Intel's EM64T to the previously supported AMD64, plus two other new products: the PathScale InfiniPath Interconnect and the PathScale OptiPath MPI (Message Passing Interface) Acceleration Tools.
Together, PathScale's three latest products greatly reduce the scalability and latency gap between Unix-based SMP and Linux clusters, while also simplifying development, debugging, and analysis of parallel applications for Linux, contended Len Rosenthal, PathScale's VP of marketing.
"PathScale has solved three impediments to Linux cluster growth," Rosenthal said in an interview with LinuxPlanet. "Clusters have been latency- and bandwidth-bound. It's been hard to produce efficient MPI applications. And in general, 64-bit development tools for Linux have been immature."
Latency is becoming the dominant determinant of application performance, he said. "Customers want to solve larger, more complex applications. Using more CPUs drives up the frequency of communications and decreases the average message size. (But with InfiniPath Interconnect), clusters can now scale higher with greater efficiency, and migration of SMP applications to low-cost clusters will be vastly accelerated."
Rosenthal explained that the new InfiniPath Interconnect delivers MPI latency of less than 1.5 microseconds, for much better price performance than interconnect products from Myricom or Quadrics, for example.
In a study published in April of this year, IDC estimated that Linux clusters will surpass $2.6 billion in revenues in 2005.
According to the same report, however, more than 70 percent of customers have been spending over 40 percent of their cluster budgets on interconnect.
"If you're trying to save money by using PCs (instead of RISC machines), spending a lot on interconnect kind of defeats the purpose," Rosenthal noted.
By contrast, without specialized interconnect technology, Linux clusters typically suffer MPI latency of 4 to 4.5 microseconds, he said.
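The arithmetic behind that gap is straightforward: the time to move a message is roughly fixed latency plus size divided by bandwidth, so as messages shrink, the latency term dominates. A minimal sketch of that reasoning, using the latency and bandwidth figures quoted in the article; the simple linear cost model and the sample message sizes are illustrative assumptions, not PathScale benchmarks:

```python
# Simple linear cost model: time = latency + size / bandwidth.
# Latency and bandwidth figures come from the article; the message
# sizes and the model itself are illustrative assumptions.

def transfer_time_us(size_bytes, latency_us, bandwidth_gb_s):
    """Estimated time in microseconds to move one MPI message."""
    # 1 GB/s = 1e9 bytes/s = 1e3 bytes per microsecond.
    return latency_us + size_bytes / (bandwidth_gb_s * 1e3)

for size in (64, 1_024, 65_536):
    t_fast = transfer_time_us(size, 1.5, 1.8)  # InfiniPath: <1.5 us latency
    t_slow = transfer_time_us(size, 4.5, 1.8)  # typical cluster: 4-4.5 us
    print(f"{size:>6} B: {t_fast:6.2f} us vs {t_slow:6.2f} us "
          f"({t_slow / t_fast:.1f}x)")
```

At 64 bytes, the 3-microsecond latency difference translates into roughly a 3x difference in per-message cost; at 64 KB it nearly vanishes. That is why the article ties latency to shrinking message size as clusters scale to more CPUs.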
PathScale-enabled clusters will be capable of scaling up to thousands of nodes, Rosenthal said.
Supporting both MPI and Internet protocol (IP) traffic, InfiniPath Interconnect attaches directly to the AMD Opteron's HyperTransport port. The interconnect is being implemented in two ways: directly on the PC motherboard, and as an adapter card.
PathScale's adapter card, which plugs into an industry-standard HyperTransport HTX slot, will be known as the PathScale InfiniPath HTX Adapter. The adapter will provide 1.8 GB/s of bidirectional bandwidth. Each card will support up to eight processors.
Meanwhile, PathScale has also signed up a number of OEM partners for InfiniPath, including Linux Networx, Microway, Angstrom, Appro, TeamHPC, GridCore, and Dalco.
InfiniPath supports interconnect switches from multiple vendors, such as Topspin, Voltaire, Mellanox, and InfiniCon.
To achieve its low latency, PathScale uses the standard InfiniBand architecture for switching and fabric management, then adds some of its own proprietary technology, Rosenthal said.
"(InfiniPath) performs like a 'turbo InfiniBand,' or InfiniBand on steroids," LinuxPlanet was told.
PathScale's OptiPath MPI Acceleration Tools, another brand-new product, are designed to make it easier for scientific users to conduct performance analysis on parallel applications.
"MPI is the method used for writing parallel applications. But it's very hard to work with MPI code. You really need a degree in computer science to do so. Most HPC users aren't experts in computer science. They're experts in other scientific fields," he pointed out.
The new MPI tools are meant to automate analysis of performance problems such as latency and bandwidth constraints; load imbalances; communications-bound conditions; communications efficiency; and non-uniform node behavior.
Bottlenecks are identified, and multiple test case runs are carried out for different cluster sizes, data sets, and grid resolution. Finally, the software supplies recommendations to the user, making projections about potential performance vs. current performance.
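To illustrate the kind of check such tools automate, consider load imbalance: a parallel region runs at the speed of its slowest rank, so one straggler node idles every other CPU at each synchronization point. The sketch below is purely hypothetical and is not OptiPath's actual method; the function name and sample timings are invented for illustration:

```python
# Hypothetical sketch of one analysis an MPI tuning tool might
# automate: flagging load imbalance from per-rank compute times.
# Not OptiPath's actual implementation; purely illustrative.

def load_imbalance(per_rank_times):
    """Return the imbalance ratio: slowest rank's time over the mean.

    1.0 means perfectly balanced; a ratio of 1.3 means the other
    ranks spend roughly 30% of each step waiting at the barrier.
    """
    mean = sum(per_rank_times) / len(per_rank_times)
    return max(per_rank_times) / mean

# Example: 8 ranks, with rank 5 on a straggler node.
times = [10.1, 10.0, 9.9, 10.2, 10.0, 13.5, 10.1, 9.8]
print(f"imbalance ratio: {load_imbalance(times):.2f}")
```

A tool in this category would gather such per-rank timings automatically across the test runs described above, then point the user at the offending ranks rather than leaving them to instrument the MPI code by hand.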
OptiPath supports Gigabit Ethernet and InfiniBand, as well as InfiniPath. The tools run on AMD64, EM64T, and 32-bit x86 architectures.
Meanwhile, by adding support for Intel EM64T, PathScale's EKOPath Compiler Suite lets developers of Linux-based parallel applications use a single compiler for AMD and Intel 32- and 64-bit applications.
Other capabilities new in EKOPath 2.0 include support for OpenMP 2.0, inclusion of the AMD Core Math Library, and PathDB, a new "backwards stepping" serial debugger. The new PathDB supports C, C++, and Fortran, with gdb compatibility, Rosenthal stated.