The Ultimate Install Fest: Linux on the IBM System/390
Recycling the Mainframe with Linux
After my first article, many of the questions I received via e-mail concerned the cost effectiveness of deploying a single large S/390 mainframe versus a few racks of Intel or RISC equipment. Although the S/390's I/O and memory bandwidth far exceed those of any PC-class hardware, the raw per-processor speed of its CPUs is higher, but not by orders of magnitude. Some readers questioned whether it makes sense to even consider the S/390 for Internet or intranet applications.
Since Alex Stark is IBM's key technical lead for Linux design on the 390, I asked him to respond to these questions. What, I asked, are the classes of application where the 390 makes sense, and where is it overkill or not cost effective?
Stark answered with a quote from Douglas Adams' famous Hitchhiker's Guide to the Galaxy: "The answer is forty-two!"
He went on to explain that the performance issues are heavily dependent on the application--no surprise to anyone who knows about hardware optimization. Stark also includes other items in the "performance" category besides the throughput of the machine itself.
Stark lists four key areas for assessing performance: speed of porting new operating system versions, speed of porting new and existing applications, execution speed of the applications individually, and overall execution speed of the total software environment.
The first two aspects, he says, are as good for Linux on S/390 as they are for any other Linux port. IBM is committed to changing nothing about the base architecture of Linux beyond the hardware-specific items that are needed for any port to a new platform. Stark says that any application that is free from architecture-specific dependencies--such as a sensitivity to big-endian or little-endian byte ordering--should run on Linux for S/390 with not much more than a recompile from source. IBM is working with the Linux kernel team to incorporate S/390 into the official kernel; Alan Cox is a frequent contributor to the Linux-390 e-mail discussion list. According to Stark, if a developer's application compiles and runs on other non-Intel processors it will most likely compile and run on the S/390 as well.
Who Are the Customers?

In terms of application speed, Pete McCaffrey, Program Director for S/390, acknowledges that there are some applications where a 390 may not be a good choice. These include scientific number-crunching or ray tracing--areas where I/O is often less important than CPU speed. For such applications, McCaffrey recommends Intel or RISC platforms, preferring, of course, the IBM Netfinity line.
On the other hand, says McCaffrey, tests run in IBM labs and by customers evaluating Linux for S/390 indicate that other classes of application perform extremely well on the S/390. As an example, McCaffrey cites a very large e-commerce hosting company that is serious about deploying S/390 for a multi-customer "virtual storefront" web farm. He says that the company likes the idea of being able to rapidly redeploy CPU and memory resources from one customer to another, based on a very dynamic loading model, while still retaining complete isolation of each customer's environment from the others. LPARs are allocated a variable percentage of a variable number of physical processors, and these values can be adjusted while the machine is running. This is the equivalent of upgrading--or downgrading--the CPUs in a rack full of Intel machines, allocating them where they are most needed.
McCaffrey says that there are still areas where the performance characteristics are not yet well-defined. IBM is currently engaged in some internal tests to study the horizontal scalability (multiple applications) and vertical scalability (many users on each application) of their middleware on the Linux for S/390 platform.
Pete McCaffrey lists Internet content providers, application service providers, and colocation hosting providers among the best candidates for S/390 Linux. One example would be the case of a Web hosting company that has many small customers who generate a lot of traffic in aggregate but little as individual sites. Each customer's server sees a very "bursty" load profile, which tends to work well on the S/390 hardware. The provider can dedicate a Linux instance to providing common services such as DNS or POP3 e-mail accounts to all customers, and could even offer a single instance of OS/390-based DB2 as a shared database server (with each customer getting its own database for security reasons). Among the middleware soon to be available for S/390 Linux is the "DB2 Connect" client, which provides APIs for connecting to DB2 Universal Database over TCP/IP. If the OS/390 instance hosting DB2 is on the same physical mainframe as the customer's Linux instance, then the database connections can be made over the machine's internal memory bus, on a "virtual LAN". And the various customers can purchase more or less CPU power from the provider, with the changes made in near real time and with little administrative cost to the provider.
I asked Alex Stark about the famous David Boyes experiment, in which over 41,000 instances of Linux ran on a single System/390 machine. Is that really practical in the real world?
As one would expect, Stark's answer was an emphatic "No." He was familiar with Boyes' work (and impressed by it) but said that production installations just wouldn't be done that way. More likely, Stark says, there would be one instance of Linux per application function rather than one per user. Stark does feel, though, that a few hundred instances are realistic even if tens of thousands are not.
Of course, as with everything else, the customer application determines what is and is not realistic. Pete McCaffrey cited universities as one case where thousands of Linux instances might realistically be used, because each student needs to have an isolated, private operating system to tweak--and to break--as they learn. At the same time, most of the students' CPU loading would be small, averaged over time.