S/390: The Linux Dream Machine - page 4

Linux Everywhere: More than a Slogan

  • February 23, 2000
  • By Scott Courtney

About a month ago, our resident mainframe wizard came to see me and said he needed some help with a Linux problem. I should point out that this didn't surprise me. Several mainframe mavens that I know are very interested in Linux and Java and other new technologies. I've found the mainframe crowd to be much more open to new ideas than a lot of my PC-oriented colleagues who think the world ends at the edge of their LAN. So when Ralph asked me for Linux help, I assumed he had installed it on a spare PC to play around.

Wrong. He had installed it on the company mainframe.

He was quite smug about it, too, but in a good way. I couldn't believe that it was really Linux, so I asked him if it was emulating Linux APIs or if it was actually a native port. And if it was native, I supposed it was a very old kernel because there must have been a monumental effort to port it. He handed me the README printout and I started reading. After a minute or two, I just looked up at him, and I grinned from ear to ear.

This was no emulation, but the Real McCoy! He booted it while I watched, and I was amazed to see all the usual kernel and module initialization messages flash by on a 3270 "green screen" terminal. When the login prompt appeared, we logged in and found ourselves in a full bash shell. I immediately started poking around the filesystem, looking to see just how Linux this "Linux" really was, still not quite believing what I was seeing.

It took only a few minutes to convince me that this was no "lab queen" toy. The kernel level was 2.2.13--not absolutely the latest, but near enough to be interesting. (I understand that 2.2.15 is out now.) All the standard filesystems were there and (after we extracted a post-installation tarball) populated. The bash shell works just as you would expect it to. Instead of a 3270 screen-at-a-time terminal mode, you can telnet directly to Linux and enjoy the keystroke-level responsiveness of any other Linux version.

We rebooted it a couple of times, tweaking startup scripts and adding filesystems. Did I mention that this was all running underneath the VM environment? Ralph told VM what storage devices were to be visible to Linux and at what I/O addresses for their virtual controllers (yes, Linux actually thinks it's driving the hardware!), how many CPUs this virtual machine should have, and how much memory. So we had a two-way SMP box with 128 meg RAM and three or four disk "drives" with a couple of gig each--more than enough to play on.
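On the VM side, a guest like this is defined by an entry in the VM user directory. As a rough sketch only (the user ID, password, volume labels, and device numbers below are all hypothetical, not Ralph's actual entry), a two-CPU, 128 MB guest with a couple of 2 GB minidisks might be described like this:

```
USER LINUX1 SECRET 128M 128M G
   CPU 00 BASE
   CPU 01
   MDISK 0200 3390 0001 2838 VOL001 MR
   MDISK 0201 3390 0001 2838 VOL002 MR
```

The MDISK statements are what make Linux "think" it is driving real DASD at those virtual device addresses; VM quietly maps them onto slices of physical volumes.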

We continued fooling with the Linux system over our next few lunch hours (had to--this was not exactly a core project sanctioned by the company's business plan). We downloaded and compiled source tarballs from the Internet, using the standard ./configure, make, make install sequence. I was amazed at how much Open Source software just plain worked.

Then we moved on to the important stuff: X11. As most experienced Linux users know, X11 is the network protocol that underlies KDE and GNOME and the other Linux user interfaces, as well as CDE and others from the UNIX world. Because X11 is a network-transparent protocol, your display doesn't have to be on the same hardware where the application is running. The terminal, or console, is actually considered a display server in this context because it provides graphics services to an application (the client) which needs to interact with a user. The mainframe itself doesn't have any graphics hardware, so it can't be an X11 server. But I wanted to see if X11 clients could run on the mainframe but direct their display to a network-connected terminal.

So I fired up Hummingbird eXceed on my laptop, which runs Windows NT 4.0. I temporarily turned off its security settings, allowing access from anywhere. Then on the mainframe side, I set the DISPLAY environment variable to point to my PC's network address. Voila! I could run graphical applications like xcalc on the VM Linux machine, but have them interact with my screen, keyboard, and mouse on my PC. It was really amazing to see xeyes running side-by-side with Microsoft Office, knowing that every time the mouse moved it was Linux on an IBM mainframe calculating the movement of the eyeballs!
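On the Linux side, the whole trick boils down to one environment variable. A minimal sketch (the IP address here is a made-up stand-in for the PC running the X server):

```shell
# Point X11 clients at the display server on the PC.
# 192.168.1.50 is a hypothetical address; :0.0 means the first
# display (and first screen) managed by that server.
export DISPLAY=192.168.1.50:0.0
echo "$DISPLAY"

# Any X client launched from this shell now opens its windows
# on the PC's screen, e.g.:
#   xcalc &
#   xeyes &
```

Because the PC's X server (eXceed, in this case) had its access control disabled, no xhost or cookie setup was needed; in any less casual setting you would leave the security on and authorize only specific hosts.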

During all this time, we never once interrupted production on the mainframe's primary systems. You see, all of this Linux activity takes place inside a single virtual machine, one of the five thousand or so that are running on this particular mainframe at any given time of the day. Even as root in Linux, you are still just a single session on VM itself. It was immensely liberating to be able to experiment freely with Linux, knowing that no matter what we did we could always just log in from the CMS environment and rebuild the whole virtual Linux computer from our saved disk images.

Stop and think about this for a second. The VM host operating system creates thousands of virtual machines, most of which run the CMS operating environment for normal users. One virtual machine, however, happens to boot Linux into its virtual hardware instead. That Linux system is still fully multiuser and multitasking, though. So I could have dozens of telnet sessions logged into a single VM Linux virtual system.

This gets better: nobody ever said you could only run one VM Linux system at a time. In fact, you can run multiples of Linux just as you run multiples of CMS. Just imagine one physical computer with several thousand copies of Linux running on it simultaneously, and each of these supporting multiple user connections. Fantasy? I have heard from one system administrator, David Boyes at Dimension Enterprises, who decided to push the envelope on this. His test system finally ran out of resources at 41,400 Linux images. That's not a typo--there were forty-one thousand copies of Linux running on one logical partition of one mainframe, under VM. This isn't a practical number for real work (yet) but it's still impressive as a demonstration of just what VM can do. David joked about wanting to get some standalone time on a bigger box--after all, he didn't have the whole machine to himself for this little test! Remember those forty thousand raptors I mentioned in the introduction?

Adam Thornton of Flathead Software fired up a 390 emulator called "Hercules" (originally designed to emulate 370-series mainframes on Intel hardware) underneath Linux underneath VM. Then he ran another Linux boot underneath that. The hack value of this is just, well, way cool.

Are you impressed yet? I am.