The StartX Files: Word to the Wise: gwp
The Paradox of Being Cool
I will be honest with you: when it comes right down to it, I hate compiling applications in Linux.
Yes, yes, I know that the ability to compile our own applications on our own operating system is one of the Greatest Things Ever. And that I would be an idiot to decry this unique ability found in Linux and its UNIX cousins. Compiling an application is, pound for pound, the best way of installing an application on your PC because of the wonderful stability gained from an application customized to your environment.
But why the heck does it have to be such a pain in the ass to accomplish?
It is ironic to me that what we as Linux advocates see as one of Linux's greatest strengths, others see as one of the operating system's greatest weaknesses. For them, compilation is only one step removed from programming the application from scratch, which is anathema to many users, including programmers. "It's too much work," they whine. "We just want to use setup.exe."
On our side of the trenches, we ridicule the wimps who need a fancy setup application to get something running on a computer. "Who wants something running on your PC that dumps huge libraries on your system without a care in the world?" we mock. "Even if I had a Registry, I sure as heck wouldn't want some punk-ass setup application modifying it!"
And so we all stare across the chasm at each other, secretly coveting what the other has.
Packages, of course, are a good compromise. DEB and RPM packages are uniform in the manner they are used. You follow the same steps for every package, with few surprises other than dependency issues or the odd bad package or two. But not every developer makes a package for their application, because packages have problems of their own, particularly in the creation process. And I know several developers who don't want to use packages because they object to the concept of tying their application to one particular set of Linux distributions.
I certainly can respect that and I do applaud the fact that compiling an application solves many cross-distribution issues. I applaud much about compilation--in theory. It's the practice that gets me burned.
Many a time I have pulled a tarball down from the Net, created the Makefile, run make, and run make install with no problems whatsoever. But there have been too many instances when missing files or relocated directories have tripped up some part of this process and left me with an unrecoverable mess. I consider myself pretty tech-savvy, but I can barely read the output from a bad make session, let alone understand it. And what chance does a newbie who just wants to try out a new application have?
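For readers who have never done the dance, here is a minimal sketch of the unpack-build-install ritual described above. The project name (foo-1.0) and its one-rule Makefile are fabricated here so the steps actually run anywhere without root; a real tarball comes off the Net and usually wants a ./configure step before make.

```shell
#!/bin/sh
# Sketch of the classic source-install workflow. "foo-1.0" is a
# made-up stand-in for a real downloaded tarball.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Fabricate a tiny source tarball so the demo is self-contained.
mkdir foo-1.0
printf 'PREFIX ?= /usr/local\nall:\n\techo built > foo\ninstall: all\n\tmkdir -p $(PREFIX)/bin\n\tcp foo $(PREFIX)/bin/foo\n' > foo-1.0/Makefile
tar czf foo-1.0.tar.gz foo-1.0
rm -rf foo-1.0

# The canonical three steps:
tar xzf foo-1.0.tar.gz                  # 1. unpack the tarball
cd foo-1.0
make                                    # 2. build
make install PREFIX="$workdir/prefix"   # 3. install (non-root prefix here)

ls "$workdir/prefix/bin/foo"
```

Any one of these steps can fall over: a missing header kills the build, a relocated directory kills the install, and the error spew from a failed make is rarely kind to the uninitiated.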
I am not sure what the solution here is. A standard installation/compilation procedure would be nice, but I am unsure how that would be implemented without defeating the whole point of compiling something in the first place. I am torn between wanting something easier to manage and something robust and stable. I realize that by being a Linux user, I am already predisposed to the latter position.
But I don't have to like it.