Supercharge Your LAN With Condor, part 2 - page 2
Submitting a Basic Condor Job
There are a lot more options you can give to Condor when submitting a job. One of the most important is the choice of universe the job runs in. To run a job in the standard universe, you relink your program with condor_compile:
condor_compile cc main.o -o test
This enables checkpointing (snapshotting a job's current state, so it can be paused or migrated if need be) and remote system calls, which let a job behave as though it were running on its home machine even when it isn't.
However, not all programs can be relinked -- you need access to the unlinked object files. Shell scripts are also unsuitable. The main alternative is the vanilla universe, which doesn't require linking -- but which also cannot provide checkpointing or remote system calls. If a vanilla job is interrupted, it can either be suspended on that machine or restarted from the beginning on another machine -- it can't be restarted elsewhere half-way through, as in the standard universe. If you're able to relink your programs and run in the standard universe, that is far and away the best option.
Other universes include the Parallel Virtual Machine universe, the Java universe, and the Local universe (which executes jobs on the local machine immediately).
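To put these options together, a complete vanilla-universe submit file might look like the following sketch (the executable and file names here are illustrative, not taken from the article):

```
Universe   = vanilla
Executable = test
Input      = test.in
Output     = test.out
Error      = test.err
Log        = test.log
Queue
```

You would then hand this file to condor_submit as in the earlier examples.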
Running multiple copies
You can queue multiple copies of the same program by adding lines like these to your submit file:

Initialdir = dir1
Queue
Initialdir = dir2
Queue
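Note that Initialdir sets each run's working directory, so those directories must already exist and contain any input files the job expects. A minimal shell sketch of the setup (the directory names match the example above; the input filename in.txt is an assumption):

```shell
# Create the two per-run directories referenced by Initialdir.
mkdir -p dir1 dir2

# Give each run its own input file (in.txt is a hypothetical name --
# use whatever your submit file's Input line actually specifies).
echo "first data set"  > dir1/in.txt
echo "second data set" > dir2/in.txt
```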
This will put the outputs into two different directories, and you can repeat the syntax as often as you like. However, for many copies of a job, there's a more efficient syntax available. This:
Error = err.$(Process)
Input = in.$(Process)
Output = out.$(Process)
Log = foo.log
Queue 150
will queue up 150 runs of the executable. Each run is given a process number (starting at 0), which is appended to the error, input, and output file names as shown, saving you from typing the details 150 times over.
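Each queued run will look for its own input file, so in.0 through in.149 must exist when you submit. A quick shell sketch of generating them (the file contents are placeholders):

```shell
# Create in.0 through in.149, one input file per queued process.
for i in $(seq 0 149); do
    echo "input data for run $i" > "in.$i"
done
```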