Memory may be cheap but we go on gobbling up virtual memory faster than we buy physical memory. Here are some hints for running big programs faster without making enemies.
CHOOSING a machine
When choosing a workstation on which to run a job (large or small), please consider the following issues:
(1) JOB PRIORITY - please run background jobs at a nice level of at least 10 and preferably 19 so that interactive jobs like editor sessions will not be affected by your job. See point 4 in the following section for more details.
(2) JOB SIZE - Obviously, the number of jobs running on a given machine affects the run time of each of the jobs. Small jobs will run equally well on machines with small and large amounts of memory, BUT large jobs will run very poorly on machines with small amounts of memory. Because there are far more small-memory machines than large-memory machines, the person with a large job has fewer choices.
Thus, in choosing a machine, consider the machine's physical memory size and the core image size of your program -- the former can be found in the file /eecg/doc/eecg.machines and the latter can be determined by using top. A guideline for choosing a machine is to use one that has 8 to 10 Megs more physical memory than your program requires. Please DON'T run jobs on the SERVERS mentioned in the /eecg/doc/eecg.machines file. Any job, no matter how small, has the potential of affecting many people if run on a file server.
REMEMBER other programs (like X11) may be running on the machine on which you want to run your job. Thus, you must take into account the memory requirements of those active jobs as well as your own program's physical size. The Unix command top will tell you how much physical memory is available.
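A one-shot alternative to top for checking sizes is ps (a sketch; BSD-style output is assumed, and column names vary between systems):

```shell
# List processes with their sizes.  In BSD-style output, the VSZ (or SZ)
# column is the virtual (core image) size and RSS is the resident
# (physical) size, typically in kilobytes.
ps aux
```

Compare your program's virtual size against the machine's physical memory listed in /eecg/doc/eecg.machines when applying the 8-to-10-Meg guideline above.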
(3) JOB ACTIVITY - when developing a program, please monitor its progress carefully until it has been thoroughly tested. On several occasions, other users' jobs have had to be killed by a super user because the job went wild and the user was not logged on. When running such programs, don't choose a machine on which many other users rely.
HINTS for running big programs *faster*
When the program keeps referencing pages of memory which have been swapped out (to make way for other pages), the machine starts to thrash and performance for everyone on the machine tumbles to near zero.
Run top to see the amount of free memory on the system as well as each process's size and resident size.
1. If possible, copy the binary to /tmp on the local machine, and run it from there. By doing so, the amount of network paging will be reduced.
2. If you have several jobs to run, run them sequentially. Put all the commands in a shell script `myscript' and run the lot at night with `at':
% at 400 myscript
Don't put a & after each! Running more than one large job in the background takes *longer* than running them sequentially because they compete with each other for memory.
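A minimal `myscript' might look like this (the sleep commands are stand-ins for real jobs; `nice -n 19` is the Bourne-shell spelling of the csh `nice +19`):

```shell
#!/bin/sh
# myscript: run the big jobs one after another, each at nice level 19.
# No trailing &'s -- each job must finish before the next one starts,
# so they never compete with each other for memory.
nice -n 19 sleep 1    # stand-in for the first big job
nice -n 19 sleep 1    # stand-in for the second big job
echo "all jobs done"
```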
4. Run your jobs niced (low CPU priority) [note: if you forget to nice the job when starting it up, use the renice command to lower the CPU priority of already running jobs; alternatively nice the job from within top using the r command.]
% nice +19 program&
but this isn't enough. We're much more limited by memory than CPU cycles in our environment. You have to give your long running jobs lower memory priority as well. Type:
% limit mem 0
to the csh or tcsh before you run your job. This won't stop the program from running, or even make it run slower on an otherwise idle machine; but it will give priority to the person who is trying to use an editor and would like to see the keystrokes echoed.
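As a sketch of the renice route mentioned above (applied here to the current shell, $$, since you may always lower the priority of your own processes; in practice you would substitute your job's PID):

```shell
# Lower the CPU priority of an already-running process to nice level 19.
# $$ is the current shell's PID; replace it with your job's PID,
# which top or ps will show you.
renice 19 $$
```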
5. If your job goes out of control, remember the command
% kill -9 pid1 pid2 pid3 ...
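A self-contained demonstration of the idea (sleep stands in for the runaway job; in practice you would get the PID from top or ps):

```shell
# Start a throwaway background job, note its PID, and kill it outright.
sleep 60 &
pid=$!
kill -9 $pid
echo "killed $pid"
```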
EECG graduate admin