Monitoring and Managing Your Job

There are several commands available that allow you to check the status of your job, monitor execution of a running job, and collect performance statistics for your job. You can also delete a job if necessary.

Status of queued jobs

You can monitor the batch queues and check the status of your job using the commands qstat and showq. There is also a command to get an extremely unreliable estimate of the time your job will start. This section also addresses the question of why a job may have a long queue wait and explains a little about how job scheduling works.


qstat

Use the qstat command to check the status of your jobs. You can see whether your job is queued or running, along with information about requested resources. If the job is running, you can also see elapsed time and resources used.

Here are some examples for user usr1234 and job 123456.

By itself, qstat lists all jobs in the system:

qstat

To list all the jobs belonging to a particular user:

qstat -u usr1234

To list the status of a particular job, in standard or alternate (more useful!) format:

qstat 123456
qstat -a 123456

To get all the details about a particular job (full status):

qstat -f 123456


showq

The showq command lists job information from the point of view of the scheduler. Jobs are grouped according to their state: running, idle, or blocked.

To list all jobs in the system:

showq

To list all jobs belonging to a particular user (-u flag may be combined with others):

showq -u usr1234

Idle jobs are those that are eligible to run; they are listed in priority order. Note that the priority order may change over time. Note also that jobs may be run out of order if resources are not immediately available to run the highest priority job (“backfill”). This is done in such a way that it does not delay the start of the highest priority job.

To list details about idle jobs:

showq -i
showq -i -u usr1234

Blocked jobs are those that are not currently eligible to run. There are several reasons a job may be blocked.

  • If a user or group has reached the limit on the number of jobs or cores allowed, the rest of their jobs will be blocked. The jobs will be released as the running jobs complete.
  • If a user sets up dependencies among jobs or conditions that have to be met before a job can run, the jobs will be blocked until the dependencies or conditions are met.
  • You can place a hold on your own job using qhold jobid.
  • In rare cases, an error in the batch system will cause a job to be blocked with state “BatchHold”. If you see one of your jobs in this state, contact OSC Help for assistance.
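The dependency case above can be sketched using the standard PBS/Torque -W depend option. This is a minimal example, and the script names are hypothetical:

```shell
# Submit the first job and capture its job ID
FIRST=$(qsub preprocess.sh)

# Submit a second job that remains blocked until the first
# finishes successfully ("afterok")
qsub -W depend=afterok:$FIRST analyze.sh
```

Until the first job completes, showq will list the second job as blocked.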

To list blocked jobs:

showq -b
showq -b -u usr1234


showstart

The showstart command gives an estimate for the start time of a job. Unfortunately, these estimates are not at all accurate except for the highest priority job in the queue. If the time shown is exactly midnight two or three days in the future, it is meaningless. Otherwise the estimate may be off by a large amount in either direction.


showstart 123456

Why isn’t my job running?

There are many reasons that your job may have to wait in the queue longer than you would like. Here are some of them.

  • System load is high. It’s frustrating for everyone!
  • A system downtime has been scheduled and jobs are being held. Check the message of the day, which is displayed every time you log in, or the system notices posted on the OSC website.
  • You or your group have used a lot of resources in the last few days, causing your job priority to be lowered (“fairness policy”).
  • You or your group are at the maximum processor count or running job count and your job is being held.
  • Your project has a large negative RU (resource unit) balance.
  • Your job is requesting specialized resources, such as large memory or certain software licenses, that are in high demand.
  • Your job is requesting a lot of resources. It takes time for the resources to become available.
  • Your job is requesting incompatible or nonexistent resources and can never run.
  • Your job is stuck in batch hold because of system problems (very rare!).

Priority, backfill, and debug reservations

Priority is a complicated function of many factors, including the processor count and walltime requested, the length of time the job has been waiting, and how much other computing has been done by the user and their group over the last several days.

During each scheduling iteration, the scheduler will identify the highest priority job that cannot currently be run and find a time in the future to reserve for it. Once that is done, the scheduler will then try to backfill as many lower priority jobs as it can without affecting the highest priority job's start time. This keeps the overall utilization of the system high while still allowing reasonable turnaround time for high priority jobs. Short jobs and jobs requesting few resources are the easiest to backfill.

A small number of nodes are set aside during the day for jobs with a walltime limit of 1 hour or less, primarily for debugging purposes.

Observing a running job

You can monitor a running batch job almost as easily as you can monitor a program running interactively. The qpeek command allows you to see the output that would normally appear on your display. The pdsh command allows you to monitor your job’s CPU and memory usage, among other things. These commands are run from the login node.

qpeek (only for Ruby)

A job’s stdout and stderr data streams, which normally show up on the screen, are written to log files. These log files are stored on a server until the job ends, so you can’t look at them directly on Ruby. On Owens and Pitzer, you can read the log files directly while the job is running. The qpeek command allows you to peek at their contents. If you used the PBS header line to join the stdout and stderr streams (#PBS -j oe), the two streams are combined in the output log.

Here are a few examples for job 123456.  You can use the -e flag with any of them to get the error log instead of the output log.  (This is not applicable if you used “#PBS -j oe”.)

To display the current contents of the output log (stdout) for job 123456:

qpeek 123456

To display the current contents of the error log (stderr) for job 123456:

qpeek -e 123456

To display just the beginning (“head”) of the output log for job 123456:

qpeek -h 123456

To display just the end (“tail”) of the output log for job 123456:

qpeek -t 123456

To display the end of the output log and keep listening (“tail -f”) – terminate with Ctrl-C:

qpeek -f 123456


pdsh

If you’re in the habit of monitoring your programs using top or ps or something similar, you may find the pdsh command helpful. pdsh stands for “Parallel Distributed Shell”. It lets you run a command in parallel on all the nodes assigned to your job, with the results displayed on your screen.

Caution: The commands that you run should be quick and simple to avoid interfering with the job. This is especially true if your job is sharing a node with other jobs.

Two useful commands often used with pdsh are uptime, which displays system load, and free, which gives memory usage; see also the man pages for these commands. There are also options for top that make it usable with pdsh.

Since this is a parallel command, the output for the various nodes will appear in an unpredictable order.

Examples for job 123456:

pdsh -j 123456 uptime
pdsh -j 123456 free -m
pdsh -j 123456 top -b -n 1 -u usr1234


qstat

The qstat command provides information about CPU, memory, and walltime usage for running jobs. With the -a flag, it shows elapsed time (wall time) in hours and minutes. With no flag, it shows “Time Used”, an accounting metric, in hours, minutes, and seconds. With the -f flag, it shows resources used, with information aggregated across all the nodes the job is running on.


qstat -a 123456
qstat -f 123456

Managing your jobs

Deleting a job

Situations may arise in which you want to delete one of your jobs from the PBS queue. Perhaps you set the resource limits incorrectly, neglected to copy an input file, or had incorrect or missing commands in the batch file. Or maybe the program is taking too long to run (infinite loop).

The PBS command to delete a batch job is qdel. It applies to both queued and running jobs.


qdel 123456

If you are unable to delete one of your jobs, it may be because of a hardware problem or system software crash. In this case you should contact OSC Help.

Altering a queued job

You can alter certain attributes of your job while it’s in the queue using the qalter command. This can be useful if you want to make a change without losing your place in the queue. You cannot make any alterations to the executable portion of the script, nor can you make any changes after the job starts running.

The syntax is:

qalter [options ...] jobid

The options argument consists of one or more PBS directives in the form of command-line options.

For example, to change the walltime limit on job 123456 to 5 hours and have email sent when the job ends (only):

qalter -l walltime=5:00:00 -m e 123456

Placing a hold on a queued job

If you want to prevent a job from running but leave it in the queue, you can place a hold on it using the qhold command. The job will remain blocked until you release it with the qrls command. A hold can be useful if you need to modify the input file for a job, for example, but you don’t want to lose your place in the queue.


qhold 123456
qrls 123456

Job statistics

There are commands you can include in your batch script to collect job statistics or performance information.


date

The date command prints the current date and time. It can be informative to include it at the beginning and end of the executable portion of your script as a rough measure of time spent in the job.
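As a minimal sketch, the executable portion of a script can bracket the work with date; here sleep is a stand-in for your actual program:

```shell
date      # record the start time in the job's output log
sleep 1   # stand-in for the real work; replace with your program
date      # record the end time
```

The two timestamps in the output log give a rough measure of the time spent.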


The time utility is used to measure the performance of a single command. It can be used for serial or parallel processes. Add /usr/bin/time to the beginning of a command in the batch script:

/usr/bin/time myprog arg1 arg2

The result is provided in the following format:

  1. user time (CPU time spent running your program)
  2. system time (CPU time spent by your program in system calls)
  3. elapsed time (wallclock)
  4. % CPU used
  5. memory, pagefault and swap statistics
  6. I/O statistics

These results are appended to the job's error log file. Note: Use the full path “/usr/bin/time” to get all the information shown.


ja

The job accounting utility ja prints job accounting information inside a PBS job, including CPU time, memory, virtual memory, and walltime used. This information is also included in the email sent when a job ends (if email is requested). While the job is running, the same information is available with the qstat -f command.
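A minimal sketch of where ja might go in a batch script, so the accounting numbers cover the whole job; the program name and resource request are hypothetical:

```shell
#PBS -l walltime=1:00:00
#PBS -j oe

cd $PBS_O_WORKDIR
./myprog arg1 arg2    # hypothetical program; the real work goes here

ja                    # print CPU time, memory, and walltime used so far
```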