Batch requests are handled by the TORQUE resource manager and the Moab scheduler, as on the Oakley system. Use the qsub command to submit a batch request, qstat to view the status of your requests, and qdel to delete unwanted requests. For more information, see the manual page for each command.
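A typical session might look like the following sketch. The script name and job ID are illustrative, and the exact form of the job ID depends on the cluster's batch server.

```shell
$ qsub myjob.sh        # submit the batch script; prints a job ID such as 123456.ruby-batch
$ qstat -u $USER       # list your jobs and their current states
$ qdel 123456          # delete the job, referring to it by its numeric ID
```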
Ruby differs from Oakley in several ways:
- Ruby nodes have 20 cores and 64 GB of memory per node. This is less memory per core than on Oakley.
- Ruby is allocated on the basis of whole nodes, even for jobs using fewer than 20 cores.
- The amount of local disk space available on a node is approximately 800 GB.
- MPI parallel programs should be run with mpiexec, as on Oakley, but the underlying program is mpiexec.hydra rather than OSC's mpiexec. Type mpiexec --help for information on the command-line options.
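Because Ruby is allocated by whole nodes, even a small job should request all 20 cores of a node. A minimal resource-request header might look like the following sketch (the walltime value is illustrative):

```shell
#PBS -l walltime=0:30:00
#PBS -l nodes=1:ppn=20    # one whole Ruby node: 20 cores, 64 GB of memory
```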
Example Serial Job
This particular example is a single-node job; it uses OpenMP to make use of all 20 cores on the node.
#PBS -l walltime=1:00:00
#PBS -l nodes=1:ppn=20
#PBS -N my_job
#PBS -j oe

cd $TMPDIR
cp $HOME/science/my_program.f .
ifort -O2 -openmp my_program.f
export OMP_NUM_THREADS=20
./a.out > my_results
cp my_results $HOME/science
Please remember that jobs on Ruby must use a complete node.
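Since a job is charged for the whole node regardless, one way to use it fully is to pack several independent serial tasks into a single job, running each in the background and waiting for all of them. The sketch below uses a stand-in echo for the task; on Ruby you would place up to 20 real serial commands, one per core, below the #PBS headers.

```shell
#!/bin/bash
# Sketch: filling one whole node with independent serial tasks.
for i in 1 2 3 4; do
  ( echo "result of task $i" > "out.$i" ) &   # run each task in the background
done
wait                                          # block until every background task finishes
cat out.1 out.2 out.3 out.4                   # collect the results, one line per task
```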
Example Parallel Job
#PBS -l walltime=1:00:00
#PBS -l nodes=4:ppn=20
#PBS -N my_job
#PBS -j oe

cd $HOME/science
mpif90 -O3 mpiprogram.f
cp a.out $TMPDIR
cd $TMPDIR
mpiexec ./a.out > my_results
cp my_results $HOME/science
For more information about how to use OSC resources, please see our guide on batch processing at OSC. For specific information about modules and file storage, please see the Batch Execution Environment page.