C, C++ and Fortran are supported on the Ruby cluster. Intel, PGI and GNU compiler suites are available. The Intel development tool chain is loaded by default. Compiler commands and recommended options for serial programs are listed in the table below. See also our compilation guide.
LANGUAGE | INTEL EXAMPLE | PGI EXAMPLE | GNU EXAMPLE |
---|---|---|---|
C | icc -O2 -xHost hello.c | pgcc -fast hello.c | gcc -O2 -march=native hello.c |
Fortran 90 | ifort -O2 -xHost hello.f90 | pgf90 -fast hello.f90 | gfortran -O2 -march=native hello.f90 |
The system uses the MVAPICH2 implementation of the Message Passing Interface (MPI), optimized for the high-speed Infiniband interconnect. MPI is a standard library for performing parallel processing using a distributed-memory model. For more information on building your MPI codes, please visit the MPI Library documentation.
Ruby uses a different version of mpiexec than Oakley. This is necessary because of changes in Torque. All OSC systems use the mpiexec command, but the underlying code on Ruby is mpiexec.hydra, while the code on Oakley was developed at OSC. They are largely compatible, but a few differences should be noted. The table below shows some commonly used options; use mpiexec -help for more information.
OAKLEY (old) | RUBY | COMMENT |
---|---|---|
mpiexec | mpiexec | Same command on both systems |
mpiexec a.out | mpiexec ./a.out | On Ruby the program must be in your search path or given with an explicit path (e.g. ./a.out); not necessary on Oakley |
-pernode | -ppn 1 | One process per node |
-npernode procs | -ppn procs | procs processes per node |
-n totalprocs -np totalprocs | -n totalprocs -np totalprocs | At most totalprocs processes per node (same on both systems) |
-comm none | (omit) | Omit for simple cases. If using $MPIEXEC_RANK, consider using pbsdsh with $PBS_VNODENUM |
-comm anything_else | (omit) | Omit. Ignored on Oakley, will fail on Ruby |
 | -prepend-rank | Prepend rank to output |
-help | -help | Get a list of available options |
mpiexec will normally spawn one MPI process per CPU core requested in a batch job. The -pernode option is not supported by mpiexec on Ruby; use -ppn 1 instead, as noted in the table above.
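Putting these pieces together, a minimal Ruby batch script using mpiexec might look like the following sketch (the job name, walltime, and a.out executable are illustrative, not prescribed by this page):

```shell
#PBS -N mpi_hello
#PBS -l nodes=2:ppn=20
#PBS -l walltime=0:10:00
#PBS -j oe

cd $PBS_O_WORKDIR

# Default: one MPI process per requested core (40 here).
# Note the explicit ./ required on Ruby.
mpiexec ./a.out

# One MPI process per node (replaces Oakley's -pernode):
mpiexec -ppn 1 ./a.out
```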
The Intel, PGI and GNU compilers understand the OpenMP set of directives, which give the programmer finer control over parallelization. For more information on building OpenMP codes on OSC systems, please visit the OpenMP documentation.
To request the GPU node on Ruby, use nodes=1:ppn=20:gpus=1. For GPU programming with CUDA, please refer to the CUDA documentation. Also refer to the page of each software package to check whether it is GPU-enabled.
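For reference, the GPU request above would appear in a batch script like this sketch (the job name, walltime, and my_cuda_program executable are hypothetical placeholders):

```shell
#PBS -N gpu_job
#PBS -l nodes=1:ppn=20:gpus=1
#PBS -l walltime=0:30:00
#PBS -j oe

cd $PBS_O_WORKDIR

# Placeholder for your CUDA executable
./my_cuda_program
```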