222 compute nodes have been removed from service on Glenn to begin preparing for the arrival of the Ruby compute nodes. There have been no other associated system changes. Depending on scheduler load, jobs on Glenn may wait in the queue longer than in the past because fewer resources are available.
Amber 14 has been installed on Oakley and Glenn; load it via the module amber/14 on Oakley and amber14 on Glenn. For information on available executables and installation details, see the software page for Amber or the output of the respective module help command, e.g.: module help amber/14
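As a sketch, loading and inspecting the new installation on each cluster might look like the following fragment (module names taken from the announcement; these commands assume the clusters' module environment and are not meant to run elsewhere):

```shell
# On Oakley
module load amber/14
module help amber/14

# On Glenn
module load amber14
module help amber14
```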
Intel compiler licenses have been updated. This change should be invisible on the HPC systems. If you are a statewide user of the license, please set the LM_LICENSE_FILE environment variable to email@example.com before reporting issues.
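For statewide users, setting the variable in the current shell is a one-liner; the value below is the placeholder given in the announcement, so substitute the actual license server address you were provided:

```shell
# Point the FlexLM-based Intel tools at the statewide license server.
# "email@example.com" is the placeholder from the announcement, not a real address.
export LM_LICENSE_FILE=email@example.com
echo "$LM_LICENSE_FILE"
```

Adding the export line to your shell startup file (e.g. ~/.bashrc) makes it persist across logins.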
On both production clusters, we have begun rejecting jobs that request Lustre or $PFSDIR, in order to reduce job failures caused by triggering a bug that crashes the filesystem. If you have a GPFS allocation, you may want to move your data there to maintain productivity while the Lustre service is in a degraded state.
This document shows you how to set soft limits using the ulimit command, which sets or reports user process resource limits. The default limits are defined and applied when a new user is added to the system. Limits are categorized as either soft or hard. With the ulimit command, you can change your soft limits for the current shell environment, up to the maximum set by the hard limits. You must have root user authority to change hard resource limits.
The April 2014 HPC Tech Talk (Tuesday, April 22nd from 4-5 PM) will provide some brief OSC updates, have a user-driven Q&A session, and will close with an invited talk about MPI-3 from the MVAPICH developers at The Ohio State University. To get the WebEx information and add a calendar entry, go here. Slides are available below.
We have added two login nodes to Glenn (opt-login05 and opt-login06), which are quad-socket computers with 64 GB of RAM. These were previously used as large memory nodes on Glenn, and will provide greater resources to the shared environment on the login nodes.