GNU Compilers

Fortran, C, and C++ compilers produced by the GNU Project.

Availability and Restrictions

Versions

The following versions of the GNU compilers are available on OSC systems:

Using the Intel Xeon Phi on Ruby

Introduction

Twenty of the new Ruby nodes have an Intel Xeon Phi coprocessor, as do some of the older debug nodes. This guide explains how to build and run code for the Phi on Ruby. It does not discuss programming techniques or performance issues.

For background information on the Xeon Phi and techniques for using the Phi efficiently, the following references may be useful: 

CDO

CDO (Climate Data Operators) is a collection of command-line operators for manipulating and analyzing climate and NWP model data. It is open source and released under the terms of the GNU General Public License v2 (GPL).
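As a sketch of the operator style (the input file name is hypothetical; `sinfo` and `timmean` are standard CDO operators):

```shell
# Make CDO available in the current session (environment-specific)
module load cdo

# Print a summary of the variables, grid, and time axis in a NetCDF file
cdo sinfo input.nc

# Compute the time mean of every field and write the result to a new file
cdo timmean input.nc timmean.nc
```

Operators can also be chained on one command line, which avoids writing intermediate files.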

Availability and Restrictions

Versions

CDO is available on Oakley Cluster. The versions currently available at OSC are:

Request Access

Projects that would like to use the Ruby cluster will need to request access. This is because of the particulars of the Ruby environment, which include its size, MICs, GPUs, and scheduling policies.

CCAPP Condo on Ruby Cluster

Under the condo model, participants (condo owners) purchase one or more compute nodes for the shared cluster, while OSC provides all of the infrastructure, as well as maintenance and services. The CCAPP Condo on the Ruby cluster is owned by the Center for Cosmology and AstroParticle Physics at OSU. Prof. Annika Peter has been heavily involved in specifying requirements.

Hardware

Detailed system specifications:

  • 21 total nodes

    • 20 cores per node

Prof. Gaitonde's Condo on Ruby Cluster

Under the condo model, participants (condo owners) purchase one or more compute nodes for the shared cluster, while OSC provides the infrastructure, as well as maintenance and services. Prof. Gaitonde's Condo on the Ruby cluster is owned by Prof. Datta Gaitonde of the Mechanical and Aerospace Engineering Department at Ohio State University.

Hardware

Detailed system specifications:

  • 96 total nodes

    • 20 cores per node

SGI Altix 350
In October 2004, OSC engineers installed three SGI Altix 350s. Each Altix 350 featured 16 processors and was configured for SMP and large-memory applications. Each included 32 GB of memory, 16 1.4-gigahertz Intel Itanium 2 processors, 4 Gigabit Ethernet interfaces, 2-Gigabit FibreChannel interfaces, and approximately 250 GB of temporary disk.
Cray XD1
The OSC-Springfield offices would officially open in April 2004. Over the next several months, OSC engineers would install the 16-MSP Cray X1 system, the Cray XD1 system, and the 33-node Apple Xserve G5 Cluster at the Springfield office. A 1-Gbit/s Ethernet WAN service linked the cluster to OSC's remote-site hosts in Columbus. The G5 Cluster featured one front-end node configured with four gigabytes of RAM, two 2.0-gigahertz PowerPC G5 processors, 2-Gigabit Fibre Channel interfaces, approximately 750 gigabytes of local disk, and about 12 terabytes of Fibre Channel attached storage.
Cray X1