HPC

2016 Storage Service Upgrades

On July 12, 2016, OSC migrated its old GPFS and Lustre filesystems to the new Project and Scratch services, respectively. We moved 1.22 PB of data, and the new capacities are 3.4 PB for Project and 1.1 PB for Scratch. If you store data on these services, there are a few important details to note.

Citation

To cite Owens, please use the following Archival Resource Key:

ark:/19495/hpc6h5b1

Here is the citation in BibTeX format:

@article{Owens2016,
  ark = {ark:/19495/hpc6h5b1},
  url = {http://osc.edu/ark:/19495/hpc6h5b1},
  year = {2016},
  author = {Ohio Supercomputer Center},
  title = {Owens supercomputer}
}

And in EndNote format:

BMI Condo on the Owens Cluster

"Condo model" refers to an arrangment when a client leases one or more compute nodes for the shared cluster while OSC provides all infrastructure, as well as maintenance and services. BMI's Condo on the Oakley Cluster is leased by the Biomedical Informatics Institute at The Ohio State University. This condo is a temporary "bridge" condo to server the community needs until the Owens Cluster is available.

Hardware

Detailed system specifications for the Oakley condo:

  • 12 nodes

    • 12 cores per node

OSCusage

Introduction

OSCusage is a command developed at OSC for use on OSC's systems. It allows a user to see information on their project's current RU balance, including which users and jobs incurred which charges.
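
As a quick illustration, the command can be run from any OSC login node. The invocations below are a minimal sketch; the exact output columns and supported arguments may differ between versions, so consult the command's own help text.

  # Print the current RU balance and per-user charges for your default project
  OSCusage

  # Some versions accept a date range; the dates shown here are hypothetical
  OSCusage 2016-07-01 2016-09-30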

Messages from qsub

We have been adding some output from qsub that should aid you in creating better job scripts. We've documented the various messages here.

NOTE

A "NOTE" message is informational; your job has been submitted, but qsub made some assumptions about your job that you may not have intended.

Migrating jobs from Glenn to Oakley or Ruby

This page includes a summary of differences to keep in mind when migrating jobs from Glenn to one of our other clusters.

Hardware

Most Oakley nodes have 12 cores and 48GB memory. There are eight large-memory nodes with 12 cores and 192GB memory, and one huge-memory node with 32 cores and 1TB of memory. Most Ruby nodes have 20 cores and 64GB of memory. There is one huge-memory node with 32 cores and 1TB of memory. By contrast, most Glenn nodes have 8 cores and 24GB memory, with eight nodes having 16 cores and 64GB memory.
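
In practice, the main script change is the ppn value in your PBS resource requests: a whole-node request sized for Glenn will under- or over-subscribe nodes elsewhere. The directives below are standard PBS syntax; node counts and walltimes are placeholders for your own work.

  # Whole-node request on Glenn (8 cores per node)
  #PBS -l nodes=2:ppn=8

  # Equivalent whole-node request on Oakley (12 cores per node)
  #PBS -l nodes=2:ppn=12

  # Equivalent whole-node request on Ruby (20 cores per node)
  #PBS -l nodes=2:ppn=20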

Owens

TIP: Remember to check the menu to the right of the page for related pages with more information about Owens' specifics.

OSC's Owens cluster, being installed in 2016, is a Dell-built, Intel® Xeon® processor-based supercomputer. More details will be forthcoming as we finalize facilities changes and deployment schedules.

ParMETIS / METIS

ParMETIS (Parallel Graph Partitioning and Fill-reducing Matrix Ordering) is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs and meshes, and for computing fill-reducing orderings of sparse matrices. ParMETIS extends the functionality provided by METIS and includes routines that are especially suited for parallel AMR computations and large-scale numerical simulations. The algorithms implemented in ParMETIS are based on the parallel multilevel k-way graph-partitioning, adaptive repartitioning, and parallel multi-constrained partitioning schemes developed in the Karypis lab.

METIS (Serial Graph Partitioning and Fill-reducing Matrix Ordering) is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill-reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes developed in the Karypis lab.
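
A typical build against these libraries on an OSC cluster looks roughly like the following. The module names are assumptions; check which versions are installed on the cluster you use, and add include/library paths if the modules do not set them for you.

  # Load the libraries (module names are illustrative)
  module load metis
  module load parmetis

  # Link a serial code against METIS
  gcc -O2 -o part_graph part_graph.c -lmetis

  # Link an MPI code against ParMETIS (which also depends on METIS)
  mpicc -O2 -o part_mesh part_mesh.c -lparmetis -lmetis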

Firewall and Proxy Settings

Connections to OSC

In order for users to access OSC resources through the web, your firewall rules should allow connections to the following IP ranges. Otherwise, users may be blocked or denied access to our services.

  • 192.148.248.0/24
  • 192.148.247.0/24
  • 192.157.5.0/25

The following TCP ports should be opened:

  • 80 (HTTP)
  • 443 (HTTPS)
  • 22 (SSH)
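
One way to confirm that the rules are in place is to test reachability of these ports from inside your network. The hostnames below are examples; any OSC web or login address you normally use will do.

  # Check that TCP 443 (HTTPS) is reachable on the OSC web site
  nc -vz www.osc.edu 443

  # Check that TCP 22 (SSH) is reachable on a login host
  nc -vz oakley.osc.edu 22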

The following domain should be allowed:

SGI Altix 350

In October 2004, OSC engineers installed three SGI Altix 350s. The Altix 350s featured 16 processors each, configured for SMP and large-memory applications. They included 32 GB of memory, 16 1.4-Gigahertz Intel Itanium 2 processors, 4 Gigabit Ethernet interfaces, 2-Gigabit FibreChannel interfaces, and approximately 250 GB of temporary disk.
