ParMETIS (Parallel Graph Partitioning and Fill-reducing Matrix Ordering) is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs and meshes and for computing fill-reducing orderings of sparse matrices. ParMETIS extends the functionality provided by METIS and includes routines that are especially suited for parallel adaptive mesh refinement (AMR) computations and large-scale numerical simulations. The algorithms implemented in ParMETIS are based on the parallel multilevel k-way graph partitioning, adaptive repartitioning, and parallel multi-constrained partitioning schemes developed in the Karypis lab.
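
As a minimal sketch of how a ParMETIS call looks in practice: below, two MPI ranks each own two vertices of a 4-vertex cycle and partition it into two parts with ParMETIS_V3_PartKway. The graph, parameter choices, and file name are illustrative assumptions, not from the original text; this assumes ParMETIS 4.x.

    /* Sketch: partition a distributed 4-vertex cycle (0-1-2-3-0) across
     * two MPI ranks. Build with e.g.: mpicc part.c -lparmetis -lmetis -lm
     * Run with: mpiexec -n 2 ./a.out */
    #include <stdio.h>
    #include <mpi.h>
    #include <parmetis.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        MPI_Comm comm = MPI_COMM_WORLD;
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        if (size != 2) {          /* this toy decomposition assumes 2 ranks */
            if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        /* vtxdist: rank 0 owns global vertices 0-1, rank 1 owns 2-3 */
        idx_t vtxdist[3] = {0, 2, 4};
        /* Local CSR adjacency in global numbering */
        idx_t xadj[3] = {0, 2, 4};
        idx_t adjncy0[4] = {1, 3, 0, 2};  /* neighbors of vertices 0 and 1 */
        idx_t adjncy1[4] = {1, 3, 2, 0};  /* neighbors of vertices 2 and 3 */
        idx_t *adjncy = (rank == 0) ? adjncy0 : adjncy1;

        idx_t wgtflag = 0, numflag = 0, ncon = 1, nparts = 2;
        real_t tpwgts[2] = {0.5, 0.5};    /* equal target part weights */
        real_t ubvec[1] = {1.05};         /* 5% load-imbalance tolerance */
        idx_t options[3] = {0, 0, 0};     /* all defaults */
        idx_t edgecut, part[2];

        ParMETIS_V3_PartKway(vtxdist, xadj, adjncy, NULL, NULL, &wgtflag,
                             &numflag, &ncon, &nparts, tpwgts, ubvec,
                             options, &edgecut, part, &comm);

        for (idx_t i = 0; i < 2; i++)
            printf("rank %d: global vertex %ld -> part %ld\n",
                   rank, (long)(vtxdist[rank] + i), (long)part[i]);

        MPI_Finalize();
        return 0;
    }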

METIS (Serial Graph Partitioning and Fill-reducing Matrix Ordering) is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill-reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes developed in the Karypis lab.
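
The serial API is simpler; as a minimal sketch, the following partitions a 6-vertex path graph into two parts with METIS_PartGraphKway from METIS 5.x. The graph is an illustrative assumption, not from the original text.

    /* Sketch: partition a 6-vertex path graph (0-1-2-3-4-5) into 2 parts.
     * Build with e.g.: cc part.c -lmetis */
    #include <stdio.h>
    #include <metis.h>

    int main(void) {
        idx_t nvtxs = 6, ncon = 1, nparts = 2;
        /* CSR adjacency for the path 0-1-2-3-4-5 */
        idx_t xadj[7]    = {0, 1, 3, 5, 7, 9, 10};
        idx_t adjncy[10] = {1, 0, 2, 1, 3, 2, 4, 3, 5, 4};
        idx_t objval, part[6];

        /* NULLs request unit vertex/edge weights and default options */
        int status = METIS_PartGraphKway(&nvtxs, &ncon, xadj, adjncy,
                                         NULL, NULL, NULL, &nparts,
                                         NULL, NULL, NULL, &objval, part);
        if (status != METIS_OK) return 1;

        printf("edge-cut: %ld\n", (long)objval);
        for (idx_t i = 0; i < nvtxs; i++)
            printf("vertex %ld -> part %ld\n", (long)i, (long)part[i]);
        return 0;
    }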

Firewall and Proxy Settings

In order for users to access OSC resources through the web, your firewall rules should allow connections to the following IP ranges. Otherwise, users may be blocked or denied access to our services.


Users who are unsure whether their network is blocking these hosts or ports should contact their local IT administrator.
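
As a quick self-check, a user can also test whether a given TCP port is reachable from their machine. The sketch below assumes a POSIX system; the host and port are placeholders to be replaced with the actual OSC hosts and ports listed on this page.

    /* Sketch: test TCP reachability of a host:port pair (POSIX sockets).
     * Build with e.g.: cc checkport.c */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void) {
        const char *host = "host.example";  /* placeholder: an OSC host */
        const char *port = "443";           /* placeholder: an OSC port */

        struct addrinfo hints, *res;
        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(host, port, &hints, &res) != 0) {
            fprintf(stderr, "cannot resolve %s\n", host);
            return 1;
        }
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0)
            printf("%s:%s is reachable\n", host, port);
        else
            printf("%s:%s appears blocked or unreachable\n", host, port);
        if (fd >= 0) close(fd);
        freeaddrinfo(res);
        return 0;
    }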

IT and Network Administrators

Ensure your clients have the following ports opened in order to access our resources.

SGI Altix 350

In October 2004, OSC engineers installed three SGI Altix 350s. Each Altix 350 featured 16 processors and was configured for SMP and large-memory applications. Each system included 32 GB of memory, 16 1.4-gigahertz Intel Itanium 2 processors, four Gigabit Ethernet interfaces, 2-Gigabit FibreChannel interfaces, and approximately 250 GB of temporary disk.

Cray XD1

The OSC-Springfield offices officially opened in April 2004. Over the next several months, OSC engineers installed the 16-MSP Cray X1 system, the Cray XD1 system, and the 33-node Apple Xserve G5 Cluster at the Springfield office. A 1-Gbit/s Ethernet WAN service linked the cluster to OSC's remote-site hosts in Columbus. The G5 Cluster featured one front-end node configured with four gigabytes of RAM, two 2.0-gigahertz PowerPC G5 processors, 2-Gigabit Fibre Channel interfaces, approximately 750 gigabytes of local disk, and about 12 terabytes of Fibre Channel-attached storage.

PIV cluster

In December 2003, OSC engineers installed a 512-CPU Pentium 4 Linux Cluster. Replacing the AMD Athlon cluster, the P4 cluster doubled the existing system's power with a sizable increase in speed. With a theoretical peak of 2,457 gigaflops, the cluster contained 256 dual-processor Pentium 4 Xeon systems with four gigabytes of memory per node and 20 terabytes of aggregate disk space.

SGI Altix 3700

In September 2003, OSC engineers installed an SGI Altix 3700 system to replace its SGI Origin 2000 system and to augment its HP Itanium 2 cluster. The Altix was a non-uniform memory access (NUMA) system with 32 Itanium 2 processors and 64 gigabytes of memory, and it ran the Linux operating system. OSC's HP cluster also used Itanium 2 processors and ran Linux.

Itanium 2 cluster

In October 2002, OSC engineers installed the 300-CPU HP zx6000 Itanium 2 Linux Cluster. OSC selected HP's computing cluster for its blend of high performance, flexibility, and low cost. The HP cluster used Myricom's Myrinet high-speed interconnect and ran Red Hat Linux Advanced Workstation, a 64-bit Linux operating system.

