This documentation discusses how to run a STAR-CCM+ to STAR-CCM+ coupling simulation as a batch job at OSC. The following example demonstrates the process of using STAR-CCM+ version 11.02.010 on Cardinal. Depending on the version of STAR-CCM+ and the cluster you work on, there might be some differences from the example. Feel free to contact OSC Help if you have any questions. A minimal batch script sketch is shown below.
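The sketch below shows one way such a job could be structured with the Slurm scheduler. The module name and version, the project code, the simulation file name, and the core counts are all placeholders, not OSC-confirmed values; check `module spider starccm` and your own allocation before adapting it.

```bash
#!/bin/bash
# Minimal sketch of a STAR-CCM+ batch job, assuming Slurm on Cardinal.
# Module name/version, account, and file names below are assumptions.
#SBATCH --job-name=starccm_coupling
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
#SBATCH --time=01:00:00
#SBATCH --account=PAS1234          # hypothetical project code

module load starccm/11.02.010      # assumed module name/version

# Run the lead simulation in batch mode on all allocated cores.
# In a STAR-CCM+ to STAR-CCM+ coupling setup, the lead .sim file is
# configured to connect to the coupled session; only the lead launch
# is shown here.
starccm+ -batch -np ${SLURM_NTASKS} lead_simulation.sim
```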
Darshan is a lightweight, scalable HPC I/O characterization tool.
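As a rough illustration of how Darshan is typically used, the sketch below preloads the Darshan runtime library so that a dynamically linked MPI application's I/O calls are recorded, then summarizes the resulting log with darshan-parser. The module name, the DARSHAN_LIB_DIR variable, and the log path are assumptions; the actual library location and log directory depend on the installation.

```bash
#!/bin/bash
# Sketch only: instrument a run with Darshan via LD_PRELOAD.
module load darshan                                # assumed module name

# DARSHAN_LIB_DIR is a hypothetical variable pointing at the install's lib dir.
export LD_PRELOAD=$DARSHAN_LIB_DIR/libdarshan.so   # intercept I/O calls

srun ./my_mpi_app input.dat                        # run the application as usual

# After the job, convert the binary Darshan log into human-readable counters.
darshan-parser /path/to/darshan/logs/my_mpi_app_*.darshan > darshan_report.txt
```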
Apache Spark is an open-source cluster-computing framework originally developed in the AMPLab at the University of California, Berkeley, and later donated to the Apache Software Foundation, where it remains today. In contrast to Hadoop's disk-based analytics paradigm, Spark performs multi-stage in-memory analytics. Spark can run programs up to 100x faster than Hadoop's MapReduce in memory, or 10x faster on disk. Spark supports applications written in Python, Java, Scala, and R.
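To make this concrete, the sketch below runs the SparkPi example that ships with the standard Spark distribution via spark-submit. The module name and the example jar path follow the usual Spark layout and are assumptions; they may differ on OSC systems.

```bash
#!/bin/bash
# Sketch only: run the bundled SparkPi example with spark-submit.
module load spark                      # assumed module name

# Run SparkPi locally on 4 cores; the trailing argument (100) is the
# number of partitions used to estimate pi.
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master "local[4]" \
  $SPARK_HOME/examples/jars/spark-examples_*.jar 100
```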
Wednesday, October 5th
| 3:00 - 5:00 pm | Allocations Committee Meeting (members only) |
| 6:00 - 7:30 pm | SUG Executive Meeting (members only) |
The following are technical specifications for Owens.
- Number of Nodes: 824 nodes
- Number of CPU Sockets: 1,648 (2 sockets/node)
- Number of CPU Cores: 23,392 (28 cores/node)
- Cores Per Node: 28 cores/node (48 cores/node for Huge Mem Nodes)
- Local Disk Space Per Node: ~1,500 GB in /tmp