Cardinal

Julia

From julialang.org:

"Julia is a high-level, high-performance dynamic programming language for numerical computing. It provides a sophisticated compiler, distributed parallel execution, numerical accuracy, and an extensive mathematical function library. Julia’s Base library, largely written in Julia itself, also integrates mature, best-of-breed open source C and Fortran libraries for linear algebra, random number generation, signal processing, and string processing. In addition, the Julia developer community is contributing a number of external packages through Julia’s built-in package manager at a rapid pace. IJulia, a collaboration between the Jupyter and Julia communities, provides a powerful browser-based graphical notebook interface to Julia."

OnDemand Desktop App: MATLAB

MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, C#, Java, Fortran and Python.

Accessing through OnDemand

All the desktop apps can be found within the 'Interactive Apps' dropdown in our OnDemand web portal as shown in the image below:

Image of OnDemand Interactive Apps dropdown

OnDemand Desktop App: RStudio Server

RStudio is an integrated development environment (IDE) for R. It includes a console and a syntax-highlighting editor that supports direct code execution, as well as tools for plotting, history, debugging, and workspace management.

Accessing through OnDemand

All the desktop apps can be found within the 'Desktop Apps' dropdown in our OnDemand web portal as shown in the image below:

Image of OnDemand Desktop Apps Dropdown

Desktop App Catalog

OSC OnDemand provides access to applications on compute nodes through the batch system, without the hassle or performance problems associated with X11 forwarding. To access an application, select it under "Interactive HPC" in the "Desktop Apps" menu. For more information on each application, see its page linked below.

Overview of File Systems

OSC has several different file systems where you can create files and directories. The characteristics of those systems and the policies associated with them determine their suitability for any particular purpose. This section describes the characteristics and policies that you should take into consideration in selecting a file system to use.

The various file systems are described in subsequent sections.

HOWTO: Use NFSv4 ACL

This document shows you how to use the NFSv4 ACL permissions system. An ACL (access control list) is a list of permissions associated with a file or directory. These permissions allow you to restrict access to a certain file or directory by user or group. NFSv4 ACLs provide more specific options than the typical POSIX read/write/execute permissions used on most systems.

These commands are useful for managing ACLs within directories under /users/<project-code>.
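As a rough sketch of typical usage (the path keeps the <project-code> placeholder from above, and the user name and domain shown are placeholders for illustration only):

    # Show the current ACL on a directory
    nfs4_getfacl /users/<project-code>

    # Add an entry granting a specific user read and execute access
    # (substitute a real user name and your site's domain)
    nfs4_setfacl -a "A::someuser@osc.edu:RX" /users/<project-code>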

Understanding NFSv4 ACL

This is an example of an NFSv4 ACL:
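The listing below is a representative sketch rather than output copied from an OSC system; the exact permission letters you see will depend on how the directory is configured.

    nfs4_getfacl /users/<project-code>

    A::OWNER@:rwaDxtTnNcCy
    A:g:GROUP@:rxtncy
    A::EVERYONE@:rxtncy

Each entry has four colon-separated fields: the type (A for allow, D for deny), optional flags (such as g, which marks the principal as a group), the principal (a user, a group, or one of the special principals OWNER@, GROUP@, and EVERYONE@), and the permission letters themselves.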

Run STAR-CCM+ to STAR-CCM+ Coupling

This documentation discusses how to run a STAR-CCM+ to STAR-CCM+ coupling simulation in a batch job at OSC. The following example demonstrates the process of using STAR-CCM+ version 11.02.010 on Cardinal. Depending on the version of STAR-CCM+ and the cluster you work on, there might be some differences from the example. Feel free to contact OSC Help if you have any questions.
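As a much-simplified sketch of the kind of job script involved (the module name and version, node and core counts, account string, and .sim file names are all placeholders, and the co-simulation link itself must be set up inside the simulation files), the two coupled instances can be launched from a single batch job along these lines:

    #!/bin/bash
    #SBATCH --job-name=starccm-coupling
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=96
    #SBATCH --time=01:00:00
    #SBATCH --account=<project-code>

    # Module name and version are placeholders; check `module avail starccm` on your cluster
    module load starccm/11.02.010

    # Start the lead simulation in the background, then the coupled partner simulation,
    # splitting the allocated cores between the two instances (counts are illustrative)
    starccm+ -batch -np 48 lead.sim &
    starccm+ -batch -np 48 partner.sim
    wait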

Darshan

Darshan is a lightweight "scalable HPC I/O characterization tool". It is intended to profile I/O by emitting log files to a consistent log location for system administrators, and it also provides scripts that create summary PDFs characterizing I/O in MPI-based programs.
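A rough sketch of how this typically looks in practice is below; the module name, the need for LD_PRELOAD (Darshan can also be linked in at build time), the library path, and the log location are all assumptions that depend on the site installation, so check the Darshan documentation for the cluster you are using.

    # Module name is an assumption; check `module avail darshan`
    module load darshan

    # One common way to instrument an MPI program that was not built against Darshan
    # is to preload the runtime library (the path below is a placeholder)
    export LD_PRELOAD=/path/to/libdarshan.so
    srun ./my_mpi_app        # my_mpi_app is a placeholder program name

    # After the job completes, generate a summary PDF from the emitted log file
    # (the log directory and file name pattern are placeholders)
    darshan-job-summary.pl /path/to/darshan-logs/<username>_my_mpi_app_*.darshan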

Availability and Restrictions

Versions

The following versions of Darshan are available on OSC clusters:

Spark

Apache Spark is an open-source cluster-computing framework originally developed in the AMPLab at the University of California, Berkeley, and later donated to the Apache Software Foundation, where it remains today. In contrast to Hadoop's disk-based analytics paradigm, Spark offers multi-stage in-memory analytics. Spark can run programs up to 100x faster than Hadoop's MapReduce in memory, or 10x faster on disk. Spark supports applications written in Python, Java, Scala, and R.
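As a minimal sketch of launching a Python Spark application (the module name, master setting, and script name are placeholders; OSC's Spark documentation describes the recommended way to start Spark on a cluster allocation):

    # Module name is an assumption; check `module avail spark`
    module load spark

    # Run a Python Spark application on local cores as a simple smoke test;
    # my_spark_job.py is a placeholder script name
    spark-submit --master local[4] my_spark_job.py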
