Ollama
Ollama is an open-source inference server supporting a number of generative AI models. This module also includes Open-WebUI, which provides an easy-to-use web interface.
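As a quick illustration of how a client might talk to the server, the sketch below posts a prompt to Ollama's HTTP API. The host, port, and model name are assumptions for illustration only; adjust them to match your own instance and the models you have pulled.

    import json
    import urllib.request

    # Minimal sketch: send a prompt to a running Ollama server's /api/generate
    # endpoint. Host/port and model name are assumptions -- adjust as needed.
    payload = {
        "model": "llama3",   # placeholder model name
        "prompt": "Explain what an inference server does in one sentence.",
        "stream": False,      # return the full response as a single JSON object
    }

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])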
The NVIDIA Collective Communication Library (NCCL) implements multi-GPU and multi-node communication primitives optimized for NVIDIA GPUs and networking. NCCL provides routines such as all-gather, all-reduce, broadcast, reduce, and reduce-scatter, as well as point-to-point send and receive, that are optimized to achieve high bandwidth and low latency over PCIe and NVLink high-speed interconnects within a node and over NVIDIA Mellanox networking across nodes.
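Most applications use NCCL indirectly through a framework. As a hedged sketch (not OSC-specific), the example below uses PyTorch's torch.distributed with the NCCL backend to all-reduce a tensor across GPUs; it assumes a launcher such as torchrun starts one process per GPU and sets the usual environment variables.

    import os
    import torch
    import torch.distributed as dist

    # Minimal sketch of an NCCL all-reduce through PyTorch's distributed API.
    # Assumes launch via `torchrun --nproc_per_node=<num_gpus> this_script.py`,
    # which sets RANK / LOCAL_RANK / WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank contributes a tensor filled with its rank id; after the SUM
    # all-reduce, every rank holds 0 + 1 + ... + (world_size - 1).
    x = torch.full((4,), float(dist.get_rank()), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    print(f"rank {dist.get_rank()}: {x}")

    dist.destroy_process_group()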
This document is obsolete and is kept only as a reference to the previous Owens programming environment. Please refer here for the latest version.
PyTorch is an open-source machine learning framework with GPU acceleration and deep neural network support, built on the Torch library's tensor operations and automatic differentiation.
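As a small, generic illustration of the tensor-plus-autograd model described above, the snippet below differentiates a simple function of a tensor; it assumes only that the torch package is installed.

    import torch

    # Minimal autograd sketch: PyTorch records operations on tensors that
    # require gradients and differentiates through them automatically.
    x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
    y = (x ** 2).sum()      # y = x1^2 + x2^2 + x3^2

    y.backward()            # compute dy/dx via automatic differentiation
    print(x.grad)           # tensor([2., 4., 6.]), i.e. 2*x

    # The same code runs on a GPU by placing tensors on a CUDA device
    # when one is available.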
VMD is a visualization program for the display and analysis of molecular systems.
The following versions of VMD are available on OSC clusters:
C, C++, and Fortran are supported on the Owens cluster. The Intel, PGI, and GNU compiler suites are available; the Intel development toolchain is loaded by default. Compiler commands and recommended options for serial programs are listed in the table below. See also our compilation guide.
The February 2014 SUG HPC Tech Talk focused on using NVIDIA GPUs for computational chemistry. Slides are attached.
CUDA™ (Compute Unified Device Architecture) is a parallel computing platform and programming model developed by NVIDIA that enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).
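Native CUDA code is written in C/C++ and built with the nvcc compiler, but as a brief sketch of the programming model, the example below uses the Numba Python bindings (an assumption made purely for illustration) to launch a simple vector-addition kernel on the GPU.

    import numpy as np
    from numba import cuda

    # Sketch of the CUDA programming model using the Numba Python bindings
    # (an assumption for illustration; production CUDA code is usually C/C++).
    @cuda.jit
    def vector_add(a, b, out):
        i = cuda.grid(1)              # global thread index
        if i < out.size:
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    vector_add[blocks, threads_per_block](a, b, out)   # launch kernel on the GPU

    assert np.allclose(out, a + b)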