Rivanna

1 - Facilities Statement

Facilities statement

Computing Environments at UVA

Research Computing (UVA-RC) serves as the principal center for computational resources and associated expertise at the University of Virginia (UVA). Each year UVA-RC provides services to over 433 active PIs who sponsor more than 2463 unique users from 14 different schools/organizations at the University, maintaining a breadth of systems to support the computational and data-intensive research of UVA’s researchers.

High Performance Computing  UVA-RC’s High Performance Computing (HPC) systems are designed with high-speed networks, high-performance storage, GPUs, and large amounts of memory in order to support modern compute- and memory-intensive programs. UVA-RC’s HPC systems comprise over 614 compute nodes, with a total of 20476 x86 64-bit compute cores and 240 TB total RAM. Scheduled using Slurm, these resources can support over 1.5 PFLOPS of peak CPU performance. HPC nodes are equipped with between 375 GB and 1 TB of RAM to support applications that require small and large amounts of memory, and 49 nodes include various configurations of NVIDIA general-purpose GPU accelerators (RTX2080, RTX3090, A6000, V100 and A100), from 4-way to 10-way.

UVA-RC also acquires and maintains capability systems focused on providing novel environments. This includes an 18-node DGX BasePOD system with 8 A100 GPUs per node. The BasePOD provides a shared memory space across all GPUs in the system, allowing the system to work collectively on models with memory needs larger than what can be held in a single node.

Interactive Computing and Scientific Visualization

UVA-RC supports specialized interfaces (i.e., Open OnDemand, FastX) and hardware for remote visualization and interactive computing.  Interactive HPC systems allow real-time user inputs in order to facilitate code development, real-time data exploration, and visualizations.  Interactive HPC systems are used when data are too large to download to a desktop or laptop, software is difficult or impossible to install on a personal machine, or specialized hardware resources (e.g., GPUs) are needed to visualize large data sets.

Expertise

UVA-RC aggregates expertise to provide consulting and collaboration services to researchers addressing all levels of the Research Computing technology stack.

UVA-RC’s user support staff provide basic support and general onboarding through a helpdesk and regularly scheduled tutorials. Senior support staff have advanced degrees in relevant research domains such as biology, imaging, physics, computer science and material science, enabling in-depth collaboration on complex projects. For projects that require significant application development work, UVA-RC maintains a Solutions & DevOps team capable of rapid iteration while leveraging non-traditional HPC technologies. Lastly, UVA-RC’s Infrastructure Services team enables projects that may require custom hardware or configurations outside of the standard images. Beyond their availability for direct project support, together these teams provide the R&D and operations expertise needed to ensure that UVA-RC provides a modern research computing ecosystem for UVA researchers.

Cloud Computing

Ivy is a secure computing environment for researchers consisting of virtual machines (Linux and Windows) backed by a total of 45 nodes and 2048 cores. Researchers can use Ivy to process and store sensitive data with the confidence that the environment is secure and meets HIPAA, FERPA, or CUI requirements.

For standard security projects, UVA-RC supports microservices in a clustered orchestration environment that leverages Kubernetes to automate the deployment and management of many containers in an easy and scalable manner. This cluster has 876 cores and 4.9TB of memory allocated to running containerized services, including one node with 4 x A100 GPUs. It also has over 300TB of cluster storage and can attach to UVA-RC’s broader storage offerings.

ACCORD

The ACCORD project (NSF Award: #1919667) offers flexible web-based interfaces for sensitive and highly sensitive data in a system focused on supporting cross-institutional access and collaboration. The ACCORD platform consists of 8 nodes in a Kubernetes cluster, for a total of 320 cores and ~3.2TB of memory. Cluster storage is approximately 1PB of IBM Spectrum storage (GPFS).

Researchers from non-UVA institutions can be brought into the ACCORD system through a memorandum of understanding between the researcher’s institution and UVA, security training for the researcher, and a posture-checking client installed on the researcher’s laptop/desktop.

Data Storage

All researchers on UVA-RC’s systems have access to a high-performance parallel storage platform. This system provides 8PB (PetaBytes) of storage with sustained read and write speeds of up to 10 GB/sec. The integrity of the data is protected by daily snapshots. UVA-RC also supports a second-tier storage solution, 3 PB, designed to address the growing need for resources that support data-intensive research by offering a lower cost, scalable solution.  The system is tightly integrated with other UVA-RC storage and computing resources in order to support a wide variety of research data life cycles and data analysis workflows.

Data Centers, Network Connectivity, and Office Facilities

UVA-RC enables interdisciplinary research through its robust data center facilities with over 1.5 MW of IT capacity to support leading edge computational and data storage systems. UVA-RC’s equipment occupies a data center near campus, connected to the 10 Gbps campus network.  Dedicated 10 and 100 Gbps links to our regional optical network and Internet2 give our researchers the network capacity and capability needed to collaborate with researchers from around the world. A Globus data transfer node enables data access and transfers to transcend institutional credentials.  Located in the Ivy Translational Research Building of the Fontaine Research Park, UVA-RC’s offices (2,877 sq. ft) are a short shuttle ride away from the central UVA grounds.

2 - Rivanna

Rivanna
```mermaid
graph TB
    subgraph Getting-Started
    b1(UVA Account)
	b2(email to Gregor about groups)
	b3(groups available)
	b4(access to singularity build)
	b1 --> b2 --> b3 --> b4
    end
    subgraph Windows
    a1(gitbash)
	a2(wsl)
	a3[an <b>important</b> <a href='http://google.com'>link</a>]
    end
```

Rivanna is the University of Virginia’s High-Performance Computing (HPC) system. As a centralized resource, it has many software packages available. Currently, the Rivanna supercomputer has 603 nodes with over 20476 cores and 8PB of various storage. Rivanna has multiple nodes equipped with GPUs, including RTX2080, RTX3090, K80, P100, V100, A100-40GB, and A100-80GB.

Communication

We have a team discord at: uva-bii-community

https://discord.gg/uFKJ5TUv

Please subscribe if you work on Rivanna and are part of the bii_dsc_community.

Rivanna at UVA

The official Web page for Rivanna is located at

In case you need support you can ask the staff using a ticket system at

Before you use Rivanna, it is important to attend a seminar, which is given upon request every Wednesday. To sign up, use the link:

Please note that in this introduction we provide additional information that may make the use of Rivanna easier. We encourage you to add to this information and share your tips.

Getting Permissions to use Rivanna

To use Rivanna you need to have special authorization. In case you work with a faculty member you will need to be added to a special group (or multiple) to be able to access it. The faculty member will know which group it is. This is managed via the group management portal by the faculty member. Please do not use the previous link and instead communicate with your faculty member first.

  • Note: For BII work conducted with Geoffrey Fox or Gregor von Laszewski, please contact Gregor at laszewski@gmail.com

TODO: IS THIS THE CASE?

Once you are added to the group, you will receive an invitation email to set up a password for the research computing support portal. If you do not receive such an email, please visit the support portal at

TBD

This password is also the password that you will use to log into the system.

END TODO IS THIS THE CASE

After your account is set up, you can try to log in through the Web-based access. Please test it to make sure you have the proper access already.

However, we will typically not use the online portal but instead the more advanced batch system, as it provides significant advantages when managing multiple jobs on Rivanna.

Accessing an HPC Computer via command line

If you need to use X11 on Rivanna, you can find documentation in the Rivanna documentation. In case you need to run Jupyter notebooks directly on Rivanna, please consult the Rivanna documentation.

VPN (required)

You can access Rivanna via ssh only through the VPN. UVA requires you to use the VPN to access any computer on campus. The VPN is offered by IT services but is officially only supported for Mac and Windows.

However, if you have a Linux machine you can follow the VPN install instructions for Linux. If you have issues installing it, attend an online support session with the Rivanna staff.

Access via the Web Browser

Rivanna can be accessed directly from the Web browser. Although this may be helpful for systems where a proper terminal is not available, it cannot leverage the features of your own desktop or laptop, such as using advanced editors or keeping the file system of your machine in sync with the HPC file system.

Therefore, practical experience shows that you benefit while using a terminal and your own computer for software development.

Additional documentation by the Rivanna system staff is provided at

Access Rivanna from macOS and Linux

To access Rivanna from macOS, use the terminal and ssh to connect to it. We will provide an in-depth configuration tutorial on this later. We use the same programs as on Linux and Windows, so we only have to provide one set of documentation and it is uniform across platforms.

Please remember to use

computer>
$ eval `ssh-agent`
$ ssh-add

to activate the ssh-agent in your terminal.

Access Rivanna from Windows

While exploring the various choices for accessing Rivanna from Windows you can use putty and MobaXterm.

However, a possibly better choice that has become available more recently is gitbash. Git bash is trivial to install; however, you need to read the configuration options carefully. Let us know your options so we can add them here.

To simplify the setup of a Windows computer for research we have prepared a separate

It addresses the installation of gitbash, Python, PyCharm (much better than VSCode), and other useful tools such as chocolatey.

With git bash, you get a bash terminal that works the same as a Linux bash terminal and which is similar to the zsh terminal for a Mac.

Set up the connection (mac/Linux)

The first thing to do when trying to connect to Rivanna is to create an ssh key if you have not yet done so.

To do this use the command

computer>
ssh-keygen

Please make sure you use a passphrase when generating the key. Do not skip the passphrase by just typing ENTER; instead, use a real, hard-to-guess passphrase, as this is best practice and not in violation of security policies. You can always use ssh-agent and ssh-add so you do not have to repeatedly enter your passphrase.

The ssh-keygen program will generate a public-private keypair in the files ~/.ssh/id_rsa.pub (public key) and ~/.ssh/id_rsa (private key). Please never share the private key with anyone.

Next, we need to add the public key to the rivanna:~/.ssh/authorized_keys file. The easiest way to do this is to use the program ssh-copy-id.

computer>
ssh-copy-id username@rivanna.hpc.virginia.edu

Please use your password when using ssh-copy-id. Your username is your UVA computing id. Now you should be ready to connect with

computer>
ssh username@rivanna.hpc.virginia.edu

Commandline editor

Sometimes it is necessary to edit files on Rivanna. For this, we recommend that you learn a command line editor. There are lots of debates on which one is better. When I was young I used vi but found it too cumbersome, so I spent one day learning emacs, which is just great and all you need to learn. You can install it also on Linux, Mac, and Windows. This way you have one editor with very advanced features that is easy to learn.

If you do not have one day to familiarize yourself with editors such as emacs, vim, or vi, you can use editors such as nano and pico.

The best commandline editor is emacs. It is extremely easy to learn when using just the basics. The advantage is that the same commands also work in the terminal.

Keys           Action
CTRL-x CTRL-s  save the file
CTRL-x CTRL-c  exit emacs
CTRL-g         cancel when something goes wrong
CTRL-a         go to beginning of line
CTRL-e         go to end of line
CTRL-k         delete from cursor to end of line
cursor keys    just work ;-)

PyCharm

The best editor for Python development is PyCharm. Install it on your desktop. The educational version is free.

VSCode

An inferior editor for python development is VSCode. It can be configured to also use a Remote-SSH plugin.

Moving data from your desktop to Rivanna

To copy a directory, use scp.

If only a few lines have changed, use rsync.

To mount Rivanna's file system onto your computer, use fuse-ssh. This will allow you, for example, to use PyCharm to directly edit files on Rivanna.

Developers, however, often also use GitHub to push the code from their local machine and then, on Rivanna, use pull to get the code. This has the advantage that you can use PyCharm on your local system while synchronizing the code via git onto Rivanna.

However, often scp and rsync are just sufficient.

Example Config file

Replace abc2de with your computing id

place this on your computer in ~/.ssh/config

~/.ssh/config
ServerAliveInterval 60

Host rivanna
     User abc2de
     HostName rivanna.hpc.virginia.edu
     IdentityFile ~/.ssh/id_rsa
     
Host b1
     User abc2de
     HostName biihead1.bii.virginia.edu
     IdentityFile ~/.ssh/id_rsa
     
Host b2
     User abc2de
     HostName biihead2.bii.virginia.edu
     IdentityFile ~/.ssh/id_rsa

Adding it allows you to just ssh to the machines with

computer>
ssh rivanna
ssh b1
ssh b2

Rivanna’s filesystem

The file systems on Rivanna have some restrictions that are set by system-wide policies that you should inspect:

  • TODO: add link here

You can also see your quota with

rivanna>
  hdquota

We distinguish:

  • home directory: /home/<uvaid> or ~
  • /scratch/<uvaid>
  • /project/bii_dsc_community/projectname/<uvaid>

In your home directory, you will find system directories and files such as ~/.ssh, ~/.bashrc, and ~/.zshrc.

The difference in the file systems is explained at

Dealing with limited space under HOME

As we conduct research you may find that the file space in your home directory is insufficient. This is especially the case when using conda. Therefore, it is recommended that you create softlinks from your home directory to a location where you have more space. This is typically somewhere under /project and /scratch.

We describe next how to relocate some of the directories to /project and /scratch

In ~/.bashrc, add the following lines to create shortcuts for your project and scratch directories.

$ vi ~/.bashrc

PS1="\w \$"
alias project='cd /project/bii_dsc_community/$USER'
export PROJECT="/project/bii_dsc_community/$USER"

alias scratch='cd /scratch/$USER'
export SCRATCH="/scratch/$USER"

At the end of the .bashrc file add

cd $PROJECT

or alternatively

cd $SCRATCH

so you always cd directly into your project directory instead of home.

The home directory only has 50GB. Installing everything in the home directory will exceed the allocation and cause problems with any execution. So it is better to move conda and all other package installation directories to $PROJECT.

First, explore what is in your home directory and how much space it consumes with the following commands.

$ cd $HOME
$ ls -lisa
$ du -h .

Select from this list the directories that you want to move (those that you have not already moved).

Let us assume you want to move the directories .local, .vscode-server, and .conda. It is important to move .conda and .local, as they may include lots of files and you may quickly run out of space. Hence, do the following.

rivanna>
  $ cd $PROJECT
  $ mv ~/.local .
  $ mv ~/.vscode-server .
  $ mv ~/.conda .

Then create symbolic links from the home directory to the moved folders.

rivanna>
$ cd $PROJECT
$ ln -s $PROJECT/.local ~/.local
$ ln -s $PROJECT/.vscode-server ~/.vscode-server
$ ln -s $PROJECT/.conda ~/.conda

Check all symbolic links:

rivanna>
$ ls -lisa

20407358289 4 lrwxrwxrwx 1 $USER users 40 May  5 10:58 .local -> /project/bii_dsc_community/djy8hg/.local
20407358290 4 lrwxrwxrwx 1 $USER users 48 May  5 10:58 .vscode-server -> /project/bii_dsc_community/djy8hg/.vscode-server
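The move-and-link steps above can be combined into one idempotent loop. This is a hedged sketch: for safety it uses a temporary sandbox, so it can be tried without touching your real files; on Rivanna you would set HOME_DIR="$HOME" and PROJECT_DIR="$PROJECT" instead.

```shell
# Sandbox stand-ins for $HOME and $PROJECT (demo only)
HOME_DIR=$(mktemp -d)
PROJECT_DIR=$(mktemp -d)
mkdir -p "$HOME_DIR/.conda"                # pretend .conda already exists
echo demo > "$HOME_DIR/.conda/envs.txt"
for d in .local .vscode-server .conda; do
    src="$HOME_DIR/$d"
    dst="$PROJECT_DIR/$d"
    # move and relink only real directories that are not already symlinks,
    # so rerunning the loop is harmless
    if [ -d "$src" ] && [ ! -L "$src" ]; then
        mv "$src" "$dst"
        ln -s "$dst" "$src"
    fi
done
ls -la "$HOME_DIR"                          # .conda is now a symlink
```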

Singularity Cache

In case you use Singularity to build images, you need to set the Singularity cache. This is because the cache is usually created in your home directory, which is often far too small even for our small projects. Thus you need to set it as follows:

rivanna>
  mkdir -p /scratch/$USER/.singularity/cache
  export SINGULARITY_CACHEDIR=/scratch/$USER/.singularity/cache


Python

In case you use Python venvs, do not place them in home but under project or scratch.

rivanna>
  module load python3.8
  python -m venv $SCRATCH/ENV3
  source $SCRATCH/ENV3/bin/activate

If you succeed, you can also place the source line in your .bashrc file.

In case you use conda and Python, we also recommend that you create a venv from the conda Python, so you have a copy of it in ENV3; if something goes wrong, it is easy to recreate from your default Python. Those who use this path are encouraged to document it here.
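One possible way to do this is sketched below, under the assumption that a conda-provided python is on your PATH after module load anaconda; the temporary venv location is for safe demonstration only, on Rivanna you would use e.g. $PROJECT/ENV3.

```shell
# module load anaconda          # on Rivanna: put conda's python on PATH
VENV=$(mktemp -d)/ENV3          # demo location; use $PROJECT/ENV3 on Rivanna
python3 -m venv "$VENV"         # copy the current python into a fresh venv
source "$VENV/bin/activate"     # the venv's python is now first in PATH
python --version
deactivate
```

If the environment ever breaks, delete the ENV3 directory and recreate it the same way from your default python.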

Adding cloudmesh rivanna specific commands and tools

On your computer in your ENV3 add the following to enable the commands

computer>
pip install pip -U
pip install cloudmesh-common
pip install cloudmesh-rivanna
pip install cloudmesh-sbatch
pip install cloudmesh-vpn

On Rivanna in ENV3 also add the gpu monitor

rivanna>
  pip install pip -U
  pip install cloudmesh-common
  pip install cloudmesh-gpu
  pip install cloudmesh-rivanna
  pip install cloudmesh-sbatch

Note: Please send me a mail to laszewski@gmail.com if any requirements are missing as I may not yet have included all of them in the pip package.

Once you have activated ENV3, the cloudmesh rivanna command shows you combinations of SBATCH flags that you can use.

To see them type in

computer>
cms rivanna slurm list

To log into a specific node type you can say (let us assume you would like to log into a v100):

computer>
cms rivanna login v100

Please be reminded that interactive login is only allowed for debugging; all jobs must be submitted through sbatch.

To get the directives template to use that GPU, use

computer>
cms rivanna slurm v100

cloudmesh sbatch

Cloudmesh-sbatch is a super cool extension to sbatch allowing you to automatically run parameter studies while creating permutations of experiment parameters. At this time we are trying to create some sample applications, but you can also arrange a 30-minute meeting with Gregor so we can try setting it up for your application with his help.

See also:

cloudmesh vpn command

cloudmesh has a simple commandline vpn command that you can use to switch the VPN for UVA on and off (and other VPNs; we can add that feature ;-))

computer>
cms vpn connect
  ... do your work in vpn such as working on rivanna
cms vpn disconnect
  ... work on your regular network 

Load modules

Modules are preconfigured packages that allow you to load specific software into your environment without needing to install it from source. To find out more about a particular package, such as cmake, you can use the command

rivanna>
  module spider cmake # check whether cmake is available and details

Load the needed module (you can add version info). Note that some modules depend on other modules (clang/10.0.1 depends on gcc/9.2.0, so gcc needs to be loaded first).

rivanna>
  # module load gcc/9.2.0 clang/10.0.1
  module load gcc clang
  module load cmake/3.23.3 git/2.4.1 ninja/1.10.2-py3.8 llvm cuda/11.4.2

check currently loaded modules

rivanna>
  module list

clean all the modules

rivanna>
  module purge

Request GPUs to use interactively

Here, -c specifies the number of CPU cores, -A the allocation (account) name the job is charged to, -p the partition (queue) name, and --gres generic resources such as GPUs.

rivanna>
ijob -c number_of_cpus \
     -A group_name \
     -p queue_name \
     --gres=gpu:gpu_model:number_of_gpus \
     --time=day-hours:minutes:seconds

An example requesting 1 CPU with 1 A100 GPU for 10 minutes in the 'gpu' partition is

rivanna>
ijob -c 1 -A bii_dsc_community -p gpu --gres=gpu:a100:1 --time=0-00:10:00

Rivanna has different partitions with different resource availability and charge rates. dev is free but limited to 1 hour per session/allocation, and no GPU is available. To list the different partitions, use qlist:

Last checked July 28th; note these values may change.

Queue         Total  Free   Jobs     Jobs     Time        SU
(partition)   Cores  Cores  Running  Pending  Limit       Charge
bii            4640   3331       31       15  7-00:00:00       1
standard       4080    496     1209     5670  7-00:00:00       1
dev             160     86        5        -  1:00:00          0
parallel       4880   1594       21        3  3-00:00:00       1
instructional   480    280       16        -  3-00:00:00       1
largemem        144     80        2        1  4-00:00:00       1
gpu            1876   1066       99      210  3-00:00:00       3
bii-gpu         608    542       18        1  3-00:00:00       1
bii-largemem    288    224        -        -  7-00:00:00       1

To list the limits, use the command qlimits

Last checked July 28th; note these values may change.

Queue          Maximum  Maximum          Minimum    Maximum       Maximum       Default       Maximum    Minimum
(partition)    Submit   Cores(GPU)/User  Cores/Job  Mem/Node(MB)  Mem/Core(MB)  Mem/Core(MB)  Nodes/Job  Nodes/Job
bii              10000  cpu=400                  -  354000+               9400             -        112          -
standard         10000  cpu=1000                 -  384000+               9000             -          1          -
dev              10000  cpu=16                   -  384000                9000          6000          2          -
parallel          2000  cpu=1500                 4  384000                9600          9000         50          2
instructional     2000  cpu=20                   -  384000                6000             -          5          -
largemem          2000  cpu=32                   -  1500000              64000         60000          2          -
gpu              10000  gres/gpu=32              -  128000+              32000          6000          4          -
bii-gpu          10000  -                        -  384000+               9400             -         12          -
bii-largemem     10000  -                        -  1500000              31000             -          2          -

Linux commands for HPC

Many useful commands can be found in Gregor’s book at

The following additional commands are quite useful on HPC systems

command              description
allocations          check your available accounts and balances
hdquota              check the storage you have used
du -h --max-depth=1  check which directory uses the most space
qlist                list the queues
qlimits              print the limits of the queues

SLURM Batch Parameters

We present next a number of default parameters for using a variety of GPUs on Rivanna. Please note that you may need to adapt some parameters to adjust cores or memory according to your application.

Running on v100

#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=12:00:00
#SBATCH --partition=bii-gpu
#SBATCH --account=bii_dsc_community
#SBATCH --gres=gpu:v100:1
#SBATCH --job-name=MYNAME
#SBATCH --output=%u-%j.out
#SBATCH --error=%u-%j.err

Running on a100-40GB

#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=12:00:00
#SBATCH --partition=bii-gpu
#SBATCH --account=bii_dsc_community
#SBATCH --gres=gpu:a100:1
#SBATCH --job-name=MYNAME
#SBATCH --output=%u-%j.out
#SBATCH --error=%u-%j.err

Running on special fox node a100-80GB

#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=12:00:00
#SBATCH --partition=bii-gpu
#SBATCH --account=bii_dsc_community
#SBATCH --gres=gpu:a100:1
#SBATCH --job-name=MYNAME
#SBATCH --output=%u-%j.out
#SBATCH --error=%u-%j.err
#SBATCH --reservation=bi_fox_dgx
#SBATCH --constraint=a100_80gb
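Any of the directive sets above becomes a complete job script once a payload command is added. A minimal sketch for the v100 case follows; nvidia-smi is a placeholder payload, replace it with your program.

```shell
# Write a complete job script around the v100 directives
cat > job.slurm <<'EOF'
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=12:00:00
#SBATCH --partition=bii-gpu
#SBATCH --account=bii_dsc_community
#SBATCH --gres=gpu:v100:1
#SBATCH --job-name=MYNAME
#SBATCH --output=%u-%j.out
#SBATCH --error=%u-%j.err
nvidia-smi
EOF
# On Rivanna you would then submit and monitor it with:
#   sbatch job.slurm
#   squeue -u $USER
```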

Some suggestions

When compiling large projects, you may need to make sure you have enough time and memory to conduct such compiles. This is best achieved by using an interactive node, possibly from the large-memory partition.

References

Help Support

When requesting help from Gregor or anyone make sure to completely specify the issue, a lot of things cannot be solved if you are not clear on the issue and where it is occurring. Include:

  • The issue you are encountering.
  • Where it is occurring.
  • What you have done to try to resolve the issue.

A good example is:

I ran the application xyz, from url xyz on Rivanna. I placed code in the directory /project/…. or I placed the data in /project/… The download worked and I placed about 600GB. However when I uncompress the data with the command xyz I get the error xyz. What should we do now?

3 - Rivanna Pod

Rivanna

This documentation is so far only useful for beta testers. In this group we have

  • Gregor von Laszewski

The Rivanna documentation for the BasePOD is available at

https://www.rc.virginia.edu/userinfo/rivanna/basepod/

Introducing the NVIDIA DGX BasePOD

Rivanna contains a BasePOD with

  • 10 DGX A100 nodes
  • 8 A100 GPU devices (per node)
  • 2 TB local node memory (per node)
  • 80 GB GPU memory (per GPU device)

The following Advanced Features have now been enabled on the BasePOD:

  • NVLink for fast multi-GPU communication
  • GPUDirect RDMA Peer Memory for fast multi-node multi-GPU communication
  • GPUDirect Storage with 200 TB IBM ESS3200 (NVMe) SpectrumScale storage array

What this means to you is that the POD is ideal for the following scenarios:

  • The job needs multiple GPUs and/or even multiple nodes.
  • The job (can be single- or multi-GPU) is I/O intensive.
  • The job (can be single- or multi-GPU) requires more than 40 GB GPU memory. (We have 12 A100 nodes in total, 10 of which are the POD and 2 are regular with 40 GB GPU memory per device.)

Detailed specs can be found in the official document (Chapter 3.1):

Accessing the POD

Allocation

A single job can request up to 4 nodes with 32 GPUs. Before running multi-node jobs, please make sure it can scale well to 8 GPUs on a single node.

Slurm script: please include the following lines:

#SBATCH -p gpu
#SBATCH --gres=gpu:a100:X # replace X with the number of GPUs per node
#SBATCH -C gpupod

Open OnDemand

In Optional: Slurm Option write:

-C gpupod

Interactive login

Interactive login to the nodes should be VERY limited; for most activities you need to use the batch queue. In case you need to look at things interactively, you can use our cloudmesh program to do so.

Make sure to have the VPN enabled and cloudmesh-rivanna installed via pip.

computer>

cms rivanna login a100-pod

 
This will log you into a node. The time is set by default to 30 minutes. Please log out immediately after you are done with your interactive work.

Usage examples

Deep learning

We will be migrating toward NVIDIA’s NGC containers for deep learning
frameworks such as PyTorch and TensorFlow, as they have been heavily
optimized to achieve excellent multi-GPU performance. These containers
have not yet been installed as modules but can be accessed under
/share/resources/containers/singularity:

  • pytorch_23.03-py3.sif
  • tensorflow_23.03-tf1-py3.sif
  • tensorflow_23.03-tf2-py3.sif

(NGC has their own versioning scheme. The PyTorch and TensorFlow
versions are 2.0.0, 1.15.5, 2.11.0, respectively.)
 
The singularity command is of the form:

singularity run --nv /path/to/sif python /path/to/python/script


Warning: Distributed training is not automatic! Your code must be parallelizable. If you are not familiar with this concept, please visit:

  • TF distributed training: https://www.tensorflow.org/guide/distributed_training
  • PyTorch DDP: https://pytorch.org/docs/stable/notes/ddp.html
 
MPI codes

Please check the manual for your code regarding the relationship
between the number of MPI ranks and the number of GPUs. For
computational chemistry codes (e.g. VASP, QuantumEspresso, LAMMPS) the
two are oftentimes equal, e.g.

#SBATCH --gres=gpu:a100:8
#SBATCH --ntasks-per-node=8


If you are building your own code, please load the modules nvhpc and
cuda which provide NVIDIA compilers and CUDA libraries. The compute
capability of the POD A100 is 8.0.
 
For documentation and demos, refer to the Resources section at the bottom of this page: https://developer.nvidia.com/hpc-sdk
 
 
We will be updating our website documentation gradually in the near future as we iron out some operational specifics. GPU-enabled modules are now marked with a (g) in the module avail command as shown below:


TODO: output from module avail to be included
 
 

4 - Rivanna and Singularity

Singularity

Singularity is a container runtime that implements a unique security model to mitigate privilege escalation risks and provides a platform to capture a complete application environment into a single file (SIF).

Singularity is often used in HPC centers.

The University of Virginia granted us special permission to create Singularity images on Rivanna. We discuss here how to build and run Singularity images.

Access

In order for you to be able to access singularity and build images, you must be in the following groups:

biocomplexity
nssac_students
bii_dsc_community

To find out if you are, ssh into rivanna and issue the command

$ groups

If any of the groups is missing, please send Gregor an e-mail at laszewski@gmail.com.

Singularity cache

Before you can build images, you need to set the Singularity cache. This is because the cache is usually created in your home directory, which is often far too small even for our small projects. Thus you need to set it as follows:

rivanna>
  mkdir -p /scratch/$USER/.singularity/cache
  export SINGULARITY_CACHEDIR=/scratch/$USER/.singularity/cache

Please remember that scratch is not permanent. In case you would like a somewhat more permanent location, you can alternatively use

rivanna>
  mkdir -p /project/bii_dsc_community/$USER/.singularity/cache
  export SINGULARITY_CACHEDIR=/project/bii_dsc_community/$USER/.singularity/cache

build.def

To build an image you will need a build definition file.

We show next an example of a simple build.def file that internally uses an NVIDIA NGC PyTorch container.

Bootstrap: docker
From: nvcr.io/nvidia/pytorch:23.02-py3

Next, you can follow the steps detailed in https://docs.sylabs.io/guides/3.7/user-guide/definition_files.html#sections
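Definition files may contain further sections beyond Bootstrap and From. The following extended sketch is illustrative only: the %post package and the runscript are assumptions for a typical Python workflow, not requirements.

```
Bootstrap: docker
From: nvcr.io/nvidia/pytorch:23.02-py3

%post
    # example only: add extra Python packages to the image
    pip install cloudmesh-common

%environment
    # keep user-site packages from $HOME out of the container's python
    export PYTHONNOUSERSITE=1

%runscript
    # "singularity run image.sif script.py" then executes python script.py
    exec python "$@"
```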

However, for Rivanna we MUST create the image as discussed next.

Creating the Singularity Image

In order for you to create a singularity container from the build.def file please login to either of the following special nodes on Rivanna:

  • biihead1.bii.virginia.edu
  • biihead2.bii.virginia.edu

For example:

ssh $USER@biihead1.bii.virginia.edu

where $USER is your computing ID on Rivanna.

Now that you are logged in to the special node, you can create the singularity image with the following command:

sudo /opt/singularity/3.7.1/bin/singularity build output_image.sif build.def

Note: It is important that you type in only this command. If you modify the name output_image.sif or build.def, the command will not work and you will receive an authorization error.

In case you need to rename the image to a better name please use the mv command.

In case you also need to have a different name other than build.def, the following Makefile is very useful. We assume you use myimage.def and myimage.sif. Include it in a Makefile such as:

BUILD=myimage.def
IMAGE=myimage.sif

image:
	cp ${BUILD} build.def
	sudo /opt/singularity/3.7.1/bin/singularity build output_image.sif build.def
	cp output_image.sif ${IMAGE}
	make clean

clean:
	rm -rf build.def output_image.sif

Having such a Makefile will allow you to use the command

make image

and the image myimage.sif will be created. With make clean you delete the temporary files build.def and output_image.sif.

Create a singularity image for tensorflow

TODO
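Until this section is written, a minimal sketch analogous to the PyTorch build.def above may serve as a starting point. The NGC tag below is an assumption; check NVIDIA's NGC catalog for current TensorFlow tags.

```
Bootstrap: docker
From: nvcr.io/nvidia/tensorflow:23.03-tf2-py3
```

The image is then built with the same sudo singularity build command as before.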

Work with Singularity container

Now that you have an image, you can use it while using the documentation provided at https://www.rc.virginia.edu/userinfo/rivanna/software/containers/

Run GPU images

To use NVIDIA GPU with Singularity, --nv flag is needed.

singularity exec --nv output_image.sif python myscript.py

TODO: THE NEXT PARAGRAPH IS WRONG

Python is defined as the default command to be executed, and singularity passes the argument(s) after the image name, i.e. myscript.py, to the Python interpreter. So the above singularity command is equivalent to

singularity run --nv output_image.sif myscript.py

Run Images Interactively

ijob  -A mygroup -p gpu --gres=gpu -c 1
module purge
module load singularity
singularity shell --nv output_image.sif

Singularity Filesystem on Rivanna

The following paths are exposed to the container by default

  • /tmp
  • /proc
  • /sys
  • /dev
  • /home
  • /scratch
  • /nv
  • /project

Adding Custom Bind Paths

For example, the following command adds the /scratch/$USER directory as an overlay without overlaying any other user directories provided by the host:

singularity run -c -B /scratch/$USER output_image.sif

To add the /home directory on the host as /rivanna/home inside the container:

singularity run -c -B /home:/rivanna/home output_image.sif

FAQ

Adding singularity to slurm scripts

TBD

Running on v100

TBD

Running on a100-40GB

TBD

Running on a100-80GB

TBD

Running on special fox node a100-80GB

TBD