3. SLURM Job Examples

3.1. MATLAB job

#!/bin/bash
#SBATCH -J helloMatlab
#SBATCH -o output_%j.txt
#SBATCH -e errors_%j.txt
#SBATCH -t 01:30:00
#SBATCH -n 1
#SBATCH -p allgroups
#SBATCH --mem 10G

cd $WORKING_DIR
#your working directory

srun matlab < example.m

3.2. MPI job

#!/bin/bash
#SBATCH -J hellompi
#SBATCH -o output_%j.txt
#SBATCH -e errors_%j.txt
#SBATCH -t 01:30:00
# request 32 MPI tasks
#SBATCH -n 32
#SBATCH -p allgroups
#SBATCH --mem 640G

cd $WORKING_DIR
#your working directory
#spack load intel-parallel-studio@professional.2019.4 (work in progress)

srun ./mphello

Note

spack load ... initializes the Intel MPI environment and is equivalent to module load intel-parallel-studio-professional.2019.4-gcc-8.2.1-fnvratt (work in progress)

3.3. OpenMP job

#!/bin/bash
#SBATCH -J helloopenmp
#SBATCH -o output_%j.txt
#SBATCH -e errors_%j.txt
#SBATCH -t 01:30:00
# notice we're using the '-c' option
# to request OpenMP threads
#SBATCH -c 32
#SBATCH -p allgroups
#SBATCH --mem 640G

cd $WORKING_DIR
#your working directory
# set this to what you asked with '-c'
export OMP_NUM_THREADS=32

srun ./omphello

Note

  • OMP_NUM_THREADS must be set to the same value as the -c parameter

  • Set -n to 1 if the program uses OpenMP only. If the program uses both MPI and OpenMP, set -n to the number of MPI tasks. In either case, the total number of slots requested is n*c (see the hybrid sketch below).
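For a hybrid MPI+OpenMP job, the resource request could therefore look like the following sketch (the executable name hybridhello and the 4x8 task/thread split are purely illustrative):

#!/bin/bash
#SBATCH -J hellohybrid
#SBATCH -o output_%j.txt
#SBATCH -e errors_%j.txt
#SBATCH -t 01:30:00
# 4 MPI tasks with 8 OpenMP threads each: 4*8 = 32 slots in total
#SBATCH -n 4
#SBATCH -c 8
#SBATCH -p allgroups
#SBATCH --mem 640G

cd $WORKING_DIR
#your working directory
# set this to what you asked with '-c'
export OMP_NUM_THREADS=8

srun ./hybridhello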

3.4. GPU Job

#!/bin/bash
#SBATCH -J helloGPU
#SBATCH -o output_%j.txt
#SBATCH -e errors_%j.txt
#SBATCH -t 01:30:00
#SBATCH -n 1
#SBATCH -p allgroups
#SBATCH --mem 640G
# requesting 1 GPU; set --gres=gpu:2 to use for example two GPUs
#SBATCH --gres=gpu:1

cd $WORKING_DIR
#your working directory

srun ./GPUhello

Note

In the DEI cluster there are currently six servers with GPUs:

  • one server (gpu1) with 6x Nvidia Titan RTX;

  • two servers (gpu2, gpu3) with 8x Nvidia RTX 3090 each;

  • three servers (runner-04/05/06) with one Nvidia Quadro P2000 each.

Request more than one GPU only if your program is capable of using more than one GPU at a time.

Important

DO NOT request GPUs if you don’t use them! To specify which GPU type you want to use:

#SBATCH --gres=gpu                          Use a generic GPU
#SBATCH --gres=gpu:rtx                      Use Nvidia Titan RTX GPU or Nvidia RTX 3090 GPU
#SBATCH --gres=gpu:rtx:3                    Use for example three Nvidia RTX GPUs
#SBATCH --gres=gpu:p2000                    Use Nvidia Quadro P2000 GPU
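To double-check which GPU(s) a job actually received, a couple of lines like the following can be added to the job script (a minimal sketch; depending on the cluster configuration, SLURM exposes the assigned devices through the CUDA_VISIBLE_DEVICES variable):

# print the device indices assigned by SLURM, if the variable is set
echo "CUDA_VISIBLE_DEVICES = $CUDA_VISIBLE_DEVICES"
# list the GPUs visible inside the job step
srun nvidia-smi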

3.5. Interactive Job

To run an interactive job using the “interactive” partition, use the command:

interactive

The interactive command returns an interactive shell to the user. The resources are limited to 1 processor and 3 GB of RAM. To obtain an interactive shell using the “interactive” partition, the user can also use the following single-line command:

srun --pty --mem=1g -n 1 -J interactive -p interactive /bin/bash

To run an interactive job on a specific node (hostname), use the following single-line command:

srun --pty --mem=1g -n 1 -w hostname -J interactive -p interactive /bin/bash

The interactive shell is active for a maximum of 24 hours.

Note

Interactive jobs should be used ONLY when real-time interaction is needed and/or for tasks with a low computational burden. Typical examples are installing software with an interactive installation procedure, simple file management/manipulation (e.g. compressing files), etc.

Do not use the “interactive” partition to run tasks with a long execution time and/or a high computational burden. These kinds of jobs should be executed in the “allgroups” partition.

The use of the “interactive” partition is monitored: jobs that misuse this partition will be killed.

To run an interactive job that uses one GPU, use the following single-line command:

srun --pty --mem=1g -n 1 --gres=gpu:1 -J interactive -p interactive /bin/bash

To run an interactive job that uses, for example, two specific GPUs, use the following single-line command:

srun --pty --mem=1g -n 1 --gres=gpu:titan_rtx:2 -J interactive -p interactive /bin/bash

Note

If the requested GPUs are already in use by other jobs/users, the previous commands will not work.
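To see which GPU types each node offers before submitting, a query such as the following can be used (a sketch based on standard sinfo format options: %n prints the node hostname and %G its generic resources; the exact output depends on how the nodes are configured):

sinfo -N -o "%n %G"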

3.6. Singularity Job

#!/bin/bash
#SBATCH --job-name=mysingularity
#SBATCH --error=opencv.%j.err
#SBATCH --output=opencv.%j.out
#SBATCH --partition=allgroups
#SBATCH --ntasks=1
#SBATCH --mem=1G
#SBATCH --time=00:05:00

cd $WORKING_DIR
#your working directory

srun singularity exec ./mysingularity.sif python script.py
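The mysingularity.sif image is assumed to already exist in the working directory. If it still has to be created, it can for example be pulled from a public registry beforehand (a sketch; the docker://python:3.10 image is only an illustrative choice that provides the python interpreter used above):

singularity pull mysingularity.sif docker://python:3.10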

3.7. Singularity job using GPU

#!/bin/bash
#SBATCH -J SingGPU
#SBATCH -o output_%j.txt
#SBATCH -e errors_%j.txt
#SBATCH -t 01:30:00
#SBATCH -n 1
#SBATCH -p allgroups
#SBATCH --mem 640G
# requesting 1 GPU; set --gres=gpu:2 to use for example two GPUs
#SBATCH --gres=gpu:1

cd $WORKING_DIR
#your working directory

srun singularity exec --nv ./tensorflow.sif python script.py

Important

You must request (at least) one GPU and you must pass the --nv flag to singularity.
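As a quick test that the GPU is actually visible inside the container, the application line in the script above can be temporarily replaced with a call to nvidia-smi (the host utility is made available inside the container by the --nv flag):

srun singularity exec --nv ./tensorflow.sif nvidia-smi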