Genius quick start guide¶
Genius is the most recent KU Leuven/UHasselt Tier-2 cluster. It can be used for most workloads, and has nodes with a lot of memory, as well as nodes with GPUs.
Direct SSH login is possible to all login nodes without restrictions.
You can access Genius through:
This will load-balance your connection to one of the 4 Genius login nodes. Two types of login nodes are available:
classic login nodes, i.e., terminal SSH access:
login node that provides a desktop environment that can be used for, e.g., visualization, see the NX clients section:
This node should not be accessed using terminal SSH; it serves only as a gateway to the actual login nodes your NX sessions will be running on.
The NX login node will start a session on a login node that has a GPU, i.e., either
For example, to log in to any of the login nodes using SSH:
$ ssh [email protected]
Running jobs on Genius¶
There are several types of nodes in the Genius cluster: normal compute nodes, GPU nodes, and big memory nodes. The resource specifications for jobs have to be tuned to use these nodes properly. If you are not yet familiar with the system, see the hardware specification for more information.
The maximum walltime for any job on Genius is 7 days (168 hours). Job requests with walltimes between 3 and 7 days are furthermore only allowed to request up to 10 compute nodes per job. No such limitation is imposed on jobs with walltimes of 3 days or less.
There is a limit on the number of jobs you can have in a queue. This number includes idle, running, and blocked jobs. If you try to submit more jobs than the maximum number, these jobs will be deferred and will not start. Therefore you should always respect the following limits on how many jobs you have in a queue at the same time:
q1h: max_user_queueable = 200
q24h: max_user_queueable = 250
q72h: max_user_queueable = 150
q7d: max_user_queueable = 20
qsuperdome: max_user_queueable = 20
These limits can be checked on the cluster by executing:
$ qstat -f -Q
Submit to a compute node¶
To submit to a compute node, it all boils down to specifying the required number of nodes and cores. As the nodes have a single-user policy, we recommend always requesting all available cores per node (36 cores in this case). For example, to request 2 nodes with 36 cores each, you can submit like this:
$ qsub -l nodes=2:ppn=36 -l walltime=2:00:00 -A myproject myjobscript.pbs
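Alternatively, the same resource requests can be embedded in the job script itself as `#PBS` directives, so the submission command stays short. A minimal sketch of such a Torque-style script (not the site's official template; the echo line is just a placeholder for your actual work):

```shell
#!/bin/bash -l
#PBS -l nodes=2:ppn=36
#PBS -l walltime=2:00:00
#PBS -A myproject

cd $PBS_O_WORKDIR                            # start where the job was submitted from
echo "Got $(wc -l < $PBS_NODEFILE) cores"    # node file lists one line per core
```

With the directives in the script, submitting reduces to `qsub myjobscript.pbs`.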
Submit to a GPU node¶
The GPU nodes are located in a separate cluster partition so you will need to explicitly specify it when submitting your job. We also configured the GPU nodes as shared resources, meaning that different users can simultaneously use a portion of the same node. However every user will have exclusive access to the number of GPUs requested. If you want to use only 1 GPU of type P100 (which are on nodes with SkyLake architecture) you can submit for example like this:
$ qsub -l nodes=1:ppn=9:gpus=1:skylake -l partition=gpu -l pmem=5gb -A myproject myscript.pbs
Note that for 1 GPU you have to request 9 cores. If you need more GPUs, multiply the 9 cores by the number of GPUs requested; for example, for 3 GPUs you have to specify this:
$ qsub -l nodes=1:ppn=27:gpus=3:skylake -l partition=gpu -l pmem=5gb -A myproject myscript.pbs
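As with CPU jobs, the GPU resource requests can also live inside the job script as `#PBS` directives. A hedged sketch (the `nvidia-smi` call is there only to confirm which GPU(s) were allocated, not part of any required workflow):

```shell
#!/bin/bash -l
#PBS -l nodes=1:ppn=9:gpus=1:skylake
#PBS -l partition=gpu
#PBS -l pmem=5gb
#PBS -A myproject

cd $PBS_O_WORKDIR
nvidia-smi        # shows the GPU(s) assigned to this job
```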
To specifically request V100 GPUs (which are on nodes with CascadeLake architecture), you can submit for example like this:
$ qsub -l nodes=1:ppn=4:gpus=1:cascadelake -l partition=gpu -l pmem=20gb -A myproject myscript.pbs
For the V100 type of GPU, it is required that you request 4 cores for each GPU. Also notice that these nodes offer a much larger memory bank.
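The cores-per-GPU ratio is easy to get wrong when scaling up, so it helps to derive `ppn` from the number of GPUs. A small sanity-check sketch for the V100 (cascadelake) nodes:

```shell
# On the V100 (cascadelake) nodes, 4 cores must be requested per GPU,
# so derive ppn from the number of GPUs:
ngpus=2
ppn=$((ngpus * 4))
echo "ppn=${ppn}"   # for 2 GPUs: ppn=8
```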
There are different GPU compute modes available, which are explained on this documentation page.
exclusive_process: only one compute process is allowed to run on the GPU
default: shared mode available for multiple processes
exclusive_thread: only one compute thread is allowed to run on the GPU
To select the mode of your choice, you can for example submit like this:
$ qsub -l nodes=1:ppn=9:gpus=1:skylake:exclusive_process -l partition=gpu -A myproject myscript.pbs
$ qsub -l nodes=1:ppn=9:gpus=1:skylake:default -l partition=gpu -A myproject myscript.pbs
$ qsub -l nodes=1:ppn=9:gpus=1:skylake:exclusive_thread -l partition=gpu -A myproject myscript.pbs
If no mode is specified, the exclusive_process mode is selected by default.
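To verify which compute mode is actually active inside a job, `nvidia-smi` can query it directly. A sketch of such a check (assuming the NVIDIA driver tools are on the path of the GPU node):

```shell
#!/bin/bash -l
#PBS -l nodes=1:ppn=9:gpus=1:skylake:exclusive_process
#PBS -l partition=gpu
#PBS -A myproject

# Prints the compute mode (e.g. "Exclusive_Process") of the allocated GPU
nvidia-smi --query-gpu=compute_mode --format=csv,noheader
```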
Submit to a big memory node¶
The big memory nodes are also located in a separate partition. In case of the big memory nodes it is also important to add your memory requirements, for example:
$ qsub -l nodes=1:ppn=36 -l pmem=20gb -l partition=bigmem -A myproject myscript.pbs
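Keep in mind that pmem is specified per core, so the total memory footprint of the example above is 36 × 20 GB on the node. A quick sanity-check sketch:

```shell
# pmem is memory per core, so the total per-node request is ppn * pmem:
ppn=36
pmem_gb=20
total_gb=$((ppn * pmem_gb))
echo "total=${total_gb}gb"   # must fit in the node's physical memory
```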
Submit to an AMD node¶
The AMD nodes are in their own partition. Besides specifying the partition, it is also important to specify the memory per process (pmem): the AMD nodes have 256 GB of RAM, which implies that the default value is too high, and your job would never run. For example:
$ qsub -l nodes=2:ppn=64 -l pmem=3800mb -l partition=amd -A myproject myscript.pbs
This resource specification for the memory is a few GB less than 256 GB, leaving some room for the operating system to function properly.
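The arithmetic behind that value can be checked quickly, since pmem is again per core:

```shell
# pmem is per core: 64 cores at 3800 MB each stays just below 256 GB,
# leaving some headroom for the operating system:
ppn=64
pmem_mb=3800
total_mb=$((ppn * pmem_mb))
echo "total=${total_mb}mb"   # 243200 MB, i.e. roughly 237 GB
```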
Running debug jobs¶
Debugging on a busy cluster can be taxing due to long queue times. To mitigate this, two skylake CPU nodes and a skylake GPU node have been reserved for debugging purposes.
A few restrictions apply to a debug job:
it has to be submitted with -l qos=debugging,
it can use at most two nodes for CPU jobs, and a single node for GPU jobs,
its walltime is at most 30 minutes,
you can only have a single debug job in the queue at any time.
To run a debug job for 20 minutes on two CPU nodes, you would use:
$ qsub -A myproject -l nodes=2:ppn=36 -l walltime=00:20:00 \
    -l qos=debugging myscript.pbs
To run a debug job for 15 minutes on a GPU node, you would use:
$ qsub -A myproject -l nodes=1:ppn=9:gpus=1 -l partition=gpu \
    -l walltime=00:15:00 -l qos=debugging myscript.pbs