====== Queue | Partition setup on VSC-4 ======
On VSC-4, both the type of hardware and the quality of service (QOS) on which a job runs can be selected. Nodes of the same hardware type are grouped into partitions; the QOS defines the maximum run time of a job and the number and type of allocable nodes.

===== Hardware types =====
There is one type of compute node, which comes in three different memory versions: 96 GB, 384 GB, and 768 GB.

On VSC-4, the hardware is grouped into so-called <html><font color=#cc3300>&#x27A0; partitions</font></html>:

^ partition name ^ description ^
| mem_0096 | default, nodes with 96 GB of memory |
| mem_0384 | nodes with 384 GB of memory |
| mem_0768 | nodes with 768 GB of memory |
| adm_test | reserved for the admin team |
| jupyter | reserved for the JupyterHub |

===== Quality of service (QOS) =====

Access to node partitions is granted by the so-called <html><font color=#cc3300>&#x27A0; quality of service (QOS)</font></html>. The QOSs constrain the number of allocatable nodes and limit job wall time. The naming scheme of the QOSs is:
<code><project_type>_<memoryConfig></code>

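For illustration, a few QOS names that follow this scheme (p7xxxx stands for a placeholder project ID):
<code>
mem_0096     # ordinary project, nodes with 96 GB of memory
idle_0384    # computing time consumed, 384 GB nodes (idle queue, see below)
devel_0096   # devel queue on the 96 GB nodes
p7xxxx_0768  # private nodes project, 768 GB nodes
</code>
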
The QOSs that are assigned to a specific user can be viewed with:
<code>
sacctmgr show user `id -u` withassoc format=user,defaultaccount,account,qos%40s,defaultqos%20s
</code>
The default QOS and all usable QOSs are also shown right after login.

In general, one can distinguish between QOSs defined on the generally available compute nodes (mem_0096 / mem_0384 / mem_0768) and QOSs defined on private nodes. Furthermore, there is a distinction between projects that still have computing time available and projects whose computing time has already been consumed. In the latter case, jobs of the project run with low priority and a reduced maximum run time limit in the <html><font color=#cc3300>&#x27A0; idle queue</font></html>.

The <html><font color=#cc3300>&#x27A0; devel queue</font></html> (devel_0096) gives the user fast feedback on whether their job is running. It is possible to connect to the node where the job is actually running and to directly [[doku:monitoring|monitor]] the job, e.g., to check whether the threads/processes are doing what is expected. This is recommended before sending the job to one of the 'computing' queues.
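
A minimal sketch of the corresponding sbatch parameters, assuming a hypothetical project p7xxxx (the devel QOS runs on up to 5 nodes of the default partition):
<code>
#SBATCH --partition=mem_0096
#SBATCH --qos=devel_0096
#SBATCH --account=p7xxxx
</code>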

==== Run time limits ====

^ QOS ^ hard run time limit ^
| mem_0096 / mem_0384 / mem_0768 | 72h (3 days) |
| idle_0096 / idle_0384 / idle_0768 | 24h (1 day) |
| private queues p....._0... | up to 240h (10 days) |
| devel_0096 (up to 5 nodes available) | 10min |
The QOSs' run time limits can also be requested via the command
<code>sacctmgr show qos format=name%20s,priority,grpnodes,maxwall,description%40s</code>
SLURM allows setting a run time limit //below// the QOS's default run time limit. After the specified time has elapsed, the job is killed:
<code>#SBATCH --time=<time></code>
Acceptable time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds".
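For illustration, a few valid ''--time'' specifications (the chosen limits are arbitrary examples):
<code>
#SBATCH --time=30          # 30 minutes
#SBATCH --time=02:30:00    # 2 hours and 30 minutes
#SBATCH --time=2-12        # 2 days and 12 hours
#SBATCH --time=1-00:30:00  # 1 day and 30 minutes
</code>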

==== sbatch parameters ====
For submitting jobs, three parameters are important:

<code>
#SBATCH --partition=mem_xxxx
#SBATCH --qos=xxxxx_xxxx
#SBATCH --account=xxxxxx
</code>
The core hours will be charged to the specified account. If not specified, the default account (''sacctmgr show user `id -u` withassoc format=defaultaccount'') will be used.

=== ordinary projects ===

For ordinary projects the QOSs are:
^ QOS name ^ gives access to partition ^ description ^
| mem_0096 | mem_0096 | default |
| mem_0384 | mem_0384 | |
| mem_0768 | mem_0768 | |
| devel_0096 | 5 nodes on mem_0096 | |

== examples ==
<code>
#SBATCH --partition=mem_0096
#SBATCH --qos=mem_0096
#SBATCH --account=p7xxxx
</code>
  * Note that partition, qos, and account have to fit together.
  * If the account is not given, the default account (''sacctmgr show user `id -u` withassoc format=defaultaccount'') will be used.
  * If partition and qos are not given, the default value mem_0096 is used for both. A complete job script is sketched below.
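
As a minimal sketch, a complete job script for an ordinary project might look as follows (''job.sh'', the project ID p7xxxx, and ''./my_program'' are hypothetical placeholders):
<code>
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=mem_0096
#SBATCH --qos=mem_0096
#SBATCH --account=p7xxxx
#SBATCH --time=01:00:00

./my_program
</code>
The script is then submitted with ''sbatch job.sh''.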

=== private nodes projects ===

== example ==

<code>
#SBATCH --partition=mem_0384
#SBATCH --qos=p7xxx_xxxx
#SBATCH --account=p7xxxx
</code>