Partitions on ALICE

Partitions (2025)

The following table lists the new partition layout as of October 2025. As ALICE has grown more heterogeneous over time, this layout focuses on hardware type rather than job runtime. We do encourage you to set the expected runtime in your jobs to improve job scheduling.
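
For example, a minimal batch script for this layout could look like the sketch below. The job name, resource numbers, and ./my_program are placeholders for illustration, not ALICE-specific values:

```bash
#!/bin/bash
#SBATCH --job-name=demo_job       # hypothetical job name
#SBATCH --partition=cpu-zen4      # pick the hardware-based partition you need
#SBATCH --time=02:00:00           # expected runtime, well under the 7-day partition limit
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G

# Placeholder for the real workload
srun ./my_program
```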

| Partition | Nodes | Hardware | Limits | Features | Slurm | Description |
|---|---|---|---|---|---|---|
| cpu-short | all | | time: 4 hours | ib, Intel.Skylake, AMD.Zen4, AMD.Zen3 | | For short CPU jobs |
| cpu-zen4 | 13 | EPYC 9554P, 64 cores, 128 threads, 384G mem | time: 7 days | | | Thin nodes |
| cpu-skylake | 20 | dual Xeon 6126 @ 2.60GHz, 24 cores, 48 threads, 384G mem | time: 7 days | ib | | Thin nodes |
| gpu-short | all gpu | | time: 4 hours | | `--gres=gpu:1` or `--gres=gpu:<type>:<number>`, e.g. `--gres=gpu:2080_ti:4`, `--gres=gpu:l4:2`, `--gres=gpu:4g.40gb:1` | Short GPU jobs. Note: `--exclusive` cannot be used with `--gres`. Available GPU types: https://pubappslu.atlassian.net/wiki/spaces/HPCWIKI/pages/37519378/About+ALICE#GPU-overview |
| gpu-l4-24g | 8 | 4x L4 | time: 7 days | | `--gres=gpu:1` | GPU nodes |
| gpu-2080ti-11g | | 4x 2080 Ti, dual Xeon 6126 @ 2.60GHz, 24 cores, 48 threads, 384G mem | time: 7 days | | `--gres=gpu:1` | GPU nodes |
| gpu-mig-40g | | | time: 7 days | | `--gres=gpu:1` | Partial A100 GPU |
| gpu-a100-80g | | | time: 7 days | | `--gres=gpu:1` | Large GPU nodes |
| mem | 2 | EPYC 7662, 128 cores, 256 threads, 4TB mem | time: 14 days | AMD.Zen3, AMD.Zen4 | | High-memory nodes |
| testing | | | time: 30 min; running jobs: 4 | | | For testing and debugging jobs |
| interactive | | | time: 8 hours; running jobs: 4 | Intel.Skylake, AMD.Zen4 | `--gres=gpu:1`, `--gres=gpu:2080_ti:1`, `--gres=gpu:4g.40gb:1` | For interactive jobs |
| cpu_natbio | 1 | | time: 30 days | | | Private |
| cpu_lorentz | 2 | | time: 7 days | | | Private |
| mem_mi | 1 | | time: 4 days; qos: 30 days | | | Private |
| gpu_strw | 2 | | time: 7 days; qos: 14 days | | `--account=gpu_strw` | Private |
| gpu_lucdh | 1 | | time: 14 days | | `--account=gpu_lucdh` | Private |
| gpu_lion | 1 | | time: 14 days | | `--account=gpu_lion` | Private |
| gpu_cml | 1 | | time: 14 days | | `--account=gpu_cml` | Private |
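
To illustrate the Slurm column above, a short GPU job could request a single RTX 2080 Ti as sketched below; the resource numbers and train.py are placeholders, not part of ALICE's documentation:

```bash
#!/bin/bash
#SBATCH --partition=gpu-short
#SBATCH --time=01:00:00           # gpu-short allows at most 4 hours
#SBATCH --gres=gpu:2080_ti:1      # one RTX 2080 Ti; plain --gres=gpu:1 takes any available type
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G

# Placeholder workload
srun python train.py
```

The same `--gres` syntax works for interactive sessions, e.g. `srun --partition=interactive --time=1:00:00 --gres=gpu:1 --pty bash` (again a sketch of a common Slurm pattern, not a prescribed ALICE command).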

Notes:

  • Scheduling is mostly based on CPUs/cores. If your job requires a lot of memory, consider running it in the mem partition or reserving a full node with the Slurm option --exclusive (see the sketch after these notes).

  • The Slurm option --gres cannot be used in combination with --exclusive.
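
To make the first note concrete, a high-memory job could claim a full node in the mem partition as sketched below; ./my_analysis is a placeholder. Because --gres and --exclusive conflict, this pattern is only suitable for CPU work:

```bash
#!/bin/bash
#SBATCH --partition=mem
#SBATCH --time=1-00:00:00         # well under the 14-day partition limit
#SBATCH --exclusive               # reserve the whole node for this job
#SBATCH --ntasks=1

# Placeholder for a memory-hungry workload; --exclusive rules out --gres,
# so this pattern is for CPU-only jobs
srun ./my_analysis
```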

List of Partitions (previous)

The following table gives an overview of all partitions in the previous layout on ALICE. Below it is an overview of the additional limits set for each partition.

| Partition | Timelimit | Default Timelimit | Default Memory per CPU | GPU available | Nodes | Nodelist | Description |
|---|---|---|---|---|---|---|---|
| testing | 1:00:00 | | 10000 MB | | 2 | nodelogin[01-02] | For some basic and short testing of batch scripts. Each login node is equipped with an NVIDIA Tesla T4 which can be used to test GPU jobs. |
| cpu-short | 4:00:00 | 01:00:00 | 4000 MB | | 44 | node[001-020,801-802,853-860,863-876] | For jobs that require CPU nodes and not more than 4h of running time. This is the default partition. |
| cpu-medium | 1-00:00:00 | 01:00:00 | 4000 MB | | 30 | node[002-020,866-876] | For jobs that require CPU nodes and not more than 1d of running time. |
| cpu-long | 7-00:00:00 | 01:00:00 | 4000 MB | | 28 | node[003-020,867-876] | For jobs that require CPU nodes and not more than 7d of running time. |
| gpu-short | 4:00:00 | 01:00:00 | 4000 MB | | 24 | node[851-860,863-876] | For jobs that require GPU nodes and not more than 4h of running time. |
| gpu-medium | 1-00:00:00 | 01:00:00 | 4000 MB | | 23 | node[852-860,863-876] | For jobs that require GPU nodes and not more than 1d of running time. |
| gpu-long | 7-00:00:00 | 01:00:00 | 4000 MB | | 18 | node[853-860,864-872,876] | For jobs that require GPU nodes and not more than 7d of running time. |
| mem | 14-00:00:00 | 01:00:00 | 85369 MB | | 1 | node801 | For jobs that require the high-memory node. |
| mem_mi | 4-00:00:00 | 01:00:00 | 31253 MB | | 1 | node802 | Partition only available to MI researchers. Default running time is 4h. |
| cpu_lorentz | 7-00:00:00 | 01:00:00 | 4027 MB | | 3 | node0[22-23] | Partition only available to researchers from the Lorentz Institute. |
| cpu_natbio | 30-00:00:00 | 01:00:00 | 23552 MB | | 1 | node021 | Partition only available to researchers from the group of B. Wielstra. |
| gpu_strw | 7-00:00:00 | 01:00:00 | 2644 MB | | 2 | node86[1-2] | Partition only available to researchers from the group of E. Rossi. |
| gpu_lucdh | 14-00:00:00 | 01:00:00 | 4000 MB | | 1 | node877 | Partition only available to researchers from LUCDH. |

You can find the GPUs available on each node here: https://pubappslu.atlassian.net/wiki/spaces/HPCWIKI/pages/37519378/About+ALICE#GPU-overview, and how to request them further below on this page.
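
If you want to query the current partition layout and limits yourself, the standard Slurm tools should work on ALICE as they do on most Slurm clusters; a sketch:

```bash
# List every partition with its time limit, node count, and node list
sinfo -o "%P %l %D %N"

# Show the full configuration of one partition, including limits and defaults
scontrol show partition gpu-short
```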

Partition Limits

The following limits currently apply to each partition:

| Partition | Max CPUs per Node | Max Memory per Node | Max GPUs per Node | #Allocated CPUs per User (running jobs) | #Allocated GPUs per User (running jobs) | #Jobs submitted per User |
|---|---|---|---|---|---|---|
| cpu-short | 24 - 32 | 247G - 1T | - | 288 | | |
| cpu-medium | 24 - 32 | 246G - 376G | - | 240 | | |
| cpu-long | 24 - 32 | 246G - 376G | - | 192 | | |
| gpu-short | 24 - 64 | 246G - 373G | 2 - 4 | 168 | 28 | |
| gpu-medium | 24 - 64 | 246G - 373G | 2 - 4 | 120 | 20 | |
| gpu-long | 24 - 64 | 246G - 373G | 2 - 4 | 96 | 16 | |
| mem | 24 | 2000G | | | | |
| mem_mi | 128 | 3906G | | | | |
| cpu_natbio | 32 | 752G | | | | |