Using partition cpu_lorentz

About

The partition cpu_lorentz, consisting of nodes node0[22-24], is a private partition for members of the Lorentz Institute.

The hardware configuration of the nodes can be found here: About ALICE | Hardware Description

Access

  • Users from the Lorentz Institute get priority access automatically; all other users get priority access only after confirmation from the PI

    • Priority access to the partition can be requested via e-mail to the ALICE Helpdesk, together with confirmation from the PI.

  • Users need to be a member of the group cpu_lorentz and have access to the account cpu_lorentz

    • you can check whether you are a member of the group by running the command id on the command line

    • you can check whether you have access to the account by running sacctmgr show associations user=<username>, where <username> should be replaced by your ALICE user name (see the example after this list).

  • Other LION users do not have access to the partition by default
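
For example, you could run the following two checks on a login node. The grep filters are only a convenience to pick out the relevant line; replace <username> with your ALICE user name:

# am I a member of the group cpu_lorentz?
id | grep cpu_lorentz

# do I have an association with the account cpu_lorentz?
sacctmgr show associations user=<username> | grep cpu_lorentz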

Partition Settings

Partition settings can be changed to adjust to the needs of the group. Requests for changes can be made either by the PI or by a group member with confirmation from the PI and should be sent to the ALICE Helpdesk.

Job submission

  • Users should use their account cpu_lorentz for running jobs. This is necessary so that usage of this partition does not impact the fair share of other users.

    • In your batch script, you have to add: #SBATCH --account=cpu_lorentz

      • For other jobs on ALICE, your regular ALICE account is sufficient and you do not need to set this

  • You can check which accounts you have access to with the following command (where <username> should be replaced by your ALICE username):

sacctmgr show association user=<username>
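
Putting this together, the top of a batch script for this partition would contain the two partition/account directives below in addition to your usual resource requests; the other values shown are only illustrative placeholders:

#!/bin/bash
#SBATCH --partition=cpu_lorentz
#SBATCH --account=cpu_lorentz
#SBATCH --job-name=my_job       # illustrative
#SBATCH --time=01:00:00         # illustrative, adjust to your job
#SBATCH --ntasks=1              # illustrative, adjust to your job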

Software

Scientific software stack

  • You can make use of the general scientific software stack which can be accessed by running

    module load ALICE/default

    It is recommended to add this line to your batch scripts, too.

  • If you want to use software fully optimized for the CPU architecture of the nodes, you have to build the software yourself.
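
As a minimal sketch of what building it yourself could look like, assuming a simple GCC build inside a batch or interactive job on one of the partition's nodes (my_program.c is a placeholder for your own source file):

module load ALICE/default

# -march=native optimizes the binary for the CPU of the node the job runs on
gcc -O2 -march=native -o my_program my_program.c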

Your own scripts/programmes

  • Because these nodes have a different CPU architecture than the login nodes, conda environments or other software that you build on the login nodes may not work if the software was built optimized for the login nodes' CPU architecture.

  • In this case, you need to compile such scripts/software as part of a batch or interactive job

    • One way to do this is to create a short Slurm batch job specifically for compiling your software, setting up your conda/Python environments, etc. (see the sketch after this list). If you only need to do this once, then there is no need to make this part of your production batch job.

    • Another option is to compile your programme the first time you run it as part of a job. In this first job, you copy the compiled programme back to your shared storage or home directory. In subsequent jobs, you use the already compiled version (see the example below).

  • You can still use the login nodes for testing/debugging. In this case, you need to compile on the login nodes, run your test, and then compile again on the compute node for your actual job.
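
As an illustration of the first option, a one-off setup job could look like the sketch below. The resource requests are placeholders, and the Miniconda3 module name and the environment.yml file are assumptions; check module avail and adjust everything to your own setup:

#!/bin/bash
#SBATCH --partition=cpu_lorentz
#SBATCH --account=cpu_lorentz
#SBATCH --job-name=setup_env
#SBATCH --time=00:30:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G

module load ALICE/default

# (re)create a conda environment on the compute node itself,
# so that compiled packages match its CPU architecture;
# the Miniconda3 module name is an assumption - check "module avail"
module load Miniconda3
conda env create -f environment.yml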

Example

Here is an example of what a Slurm batch script for using these nodes could look like, including a HelloWorld OpenMP program to demonstrate compiling on the node and using the local scratch storage.

If you are new to HPC, ALICE or Slurm, have a look at https://pubappslu.atlassian.net/wiki/spaces/HPCWIKI/pages/5963809 first.

Batch script

#!/bin/bash
#SBATCH --partition=cpu_lorentz
#SBATCH --account=cpu_lorentz
#SBATCH --job-name=test_job
#SBATCH --time=0-00:02:00
#SBATCH --output=%x_%j.out
#SBATCH --nodes=1
#SBATCH --ntasks=5
#SBATCH --cpus-per-task=3
#SBATCH --mem=10G
#SBATCH --mail-user="your-email-address"
#SBATCH --mail-type="ALL"

module load ALICE/default
module load OpenMPI/4.0.5-GCC-9.3.0

echo "#### Test started"

# return the name of the node
echo "## Which node is this: $HOSTNAME"

# check the number of cores (ntasks*cpus-per-task)
echo "How many cores do I have access to: ${SLURM_CPUS_ON_NODE}"

# just to check that the right software stack is loaded
echo "Am I loading from the right module path?"
echo ${MODULEPATH%%:*}

# get the current working directory
CWD=$(pwd)
echo "## Where am I: ${CWD}"

# check out the node's local scratch
echo "## My local scratch space on the node is: ${SCRATCH}"
cd $SCRATCH
echo "## Let us go there: $(pwd)"

# In case the file has already been compiled
# and stored in $CWD, the following six lines
# are not necessary
echo "## Let us copy the C script to it"
cp $CWD/omp_hello.c $SCRATCH/
echo "## Is the file there?"
ls -la omp_hello.c
echo "## Now we compile it on the node"
gcc -o omp_hello_amd -fopenmp omp_hello.c

# In case the file is already compiled,
# the next four lines would copy it
# and check that it is there:
#echo "## Let us copy the compiled C programme to it"
#cp $CWD/omp_hello_amd $SCRATCH/
#echo "## Is the file there?"
#ls -la omp_hello_amd

echo "## Let us run it"
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./omp_hello_amd

# Copy those files back to shared scratch or home
# that should be kept for later.
# Here, it is just the compiled C programme.
# It does not need to be copied back of course
# if it came from shared scratch or home.
echo "## Saving files that should be saved."
cp $SCRATCH/omp_hello_amd $CWD/

echo "## Now that this is done, I want to go home"
cd $CWD
echo "## Good to be back $(pwd)"

echo "#### Test finished"

 

OpenMP script

Here is the content of the file omp_hello.c from https://computing.llnl.gov/tutorials/openMP/samples/C/omp_hello.c