Your first MATLAB job

About this tutorial

This tutorial will guide you through running a job with MATLAB on ALICE.

What you will learn

  • Setting up the batch script for running a MATLAB script

  • Loading the necessary modules

  • Submitting your job

  • Monitoring your job

  • Collecting information about your job

What this example will not cover

  • Writing MATLAB scripts

  • Optimizing MATLAB jobs for HPC

  • Running an interactive job for MATLAB

What you should know before starting

MATLAB on ALICE and SHARK

The availability of MATLAB on ALICE and SHARK differs because MATLAB is licensed software that must be paid for.


ALICE

Leiden University has a campus license for MATLAB and the software is available on ALICE as a module.

You can find a list of available versions with

module avail MATLAB

but we recommend that you use the most recent one, as older versions might no longer work. Load your chosen version with, e.g.,:

module load MATLAB/2023b

SHARK

MATLAB on SHARK is not available to all SHARK users. Please check with the SHARK team whether your group is covered by the license. You can find a list of available versions with

module avail MATLAB

Choose a module and add it to your environment by loading it, e.g.,:
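The exact module name depends on what is installed on SHARK, so the command below uses a placeholder rather than a specific version:

```shell
# Replace <version> with one of the versions listed by "module avail MATLAB"
module load MATLAB/<version>
```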


Preparations

It is always a good idea to start by looking at the current load of the cluster when you want to submit a job. It also helps to run some short, resource-friendly tests to check that your setup works and your batch file is correct.

The “testing” partition on ALICE or the “short” partition on SHARK can be used for this purpose. The examples in this tutorial are safe to use on those partitions.

Here, we will assume that you have already created a directory called user_guide_tutorials in your $HOME from the previous tutorials. For this job, let's create a sub-directory and change into it:
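The original commands are not shown here; a minimal version could look like this, where the sub-directory name matlab_tutorial is our choice, not prescribed by the tutorial:

```shell
# Create a sub-directory for the MATLAB tutorials and change into it.
# "matlab_tutorial" is an example name; pick any name you like.
mkdir -p "$HOME/user_guide_tutorials/matlab_tutorial"
cd "$HOME/user_guide_tutorials/matlab_tutorial"
```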

A simple MATLAB job

In this example, we will create a very basic MATLAB script that just prints out some information, and then run it through a Slurm batch job.

Preparations

Directory for example

Because this section contains further tutorials, we will create a sub-directory and change into it.
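Assuming the tutorial directory from the preparations above, the commands might look like this (the name simple_matlab is illustrative):

```shell
# Create a directory for this example inside the tutorial directory.
mkdir -p "$HOME/user_guide_tutorials/matlab_tutorial/simple_matlab"
cd "$HOME/user_guide_tutorials/matlab_tutorial/simple_matlab"
```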

The MATLAB script

We will use the following MATLAB script for this example and save it as test_matlab_simple.m.
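The original script is not reproduced here; below is a minimal sketch consistent with the description that follows. It reads the number of cores that Slurm assigned to the job from the environment and prints some information. The exact content of the original script may differ:

```matlab
% test_matlab_simple.m
% Minimal example script: print some information about the job.

% Read the number of CPU cores that Slurm assigned to this job.
num_cores = str2double(getenv('SLURM_CPUS_PER_TASK'));

fprintf('Hello from MATLAB\n');
fprintf('Number of cores assigned by Slurm: %d\n', num_cores);
fprintf('MATLAB version: %s\n', version);
```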

For demonstration purposes, the script shows how to read the number of cores set for the Slurm job. The fprintf statements will write everything out to the Slurm output file.

The Slurm batch file

The next step is to create the corresponding Slurm batch file, which we will name test_matlab_simple.slurm. We will make use of the testing partition on ALICE or the short partition on SHARK. Make sure to change the partition and resource requirements for your production jobs. The running time and amount of memory have already been set to match the resources this job needs. If you do not know the requirements of your job in advance, it is best to start with a conservative estimate and then reduce the resource requirements.


ALICE
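The original batch file is not reproduced here. A sketch for ALICE consistent with this tutorial might look as follows; the running time, memory value, and echo texts are illustrative assumptions:

```shell
#!/bin/bash
#SBATCH --job-name=test_matlab_simple
#SBATCH --output=%x_%j.out
#SBATCH --mail-user="<your_email_address>"
#SBATCH --mail-type="ALL"
#SBATCH --partition=testing
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
#SBATCH --mem=1G

# Print some job information to the Slurm output file.
echo "#### Starting MATLAB test"
echo "This is $SLURM_JOB_USER and this job has the ID $SLURM_JOB_ID"
echo "This job was submitted from $SLURM_SUBMIT_DIR"

# Load the MATLAB module.
module load MATLAB/2023b

# Run the script non-interactively.
matlab -batch "test_matlab_simple"

echo "#### Finished MATLAB test"
```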


SHARK
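A sketch of the same batch file for SHARK; besides the partition, the MATLAB module version is left as a placeholder because the installed versions on SHARK may differ:

```shell
#!/bin/bash
#SBATCH --job-name=test_matlab_simple
#SBATCH --output=%x_%j.out
#SBATCH --mail-user="<your_email_address>"
#SBATCH --mail-type="ALL"
#SBATCH --partition=short
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
#SBATCH --mem=1G

# Print some job information to the Slurm output file.
echo "#### Starting MATLAB test"
echo "This is $SLURM_JOB_USER and this job has the ID $SLURM_JOB_ID"
echo "This job was submitted from $SLURM_SUBMIT_DIR"

# Replace <version> with one of the versions from "module avail MATLAB".
module load MATLAB/<version>

# Run the script non-interactively.
matlab -batch "test_matlab_simple"

echo "#### Finished MATLAB test"
```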


where you should replace <your_email_address> with your e-mail address.

The batch file will also print some information to the Slurm output file. To separate it from the output that the MATLAB script produces, we use echo statements in the batch file.

While there are different ways to run a MATLAB script non-interactively, we have used the option -batch here. It automatically prevents MATLAB from trying to start the GUI, suppresses the splash-screen output, and returns a proper exit code for the MATLAB script.

Job submission

Let us submit this MATLAB job to Slurm:
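The submission command itself is not shown in this copy of the tutorial; it is the standard sbatch call:

```shell
sbatch test_matlab_simple.slurm
```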

Immediately after you have submitted this job, you should see something like this:
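The job ID below is purely illustrative; Slurm will assign your job its own ID:

```
Submitted batch job 1234567
```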

Job output

In the directory where you launched your job, there should be a new file created by Slurm: test_matlab_simple_<jobid>.out. It contains all the output from your job that would normally have been written to the command line. Check the file for any possible error messages. The content of the file should look something like this:

The running time might differ when you run it.

You can get a quick overview of the resources actually used by your job by running:
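The command is seff, with the job ID that Slurm reported when you submitted the job:

```shell
seff <jobid>
```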

The output from seff will probably look something like this:
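The original example output is missing here; the general shape of seff's report is sketched below, with all values as placeholders rather than real measurements:

```
Job ID: <jobid>
Cluster: <cluster>
User/Group: <username>/<group>
State: COMPLETED (exit code 0)
Cores: 1
CPU Utilized: 00:00:30
CPU Efficiency: 50.00% of 00:01:00 core-walltime
Job Wall-clock time: 00:01:00
Memory Utilized: 500.00 MB
Memory Efficiency: 50.00% of 1.00 GB
```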

A MATLAB job to create a plot

In this example, we will create a MATLAB script that produces a plot and run it through a Slurm batch job.

Preparations

Directory for example

As we have done for the previous example, we will create a sub-directory and change into it.
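As before, the commands might look like this (the directory name plot_matlab is illustrative):

```shell
# Create a directory for the plot example and change into it.
mkdir -p "$HOME/user_guide_tutorials/matlab_tutorial/plot_matlab"
cd "$HOME/user_guide_tutorials/matlab_tutorial/plot_matlab"
```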

The MATLAB script

We will use the following MATLAB script for this example and save it as test_matlab_plot.m.
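The original script is not reproduced here. A minimal sketch of a script that creates a plot and writes it to a file could look like this; everything about it (data, labels, output file name) is an illustrative choice:

```matlab
% test_matlab_plot.m
% Illustrative example: create a simple plot and save it to a file.
% A batch job has no display, so the figure is written to disk.

x = linspace(0, 2*pi, 100);
y = sin(x);

fig = figure('Visible', 'off');  % no display available in a batch job
plot(x, y);
xlabel('x');
ylabel('sin(x)');
title('A simple plot created in a Slurm job');

saveas(fig, 'test_matlab_plot.png');
```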

The Slurm batch file

For this example, we will name the Slurm batch file test_matlab_plot.slurm. We can more or less re-use the job script from the previous example, but for completeness the entire batch file is shown.

Again, we will make use of the testing partition on ALICE or the short partition on SHARK. Make sure to change the partition and resource requirements for your production jobs. The running time and amount of memory have already been set to match the resources this job needs. If you do not know the requirements of your job in advance, it is best to start with a conservative estimate and then reduce the resource requirements.


ALICE
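The original batch file is not reproduced here. A sketch for ALICE, mirroring the previous example with only the job and script names changed (running time and memory remain illustrative):

```shell
#!/bin/bash
#SBATCH --job-name=test_matlab_plot
#SBATCH --output=%x_%j.out
#SBATCH --mail-user="<your_email_address>"
#SBATCH --mail-type="ALL"
#SBATCH --partition=testing
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
#SBATCH --mem=1G

echo "#### Starting MATLAB plot test"
echo "This is $SLURM_JOB_USER and this job has the ID $SLURM_JOB_ID"
echo "This job was submitted from $SLURM_SUBMIT_DIR"

module load MATLAB/2023b

matlab -batch "test_matlab_plot"

echo "#### Finished MATLAB plot test"
```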


SHARK
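The same sketch for SHARK, with the short partition and the MATLAB module version left as a placeholder:

```shell
#!/bin/bash
#SBATCH --job-name=test_matlab_plot
#SBATCH --output=%x_%j.out
#SBATCH --mail-user="<your_email_address>"
#SBATCH --mail-type="ALL"
#SBATCH --partition=short
#SBATCH --time=00:05:00
#SBATCH --ntasks=1
#SBATCH --mem=1G

echo "#### Starting MATLAB plot test"
echo "This is $SLURM_JOB_USER and this job has the ID $SLURM_JOB_ID"
echo "This job was submitted from $SLURM_SUBMIT_DIR"

# Replace <version> with one of the versions from "module avail MATLAB".
module load MATLAB/<version>

matlab -batch "test_matlab_plot"

echo "#### Finished MATLAB plot test"
```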


where you should replace <your_email_address> with your e-mail address.

Job submission

Let us submit this MATLAB job to Slurm:
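As before, the standard sbatch call:

```shell
sbatch test_matlab_plot.slurm
```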

Immediately after you have submitted this job, you should see something like this:

Job output

In the directory where you launched your job, there should be a new file created by Slurm: test_matlab_plot_<jobid>.out. It contains all the output from your job that would normally have been written to the command line. Check the file for any possible error messages. The content of the file should look something like this:

The running time might differ when you run it.

You can get a quick overview of the resources actually used by your job by running:

The output from seff will probably look something like this:

A simple parallel MATLAB job

In this example, we will create a simple MATLAB script that performs parallel calculations and run it through a Slurm batch job.

Preparations

Directory for example

As we have done for the previous example, we will create a sub-directory and change into it.
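As before, the commands might look like this (the directory name parallel_matlab is illustrative):

```shell
# Create a directory for the parallel example and change into it.
mkdir -p "$HOME/user_guide_tutorials/matlab_tutorial/parallel_matlab"
cd "$HOME/user_guide_tutorials/matlab_tutorial/parallel_matlab"
```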

The MATLAB script

We will use the following MATLAB script for this example and save it as test_matlab_parallel.m.
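The original script is not reproduced here; below is a minimal sketch consistent with the description that follows: it starts a pool of process-based workers sized to the Slurm allocation and runs a trivial parfor loop. The computation itself is an illustrative placeholder:

```matlab
% test_matlab_parallel.m
% Illustrative parallel example using a pool of process-based workers.

% Read the number of cores assigned by Slurm and start a matching pool.
num_cores = str2double(getenv('SLURM_CPUS_PER_TASK'));
pool = parpool('Processes', num_cores);

n = 16;
results = zeros(1, n);
parfor i = 1:n
    results(i) = i^2;
    % Printing inside a parfor loop is done here only to demonstrate
    % that iterations run in parallel; avoid this in production code.
    fprintf('Iteration %d computed %d\n', i, results(i));
end

fprintf('Sum of results: %d\n', sum(results));
delete(pool);  % shut down the worker pool
```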

Here, we use parpool to create a pool of process-based workers (not thread-based workers). This script is just an example of parallel processing in MATLAB and is not optimized in any way. For example, it is generally not a good idea to print information at every iteration, because the print statements are executed in parallel but write to the same file. It is done here only to demonstrate the parallelization.

The Slurm batch file

For this example, we will name the Slurm batch file test_matlab_parallel.slurm. Once again, we can re-use the job script from the previous example and adjust it slightly.

The most important change is that we set #SBATCH --cpus-per-task=4 so that Slurm assigns multiple cores (here, 4) to our job. We also switched from #SBATCH --mem to #SBATCH --mem-per-cpu to specify the amount of memory per core. Of course, we also changed the name of the job and of the MATLAB script.

Again, we will make use of the testing partition on ALICE or the short partition on SHARK. Make sure to change the partition and resource requirements for your production jobs. The running time and amount of memory have already been set to match the resources this job needs. If you do not know the requirements of your job in advance, it is best to start with a conservative estimate and then reduce the resource requirements.


ALICE
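The original batch file is not reproduced here. A sketch for ALICE with the changes described above (--cpus-per-task and --mem-per-cpu); running time and memory values are illustrative:

```shell
#!/bin/bash
#SBATCH --job-name=test_matlab_parallel
#SBATCH --output=%x_%j.out
#SBATCH --mail-user="<your_email_address>"
#SBATCH --mail-type="ALL"
#SBATCH --partition=testing
#SBATCH --time=00:10:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=1G

echo "#### Starting parallel MATLAB test"
echo "This job has the ID $SLURM_JOB_ID and uses $SLURM_CPUS_PER_TASK cores"
echo "This job was submitted from $SLURM_SUBMIT_DIR"

module load MATLAB/2023b

matlab -batch "test_matlab_parallel"

echo "#### Finished parallel MATLAB test"
```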


SHARK
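The same sketch for SHARK, with the short partition and the MATLAB module version left as a placeholder:

```shell
#!/bin/bash
#SBATCH --job-name=test_matlab_parallel
#SBATCH --output=%x_%j.out
#SBATCH --mail-user="<your_email_address>"
#SBATCH --mail-type="ALL"
#SBATCH --partition=short
#SBATCH --time=00:10:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=1G

echo "#### Starting parallel MATLAB test"
echo "This job has the ID $SLURM_JOB_ID and uses $SLURM_CPUS_PER_TASK cores"
echo "This job was submitted from $SLURM_SUBMIT_DIR"

# Replace <version> with one of the versions from "module avail MATLAB".
module load MATLAB/<version>

matlab -batch "test_matlab_parallel"

echo "#### Finished parallel MATLAB test"
```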


where you should replace <your_email_address> with your e-mail address.

Job submission

Let us submit this MATLAB job to Slurm:
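As before, the standard sbatch call:

```shell
sbatch test_matlab_parallel.slurm
```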

Immediately after you have submitted this job, you should see something like this:

Job output

In the directory where you launched your job, there should be a new file created by Slurm: test_matlab_parallel_<jobid>.out. It contains all the output from your job that would normally have been written to the command line. Check the file for any possible error messages. The content of the file should look something like this:

The running time might differ when you run it.

You can get a quick overview of the resources actually used by your job by running:

The output from seff will probably look something like this: