MATLAB on ALICE and SHARK
The availability of MATLAB on ALICE and SHARK differs because MATLAB is commercially licensed software.
ALICE
Leiden University has a campus license for MATLAB and the software is available on ALICE as a module.
You can find a list of available versions with
Code Block
module avail MATLAB
but we recommend that you use the most recent one; older versions might not work anymore. You can then load it with, e.g.:
Code Block
module load MATLAB/2023b
SHARK
MATLAB on SHARK is not available to all SHARK users. Please check with the SHARK team whether your group is covered by the license. You can list the available versions with:
Code Block
module avail MATLAB
Choose a module and add it to your environment by loading it, e.g.:
Code Block
module load statistical/MATLAB/R2021a
Preparations
It is always a good idea to check the load of the cluster before you submit a job. It also helps to run some short, resource-friendly tests to confirm that your setup is working and that your batch file is correct.
The “testing” partition on ALICE or the “short” partition on SHARK can be used for this purpose. The examples in this tutorial are safe to run on those partitions.
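For example, you can get a quick impression of how busy a partition is with standard Slurm commands; a minimal sketch for the ALICE “testing” partition (use “short” on SHARK):
Code Block
# show the state of the nodes in the partition
sinfo --partition=testing
# show jobs that are currently queued or running in the partition
squeue --partition=testing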
Here, we will assume that you have already created a directory called user_guide_tutorials
in your $HOME
from the previous tutorials. For this job, let's create a sub-directory and change into it:
Code Block
mkdir -p $HOME/user_guide_tutorials/first_matlab_job
cd $HOME/user_guide_tutorials/first_matlab_job
A simple MATLAB job
In this example, we will create a very basic MATLAB script that just prints out some information and then run it through a Slurm batch job.
Preparations
Directory for example
Because this tutorial contains several examples, we will create a sub-directory for this example and change into it.
Code Block
mkdir -p $HOME/user_guide_tutorials/first_matlab_job/test_matlab_simple
cd $HOME/user_guide_tutorials/first_matlab_job/test_matlab_simple
The MATLAB script
We will use the following MATLAB script for this example and save it as test_matlab_simple.m.
Code Block
% Example for a simple MATLAB script
fprintf('MATLAB script started\n');

% getting the number of cores set for the job
cpus = str2num(getenv("SLURM_CPUS_PER_TASK"));
fprintf('Number of CPUS from Slurm job: %g\n', cpus);

% Just saying hello here
fprintf('Hello World from MATLAB\n');

fprintf('MATLAB script finished\n');

exit;
For demonstration purposes, the script shows how to read the number of cores set for the Slurm job. The fprintf statements will write everything out to the Slurm output file.
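Note that getenv returns an empty string when the script runs outside of a Slurm job (for example, interactively on a login node), in which case str2num returns an empty matrix. A minimal defensive variant might look like this (a sketch, not part of the example above):
Code Block
% sketch: fall back to a single core when SLURM_CPUS_PER_TASK is not set
cpus = str2num(getenv("SLURM_CPUS_PER_TASK"));
if isempty(cpus)
    cpus = 1;
end
fprintf('Number of CPUS: %g\n', cpus);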
The Slurm batch file
The next step is to create the corresponding Slurm batch file, which we will name test_matlab_simple.slurm. We will make use of the testing partition on ALICE or the short partition on SHARK. Make sure to change the partition and resource requirements for your production jobs. The running time and amount of memory have already been set to fit the resources that this job needs. If you do not know the requirements of your own job in advance, it is best to start with a conservative estimate and then reduce the resource requirements.
ALICE
Code Block
#!/bin/bash
#SBATCH --job-name=test_matlab_simple
#SBATCH --output=%x_%j.out
#SBATCH --mail-user="<your_email_address>"
#SBATCH --mail-type="ALL"
#SBATCH --mem=1G
#SBATCH --time=00:05:00
#SBATCH --partition=testing
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1

# load modules (assuming you start from the default environment)
# we explicitly call the modules to improve reproducibility
# in case the default settings change
module load MATLAB/2023b

echo "[$SHELL] #### Starting MATLAB test"
echo "[$SHELL] ## This is $SLURM_JOB_USER on $HOSTNAME and this job has the ID $SLURM_JOB_ID"

# get the current working directory
export CWD=$(pwd)
echo "[$SHELL] ## current working directory: "$CWD

# Run the file
echo "[$SHELL] ## Run MATLAB script"
# there are different ways to start a MATLAB script
# here we use -batch
# just give the name of the script without ".m"
matlab -batch test_matlab_simple

echo "[$SHELL] ## Script finished"
echo "[$SHELL] #### Finished MATLAB test. Have a nice day"
SHARK
Code Block
#!/bin/bash
#SBATCH --job-name=test_matlab_simple
#SBATCH --output=%x_%j.out
#SBATCH --mail-user="<your_email_address>"
#SBATCH --mail-type="ALL"
#SBATCH --mem=1G
#SBATCH --time=00:05:00
#SBATCH --partition=short
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1

# load modules (assuming you start from the default environment)
# we explicitly call the modules to improve reproducibility
# in case the default settings change
module load statistical/MATLAB/R2021a

echo "[$SHELL] #### Starting MATLAB test"
echo "[$SHELL] ## This is $SLURM_JOB_USER on $HOSTNAME and this job has the ID $SLURM_JOB_ID"

# get the current working directory
export CWD=$(pwd)
echo "[$SHELL] ## current working directory: "$CWD

# Run the file
echo "[$SHELL] ## Run MATLAB script"
# there are different ways to start a MATLAB script
# here we use -batch
# just give the name of the script without ".m"
matlab -batch test_matlab_simple

echo "[$SHELL] ## Script finished"
echo "[$SHELL] #### Finished MATLAB test. Have a nice day"
where you should replace <your_email_address>
with your actual e-mail address.
The batch file will also print out some information to the Slurm output file. To separate the output from what the MATLAB script will produce, we use [$SHELL]
here.
While there are different ways to run a MATLAB script non-interactively, here we have used the option -batch. It prevents MATLAB from trying to start the GUI, suppresses the splash-screen output, and returns a proper exit code from the MATLAB script.
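For comparison, on MATLAB releases that predate the -batch option, a common pattern is to combine -nodisplay, -nosplash and -r with a try/catch so that errors still lead to a non-zero exit code. A hedged sketch (note that the test script above already calls exit itself, so the final exit(0) is only a fallback):
Code Block
# sketch: older-style non-interactive start without -batch
matlab -nodisplay -nosplash -r "try, test_matlab_simple; catch err, disp(getReport(err)); exit(1); end; exit(0)"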
Job submission
Let us submit this MATLAB job to Slurm:
Code Block
sbatch test_matlab_simple.slurm
Immediately after you have submitted this job, you should see something like this:
Code Block
[me@<node_name> test_matlab_simple]$ sbatch test_matlab_simple.slurm
Submitted batch job <job_id>
Job output
In the directory where you launched your job, there should be a new file created by Slurm: test_matlab_simple_<jobid>.out. It contains all the output from your job that would normally have been written to the command line. Check the file for any error messages. The content of the file should look something like this:
Code Block
[/bin/bash] #### Starting MATLAB test
[/bin/bash] ## This is <username> on nodelogin01 and this job has the ID <jobid>
[/bin/bash] ## current working directory: /home/<username>/user_guide_tutorials/first_matlab_job/test_matlab_simple
[/bin/bash] ## Run MATLAB script
MATLAB script started
Number of CPUS from Slurm job: 1
Hello World from MATLAB
MATLAB script finished
[/bin/bash] ## Script finished
[/bin/bash] #### Finished MATLAB test. Have a nice day
The running time might differ when you run it.
You can get a quick overview of the resources actually used by your job by running:
Code Block
seff <job_id>
The output from seff
will probably look something like this:
Code Block
Job ID: <jobid>
Cluster: <cluster_name>
User/Group: <user_name>/<group_name>
State: COMPLETED (exit code 0)
Cores: 1
CPU Utilized: 00:00:11
CPU Efficiency: 84.62% of 00:00:13 core-walltime
Job Wall-clock time: 00:00:13
Memory Utilized: 1.30 MB
Memory Efficiency: 0.13% of 1.00 GB
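If you need more detail than seff provides, you can also query Slurm's accounting database directly with sacct; a sketch with a selection of standard fields (which fields are most useful depends on your cluster's configuration):
Code Block
sacct -j <job_id> --format=JobID,JobName,Partition,Elapsed,TotalCPU,MaxRSS,State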
A MATLAB job to create a plot
In this example, we will create a MATLAB script that produces a plot and run it through a Slurm batch job.
Preparations
Directory for example
As we have done for the previous example, we will create a sub-directory and change into it.
Code Block
mkdir -p $HOME/user_guide_tutorials/first_matlab_job/test_matlab_plot
cd $HOME/user_guide_tutorials/first_matlab_job/test_matlab_plot
The MATLAB script
We will use the following MATLAB script for this example and save it as test_matlab_plot.m.
Code Block
% Simple MATLAB script to create a plot
fprintf("Generating data\n");

FS=20;
Cgray = gray(10);

alpha=4.0;
beta=1.0;
delta=linspace(1,50,1000);
r=linspace(0.01,10,1000);

TD=sqrt(alpha.*delta)+alpha./delta-beta;
TF=2*sqrt(alpha)-beta;
TT=alpha/beta/beta;

YY2=sqrt(alpha./delta)-beta+alpha./beta;
YY3=alpha./beta;

fprintf("Creating figure\n");
figure('position',[100 100 600 500]);
hold on;

plot(TD,delta,'k-','linewidth',2)
xline(TF,'-r','linewidth',2);
% xline(TT,'--g','linewidth',2);
plot(YY2,delta,'k-','linewidth',2)
xline(YY3,'--','linewidth',2)

fill([TD YY2(end:-1:1)], [delta delta(end:-1:1)],Cgray(9,:))

xlabel('global density, $\langle\tau\rangle$','Interpreter','latex','fontsize',FS)
ylabel('diffusion ratio, $\delta$','Interpreter','latex','fontsize',FS)
xlim([0 15])

text(4.5,30,'spinodal region','fontsize',FS,'Interpreter','latex');

box on
set(gca,'fontsize',FS,'linewidth',2,'xminortick','off','yminortick','off',...
    'ticklength',[0.020 0.01])
% set(gca,'fontsize',FS,'YScale','log')
set(gca, 'Layer', 'top');

fprintf("Saving figure\n");
saveas(gca, 'Fig.pdf');

fprintf("Done\n");
exit;
The Slurm batch file
For this example, we will name the Slurm batch file test_matlab_plot.slurm. We can more or less re-use the job script from the previous example, but for completeness the entire batch file is shown.
Again, we will make use of the testing partition on ALICE or the short partition on SHARK. Make sure to change the partition and resource requirements for your production jobs. The running time and amount of memory have already been set to fit the resources that this job needs. If you do not know the requirements in advance, it is best to start with a conservative estimate and then reduce the resource requirements.
ALICE
Code Block
#!/bin/bash
#SBATCH --job-name=test_matlab_plot
#SBATCH --output=%x_%j.out
#SBATCH --mail-user="<your_email_address>"
#SBATCH --mail-type="ALL"
#SBATCH --mem=1G
#SBATCH --time=00:05:00
#SBATCH --partition=testing
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1

# load modules (assuming you start from the default environment)
# we explicitly call the modules to improve reproducibility
# in case the default settings change
module load MATLAB/2023b

echo "[$SHELL] #### Starting MATLAB test"
echo "[$SHELL] ## This is $SLURM_JOB_USER on $HOSTNAME and this job has the ID $SLURM_JOB_ID"

# get the current working directory
export CWD=$(pwd)
echo "[$SHELL] ## current working directory: "$CWD

# Run the file
echo "[$SHELL] ## Run MATLAB script"
# there are different ways to start a MATLAB script
# here we use -batch
# just give the name of the script without ".m"
matlab -batch test_matlab_plot

echo "[$SHELL] ## Script finished"
echo "[$SHELL] #### Finished MATLAB test. Have a nice day"
SHARK
Code Block
#!/bin/bash
#SBATCH --job-name=test_matlab_plot
#SBATCH --output=%x_%j.out
#SBATCH --mail-user="<your_email_address>"
#SBATCH --mail-type="ALL"
#SBATCH --mem=1G
#SBATCH --time=00:05:00
#SBATCH --partition=short
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1

# load modules (assuming you start from the default environment)
# we explicitly call the modules to improve reproducibility
# in case the default settings change
module load statistical/MATLAB/R2021a

echo "[$SHELL] #### Starting MATLAB test"
echo "[$SHELL] ## This is $SLURM_JOB_USER on $HOSTNAME and this job has the ID $SLURM_JOB_ID"

# get the current working directory
export CWD=$(pwd)
echo "[$SHELL] ## current working directory: "$CWD

# Run the file
echo "[$SHELL] ## Run MATLAB script"
# there are different ways to start a MATLAB script
# here we use -batch
# just give the name of the script without ".m"
matlab -batch test_matlab_plot

echo "[$SHELL] ## Script finished"
echo "[$SHELL] #### Finished MATLAB test. Have a nice day"
where you should replace <your_email_address>
with your actual e-mail address.
Job submission
Let us submit this MATLAB job to Slurm:
Code Block
sbatch test_matlab_plot.slurm
Immediately after you have submitted this job, you should see something like this:
Code Block
[me@<node_name> test_matlab_plot]$ sbatch test_matlab_plot.slurm
Submitted batch job <job_id>
Job output
In the directory where you launched your job, there should be a new file created by Slurm: test_matlab_plot_<jobid>.out. It contains all the output from your job that would normally have been written to the command line. Check the file for any error messages. The content of the file should look something like this:
Code Block
[/bin/bash] #### Starting MATLAB test
[/bin/bash] ## This is <username> on nodelogin01 and this job has the ID <jobid>
[/bin/bash] ## current working directory: /home/<username>/user_guide_tutorials/first_matlab_job/test_matlab_plot
[/bin/bash] ## Run MATLAB script
Generating data
Creating figure
Warning: MATLAB has disabled some advanced graphics rendering features by switching to software OpenGL. For more information, click <a href="matlab:opengl('problems')">here</a>.
Saving figure
Done
[/bin/bash] ## Script finished
[/bin/bash] #### Finished MATLAB test. Have a nice day
The running time might differ when you run it.
You can get a quick overview of the resources actually used by your job by running:
Code Block
seff <job_id>
The output from seff
will probably look something like this:
Code Block
Job ID: <jobid>
Cluster: <cluster_name>
User/Group: <user_name>/<group_name>
State: COMPLETED (exit code 0)
Cores: 1
CPU Utilized: 00:00:20
CPU Efficiency: 95.24% of 00:00:21 core-walltime
Job Wall-clock time: 00:00:21
Memory Utilized: 1.30 MB
Memory Efficiency: 0.13% of 1.00 GB
A simple parallel MATLAB job
In this example, we will create a simple MATLAB script that performs parallel calculations and run it through a Slurm batch job.
Preparations
Directory for example
As we have done for the previous example, we will create a sub-directory and change into it.
Code Block
mkdir -p $HOME/user_guide_tutorials/first_matlab_job/test_matlab_parallel
cd $HOME/user_guide_tutorials/first_matlab_job/test_matlab_parallel
The MATLAB script
We will use the following MATLAB script for this example and save it as test_matlab_parallel.m.
Code Block
% Example of a parallel MATLAB script with parpool
% Based on https://researchcomputing.princeton.edu/support/knowledge-base/matlab#slurm

cpus = str2num(getenv("SLURM_CPUS_PER_TASK"));
fprintf('Number of CPUS from Slurm job: %g\n', cpus);

cluster = parcluster;
poolobj = parpool(cluster,cpus-1);
fprintf('Number of workers: %g\n', poolobj.NumWorkers);

n = 20;
A = 50;
a = zeros(n);

fprintf('Starting parallel processing\n');
parfor (i=1:n,cluster)
    start_time = datetime;
    a(i) = max(abs(eig(rand(A)))); % just for demonstration purposes
    fprintf('Iteration %g started %s: %g (took %gs)\n', i, start_time, a(i), seconds(datetime - start_time));
end
fprintf('Finished parallel processing\n');

exit;
Here, we use parpool
to create a pool of process-based workers (not thread-based workers). This script is only an example of parallel processing in MATLAB and is not optimized in any way. For example, it is generally not a good idea to print information at every iteration, because the print statements are executed in parallel but all write to the same file; this is done here only to demonstrate the parallelization.
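If your code only uses functionality that is supported on thread-based workers, a thread pool can be a lighter-weight alternative (lower memory footprint and faster start-up). A minimal sketch, assuming a MATLAB release of R2020a or newer:
Code Block
% sketch: thread-based pool instead of process-based workers
% the pool size defaults to the number of cores MATLAB detects for the job
poolobj = parpool("Threads");
fprintf('Number of thread workers: %g\n', poolobj.NumWorkers);

n = 20;
a = zeros(1,n);
parfor i = 1:n
    a(i) = max(abs(eig(rand(50))));
end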
The Slurm batch file
For this example, we will name the Slurm batch file test_matlab_parallel.slurm. Once again, we can re-use the job script from the previous example and adjust it slightly.
The most important change is that we set #SBATCH --cpus-per-task=4
so that Slurm assigns multiple cores (here 4) to our job. We also switched from #SBATCH --mem
to #SBATCH --mem-per-cpu
to specify the amount of memory. Of course, we also changed the name of the job and MATLAB script.
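In short, the header lines that differ from the single-core example are (the values used in the batch files below):
Code Block
#SBATCH --mem-per-cpu=4G      # memory is requested per allocated core instead of per job
#SBATCH --cpus-per-task=4     # four cores for the single task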
Again, we will make use of the testing partition on ALICE or the short partition on SHARK. Make sure to change the partition and resource requirements for your production jobs. The running time and amount of memory have already been set to fit the resources that this job needs. If you do not know the requirements in advance, it is best to start with a conservative estimate and then reduce the resource requirements.
ALICE
Code Block
#!/bin/bash
#SBATCH --job-name=test_matlab_parallel
#SBATCH --output=%x_%j.out
#SBATCH --mail-user="<your_email_address>"
#SBATCH --mail-type="ALL"
#SBATCH --mem-per-cpu=4G
#SBATCH --time=00:05:00
#SBATCH --partition=testing
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4

# load modules (assuming you start from the default environment)
# we explicitly call the modules to improve reproducibility
# in case the default settings change
module load MATLAB/2023b

echo "[$SHELL] #### Starting MATLAB test"
echo "[$SHELL] ## This is $SLURM_JOB_USER on $HOSTNAME and this job has the ID $SLURM_JOB_ID"

# get the current working directory
export CWD=$(pwd)
echo "[$SHELL] ## current working directory: "$CWD

# Run the file
echo "[$SHELL] ## Run MATLAB script"
matlab -batch test_matlab_parallel

echo "[$SHELL] ## Script finished"
echo "[$SHELL] #### Finished MATLAB test. Have a nice day"
SHARK
Code Block
#!/bin/bash
#SBATCH --job-name=test_matlab_parallel
#SBATCH --output=%x_%j.out
#SBATCH --mail-user="<your_email_address>"
#SBATCH --mail-type="ALL"
#SBATCH --mem-per-cpu=4G
#SBATCH --time=00:05:00
#SBATCH --partition=short
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4

# load modules (assuming you start from the default environment)
# we explicitly call the modules to improve reproducibility
# in case the default settings change
module load statistical/MATLAB/R2021a

echo "[$SHELL] #### Starting MATLAB test"
echo "[$SHELL] ## This is $SLURM_JOB_USER on $HOSTNAME and this job has the ID $SLURM_JOB_ID"

# get the current working directory
export CWD=$(pwd)
echo "[$SHELL] ## current working directory: "$CWD

# Run the file
echo "[$SHELL] ## Run MATLAB script"
matlab -batch test_matlab_parallel

echo "[$SHELL] ## Script finished"
echo "[$SHELL] #### Finished MATLAB test. Have a nice day"
where you should replace <your_email_address>
with your actual e-mail address.
Job submission
Let us submit this MATLAB job to Slurm:
Code Block
sbatch test_matlab_parallel.slurm
Immediately after you have submitted this job, you should see something like this:
Code Block
[me@<node_name> test_matlab_parallel]$ sbatch test_matlab_parallel.slurm
Submitted batch job <job_id>
Job output
In the directory where you launched your job, there should be a new file created by Slurm: test_matlab_parallel_<jobid>.out. It contains all the output from your job that would normally have been written to the command line. Check the file for any error messages. The content of the file should look something like this:
Code Block
[/bin/bash] #### Starting MATLAB test
[/bin/bash] ## This is <username> on nodelogin01 and this job has the ID <jobid>
[/bin/bash] ## current working directory: /home/<username>/user_guide_tutorials/first_matlab_job/test_matlab_parallel
[/bin/bash] ## Run MATLAB script
Number of CPUS from Slurm job: 4
Starting parallel pool (parpool) using the 'Processes' profile ...
Connected to the parallel pool (number of workers: 4).
Number of workers: 4
Starting parallel processing
Iteration 2 started 10-Nov-2023 14:12:35: 25.1099 (took 0.257216s)
Iteration 4 started 10-Nov-2023 14:12:39: 24.9113 (took 0.216974s)
Iteration 1 started 10-Nov-2023 14:12:40: 25.0373 (took 0.22236s)
Iteration 3 started 10-Nov-2023 14:12:41: 24.9415 (took 0.244377s)
Iteration 5 started 10-Nov-2023 14:12:54: 24.789 (took 0.173497s)
Iteration 6 started 10-Nov-2023 14:13:02: 25.1175 (took 0.195869s)
Iteration 7 started 10-Nov-2023 14:13:04: 24.9419 (took 0.268012s)
Iteration 8 started 10-Nov-2023 14:13:06: 25.5467 (took 0.241785s)
Iteration 9 started 10-Nov-2023 14:13:16: 25.0073 (took 0.194753s)
Iteration 10 started 10-Nov-2023 14:13:25: 24.7405 (took 0.289914s)
Iteration 11 started 10-Nov-2023 14:13:25: 24.7002 (took 0.182165s)
Iteration 12 started 10-Nov-2023 14:13:25: 25.2093 (took 0.184685s)
Iteration 13 started 10-Nov-2023 14:13:35: 24.9931 (took 0.17234s)
Iteration 14 started 10-Nov-2023 14:13:43: 25.3847 (took 0.207539s)
Iteration 15 started 10-Nov-2023 14:13:43: 24.6339 (took 0.184508s)
Iteration 16 started 10-Nov-2023 14:13:43: 25.1662 (took 0.203561s)
Iteration 17 started 10-Nov-2023 14:13:52: 25.0236 (took 0.210807s)
Iteration 18 started 10-Nov-2023 14:14:00: 24.7586 (took 0.153341s)
Iteration 19 started 10-Nov-2023 14:14:00: 25.3056 (took 0.199779s)
Iteration 20 started 10-Nov-2023 14:14:00: 25.2567 (took 0.159019s)
Finished parallel processing
[/bin/bash] ## Script finished
[/bin/bash] #### Finished MATLAB test. Have a nice day
The running time might differ when you run it.
You can get a quick overview of the resources actually used by your job by running:
Code Block
seff <job_id>
The output from seff
will probably look something like this:
Code Block
Job ID: <jobid>
Cluster: <cluster_name>
User/Group: <user_name>/<group_name>
State: COMPLETED (exit code 0)
Nodes: 1
Cores per node: 4
CPU Utilized: 00:05:53
CPU Efficiency: 92.89% of 00:06:20 core-walltime
Job Wall-clock time: 00:01:35
Memory Utilized: 11.02 GB
Memory Efficiency: 68.85% of 16.00 GB