MATLAB: exercises
pojeda committed Aug 19, 2024
1 parent 605bf39 commit 923030b
Showing 7 changed files with 19 additions and 72 deletions.
23 changes: 11 additions & 12 deletions docs/software.md
@@ -6,12 +6,12 @@

Matlab is available through the menu bar if you are using the ThinLinc client (recommended). Alternatively, you can load
a Matlab module in a Linux terminal on Kebnekaise. Details for both options can be found
[here](https://www.hpc2n.umu.se/resources/software/matlab).
[here](https://www.hpc2n.umu.se/resources/software/matlab){:target="_blank"}.

### First time configuration

The first time you access Matlab on Kebnekaise, you need to configure it by following these guidelines
[Configuring Matlab](https://www.hpc2n.umu.se/resources/software/configure-matlab-2018)
[Configuring Matlab](https://www.hpc2n.umu.se/resources/software/configure-matlab-2018){:target="_blank"}.

### Tools for efficient simulations

@@ -33,25 +33,24 @@ Flow chart for more efficient Matlab code using existing tools (adapted from[^

!!! Note "Exercise 2: Matlab parallel job"

* PARFOR folder contains an example of a parallelized loop with the "parfor" directive. A pause()
* PARFOR folder contains an [example](https://raw.githubusercontent.com/hpc2n/intro-course/master/exercises/MATLAB/PARFOR/parallel_example.m){:target="_blank"} of a parallelized loop with the "parfor" directive. A pause()
function is included in the loop to make it heavy. This function can be
submitted to the queue by running the script [submit.m](https://raw.githubusercontent.com/hpc2n/intro-course/master/exercises/MATLAB/PARFOR/submit.m) in the MATLAB GUI.
submitted to the queue by running the script [submit.m](https://raw.githubusercontent.com/hpc2n/intro-course/master/exercises/MATLAB/PARFOR/submit.m){:target="_blank"} in the MATLAB GUI.
The number of workers can be set by replacing the string *FIXME* (in the "submit.m"
file) with the number you desire.

Try different values for the number of workers from 1 to 28 and take a note
Try different values for the number of workers from 1 to 10 and take a note
of the simulation time output at the end of the simulation. Where does the
code achieve its peak performance?

* SPMD folder presents an example of a parallelized code using SPMD paradigm. You
can submit this job to the queue through the MATLAB GUI.
* SPMD folder presents an example of a parallelized code using [SPMD](https://raw.githubusercontent.com/hpc2n/intro-course/master/exercises/MATLAB/SPMD/spmdex.m){:target="_blank"} paradigm. Submit this job to the queue through the MATLAB GUI. This
example illustrates the use of *parpool* to run parallel code in a more interactive manner.
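The submission pattern shared by these exercises can be sketched as follows. This is a minimal sketch, not the exercise files themselves: the `'kebnekaise'` profile name comes from the first-time configuration step, and the pool size 4 is only an illustrative value for the *FIXME* placeholder.

```matlab
% Sketch of submitting a parallel job from the MATLAB GUI (assumes the
% 'kebnekaise' cluster profile has been configured as described above).
c = parcluster('kebnekaise');
% Request 1 output, pass 32 loop iterations, and attach a pool of
% 4 workers (4 is an example value for *FIXME*).
j = c.batch(@parallel_example, 1, {32}, 'pool', 4);
j.wait                      % block until the job finishes
t = j.fetchOutputs{:}       % elapsed time returned by parallel_example
```

Note that `batch` with the `'pool'` option reserves one extra worker to run the submitted function itself, so the total allocation is the pool size plus one.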

!!! Note "Exercise 2: Matlab GPU job"
!!! Note "Exercise 3: Matlab GPU job"

GPU folder contains a test case that computes a Mandelbrot set both
on CPU [mandelcpu.m](https://raw.githubusercontent.com/hpc2n/intro-course/master/exercises/MATLAB/GPU/mandelcpu.m)
and on GPU [mandelgpu.m](https://raw.githubusercontent.com/hpc2n/intro-course/master/exercises/MATLAB/GPU/mandelgpu.m). You can submit the jobs through
the MATLAB GUI using the [submitcpu.m](https://raw.githubusercontent.com/hpc2n/intro-course/master/exercises/MATLAB/GPU/submitcpu.m) and [submitgpu.m](https://raw.githubusercontent.com/hpc2n/intro-course/master/exercises/MATLAB/GPU/submitgpu.m) files.
on CPU [mandelcpu.m](https://raw.githubusercontent.com/hpc2n/intro-course/master/exercises/MATLAB/GPU/mandelcpu.m){:target="_blank"}
and on GPU [mandelgpu.m](https://raw.githubusercontent.com/hpc2n/intro-course/master/exercises/MATLAB/GPU/mandelgpu.m){:target="_blank"}. You can submit the jobs through
the MATLAB GUI using the [submitcpu.m](https://raw.githubusercontent.com/hpc2n/intro-course/master/exercises/MATLAB/GPU/submitcpu.m){:target="_blank"} and [submitgpu.m](https://raw.githubusercontent.com/hpc2n/intro-course/master/exercises/MATLAB/GPU/submitgpu.m){:target="_blank"} files.

The final output, if everything ran well, is two .png figures
which display the timings for both architectures. Use the "eom" command on the
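The CPU/GPU pair in this exercise follows a common pattern: identical arithmetic in both versions, with the GPU version allocating its data as `gpuArray` and calling `gather()` before the timer stops, so the measured time includes the device-to-host transfer. A stripped-down sketch of that pattern (the sizes and expressions are illustrative, not the Mandelbrot code itself; it requires the Parallel Computing Toolbox and a GPU node):

```matlab
% Minimal CPU-vs-GPU timing pattern used by the exercise.
N = 1e7;
x = linspace(0, 2*pi, N);
tic; y = sin(x).^2 + cos(x).^2; cpuT = toc;       % CPU version

xg = gpuArray.linspace(0, 2*pi, N);               % data lives on the GPU
tic; yg = sin(xg).^2 + cos(xg).^2;
yg = gather(yg);                                  % copy result back to host
gpuT = toc;                                       % includes the transfer

fprintf('CPU: %.3fs  GPU: %.3fs\n', cpuT, gpuT);
```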
28 changes: 1 addition & 27 deletions exercises/MATLAB/GPU/mandelcpu.m
@@ -1,65 +1,39 @@
function [cpuTime]=mandelcpu
maxIterations = 1000;
gridSize=1000;
xlim = [-0.748766713922161, -0.748766707771757];
ylim = [ 0.123640844894862, 0.123640851045266];
t = tic();
x = linspace( xlim(1), xlim(2), gridSize );
y = linspace( ylim(1), ylim(2), gridSize );
[xGrid,yGrid] = meshgrid( x, y );
z0 = xGrid + 1i*yGrid;
count = ones( size(z0) );

% Calculate
z = z0;
for n = 0:maxIterations
    z = z.*z + z0;
    inside = abs( z )<=2;
    count = count + inside;
end

% show
count = log( count );
cpuTime = toc( t );

figure;
fig = gcf;
fig.Position = [200 200 600 600];
imagesc( x, y, count );
axis image
colormap( [jet();flipud( jet() );0 0 0] );
title( sprintf( '%1.2fsecs (CPU)', cpuTime ) );
print('out-cpu','-dpng');
end
27 changes: 1 addition & 26 deletions exercises/MATLAB/GPU/mandelgpu.m
@@ -1,65 +1,40 @@
function [nativeGPUTime]=mandelgpu
maxIterations = 1000;
gridSize=1000;
xlim = [-0.748766713922161, -0.748766707771757];
ylim = [ 0.123640844894862, 0.123640851045266];
t = tic();
x = gpuArray.linspace( xlim(1), xlim(2), gridSize );
y = gpuArray.linspace( ylim(1), ylim(2), gridSize );
[xGrid,yGrid] = meshgrid( x, y );
z0 = complex( xGrid, yGrid );
count = ones( size(z0), 'gpuArray' );

% Calculate
z = z0;
for n = 0:maxIterations
    z = z.*z + z0;
    inside = abs( z )<=2;
    count = count + inside;
end
count = log( count );

% show
count = gather( count ); % Fetch the data back from the GPU
nativeGPUTime = toc( t );

figure;
fig = gcf;
fig.Position = [200 200 600 600];
imagesc( x, y, count );
axis image
colormap( [jet();flipud( jet() );0 0 0] );
title( sprintf( '%1.2fsecs (GPU)', nativeGPUTime ) );
print('out-gpu','-dpng');
end
4 changes: 2 additions & 2 deletions exercises/MATLAB/GPU/submitcpu.m
@@ -1,7 +1,7 @@
% Get a handle to the cluster
% See the page for configuring and setup of MATLAB 2018b for details
% See the page for configuring and setup of MATLAB > 2018b for details
c=parcluster('kebnekaise')
% Run the jobs on 4 workers
% Run the job on CPU
j = c.batch(@mandelcpu, 1, {})
% Wait till the job has finished. Use j.State if you just want to poll the
% status and be able to do other things while waiting for the job to finish.
2 changes: 1 addition & 1 deletion exercises/MATLAB/GPU/submitgpu.m
@@ -1,5 +1,5 @@
% Get a handle to the cluster
% See the page for configuring and setup of MATLAB 2018b for details
% See the page for configuring and setup of MATLAB > 2018b for details
c=parcluster('kebnekaise')
% Run the jobs on 4 workers
j = c.batch(@mandelgpu, 1, {})
3 changes: 1 addition & 2 deletions exercises/MATLAB/PARFOR/parallel_example.m
@@ -1,8 +1,7 @@
function t = parallel_example(iter)
t0 = tic;

parfor idx = 1:iter
A(idx) = idx;
parfor idx = 1:iter
pause(2)
end

4 changes: 2 additions & 2 deletions exercises/MATLAB/PARFOR/submit.m
@@ -1,8 +1,8 @@
% Get a handle to the cluster
% See the page for configuring and setup of MATLAB for details
c=parcluster('kebnekaise')
% Run the jobs on 4 workers
j = c.batch(@parallel_example, 1, {32}, 'pool', *FIXME*)
% Run the jobs on X workers
j = c.batch(@parallel_example, 1, {7}, 'pool', *FIXME*)
% Wait till the job has finished. Use j.State if you just want to poll the
% status and be able to do other things while waiting for the job to finish.
j.wait
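The comment in submit.m mentions polling `j.State` as an alternative to blocking on `j.wait`; a sketch of that pattern (the 10-second interval is an arbitrary choice):

```matlab
% Hypothetical polling loop instead of j.wait: check the job state
% periodically and stay free to do other work in between.
while ~strcmp(j.State, 'finished')
    pause(10);              % poll every 10 seconds
end
t = j.fetchOutputs{:}       % collect the function's return value
```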
