Function handles are faster in MATLAB R2023a
Roughly speaking, there are two ways we at MathWorks can speed up MATLAB. We can dive into individual functions to remove overheads, improve algorithms and so on, or we can make performance enhancements to the language itself that have the potential to speed up a range of computations across many domains. Today, I'm going to take a look at an example of the latter: function handles have been made faster as part of the R2023a release.
What are function handles?
Straight from the documentation: "A function handle is a MATLAB data type that represents a function. A typical use of function handles is to pass a function to another function. For example, you can use function handles as input arguments to functions that evaluate mathematical expressions over a range of values."
For example, say you have the function
function y = computeSquare(x)
y = x.^2;
end
We can create a handle
f = @computeSquare
and use it to call the function to compute the square of four as follows:
b = f(4)
This simple looking idea turns out to be extremely useful and you'll find function handles used all over the place in MATLAB code.
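To see the "pass a function to another function" idea from the documentation in action, here's a minimal sketch that reuses the handle f defined above. The use of arrayfun and integral here is just my choice of illustration; any function that accepts a function handle as input would do.
squares = arrayfun(f,1:5) % evaluate computeSquare at each element of 1:5
area = integral(f,0,1) % numerically integrate x.^2 from 0 to 1 (exact answer is 1/3)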
Function handle speed test
Let's use our function handle to compute the square of 100 million numbers without storing the results. It's a somewhat pointless calculation, but it will serve our purpose of demonstrating that the function handle overhead has been significantly reduced.
n = 1e8;
tic
for i = 1:n
out = f(i);
end
t = toc
On my machine, I get the following times:
- R2022b: 33 seconds
- R2023a: 0.77 seconds
That's just over 40x faster in the new release. The practical upshot is that function handles to path and local functions are now about as fast as direct function calls.
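If you'd like to check that claim on your own machine, here's a minimal sketch: time the same loop again, but call computeSquare directly rather than through the handle, and compare the result with the handle-based timing above. The variable name tDirect is just my choice for this illustration.
tic
for i = 1:n
out = computeSquare(i); % direct call, no function handle involved
end
tDirect = toc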
OK, that's great but this is an extremely artificial example. What does this mean in real life when we try to do something more useful?
Faster function handles mean faster ODE solvers
This example is inspired by the one we ship in the bench command. The vanderpol function is defined at the end of this script, and here I call it via the handle @vanderpol. I repeat the computation 5 times and report the best result.
% ODE. van der Pol equation, mu = 1
y0 = [2; 0];
tspan = [0 20000];
numRepeats = 5; % Number of times I'll repeat the computation. I'll later only report the best one.
t = zeros(1,numRepeats); % Store the timing results
for count=1:numRepeats
tic
[s,y] = ode45(@vanderpol,tspan,y0);
t(count) = toc;
end
fprintf("ode45 best time is %.2f seconds\n",min(t))
- R2022b: 0.4 seconds
- R2023a: 0.21 seconds
That's almost a 2x speed-up. It's far from the 40x speed-up we saw earlier, but that shows there's a lot more going on inside ode45 than just function calls.
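One way to get a feel for how much of the ode45 run is spent calling the ODE function is to ask the solver for its statistics. Here's an illustrative sketch using odeset; with Stats set to 'on', the solver prints the number of successful steps, failed attempts and function evaluations after it finishes.
opts = odeset('Stats','on'); % ask the solver to report its statistics
[s,y] = ode45(@vanderpol,tspan,y0,opts);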
Faster optimisation routines?
Anything that makes use of a function handle many times should see some speed-up. After considering ODEs, the next thing I thought about was mathematical optimisation. After all, optimisers call their objective function many times! Here's a simple example from Global Optimization Toolbox using simulated annealing.
numRepeats = 5; % Number of times I'll repeat the computation. I'll later only report the best one.
t = zeros(1,numRepeats); % Store the timing results
for count=1:numRepeats
rng default
x0 = [0.5 0.5];
tic
[x,fval,exitFlag,output] = simulannealbnd(@objective,x0);
t(count) = toc;
end
Let's discover the best time out of the 5 runs
fprintf("Simulated annealing best time is %.2f seconds\n",min(t))
and how many function calls were made.
fprintf("The optimiser made %d function calls\n",output.funccount);
- R2022b: 0.19 seconds
- R2023a: 0.16 seconds
This was supposed to be a demonstration of a case where faster function handles don't make a noticeable difference. The savings in function handle overhead for ~3000 evaluations are expected to be much less than even 0.01 seconds on my machine, yet this result seems to be fairly robust for this piece of code (for me at least). Granted, this isn't going to change your life, but it's heading in the right direction.
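To put a rough number on that expectation, here's a back-of-envelope estimate based on the 1e8-call loop timings from earlier. The figures are from my machine, so treat the result as illustrative rather than definitive.
perCallSaving = (33 - 0.77)/1e8; % seconds of overhead saved per handle call
estimatedSaving = perCallSaving*3000 % roughly 0.001 seconds for ~3000 calls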
Over to you
If you have some code that uses function handles a lot and have noticed any speed-up, let me know in the comments section or on Twitter.
function y = computeSquare(x)
%Used to show function handle overheads
y = x.^2;
end
function dydt = vanderpol(~,y)
%VANDERPOL Evaluate the van der Pol ODEs for mu = 1
%used for the ode45 demo
dydt = [y(2); (1-y(1)^2)*y(2)-y(1)];
end
function y = objective(x)
%used for simulated annealing demo
y = (4-2.1.*x(1).^2+x(1).^4./3).*x(1).^2+x(1).*x(2)+(-4+4.*x(2).^2).*x(2).^2;
end