Release 2007a came out today. This corresponds to MATLAB version 7.4. I will list here a few of my favorite new MATLAB features. I will discuss some of them in future blogs.
MATLAB has multithreaded computation support for many linear algebra and element-wise numeric operations, allowing performance improvement on multicore and multiprocessor systems.
When using the MATLAB Editor, for certain messages about your code, you now have the option, through a right-click menu, to replace the code with the recommendation.
When writing a function that needs to be robust, there's often a lot of code up front to validate the inputs. inputParser can now help you with that in a more methodical way.
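A minimal sketch of the kind of validation inputParser enables (the function and parameter names here are made up for illustration):

```matlab
function y = scaleit(x, varargin)
% Validate inputs methodically with inputParser.
p = inputParser;
p.addRequired('x', @isnumeric);
p.addParamValue('Factor', 2, @(f) isscalar(f) && f > 0);
p.parse(x, varargin{:});
y = p.Results.Factor * x;
```

Calling scaleit(1:3, 'Factor', 10) then returns [10 20 30], and a bad Factor value produces a descriptive error without any hand-written checking code.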
In February 2006, I posted an article on Scalar Expansion and More that generated a lot of interest. See the new function bsxfun (for Binary Singleton Expansion Function), which applies an element-by-element binary operation to two arrays whose sizes agree in every dimension except those in which one array has a singleton dimension; the singleton dimension is virtually expanded to match the other array.
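For example, here is a common idiom, subtracting each column's mean from a matrix, written with bsxfun instead of repmat:

```matlab
% Subtract each column's mean without an explicit temporary copy.
A = magic(4);
colMeans = mean(A, 1);            % 1-by-4
B = bsxfun(@minus, A, colMeans);  % the singleton first dimension of
                                  % colMeans is expanded to match A

% Equivalent, but repmat materializes a full-size copy of colMeans:
B2 = A - repmat(colMeans, size(A,1), 1);
```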
I've only listed a few of the latest features in MATLAB. What's your favorite new feature? Tell people about it here.
Published with MATLAB® 7.4
One of the features I am most interested in is the Multithreaded Computational Support. Is there a list anywhere that gives full details of exactly which functions have gained multi-thread support?
Mike (and several others via email) - Here is a pointer to some more information about the multithreading: http://www.mathworks.com/access/helpdesk/help/techdoc/rn/bq1zi27-1.html#bq4c6fx-1
There is also a demo in the product to show more.
I don’t plan to comment on exactly how much speed-up each of you would get. As is often true, it depends on many things, including your processor(s), memory, and what else you are running.
Try running the demo multithreadedcomputations.
Thanks for the hard work that went into the latest release.
bsxfun is by far the new feature that will have the greatest impact on my work. My work requires a lot of simulation, and to keep things vectorized, I often work in the 3rd and 4th dimensions. bsxfun will save me from a lot of repmatting. The code will be cleaner, faster, and require less memory.
The other new feature that has a lot of use is categorical and dataset arrays in the Statistics Toolbox.
I’ve got a quick question regarding some of the new M-Lint behavior. It keeps telling me that cd is not supported by MCC. Obviously it is supported; I compiled an app just to make sure. But is this a warning of things to come? Likewise, there is a warning that matlabroot is not supported in deployed apps.
The language in that m-lint warning is probably too strong. “Not supported” should probably be interpreted as “not a good idea” and a source of problems if you use those commands.
More specifically, when someone uses MATLABROOT in a deployed app, they should often be using CTFROOT. MATLABROOT will point to the MCR installation, which will be shared among all compiler generated apps and may point at a MATLAB install or an MCR install. This depends on what has been installed on the machine and the specific order of things on the path. CTFROOT points to the base directory where the compiled app’s ctf file has been extracted. This is often what the developer is really after when building up a path. Often when someone uses MATLABROOT to build up a path, they are looking for a work directory or a specific toolbox. Neither of these things exist off MATLABROOT in a compiled app. This is really the purpose of that warning.
CD is troublesome in a compiled app because the end user can install the app in an unexpected place and start it from a different directory than the one in which it was installed. Most uses of CD in general m-code don’t account for this all that well, so we point it out in m-lint as a bad idea. More importantly, because compiled apps only work with encrypted m-files, cd-ing to a directory with unencrypted m-files won’t make those files accessible and can lead to unexpected behavior like “I know my m-files are right there! Why won’t you use them.” CD is, however, appropriate for making data files available to compiled apps.
Both of these functions are compilable and useful if used with discretion under the right circumstances.
Thanks for the input. I can see the reasons why those two warnings are in there, but neither applies in this case. I use matlabroot to determine an absolute path to the ghostscript executable for converting postscript to pdf, as it is the only way I have found for printing multiple figures to a multipage pdf.
The cd that I use is to allow many computers to access data repositories on a company server, and has nothing to do with where the executable resides.
As a side question, though, is there any way to identify the directory containing the .exe and .ctf files when you are running an executable, specifically to find a parameters file which is co-located. I tried ctfroot, but that pointed to someplace deep inside the mcr folder. Thanks,
I would think that pwd would give you the location of the exe and ctf when running.
PWD may give you the right answer, but only in some circumstances. CTFROOT should point to the directory where the ctf file has been extracted and is the value I think you want to use. For example:
Extracting CTF archive. This may take a few seconds, depending on the
size of your application. Please wait…
…CTF archive extraction complete.
pwd : H:\Documents
In general, the ctf and exe will be located in the directory below this. In this case d:\temp\whereami. It is possible that the ctf and exe are not colocated, but this is unusual. You should be able to do something like:
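The elided snippet presumably looked something like the following sketch, which locates the directory containing the deployed exe/ctf under the layout just described (the parameter file name here is hypothetical):

```matlab
% Hedged sketch: find the directory holding the exe and ctf, assuming
% the archive is extracted one level below that directory.
if isdeployed
    appDir = fileparts(ctfroot);   % parent of the extraction directory
else
    appDir = pwd;                  % plain MATLAB session
end
paramFile = fullfile(appDir, 'params.txt');   % hypothetical co-located file
```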
Drop me a note at tflanagan at mathworks dot com if this isn’t working for you.
Last year, you mentioned (http://blogs.mathworks.com/loren/2006/05/10/memory-management-for-functions-and-variables/)
that for functions like:
function x = myfun(x)
% modify x here, for example
x = x.^2;
future versions of Matlab might recognize that the input and output variables were the same, and modify the original data instead of copying the data needlessly.
Do you happen to know if this was implemented for R2007a?
I will write a blog on this topic soon. Part of it was implemented in R2006b, some more in R2007a, with more to come in the future. For now, if you have a function where the output name is the same as one of the inputs’ AND the function is itself called from another function AND it uses “safe” operations such as +, the calculations will now be done in place. Some of those restrictions may be relaxed further over time.
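A minimal sketch of a pattern that qualifies under those restrictions (function names are made up):

```matlab
function inplacedemo
% The callee is invoked from another function, with the same variable
% name as both input and output, using only "safe" element-wise
% operations -- the combination eligible for in-place optimization.
x = rand(1000);
x = squareit(x);
end

function x = squareit(x)
x = x.^2;   % "safe" element-wise operation; no copy of x needed
end
```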
Thanks for the information. I’m a bit swamped right now, so I can’t try it yet, but I will certainly try it asap. Thanks again.
I understand that Mathworks is trying to simplify the programming process by insulating the end-user from some of the uglier memory-management issues, but it would be nice to have it better documented.
Looking forward to your blog article on the topic.
Regarding the “x = myfun(x)” method for saving memory.
This is wonderful! Can it be exploited by a mexFunction?
One idea would be for MATLAB to preassign the plhs array with pointers to any variables that also appear in the prhs array. Then, a mexFunction could check:
if (plhs[0] == prhs[0]) …
and if true, then this mxArray could be modified in place by the mexFunction.
MexFunctions that do not recognize this trick will go ahead and allocate a new array and assign the plhs[0] pointer, signifying (on return to MATLAB) that a new copy has been made. So this technique would be backward compatible with existing mexFunctions.
This is essential in methods that would like to modify just part of a matrix. A sparse “cholupdate” can work in time proportional to the number of entries that change in L, which can be as few as O(1). That’s much much less than nnz(L). Currently, there’s no way to write a mexFunction that does this work in O(1) time, because it requires that the mexFunction modify its input.
Now, mexFunctions can cheat by modifying their inputs anyway; you can write a mexFunction that does (in M pseudocode):
x (1) = 42 ;
and changes x in the caller. Of course, that’s exceedingly dangerous if x is an expression (what does “mangle(x-1)” do?), or if x is a shallow copy:
x = y
would also modify y.
If a safe alternative were available, then “x = mangle(x)” could be written as a mexFunction, and it could recognize that x appears as both the input and output, and avoid making a copy.
Regarding the multithreading question: I hope this is done with care. Can MATLAB be selective, using threads for some BLAS calls but not others? Some functions that call the BLAS are faster with multithreading, but some are slower. Much slower.
The multithreaded BLAS authors (Intel, AMD, Goto, etc.) all share one common problem: they ignore the performance of the multithreaded BLAS on small matrices. For small matrices, a single thread is much faster.
So who should care if small matrices are slower? If you work with dense matrices, you don’t care. However, x=A\b when A is square makes many many calls to the BLAS, with lots of small to medium to large dense matrices (the matrices are dense submatrices of the larger sparse matrix; using multifrontal for LU and supernodal for chol).
With lots of calls to the BLAS for small matrices, x=A\b can be very slow with multithreading, even for rather large sparse matrices.
One would think that inside the BLAS there could be a test. If the dimensions of A and B for C=A*B (in dgemm) were small, then the BLAS should skip the overhead of forking off threads and just do the work in one thread. As it is, unless you use x=A\b on sparse matrices arising from large 3D discretizations, you should avoid multithreading for sparse matrix computations.
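The size sensitivity described above is easy to probe from MATLAB; a rough benchmark sketch (sizes chosen for illustration, timings will vary by machine and threading setting):

```matlab
% Time dense matrix-matrix multiply across sizes. With multithreading
% on, the large GEMMs speed up while the small ones may not, or may
% even slow down from threading overhead.
for n = [8 64 512 2048]
    A = rand(n); B = rand(n);
    t = tic; C = A*B; %#ok<NASGU>
    fprintf('n = %4d: %.4f s\n', n, toc(t));
end
```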
There is no mexFunction API for inplace operations right now. It is on the enhancement list.
Thank you for bringing up such an important topic.
Your concern about the performance of small-matrix computations is justified. What you suggest is a great short-term solution: avoid the problem altogether.
Most vendor BLASes already have built-in logic to selectively ignore the default multithreading setting. Unfortunately, that logic is not always optimal. The long-term solution is to work with vendors to fix the logic and push for high performance in both the single-threaded and multithreaded BLASes for small-matrix computations.
In MATLAB, the wording in the preference panel is very carefully chosen. You are selecting the maximum number of computational threads, not the number of computational threads. MATLAB is free to ignore the setting if the threading overhead proves to be too great.
I’m using Tim Myers’ “MATLAB OLEDB Connection and Query Functions” for various commands against MS SQL databases. I have not investigated this very deeply, but after installing 2007a it is extremely slow! I tested this on another “clean” machine with the same result. What has changed in this version?
Please see the link to the release notes in the post itself. If you don’t see anything relevant, please contact technical support with an example of different behavior.
One of the features I like best is the inputParser object (you know, I’m writing code that other people will use without looking at the arguments!!!). By the way, I noticed that the inputParser object supports the “object.method” syntax, for instance you can have ip.AddRequired(…).
Is it possible to implement such a behaviour (more intuitive to the user) in user-defined MATLAB object (documentation apparently says no)?
I mean, if p is a ‘polynom’ object, p.plot(0:0.1:1) sounds better than plot(p,0:0.1:1)…
Thanks a lot and keep up the great work, I really like your blog Loren!
Thanks for the note. In fact, you can use dot notation with user objects but you have to overload subsref for the class in order to achieve this. I don’t have any examples myself to share at the moment.
Tom Krauss presented a solution in the CSSM-thread “THIS (or SELF) object in matlab” (Date: 2005-11-23 08:56:48).
I’d like to thank Loren for her kind reply and Per for the link, which is a good example I’m going to try out soon.
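In outline, the kind of subsref overload involved might look like this hypothetical sketch for an old-style (pre-classdef) class, where p.plot(args) is redirected to plot(p, args):

```matlab
function varargout = subsref(p, s)
% Sketch: dispatch p.method(args) to method(p, args) for an old-style
% class. s is the substruct array MATLAB passes for indexed access.
if strcmp(s(1).type, '.')
    method = s(1).subs;
    if length(s) > 1 && strcmp(s(2).type, '()')
        args = s(2).subs;       % p.method(arg1, arg2, ...)
    else
        args = {};              % p.method with no arguments
    end
    if nargout > 0
        [varargout{1:nargout}] = feval(method, p, args{:});
    else
        feval(method, p, args{:});
    end
else
    error('Unsupported indexing for this class.');
end
```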
Hi Loren, I am using MATLAB 7.4. I was testing the multithreading, and while it works perfectly on the inv function, it does not seem to work with matrix-vector multiplication.
If I do something like
the time elapsed is always the same, am I doing something wrong?
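The elided snippet was presumably a timing comparison along these lines (sizes chosen for illustration):

```matlab
% Matrix-matrix multiply (level-3 BLAS) benefits from multithreading;
% matrix-vector multiply (level-2 BLAS) is dominated by memory access
% and typically shows no speedup.
A = rand(2000);  B = rand(2000);  v = rand(2000, 1);
tic; C = A*B; tMM = toc;   %#ok<NASGU>
tic; w = A*v; tMV = toc;   %#ok<NASGU>
fprintf('A*B: %.3f s   A*v: %.3f s\n', tMM, tMV);
```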
You don’t say what kind of machine you run on. Is it a multicore or multiprocessor? Have you changed the preferences? What happens when you run the multithreaded demo?
Talking to the developers involved here, they say that matrix-vector multiply will not speed up because it is dominated by data access, similar to .*. However, you should see a speedup with matrix-matrix multiply. Level 3 BLAS routines are the main beneficiaries of multithreading; level 1 and 2 BLAS routines are typically dominated by data access. Hardware manufacturers report similar results on their sites.
I have a pc with two opteron dual-core.
I have tried and it works fine with matrix by matrix multiplications. I did not know that matrix by vector multiplications were not improved by multithreading.
Thank you very much
Is there any way of switching on multithreading from the command line? I don’t use the GUI (and to be honest, getting it running each time would be a pain) and would like to enable multithreading, but so far it seems it is only possible from the preference pane.
You can only switch the multithreading through the GUI, not the command line in R2007a. You may be able to get programmable control in future releases.
Oh, fantastic – what is the command to enable it then?
I had a typo, sorry. I’ve fixed it now. Only through the GUI currently.
Thanks for the help though.
I read in this topic that we are not selecting the number of computational threads but the maximum number of computational threads in the GUI preferences.
I would like to know whether the function “setNumberOfComputationalThreads(N)” allows us to select the exact number of threads to use.
In addition, I ran benchmarks on matrix-matrix operations with both sparse and full matrices (addition, subtraction, multiplication…). I obtained a performance improvement on my dual core with matrix-matrix multiplication only (and not with sparse matrices).
I know that the Intel MKL BLAS library supports multithreading for all of these operations (and on sparse matrices too). Is MATLAB able to use all MKL functions?
Thank you in advance for your response,
First, setNumberOfComputationalThreads is not a supported function. In R2007b, MATLAB provides maxNumCompThreads to change the maximum number of computational threads programmatically.
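Basic usage of maxNumCompThreads (introduced in R2007b) looks like this:

```matlab
% Query and set the maximum number of computational threads.
n   = maxNumCompThreads;      % current maximum
old = maxNumCompThreads(2);   % cap at 2 threads; returns previous value
maxNumCompThreads(old);       % restore the original setting
```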
Again, this function only provides the ability to set the maximum number of computational threads. Can you tell us why exact control is important in your work? I think even if we provided such a function, MATLAB built-in functions would not react to the setting, because a) vendor BLASes have their own threading algorithms that MATLAB has no control over, and b) many computations will slow down if we don’t take care about how, and whether, to multithread.
One good example is sparse in bench.m. You did not observe a speedup for sparse because it involves a lot of BLAS calls on smaller matrices, and the vendor BLAS decided not to multithread them. In fact, many customers have observed a slowdown with sparse in bench.m when multithreading is turned on. It turns out that vendor BLASes are not universally faster with multithreading on; the multithreaded BLAS can be slower at particular matrix sizes. This is one of the many reasons why the default setting for multithreading is off in R2007a and R2007b. We are working with vendors to plug these holes.
The long term goal obviously is to turn on multithreaded computation by default. All users need to know is that MATLAB will take advantage of their latest multi-core chips effectively.
I need your help related with Multithreading issue of Matlab R2007b.
We just purchased a quad Core2Duo workstation (8 cores in total), on which we would like to run some simulations. What we would like is to see from the Windows Task Manager that all 8 cores are working at full speed on our simulations. Is that possible? And if it is possible, how?
We enabled the multithreading property of MATLAB R2007b from the preferences, and it shows that it is able to use 8 cores for computation. But what we see from the Windows Task Manager’s CPU performance window is that only 1 of the 8 cores shows a peak, and the others are idle.
How can we get all cores to work at their maximum performance at the same time in order to run our simulation under MATLAB R2007b?
Best wishes and regards,
I’d like to know whether MATLAB allows you to program using threads, like Java. I need to run two M-files at the same time. Thanks, best wishes,
There is no way to program directly with threads using M-files in a single MATLAB session. You might consider our Parallel Computing Toolbox, depending on what you are trying to do.
Alacon AV —
If you are interested in seeing programmatic threading in MATLAB, drop me a line (I am a product manager for MATLAB). I would love to hear what you have in mind. My email address is:
ken dot atwell at mathworks dot com