MATLAB arithmetic expands in R2016b

Posted by Loren Shure,

With pleasure, I introduce today's guest blogger, my colleague, Steve Eddins. He has been heavily involved in image processing capabilities in our tools and more recently has also contributed substantially to designing additions and improvements to the MATLAB language.

Earlier this summer, I was writing some color-space conversion code. At one point in the code, I had a Px3 matrix called RGB, which contained P colors, one per row. I also had a 1x3 vector, v. I needed to multiply each column of RGB by the corresponding element of v, like this:

RGB_c = [RGB(:,1)*v(1)  RGB(:,2)*v(2)  RGB(:,3)*v(3)];

But since I was using an internal developer build of MATLAB R2016b (released on September 14), I didn't type the code above. Instead, I typed this:

RGB_c = RGB .* v;

In R2016a and older MATLAB releases, that line of code produced an error:

>> RGB_c = RGB .* v
Error using  .*
Matrix dimensions must agree.


In the new release, though, MATLAB implicitly expands the vector v to be the same size as the matrix RGB and then carries out the elementwise multiplication. I say "implicitly" because MATLAB does not actually make an in-memory copy of the expanded vector.
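To make that concrete, here is a minimal sketch (with made-up data) showing that the implicit form produces the same result as the explicit column-by-column version:

```matlab
% Hypothetical data: four colors, one per row, and a 1x3 scale vector.
RGB = [1 2 3; 4 5 6; 7 8 9; 10 11 12];
v = [0.5 2 0.1];

% Explicit version (works in all releases):
RGB_explicit = [RGB(:,1)*v(1)  RGB(:,2)*v(2)  RGB(:,3)*v(3)];

% Implicit expansion (R2016b and later): v behaves as if it were
% repmat(v,size(RGB,1),1), but no expanded copy is allocated.
RGB_implicit = RGB .* v;

isequal(RGB_explicit, RGB_implicit)   % returns true (logical 1)
```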

Today I want to explain this new implicit expansion behavior of MATLAB arithmetic operators (and some functions). I will talk about how it works and why we did it. For the next part of the discussion, I'll use an example that almost everyone at MathWorks uses when talking about this topic: subtracting the column means from a matrix.


Suppose you have a matrix A.

A = rand(3,3)

A =
0.48976      0.70936       0.6797
0.44559      0.75469       0.6551
0.64631      0.27603      0.16261


And suppose you want to modify each column of A by subtracting the column's mean. The mean function conveniently gives you each of the column means:

ma = mean(A)

ma =
0.52722      0.58003      0.49914


But since ma is not the same size as A and is not a scalar, you couldn't just subtract ma from A directly. Instead, you had to expand ma to be the same size as A and then do the subtraction.

In the first decade of MATLAB, expert users typically used an indexing technique called Tony's Trick to do the expansion.

ma_expanded = ma(ones(3,1),:)

ma_expanded =
0.52722      0.58003      0.49914
0.52722      0.58003      0.49914
0.52722      0.58003      0.49914

A - ma_expanded

ans =
-0.037457      0.12934      0.18057
-0.081635      0.17466      0.15596
0.11909       -0.304     -0.33653


In the second decade (roughly speaking) of MATLAB, most people started using a function called repmat (short for "replicate matrix") to do the expansion.

ma_expansion = repmat(ma,3,1)

ma_expansion =
0.52722      0.58003      0.49914
0.52722      0.58003      0.49914
0.52722      0.58003      0.49914


Using the function repmat was more readable than using Tony's Trick, but it still created the expanded matrix in memory. For really large problems, the extra memory allocation and memory copy could noticeably slow down the computation, or even result in out-of-memory errors.

So, in the third decade of MATLAB, we introduced a new function called bsxfun that could do the subtraction operation directly without making an expanded vector in memory. You call it like this:

bsxfun(@minus,A,ma)

ans =
-0.037457      0.12934      0.18057
-0.081635      0.17466      0.15596
0.11909       -0.304     -0.33653


The "bsx" in the function name refers to "Binary Singleton eXpansion," where the term "Binary" in this context refers to operators that take two inputs. (No, it's not anyone's favorite function name.)

This function works and has been used quite a bit. As of a year ago, there were about 1,800 uses of bsxfun in 740 files.

But there were complaints about bsxfun.

bsxfun Pains

Besides the awkward name, there were other usability and performance issues associated with bsxfun.

• Not many people know about this function. It's not at all obvious that one should go looking for it when seeking help with subtraction.
• Using bsxfun requires a level of programming abstraction (calling one function to apply another function to a set of inputs) that seems mismatched with the application (basic arithmetic).
• Using bsxfun requires relatively advanced knowledge of MATLAB programming. You have to understand function handles, and you have to know about the functional equivalents of MATLAB arithmetic operators (such as plus, minus, times, and rdivide).
• It is more difficult for the MATLAB Execution Engine builders to generate code that is as efficient as the code for basic arithmetic.
• Code that uses bsxfun doesn't appear mathematical. (Some go so far as to call it ugly.)

And so, fourteen years after Cleve Moler originally proposed doing it, we have changed MATLAB two-input arithmetic operators, logical operators, relational operators, and several two-input functions to do bsxfun-style implicit expansion automatically whenever the inputs have compatible sizes.

Compatible Sizes

The expression A - B works as long as A and B have compatible sizes. Two arrays have compatible sizes if, for every dimension, the dimension sizes of the inputs are either the same or one of them is 1. In the simplest cases, two array sizes are compatible if they are exactly the same or if one is a scalar.
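The rule can be sketched in a few lines of MATLAB (illustrative code, not a built-in function; trailing dimensions are treated as having size 1):

```matlab
% Check whether two example arrays have compatible sizes.
A = rand(3,4);            % 3x4 matrix
B = rand(3,1);            % 3x1 column vector

nd = max(ndims(A), ndims(B));
sa = ones(1,nd);  sa(1:ndims(A)) = size(A);   % pad trailing dims with 1
sb = ones(1,nd);  sb(1:ndims(B)) = size(B);

% Compatible if, in every dimension, sizes match or one of them is 1.
compatible = all(sa == sb | sa == 1 | sb == 1)   % true for this pair
```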

Here are some illustrations of compatible sizes for different cases.

• Two inputs which are exactly the same size.

• One input is a scalar.

• One input is a matrix, and the other is a column vector with the same number of rows.

• One input is a column vector, and the other is a row vector. Note that both inputs are implicitly expanded in this case, each in a different direction.

• One input is a matrix, and the other is a 3-D array with the same number of rows and columns. Note that the size of the matrix A in the third dimension is implicitly considered to be 1, and so A can be expanded in the third dimension to be the same size as B.

• One input is a matrix, and the other is a 3-D array. The dimensions are all either the same or one of them is 1. Note that this is another case where both inputs are implicitly expanded.
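A few size checks (with made-up inputs) illustrate the cases above; in each dimension, the result takes the larger of the two input sizes:

```matlab
col = (1:3)';        % 3x1 column vector
row = 10:10:40;      % 1x4 row vector
size(col + row)      % 3x4: both inputs are expanded

A = rand(2,3);       % 2x3 matrix; its third dimension is implicitly 1
B = rand(2,3,5);     % 2x3x5 array
size(A .* B)         % 2x3x5: A is expanded along the third dimension
```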

Supported Operators and Functions

Here is the initial set of MATLAB operators and functions that now have implicit expansion behavior.

+       -       .*      ./
.\      .^      <       <=
>       >=      ==      ~=
|       &
xor     bitor   bitand  bitxor
min     max     mod     rem
hypot   atan2

I anticipate that other functions will be added to this set over time.

Objections

This change to MATLAB arithmetic was not without controversy at MathWorks. Some people were concerned that users might have written code that somehow depended on these operators producing an error in some cases. But after examining our own code base, and after previewing the change in both the R2016a and R2016b Prereleases, we did not see significant compatibility issues arise in practice.

Other people thought that the new operator behavior was not sufficiently based on linear algebra notation. However, instead of thinking of MATLAB as a purely linear algebra notation, it is more accurate to think of MATLAB as being a matrix and array computation notation. And in that sense, MATLAB has a long history of inventing notation that became widely accepted, including backslash, colon, and various forms of subscripting.

Finally, some were concerned about what would happen when users tried to add two vectors without realizing that one is a column and the other is a row. In earlier versions of MATLAB, that would produce an error. In R2016b, it produces a matrix. (I like to call this matrix the outer sum of the two vectors.) But we believed that this problem would be immediately noticed and easily corrected. In fact, I think it's easier to notice this problem than when you mistakenly use the * operator instead of the .* operator. Also, the relatively new protective limit on array sizes in MATLAB (Preferences -> MATLAB -> Workspace -> MATLAB array size limit) prevents MATLAB from trying to form an extremely large matrix that might cause an out-of-memory condition.
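Here is what that outer sum looks like for a small example, which suggests why the mistake tends to be noticed immediately:

```matlab
c = (1:3)';    % 3x1 column vector
r = [10 20];   % 1x2 row vector

% Each input expands in a different direction, giving every pairwise sum.
c + r
% ans =
%     11    21
%     12    22
%     13    23
```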

Implicit Expansion in Practice

As part of the research we did before deciding to make this change, we reviewed how people use bsxfun. I'll finish the post by showing you what some of the most common uses of bsxfun look like when you rewrite them using implicit expansion in R2016b.

Apply a mask to a truecolor image.

% mask: 480x640
% rgb:  480x640x3
% OLD
rgb2 = bsxfun(@times,rgb,mask);
% NEW
rgb2 = rgb .* mask;

Normalize matrix columns (subtract mean and divide by deviation).

% X: 1000x4
mu = mean(X);
sigma = std(X);
% OLD
Y = bsxfun(@rdivide,bsxfun(@minus,X,mu),sigma);
% NEW
Y = (X - mu) ./ sigma;

Compute the pairwise distance matrix.

For two sets of vectors, compute the Euclidean distance between every vector pair.

% X: 4x2 (4 vectors)
% Y: 3x2 (3 vectors)
X = reshape(X,[4 1 2]);
Y = reshape(Y,[1 3 2]);
% OLD
m = bsxfun(@minus,X,Y);
D = hypot(m(:,:,1),m(:,:,2));
% NEW
m = X - Y;
D = hypot(m(:,:,1),m(:,:,2));

Compute outer sum.

This example is from the implementation of the toeplitz function. See also my 25-Feb-2008 post on neighbor indexing for another application.

cidx = (0:m-1)';
ridx = p:-1:1;
% OLD
ij = bsxfun(@plus,cidx,ridx);
% NEW
ij = cidx + ridx;

Find integers that are multiples of each other.

This example is from the computation of the Redheffer matrix in the gallery function. It illustrates implicit expansion behavior in a function as opposed to an operator.

i = 1:n;
% OLD
A = bsxfun(@rem,i,i') == 0;
% NEW
A = rem(i,i') == 0;

I want to finish with a shout out to all the MATLAB users who have asked for this behavior over the years. File Exchange contributor Yuval spoke for all of you when he included this comment inside the implementation of his submission:

% We need full singleton expansion everywhere. Why isn't
% it the case that
%
%   [1 2] + [0 1]' == [1 2;2 3] ?
%
% bsxfun() is a total hack, and polluting
% everybody's code.

Yuval, this one's for you.

Readers, have you used bsxfun before? Will you make use of the new arithmetic behavior? Let us know here.


Published with MATLAB® R2016b


Phil replied on : 2 of 47

I really like this new feature. Next, I wish Matlab would natively do matrix multiplications across pages, similar to mmx (file exchange) on CPUs and pagefun on GPUs. This would make code very readable and compact.

Luc replied on : 3 of 47

Cool! NumPy calls this “broadcasting” and has had this feature for a long time (I found an article about it from 16 years ago!), and GNU Octave added it in 2012. Nice to see it in MATLAB, too! I hate to admit it but I was a repmat user…in Simulink, too…though in my defense the matrices I was replicating were never very large.

Jeremy Marschke replied on : 4 of 47

I’ve been a bsxfun evangelist for a fair while now… I’m almost sorry to see it replaced. This change is excellent both for a better coding environment and for code readability. Thanks MathWorks!

Will the performance (speed) of the new implementation be comparable to that of bsxfun?

Peter Wittenberg replied on : 5 of 47

That’s a good change for MATLAB and a good explanation of the need for these tricks. To those who say that MATLAB should be pure in following linear algebra notation, well, I’m an engineer. I need to manipulate numbers to get a project done. Certainly I use linear algebra, but I also manipulate the matrices in ways that are not part of the development of linear algebra because it’s needed in quite a number of cases. I was not as much against bsxfun as some others, but I didn’t like it particularly. I had to relearn bsxfun from the help files each time I started a project that needed it.

To be sure, anyone doing an operation of this sort must take care that they are creating the matrix they actually expect. However, there are all sorts of ways to run into problems when writing code, so having to mentally keep track of producing the right matrix or array sizes should not be a significant extra burden.

Steve Eddins replied on : 6 of 47

Jeremy—A few observant users spotted this feature in the R2016a Prerelease and then wondered why it wasn't in the final R2016a release. One of the reasons we took the extra time was to make sure the performance of the arithmetic operators compared favorably to bsxfun. In R2016b, implicit expansion works as fast as or faster than bsxfun in most cases. The best performance gains for implicit expansion are with small matrix and array sizes. For large matrix sizes, implicit expansion tends to be roughly the same speed as bsxfun.

Steve Eddins replied on : 7 of 47

Cris—Cleve’s suggestion was made in an internal proposal that was not discussed publicly. At the time he suggested it, most MATLAB designers thought it would be a good idea, but the needed development resources were busy on other critical projects. The proposal has been revisited several times internally since then. User experience with bsxfun helped us validate the need and the requirements, and with the new MATLAB Execution Engine complete, the time was finally right for us to implement it with good performance.

Steve Eddins replied on : 8 of 47

Phil—The MATLAB Math Team monitors and periodically evaluates requests such as yours.

Steve Eddins replied on : 9 of 47

Luc—I wrote the prototype for repmat way back when, so I’ve always been fond of it, too.

Steve Eddins replied on : 10 of 47

Peter—I like the way you put it.

Sacha Kozlov replied on : 11 of 47

And what about performance? Is it faster or slower to use the new notation instead of bsxfun?

Just for your survey: I use bsxfun every day.

Michal Kvasnicka replied on : 12 of 47

This is really good news! Thanks MathWorks!!!
These kinds of improvements should be the main focus of your work.

But I am afraid that in some cases bsxfun is still faster than the newly added method. See "Find integers that are multiples of each other." for example. In that case (for n=10000) the new method is systematically slower than the corresponding bsxfun code.

Joao replied on : 13 of 47

Great news! This will make Matlab code so much more elegant. Kudos to the team for making this decision and making it work!

Steve Eddins replied on : 15 of 47

Sacha—See my comment above. For most cases, using the operators directly is as fast as or faster than using bsxfun. The biggest performance gains for the operators are with relatively small matrix sizes. For example, the MATLAB Math team sped up corrcoef.m significantly for small matrix sizes simply by removing two calls to bsxfun.

Michal—I did say “most” cases. When there is an expansion in the first dimension, the operators might not be quite as fast as bsxfun. In my own coding practice, I never use bsxfun anymore. I would only switch back to bsxfun if I profiled a performance-critical application and found the implicit expansion operators in a hot spot and in a situation with a first-dimension expansion.

Irl Smith replied on : 16 of 47

I’ve been using bsxfun since April, 2012, and am very happy that it’s no longer so necessary. Question: is there any way to replace existing (“legacy”) calls to bsxfun with the equivalent new-style text (e.g., one of those nice editor pop-ups : “Best practice is yada yada, hit shift-enter to correct”)?

Terry Brennan replied on : 17 of 47

I like it and will use it! I will find the rgb.*mask construct particularly useful. Since I am typically doing statistics over masked frames of 2-D data I would also find it useful if rgb(mask,:) were equivalent to
[u,v,w] = size(rgb);
rgb = reshape(rgb,u*v,w);

Michal Kvasnicka replied on : 18 of 47

Steve, so I am right. The new method has some limitations, and bsxfun remains the faster method in at least some cases. That is the problem, because you must evaluate and compare the performance of the two methods.

Finally, the new method is really great, but bsxfun is still important in some cases. This fact is very confusing and makes it difficult to find general best programming rules when performance is a key requirement.

Steve Eddins replied on : 19 of 47

Irl—That’s a great suggestion. I will pass it along to the team that maintains the Code Analyzer.

Terry—I’m glad you like the rgb.*mask example. As an image processing guy, I definitely like that one. I’m not sure I followed your second comment. If you have time and would consider saying more about the computations you are doing, maybe it’ll be a topic I could blog about.

Michal—I understand. Thanks for providing additional feedback.

Bobby Cheng replied on : 20 of 47

Michal—As n increases, the cost of memory allocation will increasingly affect the timing. I put the code in a loop hoping that the noise would even out; here is what I see on my computer. They seem roughly the same to me. Can you try this on your machine?

>> i = 1:10000;
>> tic; for x = 1:10; A = bsxfun(@rem,i,i') == 0; end; toc
Elapsed time is 3.197104 seconds.
>> tic; for x = 1:10; A = bsxfun(@rem,i,i') == 0; end; toc
Elapsed time is 3.211696 seconds.
>> tic; for x = 1:10; A = rem(i,i') == 0; end; toc
Elapsed time is 3.321290 seconds.
>> tic; for x = 1:10; A = rem(i,i') == 0; end; toc
Elapsed time is 2.771270 seconds.

We would very much like to hear about performance issues with implicit expansion, so a big thank you. There may be performance rough edges that we have not covered; if you find anything, let us know. We are committed to improving the performance of MATLAB.

MANOJ HARICHNDAN replied on : 22 of 47

The new implicit expansion behavior of MATLAB is a good approach and will be helpful.

Dylan Muir replied on : 23 of 47

A fantastic addition to the language. What is the suggested way to incorporate the new implicit expansion constructs, while maintaining backwards compatibility with users who are unwilling or unable to upgrade?

Steve Eddins replied on : 24 of 47

Dylan—The only method I can think of is to put the implicit expansion operation in a try and bsxfun in a catch.
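That pattern might look like the following sketch (hypothetical variables; the catch branch only runs on releases where the implicit form errors):

```matlab
X = rand(5,3);
mu = mean(X);
try
    Y = X - mu;                  % R2016b and later: implicit expansion
catch
    Y = bsxfun(@minus, X, mu);   % older releases: explicit bsxfun fallback
end
```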

Doug—I went back and skimmed that old thread. Thanks for providing the link. We traveled those paths inside MathWorks more than once.

Michal Kvasnicka replied on : 25 of 47

Bobby – there are my results (Ubuntu 16.04, R2016b):

\$ cpuinfo
Intel(R) processor family information utility, Version 5.1.3 Build 20160120 (build id: 14053)
===== Processor composition =====
Processor name : Intel(R) Core(TM) i7-5820K
Packages(sockets) : 1
Cores : 6
Processors(CPUs) : 12
Cores per package : 6

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>> i = 1:10000;
>> tic; for x = 1:10; A = bsxfun(@rem,i,i') == 0; end; toc
Elapsed time is 2.479125 seconds.
>> tic; for x = 1:10; A = bsxfun(@rem,i,i') == 0; end; toc
Elapsed time is 2.226690 seconds.
>> tic; for x = 1:10; A = rem(i,i') == 0; end; toc
Elapsed time is 2.400712 seconds.
>> tic; for x = 1:10; A = rem(i,i') == 0; end; toc
Elapsed time is 2.401538 seconds.

Michal Kvasnicka replied on : 26 of 47

Bobby – One additional note: it is extremely important to be sure that this new expansion method will always be faster than bsxfun. The current situation is not good, because there are cases where bsxfun is still faster. MATLAB programmers need to be sure that, in every case, they are using the method that is more suitable for speed-optimized code.

Bård Skaflestad replied on : 27 of 47

I will probably adopt the new convention, but there is one property that bsxfun has that the new operators do not. No-one writes

z = bsxfun(@op, x, y)

by accident so in terms of expressing intent I currently think that using bsxfun is more clear (“yes, I really want singleton expansion here”). Then again I suspect as I become more familiar with automatic singleton expansion I may not think too much about it.

Mike Croucher replied on : 28 of 47

Very happy to see this! I used it for the first time this week and it was faster than bsxfun in my case.

Is bsxfun still needed for anything or does this completely replace it?

Bobby Cheng replied on : 29 of 47

Michal — I understand your concern. Let me try to address it here. Personally, I would still wholeheartedly recommend replacing bsxfun calls whenever possible, as we did in the MATLAB Math area. Let me explain.

First, the two mathematically equivalent pieces of code are executed differently in MATLAB. One goes through bsxfun; the other is handled entirely by the MATLAB Execution Engine. The function bsxfun only needs to do one thing, which is the binary operation. The MATLAB Execution Engine, however, tries to execute the entire expression efficiently, balancing speed against temporary memory usage. In our case, there are three operations: rem, ctranspose ('), and ==.

To see the full capability of the MATLAB Execution Engine, our test needs to be written like "real" code. Since most code is organized in functions, we will put the example above in a function testrem.m. Here is what it looks like.

function testrem(n)
i = 1:n;
tic; bsxtest(i); toc
tic; ietest(i); toc
end

%subfunctions
function bsxtest(i)
for x = 1:10; A = bsxfun(@rem,i,i') == 0; end;
end
function ietest(i)
for x = 1:10; A = rem(i,i') == 0; end;
end
%end of testrem

Here is the timing when I run it.

>> testrem(10000);
Elapsed time is 3.542025 seconds.
Elapsed time is 2.948101 seconds.

In this setting, I see implicit expansion perform much better. Can you see this on your machine? Then I tried a slightly bigger problem.

>> testrem(20000);
Elapsed time is 13.550786 seconds.
Elapsed time is 11.166507 seconds.

Two things you will see: (1) implicit expansion is faster; (2) if you look at the memory usage during the computation, you should see that the bsxfun version of the code uses more memory.

So what am I saying? I think making sure implicit expansion is always faster than bsxfun is a very good description of what we want to achieve. In general, however, there will always be situations where this is not true. It could be a cache effect, or different threading overheads, for example.

The bottom line is that implicit expansion should be at least similar in performance, if not faster. That has been my experience since I started using implicit expansion in MATLAB.

Steve Eddins replied on : 30 of 47

Mike—I see two continuing uses for bsxfun: 1. Code that must run in both new and old versions of MATLAB. 2. Implementing a new, two-input, element-wise function so that it has implicit expansion behavior.
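As a sketch of that second use, a hypothetical two-input element-wise function can pick up implicit-expansion-style behavior by routing through bsxfun (the function name here is made up for illustration):

```matlab
% Hypothetical element-wise "geometric mean" of two inputs; bsxfun
% gives it singleton-expansion behavior even on older releases.
geomean2 = @(a,b) bsxfun(@(x,y) sqrt(x .* y), a, b);

geomean2([1 4 9]', [1 4 9])   % 3x3 table of pairwise geometric means
```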

Steve Eddins replied on : 31 of 47

Bård—You wrote, "Then again I suspect as I become more familiar with automatic singleton expansion I may not think too much about it." That's what I think will happen with most people. It fits the experience described by some people at MathWorks who were initially cautious or skeptical about implicit expansion.

RonaldF replied on : 32 of 47

Nice feature, although the possibility to display a warning would be helpful. It may occur, for example, that you think you are adding two row vectors, but one of the two is mistakenly a column vector. In previous versions adding the two would give an error, but now they are expanded to a full matrix without notice.

Steve Eddins replied on : 33 of 47

Ronald—Thanks for your feedback. Other people have made similar suggestions, and I’m thinking about writing up some detailed notes on the topic.

Michal Kvasnicka replied on : 34 of 47

Steve and Mike – I completely agree with Steve. One of the most important reasons to keep using bsxfun is: "Implementing a new, two-input, element-wise function so that it has implicit expansion behavior"!!! This fact is crucial, and bsxfun will still be very important.

Harald Hentschke replied on : 35 of 47

Good move by the MathWorks & very interesting discussion here! My formative Matlab years must have been in the ‘second decade’ as the repmat method is so entrenched in my coding style that I’ve only remembered (or bothered) to use bsxfun in time-critical pieces of code so far. So, the new automatic expansion should be helpful for producing faster code by default.
As an aside, as an instructor of a basic Matlab course I’m curious to learn how easily beginners will get the idea and not mix it up with e.g. matrix multiplication (e.g. see the conceptual difference between [1; 2]*[3 4] and [1; 2].*[3 4] although the result is the same?) and actually use it in their code. Any experience someone?

Ravi S replied on : 36 of 47

This is great! Thank you for releasing this feature! My code is filled with bsxfuns, and this new feature is going to make it a lot more readable.

halfSpinDoctor replied on : 37 of 47

I have never even heard of bsxfun before.

Ting Pi replied on : 38 of 47

Thanks a lot for bringing us this detailed explanation !

David Goodmanson replied on : 39 of 47

Hi Loren,
I like this idea quite a lot. Frequently over the years I have wanted to do what you are calling an outer sum and I just got inured to having to use repmat for that purpose. I used bsxfun a few times but gave up on it. It would have been better had it been called something less awkward like ‘ bef ‘, but let’s face it, the syntax sucked.

I have had R2016b for a couple of months and didn’t realize that implicit expansion (IE) had even happened. I only just now found out because I visited your blog. Maybe I was not paying attention or not being responsible enough in finding out about new features, but I just ducked over to ‘Release 2016b highlights’ on the Mathworks website. The featured attraction was BigData. OK, fair enough I guess. You can find IE but it’s a few levels down at /Release Notes / MATLAB / Mathematics. I think that’s too well hidden. Admittedly I am more on the math&physics side of things, but this is not a new feature or new toolbox #863 that I might not need for awhile. It’s integral to the code.

Will Hirsch replied on : 40 of 47

Steve—It sounds like Terry is describing singleton expansion for logical indexing. This Stack Overflow question frames the problem in a similar way.

I think it would be very elegant for logical indexing to behave like this, but somewhat doubt that it could ever be implemented since unlike the elementwise operators affected by implicit expansion, logical indexing is already defined for logical arrays with smaller dimensions than the indexed array.

I think that assuming the logical array is repeated along singleton dimensions is a more helpful implicit behaviour than assuming it is extended with zeros, but backward compatibility will probably always win this conflict.

Steve Eddins replied on : 41 of 47

Will—Yes, I would like to see that logical indexing behavior as well. I wish I could go back in time to prevent us from making the change that allowed logical indexing with mismatched sizes.

Jack Hogan replied on : 42 of 47

Could this be used to multiply a 3×3 color matrix by an MxNx3 color image?

Steve Eddins replied on : 43 of 47

Jack—No, implicit expansion works for element-wise operations, and matrix multiplication is not element-wise. Use the Image Processing Toolbox function imapplymatrix, or try something like

out = reshape(reshape(in,[],3) * M,size(in));

Jack Hogan replied on : 44 of 47

Ok, thanks Steve, didn’t know imapplymatrix existed. About a third faster, too. BTW, you meant:

out = reshape(reshape(in,[],3) * M',size(in));

right? ;-)
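For readers following along, here is a quick numerical spot-check of the corrected line (made-up data; each pixel is a row of the reshaped Px3 matrix, hence the transpose on M):

```matlab
in = rand(4,5,3);                              % hypothetical MxNx3 image
M = [0.5 0.2 0.3; 0.1 0.8 0.1; 0.2 0.2 0.6];   % hypothetical 3x3 color matrix
out = reshape(reshape(in,[],3) * M', size(in));

% One pixel of the output should equal M times the corresponding input pixel.
max(abs(squeeze(out(2,3,:)) - M * squeeze(in(2,3,:))))   % effectively 0
```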

Steve Eddins replied on : 45 of 47

Jack—Yep! ;-)

Petr replied on : 46 of 47

This is an absolutely appalling break of the established semantics of working with matrices!

This nonsense cost me twenty minutes in lecture today. Taking the norm of a vector was silently changed to a norm of a matrix: where I would have gotten a useful debugging hint I got a stab in the back. Not cool, Mathworks!

I hope common sense prevails and this will revert to the familiar semantics in the next release.

Petr Krysl replied on : 47 of 47

This is a horribly misguided attempt to make a few operations more efficient or, more likely, just easier to write. So for convenience we are making it much harder to debug Matlab programs.

The problem is that the intent of A+B is not clear when both compatible and incompatible matrices are allowed. Imagine you are debugging a piece of code: When A and B are incompatible, and your intent was to have compatible matrices in that place, you may or may not get a warning that something is not right. For instance, pass A+B to a function such as norm(): you will get back a number, so no indication whether or not the matrices were compatible or incompatible. Previously if the matrices were incompatible, you would get a shot across the bow. Now you get back zilch feedback.

When reading someone else’s code this becomes worse: you don’t even know just looking at A+B what the intent was. And that is a very bad thing!

This was not a problem with the addition A+b, with b a number instead of a matrix. The intent was clear.

The operations enabled by implicit expansion were already possible, as outlined in the text above. By implicit expansion, the Matlab language was made harder to debug in the name of convenience for a few? Where is the logic in that?