# Purpose of inv

Posted by **Loren Shure**,

I have received comments from several savvy customers suggesting that we remove the function `inv` from MATLAB. The reasoning: people who don't know enough numerical analysis will use it blindly and incorrectly, and often get a poor-quality answer unnecessarily.


### Reasons to Keep inv

There are several reasons why we keep the function `inv`:

- for pedagogical and teaching use.
- for historical reasons (removing it would create an incompatibility).
- for meeting expectations. This is somewhat similar to the `factorial` function; users expect it to be there even though they could use `gamma` instead.
- for certain computations. An example is when you need the covariance of least squares estimates. It's hard to accomplish that without `inv`.

### Reasons to Not Use or Remove inv

There are good reasons **not** to use `inv`, however. The main one is

- numerical considerations.

Here is the second paragraph of the description for `inv` in the documentation:

*In practice, it is seldom necessary to form the explicit inverse of a matrix. A frequent misuse of inv arises when solving the system of linear equations Ax = b.*

*One way to solve this is with x = inv(A)*b. A better way, from both an execution time and numerical accuracy standpoint, is to use the matrix division operator x = A\b. This produces the solution using Gaussian elimination, without forming the inverse. See \ and / for further information.*
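For readers outside MATLAB, the same comparison can be sketched in Python with NumPy (`numpy.linalg.solve` playing the role of `\`; the matrix here is an arbitrary stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x_inv = np.linalg.inv(A) @ b      # forms the explicit inverse first
x_solve = np.linalg.solve(A, b)   # Gaussian elimination; no inverse formed

# solve() is cheaper and at least as accurate; its residual stays tiny:
print(np.linalg.norm(A @ x_solve - b) / np.linalg.norm(b))
```

The two answers agree here, but `solve` (like `\`) never pays the cost, or takes the accuracy risk, of forming the inverse.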

According to Cleve, the best way to compute `inv(X'*X)` is to avoid computing `X'*X`, and use the economy-sized QR factorization of `X` instead.

[Q,R] = qr(X,0)

S = inv(R)

S*S'

To see an extreme example, try

    d = 1.e-10
    X = [1 1; d 0; 0 d]

    d =
      1.0000e-010
    X =
        1.0000    1.0000
        0.0000         0
             0    0.0000

inv(X'*X)

    Warning: Matrix is singular to working precision.
    ans =
       Inf   Inf
       Inf   Inf

    [Q,R] = qr(X,0)
    S = inv(R)
    S*S'

    Q =
       -1.0000    0.0000
       -0.0000   -0.7071
             0    0.7071
    R =
       -1.0000   -1.0000
             0    0.0000
    S =
      1.0e+009 *
       -0.0000   -7.0711
             0    7.0711
    ans =
      1.0e+019 *
        5.0000   -5.0000
       -5.0000    5.0000

The computed `X' * X` is exactly singular and does not have an inverse, but `R` does have an inverse. We're still using the `inv` function, but on a nonsingular matrix.
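The same effect can be reproduced in NumPy (a sketch; only `numpy` assumed): the computed Gram matrix rounds to an exactly singular matrix, while the economy-size QR factor stays invertible.

```python
import numpy as np

d = 1e-10
X = np.array([[1.0, 1.0], [d, 0.0], [0.0, d]])

G = X.T @ X                        # [[1+d^2, 1], [1, 1+d^2]]: 1+d^2 rounds to exactly 1
print(np.linalg.matrix_rank(G))    # 1: exactly singular in floating point

Q, R = np.linalg.qr(X)             # economy-size ('reduced') factorization
S = np.linalg.inv(R)               # R is 2-by-2 and nonsingular, so this is fine
cov = S @ S.T                      # finite, with entries around 5e19
print(np.isfinite(cov).all())
```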

### Your Thoughts?

So, we have the function `inv` that is ripe for misuse. What's the best way we can steer users to better solutions? Please post your thoughts here.


Published with MATLAB® 7.4

**Category:** Best Practice, Common Errors, Efficiency, Robustness

## 33 Comments

**1** of 33

Well, I don’t see any problems at all with having functions that should only be used in certain circumstances. Of course, eliminating them may help people write better programs, but you can’t force people to do it. The comment in the documentation is perfectly good enough.

**2** of 33

Excellent discussion Loren. Yes, you do need to keep inv around, as much as I would also prefer it almost never be used. (Sigh.) The real problem is that too many books out there write formulas that use a matrix inverse. This suggests to people that the matrix inverse is actually the correct thing to use. It’s easy to write an expression that is mathematically correct, yet still numerically poor. Sometimes I do wish that Matlab would pop up a dialogue box that asks the user, “Do you really need to use inv here? Are you absolutely sure?”

As for myself, I do OCCASIONALLY have a use for inv. I can think of roughly two places where I do use it. (The covariance matrix is one, inv is also very useful in a code that computes barycentric coordinates for non-convex tessellations, where tsearchn fails.)

Even there, I could find an alternative if forced to do so, for example A\eye(n).

John

**3** of 33

Why not implement the more robust algorithms (e.g., the qr decomposition) within inv()? Maybe provide the option to use additional arguments to pick which algorithm.

-Jessee

**4** of 33

I sometimes use it to take the inverse of a diagonal matrix

e.g. I might have an expression like

x = b' * (inv(D)).^2 * b

where D is a diagonal matrix. I just like the look of this better than

x = b' * diag( (1 ./ diag(D)).^2 ) * b

I figure there’s no harm in using it on a diagonal matrix – but please correct me anyone if I’m wrong …

Richard

**5** of 33

In Robert Davies nice C++ package, newmat, the inverse of the matrix A is denoted A.i(). In the expression

X = A.i() * B; // equation solve (square matrix A),

the inverse is not explicitly calculated. An LU decomposition of A is performed and this is applied to B.

It would be nice if the Matlab language supported this type of evaluation.

**6** of 33

I use inv() in introductory classes on linear algebra, for easy 2×2 and 3×3 cases. (FORMAT RAT is very handy for this.) I could work around it if it were removed, as John says, but I think it should stay.

You could type out a warning in the command window upon first invocation, then add “Do xxx to disable this warning in future sessions.” Ideally xxx would be as simple as clicking a link.

Other elements of MATLAB are just as suspect as inv(). rref() is an obvious bad seed, and even det() and rank() are of questionable numerical use. For that matter, syntax like A(:,2)=1 brings the possibility of silent bugs that could be prevented with strict policing. But I see MATLAB’s style as giving sophisticated users all the tools they could want, even if they allow stupid behavior. You’ll have to trust us professor types to keep cranking out sophisticated users.

**7** of 33

I would love it if inv(A) went away, but realistically it should stay. But with a warning.

Inverses are abused in textbooks on linear algebra. Yes, they are of theoretical interest, and have rare practical uses. High school students abuse them to solve Ax=b.

mlint should report an error whenever it sees x=inv(A)*b, though. Maybe it should report a warning whenever it sees inv(A) at all; then if you really insist, just add the %#ok comment at the end of the line.

inv(A) is horribly abused when A is sparse. inv(A) is typically completely nonzero, destroying all sparsity. More precisely, it is completely nonzero if [p,q,r,s] = dmperm(A) returns length(r) == 2. Each diagonal block, of sizes diff(r) in dimension, will be full (ignoring numerical cancellation), and many entries in the upper block triangular part of A(p,q) will be nonzero too. Try this:

load west0479

[L,U,P,Q] = lu (west0479);

spy (L+U)

spy (inv (west0479))

[p,q,r,s] = dmperm (west0479) ;

max (diff (r))
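The fill-in is easy to demonstrate in SciPy as well (a sketch; `west0479` and `dmperm` are MATLAB-side, so a banded matrix stands in here):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import inv as sparse_inv

n = 200
# Tridiagonal matrix: about 3n nonzeros, and its LU factors stay banded.
A = sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
Ainv = sparse_inv(A)

print(A.nnz)      # 598 nonzeros in A
print(Ainv.nnz)   # 40000 = n^2: the "sparse" inverse is completely full
```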

Maybe “inv(A)” should always report a warning:

“Do you really know what you’re doing? … don’t do that again!”. And on the 2nd try, “Cleve says inv(A) is a bad idea.” Like the reshape(1,[-1 -1]) easter egg in MATLAB 6.5 or earlier :-).

**8** of 33

Whenever the inv(A) is really needed, do you really need all of it? If you want one column at a time, it’s better to use LU or CHOL and then solve A*x=I(:,k) where I=speye(n), one column k at a time.
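The column-at-a-time idea can be sketched with SciPy's `lu_factor`/`lu_solve` (a hypothetical small example, not from the thread):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 100
A = rng.standard_normal((n, n))

lu, piv = lu_factor(A)             # factor A = P*L*U once

k = 3                              # want only column k of inv(A)
e_k = np.zeros(n)
e_k[k] = 1.0
col_k = lu_solve((lu, piv), e_k)   # solves A x = e_k: one O(n^2) solve

# col_k matches the k-th column of the full inverse:
print(np.allclose(col_k, np.linalg.inv(A)[:, k]))
```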

Regarding Toby’s comment: rref(A) is suspect, too, I agree. It is also exceedingly slow, particularly for the sparse case:

load west0479 ;

tic ; R = rref (west0479) ; toc

tic ; R = rref (full (west0479)) ; toc

the full case takes 28 seconds on my laptop, the sparse case takes over 11 *minutes* … in contrast:

[L,U,P,Q] = lu (west0479) ;

takes 0.005 seconds. Like inv, rref can be a nice pedagogical tool, but it’s not for production use.

**9** of 33

I like Tim’s idea. Have an mlint flag that suggests that use of inv is not recommended. At the very least, it may give them a nudge in the right direction. Point them towards some explanations that show why it should be avoided and what are better alternatives.

Even better, require the user to sign a statement that they have indeed taken at least one class in numerical linear algebra or numerical analysis before the use of inv was allowed. I know, this last one is not a terribly practical idea.

John

**10** of 33

It is reminiscent of the discussion about whether to include a GOTO statement in any language designed after the 70s. It is a bad thing to structure (or un-structure) your programs using GOTOs, but sometimes it may save you from trouble.

Flexibility has value. And why would I be prevented from using inv, if I only want to invert a 4×4 matrix occasionally?

**11** of 33

Dmitri wrote:

> It is reminiscent of the discussion about including or not a GOTO statement in a language that is written after the 70s. It is bad thing to structure (or un-structure) your programs using GOTOs, but sometimes it may save you from trouble.
>
> Flexibility has value. And why would I be prevented to use inv, if I only want to invert a 4×4 matrix occasionally?

This isn’t really the same thing – inv can give you highly incorrect and misleading results without you being aware of it, whereas GOTO is just hard to read and debug.

I do think inv has a place though: teaching, debugging, sanity checks, etc. – but not in production code. I fully agree with the mlint suggestion.

**12** of 33

Dmitri wrote:

> Flexibility has value. And why would I be prevented to use inv, if I only want to invert a 4×4 matrix occasionally?

That’s why inv should stay. It should not generate a warning at run time either (much as I prefer that it would).

mlint pesters you with all sorts of things that MATLAB can do, such as warning you about variables growing in a loop. Growing a variable in a loop is nice to have when it doesn’t affect performance, but it can kill performance in the wrong place. I find myself adding “%#ok” frequently for little things like this.

Ditto for inv. It’s nice to have around for those odd cases when you really do want to use it. If you never use mlint, then it will never pester you. If you really want to use it in (say) a code posted at MATLAB Central (which runs mlint) just do:

s = inv (A) ; %#ok

which tells mlint to keep quiet. Then at least you will have been reminded once that inv(A) is a bad idea.

Having said all this … what do you want to do with a 4-by-4 inverse? Even for scalar values, a/b is safer than a*(1/b). Try this:

a = 1e-310 ; b = 2e-310 ;

a/b

a*(1/b)

the former gives you 0.5, the latter is “Inf”, which is a pretty bad approximation to 0.5 :-).

Regarding Richard’s use of inv(D) for a sparse diagonal D. Yes, this is reasonably fast, but the same problem occurs there. I got caught by this myself … when you do [L,U,P,Q,R]=lu(A) you get the factorization L*U = P*(R\A)*Q where R is diagonal (for scaling the rows of A prior to factorization). Note the backslash. I was going to do L*U = P*S*A*Q where S=inv(R), but backed off right away when it was pointed out to me how hazardous it was. Try [L,U,P,Q,R]=lu(sparse(1e-310)), and note that inv(R)=Inf.

So even for diagonal matrices, multiplying by the inverse is a bad idea.
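Tim's scalar example carries over to any IEEE-754 double arithmetic; a sketch in plain Python:

```python
import math

a = 1e-310          # subnormal doubles
b = 2e-310

print(a / b)        # ~0.5, as expected
print(a * (1 / b))  # 1/b overflows to inf, so the product is inf
```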

**13** of 33

Dropping inv from the list of available options seems like a bad idea. It does have legitimate uses and I actually use it a fair amount (see below). A better solution would simply be to include in the help description that there are usually preferable methods. Although the function inv is never necessary, sometimes having the matrix inverse is, so why not keep it for brevity.

I use INV:

1) When using the Woodbury formula.

2) In spectral element schemes where the inverse of a generalized Vandermonde matrix is needed since it contains the expansion coefficients of interpolating basis.

3) When studying pseudospectra.

4) In certain problems where I need trace(inv(A)) and this is faster than sum(1./eig(A));

Example:

>> A=rand(1000);

>> tic; sum(1./eig(A)); toc

Elapsed time is 9.939163 seconds.

>> tic; trace(inv(A)); toc

Elapsed time is 0.601663 seconds.

**14** of 33

In some cases, using the explicit inverse can be as much as 40 (!) times faster than using LU, with comparable errors.

Consider a large system of linear equations

Ax(k) = b(k), k = 1, …, K, where b(k+1) = f(x(k)), solved in two different ways:

1) By finding the LU decomposition of A and then solving each equation by backward and forward substitution.

2) By first calculating the explicit inverse and then solving each equation by multiplying the inverse matrix by b(k).

We find that the second way, using the explicit inverse, is significantly faster than using LU.

The reason is that matrix-vector multiplication is by far faster than backward or forward substitution. This is primarily due to the simpler memory structure of a full matrix, which allows for extended optimization of the matrix-vector algorithms.

For more details, please see the recently published article on this issue at

http://dx.doi.org/10.1007/s10915-006-9112-x.

**15** of 33

Everyone-

This has been a very interesting topic in terms of hearing your responses. Thank you!

Just so you know, we have no intention of actually taking out the function `inv` but we do wish that its usage would be appropriate, for example as Greg mentioned for the Woodbury formula, or as I mentioned earlier, for solving systems with a non-identity covariance.

I made sure we have an enhancement request for mlint to suggest `\` when `inv` is used. It was actually put in the system by me a while ago.

As for using `lu` vs. `inv`, I used `qr` many years ago to iteratively find a solution stably. I don’t have benchmarks, but you might find that adequately fast and numerically superior.

–Loren

**16** of 33

One should never multiply by the inverse to solve Ax=b. Just because it may be faster doesn’t mean it’s good. Besides, solving Lx=b and Ux=b is very fast if you use the right methods (the optimized BLAS functions, as done in MATLAB, and linsolve). Consider:

n = 2000 ;

A = rand (n) ;

b = rand (n,1) ;

lower.LT = true ;

upper.UT = true ;

tic ; S = inv (A) ; toc

tic ; x = S*b ; toc

norm (A*x-b)

tic ; [L,U,p] = lu (A, 'vector') ; toc

tic ; x = linsolve (U, linsolve (L, b(p), lower), upper) ; toc

norm (A*x-b)

inv(A) takes 7.76 seconds on my Intel Pentium 4 Mobile, and x=S*b takes 0.032 seconds. LU takes 2.95 seconds, and the forward/backsolve takes 0.034 seconds. Do you really care about saving .002 seconds? The difference is so small that I would have to do more careful performance analyses to see if it is real.

Solving Ax=b with 1000 right-hand-sides one at a time would take 39.76 seconds using inv(A), and 36.945 seconds using LU and LINSOLVE. Linsolve directly calls the BLAS dtrsm function.

Random matrices are not very good matrices to use, as you do in the paper. They are notoriously well-behaved. Random matrices do not arise in practice.

I see in your paper that you make a critical mistake. You do x = linsolve (U, linsolve (L, P*b, lower), upper). That is, you compute P*b instead of b(p). In the former, P is a permutation matrix, and P*b takes O(n^2) time since MATLAB does not “know” it’s a permutation matrix. In b(p), we tell MATLAB right away that p is a permutation vector, and b(p) takes O(n) time. So P*b takes the same time as S*b, and then of course S*b will be “faster”. Your paper is flawed. You are measuring a peculiarity of the MATLAB language and its interface to the BLAS. If you use MATLAB in the right way, the supposed advantage of inv(A) simply vanishes.
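The gap between the two forms is easy to check in any language; a NumPy sketch (sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
p = rng.permutation(n)    # permutation as an index vector
P = np.eye(n)[p]          # the same permutation as an explicit matrix
b = rng.standard_normal(n)

# Identical results, but P @ b is a full O(n^2) matrix-vector product,
# while b[p] is an O(n) gather.
print(np.array_equal(P @ b, b[p]))
```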

One might also wrongly conclude that LU takes more memory, since L and U are two n-by-n matrices while S is just one. Yes, in MATLAB that’s true. But L and U can be stored in the same array, the same size as S. MATLAB does not support that operation, except in the case where p is not returned (which is needed to solve Ax=b).

So I question the results in this paper, on two grounds. Using inv(A) is not faster unless you abuse MATLAB, and random matrices are useless for looking into problems regarding numerical accuracy. I conclude that multiplying by inv(A) is still a very, very bad idea.

**17** of 33

In my last comment … I may not have been clear that “In your paper …” was referring to Nir Gavish’s comment, not Loren’s blog. Sorry for any confusion.

**18** of 33

I agree with Tom Richardson’s comment. Something like:

Z = linfactor(A,options) ;

x = Z\b ;

would compute the same thing as x=A\b. Z would not be a matrix. It would be an object that would contain the factors of A, using all the heavy-duty machinery in mldivide (either LU, Cholesky, QR, sparse/full, banded/tridiagonal/otherwise, etc). “options” would be a struct, something like linsolve.

I prefer x=Z\b over x=Z*b, because if (algebraically, at least) Z = L*U, then x=L*U*b would not be the solution but x = (L*U)\b would be. Of course it would not be computed as x=(L*U)\b…

Then “Z” would take the place of S=inv(A) ; x =S*b.

The only advantage to S=inv(A);x=S*b as compared to Z=linfactor(A);x=Z\b is that the former would win in a MATLAB Golf game. The latter would always be faster (in either the sparse or full case) and would always be more accurate.

**19** of 33

Nice discussion guys,

I think “inv” is important in Matlab.

In one of my iterative optimization algorithms, I had to solve the linear system (Ax=b) many times with the same coefficient matrix (A), while b was changing at each iteration. So, it was convenient to precompute the inverse of A using inv: S=inv(A). And then simply update x=S*b1, x=S*b2, … x=S*b_t.

Please correct me if there are faster ways to do it.

Cheers!

**20** of 33

Andriy, Since you asked to be corrected, I’ll try to do so gently and with all respect: you have missed the point of Loren’s blog, and the discussion. inv(A) should never ever ever ever be used to solve Ax=b. It is dangerous even when A is diagonal. inv(A) is not important for solving Ax=b with many right-hand sides, as you suggest; it is dangerous and should not be used as such.

The Perennially Asked Question (The PAQ for short) that I get is “How do I use LU or CHOL to solve Ax=b? Why not just use inv(A)?” The answer to The PAQ is “PAQ = LU (pun intended); you should never use inv(A) to solve Ax=b”.

Since this question comes up so often (here and elsewhere), I’ve written a LINFACTOR function that I have just submitted to the File Exchange in MATLAB Central. It describes how to use LU or CHOL instead of inv(A) to solve Ax=b.

**21** of 33

Tim, thanks for the reply. I’ve searched for the LINFACTOR function on the File Exchange, but couldn’t find it. I don’t see it among the recently submitted files either. Could you please provide a direct link?

Going back to the problem of solving Ax=b.

I need to solve the system hundreds of thousands of times with the same coefficient matrix A. This means first I need to solve Ax(1)=b(1). The next “b” depends on the obtained solution: b(2)=F(x(1)). And I need to solve it again, Ax(2)=b(2), and so on (Ax(n)=b(n)).

Precomputing inv(A) makes it so much faster. I understand that it may not be extremely accurate, but still it is much faster than solving the linear system every time.

Please correct me again if there is a different way that is comparable in speed to precomputing inv(A).

Thanks

**22** of 33

It takes a day or two for a new code to post. The function can do:

F = linfactor (A) ;

to factorize A into the object F (a struct, actually), and then

x = linfactor (F,b)

to solve Ax=b. These two functions would correspond to your S = inv(A) and x=S*b, respectively.

The relative speed of the forward/backsolve vs. multiplying by inv(A) depends on the size of the matrix, whether it’s sparse or full, and whether or not it’s symmetric positive definite. It also depends on whether you can quickly compute the permuted b, instead of needing to do P*b (for a sparse P output of LU or CHOL) or b(p) (for the 'vector' output of LU or CHOL). It also depends on your MATLAB version (the BLAS, mostly) and your computer.

In the sparse case, linfactor is always fastest, unless A happens to be diagonal.

If there were a built-in linfactor function (or if I were to write it as a mexFunction …) then linfactor ought to be just as fast as inv(A)*b in the full case, at least when n is about 1000 or more. For tiny n (100 or so) it might be that inv(A)*b is intrinsically faster. The functions S*b and L\b are both done in the level-2 BLAS.

linfactor has some extra machinery to determine which to use (LU or CHOL, sparse or full). If you know which to use, then you should just cut-and-paste the corresponding code into yours, and you’ll gain a little speed.

Here’s an example. On my Intel Core Duo, assuming that you’re doing a dense LU, linfactor does the following (note the use of ‘vector’ which requires MATLAB 7.3 or later):

[L, U, p] = lu (A, 'vector') ;

lower.LT = true ;

upper.UT = true ;

then for each right-hand side, it does:

x = linsolve (U, linsolve (L, b (p,:), lower), upper) ;

For n = 1000, S=inv(A) takes 1.3806 seconds, where LU takes 0.4776. x=S*b takes 0.002301 seconds, whereas the two linsolves take 0.003094. You need to do at least about 1100 solves for inv(A) to catch up. Doing 100,000 systems would take 231 seconds using inv(A), and 310 seconds using LU.
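The factor-once, solve-many pattern that linfactor wraps can be sketched with SciPy's `lu_factor`/`lu_solve` (an illustrative stand-in, not the linfactor code itself):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(3)
n = 1000
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

lu, piv = lu_factor(A)             # O(n^3) factorization, done once

for _ in range(5):                 # each subsequent solve costs only O(n^2)
    b_prev = b
    x = lu_solve((lu, piv), b)     # solves A x = b_prev
    b = 0.5 * x                    # next right-hand side depends on x

print(np.linalg.norm(A @ x - b_prev))
```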

For the same system using Cholesky, the situation is more favorable for CHOL. CHOL takes 0.254 seconds to factorize, and each linsolve takes 0.002504 seconds. S=inv(A) takes 1.356 seconds, and x=S*b takes 0.002479 seconds. (I ran the codes again; all timings are done by repeating the computation until “toc” reports at least 10 seconds of run time.) It takes 44,000 systems for inv(A) to catch up. Solving 100,000 systems would take 249.26 seconds with INV and 250.65 seconds with CHOL. I would call that a tie.

Timings can vary slightly. I could rerun the test and get different results. Or you could run lintests.m in the linfactor package yourself, and see what you get.

For n = 2000, my CHOL tests showed that using CHOL and linsolve takes 1.889 seconds to factorize and 0.0103 seconds for the linsolve. INV takes 10.5 seconds and each S*b takes 0.00948 seconds. So … it would take 10,716 Ax=b’s for inv to break even. A 100,000 of them would take 1031 seconds with CHOL and 958 seconds with INV. I would hardly pick INV over CHOL to save one minute out of about a 17 minute computation, when the former is numerically dubious.

As you can see, inv is not a speed demon. It is not much faster, if at all, than factorizing and forward/backsolving.

inv has the advantage of being a built-in. Likewise, S*b is just one operator, but I need three calls to do linsolve (U, linsolve (L, b(p), lower), upper). It’s quite possible that if linfactor were a MATLAB built-in (a wrapper around mldivide), it would be faster.

Here’s why. Suppose linfactor returned not a struct but an opaque object. For CHOL, it would return a lower triangular matrix (in the dense case) and an integer permutation vector. For the sparse case, CHOL would return a supernodal Cholesky factorization (see CHOLMOD for more details, which is what CHOL uses). A supernodal factorization takes less memory than a vanilla sparse matrix for L, and using it for the forward/backsolve is faster since it can use dense BLAS2 for each supernode.

Likewise, a dense LU could be packed into a single array, and along with that an integer permutation p would be kept. This would cut the memory usage in half, leading to the same memory usage as S*b, and thus likely better performance – probably matching if not exceeding the use of inv.

In each case, the opaque object would contain not a MATLAB matrix or collection of matrices, but a more tightly-packed object whose sole purpose is to make the forward/backsolves go fast.

So, no, I would say that using inv is not “much faster”. It’s only that way because either (1) you’re not using the factorization and linsolve methods properly, or (2) MATLAB needs a built-in linfactor. MATLAB also desperately needs a sparse linsolve; try using cs_lsolve which is a very simple function in CSparse and compare it with L\b, for example.

Once this machinery is in place, maybe we can leave inv on the dustbin … at least for solving Ax=b’s, that is. Even if you could save 10% time using inv instead of chol, is it really worth it? This is a case of the fast-but-mediocre driving out the nearly-as-fast-but-much-safer.

It might also be that the modest increase in reliability means that in your overall application, you converge more quickly and thus need to solve fewer Ax=b’s when using LU or CHOL.

Even for some of the other uses, such as the Woodbury formula, there may be better options. If A is symmetric positive definite, for example, updating/downdating a Cholesky factorization is very fast. In the sparse case, it takes time proportional to the number of entries in L that change … and not all of them change (that’s in CSparse and CHOLMOD, but not in MATLAB). MATLAB has a dense cholupdate that’s decent, for a rank-1 update/downdate. I haven’t done a lot of comparisons between the Woodbury formula and sparse Cholesky update/downdate, but my colleague Bill Hager has. We’re now using the sparse update/downdate precisely because it’s better, faster, and more accurate than Woodbury. I have little experience of using it in the dense case, however.

I’m also not an eigensolver guy, so I can’t make any comments about trace(inv(A)) vs. sum(1./eig(A)), and why the former is so much faster. Maybe Pat Quillen can chime in?

**23** of 33

Thanks Tim, you really convinced me. I accept your license agreement not to use “inv” any more. I wish I had had such a nice overview of solving a linear system before. I think you did a great job with the examples. I will share your ideas at my university.

I have one more question for you. In one of our projects we faced the problem of iteratively solving the system (A+d(t))x(t)=b(t), where A is always constant, symmetric, and positive definite, d(t) is a diagonal matrix, and b(t) is a column vector. d(t) and b(t) change at each iteration. We have to solve such a system many times (about 10000). So far I have used Cholesky factorization, and recently linear conjugate gradient (CG), to solve it. I believe there should be a way to take advantage of the fact that A is constant, p.d., and symmetric.

Woodbury formula doesn’t seem to give any advantages.

Do you have any ideas how to speed up such repetitive system solving? Any suggestion would be really helpful.

Thank you.

**24** of 33

Oh, the “you may use this code only if you promise not to use inv(A) to solve Ax=b” license was tongue-in-cheek, of course.

Briefly (“Briefly?!” as anyone who knows me will gasp), if all entries in d(t) change at each time step, then it’s a rank-n update and there’s nothing you can do that I know of but to completely refactorize the matrix. You can reuse the symbolic analysis if the pattern doesn’t change, but MATLAB doesn’t support that except to reuse the fill-reducing ordering (which CHOL computes efficiently anyway…). So there’s little to gain … but we’re getting off topic and I’ve already spammed Loren’s blog a little too much … so drop me an email if you have questions.

**25** of 33

It is really a bit off topic. Tim, thank you anyway.

**26** of 33

There is another use of inv that has not been mentioned: inverting a symbolic matrix. In my case, this was required for computing finite difference expressions from the Taylor series expansion around collocation points. That was the only time I have ever used inv, but it was useful. Greetings,

Roger.

**27** of 33

Regarding the use of inv(D) when D is sparse and diagonal – a colleague reminded me of another option. The following three operations compute the same thing (x1, x2, x3):

n = 1000000 ;

b = rand(n,1);

d = rand(n,1);

D = spdiags (d, 0, n,n) ;

S = inv (D) ;

tic ; x1 = D\b ; toc

tic ; x2 = d .\ b ; toc

tic ; x3 = S*b ; toc

norm (x1-x2)

norm (x1-x3)

The latter two options (d.\b and S*b) are equally fast on my Intel Core Duo (MATLAB 7.4), whereas D\b takes about 50% more time. In fact, d.\b is a teeny bit faster than S*b. Using S*b is numerically dubious, of course. So even for diagonal matrices, there’s no need to use inv(D)*b. To scale the columns of b, use b./d instead of b/D or b*inv(D).
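In NumPy terms the same advice reads: keep the diagonal as a vector and divide elementwise (a sketch; sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100000
d = rng.random(n) + 0.5       # diagonal entries, bounded away from 0
b = rng.random(n)

x1 = b / d                    # elementwise solve of D x = b: fast and safe
x2 = (1.0 / d) * b            # multiplying by the "inverse": risks overflow for tiny d

print(np.allclose(x1, x2))    # equal here, but x1 never forms 1/d
```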

Richard’s example:

x = b' * (inv(D)).^2 * b

can be written cleanly as

x = b' * ((diag(D)).^2 .\ b) ;

the latter is faster. Even faster is

d = full(diag(D)) ;

x = b' * (d.^2 .\ b) ;

**28** of 33

I have been reading these with interest and am wondering if anyone can help with my problem.

I have a system of equations:

(A+B*inv(C)*D)x=b

where A, B, C and D are large (50,000 × 50,000) sparse matrices (all banded or all dense depending on ‘optimization’) and I have multiple right-hand sides (i.e., b is of size 20×1).

So, I should never, ever calculate inv(C). What can I do that is computationally fast and efficient?
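One standard way out, sketched in NumPy with small dense stand-ins for the 50,000 × 50,000 problem: never form inv(C); compute inv(C)*D as the multi-column solve C Z = D (matrices and sizes below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
A = 5.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
D = rng.standard_normal((n, n))
b = rng.standard_normal(n)

Z = np.linalg.solve(C, D)            # Z = inv(C)*D, without forming inv(C)
x = np.linalg.solve(A + B @ Z, b)    # then solve (A + B*inv(C)*D) x = b

print(np.linalg.norm((A + B @ Z) @ x - b))
```

For the genuinely large sparse case, one would instead factor C once (e.g. with `scipy.sparse.linalg.splu`) or wrap the whole operator as a `LinearOperator` for an iterative solver; the dense sizes above are only for illustration.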

**29** of 33

I can’t see how to calculate the likelihood of a Gaussian vector with arbitrary covariance without using inv. Unless, of course, you have the eigendecomposition of the covariance.
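For the record, the inverse is not needed there: a Cholesky factorization of the covariance yields both the quadratic form and the log-determinant. A sketch (assuming `scipy.linalg.cho_factor`/`cho_solve`; the numbers are made up):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gaussian_loglik(x, mu, Sigma):
    """Log-density of N(mu, Sigma) at x, without forming inv(Sigma)."""
    c, low = cho_factor(Sigma)                 # Sigma = U'U (or LL')
    r = x - mu
    maha = r @ cho_solve((c, low), r)          # r' inv(Sigma) r via triangular solves
    logdet = 2.0 * np.log(np.diag(c)).sum()    # log det(Sigma) from the factor's diagonal
    n = x.size
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + maha)

Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
mu = np.zeros(2)
x = np.array([0.3, -0.7])
print(gaussian_loglik(x, mu, Sigma))
```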

**30** of 33

I think it is necessary to keep this function. There are cases where we want to solve Ax=b with the vector b varying. If the matrix A is of small dimension, it may be advantageous to calculate and store the inverse of A. This way, there is no need to solve the linear equation again when b varies. I usually write my programs this way. If it is silly, I welcome your comments. Thanks!

**31** of 33

If you want to use something that acts like the inverse, but isn’t the inverse, see FACTORIZE, an object-oriented method:

It lets you write:

S = inverse(A)

x = S*b

but it does not actually compute the inverse. It does the right thing instead, by factorizing A and then using forward/backsolves to compute x=S*b.

So to solve (A+B*inv(C)*D)x=b do this:

S = inverse (A+B*inverse(C)*D) ;

x = S\b ;

and rest assured that you are not actually multiplying by the inverse.

**32** of 33

Which one is faster to solve Ax=b: the inverse or LU decomposition?

**33** of 33

Md. Ariful Islam-

You are best off using the operator \. It will yield the numerically best solution. inv is likely NEVER to be the best way, or the fastest.

–Loren
