# Timing Extraction of Parts of an Array

Posted by **Loren Shure**

In Sarah's blog, Dan asked about speed of removing elements. There are a number of ways of deleting elements in MATLAB. So, what's the "best" way?

### Vector Data

I only work with vector (1-D) data in this post, and I use linear indexing (i.e., indexing with a single subscript). First let me generate some points and perform an FFT.

```matlab
nPts = 1e7;
cutoff = round(nPts/2);
A = rand(nPts,1);
B = fft(A);
```

I want to keep a copy of `B` off to the side so I can do more experiments on it later. I want to be sure the copy doesn't share memory with the array
being used later on, so I modify the first element (keeping it the same value).

```matlab
Borig = B;
Borig(1) = Borig(1) + sin(0);
```

### Methods That Come to Mind

Let me list the methods of paring down an array that I can think of quickly.

- Copy the elements to keep back into the same array: `B = B(1:cutoff)`.
- Delete the unwanted elements by assigning the empty array (`[]`), using `end`: `B(cutoff+1:end) = []`.
- Delete the unwanted elements by assigning `[]`, using the explicit last index instead of `end`: `B(cutoff+1:nPts) = []`.
- Copy the elements to keep into a new array: `C = B(1:cutoff)`.

### Test Various Methods

```matlab
t1 = tic; B = B(1:cutoff); toc(t1)
B = Borig; Borig(1) = Borig(1) + sin(0);
t2 = tic; B(cutoff+1:end) = []; toc(t2)
B = Borig; Borig(1) = Borig(1) + sin(0);
t3 = tic; B(cutoff+1:nPts) = []; toc(t3)
B = Borig; Borig(1) = Borig(1) + sin(0);
t4 = tic; C = B(1:cutoff); toc(t4)
```

```
Elapsed time is 0.131063 seconds.
Elapsed time is 0.322803 seconds.
Elapsed time is 0.327507 seconds.
Elapsed time is 0.125420 seconds.
```

It doesn't seem like using `end` vs. the actual number for doing the indexing makes nearly as much difference as whether or not I copy the elements to keep
or delete the elements to remove.

### Keep Most Elements

Let's repeat the analysis but retain most of the elements this time.

```matlab
B = Borig; Borig(1) = Borig(1) + sin(0);
cutoff = round(0.95*nPts)
```

```
cutoff = 9500000
```

Test the various methods.

```matlab
t1 = tic; B = B(1:cutoff); toc(t1)
B = Borig; Borig(1) = Borig(1) + sin(0);
t2 = tic; B(cutoff+1:end) = []; toc(t2)
B = Borig; Borig(1) = Borig(1) + sin(0);
t3 = tic; B(cutoff+1:nPts) = []; toc(t3)
B = Borig; Borig(1) = Borig(1) + sin(0);
t4 = tic; C = B(1:cutoff); toc(t4)
```

```
Elapsed time is 0.240744 seconds.
Elapsed time is 0.510874 seconds.
Elapsed time is 0.440639 seconds.
Elapsed time is 0.243518 seconds.
```

Interesting. I see roughly the same time pattern as earlier, when I kept half the elements.

### Keep Few Elements

Finally, sticking to the vector case, keep very few of the elements this time.

```matlab
B = Borig; Borig(1) = Borig(1) + sin(0);
cutoff = round(0.05*nPts)
```

```
cutoff = 500000
```

Test the various methods.

```matlab
t1 = tic; B = B(1:cutoff); toc(t1)
B = Borig; Borig(1) = Borig(1) + sin(0);
t2 = tic; B(cutoff+1:end) = []; toc(t2)
B = Borig; Borig(1) = Borig(1) + sin(0);
t3 = tic; B(cutoff+1:nPts) = []; toc(t3)
B = Borig; Borig(1) = Borig(1) + sin(0);
t4 = tic; C = B(1:cutoff); toc(t4)
```

```
Elapsed time is 0.026702 seconds.
Elapsed time is 0.194346 seconds.
Elapsed time is 0.195731 seconds.
Elapsed time is 0.026779 seconds.
```

### Commentary and Question

I did this experiment with 10 million data elements. If I do it with orders of magnitude fewer points, the differences are less dramatic, though the pattern is similar. My conclusion is that it is faster to keep the elements you want than to delete the ones you don't. However, it depends on how you arrive at the point of culling the data.

Suppose I'm handed the indices of the unwanted elements. Am I better off just using these to delete values, or taking the hit of identifying the wanted ones and using those to keep elements? Here's my thinking: if I'm given the unwanted indices, then to create the desired ones I need to make an array of all indices and delete the unwanted ones from this larger array to get the keepers. Then I have to do the actual indexing for the values. So, if presented with indices already, whether ones to be kept or deleted, use them. If not, calculate the keepers and assign those values to the required array.
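
As a sketch of that bookkeeping (assuming the unwanted indices arrive in a numeric vector, here called `unwanted`, a name of my choosing), turning them into keepers costs an extra pass before the actual indexing:

```matlab
% Given: data vector B and numeric indices 'unwanted' to discard.
allIdx = 1:numel(B);     % start from every index
allIdx(unwanted) = [];   % delete the unwanted ones to get the keepers
C = B(allIdx);           % then do the actual indexing
```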

Do you find yourself in situations where it is unnatural to calculate or specify the values you want to keep? If so, please post here because I am curious about those situations.

Published with MATLAB® 7.6

## 10 Comments

Thanks for the article.

I personally find myself more in the situation where I want to delete non-contiguous indices, rather than halving an array or similar. So I end up using a find (similar to your previous articles), like this:

```matlab
ndx = find(B==1234);
B(ndx) = [];
```

Any thoughts on efficiency on this?

Thanks!

Also, I often need to work with very large sets of data that cannot reside in memory — let’s say 1 billion elements — in a fashion where I cannot pre-allocate my intended output array.

For example:

```matlab
C = [];
sizeChunk = 10e6;                     % 10 million element working array
Nchunks = length(BigInput)/sizeChunk;
for n = 1:Nchunks
    B = BigInput;                     % with BigInput being my 1e9 input
    B(1) = B(1) + sin(0);             % force a real copy
    cutoff = ceil(sizeChunk * rand);  % variable cutoff (rand keeps it positive)
    C = cat(2, C, B(1:cutoff));
end
```

Any thoughts?

Thanks!

Dave-

It really depends on how you create the indices in the first place. If you have the luxury of creating the keepers instead of the discards, you are probably better off skipping find and using logical indexing instead. After all, that logical array is the input to find, so you aren’t doing extra work.

I am not sure I understand your 2nd question/point. Are you trying to see what to do if you don’t know the total number? If so, there are strategies, like guessing a total amount up front but preparing to augment in chunks as required. Perhaps that’s what you are doing?

Set the unused entries to a value that you can delete later. If you reach the end of the chunk, add a new chunk on in the same way. This might be what you are doing… If so, all I can recommend is that you do some profiling with “typical” sizes and figure out a chunk size that is large enough without overwhelming your memory.
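
A minimal sketch of that grow-in-chunks strategy (the chunk size, batch source, and NaN sentinel are my choices, not from the comment):

```matlab
chunk = 1e6;                        % guessed allocation size
C = nan(chunk, 1);                  % NaN marks the unused entries
nUsed = 0;
for k = 1:100
    newVals = rand(1000, 1);        % stand-in for each incoming batch
    nNew = numel(newVals);
    if nUsed + nNew > numel(C)      % out of room: append another chunk
        C = [C; nan(chunk, 1)];
    end
    C(nUsed+1:nUsed+nNew) = newVals;
    nUsed = nUsed + nNew;
end
C = C(1:nUsed);                     % trim the unused tail at the end
```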

–Loren

Dave-

On your first post, you could use logical indexing:

```matlab
% keep all B different than 1234
indKeep = (B ~= 1234);
B = B(indKeep);
```

Keeping is still faster than deleting for randomly placed elements. As with normal indexing, deleting a small number of elements carries less penalty than deleting a large number, but deletion seems to always be slower.

Dave,

Looking at your post 1 and Vincent’s post 4, in addition to thinking about speed, you might want to think about memory. If more than 1/8 of the elements in your FIND call satisfy the condition, the double array of indices generated by the FIND call will require more memory than the logical “mask” created by your condition, since real double values take up 8 bytes and logical values take up only 1. In addition, you can use the mask the same way to delete elements.
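
A quick way to see the crossover (a sketch; the 1/8 threshold follows from 8-byte doubles vs. 1-byte logicals):

```matlab
B = rand(1e6, 1);
mask = (B > 0.5);   % logical mask: always numel(B) bytes (1 byte per element)
idx  = find(mask);  % double indices: 8 bytes per matching element
% idx outgrows mask as soon as more than 1/8 of the elements match.
whos mask idx
```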

I was interested to see this method of forcing a non-lazy copy:

```matlab
A = B;
A(1) = A(1) + sin(0);
```

Presumably the +sin(0) prevents the line being optimized away, though I wasn’t aware that MATLAB did that kind of optimizing. In the past I’ve tended to use:

```matlab
A = B;
A(1) = B(1);
```

Is the sin(0) method preferred? Does anybody know of a simple method that works even when the array being copied is empty? Probably the most foolproof method is to write a MEX function that calls mxDuplicateArray().

Jebb-

Good question! In today’s MATLAB you don’t need to do more than A(1) = B(1), but I was writing the code with an eye toward MATLAB getting smarter in the future, so I tried to outwit the copy optimization for a longer tenure across MATLAB releases.

–Loren

Loren,

It is an interesting experiment. I wish to point out that even when a set of unwanted indices is given, keeping the wanted elements can still be faster than deleting the unwanted ones. The trick is to use logical indexing.

```matlab
nPts = 1e7;
A = rand(nPts,1);
B = fft(A);
unwanted = find(A<0.5);
Borig = B;
Borig(1) = Borig(1) + sin(0);

t1=tic;
keep=true(nPts,1);
keep(unwanted)=false;
B=B(keep);
toc(t1)

B=Borig;
Borig(1) = Borig(1) + sin(0);
t2=tic;
B(unwanted)=[];
toc(t2)

B=Borig;
Borig(1) = Borig(1) + sin(0);
t3=tic;
keep=true(nPts,1);
keep(unwanted)=false;
C=B(keep);
toc(t3)
```

The results are:

```
Elapsed time is 0.464593 seconds.
Elapsed time is 0.549569 seconds.
Elapsed time is 0.468961 seconds.
```

Now with more elements to delete.

```matlab
unwanted = find(A<0.9);
Borig = B;
Borig(1) = Borig(1) + sin(0);

t1=tic;
keep=true(nPts,1);
keep(unwanted)=false;
B=B(keep);
toc(t1)

B=Borig;
Borig(1) = Borig(1) + sin(0);
t2=tic;
B(unwanted)=[];
toc(t2)

B=Borig;
Borig(1) = Borig(1) + sin(0);
t3=tic;
keep=true(nPts,1);
keep(unwanted)=false;
C=B(keep);
toc(t3)
```

```
Elapsed time is 0.435304 seconds.
Elapsed time is 0.488471 seconds.
Elapsed time is 0.422132 seconds.
```

Finally, with fewer elements to delete:

```matlab
unwanted = find(A<0.1);
Borig = B;
Borig(1) = Borig(1) + sin(0);

t1=tic;
keep=true(nPts,1);
keep(unwanted)=false;
B=B(keep);
toc(t1)

B=Borig;
Borig(1) = Borig(1) + sin(0);
t2=tic;
B(unwanted)=[];
toc(t2)

B=Borig;
Borig(1) = Borig(1) + sin(0);
t3=tic;
keep=true(nPts,1);
keep(unwanted)=false;
C=B(keep);
toc(t3)
```

The results are:

```
Elapsed time is 0.395496 seconds.
Elapsed time is 0.556099 seconds.
Elapsed time is 0.343438 seconds.
```

The three time patterns are almost the same. Deleting is still the most expensive method, even when the comparison includes the time to create a logical keep mask. If the unwanted indices are already in a logical vector, then `keep = ~unwanted`, so keeping is even more favorable.

Thanks Yi.

Also, for what it’s worth, I have put in an enhancement request to bring the deletion syntax to performance parity with the other methods. I believe there are some changes we can make, but I don’t know how delicate they are, so I can’t predict when they might occur.

–loren

For what it’s worth, I agree with Dave’s comments above. While in Dave’s example the find() is unnecessary, I think it might be trying to demonstrate the case in which one has the numerical index upfront.

I posted a similar comment a while back. If you’re dealing with a large data set, it should be more efficient (in both memory and time) to use numerical indexing when the number of elements you’re selecting is small.

Suppose my array has 30 billion double elements, roughly 240 GB of memory, and suppose I’m interested in a 1 million element subset of it. A logical index will be 30 GB in size, while a numerical index will be 8 MB. Not to mention other programming and microarchitectural advantages: traversing the smaller index will not thrash my CPU’s TLB (if I have large page support enabled). Also, I know the number of elements in advance, so I can preallocate the memory needed (which, unfortunately, doesn’t seem to be the case with MATLAB – please correct me on this!). Of course, any operations dealing with the index itself only have to touch a much smaller number of entries.

Logical indexing is nice in most cases but let’s not make numerical indexing a second class citizen. While uniformity may lead to greater genericity, given the increasing amounts and complexity of data people have to deal with, one-size-fits-all solutions are not the optimal way forward – a programmer needs choice in his toolbox. Unless the Mathworks wants MATLAB to be the tool of choice for only small-to-medium size problems (and I know they don’t!), facilities such as fast numerical indexing and shared memory for distributed workers located on the same machine are a must.

Think about the target audience: From my experience, such applications are written in compiled languages (read C++/Fortran). Performance oriented programmers will simply balk at the idea of using a tool that makes them choose between slow indexing or allocating an additional 30GB of memory.