Recently, I’ve been working on customer projects around improving the performance of analysis code. These projects typically involve using the Profiler to find bottlenecks, vectorizing code wherever possible, and using the data types that best fit the particular tasks. Regarding the last point, I was pleasantly surprised to find out how much more efficient string manipulations were with the new string data type than with character arrays or cell arrays of chars.
In some cases, the bottlenecks I found with the Profiler could be solved by vectorizing or changing the algorithm. But one area I found a bit challenging was when the bottleneck involved file I/O. When you need to read from or write to files, there’s not much you can do to speed up that process.
Well, that’s not entirely true. With readtable or datastore, you can specify import options to optimize how you import your data. You can do something similar with textscan by specifying a format spec.
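To make the idea of import options concrete, here is a hedged Python/pandas analogue (the column names and dtypes are made up for illustration): declaring the dtypes and the columns you need up front saves the parser from inferring them, much like MATLAB’s import options or a textscan format spec.

```python
import io
import pandas as pd

# Hypothetical CSV content standing in for a file on disk.
csv_text = "id,value,label\n1,0.5,a\n2,1.5,b\n3,2.5,c\n"

# Telling the parser which columns to keep and what types they have
# skips type inference and avoids parsing columns we don't need.
df = pd.read_csv(
    io.StringIO(csv_text),
    usecols=["id", "value"],
    dtype={"id": "int32", "value": "float64"},
)
```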
My pick this week falls into this category of efficient file reading. The use case is quite specific, but say you just need to read in the last N lines from a large CSV file. If you know beforehand how many lines the file has, csvread has an option for specifying the range. If you don’t know the number of lines, how can you read the last N lines? Would you need to first scan the file to figure out how many lines it has? Mike’s csvreadtail uses a clever technique of moving the file position, using fseek, to the end of the file and then “reading back” until a specified number of lines have been read. Great idea!
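The technique translates to any language with seekable file handles. Here is a minimal Python sketch (Python’s seek/read playing the roles of MATLAB’s fseek/fread): jump to the end, then step backwards one byte at a time, counting newlines until N lines have been covered.

```python
import os

def read_last_lines(path, n):
    """Read the last n lines of a file without scanning it from the start.

    Seek to the end, then walk backwards one byte at a time, counting
    newline characters until n lines lie ahead of the file position.
    """
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        end = f.tell()
        pos = end
        newlines = 0
        while pos > 0:
            pos -= 1
            f.seek(pos)
            ch = f.read(1)
            # Ignore a trailing newline at the very end of the file.
            if ch == b"\n" and pos != end - 1:
                newlines += 1
                if newlines == n:
                    pos += 1  # start just after the delimiting newline
                    break
        f.seek(pos)
        return f.read().decode().splitlines()
```

If the file has fewer than n lines, the loop simply reaches the start and the whole file is returned.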
I do have a few suggestions for improvement, though. I’m pretty certain these will make this code much more efficient.
- Currently, the way the file position is moved back is by repeatedly calling fseek relative to the end of file. This can be done more efficiently by moving relative to the current position. Since the algorithm reads one byte at a time, you can move back 2 bytes from the current position.
- Rather than reading one byte at a time, it is more efficient to read a chunk of data (several hundreds or thousands of bytes) at a time. The tricky part is figuring out how many bytes to go back and read. Perhaps one approach is to adaptively change the amount to go back. For example, depending on how many lines of data were read with a single read, you can read more (or less) next time.
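Both suggestions can be sketched together in Python (again standing in for the MATLAB fseek/fread calls; the fixed chunk size here is a simplification of the adaptive scheme suggested above): move backwards a chunk at a time with relative seeks, and stop once enough newlines have accumulated in the buffer.

```python
import os

def tail_chunked(path, n, chunk=4096):
    """Read the last n lines by pulling fixed-size chunks from the end.

    Instead of one seek/read per byte, step back a chunk at a time and
    stop once the buffer holds more than n newlines (enough to cover n
    complete lines even with a trailing newline).
    """
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        pos = f.tell()
        buf = b""
        while pos > 0 and buf.count(b"\n") <= n:
            step = min(chunk, pos)  # don't seek past the start of the file
            pos -= step
            f.seek(pos)
            buf = f.read(step) + buf
    # The buffer may begin mid-line; keeping only the last n lines fixes that.
    return [ln.decode() for ln in buf.splitlines()[-n:]]
```

With a chunk size of a few kilobytes, a typical call needs only one or two reads instead of one per byte.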
For those of you on R2016b or newer, you may be interested in checking out the new tall array capability. Tall arrays allow you to work with large, out-of-memory data sets, loading only the parts of the data necessary for analysis. In that framework, the tail function reads in the last N rows of data.
Published with MATLAB® R2017a
3 Comments
I’d do it simply (by not reinventing a wheel someone else has already built):
function indata = cvsreadtail(FileName,N)
% cvsreadtail - read last N lines from csv-file FileName
% indata = cvsreadtail(FileName,N)
% FileName - filename, string with full or relative path to file to be read
% N - number of lines to read from end of file, scalar integer.
[s,Vals] = system(sprintf('tail -n %d %s',N,FileName));
if s == 0
    indata = str2num(Vals); %#ok<ST2NM> str2num parses the whole numeric block
else
    error('cvsreadtail:tailFailed','Call to tail failed: %s',Vals);
end
Good point BjornG, as long as you’re on Linux or some other OS with tail as a built-in system command. Developing on Windows, I’ve found this submission to be a good alternative.
Reading your reply Mike, I thought: “Surely that has to be possible, and easily too…”. Five google-minutes later I’m so happy with my OS-choice!