Program architecture for handling large .csv files
Good afternoon,

I am a bit concerned. I just spent a week writing a MATLAB program that takes in a large amount of data from a .csv file.

When I started the project I opted for the table variable type because of its ease of use with functions like readtable(). However, I am now stuck with an unusable program: it takes hours for MATLAB to process my .csv files. It appears MATLAB has not yet optimized performance for the table data type, with no fix in sight.

I have considered the following options:

1: Rewrite the entire program using arrays. Problem: the data in my file is all hex values, so I would need to convert them to decimal with the hex2dec function, which (as far as I can tell) only works with char data. So an array of doubles seems out of the question. Not sure where to go here.

2: Try to rewrite the program using the Parallel Computing Toolbox.

Thoughts?
Answers (5)
Peter Perkins
on 28 Jul 2021
Edited: Peter Perkins
on 28 Jul 2021
With no code at all to go on, it's pretty hard to give specific advice.
Hex issues aside, the first piece of advice I would give is to write vectorized code. The fact that you have 50k calls to subscripting suggests that you are doing scalar operations in a tight loop, which is not the best way to write code in MATLAB. So many calls to brace subscripting also suggests that you are doing assignments to one variable using braces, which, as Walter points out, is slower than using dot assignment (because braces have to do a lot more in general). That's an unfortunate difference that is not highlighted in the doc, but perhaps should be. In any case, there's no code to go on, so ...
Walter is correct that since 18b, performance, especially for assignments into large tables, has improved quite a bit.
If you can't vectorize, and you can't upgrade to a newer version, it's probably not necessary to "re-write the entire program using arrays". It's usually possible to focus only on the tight loop and "hoist" some of the variables out of the table for that part of the code, then put them back in. Use tables for the organization and convenience they provide, use raw numeric in small doses for performance. Tables have a lot going on, and will never be as fast as double arrays. That doesn't mean you should avoid them.
It really helps to provide concrete code examples.
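As a rough sketch of the hoisting pattern described above (the table and variable names here are hypothetical):

% Hypothetical table T with variables x and y used in a tight loop
x = T.x;                    % hoist the columns out as plain double arrays
y = T.y;
for k = 2:numel(x)
    y(k) = y(k-1) + x(k);   % scalar work on raw numeric arrays is cheap;
end                         % T.y(k) or T{k,'y'} here would be much slower
T.y = y;                    % put the result back into the table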
3 Comments
Walter Roberson
on 28 Jul 2021
It is difficult to discuss high level program architecture without some understanding of the kinds of operations that need to be performed.
For example, there are some needs for which the most efficient method would be to use a multidimensional hypercube of characters, with the text for any one "word" stored as a column (instead of the typical rows). But there are other needs for which you might want the "words" stored as rows in a hypercube.
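For instance, a minimal sketch of the columns-as-words layout (the values are made up):

% Three 4-character hex "words" stored one per column of a char matrix
W = ['0A1F'; 'FFFE'; '0042'].';   % 4x3 char array, one word per column
vals = sscanf(W, '%4x');          % sscanf reads char arrays down the columns,
                                  % so this decodes all three words in one call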
Walter Roberson
on 27 Jul 2021
In some cases, depending on the format of the file, you have some options of how to proceed:
- readtable() and readmatrix() and readcell() all permit a 'Format' option, using the same format specifications as are used by textscan() -- including the potential to use the %x format (possibly with a length specification, if your fields are fixed width); see the sketch after this list.
- you could use textscan() directly, since you are working with text files
- you could use lower-level I/O commands, including fscanf(), or fgetl() with sscanf(), depending on how complicated your files are. If your format is complicated enough that you effectively need to read one line at a time, this approach might be slower.
- when your file is not super-complicated but does have different sections, there are surprisingly large performance gains to be had by reading the entire file in as text, using regexp() to break it into subsections, and then applying textscan() or sscanf() to the subsections. (Performance gains relative to looping over the lines testing each one, that is.)
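Here is a rough sketch of the %x route with textscan() (the file name and the comma-delimited layout are assumptions):

% Hypothetical: comma-separated records of hex fields like 1A2F,0042,...
fid = fopen('data.csv', 'r');
C = textscan(fid, '%x', 'Delimiter', ',');   % hex is decoded during the read
fclose(fid);
vals = C{1};   % unsigned integer values; no separate hex2dec pass needed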
Jeremy Hughes
on 28 Jul 2021
Edited: Jeremy Hughes
on 29 Jul 2021
First, it would help to see an example file, and some sample code that demonstrates the problem.
Here's the best I can say with what I see.
If the entire variable is in hex format, you can use import options to do the conversion quickly on import.
detectImportOptions will recognize a value like 0x1A as hex, but not without the 0x prefix; unprefixed values can still be read as hex if you ask for it.
opts = detectImportOptions(filename, "Delimiter", ",");
% varNamesOrNumbers can be positions, e.g. [1 3 5],
% or names, e.g. ["Var1","Var3"]
opts = setvaropts(opts, varNamesOrNumbers, "NumberSystem", "hex", "Type", "auto");
% You can also improve reading performance by selecting only the columns
% you want (this is optional)
opts.SelectedVariableNames = opts.VariableNames([1 3 5 7 9]);
T = readtable(filename, opts)
12 Comments
dpb
on 31 Jul 2021
"If i can bring it in ahead of time as a hex value i will be in much better shape. Can one of you offer a suggestions?"
This has already been covered above by Walter.
NB: the new readXXX functions such as readmatrix() all pass the heavy lifting off to readtable() in the end; they are just front ends that give it some clues about the content of the file and specify how the data are to be returned to the caller.
These may provide a small performance boost over readtable; what they mostly do is return an array or cell array instead of a table. If all operations are on an array basis, then there's probably no advantage at all in using tables.
However, using detectImportOptions and customizing the import-options object, setting the variable types to hex where appropriate, will almost certainly give benefits. The primary one is importing the data directly as numeric values instead of as char, cellstr, or string data that would have to be converted afterwards; the conversion work is passed off to the system I/O library instead.
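If an array rather than a table is wanted, the same customized options object can be reused with readmatrix() (a sketch reusing the opts from Jeremy's answer above):

opts = detectImportOptions(filename, "Delimiter", ",");
opts = setvaropts(opts, varNamesOrNumbers, "NumberSystem", "hex", "Type", "auto");
M = readmatrix(filename, opts);   % numeric matrix instead of a table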
Robert Scott
on 3 Aug 2021
3 Comments
dpb
on 3 Aug 2021
"... each line consists of repeated entries that are 4 hex digits representing signed 16 bit numbers..." @Walter Roberson
I finally got a response that all the entries in each record are hex except for the first and last elements, but the OP never said what those two are. All the other details are still opaque, given the refusal to provide any further useful information...
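Given that description, a minimal sketch of converting 4-digit hex words to signed 16-bit values (the file name, and the assumption that every 4-hex-digit run is a data word, are hypothetical):

raw   = fileread('data.csv');                   % whole file as one char vector
words = regexp(raw, '[0-9A-Fa-f]{4}', 'match'); % every 4-digit hex word
u     = uint16(hex2dec(words));                 % unsigned 16-bit values
s     = typecast(u, 'int16');                   % reinterpret as signed 16-bit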