Ivan Volodin on 15 Mar 2018
Commented: KL on 15 Mar 2018
Hello!
I was not sure which title fits best in my case, so I just wrote it in the most general way. I have a file with the following structure:
Step 1
0.10190103
0.10145140
0.10097524
0.10050153
0.10003042
9.95795131E-02
9.91610140E-02
Step 2
9.81189385E-02
9.75561813E-02
9.80424136E-02
0.10000000
0.10000000
9.80617628E-02
9.77829769E-02
...
Step N
0.10000000
0.10000000
9.93788019E-02
0.11977901
0.12290157
0.12588248
0.12861508
And I need to read it into N arrays, like so:
V1 = [0.10190103
0.10145140
0.10097524
0.10050153
0.10003042
9.95795131E-02
9.91610140E-02]
V2 = [9.81189385E-02
9.75561813E-02
9.80424136E-02
0.10000000
0.10000000
9.80617628E-02
9.77829769E-02]
VN = [0.10000000
0.10000000
9.93788019E-02
0.11977901
0.12290157
0.12588248
0.12861508]
I have read some examples of reading data into arrays, but have not found anything related to my case...
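A straightforward way to get there (a sketch; `data.txt` stands in for the actual file name, which is not shown here) is to read the file line by line and start a new array whenever a "Step" header appears:

```matlab
% Read the file block by block: a line starting with "Step" begins a new block.
fid = fopen('data.txt', 'r');
blocks = {};                          % blocks{k} collects the numbers of Step k
while ~feof(fid)
    tline = strtrim(fgetl(fid));
    if isempty(tline)
        continue                      % skip blank lines
    elseif startsWith(tline, 'Step')
        blocks{end+1} = [];           %#ok<AGROW> start a new block
    else
        blocks{end} = [blocks{end}; str2double(tline)];
    end
end
fclose(fid);
% blocks{1} is V1, blocks{2} is V2, and so on
```

A cell array is used because the blocks are not guaranteed to have the same length; if they do, `cell2mat` (with each block as a column) turns it into a plain matrix.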
KL on 15 Mar 2018
Ivan Volodin on 15 Mar 2018
Edited: Ivan Volodin on 15 Mar 2018
Sure, please see the attached file.

KL on 15 Mar 2018
Edited: KL on 15 Mar 2018
Something quick,
% d is the table returned by readtable on the attached file
d = table2cell(d);
d = regexprep(d,' ','');
d = cellfun(@str2double,d,'uni',0);
d(cellfun(@isnan,d)) = [];
d = reshape(cell2mat(d),12,[]);
I reckon there are faster ways. Even a single for loop right after readtable, combining all of these operations in one pass, would be more efficient than this. Nevertheless, this should work.
P.S.: saving the data in multiple variables (as you mentioned in your question) is not a good idea; use a matrix instead. If some of the columns have fewer rows, use a cell array.
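If every step has the same number of values (12 in the attached file, 7 in the example at the top of this thread), the reshape above can also be done in one pass with textscan, whose CommentStyle option skips the "Step" header lines (a sketch; `data.txt` is a placeholder name):

```matlab
fid = fopen('data.txt', 'r');
C = textscan(fid, '%f', 'CommentStyle', 'Step');   % skip "Step ..." header lines
fclose(fid);
vals = C{1};                  % all values in file order, as doubles
M = reshape(vals, 7, []);     % one column per step (7 values each in the example)
```

This only works when the blocks are all the same length; otherwise reshape errors out and a cell array is needed.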
Ivan Volodin on 15 Mar 2018
Thank you for your answer, but how is it going to work in my case, given that readtable fails with "All lines of a text file must have the same number of delimiters"?
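One way around that error is to avoid readtable altogether: read the whole file as text and split it at the "Step" headers (a sketch; `data.txt` is a placeholder name, and blocks of unequal length are fine here):

```matlab
txt = fileread('data.txt');                         % whole file as one char vector
chunks = regexp(txt, 'Step[^\n]*', 'split');        % split at each "Step ..." line
chunks(cellfun('isempty', strtrim(chunks))) = [];   % drop the empty leading piece
V = cellfun(@(c) sscanf(c, '%f'), chunks, 'UniformOutput', false);
% V{k} is a column vector holding the values of Step k
```

sscanf with '%f' handles both the plain decimals and the 9.95795131E-02 style values directly, so no regexprep/str2double round trip is needed.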
KL on 15 Mar 2018
Where do you get these files from? A lot would be easier if the files were exported properly (with consistent delimiters and numbers instead of strings).