
Reuse dimensionality reduction after designing model

Hi. I'm doing binary classification with SVM and MLP on financial data. My input data has 21 features, so I used dimensionality reduction methods to reduce the dimension of the data. Some dimensionality reduction methods, like stepwise regression, report the best features, so I use those features directly in my classification model; other methods, like PCA, transform the data to a new space, and I then use, for instance, the best 60% of the reported columns (features).

The critical problem is in the phase of using the final model. For example, I used the financial data from one and two years ago for today's financial position, and now I want to use the past and present data to predict next year.

My question is: should I apply PCA to the new input data before feeding it into my designed classification model? How do I use PCA (for example) on this data? Do I apply it as before, i.e. pca(newdata…), or are there results from the original PCA that I must reuse in this phase?
Thank you so much for your kind help.

Accepted Answer

Greg Heath
Greg Heath on 28 Mar 2014
PCA does not take into account the output variance. Therefore, it is suboptimal for classification.
Use PLS instead.
Whatever transformations are made to the training data must also be applied to the validation, test, and operational data, using the EXACT SAME transformation derived from the training-data characteristics.
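For example, a minimal sketch of reusing one PLS reduction on new data (illustrative names only: Xtrain, ytrain, and Xnew are placeholders; plsregress is the Statistics Toolbox PLS function):
ncomp = 10;                                   % number of PLS components to keep
[XL,YL,XS,YS,BETA,PCTVAR,MSE,stats] = plsregress(Xtrain, ytrain, ncomp);
muX = mean(Xtrain,1);                         % training means, saved for later use
% The training scores XS are what the classifier is designed on. New data is
% centered with the TRAINING means and projected with the TRAINING weights
% (plsregress returns them in stats.W, defined so that XS = X0*stats.W):
XSnew = bsxfun(@minus, Xnew, muX) * stats.W;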
Hope this helps.
Thank you for formally accepting my answer
Greg
  1 Comment
Jack
Jack on 28 Mar 2014
Edited: Jack on 28 Mar 2014
Thank you for your answer. Besides Partial Least Squares (PLS), which dimensionality reduction techniques do you suggest for a binary classification problem? (I'm currently using Fisher Discriminant Analysis, stepwise regression, the Welch test, and Neighborhood Components Analysis.)
I have a hybrid classification and optimization model: the optimization algorithm tunes the parameters of the classification methods (MLP, SVM, ELM, …) and finds the best inputs (features) according to classification accuracy, which is the cost function of the optimization algorithm. To increase the reliability of the system I use 5-fold cross-validation, repeated 5 times, in every iteration of the optimization algorithm, so the computational cost is high. My input data has 20 features, so I want to reduce the dimension of the data first. I then feed the lower-dimensional data (for example, 15 features) into my model, and the model automatically selects the best combination of 10 features. Afterwards I want to use this model on out-of-sample data. As you said, I can't use PCA, Kernel PCA, or nonlinear PCA (auto-associative neural network) methods for my problem. In general, what do you think of this structure?
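In code, the cross-validation cost I describe is roughly this (a sketch only; trainAndScore is a hypothetical helper that trains one classifier on a fold and returns its test accuracy):
nRepeats = 5;  k = 5;  acc = zeros(nRepeats, k);
for r = 1:nRepeats
    cvp = cvpartition(y, 'KFold', k);           % stratified folds for the 0/1 labels
    for f = 1:k
        acc(r,f) = trainAndScore(X(training(cvp,f),:), y(training(cvp,f)), ...
                                 X(test(cvp,f),:),     y(test(cvp,f)));
    end
end
cost = 1 - mean(acc(:));                        % the value the optimizer minimizes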
I read somewhere that PLS can only be used when the sample size is low. My data set typically has between 800 and 6000 samples, or more. So can I use this method?
// My data is labeled (0 and 1, binary classification), and the inputs are financial ratios and variables from company statements, so I think there is correlation among the input features. Does that mean I should only use dimensionality reduction techniques that take the class labels into account, like FDA? //
Thank you so much for your kind help.


More Answers (2)

Tom Lane
Tom Lane on 29 Mar 2014
Greg seems to have some good ideas. However, going back to your original question, this is how it looks to me. If you apply PCA to your original data and train a model using the components you compute, then you do not want to do a new PCA on your new data. You want to get the coefficients from the PCA of your old data, and use them to compute components (scores) for the new data.
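A minimal sketch of that idea (illustrative names only: olddata is the data the model was trained on, newdata is the future data):
[coeff, score, ~, ~, ~, mu] = pca(olddata);       % fit PCA once, on the old data
% ... train the classifier on score (or on its leading columns) ...
newscore = bsxfun(@minus, newdata, mu) * coeff;   % reuse the OLD mean and OLD coefficients
% then feed the classifier the same columns of newscore it was trained on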
  4 Comments
Greg Heath
Greg Heath on 30 Mar 2014
"I read somewhere" doesn't mean much unless you are positive it is with respect to PLS vs PCA.
Jack
Jack on 30 Mar 2014
Thank you for your answer, Greg. I have added some more questions; I would greatly appreciate it if you could take a look at them.



Greg Heath
Greg Heath on 28 Mar 2014
I am unfamiliar with Neighborhood components analysis.
PCA maximizes variance in the input space without regard to the outputs. It is used often; in general, however, it is suboptimal for reduced-feature classification and regression.
On the other hand, PLS takes the linear input/output relationship into account. Like PCA, it is applicable to polynomials; however, I do not use powers higher than squares and cross-products.
For nonlinear classification I use MATLAB's MLP PATTERNNET. I sometimes use the RBF NEWRB; however, it is not very flexible (identically shaped spherical Gaussian basis functions at the locations of algorithm-selected training data).
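For reference, a minimal PATTERNNET sketch (illustrative only; assume X is an N-by-21 feature matrix and y an N-by-1 vector of 0/1 labels):
net = patternnet(10);                   % one hidden layer; 10 neurons is arbitrary
T   = full(ind2vec(y' + 1));            % 0/1 labels -> 2-row one-hot targets
net = train(net, X', T);                % the toolbox expects samples in columns
[~, pred] = max(net(X'));               % predicted class index (1 or 2)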
When data sets are large use crossvalidation.
Hope this helps.
Thank you for formally accepting my answer
Greg
  2 Comments
Jack
Jack on 28 Mar 2014
Edited: Jack on 29 Mar 2014
Thank you for your answer again. I read somewhere that PLS can only be used for data sets with a low sample size, and that this is a limitation of the PLS method. Is this true?
Apart from PLS, what other dimensionality reduction techniques would you suggest for this binary classification problem? It doesn't have to be a built-in MATLAB function; I can find or write scripts for methods that aren't built in.
(I'm using Fisher Discriminant Analysis, stepwise regression, and the Welch test in my code now. If you are familiar with these methods, do you think they are appropriate for my purpose?)
Thanks again.
PS: As you said, I removed PCA, Kernel PCA, nonlinear PCA (auto-associative neural network), and factor analysis from my system.
Jack
Jack on 29 Mar 2014
Edited: Jack on 29 Mar 2014
Besides choosing an appropriate method, is the following structure correct? For example, suppose I'm using Fisher discriminant analysis. I have 21 features and 6000 samples. I transform my data with FDA, which gives a 6000*21 data set. I then keep the first 10 best features, so a 6000*10 data set is the input to my hybrid classification model. Finally, my model chooses the 4 best features (say columns 1, 3, 6, and 8).
Suppose my out-of-sample (testing) data is 15000*21. I compute the transformed data set by applying the weight matrix W from the original FDA to the new data, then keep the first 10 features, and after that select columns 1, 3, 6, and 8 as the input features of my designed model. Is this structure correct?
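In code, the out-of-sample phase I mean is roughly this (a sketch only; W, the feature ranking, and the selected columns all come from the training phase, and Xnew is the 15000*21 matrix with samples in rows):
Znew   = Xnew * W;             % apply the FDA projection learned on the training data
Znew   = Znew(:, 1:10);        % keep the same 10 leading transformed features as in training
Xmodel = Znew(:, [1 3 6 8]);   % the same 4 columns the hybrid model selected during training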
Which do you think is better: normalizing the data before the dimensionality reduction step, or after it? And which normalization method do you suggest: min/max mapping or whitening?
I found this on Stack Overflow:
% Step 1. Generate a PCA data model from the training data
[W, Y] = pca(data, 'VariableWeights', 'variance', 'Centered', true);
% Recover the coefficients, mean, and scaling needed for future data
W = diag(std(data))\W;          % undo the variance weighting of the coefficients
[~, mu, we] = zscore(data);     % training mean and standard deviation
we(we==0) = 1;                  % avoid division by zero for constant columns
% Step 2. Apply the previous data model to a new vector
x = newDataVector;
x = bsxfun(@minus, x, mu);      % center with the TRAINING mean
x = bsxfun(@rdivide, x, we);    % scale with the TRAINING standard deviation
newDataVector_PCA = x*W;        % new coordinates as principal component scores
In this code the testing data is normalized with the mean and standard deviation of the training data. Is this correct?
Thanks again.

