
Weight initialization in patternnet

Jack on 26 Apr 2014
Commented: Greg Heath on 14 Jun 2016
Hi.
Suppose we create a network and then use configure to set up its structure:
net = patternnet([3 2]);
net = configure(net, inputs, targets);
Now I want to disable normalization of the inputs and outputs of this network, so I use:
net.inputs{1}.processFcns = {};
net.outputs{3}.processFcns = {};
But after that, when I check net.inputs{1} and net.outputs{3}, all the information is reset to 0 and defaults, just as if we had never called configure. The MATLAB help has this note:
Side Effects:
Whenever this property is altered, the input size is set to 0 and the processSettings are set accordingly.
I only want to disable normalization; I don't want the train function to initialize the network weights again. When I check the network's initial weights, they are unchanged. In this situation, will train reinitialize the weights as it would for an unconfigured network, or will it use the already-configured weights? I ask because disabling the processing resets net.inputs{1} and net.outputs{3} to their defaults.
Thanks.

Accepted Answer

Greg Heath on 1 May 2014
Search using
greg cross validation
Read
29 Sep 2013 NEURAL NET CROSSVALIDATION DESIGN EXAMPLE Greg Heath neural network, crossvalidation 1 1433

More Answers (2)

Greg Heath on 28 Apr 2014
1. Why in the world are you using two hidden layers when 1 is sufficient?
2. Why are you using configure? TRAIN automatically configures an unconfigured net before training. The only time you need configure is when you are training multiple nets in a loop, to prevent net{i} from being initialized with the final weights of net{i-1}.
3. Why are you disabling the input processing? Are you normalizing the data before training to zero-mean/unit-variance so that you can deal with outliers?
4. Why are you disabling the output processing? The default (-1,1) is chosen because the default output transfer function is tansig.
5. Are you doing it because you have the target coded {0,1} and you want to use logsig or softmax?
Hope this helps.
Thank you for formally accepting my answer
Greg
PS Search NEWSGROUP and ANSWERS using
greg patternnet
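Points 1 and 2 above can be sketched as follows (a sketch, not code from the thread; x and t are hypothetical input and target matrices):

```matlab
% TRAIN configures an unconfigured net automatically, so for a
% single design this is usually all that is needed.
x = rand(8, 100);                    % hypothetical 8-feature inputs
t = [x(1,:) > 0.5; x(1,:) <= 0.5];   % hypothetical 2-class 0/1 targets
net = patternnet(10);                % one hidden layer is usually sufficient
[net, tr] = train(net, x, t);        % configure + initialize + train in one call
```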
  2 Comments
Jack on 28 Apr 2014
Edited: Jack on 28 Apr 2014
Thank you for the answer, Greg.
The two-hidden-layer network was just an example. I think the answer to all of your questions is "YES" :-).
More details:
I'm repeating 5-fold cross-validation n times and averaging over the n 5-fold cross-validation accuracies to increase the reliability of the results, so when I was using newff I had this structure:
%% some code
net = newff( ... );  % create network
for n = 1:w
    %% cross-validation indices and classperf
    for l = 1:5
        %% neural network training
    end
end
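One way to get the same initial weights in every repeat (a hedged sketch; x, t, xtrain, ttrain, and w are placeholders following the pseudocode above) is to snapshot the weight/bias vector once with getwb and restore it with setwb before each training run:

```matlab
% Sketch: fix the initial weights once, restore them before every fold
net0 = newff(x, t, 10);        % creation assigns random weights once
wb0  = getwb(net0);            % snapshot of the initial weights and biases
for n = 1:w
    % ... build 5-fold cross-validation indices ...
    for l = 1:5
        net = setwb(net0, wb0);            % same starting point every fold
        net = train(net, xtrain, ttrain);  % train from those fixed weights
    end
end
```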
1. As you know, I don't want different initial weights across repeats of this structure. I thought that if I put newff before the main loop (for n = 1:w), I would have the same initial weights in all repeats of the code above. Is this true?
2. Now, how can I implement this structure with patternnet? The same way as above?
3. I didn't understand "The only time you need configure is when you are training multiple nets in a loop, to prevent net{i} from being initialized with the final weights of net{i-1}." Can you describe this in more detail? Do we have the same problem with newff?
PS. I'm normalizing the data before feeding it into this structure because of some limitations in my system and for outlier detection.
Thanks.
Greg Heath on 29 Apr 2014
1. If you are designing a classifier use newpr, not newff.
2. What are your trn/val/tst datadivision ratios?
3. Every design should have different initial weights and datadivisions
4. Unlike the current creation functions ( fitnet, patternnet, feedforwardnet ), the obsolete functions ( newfit, newpr, newff ) are created with random weights.
5. Since the code structure is different for the two generations, in a for loop YOU have to decide whether you are going to create new nets or reuse old nets by reinitializing weights and nonconstant parameters (e.g., mu).
6. Not sure what the previous version of train would do if the net was not initialized.
7. If you start to use a new version, be sure you know what changes have been made (e.g., newpr vs. patternnet).
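Points 4 and 5 can be sketched as two loop styles (a sketch with hypothetical data x and t, not code from the thread):

```matlab
% (a) Obsolete generation: creation itself assigns random weights
for i = 1:10
    net = newpr(x, t, 10);       % fresh random weights on every pass
    net = train(net, x, t);
end

% (b) Current generation: create once, reinitialize inside the loop
net = patternnet(10);
for i = 1:10
    net = configure(net, x, t);  % set layer sizes from the data
    net = init(net);             % otherwise pass i starts from pass i-1's final weights
    net = train(net, x, t);
end
```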



Greg Heath on 3 May 2014
Validation data cannot be separated from training data
total = design + test
design = train + validate
The validation set will stop training when mseval goes through a minimum.
Unfortunately, MATLAB doesn't allow valratio = 0 except for trainFcn = trainbr.
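The division above can be sketched with the divideParam ratios (a sketch, not code from the thread; the 70/15/15 split is an assumption for illustration):

```matlab
% total = design + test,  design = train + validate
net = patternnet(10);
net.divideFcn = 'dividerand';
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;  % training stops at the minimum of mseval
net.divideParam.testRatio  = 0.15;
% valRatio = 0 is only reasonable with net.trainFcn = 'trainbr'
```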
  2 Comments
Jack on 14 Jun 2016
Thank you Greg.
Because the cross-validation function doesn't provide any indices for validation, for 5-fold cross-validation I have 20% for testing and 80% for training. For validation I use the first (or last) 12.5% of the training indices, so I end up with 10% for validation, 70% for training, and 20% for testing. Is this procedure good, or would it be better to take the validation indices from the test samples?
So, as you said, the position of newff (newpr) in my code is fine, because that function initializes the weights.
Thanks.
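Jack's 70/10/20 split can be written out explicitly with divideind (a sketch; N and the permutation idx are assumptions for illustration, not from the thread):

```matlab
N   = 500;              % hypothetical number of samples
idx = randperm(N);      % random ordering for one cross-validation repeat
net = patternnet(10);
net.divideFcn = 'divideind';
net.divideParam.trainInd = idx(1 : round(0.70*N));                 % 70% train
net.divideParam.valInd   = idx(round(0.70*N)+1 : round(0.80*N));   % 10% validate
net.divideParam.testInd  = idx(round(0.80*N)+1 : N);               % 20% test
```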
Greg Heath on 14 Jun 2016
It is unfortunate that the NNTBX doesn't have an XVAL function that recognizes the important role of the validation subset. (It is also unfortunate that trainbr doesn't allow a validation subset... but that is a separate issue!)
I have posted code that does:
29 Sep 2013 NEURAL NET CROSSVALIDATION DESIGN EXAMPLE Greg Heath neural network, crossvalidation 1 1433

