How can I improve the results of a poorly performing ANN when training stops because the maximum mu value is reached?
Mohammadreza Hesami on 4 Jun 2018
Commented: Mohammadreza Hesami on 5 Jun 2018
As far as I know, increasing the value of mu will not help the network improve; it probably indicates that further training will not improve learning any more. I have also increased the size of the hidden layer, but no significant change appeared in the results. I would appreciate any other suggestions or recommendations regarding this issue.
Useful information:
This is a regression problem. The input and target matrices have 5 and 1 rows, respectively (5 input features, 1 target). The data are not normalized, so normalization may be one possible way of improving the result. The network configuration is shown in the attached image.
Thank you very much in advance.
Accepted Answer
Shrestha Kumar on 4 Jun 2018
Hi,
When mu reaches its maximum value, it means that further training would only degrade the network: the Levenberg-Marquardt algorithm can no longer find a step that reduces the error, even with very heavy damping.
In your case the dataset is very small, so if the input values vary over a wide range they cause large variations in the weight values, which makes training harder. It is therefore better to normalize the dataset, as in the sketch below.
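A minimal sketch of what I mean, assuming your inputs are in a 5-by-N matrix x and your targets in a 1-by-N row vector t (those variable names are just placeholders for your own data):

  [xn, xs] = mapminmax(x);                % scale each input row to [-1, 1]
  [tn, ts] = mapminmax(t);                % scale the target row to [-1, 1]
  net = fitnet(10);                       % 10 hidden neurons, trainlm by default
  net = train(net, xn, tn);               % train on the normalized data
  yn  = net(xn);                          % network output in normalized units
  y   = mapminmax('reverse', yn, ts);     % map back to the original target scale

Note that fitnet (and feedforwardnet) already apply mapminmax as a default input/output processing function, so this explicit scaling mainly matters if you built the network by hand or removed the default processing functions.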
Another approach is to increase the size of the hidden layer (which you have already tried) or to add one more hidden layer, as sketched below.
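For example (the layer sizes 10 and 5 are arbitrary choices to experiment with, and xn, tn are the normalized data from the previous sketch):

  net = fitnet([10 5]);        % two hidden layers with 10 and 5 neurons
  net = train(net, xn, tn);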
Also, if you want to keep training even after the default maximum mu is reached, you can raise the limit yourself. For trainlm the relevant parameter is net.trainParam.mu_max (net.trainParam.mu sets the initial mu, not the maximum); see the sketch below.
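A sketch, assuming you are training with trainlm (1e12 is just an example value; the default mu_max is 1e10):

  net = fitnet(10);
  net.trainParam.mu_max = 1e12;   % allow mu to grow further before training stops
  net = train(net, xn, tn);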