What happens if the validation performance is greater than the test performance?

I have trained several networks with different numbers of hidden neurons. My question is: what happens if I select a network whose validation performance is higher than its test performance? I have read somewhere that in this situation the distribution of data between the training set and the test set is not correct. Thanks for any suggestion.

Accepted Answer

Greg Heath
Greg Heath on 28 Feb 2017
There is no rule governing the order of the val and tst performances. That is why it is worthwhile to design a number of nets that differ only in their random initial weights and/or, if the data are not a time series, in the order of the input/target pairs.
Thank you for formally accepting my answer
Greg
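The multi-design approach in the answer above can be sketched as follows. This is a hedged illustration, not code from the thread: the original discussion concerns MATLAB's Deep Learning Toolbox, while this sketch uses a hypothetical NumPy one-hidden-layer net (`train_mlp`) and synthetic data, so only the methodology (several nets differing only in their random initial weights, each evaluated on val and tst splits) carries over.

```python
# Sketch: train several small nets that differ only in their random initial
# weights, then compare validation and test error per design. In MATLAB the
# analogous step is re-seeding rng before each call to train().
import numpy as np

def train_mlp(Xtr, ytr, H, seed, epochs=2000, lr=0.05):
    """One-hidden-layer tanh regression net, trained by full-batch GD."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (Xtr.shape[1], H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        A = np.tanh(Xtr @ W1 + b1)              # hidden activations
        err = (A @ W2 + b2) - ytr               # residuals, shape (N, 1)
        # Backprop of mean-squared error
        gW2 = A.T @ err / len(Xtr); gb2 = err.mean(0)
        dA = (err @ W2.T) * (1 - A**2)
        gW1 = Xtr.T @ dA / len(Xtr); gb1 = dA.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda X: np.tanh(X @ W1 + b1) @ W2 + b2

def mse(f, X, y):
    return float(np.mean((f(X) - y) ** 2))

# Synthetic 1-D regression problem, split 70/15/15 like the toolbox default
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X) + 0.1 * rng.normal(size=(200, 1))
Xtr, Xval, Xtst = X[:140], X[140:170], X[170:]
ytr, yval, ytst = y[:140], y[140:170], y[170:]

for seed in range(5):                           # nets differ only in init weights
    f = train_mlp(Xtr, ytr, H=8, seed=seed)
    print(f"seed={seed}  val={mse(f, Xval, yval):.4f}  tst={mse(f, Xtst, ytst):.4f}")
```

Running this typically shows val above tst for some seeds and below it for others, which is the point of the answer: the ordering depends on the random split and initialization, not on a fixed rule.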
  3 Comments
Greg Heath
Greg Heath on 1 Mar 2017
Sometimes it is dangerous to rely on the results of just a few designs. That is why, for challenging cases, I typically design at least 10 or 15 nets for each candidate value of H, the number of hidden nodes.
Then, by tabulating the three NMSE values (training, validation, test) with the NMSEval column sorted to be monotonically decreasing, you can see how well trends in NMSEval indicate the acceptability of NMSEtst.
Of course there are other tabulations and/or plots that reveal design trends (e.g., NMSE vs. H). The important thing is to design enough nets that you are convinced your choice is reliable.
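The tabulation described in this comment can be sketched as below. The NMSE numbers here are random placeholders purely to show the table mechanics; in practice each row would come from one trained net (NMSE = MSE divided by the target variance, so 0 is perfect and 1 matches a constant mean predictor). The loop bounds (three candidate H values, 10 random-init designs each) follow the "10 or 15 nets per candidate H" advice above.

```python
# Sketch: one row per trained net, sorted so NMSEval decreases monotonically
# down the table, which makes it easy to check whether NMSEtst tracks NMSEval.
import numpy as np

rng = np.random.default_rng(1)
rows = []
for H in (2, 4, 8):                 # candidate hidden-layer sizes
    for seed in range(10):          # ~10 random-init designs per H
        trn = rng.uniform(0.01, 0.3)            # placeholder NMSE values;
        val = trn + rng.uniform(0.0, 0.2)       # real rows come from
        tst = trn + rng.uniform(0.0, 0.2)       # trained nets
        rows.append((H, seed, trn, val, tst))

rows.sort(key=lambda r: -r[3])      # NMSEval column monotonically decreasing
print(f"{'H':>3} {'seed':>4} {'NMSEtrn':>8} {'NMSEval':>8} {'NMSEtst':>8}")
for H, seed, trn, val, tst in rows:
    print(f"{H:>3} {seed:>4} {trn:8.3f} {val:8.3f} {tst:8.3f}")
```

Scanning the NMSEtst column of such a table top to bottom shows whether low validation error reliably predicts acceptable test error for a given H.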
