Good vs Bad Neural Network Training?
Ali Almakhmari
on 8 Jun 2023
Answered: Ersagun Kürşat YAYLACI
on 2 Oct 2024
Hey guys, I have a pretty gigantic table...about 2.3 million rows and 14 columns. 6 of the columns are inputs and 8 are outputs. I am training a neural network so that, given any combination of the 6 inputs, it returns the best estimate of the 8 corresponding outputs. I am fairly new to neural networks, but I have done a bit of reading. All the values in the table are between -20 and 20.
I am trying to understand what kind of values I should aim for in "Gradient" and "Mu". I am confused because those two values are just so far away from the "Target Value". My ANN has 5 hidden layers with 9 neurons in each (ANN = newff(InputCols, OutputCols, [9 9 9 9 9])). Hopefully they will reach the target values and improve as training progresses, but I am hoping someone has recommendations they are willing to share for my case, considering the amount of data I have. I am also worried because on my first training run, with ANN = newff(InputCols, OutputCols, [6 6 6 6]), I noticed that the outputs were sometimes all zeros (which might be a separate issue).
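For reference, a minimal sketch of the setup described above; InputCols and OutputCols are the names used in the post, dataTable is a hypothetical table holding the 2.3 million rows with the 6 input columns first, and the transposes assume one sample per row.
```matlab
% Minimal sketch of the setup described in the question (Deep Learning Toolbox).
InputCols  = table2array(dataTable(:, 1:6))';     % train() expects features in rows, samples in columns
OutputCols = table2array(dataTable(:, 7:14))';

ANN = newff(InputCols, OutputCols, [9 9 9 9 9]);  % five hidden layers, 9 neurons each
[ANN, tr] = train(ANN, InputCols, OutputCols);    % tr records gradient and mu per epoch
```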
Accepted Answer
Vijeta
on 14 Jun 2023
Hi,
Based on the information you have provided, "Gradient" and "Mu" are not hyperparameters you set directly. newff uses the Levenberg-Marquardt algorithm (trainlm) as its default training function: "Gradient" is the magnitude of the gradient of the performance (error) function with respect to the weights, and "Mu" is the Levenberg-Marquardt damping parameter, which the algorithm raises and lowers automatically depending on whether each step reduced the error.
The "Target Value" shown next to each of them in the training window is a stopping criterion, not a goal you need to reach for a good fit. Training stops when the gradient drops below net.trainParam.min_grad (1e-7 by default) or when Mu grows past net.trainParam.mu_max (1e10 by default), so it is normal for both values to sit far from their targets for most of the run. Judge the training by the performance and validation curves instead.
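As a sketch, the stopping thresholds and the per-epoch history can be inspected like this, assuming a network already created with newff as in the question:
```matlab
% Stopping criteria used by trainlm, the default training function for newff.
ANN.trainParam.min_grad      % training stops when the gradient falls below this (default 1e-7)
ANN.trainParam.mu_max        % training stops when mu exceeds this (default 1e10)

% The training record returned by train() logs both values per epoch.
[ANN, tr] = train(ANN, InputCols, OutputCols);
semilogy(tr.epoch, tr.gradient)      % gradient magnitude over the run
hold on
semilogy(tr.epoch, tr.mu)            % Levenberg-Marquardt damping over the run
legend('gradient', 'mu')
```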
The "Mu" parameter is used to speed up the training process and prevent oscillations during the weight updates. A common value for "Mu" is between 0.9 and 0.99, but again, you can experiment with different values to see which one gives you the best results.
Regarding the issue of the outputs being all zeros, this could be caused by a number of factors, including a poorly chosen architecture, data that is not scaled or oriented the way train expects, or overfitting. You may want to start with a smaller network (one or two hidden layers is usually enough for a fitting problem like this), confirm how the inputs and targets are arranged and scaled before they reach train, and use a validation split or regularization to prevent overfitting.
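As a quick way to rule out scaling or orientation problems behind the all-zero outputs, something along these lines can help; newff and feedforwardnet already apply mapminmax internally, so this mainly makes the scaling explicit and checkable:
```matlab
% Check orientation first: inputs should be 6-by-N, targets 8-by-N (samples in columns).
size(InputCols), size(OutputCols)

% Scale each row to [-1, 1] explicitly so the [-20, 20] range is handled visibly.
[Xn, xs] = mapminmax(InputCols);
[Tn, ts] = mapminmax(OutputCols);

[net, tr] = train(net, Xn, Tn);
Yn = net(Xn);                          % simulate on the training inputs
Y  = mapminmax('reverse', Yn, ts);     % map back to the original output scale
any(Yn(:))                             % 0 here means the raw network outputs are all zero
```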