DNN training 3D Parameters
I am training a DNN on a small dataset of 3D MRI images with a network I built from scratch: 4 sets of convolutional layer + batch normalization + ReLU + max pooling, followed by a global average pooling layer and 2 fully connected layers with a dropout between them. Both my training and validation accuracy are low, and my loss curve does not decay; it stays roughly flat around 1. I have tried L2 regularization, changing the momentum, and adding a learn rate drop factor, but none of these improve the accuracy. This model worked well with 2D images, but I cannot get above 60% accuracy with the 3D network. It would be helpful to receive some suggestions on which parameters I could try changing.
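For reference, a minimal sketch of the architecture described above, assuming 64×64×64 single-channel volumes, 2 output classes, and filter counts doubling per stage (all of these are placeholder choices, not values from the original post):

```matlab
% Sketch of the described 3D network: 4 blocks of conv + BN + ReLU + max
% pooling, then global average pooling and 2 FC layers with dropout.
% Input size, class count, and filter counts are illustrative assumptions.
layers = [
    image3dInputLayer([64 64 64 1])
    convolution3dLayer(3,8,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)
    convolution3dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)
    convolution3dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)
    convolution3dLayer(3,64,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)
    globalAveragePooling3dLayer
    fullyConnectedLayer(64)
    dropoutLayer(0.5)
    fullyConnectedLayer(2)   % number of classes
    softmaxLayer
    classificationLayer];
```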
Answers (1)
Matt J
on 11 Apr 2024
Edited: Matt J
on 11 Apr 2024
The parameters you mention experimenting with do not cover all of the available training options (see below for a more complete list). You could also try a different training algorithm, e.g., adam. Because the 3D input/output dimensions are much larger, you may also need to change the network architecture so that it has more weights to work with.
options = trainingOptions('adam', ...
    'MiniBatchSize',5, ...
    'MaxEpochs',100, ...
    'InitialLearnRate',ilr, ...   % ilr: your chosen initial learning rate
    'L2Regularization',1e-4, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropFactor',0.8, ...
    'LearnRateDropPeriod',5);
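With the options defined, training proceeds as usual. Here `XTrain`, `YTrain`, and `layers` are placeholders for your training volumes, labels, and layer array:

```matlab
% Train the network with the options above (placeholder variable names).
net = trainNetwork(XTrain, YTrain, layers, options);
```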