Mini batch size changing value during gradient descent
Hello everyone,
I am currently working on multimodal deep learning, with a neural network classifier receiving two time-dependent inputs: videos and a set of given features. Videos are 4-D arrays of size width x height x depth x frames, and features are 2-D arrays of size number of features x frames.
I've been trying to classify the inputs based on the examples given below, as well as on some of my previous work.
During training, I came across a very odd situation. The mini-batch size, which I had initially set to 16, was decreased to 9. This produced an error, as the layers were expecting batches of size 16 in the dlfeval() function.
I haven't found anything related to this problem here, so I was wondering if any of you could offer advice or a solution.
Thank you for your help!
Answers (1)
Shubham
on 27 Sep 2023
I understand that while training the neural network, you found that the mini-batch size, which was initially set to 16, was later decreased to 9.
Please check whether the total number of samples in your dataset is divisible by 16. If it is not, the final mini-batch of each epoch is smaller than the others, because it contains only the leftover samples; for example, 105 samples with a mini-batch size of 16 yield six full batches followed by one batch of 9. This would explain a batch of size 9 reaching your model function. Also check for any inconsistencies in data preprocessing (e.g., samples being dropped or filtered out) or in the network architecture.
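The leftover-batch arithmetic above can be checked directly. A minimal sketch, where numObservations is a hypothetical value standing in for your dataset size:

```matlab
% If the number of observations is not divisible by the mini-batch size,
% the final batch of each epoch contains only the remainder.
numObservations = 105;      % assumed example value, not from the question
miniBatchSize   = 16;
lastBatchSize   = mod(numObservations, miniBatchSize);
% Here mod(105, 16) = 9, so the last iteration would receive 9 samples,
% matching the batch size you observed.
```

If mod(numObservations, miniBatchSize) equals 9 for your data, the partial final batch is almost certainly the cause.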
You may try using a mini-batch datastore (or a minibatchqueue object) for reading data in batches, which gives you explicit control over how partial batches are handled.
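For a custom training loop with dlfeval, one way to guarantee every iteration sees exactly 16 samples is to discard the partial final batch via minibatchqueue. A sketch, assuming ds is a placeholder datastore combining your video and feature inputs, and the "SSCTB"/"CTB" data formats are assumptions about your two input layouts:

```matlab
% Sketch: read fixed-size mini-batches for a custom training loop.
% "ds" is a hypothetical combined datastore of videos and features.
mbq = minibatchqueue(ds, ...
    'MiniBatchSize',    16, ...
    'PartialMiniBatch', 'discard', ...        % skip the leftover samples
    'MiniBatchFormat',  ["SSCTB", "CTB"]);    % assumed video/feature formats

while hasdata(mbq)
    [X1, X2] = next(mbq);    % each batch now always holds 16 observations
    % ... gradients = dlfeval(@modelLoss, net, X1, X2, T); ...
end
```

Setting 'PartialMiniBatch' to 'discard' drops the remainder samples each epoch; if you would rather keep them, leave the default 'return' and make your model function batch-size agnostic instead.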
Hope this helps!!