I have a series of P images, each MxNxC (i.e., with C channels), organized as a 4D numeric array X where size(X) is [M,N,C,P].
I have a corresponding series of P label maps organized as a 3D numeric array Y where size(Y) is [M,N,P].
Question: How do I convert this input to a form that can be fed to trainNetwork() in a semantic image segmentation application?
I know that trainNetwork accepts the syntax
trainedNet = trainNetwork(X,Y,layers,options)
but the documentation does not discuss the required format of Y for semantic segmentation problems.
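For reference, here is a minimal sketch of the data I have, with placeholder values, and my current guess at the conversion (the categorical() step is my assumption, not something I found in the trainNetwork documentation):

```
% Dummy data with the shapes described above (values are placeholders)
M = 128; N = 128; C = 3; P = 10;    % image size, channels, number of images
X = rand(M, N, C, P, 'single');     % 4D image stack, size(X) = [M N C P]
Y = randi([1 5], M, N, P);          % 3D label maps, size(Y) = [M N P]

% My guess: the label maps need to become categorical, e.g.
Ycat = categorical(Y);              % size(Ycat) = [M N P]

% ...but I am unsure whether trainNetwork(X, Ycat, layers, options)
% actually expects this form, or whether the data must instead go
% through a datastore.
```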