Why does rlQValueRepresentation always add a Regression Output (RepresentationLoss) layer to the end of the network?

I have noticed that when I create a critic using rlQValueRepresentation, it appends a Regression Output layer (named RepresentationLoss) to the end of the network. I would like to understand why this is always the case and what the purpose of that layer is. I looked through the documentation but could not find anything on this particular point.
Also, when I inspect this "loss" layer, it does not seem to have any outputs, which confuses me further. Could you please help clarify this?
Thanks in advance!
Here is the code I used to see the difference between the original network and the critic's network:
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
% Critic network: one Q-value output per discrete action
dnn = [
    featureInputLayer(obsInfo.Dimension(1),'Normalization','none','Name','state')
    fullyConnectedLayer(24,'Name','CriticStateFC1')
    reluLayer('Name','CriticRelu1')
    fullyConnectedLayer(24,'Name','CriticStateFC2')
    reluLayer('Name','CriticCommonRelu')
    fullyConnectedLayer(length(actInfo.Elements),'Name','output')];
figure
plot(layerGraph(dnn))
title('Original network');
critic = rlQValueRepresentation(dnn,obsInfo,actInfo,'Observation',{'state'});
criticmodel = getModel(critic);
figure;
plot(criticmodel);
title('Critic network');
% Layers(7) is the appended RepresentationLoss layer - what are its outputs?
criticmodel.Layers(7, 1).NumOutputs
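One way to probe the appended layer further (a sketch, assuming the loss layer is the last entry of Layers, as it appears in the plotted critic network):

```matlab
% Display the appended layer itself and check its class; an output layer
% of this kind defines the training loss rather than computing activations,
% which would explain why it reports no outputs
lossLayer = criticmodel.Layers(end)
class(lossLayer)
```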

Answers (0)

Release

R2021a
