I am trying to construct a custom neural network for regression that produces whole-integer outputs rather than continuous real numbers. The target data consists of whole integers, and fractional amounts are meaningless to the application.
At first blush, I thought the solution would be to create a copy of the purelin transfer function and its sub-functions, with the a = n term replaced by a = round(n). However, this produces only three discrete output values. On further inspection, when the custom function runs, its inputs are all bounded to [-1, 1], so round(n) maps them to -1, 0, or 1. Given that the purelin template itself is unbounded, I can only surmise that a separate function scales the output back up to the target range after the transfer function is applied.
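For illustration, here is a minimal NumPy sketch of what I suspect is happening (assuming the toolbox applies a mapminmax-style normalization that maps targets to [-1, 1] and inverts it after the transfer function; the helper names below are my own, not toolbox functions). Rounding inside the transfer function acts on the scaled values and collapses everything to three levels, while rounding after the inverse scaling would recover the full integer range:

```python
import numpy as np

def mapminmax_apply(x, xmin, xmax):
    """Scale x linearly so that [xmin, xmax] maps onto [-1, 1]."""
    return 2 * (x - xmin) / (xmax - xmin) - 1

def mapminmax_reverse(y, xmin, xmax):
    """Invert the scaling: map [-1, 1] back onto [xmin, xmax]."""
    return (y + 1) * (xmax - xmin) / 2 + xmin

targets = np.array([3.0, 7.0, 12.0, 20.0])
lo, hi = targets.min(), targets.max()

scaled = mapminmax_apply(targets, lo, hi)

# Rounding inside the transfer function sees only scaled values,
# so the output collapses to the three levels -1, 0, and 1.
inside = np.round(scaled)

# Rounding after the inverse scaling preserves the integer targets.
after = np.round(mapminmax_reverse(scaled, lo, hi))

print(inside)  # only values from {-1, 0, 1}
print(after)   # the original integers
```

This matches the symptom described above: the three "discrete steps" are exactly the three integers reachable inside [-1, 1].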
So, the question is: which function in the layer actually determines the final output, and how can I get the network to output whole integers? Note that simply rounding the result after the fact is not an acceptable solution, because the integer nature of the output needs to be taken into account when calculating the network's performance during training.