Hello, I need help answering this question.
I built a system that uses LVQ (Learning Vector Quantization) as its method. The input is an x-ray image, and the system outputs a diagnosis of whether the patient's heart has an anomaly or not. I used 32 images as training data and 45 images as testing data. The training accuracy is 87.5%, and the testing accuracy is 97.8%.
My question is: why is the training accuracy not 100%? Theoretically it should be, shouldn't it?
I used 32 samples for training. Since the output is either YES (the patient's heart is abnormal) or NO (the heart is normal), there are two target classes: class 1 for normal and class 2 for abnormal. Each of the 32 samples is assigned to either class 1 or class 2. The system finds the minimum distance between each of the 32 samples and the initial weights, and updates the winning weight based on the sample's target class. It keeps updating the weights until it produces a final weight vector (w), which is then used in the testing phase.
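For reference, here is a minimal sketch of the LVQ1 training loop as I understand the description above (one prototype per class, a fixed learning rate, and Euclidean distance are my assumptions; the actual initialization and learning-rate schedule may differ):

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.1, epochs=50):
    """LVQ1: for each sample, find the nearest prototype (minimum
    Euclidean distance); pull it toward the sample if the classes
    match, push it away otherwise."""
    W = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, t in zip(X, y):
            # winner = prototype with the minimum distance to x
            j = np.argmin(np.linalg.norm(W - x, axis=1))
            if proto_labels[j] == t:
                W[j] += lr * (x - W[j])   # move toward the sample
            else:
                W[j] -= lr * (x - W[j])   # move away from the sample
    return W

def predict(W, proto_labels, X):
    """Classify each sample by the label of its nearest prototype."""
    return np.array([proto_labels[np.argmin(np.linalg.norm(W - x, axis=1))]
                     for x in X])
```

Note that the final prototypes only approximate each class region; a training sample lying closer to the other class's prototype will still be misclassified, even though it was used during training.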
What I don't understand is this: because the final weights are produced by training on those 32 samples, I expected that classifying those same 32 samples with the final weights would reproduce the initialized target classes exactly (100% training accuracy). But when I tested the 32 samples with the final weights, only 28 were classified correctly; the other 4 images were assigned to classes different from their targets. Why? I have looked into several papers that also use LVQ, and none of them reports 100% training accuracy either, but none explains why. I have also checked my code and I am quite sure there is nothing wrong with it. Can anyone help me answer this, please? Thank you.