Hi,
I understand that you are developing a face recognition system using webcam images and a preloaded database, but you're encountering incorrect recognition results where the identified person is not the actual one.
I assume you're using a face recognition pipeline that involves feature extraction followed by classification or matching; in that case, the errors are most often caused by inconsistent preprocessing, mismatched features, or an unsuitable similarity threshold.
To improve the recognition results, you can follow the steps below:
Step 1: Ensure consistent face alignment and size
Before feature extraction, make sure all faces (from webcam and database) are:
- Aligned to have eyes and mouth at the same relative positions.
- Resized to a consistent dimension (e.g., 112x92 or 100x100 pixels).
This helps maintain feature comparability.
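A minimal sketch of this preprocessing, assuming the Computer Vision Toolbox is available; the image file and target size (webcamFrame.jpg, faceSize) are placeholders for your own data, and full alignment would additionally require locating the eyes:

detector = vision.CascadeObjectDetector();   % default frontal-face model
img      = imread('webcamFrame.jpg');        % placeholder: your webcam snapshot
bboxes   = step(detector, img);              % each row is [x y width height]

faceSize = [112 92];                         % use the same size for every image
if ~isempty(bboxes)
    faceCrop = imcrop(img, bboxes(1,:));     % keep the first detected face
    faceCrop = imresize(faceCrop, faceSize); % enforce a consistent size
else
    warning('No face detected in this frame.');
end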
Step 2: Normalize lighting and grayscale levels
Differences in brightness or contrast between webcam and database images can mislead the algorithm.
- Apply histogram equalization (histeq) or adaptive histogram equalization (adapthisteq) to the face images to standardize lighting.
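For example (assuming faceCrop is the cropped, resized face from Step 1):

if size(faceCrop,3) == 3
    faceGray = rgb2gray(faceCrop);   % compare all faces in grayscale
else
    faceGray = faceCrop;
end
faceEq = adapthisteq(faceGray);      % adaptive equalization; histeq(faceGray) is the global alternative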
Step 3: Use robust feature extraction
If you are using raw pixel values as features, they are sensitive to noise and illumination changes. Prefer one of the following (an LBP sketch follows the list):
- "Eigenfaces" (PCA),
- "LBP" (Local Binary Patterns),
- or deep learning-based embeddings (e.g., features from a pretrained face recognition network such as FaceNet imported into MATLAB’s Deep Learning Toolbox).
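Here is an LBP sketch, assuming dbFaces is a cell array of preprocessed grayscale database faces (all the same size) and faceEq is the normalized webcam face from Step 2:

probeFeat = extractLBPFeatures(faceEq, 'CellSize', [16 16]);   % 1-by-N feature vector

dbFeats = zeros(numel(dbFaces), numel(probeFeat));
for k = 1:numel(dbFaces)
    dbFeats(k,:) = extractLBPFeatures(dbFaces{k}, 'CellSize', [16 16]);
end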
Step 4: Implement a distance threshold
After extracting features, compare input features with database using a distance metric (e.g., Euclidean or cosine).
- Set a minimum similarity threshold to decide whether the person is recognized or unknown.
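A sketch of the matching step, continuing from the LBP features above; the threshold value here is purely illustrative and should be tuned on your own validation data:

threshold = 0.8;                                   % example value, not universal

dists = pdist2(dbFeats, probeFeat, 'euclidean');   % distance from the probe to every database face
[minDist, idx] = min(dists);

if minDist < threshold
    fprintf('Recognized as database entry %d (distance %.3f)\n', idx, minDist);
else
    disp('Unknown person: no database entry is close enough.');
end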
Step 5: Evaluate the recognition accuracy
Test with a validation set. If recognition accuracy is low, use confusion matrices to analyze misclassifications.
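For example, assuming trueLabels and predictedLabels are categorical vectors collected over your validation set:

accuracy = mean(predictedLabels == trueLabels);
fprintf('Validation accuracy: %.1f%%\n', 100*accuracy);

C = confusionmat(trueLabels, predictedLabels);   % rows: true identity, columns: predicted identity
confusionchart(trueLabels, predictedLabels);     % visualize which identities get confused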
Refer to the MATLAB documentation on "vision.CascadeObjectDetector", "extractLBPFeatures", or "face recognition using deep learning" for more details.
Hope this helps!