Hello Murat,
To plot the linear decision boundary between two Gaussian-distributed datasets with given means and covariances, you can use the concept of discriminant functions. The decision boundary is the set of points where the two discriminant functions are equal, i.e., where g1(x) = g2(x).
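For context, the boundary is linear precisely because the covariance matrices are equal: the quadratic term in the Gaussian discriminant is then the same for both classes and cancels. A quick sketch of the derivation:

```latex
g_i(x) = -\tfrac{1}{2}(x-\mu_i)^\top \Sigma^{-1}(x-\mu_i) + \ln P(\omega_i)
       = \underbrace{-\tfrac{1}{2}x^\top \Sigma^{-1} x}_{\text{same for both classes}}
         + \mu_i^\top \Sigma^{-1} x
         - \tfrac{1}{2}\mu_i^\top \Sigma^{-1}\mu_i + \ln P(\omega_i)
```

Since the first term is identical for i = 1, 2, it drops out of g1(x) = g2(x), leaving an equation that is linear in x.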
Given your datasets and parameters, you can follow these steps:
- Define the Discriminant Functions:
For each class i, the linear discriminant function can be expressed as g_i(x) = W_i' * x + W_i0, where W_i = inv(S_i) * u_i and W_i0 = -0.5 * u_i' * inv(S_i) * u_i + log(Pr_i). Here u_i is the class mean, S_i the covariance matrix, and Pr_i the prior probability.
- Calculate the Coefficients:
Since your covariance matrices are identity matrices, each inverse is also the identity, so the coefficients simplify to W_i = u_i and W_i0 = -0.5 * u_i' * u_i + log(Pr_i).
- Find the Decision Boundary:
The decision boundary is found by setting g1(x) = g2(x), which simplifies to solving (W1 - W2)' * x + (W10 - W20) = 0.
- Plot the Decision Boundary:
Solve the above equation for x2 in terms of x1 to get the equation of a line, and plot this line. Here is the updated code:
% Example parameters -- replace with your own means, covariances, and priors
u1 = [1; 1];  u2 = [3; 3];       % class means (column vectors)
s1 = eye(2);  s2 = eye(2);       % identity covariance matrices
Pr1 = 0.5;    Pr2 = 0.5;         % prior probabilities
r1 = mvnrnd(u1', s1, 500);       % mvnrnd expects a row-vector mean
r2 = mvnrnd(u2', s2, 500);
plot(r1(:,1), r1(:,2), '+r');
hold on;                          % keep all plots on the same axes
plot(r2(:,1), r2(:,2), '+b');
% Discriminant coefficients: Wi = inv(si)*ui, Wi0 = -0.5*ui'*inv(si)*ui + log(Pri)
W1 = s1 \ u1;
W2 = s2 \ u2;
W10 = -0.5 * (u1' * (s1 \ u1)) + log(Pr1);
W20 = -0.5 * (u2' * (s2 \ u2)) + log(Pr2);
W_diff = W1 - W2;
W0_diff = W10 - W20;
% Boundary: W_diff(1)*x1 + W_diff(2)*x2 + W0_diff = 0, solved for x2
x1_vals = linspace(min([r1(:,1); r2(:,1)]), max([r1(:,1); r2(:,1)]), 100);
x2_vals = -(W_diff(1)/W_diff(2)) * x1_vals - (W0_diff/W_diff(2));
plot(x1_vals, x2_vals, '-k', 'LineWidth', 2);
title('Linear Decision Boundary');
legend('Class 1', 'Class 2', 'Decision Boundary');
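As a sanity check, with hypothetical example values u1 = [1; 1], u2 = [3; 3], identity covariances, and equal priors Pr1 = Pr2 = 0.5 (substitute your own), the coefficients work out to:

```latex
W_1 = \mu_1 = \begin{bmatrix}1\\1\end{bmatrix},\quad
W_2 = \mu_2 = \begin{bmatrix}3\\3\end{bmatrix},\quad
W_{10} = -\tfrac{1}{2}(1^2+1^2) + \ln 0.5 = -1 + \ln 0.5,\quad
W_{20} = -\tfrac{1}{2}(3^2+3^2) + \ln 0.5 = -9 + \ln 0.5
```

Setting (W1 - W2)' * x + (W10 - W20) = 0 gives -2*x1 - 2*x2 + 8 = 0, i.e., x1 + x2 = 4: the perpendicular bisector of the segment joining the two means, exactly as expected for equal covariances and equal priors.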
The output of this code shows the two scatter clusters with the separating line drawn between them.
I hope this helps!