knnsearch

Find k-nearest neighbors using input data

Syntax

Idx = knnsearch(X,Y)
Idx = knnsearch(X,Y,Name,Value)
[Idx,D] = knnsearch(___)

Description

Idx = knnsearch(X,Y) finds the nearest neighbor in X for each query point in Y and returns the indices of the nearest neighbors in Idx, a column vector. Idx has the same number of rows as Y.

Idx = knnsearch(X,Y,Name,Value) returns Idx with additional options specified using one or more name-value pair arguments. For example, you can specify the number of nearest neighbors to search for and the distance metric used in the search.

[Idx,D] = knnsearch(___) additionally returns the matrix D, using any of the input arguments in the previous syntaxes. D contains the distances between each observation in Y and the corresponding closest observations in X.

Examples

Find the patients in the hospital data set that most closely resemble the patients in Y, according to age and weight.

Load the hospital data set.

load hospital;
X = [hospital.Age hospital.Weight];
Y = [20 162; 30 169; 40 168; 50 170; 60 171];   % New patients

Perform a knnsearch between X and Y to find indices of nearest neighbors.

Idx = knnsearch(X,Y);

Find the patients in X closest in age and weight to those in Y.

X(Idx,:)
ans = 5×2

    25   171
    25   171
    39   164
    49   170
    50   172

Find the 10 nearest neighbors in X to each point in Y, first using the Minkowski distance metric and then using the Chebychev distance metric.

Load Fisher's iris data set.

load fisheriris
X = meas(:,3:4);    % Measurements of original flowers
Y = [5 1.45;6 2;2.75 .75];  % New flower data

Perform a knnsearch between X and the query points Y using Minkowski and Chebychev distance metrics.

[mIdx,mD] = knnsearch(X,Y,'K',10,'Distance','minkowski','P',5);
[cIdx,cD] = knnsearch(X,Y,'K',10,'Distance','chebychev');

Visualize the results of the two nearest neighbor searches. Plot the training data. Plot the query points with the marker X. Use circles to denote the Minkowski nearest neighbors. Use pentagrams to denote the Chebychev nearest neighbors.

gscatter(X(:,1),X(:,2),species)
line(Y(:,1),Y(:,2),'Marker','x','Color','k',...
   'Markersize',10,'Linewidth',2,'Linestyle','none')
line(X(mIdx,1),X(mIdx,2),'Color',[.5 .5 .5],'Marker','o',...
   'Linestyle','none','Markersize',10)
line(X(cIdx,1),X(cIdx,2),'Color',[.5 .5 .5],'Marker','p',...
   'Linestyle','none','Markersize',10)
legend('setosa','versicolor','virginica','query point',...
   'minkowski','chebychev','Location','best')

Input Arguments

Input data, specified as a numeric matrix. Rows of X correspond to observations, and columns correspond to variables.

Data Types: single | double

Query points, specified as a numeric matrix. Rows of Y correspond to observations, and columns correspond to variables. Y must have the same number of columns as X.

Data Types: single | double

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: knnsearch(X,Y,'K',10,'IncludeTies',true,'Distance','cityblock') searches for 10 nearest neighbors, including ties and using the city block distance.

Number of nearest neighbors to find in X for each point in Y, specified as the comma-separated pair consisting of 'K' and a positive integer.

Example: 'K',10

Data Types: single | double

Flag to include all nearest neighbors that have the same distance from query points, specified as the comma-separated pair consisting of 'IncludeTies' and false (0) or true (1).

If 'IncludeTies' is false, then knnsearch chooses the observation with the smallest index among the observations that have the same distance from a query point.

If 'IncludeTies' is true, then:

  • knnsearch includes all nearest neighbors whose distances are equal to the kth smallest distance in the output arguments. To specify k, use the 'K' name-value pair argument.

  • Idx and D are m-by-1 cell arrays such that each cell contains a vector of at least k indices and distances, respectively. Each vector in D contains distances arranged in ascending order. Each row in Idx contains the indices of the nearest neighbors corresponding to the smallest distances in D.

Example: 'IncludeTies',true
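
To make this concrete, here is a minimal sketch (small synthetic data, chosen so that two observations tie for nearest neighbor) of how the output type changes when ties are included:

% Two training points, rows 1 and 2, are both at distance 1 from the query.
X = [0 0; 2 0; 0 3; 4 4];
Y = [1 0];

% Default ('IncludeTies',false): numeric output, smallest index wins the tie.
knnsearch(X,Y,'K',1)                       % returns 1

% 'IncludeTies',true: cell array output that can hold more than K entries.
[Idx,D] = knnsearch(X,Y,'K',1,'IncludeTies',true);
Idx{1}                                     % indices of both tied observations
D{1}                                       % [1 1]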

Nearest neighbor search method, specified as the comma-separated pair consisting of 'NSMethod' and one of these values.

  • 'kdtree' — Creates and uses a Kd-tree to find nearest neighbors. 'kdtree' is the default value when the number of columns in X is less than or equal to 10, X is not sparse, and the distance metric is 'euclidean', 'cityblock', 'chebychev', or 'minkowski'. Otherwise, the default value is 'exhaustive'.

    The value 'kdtree' is valid only when the distance metric is one of the four metrics noted above.

  • 'exhaustive' — Uses the exhaustive search algorithm by computing the distance values from all the points in X to each point in Y.

Example: 'NSMethod','exhaustive'
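
As an illustrative sketch (synthetic data; the parameter choices are arbitrary), the two methods return the same neighbors when the distance metric supports a Kd-tree:

rng(1)                                       % for reproducibility
X = rand(100,3);
Y = rand(5,3);

IdxKd = knnsearch(X,Y,'K',4,'NSMethod','kdtree');
IdxEx = knnsearch(X,Y,'K',4,'NSMethod','exhaustive');

isequal(IdxKd,IdxEx)                         % expected: logical 1 (true)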

Distance metric knnsearch uses, specified as the comma-separated pair consisting of 'Distance' and one of the values in this table or a function handle.

Value            Description
'euclidean'      Euclidean distance.
'seuclidean'     Standardized Euclidean distance. Each coordinate difference between rows in X and the query matrix Y is scaled by dividing by the corresponding element of the standard deviation computed from X. To specify another scaling, use the 'Scale' name-value pair argument.
'cityblock'      City block distance.
'chebychev'      Chebychev distance (maximum coordinate difference).
'minkowski'      Minkowski distance. The default exponent is 2. To specify a different exponent, use the 'P' name-value pair argument.
'mahalanobis'    Mahalanobis distance, computed using a positive definite covariance matrix. To change the value of the covariance matrix, use the 'Cov' name-value pair argument.
'cosine'         One minus the cosine of the included angle between observations (treated as vectors).
'correlation'    One minus the sample linear correlation between observations (treated as sequences of values).
'spearman'       One minus the sample Spearman's rank correlation between observations (treated as sequences of values).
'hamming'        Hamming distance, which is the percentage of coordinates that differ.
'jaccard'        One minus the Jaccard coefficient, which is the percentage of nonzero coordinates that differ.

You can also specify a function handle for a custom distance metric by using @ (for example, @distfun); a sketch of one such function appears after this argument description. A custom distance function must:

  • Have the form function D2 = distfun(ZI,ZJ).

  • Take as arguments:

    • A 1-by-n vector ZI containing a single row from X or from the query points Y.

    • An m2-by-n matrix ZJ containing multiple rows of X or Y.

  • Return an m2-by-1 vector of distances D2, whose jth element is the distance between the observations ZI and ZJ(j,:).

For more information, see Distance Metrics.

Example: 'Distance','chebychev'
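
The following sketch shows one way to satisfy the requirements above. The data, the weight vector w, and the name weightedEuclidean are illustrative assumptions, not part of the documented interface:

rng(0)
X = rand(50,2);
Y = rand(3,2);
w = [1 0.5];                            % hypothetical per-column weights

% ZI is 1-by-n, ZJ is m2-by-n, and the result is m2-by-1.
% (The implicit expansion in ZJ - ZI requires R2016b or later.)
weightedEuclidean = @(ZI,ZJ) sqrt(sum(((ZJ - ZI).^2) .* w, 2));

Idx = knnsearch(X,Y,'K',3,'Distance',weightedEuclidean);

Because the handle receives many rows at once in ZJ, keeping its body vectorized over those rows (as above) avoids an inner loop.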

Exponent for the Minkowski distance metric, specified as the comma-separated pair consisting of 'P' and a positive scalar. This argument is valid only if 'Distance' is 'minkowski'.

Example: 'P',4

Data Types: single | double

Covariance matrix for the Mahalanobis distance metric, specified as the comma-separated pair consisting of 'Cov' and a positive definite matrix. This argument is valid only when 'Distance' is 'mahalanobis'.

Example: 'Cov',eye(4)

Data Types: single | double
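
A minimal sketch (synthetic data; using the sample covariance of X is one reasonable choice, not a requirement):

rng(2)
X = randn(200,4);
Y = randn(3,4);

C = cov(X);                                  % 4-by-4 and positive definite here
Idx = knnsearch(X,Y,'Distance','mahalanobis','Cov',C);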

Scale parameter value for the standardized Euclidean distance metric, specified as the comma-separated pair consisting of 'Scale' and a nonnegative numeric vector. 'Scale' has length equal to the number of columns in X. When knnsearch computes the standardized Euclidean distance, each coordinate of X is scaled by the corresponding element of 'Scale', as is each query point. This argument is valid only when 'Distance' is 'seuclidean'.

Example: 'Scale',quantile(X,0.75) - quantile(X,0.25)

Data Types: single | double
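
A minimal sketch (synthetic data) that pairs 'seuclidean' with the interquartile-range scaling shown in the example above, as a robust alternative to the default standard-deviation scaling:

rng(3)
X = randn(100,3);
Y = randn(4,3);

s = quantile(X,0.75) - quantile(X,0.25);     % 1-by-3 nonnegative vector
Idx = knnsearch(X,Y,'K',2,'Distance','seuclidean','Scale',s);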

Maximum number of data points in the leaf node of the Kd-tree, specified as the comma-separated pair consisting of 'BucketSize' and a positive integer. This argument is valid only when 'NSMethod' is 'kdtree'.

Example: 'BucketSize',20

Data Types: single | double

Output Arguments

Input data indices of the nearest neighbors, returned as a numeric matrix or cell array of numeric vectors.

  • If you do not specify 'IncludeTies' (false by default), then Idx is an m-by-k numeric matrix, where m is the number of rows in Y and k is the number of nearest neighbors searched for. Idx(j,i) indicates that X(Idx(j,i),:) is the observation with the ith smallest distance to the query point Y(j,:).

  • If you specify 'IncludeTies',true, then Idx is an m-by-1 cell array such that cell j (Idx{j}) contains a vector of at least k indices of the closest observations in X to the query point Y(j,:). The knnsearch function arranges the elements of the vectors in ascending order by distance.

Distances of the nearest neighbors to the query points, returned as a numeric matrix or cell array of numeric vectors.

  • If you do not specify 'IncludeTies' (false by default), then D is an m-by-k numeric matrix, where m is the number of rows in Y and k is the number of nearest neighbors searched for. D(j,i) is the distance between X(Idx(j,i),:) and Y(j,:) with respect to the distance metric, and it represents the ith smallest distance.

  • If you specify 'IncludeTies',true, then D is an m-by-1 cell array such that cell j (D{j}) contains a vector of at least k distances of the closest observations in X to the query point Y(j,:). The knnsearch function arranges the elements of the vectors in ascending order by distance.
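
To make the shapes concrete, a minimal sketch (synthetic data) with m = 3 query points and 'K',2:

rng(4)
X = rand(20,2);
Y = rand(3,2);

[Idx,D] = knnsearch(X,Y,'K',2);
size(Idx)               % [3 2]: one row per query point, one column per neighbor
size(D)                 % [3 2]: D(j,i) is the distance from Y(j,:) to X(Idx(j,i),:)
all(D(:,1) <= D(:,2))   % each row of D is in ascending order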

Tips

  • For a fixed positive integer k, knnsearch finds the k points in X that are nearest to each point in Y. To find all points in X within a fixed distance of each point in Y, use rangesearch instead (see the sketch after these tips).

  • knnsearch does not save a search object. To create a search object, use createns.
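
A brief sketch (synthetic data; the radius 0.2 is arbitrary) contrasting the two functions:

rng(5)
X = rand(100,2);
Y = rand(2,2);

IdxK = knnsearch(X,Y,'K',3);            % 2-by-3 numeric matrix: exactly 3 neighbors per query
[IdxR,DR] = rangesearch(X,Y,0.2);       % 2-by-1 cell arrays: all neighbors within radius 0.2
numel(IdxR{1})                          % count of points within 0.2 of Y(1,:)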

Algorithms

For information on a specific nearest neighbor search algorithm, see k-Nearest Neighbor Search and Radius Search.

Alternative Functionality

If you set the knnsearch function's 'NSMethod' name-value pair argument to the appropriate value ('exhaustive' for an exhaustive search algorithm or 'kdtree' for a Kd-tree algorithm), then the search results are equivalent to the results obtained by conducting a distance search using the knnsearch object function. Unlike the knnsearch function, the knnsearch object function requires an ExhaustiveSearcher or a KDTreeSearcher model object.
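
A minimal sketch of that equivalence (synthetic data; the parameter choices are arbitrary):

rng(6)
X = rand(100,3);
Y = rand(4,3);

IdxFcn = knnsearch(X,Y,'K',5,'NSMethod','kdtree');   % knnsearch function

Mdl = createns(X,'NSMethod','kdtree');               % KDTreeSearcher model object
IdxObj = knnsearch(Mdl,Y,'K',5);                     % knnsearch object function

isequal(IdxFcn,IdxObj)                               % expected: logical 1 (true)

Creating the searcher once with createns and reusing it is the better fit when you query the same X repeatedly.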

References

[1] Friedman, J. H., J. L. Bentley, and R. A. Finkel. “An Algorithm for Finding Best Matches in Logarithmic Expected Time.” ACM Transactions on Mathematical Software 3, no. 3 (1977): 209–226.

Introduced in R2010a
