# knnsearch

Find k-nearest neighbors using searcher object

## Syntax

``Idx = knnsearch(Mdl,Y)``
``Idx = knnsearch(Mdl,Y,Name,Value)``
``Idx = knnsearch(___)`` and ``[Idx,D] = knnsearch(___)`` syntax: ``[Idx,D] = knnsearch(___)``

## Description


`Idx = knnsearch(Mdl,Y)` searches for the nearest neighbor (that is, the closest point, row, or observation) in `Mdl.X` to each point (row or observation) in the query data `Y` using an exhaustive search or a Kd-tree. `knnsearch` returns `Idx`, a column vector of the indices in `Mdl.X` representing the nearest neighbors.


`Idx = knnsearch(Mdl,Y,Name,Value)` returns the indices of the closest points in `Mdl.X` to `Y` with additional options specified by one or more `Name,Value` pair arguments. For example, you can specify the number of nearest neighbors to search for and a distance metric different from the one stored in `Mdl.Distance`. You can also specify which action to take if the closest distances are tied.


`[Idx,D] = knnsearch(___)` additionally returns the matrix `D`, using any of the input arguments in the previous syntaxes. `D` contains the distances between each observation in `Y` and the corresponding closest observations in `Mdl.X`. By default, the function arranges the columns of `D` in ascending order by closeness, with respect to the distance metric.

## Examples


### Search for Nearest Neighbors Using a Kd-tree and an Exhaustive Search

`knnsearch` accepts `ExhaustiveSearcher` or `KDTreeSearcher` model objects to search the training data for the nearest neighbors to the query data. An `ExhaustiveSearcher` model invokes the exhaustive searcher algorithm, and a `KDTreeSearcher` model defines a Kd-tree, which `knnsearch` uses to search for nearest neighbors.

Load Fisher's iris data set. Randomly reserve five observations from the data for query data.

```
load fisheriris
rng(1); % For reproducibility
n = size(meas,1);
idx = randsample(n,5);
X = meas(~ismember(1:n,idx),:); % Training data
Y = meas(idx,:);                % Query data
```

The variable `meas` contains four predictors.

Grow a default four-dimensional Kd-tree.

`MdlKDT = KDTreeSearcher(X)`
```
MdlKDT = 

  KDTreeSearcher with properties:

       BucketSize: 50
         Distance: 'euclidean'
    DistParameter: []
                X: [145x4 double]
```

`MdlKDT` is a `KDTreeSearcher` model object. You can alter its writable properties using dot notation.

Prepare an exhaustive nearest neighbor searcher.

`MdlES = ExhaustiveSearcher(X)`
```
MdlES = 

  ExhaustiveSearcher with properties:

         Distance: 'euclidean'
    DistParameter: []
                X: [145x4 double]
```

`MdlES` is an `ExhaustiveSearcher` model object. It contains the options, such as the distance metric, to use to find nearest neighbors.

Alternatively, you can grow a Kd-tree or prepare an exhaustive nearest neighbor searcher using `createns`.
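For instance, a minimal sketch using `createns` with the same training data (the variable names `MdlKDT2` and `MdlES2` are illustrative):

```
MdlKDT2 = createns(X,'NSMethod','kdtree');     % equivalent to KDTreeSearcher(X)
MdlES2  = createns(X,'NSMethod','exhaustive'); % equivalent to ExhaustiveSearcher(X)
```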

Search the training data for the nearest neighbor indices that correspond to each query observation. Conduct both types of searches using the default settings. By default, the number of neighbors to search for per query observation is `1`.

```
IdxKDT = knnsearch(MdlKDT,Y);
IdxES = knnsearch(MdlES,Y);
[IdxKDT IdxES]
```
```
ans = 5×2

    17    17
     6     6
     1     1
    89    89
   124   124
```

In this case, the results of the search are the same.

### Search for Nearest Neighbors Using createns

Grow a Kd-tree nearest neighbor searcher object by using the `createns` function. Pass the object and query data to the `knnsearch` function to find k-nearest neighbors.

Load Fisher's iris data set.

`load fisheriris`

Remove five irises randomly from the predictor data to use as a query set.

```
rng(1) % For reproducibility
n = size(meas,1);           % Sample size
qIdx = randsample(n,5);     % Indices of query data
tIdx = ~ismember(1:n,qIdx); % Indices of training data
Q = meas(qIdx,:);
X = meas(tIdx,:);
```

Grow a four-dimensional Kd-tree using the training data. Specify the Minkowski distance for finding nearest neighbors.

`Mdl = createns(X,'Distance','minkowski')`
```
Mdl = 

  KDTreeSearcher with properties:

       BucketSize: 50
         Distance: 'minkowski'
    DistParameter: 2
                X: [145x4 double]
```

Because `X` has four columns and the distance metric is Minkowski, `createns` creates a `KDTreeSearcher` model object by default. The Minkowski distance exponent is `2` by default.
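To request an exhaustive searcher with the Minkowski metric instead, you can override this default (a brief sketch; the variable name `MdlEx` is illustrative):

```
MdlEx = createns(X,'Distance','minkowski','NSMethod','exhaustive'); % ExhaustiveSearcher model
```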

Find the indices of the training data (`Mdl.X`) that are the two nearest neighbors of each point in the query data (`Q`).

`IdxNN = knnsearch(Mdl,Q,'K',2)`
```
IdxNN = 5×2

    17     4
     6     2
     1    12
    89    66
   124   100
```

Each row of `IdxNN` corresponds to a query data observation, and the column order corresponds to the order of the nearest neighbors, with respect to ascending distance. For example, based on the Minkowski distance, the second nearest neighbor of `Q(3,:)` is `X(12,:)`.
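To retrieve the neighbor observations themselves, index into the training data with the returned indices (a short sketch reusing the variables above; `secondNN` is an illustrative name):

```
secondNN = Mdl.X(IdxNN(3,2),:); % second nearest neighbor of Q(3,:), that is, X(12,:)
```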

### Include Ties in Nearest Neighbor Search

Load Fisher's iris data set.

`load fisheriris`

Remove five irises randomly from the predictor data to use as a query set.

```
rng(4); % For reproducibility
n = size(meas,1);       % Sample size
qIdx = randsample(n,5); % Indices of query data
X = meas(~ismember(1:n,qIdx),:);
Y = meas(qIdx,:);
```

Grow a four-dimensional Kd-tree using the training data. Specify the Minkowski distance for finding nearest neighbors.

`Mdl = KDTreeSearcher(X);`

`Mdl` is a `KDTreeSearcher` model object. By default, the distance metric for finding nearest neighbors is the Euclidean metric.

Find the indices of the training data (`X`) that are the seven nearest neighbors of each point in the query data (`Y`).

`[Idx,D] = knnsearch(Mdl,Y,'K',7,'IncludeTies',true);`

`Idx` and `D` are five-element cell arrays of vectors, with each vector having at least seven elements.

Display the lengths of the vectors in `Idx`.

`cellfun('length',Idx)`
```
ans = 5×1

     8
     7
     7
     7
     7
```

Because cell `1` contains a vector with length greater than k = 7, query observation 1 (`Y(1,:)`) is equally close to at least two observations in `X`.

Display the indices of the nearest neighbors to `Y(1,:)` and their distances.

`nn5 = Idx{1}`
```
nn5 = 1×8

    91    98    67    69    71    93    88    95
```
`nn5d = D{1}`
```
nn5d = 1×8

    0.1414    0.2646    0.2828    0.3000    0.3464    0.3742    0.3873    0.3873
```

Training observations `88` and `95` are 0.3873 cm away from query observation `1`.
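You can extract those tied observations directly (a sketch reusing the variables above; `tied` is an illustrative name):

```
tied = X(Idx{1}(7:8),:) % the two training observations tied at the 7th smallest distance
```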

### Compare Nearest Neighbors Found Using Different Distance Metrics

Train a `KDTreeSearcher` model, and compare the k-nearest neighbors of query data found using two different distance metrics.

Load Fisher's iris data set. Consider the petal measurements as predictors.

```
load fisheriris
X = meas(:,3:4); % Predictors
Y = species;     % Response
```

Train a `KDTreeSearcher` model object by using the predictors. Specify the Minkowski distance with exponent 5.

`KDTreeMdl = KDTreeSearcher(X,'Distance','minkowski','P',5)`
```
KDTreeMdl = 

  KDTreeSearcher with properties:

       BucketSize: 50
         Distance: 'minkowski'
    DistParameter: 5
                X: [150x2 double]
```

Find the 10 nearest neighbors from `X` to a query point (`newpoint`), first using Minkowski then Chebychev distance metrics. The query point must have the same column dimension as the data used to train the model.

```
newpoint = [5 1.45];
[IdxMk,DMk] = knnsearch(KDTreeMdl,newpoint,'k',10);
[IdxCb,DCb] = knnsearch(KDTreeMdl,newpoint,'k',10,'Distance','chebychev');
```

`IdxMk` and `IdxCb` are 1-by-10 matrices containing the row indices of `X` corresponding to the nearest neighbors to `newpoint` using Minkowski and Chebychev distances, respectively. Element (1,1) is the nearest, element (1,2) is the next nearest, and so on.

Plot the training data, query point, and nearest neighbors.

```
figure;
gscatter(X(:,1),X(:,2),Y);
title('Fisher''s Iris Data -- Nearest Neighbors');
xlabel('Petal length (cm)');
ylabel('Petal width (cm)');
hold on
% Query point
plot(newpoint(1),newpoint(2),'kx','MarkerSize',10,'LineWidth',2);
% Minkowski nearest neighbors
plot(X(IdxMk,1),X(IdxMk,2),'o','Color',[.5 .5 .5],'MarkerSize',10);
% Chebychev nearest neighbors
plot(X(IdxCb,1),X(IdxCb,2),'p','Color',[.5 .5 .5],'MarkerSize',10);
legend('setosa','versicolor','virginica','query point',...
    'minkowski','chebychev','Location','Best');
```

Zoom in on the points of interest.

```
h = gca; % Get current axis handle.
h.XLim = [4.5 5.5];
h.YLim = [1 2];
axis square;
```

Several observations have equal petal measurements, which is why only eight of the ten nearest neighbors appear as distinct points in the plot.

## Input Arguments


Nearest neighbor searcher, specified as an `ExhaustiveSearcher` or `KDTreeSearcher` model object.

If `Mdl` is an `ExhaustiveSearcher` model, then `knnsearch` searches for nearest neighbors using an exhaustive search. Otherwise, `knnsearch` uses the grown Kd-tree to search for nearest neighbors.

Query data, specified as a numeric matrix.

`Y` is an m-by-K matrix. Rows of `Y` correspond to observations (i.e., examples), and columns correspond to predictors (i.e., variables or features). `Y` must have the same number of columns as the training data stored in `Mdl.X`.

Data Types: `single` | `double`

### Name-Value Arguments

Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose `Name` in quotes.

Example: `'K',2,'Distance','minkowski'` specifies to find the two nearest neighbors of `Mdl.X` to each point in `Y` and to use the Minkowski distance metric.

**For Both Nearest Neighbor Searchers**


Distance metric used to find neighbors of the training data to the query observations, specified as one of the values in this table or a function handle.

For both types of nearest neighbor searchers, `knnsearch` supports these distance metrics.

| Value | Description |
| --- | --- |
| `'chebychev'` | Chebychev distance (maximum coordinate difference) |
| `'cityblock'` | City block distance |
| `'euclidean'` | Euclidean distance |
| `'minkowski'` | Minkowski distance. The default exponent is 2. To specify a different exponent, use the `'P'` name-value argument. |

If `Mdl` is an `ExhaustiveSearcher` model object, then `knnsearch` also supports these distance metrics.

| Value | Description |
| --- | --- |
| `'correlation'` | One minus the sample linear correlation between observations (treated as sequences of values) |
| `'cosine'` | One minus the cosine of the included angle between observations (treated as row vectors) |
| `'fasteuclidean'` | Euclidean distance computed by using an alternative algorithm that saves time when the number of predictors is at least 10. In some cases, this faster algorithm can reduce accuracy. This distance metric is available only when `NSMethod` is `'exhaustive'`. Algorithms starting with `'fast'` do not support sparse data. For details, see Algorithms. |
| `'fastseuclidean'` | Standardized Euclidean distance computed by using an alternative algorithm that saves time when the number of predictors is at least 10. In some cases, this faster algorithm can reduce accuracy. This distance metric is available only when `NSMethod` is `'exhaustive'`. Algorithms starting with `'fast'` do not support sparse data. For details, see Algorithms. |
| `'hamming'` | Hamming distance, which is the percentage of coordinates that differ |
| `'jaccard'` | One minus the Jaccard coefficient, which is the percentage of nonzero coordinates that differ |
| `'mahalanobis'` | Mahalanobis distance, computed using a positive definite covariance matrix. To change the value of the covariance matrix, use the `'Cov'` name-value argument. |
| `'seuclidean'` | Standardized Euclidean distance. Each coordinate difference between the rows in `X` and the query matrix `Y` is scaled by dividing by the corresponding element of the standard deviation computed from `X`. To specify a different scaling, use the `'Scale'` name-value argument. |
| `'spearman'` | One minus the sample Spearman's rank correlation between observations (treated as sequences of values) |

If `Mdl` is an `ExhaustiveSearcher` model object, then you can also specify a function handle for a custom distance metric by using `@` (for example, `@distfun`). The custom distance function must:

• Have the form `function D2 = distfun(ZI,ZJ)`.

• Take as arguments:

• A 1-by-K vector `ZI` containing a single row from `Mdl.X` or `Y`, where K is the number of columns of `Mdl.X`.

• An m-by-K matrix `ZJ` containing multiple rows of `Mdl.X` or `Y`, where m is a positive integer.

• Return an m-by-1 vector of distances `D2`, where `D2(j)` is the distance between the observations `ZI` and `ZJ(j,:)`.

For more details, see Distance Metrics.
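For illustration, a minimal custom distance function of the required form might compute a weighted Euclidean distance (the weights here are an illustrative choice, not part of `knnsearch`):

```
function D2 = distfun(ZI,ZJ)
% Weighted Euclidean distance between the 1-by-K row ZI and each row of
% the m-by-K matrix ZJ. Returns D2, an m-by-1 vector of distances.
w = ones(1,size(ZI,2));            % illustrative weights, one per predictor
D2 = sqrt(sum(w.*(ZI - ZJ).^2,2)); % implicit expansion of ZI across rows of ZJ
end
```

You can then pass the handle when `Mdl` is an `ExhaustiveSearcher` model, for example `Idx = knnsearch(Mdl,Y,'Distance',@distfun)`.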

Example: `'Distance','minkowski'`

Data Types: `char` | `string` | `function_handle`

Flag to include nearest neighbors that have the same distance from query observations, specified as the comma-separated pair consisting of `'IncludeTies'` and `false` (`0`) or `true` (`1`).

If `IncludeTies` is `true`, then:

• `knnsearch` includes all nearest neighbors whose distances are equal to the kth smallest distance in the output arguments, where k is the number of searched nearest neighbors specified by the `'K'` name-value pair argument.

• `Idx` and `D` are m-by-`1` cell arrays such that each cell contains a vector of at least k indices or distances, respectively. Each vector in `D` contains distances arranged in ascending order, and each cell in `Idx` contains the indices of the nearest neighbors corresponding to the distances in `D`.

If `IncludeTies` is `false`, then `knnsearch` chooses the observation with the smallest index among the observations that have the same distance from a query point.

Example: `'IncludeTies',true`

Number of nearest neighbors to search for in the training data per query observation, specified as the comma-separated pair consisting of `'K'` and a positive integer.

Example: `'K',2`

Data Types: `single` | `double`

Exponent for the Minkowski distance metric, specified as the comma-separated pair consisting of `'P'` and a positive scalar. This argument is valid only if `'Distance'` is `'minkowski'`.

Example: `'P',3`

Data Types: `single` | `double`

**For Kd-Tree Nearest Neighbor Searchers**


Flag to sort returned indices according to distance, specified as the comma-separated pair consisting of `'SortIndices'` and either `true` (`1`) or `false` (`0`).

For faster performance, you can set `SortIndices` to `false` when the following are true:

• `Y` contains many observations that have many nearest neighbors.

• `Mdl` is a `KDTreeSearcher` model object.

• `IncludeTies` is `false`.

In this case, `knnsearch` returns the indices of the nearest neighbors in no particular order. When `SortIndices` is `true`, the function arranges the nearest neighbor indices in ascending order by distance.

`SortIndices` is `true` by default. When `Mdl` is an `ExhaustiveSearcher` model object or `IncludeTies` is `true`, the function always sorts the indices.

Example: `'SortIndices',false`

Data Types: `logical`

**For Exhaustive Nearest Neighbor Searchers**


Covariance matrix for the Mahalanobis distance metric, specified as the comma-separated pair consisting of `'Cov'` and a positive definite matrix. `Cov` is a K-by-K matrix, where K is the number of columns of `Mdl.X`. If you specify `Cov` and do not specify `'Distance','mahalanobis'`, then `knnsearch` returns an error message.

Example: `'Cov',eye(3)`

Data Types: `single` | `double`

Scale parameter value for the standardized Euclidean distance metric, specified as the comma-separated pair consisting of `'Scale'` and a nonnegative numeric vector. `Scale` has length K, where K is the number of columns of `Mdl.X`.

The software scales each difference between the training and query data using the corresponding element of `Scale`. If you specify `Scale` and do not specify `'Distance','seuclidean'`, then `knnsearch` returns an error message.

Example: `'Scale',quantile(Mdl.X,0.75) - quantile(Mdl.X,0.25)`

Data Types: `single` | `double`

Note

If you specify `'Distance'`, `'Cov'`, `'P'`, or `'Scale'`, then `Mdl.Distance` and `Mdl.DistParameter` do not change value.

## Output Arguments


Training data indices of nearest neighbors, returned as a numeric matrix or cell array of numeric vectors.

• If you do not specify `IncludeTies` (`false` by default), then `Idx` is an m-by-k numeric matrix, where m is the number of rows in `Y` and k is the number of searched nearest neighbors specified by the `'K'` name-value pair argument. `Idx(j,i)` indicates that `Mdl.X(Idx(j,i),:)` is one of the k closest observations in `Mdl.X` to the query observation `Y(j,:)`.

• If you specify `'IncludeTies',true`, then `Idx` is an m-by-`1` cell array such that cell `j` (`Idx{j}`) contains a vector of at least k indices of the closest observations in `Mdl.X` to the query observation `Y(j,:)`.

If `SortIndices` is `true`, then `knnsearch` arranges the indices in ascending order by distance.

Distances of the nearest neighbors to the query data, returned as a numeric matrix or cell array of numeric vectors.

• If you do not specify `IncludeTies` (`false` by default), then `D` is an m-by-k numeric matrix, where m is the number of rows in `Y` and k is the number of searched nearest neighbors specified by the `'K'` name-value pair argument. `D(j,i)` is the distance between `Mdl.X(Idx(j,i),:)` and the query observation `Y(j,:)` with respect to the distance metric.

• If you specify `'IncludeTies',true`, then `D` is an m-by-`1` cell array such that cell `j` (`D{j}`) contains a vector of at least k distances of the closest observations in `Mdl.X` to the query observation `Y(j,:)`.

If `SortIndices` is `true`, then `knnsearch` arranges the distances in ascending order.

## Tips

`knnsearch` finds the k (a positive integer) points in `Mdl.X` that are nearest to each point in `Y`. In contrast, `rangesearch` finds all the points in `Mdl.X` that are within distance `r` (a positive scalar) of each point in `Y`.
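A brief sketch of the two calls side by side, reusing a searcher `Mdl` and query matrix `Y` as above (the radius `0.5` is an illustrative value):

```
Idx  = knnsearch(Mdl,Y,'K',3); % 3 nearest neighbors of each row of Y
IdxR = rangesearch(Mdl,Y,0.5); % all neighbors within distance 0.5 of each row of Y
```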

## Algorithms


### Fast Euclidean Distance Algorithm

The values of the `Distance` argument that begin with `'fast'` (such as `'fasteuclidean'` and `'fastseuclidean'`) calculate Euclidean distances using an algorithm that uses extra memory to save computation time. This algorithm is named "Euclidean Distance Matrix Trick" in Albanie [1] and elsewhere. Internal testing shows that the algorithm saves time when the number of predictors is at least 10. Algorithms starting with `'fast'` do not support sparse data.

To find the matrix D of distances between all the points xi and xj, where each xi has n variables, the algorithm computes distance using the final line in the following equations:

$\begin{array}{rcl} {D}_{i,j}^{2} & = & {‖{x}_{i}-{x}_{j}‖}^{2}\\ & = & {\left({x}_{i}-{x}_{j}\right)}^{T}\left({x}_{i}-{x}_{j}\right)\\ & = & {‖{x}_{i}‖}^{2}-2{x}_{i}^{T}{x}_{j}+{‖{x}_{j}‖}^{2}.\end{array}$

The matrix ${x}_{i}^{T}{x}_{j}$ in the last line of the equations is called the Gram matrix. Computing the set of squared distances is faster, but slightly less numerically stable, when you compute and use the Gram matrix instead of computing the squared distances by squaring and summing. For a discussion, see Albanie [1].
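The following is a minimal sketch of the trick (illustrative only, not the toolbox's internal implementation): compute the Gram matrix of cross inner products once, then assemble every squared distance from it.

```
% Squared Euclidean distances between rows of X (n1-by-p) and Y (n2-by-p)
% using D2(i,j) = ||xi||^2 - 2*xi'*yj + ||yj||^2.
X = rand(5,10);
Y = rand(3,10);
G = X*Y';                     % Gram matrix of cross inner products, n1-by-n2
sqX = sum(X.^2,2);            % ||xi||^2 terms, n1-by-1
sqY = sum(Y.^2,2)';           % ||yj||^2 terms, 1-by-n2
D2 = max(sqX - 2*G + sqY, 0); % implicit expansion; clamp round-off negatives
D = sqrt(D2);                 % n1-by-n2 matrix of distances
```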

## References

[1] Albanie, Samuel. Euclidean Distance Matrix Trick. June 2019. Available at https://www.robots.ox.ac.uk/%7Ealbanie/notes/Euclidean_distance_trick.pdf.

## Version History

Introduced in R2010a
