KDTree can do this. The process is almost the same as with cdist, but cdist is much faster than KDTree. And, as pointed out in the comments, cKDTree is faster still:
import numpy as np
from scipy.spatial.distance import cdist
from scipy.spatial import KDTree
from scipy.spatial import cKDTree
import timeit

# Random data
data = np.random.uniform(0., 1., (1000, 2))

def scipy_method():
    # Distance between the array and itself
    dists = cdist(data, data)
    # Sort by distances
    dists.sort()
    # Select the 1st distance, since the zero distance is always 0
    # (distance of a point with itself)
    nn_dist = dists[:, 1]
    return nn_dist

def KDTree_method():
    # You have to create the tree to use this method
    tree = KDTree(data)
    # Then you find the closest two, as the first is the point itself
    dists = tree.query(data, 2)
    nn_dist = dists[0][:, 1]
    return nn_dist

def cKDTree_method():
    tree = cKDTree(data)
    dists = tree.query(data, 2)
    nn_dist = dists[0][:, 1]
    return nn_dist

print(timeit.timeit('cKDTree_method()', number=100, globals=globals()))
print(timeit.timeit('scipy_method()', number=100, globals=globals()))
print(timeit.timeit('KDTree_method()', number=100, globals=globals()))
Output:
0.34952507635557595 7.904083715193579 20.765962179145546
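Despite the speed difference, the two approaches compute the same quantity. A minimal sketch checking that the `cdist` and `cKDTree` nearest-neighbor distances agree (smaller array and a fixed seed here, for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
data = rng.uniform(0., 1., (100, 2))

# cdist route: full pairwise matrix, take the second-smallest per row
d = cdist(data, data)
d.sort()
nn_cdist = d[:, 1]

# cKDTree route: query the two closest (the first is the point itself)
dists, _ = cKDTree(data).query(data, 2)
nn_tree = dists[:, 1]

print(np.allclose(nn_cdist, nn_tree))  # True
```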
Last Updated : 07 Jul, 2022
char[][] array = new char[5][5];
public static void getIndexes(Grid_23722002 grid, Piece_23722002 piece){
    int row = piece.getRow();
    int col = piece.getCol();
    Piece_23722002[][] nhood = new Piece_23722002[8][8];
    Map<Piece_23722002, Integer> unsortedLengthMap = new LinkedHashMap<Piece_23722002, Integer>();
    for (int i = 0; i < grid.getArrayForm().length; i++){
        for (int j = 0; j < grid.getArrayForm()[i].length; j++){
            // Key each entry by the grid cell, not by `piece` itself --
            // otherwise every iteration overwrites the same single map entry
            unsortedLengthMap.put(grid.getArrayForm()[i][j],
                    grid.getArrayForm()[i][j].getLengthToPoint(grid, i, j, piece));
        }
    }
    Map<Piece_23722002, Integer> sortedLengthMap = new LinkedHashMap<Piece_23722002, Integer>();
    unsortedLengthMap.entrySet()
                     .stream()
                     .sorted(Map.Entry.comparingByValue())
                     .forEachOrdered(x -> sortedLengthMap.put(x.getKey(), x.getValue()));
}
public int getLengthToPoint(Grid_23722002 grid, int i, int j, Piece_23722002 piece){
    int piecesRow = piece.getRow();
    int piecesCol = piece.getCol();
    int length = (int) Math.sqrt(Math.pow(grid.getArrayForm()[i][j].getRow() - piecesRow, 2)
                               + Math.pow(grid.getArrayForm()[i][j].getCol() - piecesCol, 2));
    return length;
}
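The grid logic above can be sketched in Python as well. This is a hypothetical stand-in (the `lengths_to_point` helper and the 3x3 grid are made up for illustration, not part of the original classes): compute the truncated Euclidean distance from every cell to a target cell, then sort cells by that distance, mirroring the map-and-sort step in `getIndexes`.

```python
import math

def lengths_to_point(grid, target_row, target_col):
    """Distance from every cell of a 2-D grid to (target_row, target_col),
    using the same truncated Euclidean distance as getLengthToPoint."""
    lengths = {}
    for i, row in enumerate(grid):
        for j, _cell in enumerate(row):
            lengths[(i, j)] = int(math.sqrt((i - target_row) ** 2 + (j - target_col) ** 2))
    return lengths

grid = [[None] * 3 for _ in range(3)]
# Sort cells by distance, as the stream().sorted(...) step does for the map
sorted_cells = sorted(lengths_to_point(grid, 0, 0).items(), key=lambda kv: kv[1])
print(sorted_cells[0])  # ((0, 0), 0) -- the target cell itself is closest
```

Note that casting the square root to an integer truncates, so distinct cells can share the same "length"; keep the float if you need strict ordering.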
Using numpy we can easily find all distances in one line. Before we start using NearestNeighbor, let's create a simple mini-framework to apply NN and visualize the results easily.
X1 = generate_random_points(20, 0, 1)
X2 = generate_random_points(20, 1, 2)
new_point = generate_random_points(1, 0, 2)
plot = init_plot([0, 2], [0, 2])  # [0, 2] x [0, 2]
plot.plot(*X1.T, 'ro', *X2.T, 'bs', *new_point.T, 'g^');
class NearestNeighbor:
    """Nearest Neighbor Classifier"""

    def __init__(self, distance=0):
        """Set distance definition: 0 - L1, 1 - L2"""
        if distance == 0:
            self.distance = np.abs     # absolute value
        elif distance == 1:
            self.distance = np.square  # squared value
        else:
            raise Exception("Distance not defined.")

    def train(self, x, y):
        """Train the classifier (here simply save training data)

        x -- feature vectors (N x D)
        y -- labels (N x 1)
        """
        self.x_train = x
        self.y_train = y

    def predict(self, x):
        """Predict and return labels for each feature vector from x

        x -- feature vectors (N x D)
        """
        predictions = []  # placeholder for N labels

        # loop over all test samples
        for x_test in x:
            # array of distances between current test and all training samples
            distances = np.sum(self.distance(self.x_train - x_test), axis=1)
            # get the closest one
            min_index = np.argmin(distances)
            # add corresponding label
            predictions.append(self.y_train[min_index])

        return predictions
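To see the idea in isolation, here is a functional sketch of the same 1-NN prediction loop with the L1 metric, on a tiny hand-made dataset (the training points and labels are invented for illustration):

```python
import numpy as np

# Minimal 1-NN prediction with an L1 metric -- the same loop predict() wraps
x_train = np.array([[0., 0.], [1., 1.], [0., 1.]])
y_train = np.array([0, 1, 0])
x_test = np.array([[0.9, 0.9], [0.1, 0.0]])

predictions = []
for sample in x_test:
    # L1 distance from this test sample to every training sample
    distances = np.sum(np.abs(x_train - sample), axis=1)
    # label of the closest training sample
    predictions.append(int(y_train[np.argmin(distances)]))

print(predictions)  # [1, 0]
```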
distances = np.sum(self.distance(self.x_train - x_test), axis = 1)
# let's create an array with 5x2 shape
a = np.random.random_sample((5, 2))

# and another array with 1x2 shape
b = np.array([[1., 1.]])

print(a, b, sep="\n\n")
[[0.79036457 0.36571819]
 [0.76743991 0.08439684]
 [0.56876884 0.97967839]
 [0.77020776 0.21238365]
 [0.94235534 0.73884472]]

[[1. 1.]]
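Subtracting these two arrays works because numpy broadcasts the (1, 2) array across every row of the (5, 2) array, which is exactly what makes the one-line distance computation above possible. A small sketch with fixed values (chosen here so the result is easy to check by hand):

```python
import numpy as np

a = np.array([[0.5, 0.5],
              [0.0, 0.0],
              [1.0, 1.0]])
b = np.array([[1., 1.]])

# (3, 2) - (1, 2): b is broadcast across the rows of a
diff = a - b
print(diff.shape)  # (3, 2)

# L1 distance from each row of a to b, in one line
l1 = np.sum(np.abs(a - b), axis=1)
print(l1)  # [1. 2. 0.]
```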