#### [ Calculating the distance between each pair of a set of points ]

So I'm working on simulating a large number of n-dimensional particles, and I need to know the distance between every pair of points. Allowing for some error, and given that the distance isn't relevant at all if it exceeds some threshold, are there any good ways to accomplish this? I'm pretty sure that if I want `dist(A,C)` and already know `dist(A,B)` and `dist(B,C)`, I can bound it by `[|dist(A,B)-dist(B,C)| , dist(A,B)+dist(B,C)]` and then store the results in a sorted array, but I'd like to not reinvent the wheel if there's something better.
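As a sanity check, that interval follows from the triangle inequality (note the absolute value on the lower end). A minimal Python sketch, with `dist` and `bound_via_triangle` as illustrative names of my own:

```python
import math

def dist(p, q):
    """Euclidean distance between two equal-dimension points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def bound_via_triangle(d_ab, d_bc):
    """Interval guaranteed to contain dist(A, C) by the triangle inequality."""
    return abs(d_ab - d_bc), d_ab + d_bc

# 3-4-5 triangle: dist(A,B)=3, dist(B,C)=4, so dist(A,C)=5 lies in [1, 7].
A, B, C = (0.0, 0.0), (3.0, 0.0), (3.0, 4.0)
lo, hi = bound_via_triangle(dist(A, B), dist(B, C))
assert lo <= dist(A, C) <= hi
```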

I don't think the number of dimensions should greatly affect the logic, but maybe for some solutions it will. Thanks in advance.

# Answer 1

If the problem were simply about calculating the distances between all pairs, then it would be an `O(n^2)` problem with no chance of a better solution. However, you say that if the distance is greater than some threshold `D`, you are not interested in it. This opens up the opportunity for a better algorithm.

For example, in the 2D case you can use the sweep-line technique. Sort your points lexicographically, first by `y`, then by `x`. Then sweep the plane with a *stripe* of width `D`, bottom to top. As the stripe moves across the plane, new points enter it through its top edge and exit through its bottom edge. Active points (i.e. points currently inside the stripe) should be kept in an incrementally modifiable linear data structure sorted by their `x` coordinate.

Now, every time a new point enters the stripe, you only have to check the currently active points no farther than `D` to its left and right (measured along the `x` axis). That's all.
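A rough Python sketch of the stripe idea (names are my own; a real implementation would keep the stripe in a balanced tree or skip list rather than a plain sorted list, which makes insertion and eviction here O(n)):

```python
import bisect
import math

def close_pairs_sweep(points, D):
    """All pairs of 2D points within distance D, via a bottom-to-top sweep.

    Points are processed in (y, x) order; `stripe` holds the active points
    (those with y within D of the current point), kept sorted by x so the
    candidate range can be found by binary search.
    """
    pts = sorted(points, key=lambda p: (p[1], p[0]))
    stripe = []   # active points, sorted as (x, y) tuples
    left = 0      # index of the oldest point that may still be active
    pairs = []
    for i, (x, y) in enumerate(pts):
        # Points below the stripe's bottom edge leave the stripe.
        while left < i and pts[left][1] < y - D:
            stripe.remove((pts[left][0], pts[left][1]))
            left += 1
        # Only active points with x in [x - D, x + D] can be close enough.
        lo = bisect.bisect_left(stripe, (x - D, -math.inf))
        hi = bisect.bisect_right(stripe, (x + D, math.inf))
        for qx, qy in stripe[lo:hi]:
            if (x - qx) ** 2 + (y - qy) ** 2 <= D * D:
                pairs.append(((qx, qy), (x, y)))
        bisect.insort(stripe, (x, y))
    return pairs
```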

The purpose of this algorithm (as is typically the case with the sweep-line approach) is to push the practical complexity away from `O(n^2)` and towards `O(m)`, where `m` is the number of interactions we are actually interested in. Of course, the worst-case performance is still `O(n^2)`.

The above applies to the 2-dimensional case. For the n-dimensional case you'll be better off with a different technique. Some sort of space partitioning should work well here, i.e. exploiting the fact that if the distance between two partitions is known to be greater than `D`, then there's no reason to check the points in those partitions against each other.
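The simplest such partitioning that works in any dimension is a uniform grid with cell size `D`: two points closer than `D` must lie in the same cell or in directly adjacent cells, so only neighbouring cells need to be compared. A sketch under that assumption (names are illustrative):

```python
from collections import defaultdict
from itertools import product
import math

def close_pairs_grid(points, D):
    """All pairs of points within distance D, using a uniform grid.

    Cell size D guarantees that two points closer than D lie either in
    the same cell or in directly adjacent cells, in any dimension.
    """
    dim = len(points[0])
    cells = defaultdict(list)
    for p in points:
        cells[tuple(math.floor(c / D) for c in p)].append(p)
    pairs = []
    for cell, members in cells.items():
        for offset in product((-1, 0, 1), repeat=dim):
            neighbour = tuple(c + o for c, o in zip(cell, offset))
            if neighbour < cell or neighbour not in cells:
                continue  # visit each unordered cell pair only once
            for i, p in enumerate(members):
                others = members[i + 1:] if neighbour == cell else cells[neighbour]
                for q in others:
                    if sum((a - b) ** 2 for a, b in zip(p, q)) <= D * D:
                        pairs.append((p, q))
    return pairs
```

The cost is roughly proportional to the number of nearby pairs rather than all `n^2` pairs, provided the points are not all crowded into a few cells.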

# Answer 2

If the distance beyond a certain threshold is not relevant, and this threshold is not too large, there are common techniques to make this more efficient: limit the search for neighbouring points using space-partitioning data structures. Possible options are:

- Binning.
- Trees: quadtrees (2D), kd-trees (any dimension).
- Binning with spatial hashing.

Also, since the distance from point A to point B is the same as the distance from point B to point A, each distance should only be computed once. Thus, you should use a loop like the following:

```
for i from 0 to n-1:
    for j from i+1 to n-1:
        distance(point i, point j)
```
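A runnable version of that loop, sketched in Python (assuming Python 3.8+ for `math.dist`):

```python
import math

def pairwise_distances(points):
    """Distances for each unordered pair, computed exactly once."""
    n = len(points)
    out = {}
    for i in range(n):
        for j in range(i + 1, n):  # j starts at i + 1: no (j, i) duplicate
            out[(i, j)] = math.dist(points[i], points[j])
    return out
```

This yields `n*(n-1)/2` distances instead of `n^2`, halving the work before any spatial pruning is applied.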

Combining these two techniques is very common in n-body simulation, for example, where particles affect each other only if they are close enough. Here are some fun examples of that in 2D: http://forum.openframeworks.cc/index.php?topic=2860.0

Here's an explanation of binning (and spatial hashing): http://www.cs.cornell.edu/~bindel/class/cs5220-f11/notes/spatial.pdf