High-level feedback
Unless you're in a very specific domain (such as heavily-restricted embedded programming), don't write convex optimization loops of your own.
You should write regression and unit tests. I demonstrate some rudimentary tests below.
Never run a pseudo-random test without first setting a known seed.
Your variable names are poorly-chosen: in the context of your test, x isn't actually x, but the hidden source position vector; and y isn't actually y, but the calculated source position vector.
Performance
Don't write scalar-to-scalar numerical code in Python, and don't re-invent vectors; call into a vectorised library like Numpy (you've already suggested this in your comments). The original implementation is very slow: for four detectors it runs in roughly 1-5 seconds, whereas the Numpy/Scipy root-finding approach executes in about one millisecond, a speed-up on the order of 1000x depending on the inputs. The analytic approach can be faster or slower depending on context. I've added rudimentary benchmarks to show these comparisons.
There are multiple reasons for the speed issue: beyond Python being slower than C, you should also choose an algorithm that converges faster, as described below.
The Numpy/Scipy solution can scale in both space and time. In space, the demonstration's big_benchmark shows a solution time of about 40 ms for 100,000 detectors. In time, you would be able to run many consecutive iterations for a small number of detectors. This is essentially the use case you describe as "perform thousands to hundreds of thousands in a reasonable time frame".
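As a rough sketch of the vectorised core being advocated here (the array values and names are illustrative only, not taken from the original code), every per-detector distance is computed in a single Numpy call instead of a Python-level loop over components:

import numpy as np

detectors = np.array((   # n*3 detector positions
    (0.84, 0.76, 0.42),
    (0.26, 0.51, 0.40),
    (0.78, 0.30, 0.48),
    (0.58, 0.91, 0.50),
))
source = np.array((0.28, 0.76, 0.62))  # candidate source position, *3

# One vectorised call replaces the nested scalar arithmetic:
# distances from every detector to the source, computed in C by Numpy.
distances = np.linalg.norm(detectors - source, axis=1)  # *n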
Science
Textbook Newton-Raphson is often not the best tool for the job. It's used with modification or as a step in a better convex optimizer. Don't use it here, despite scipy having a vetted implementation. Among other issues in this program, it simply doesn't function for some inputs. When I ran your test_error (for a random seed of 0), your algorithm gives up at an error of 0.3 for inputs on the order of 0.5! I have not invested any effort in tracking down whether this is due to your implementation specifically, or a shortcoming of Newton-Raphson in general.
This form of localisation is typically called multilateration; it is a non-linear problem, and for non-trivial systems with more than three detectors it is over-determined. However, in your favour, it's fully continuous and differentiable. I'm sure it's been done to death in the literature, but I haven't done a literature survey, so here is my naïve take:
You need a convex optimisation that minimises the difference between the norms given in the input and the norms from each input detector position to a candidate source position, in the least-squares (Frobenius) sense. The source position is your decision variable vector. Your implementation did not have any other bounds or constraints, so I don't show any.
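Concretely, with detector positions $p_i$, measured norms $r_i$ and decision variable $s$ (the source position), one natural form of that cost is

$$\min_{s}\;\sum_{i}\bigl(\lVert p_i - s\rVert - r_i\bigr)^2 .$$

(The demonstration below actually feeds the solver a squared-norm form of the residual instead, for the Jacobian reason given further down.)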
It's easy to prove that for a 3-subset of the detector locus, i.e. solving a trilateration sub-problem of the multilateration problem, there is a dual, exactly-determined solution so long as:
- the subset is non-degenerate (no point duplicates, no collinearity); and
- the subset selects norms whose spherical surfaces all intersect.
If the Jacobian were difficult, you would want to use Anderson acceleration with an implicit numerical Jacobian. Thankfully, the Jacobian is easy, so don't do that: instead, use Levenberg-Marquardt - a variant on Gauss-Newton, which is in turn a variant on Newton-Raphson. Pass LM an explicit Jacobian. This will be numerically stable. During this first estimation step, be very generous in tolerance. This step is best represented as a call to scipy's vector-valued root-finding routine (scipy.optimize.root). As an implementation note: do not use the norm directly in your Jacobian; use the squared norm, which won't have domain problems.
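Written out, the residual and Jacobian that the demonstration passes to LM are, per detector $i$,

$$f_i(s) = r_i^2 - \lVert p_i - s\rVert^2, \qquad \frac{\partial f_i}{\partial s} = 2\,(p_i - s)^{\mathsf T},$$

which contains no square roots, so the Jacobian is defined everywhere; the plain norm's derivative divides by $\lVert p_i - s\rVert$ and misbehaves as $s$ approaches a detector.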
Again, this estimate is one value in a dual solution. It's easy to prove that the complementary value is a reflection in the plane formed by the three points used in the estimate subset. Calculate the complementary value, which is an O(1) operation. Then choose between the two estimates based on which one produces less norm error for the rest of the dataset; this will be O(n).
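For reference, with $\hat n$ the unit normal to the plane through the three subset points $q_1, q_2, q_3$, and $s_a$ the first estimate, the complementary estimate is the reflection

$$s_b = s_a - 2\,\hat n\,\bigl(\hat n \cdot (s_a - q_1)\bigr),$$

which is what resolve_estimate computes in the demonstration below.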
In the last step, you can reuse Levenberg-Marquardt, your cost function and your Jacobian. This step is best represented as a call to scipy's vector-valued least-squares routine (scipy.optimize.least_squares) - it's no longer a root-finding operation because it's almost certainly not going to have exact zeros. You can crank down the tolerance, but it should still be fast since it's getting a good estimate from the prior step.
Python bits
- If you really had to keep Vector, try making it immutable: dataclass supports frozen; or better yet, make it a NamedTuple. Among other advantages, NamedTuple supports direct conversion to a Numpy array because it's iterable. A minimal sketch of the frozen-dataclass alternative follows this list.
- Add PEP 484 type hints.
- Don't shadow the built-in max() function with a parameter name.
- Add a __main__ guard.
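The sketch below shows the frozen-dataclass alternative, under the assumption that a hand-rolled vector class is kept at all; the NamedTuple version is what the demonstration actually uses.

import dataclasses
import numpy as np

@dataclasses.dataclass(frozen=True)
class FrozenVector:
    x: float
    y: float
    z: float

    def to_array(self) -> np.ndarray:
        # Unlike a NamedTuple, a frozen dataclass is not iterable by default,
        # so convert to a Numpy array explicitly.
        return np.array((self.x, self.y, self.z))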
Demonstration (root-finding approach)
import contextlib
import math
import time
import typing
import numpy as np
import scipy.optimize
class Vector(typing.NamedTuple):
"""OP's simple vector class to avoid dependencies"""
x: float
y: float
z: float
def __add__(self, operand: typing.Self) -> typing.Self:
return Vector(self.x + operand.x, self.y + operand.y, self.z + operand.z)
def __sub__(self, operand: typing.Self) -> typing.Self:
return Vector(self.x - operand.x, self.y - operand.y, self.z - operand.z)
def __rmul__(self, scalar: float) -> typing.Self:
return Vector(self.x * scalar, self.y * scalar, self.z * scalar)
def __truediv__(self, scalar: float) -> typing.Self:
return Vector(self.x / scalar, self.y / scalar, self.z / scalar)
def __abs__(self) -> float:
return math.sqrt(pow(self.x, 2) + pow(self.y, 2) + pow(self.z, 2))
@classmethod
def from_array(cls, a: np.ndarray) -> typing.Self:
return cls(*a.tolist())
def to_array(self) -> np.ndarray:
return np.array(self)
class Localize(typing.NamedTuple):
"""OP's Newton-Raphson localization class"""
a: Vector
b: Vector
c: Vector
d: Vector
@staticmethod
def tend(y: Vector, q: Vector, dxq: float) -> Vector:
dyq = abs(y - q)
return (dxq - dyq)*(y - q)/dyq
def find(
self,
dxa: float, dxb: float, dxc: float, dxd: float,
gamma: float = 1e-3, precision: float = 1e-10, max_: int = int(1e10),
) -> Vector:
y = Vector(0, 0, 0)
for i in range(max_):
f = (
self.tend(y, self.a, dxa)
+ self.tend(y, self.b, dxb)
+ self.tend(y, self.c, dxc)
+ self.tend(y, self.d, dxd)
)
y += gamma * f
if abs(f) <= precision:
return y
raise ValueError('Max. iterations insufficient.')
def root_errors(source: np.ndarray, detectors: np.ndarray, norms2: np.ndarray) -> np.ndarray:
deltas = detectors - source
# to calculate squared distance, use Einstein sum for self dot-product across second dimension
dist_squared = np.einsum('ij,ij->i', deltas, deltas)
# norm_ref**2 - deltax**2 - deltay**2 - deltaz**2
return norms2 - dist_squared
def root_jacobian(source: np.ndarray, detectors: np.ndarray, norms2: np.ndarray) -> np.ndarray:
    # n*3 Jacobian, i.e. d(norm2_error)/d(source)
return 2*(detectors - source)
def localise_estimate(
detectors: np.ndarray, # n*3
norms: np.ndarray, # n
est_mask: slice, # e.g. :3
validate: bool,
) -> np.ndarray: # *3
"""
Phase 1 of a reasonable localisation algorithm can use an exact (iterative or otherwise)
solution for a 3-subset of the detector locus. This will converge to one of only two possible
solutions regardless of noise. It only requires the precondition that the subset (defined by
est_mask) selects non-collinear points.
This could be improved by replacing the iterative root-find with an analytic solution.
"""
exact_detectors = detectors[est_mask] # typically 3*3
args = exact_detectors, norms[est_mask]**2
x0 = detectors.mean(axis=0) # *3
if validate:
error = scipy.optimize.check_grad(root_errors, root_jacobian, x0, *args)
if not np.isclose(0, error, rtol=0, atol=1e-6):
raise ValueError(f'Jacobian reference error {error:.2e} too high')
result = scipy.optimize.root(
fun=root_errors, args=args, x0=x0,
# For an implicit Jacobian, Anderson acceleration converges fastest.
# https://en.wikipedia.org/wiki/Anderson_acceleration
# For an explicit Jacobian, Levenberg-Marquardt converges fastest.
# https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm
jac=root_jacobian, method='lm',
tol=1e-2*norms.mean(), # to within (at worst) 1% of order of magnitude of norms
)
if not result.success:
raise ValueError(result.message)
if validate:
norms_est = np.linalg.norm(exact_detectors - result.x, axis=1) # typically *3
# Levenberg-Marquardt relative error may be lower, but be lenient
if not np.allclose(norms[est_mask], norms_est, atol=0, rtol=1e-2):
raise ValueError('Estimate error too high')
return result.x # *3
def resolve_estimate(
detectors: np.ndarray, # n*3
norms: np.ndarray, # n
est_mask: slice, # e.g. :3
inv_mask: slice, # e.g. 3:
estimate_a: np.ndarray, # 3
validate: bool,
) -> np.ndarray: # 3
"""
The estimate root is one of two solutions. Generate the second via planar
reflection and choose the one that incurs less error.
"""
q1, q2, q3 = detectors[est_mask] # each *3
normal = np.cross(q2 - q1, q3 - q1) # *3
normal /= np.linalg.norm(normal)
projection = normal*normal.dot(estimate_a - q1) # *3
estimate_b = estimate_a - 2*projection # *3
if validate:
refl_norms = np.linalg.norm(detectors[est_mask] - estimate_b, axis=1) # typically *3
if not np.allclose(refl_norms, norms[est_mask]):
raise ValueError('Incorrect planar reflection')
estimates = np.stack((estimate_a, estimate_b)) # 2*3
deltas = detectors[np.newaxis, inv_mask, :] - estimates[:, np.newaxis, :] # 2*(n-3)*3
possible_norms = np.linalg.norm(deltas, axis=-1) # 2*(n-3)
# Least-squared error
errors = possible_norms - norms[inv_mask] # 2*(n-3)
errors = np.einsum('ij,ij->i', errors, errors) # *2
best_idx = errors.argmin()
return estimates[best_idx]
def localise_polish(
detectors: np.ndarray, # n*3
norms: np.ndarray, # n
estimate: np.ndarray, # 3
) -> np.ndarray:
"""
The last phase of a reasonable localisation algorithm typically cannot be exact, because (for
more than three detectors) the system is over-determined and the best we can do is a
least-squares polishing. The amount of work this has to do depends on the accuracy of the data.
"""
# Reuse the error function, Jacobian and internal method from the estimation step
result = scipy.optimize.least_squares(
fun=root_errors, jac=root_jacobian, args=(detectors, norms**2),
x0=estimate, method='lm',
)
if not result.success:
raise ValueError(result.message)
return result.x
def localise_scipy(
detectors: np.ndarray, # n*3
norms: np.ndarray, # n
validate: bool = False,
) -> np.ndarray:
# An exact root is only possible for three points; arbitrarily use the first three.
# This could be made safer by attempting to choose non-collinear points.
est_mask = slice(None, 3)
inv_mask = slice(3, None)
estimate_a = localise_estimate(
detectors=detectors, norms=norms, est_mask=est_mask, validate=validate,
)
estimate = resolve_estimate(
detectors=detectors, norms=norms, est_mask=est_mask, validate=validate,
estimate_a=estimate_a, inv_mask=inv_mask,
)
return localise_polish(
detectors=detectors, norms=norms, estimate=estimate,
)
@contextlib.contextmanager
def single_benchmark(caption: str) -> typing.Iterator[None]:
start = time.monotonic()
yield
dur = time.monotonic() - start
print(f'{caption}: {1e3*dur:.1f} ms')
def regression_test() -> None:
# synthetic detector points
a = Vector(x=0.84442185152504810, y=0.75795440294030250, z=0.4205715808308450) # random_point()
b = Vector(x=0.25891675029296335, y=0.51127472136860850, z=0.4049341374504143)
c = Vector(x=0.78379858903477260, y=0.30331272607892745, z=0.4765969541523558)
d = Vector(x=0.58338203945503120, y=0.90811288519533520, z=0.5046868558173903)
# hidden source
source_vector = Vector(x=0.28183784439970383, y=0.75580420415722390, z=0.6183689966753316)
detectors = np.stack((
a.to_array(), b.to_array(), c.to_array(), d.to_array(),
))
hidden_source = source_vector.to_array()
localize = Localize(a, b, c, d)
with single_benchmark('OP, four detectors'):
source = localize.find(abs(source_vector - a), abs(source_vector - b), abs(source_vector - c), abs(source_vector - d))
assert math.isclose(source.x, 0.28875017528775670)
assert math.isclose(source.y, 0.80460146246998620)
assert math.isclose(source.z, 0.31891289757720537)
error = abs(source_vector - source)
assert math.isclose(error, 0.3034846092048412)
# The error for OP method is quite poor. No further assertions are meaningful.
with single_benchmark('Scipy, four detectors'):
reference = localise_scipy(
detectors=detectors,
norms=np.linalg.norm(detectors - hidden_source, axis=1),
validate=True,
)
assert np.allclose(reference, hidden_source, rtol=0, atol=1e-12)
print(f'Error={reference - hidden_source}')
print()
def constant_dist_test() -> None:
# Artificial, error-free system where all detectors are the same distance from the source
hidden_source = np.array((1, 0, 2))
hidden_dist = 0.7
hidden_xy = np.array((
(1.1, 0.1),
(0.8, 0.2),
(1.0, -0.3),
(0.9, 0.4),
))
# dist == sqrt((i-i0)**2 + (j-j0)**2 + (k-k0)**2)
# k = +/- sqrt(dist**2 - (i-i0)**2 - (j-j0)**2) + k0
hidden_z = np.sqrt(
hidden_dist**2 - ((hidden_xy - hidden_source[:2])**2).sum(axis=1)
)*(1, 1, -1, 1) + hidden_source[2]
hidden_xyz = np.concatenate((hidden_xy, hidden_z[:, np.newaxis]), axis=1)
norm_check = np.linalg.norm(hidden_xyz - hidden_source, axis=1)
assert np.allclose(hidden_dist, norm_check)
# Detector positions with noise
rand = np.random.default_rng(seed=0)
noise = rand.normal(loc=0, scale=0.05, size=hidden_xyz.shape)
xyz = hidden_xyz + noise
# TDOA norms with noise
noise = rand.normal(loc=0, scale=0.05, size=xyz.shape[0])
norms = noise + np.full_like(noise, fill_value=hidden_dist)
a, b, c, d = xyz
localize = Localize(
a=Vector.from_array(a),
b=Vector.from_array(b),
c=Vector.from_array(c),
d=Vector.from_array(d),
)
with single_benchmark('OP, four detectors, constant distance'):
op_source = localize.find(*norms.tolist()).to_array()
assert np.allclose(hidden_source, op_source, rtol=0, atol=0.1)
with single_benchmark('Scipy, four detectors, constant distance'):
reference = localise_scipy(detectors=xyz, norms=norms, validate=False)
assert np.allclose(reference, hidden_source, rtol=0, atol=0.1)
print(f'Error={reference - hidden_source}')
print()
def big_benchmark(n: int = 100_000) -> None:
rand = np.random.default_rng(seed=0)
detectors = rand.uniform(low=-1, high=1, size=(n, 3))
hidden_source = np.array((0.7, -0.1, 0.2))
hidden_norms = np.linalg.norm(hidden_source - detectors, axis=1)
noise = rand.uniform(low=-0.05, high=0.05, size=n)
norms = hidden_norms + noise
with single_benchmark(f'Scipy, {n} detectors'):
source = localise_scipy(detectors=detectors, norms=norms)
print(f'Error={source - hidden_source}')
print()
if __name__ == '__main__':
regression_test()
constant_dist_test()
big_benchmark()
OP, four detectors: 4124.8 ms
Scipy, four detectors: 1.2 ms
Error=[5.55111512e-17 0.00000000e+00 1.11022302e-16]
OP, four detectors, constant distance: 1052.0 ms
Scipy, four detectors, constant distance: 0.7 ms
Error=[ 0.05115428 -0.03466696 0.0470115 ]
Scipy, 100000 detectors: 40.2 ms
Error=[0.00056583 0.00013256 0.00019433]
Analytic solution
Instead of the root-finding estimate in the first phase, you can use an analytic solution. Advantages are that it may be faster and more exact (though, in practice, this is largely a wash); disadvantages are that it is more fragile unless you take some corrective steps to ensure intersection. If these steps are not taken, for constant_dist_test the first five random seeds produce a negative radicand. The easiest hack is to simply clip() the radicand to 0.
def localise_estimate(
detectors: np.ndarray, # n*3
norms: np.ndarray, # *n
est_mask: slice, # e.g. :3
validate: bool,
) -> np.ndarray: # 2*3 in the typical case, 1*3 in the 0- or 1-solution cases
"""
Phase 1 of a reasonable localisation algorithm can use an exact solution for a 3-subset of the
detector locus. This will produce one of only two possible solutions regardless of noise. It
requires the preconditions that
- the subset (defined by est_mask) selects non-collinear points, and
- the subset sphere surfaces all intersect.
"""
exact_detectors = detectors[est_mask] # typically 3*3
r1, r2, r3 = norms[est_mask]
p1, p2, p3 = exact_detectors
# https://en.wikipedia.org/wiki/True-range_multilateration#Three_Cartesian_dimensions,_three_measured_slant_ranges
# Modified from https://stackoverflow.com/a/18654302/313768
# Inter-point vectors
p21 = p2 - p1
p31 = p3 - p1
d = np.linalg.norm(p21)
# Basis vectors
u = p21/d
i = u.dot(p31)
v = p31 - u*i
v /= np.linalg.norm(v)
j = v.dot(p31)
w = np.cross(u, v)
# Solution dimensions, still in projected space
x = 0.5/d*(r1*r1 - r2*r2 + d*d)
y = 0.5/j*(r1*r1 - r3*r3 - 2*i*x + i*i + j*j)
radicand = r1*r1 - x*x - y*y
# Instead of throwing on a negative radicand, just clamp it at 0
radicand = radicand.clip(min=0)
# Solution vectors in original space
sxy = p1 + u*x + v*y # First two dimensions, missing 'z'
if radicand <= 0:
sab = sxy[np.newaxis, :] # 1*3
else:
z = np.sqrt(radicand)
sz = w*z # 'z', prior to sign change
sa = sxy + sz # First solution
sb = sxy - sz # Second solution
sab = np.stack((sa, sb)) # Both solutions: 2*3
if validate:
for s in sab:
norms_est = np.linalg.norm(exact_detectors - s, axis=1) # typically *3
if not np.allclose(norms[est_mask], norms_est, atol=0, rtol=1e-2):
raise ValueError('Estimate error too high')
return sab
def resolve_estimate(
detectors: np.ndarray, # n*3
norms: np.ndarray, # *n
inv_mask: slice, # e.g. 3:
estimates: np.ndarray, # 1*3 or 2*3
) -> np.ndarray: # *3
"""Choose the estimate that incurs less error."""
if len(estimates) == 1:
return estimates[0]
deltas = detectors[np.newaxis, inv_mask, :] - estimates[:, np.newaxis, :] # 2*(n-3)*3
possible_norms = np.linalg.norm(deltas, axis=-1) # 2*(n-3)
# Least-squared error
errors = possible_norms - norms[inv_mask] # 2*(n-3)
errors = np.einsum('ij,ij->i', errors, errors) # *2
best_idx = errors.argmin()
return estimates[best_idx]
def root_errors(source: np.ndarray, detectors: np.ndarray, norms2: np.ndarray) -> np.ndarray:
deltas = detectors - source
# to calculate squared distance, use Einstein sum for self dot-product across second dimension
dist_squared = np.einsum('ij,ij->i', deltas, deltas)
# norm_ref**2 - deltax**2 - deltay**2 - deltaz**2
return norms2 - dist_squared
def root_jacobian(source: np.ndarray, detectors: np.ndarray, norms2: np.ndarray) -> np.ndarray:
    # n*3 Jacobian, i.e. d(norm2_error)/d(source)
return 2*(detectors - source)
def localise_polish(
detectors: np.ndarray, # n*3
norms: np.ndarray, # *n
estimate: np.ndarray, # *3
validate: bool,
) -> np.ndarray:
"""
The last phase of a reasonable localisation algorithm typically cannot be exact, because (for
more than three detectors) the system is over-determined and the best we can do is a
least-squares polishing. The amount of work this has to do depends on the accuracy of the data.
"""
if validate:
error = scipy.optimize.check_grad(
root_errors, root_jacobian,
estimate + (0.1, 0.1, 0.1),
detectors, norms**2,
)
if not np.isclose(0, error, rtol=0, atol=1e-6):
raise ValueError(f'Jacobian reference error {error:.2e} too high')
result = scipy.optimize.least_squares(
fun=root_errors, jac=root_jacobian, args=(detectors, norms**2),
x0=estimate, method='lm',
)
if not result.success:
raise ValueError(result.message)
return result.x
def localise_scipy(
detectors: np.ndarray, # n*3
norms: np.ndarray, # *n
validate: bool = False,
) -> np.ndarray:
# An exact root is only possible for three points; arbitrarily use the first three.
# This could be made safer by attempting to choose non-collinear points whose spherical
# surfaces are known to intersect.
est_mask = slice(None, 3)
inv_mask = slice(3, None)
estimates = localise_estimate(
detectors=detectors, norms=norms, est_mask=est_mask, validate=validate,
)
estimate = resolve_estimate(
detectors=detectors, norms=norms, inv_mask=inv_mask, estimates=estimates,
)
return localise_polish(
detectors=detectors, norms=norms, estimate=estimate, validate=validate,
)
@contextlib.contextmanager
def single_benchmark(caption: str) -> typing.Iterator[None]:
start = time.monotonic()
yield
dur = time.monotonic() - start
print(f'{caption}: {1e3*dur:.1f} ms')
def regression_test() -> None:
# synthetic detector points
a = Vector(x=0.84442185152504810, y=0.75795440294030250, z=0.4205715808308450) # random_point()
b = Vector(x=0.25891675029296335, y=0.51127472136860850, z=0.4049341374504143)
c = Vector(x=0.78379858903477260, y=0.30331272607892745, z=0.4765969541523558)
d = Vector(x=0.58338203945503120, y=0.90811288519533520, z=0.5046868558173903)
# hidden source
source_vector = Vector(x=0.28183784439970383, y=0.75580420415722390, z=0.6183689966753316)
detectors = np.stack((
a.to_array(), b.to_array(), c.to_array(), d.to_array(),
))
hidden_source = source_vector.to_array()
localize = Localize(a, b, c, d)
with single_benchmark('OP, four detectors'):
source = localize.find(abs(source_vector - a), abs(source_vector - b), abs(source_vector - c), abs(source_vector - d))
assert math.isclose(source.x, 0.28875017528775670)
assert math.isclose(source.y, 0.80460146246998620)
assert math.isclose(source.z, 0.31891289757720537)
error = abs(source_vector - source)
assert math.isclose(error, 0.3034846092048412)
# The error for OP method is quite poor. No further assertions are meaningful.
with single_benchmark('Scipy, four detectors'):
reference = localise_scipy(
detectors=detectors,
norms=np.linalg.norm(detectors - hidden_source, axis=1),
validate=True,
)
assert np.allclose(reference, hidden_source, rtol=0, atol=1e-12)
print(f'Error={reference - hidden_source}')
print()
def constant_dist_test() -> None:
# Artificial, error-free system where all detectors are the same distance from the source
hidden_source = np.array((1, 0, 2))
hidden_dist = 0.7
hidden_xy = np.array((
(1.1, 0.1),
(0.8, 0.2),
(1.0, -0.3),
(0.9, 0.4),
))
# dist == sqrt((i-i0)**2 + (j-j0)**2 + (k-k0)**2)
# k = +/- sqrt(dist**2 - (i-i0)**2 - (j-j0)**2) + k0
hidden_z = np.sqrt(
hidden_dist**2 - ((hidden_xy - hidden_source[:2])**2).sum(axis=1)
)*(1, 1, -1, 1) + hidden_source[2]
hidden_xyz = np.concatenate((hidden_xy, hidden_z[:, np.newaxis]), axis=1)
norm_check = np.linalg.norm(hidden_xyz - hidden_source, axis=1)
assert np.allclose(hidden_dist, norm_check)
# Detector positions with noise
rand = np.random.default_rng(seed=0)
noise = rand.normal(loc=0, scale=0.05, size=hidden_xyz.shape)
xyz = hidden_xyz + noise
# TDOA norms with noise
noise = rand.normal(loc=0, scale=0.05, size=xyz.shape[0])
norms = noise + np.full_like(noise, fill_value=hidden_dist)
a, b, c, d = xyz
localize = Localize(
a=Vector.from_array(a),
b=Vector.from_array(b),
c=Vector.from_array(c),
d=Vector.from_array(d),
)
with single_benchmark('OP, four detectors, constant distance'):
op_source = localize.find(*norms.tolist()).to_array()
assert np.allclose(hidden_source, op_source, rtol=0, atol=0.1)
with single_benchmark('Scipy, four detectors, constant distance'):
reference = localise_scipy(detectors=xyz, norms=norms, validate=False)
assert np.allclose(reference, hidden_source, rtol=0, atol=0.1)
print(f'Error={reference - hidden_source}')
print()
def big_benchmark(n: int = 100_000) -> None:
rand = np.random.default_rng(seed=0)
detectors = rand.uniform(low=-1, high=1, size=(n, 3))
hidden_source = np.array((0.7, -0.1, 0.2))
hidden_norms = np.linalg.norm(hidden_source - detectors, axis=1)
noise = rand.uniform(low=-0.05, high=0.05, size=n)
norms = hidden_norms + noise
with single_benchmark(f'Scipy, {n} detectors'):
source = localise_scipy(detectors=detectors, norms=norms)
print(f'Error={source - hidden_source}')
print()
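As in the first listing, the same imports and the Vector and Localize definitions are assumed unchanged; only the __main__ guard is repeated here so the revised listing is driven the same way:

if __name__ == '__main__':
    regression_test()
    constant_dist_test()
    big_benchmark()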
OP, four detectors: 4783.3 ms
Scipy, four detectors: 0.9 ms
Error=[ 0.00000000e+00 1.11022302e-16 -1.11022302e-16]
OP, four detectors, constant distance: 1045.4 ms
Scipy, four detectors, constant distance: 0.4 ms
Error=[ 0.0511491 -0.0346768 0.04701431]
Scipy, 100000 detectors: 45.8 ms
Error=[0.00056583 0.00013256 0.00019433]