
Example code:

import numpy as np
import math
import time

x = np.ones((2000, 2000))

# time np.linalg.norm with ord=2
start = time.time()
print(np.linalg.norm(x, 2))
end = time.time()
print("time 1: " + str(end - start))

# time the direct square-root-of-sum-of-squares computation
start = time.time()
print(math.sqrt(np.sum(x*x)))
end = time.time()
print("time 2: " + str(end - start))

The output (on my machine) is:

1999.999999999991
time 1: 3.216777801513672
2000.0
time 2: 0.015042781829833984

This shows that np.linalg.norm() takes more than 3 s to compute the norm, while the direct computation takes only about 0.015 s. Why is np.linalg.norm() so slow?

kqwyf

2 Answers


np.linalg.norm(x, 2) computes the matrix 2-norm, i.e. the largest singular value, which requires a full singular value decomposition, an O(n³) operation.

math.sqrt(np.sum(x*x)) computes the Frobenius norm, which is just an elementwise reduction over the entries.

These operations are different, so it should be no surprise that they take different amounts of time. (For this particular rank-one matrix of all ones the two values happen to coincide at 2000, but in general they do not.) What is the difference between the Frobenius norm and the 2-norm of a matrix? on math.SO may be of interest.
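
If what you actually want is the Frobenius norm, NumPy already computes it quickly: it is the default matrix norm when no ord argument is given. A minimal sketch contrasting the two (timings are illustrative and will vary by machine):

import numpy as np
import time

x = np.ones((2000, 2000))

# Default matrix norm is the Frobenius norm: a cheap elementwise reduction.
start = time.time()
print(np.linalg.norm(x))        # same value as math.sqrt(np.sum(x*x))
print("Frobenius:", time.time() - start)

# ord=2 asks for the spectral norm, which goes through an SVD.
start = time.time()
print(np.linalg.norm(x, 2))
print("spectral:", time.time() - start)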

Eric
  • Slightly off-topic and merely out of curiosity, do you happen to know how one computes the operator norm for p!=2? – Paul Panzer Oct 14 '18 at 15:57

What is comparable is computing the per-row vector norms both ways:

In [10]: %timeit sum(x*x,axis=1)**.5
36.4 ms ± 6.11 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [11]: %timeit norm(x,axis=1)
32.3 ms ± 3.94 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

(Here sum and norm are assumed to be np.sum and np.linalg.norm.) By contrast, np.linalg.norm(x, 2) and sum(x*x)**.5 are not the same thing, so their timings cannot be meaningfully compared.
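
To make the distinction concrete, a quick check (reusing the x from the question) that the two axis=1 expressions above agree, while ord=2 computes a different quantity:

import numpy as np

x = np.ones((2000, 2000))

# The two timed expressions compute the same per-row vector 2-norms.
assert np.allclose(np.sum(x*x, axis=1)**.5, np.linalg.norm(x, axis=1))

# ord=2 (spectral norm) and sqrt-of-sum-of-squares (Frobenius norm) are
# different quantities; they only coincide here because x is rank one.
print(np.linalg.norm(x, 2))     # largest singular value: 2000
print(np.sum(x*x)**.5)          # Frobenius norm: sqrt(2000*2000) = 2000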

B. M.