
I'm translating Python code into a C++ version, but I found that the two functions (numpy's linalg.svd and Eigen's JacobiSVD) produce different results. What should I do?

import numpy as np
from numpy.linalg import svd

A = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12],
              [13, 14, 15, 16]])
# The second argument 0 means full_matrices=False; note that numpy
# returns U, the singular values, and V transposed (bound to V here).
U, S, V = svd(A, 0)
print("U =\n", U)
print("S =\n", S)
print("V =\n", V)

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;
using namespace std;

int main() {
    MatrixXf m = MatrixXf::Zero(4, 4);
    m << 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16;
    cout << "Here is the matrix m:" << endl << m << endl;
    JacobiSVD<MatrixXf> svd(m, ComputeFullU | ComputeFullV);
    cout << "Its singular values are:" << endl << svd.singularValues() << endl;
    cout << "Its left singular vectors are the columns of the thin U matrix:" << endl << endl << svd.matrixU() << endl;
    cout << "Its right singular vectors are the columns of the thin V matrix:" << endl << endl << svd.matrixV() << endl;
}

Forgive me for not being clear, but here are the Python and C++ results:

Python output:

U =
 [[-0.13472212 -0.82574206  0.54255324  0.07507318]
 [-0.3407577  -0.4288172  -0.77936056  0.30429774]
 [-0.54679327 -0.03189234 -0.06893859 -0.83381501]
 [-0.75282884  0.36503251  0.30574592  0.45444409]]
S =
 [3.86226568e+01 2.07132307e+00 1.57283823e-15 3.14535571e-16]
V =
 [[-0.4284124  -0.47437252 -0.52033264 -0.56629275]
 [ 0.71865348  0.27380781 -0.17103786 -0.61588352]
 [-0.19891147 -0.11516042  0.82705525 -0.51298336]
 [ 0.51032757 -0.82869661  0.12641052  0.19195853]]

C++ output:

Here is the matrix m:

1  2  3  4
5  6  7  8
9 10 11 12
13 14 15 16

Its singular values are:

    38.6227
    2.07132
2.69062e-16
  6.823e-17

Its left singular vectors are the columns of the thin U matrix:

 0.134722  0.825742 0.0384608  0.546371
 0.340758  0.428817   0.35596 -0.757161
 0.546793 0.0318923 -0.827301  -0.12479
 0.752829 -0.365033  0.432881   0.33558

Its right singular vectors are the columns of the thin V matrix:

  0.428412  -0.718653  -0.124032   0.533494
  0.474373  -0.273808  -0.232267  -0.803774
  0.520333   0.171038    0.83663 0.00706489
  0.566293   0.615884  -0.480331   0.263215

It turns out that there are some small deviations between the two results. Will this affect my work?
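
One way to check whether such deviations matter is to verify that the factorization still reproduces A, and that flipping the sign of a matched column of U and row of V^T leaves the product unchanged. Here is a minimal numpy sketch of that check (it recomputes the decomposition rather than reusing the variables above):

import numpy as np

A = np.array([[1., 2., 3., 4.],
              [5., 6., 7., 8.],
              [9., 10., 11., 12.],
              [13., 14., 15., 16.]])

# numpy returns U, the singular values, and V transposed (Vt)
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# 1) the decomposition reproduces A up to floating-point error
print(np.allclose(U @ np.diag(S) @ Vt, A))        # True

# 2) flipping the sign of a column of U together with the matching row
#    of Vt leaves the product unchanged, so two libraries can disagree
#    on signs while both being correct
U2, Vt2 = U.copy(), Vt.copy()
U2[:, 1] *= -1
Vt2[1, :] *= -1
print(np.allclose(U2 @ np.diag(S) @ Vt2, A))      # True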

  • _"...some small deviations,..."_ are these deviations within the range that can be explained for my doing floating point computations in a different order? Please show a few of the _"deviant"_ values. Note that with floating point maths it's not guaranteed that `a + b == a + b` see [Is floating point addition commutative in C++?](https://stackoverflow.com/a/24446382/3370124) – Richard Critten Feb 19 '23 at 12:38
  • Forgive me for not being clear, but here are the Python code and C++ code results – Joe Feb 19 '23 at 12:48
  • I'm sorry, I don't know how to change the question. I've added data to the answer below – Joe Feb 19 '23 at 13:07
  • 2
    The SVD isn't unique, see for example https://stackoverflow.com/questions/28523482/why-svd-left-singular-vectors-computed-with-eigen-and-opencv-have-different-sign Also, the singular values for the last to columns are effectively zero. You can disregard them as numerical noise – Homer512 Feb 20 '23 at 09:26
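
Following up on the sign-ambiguity point in the last comment, a sign-invariant comparison of the two U matrices could look like the sketch below. `align_signs` is a hypothetical helper, and it assumes the Eigen result has been copied into a numpy array by hand; only the first two columns are worth comparing, since the last two singular values are numerical noise.

import numpy as np

def align_signs(u_ref, u_other):
    # Hypothetical helper: flip each column of u_other so that it points
    # in the same direction as the corresponding column of u_ref.
    signs = np.sign(np.sum(u_ref * u_other, axis=0))
    signs[signs == 0] = 1.0
    return u_other * signs

# Usage sketch (u_np from numpy, u_eigen copied from the Eigen output):
# diff = np.abs(u_np[:, :2] - align_signs(u_np[:, :2], u_eigen[:, :2]))
# print(diff.max())   # expected to be on the order of float32 rounding error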
