
I asked this question yesterday: Simulating matlab's mrdivide with 2 square matrices

And that got mrdivide working. However, now I'm having problems with mldivide, which is currently implemented as follows:

cv::Mat mldivide(const cv::Mat& A, const cv::Mat& B )
{
    // work in double precision, i.e. CV_64FC1
    cv::Mat a;
    cv::Mat b;
    A.convertTo( a, CV_64FC1 );
    B.convertTo( b, CV_64FC1 );

    // solve a*x = b for x (the equivalent of MATLAB's a\b)
    cv::Mat ret;
    cv::solve( a, b, ret, cv::DECOMP_NORMAL );

    // convert the result back to the type of the inputs
    cv::Mat ret2;
    ret.convertTo( ret2, A.type() );
    return ret2;
}

By my understanding, the fact that mrdivide is working should mean that mldivide works too, but I can't get it to give me the same results as MATLAB. Again, the results are nothing alike.
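
For reference, the identity I'm relying on is that right division can be written in terms of left division by transposing, i.e. B/A == (A'\B')' in MATLAB. In OpenCV terms that would look something like the sketch below (just illustrative; the helper name is made up and this is not necessarily how my actual mrdivide is written):

// Sketch only: MATLAB's right division B/A expressed via left division,
// using the identity B/A == (A'\B')'.
cv::Mat mrdivideViaMldivide(const cv::Mat& B, const cv::Mat& A)
{
    cv::Mat x = mldivide(A.t(), B.t());   // solve A' * x = B'
    cv::Mat result = x.t();               // transpose back: (A'\B')' == B * inv(A)
    return result;
}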

It's worth noting that I am trying to do a [19x19] \ [19x200], so the matrices are not both square this time.

  • possible duplicate of [How to implement Matrix left division in OpenCV](http://stackoverflow.com/questions/16414526/how-to-implement-matrix-left-division-in-opencv) – beaker Sep 23 '15 at 14:57
  • @beaker and if you look you'll see I have already "implemented" the method explained at that link. – Goz Sep 23 '15 at 15:34
  • Then you'll have to show us what the problem is. I'll remove the duplicate vote. – beaker Sep 23 '15 at 15:35
  • @beaker .. I'm not sure HOW to show you. The answer that the above gives me and what matlab gives me with the same data are totally different – Goz Sep 23 '15 at 15:44
  • Well, have you calculated `A*x = B` with the results from each of them and found one to be incorrect? – beaker Sep 23 '15 at 16:21

1 Answer


As I've previously mentioned in your other question, I am using MATLAB along with mexopencv; that way I can easily compare the output of both MATLAB and OpenCV.

That said, I can't reproduce your problem: I generated random matrices and repeated the comparison N=100 times. I'm running MATLAB R2015a with mexopencv compiled against OpenCV 3.0.0:

N = 100;
r = zeros(N,2);
d = zeros(N,1);
for i=1:N
    % double precision, i.e CV_64F
    A = randn(19,19);
    B = randn(19,200);

    x1 = A\B;
    x2 = cv.solve(A,B);   % this is a MEX function that calls cv::solve

    r(i,:) = [norm(A*x1-B), norm(A*x2-B)];
    d(i) = norm(x1-x2);
end

All results agreed, and the errors were very small, on the order of 1e-11:

>> mean(r)
ans =
   1.0e-12 *
    0.2282    0.2698

>> mean(d)
ans =
   6.5457e-12

(By the way, I also tried x2 = cv.solve(A,B, 'IsNormal',true), which sets the cv::DECOMP_NORMAL flag, and the results were not that different either.)

This leads me to believe that either your matrices happen to hit some edge case in the OpenCV solver where it fails to give a proper solution, or, more likely, you have a bug somewhere else in your code.
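
If you want to run the same sanity check on the C++ side, you could compute the residual ||A*x - B|| directly in OpenCV, as beaker suggested in the comments. This is just a minimal sketch that assumes the mldivide helper from your question (the header path shown is for OpenCV 3.x):

#include <opencv2/core.hpp>   // use <opencv2/core/core.hpp> on OpenCV 2.x
#include <iostream>

cv::Mat mldivide(const cv::Mat& A, const cv::Mat& B);   // the helper from your question

// Sketch: check a candidate solution x of A*x = B by its residual norm,
// which should be tiny (close to machine precision) for a correct solution.
void checkSolution(const cv::Mat& A, const cv::Mat& B)
{
    cv::Mat x = mldivide(A, B);   // 19x200 in your case

    cv::Mat A64, B64, x64;
    A.convertTo(A64, CV_64FC1);
    B.convertTo(B64, CV_64FC1);
    x.convertTo(x64, CV_64FC1);

    // cv::norm defaults to the L2 (Frobenius) norm for matrices
    std::cout << "||A*x - B|| = " << cv::norm(A64 * x64 - B64) << std::endl;
}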

I'd start by double-checking how you load your data, and especially watch out for how the matrices are laid out (MATLAB is column-major, while OpenCV is row-major)...
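
For example, if your matrices come from a raw column-major dump (MATLAB memory order), one way to get the layout right is to build the cv::Mat with swapped dimensions and transpose it. A rough sketch, where the buffer and its origin are purely hypothetical:

#include <opencv2/core.hpp>

// Sketch: turn a column-major buffer (MATLAB memory order) into a row-major cv::Mat.
// 'buffer' is assumed to hold rows*cols doubles written column by column.
cv::Mat fromColumnMajor(const double* buffer, int rows, int cols)
{
    // interpret the buffer as the transposed matrix (cols x rows, row-major), no copy yet...
    cv::Mat transposed(cols, rows, CV_64FC1, const_cast<double*>(buffer));
    // ...then transpose; this evaluates into a new Mat, so the buffer can be freed afterwards
    cv::Mat result = transposed.t();
    return result;
}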

Also, you never told us anything about your matrices: do they have any particular structure or symmetries, are they mostly zeros, what is their rank, etc.?

In OpenCV, the default solver method is LU factorization, and you have to explicitly change it yourself if another decomposition is more appropriate. MATLAB, on the other hand, will automatically choose a method that best suits the matrix A, and LU is just one of the possible decompositions.
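
To make that explicit on the OpenCV side, the decomposition used by cv::solve is selected through its flags argument. A small sketch of the available options (for a general square, non-singular A like your 19x19, DECOMP_LU is the natural default):

#include <opencv2/core.hpp>

// Sketch: choosing the decomposition explicitly when calling cv::solve.
cv::Mat solveWith(const cv::Mat& A, const cv::Mat& B, int method = cv::DECOMP_LU)
{
    // cv::DECOMP_LU       - default; A must be square and non-singular
    // cv::DECOMP_CHOLESKY - symmetric positive-definite A
    // cv::DECOMP_QR       - QR factorization; also handles over-determined systems
    // cv::DECOMP_SVD      - most robust; works even when A is singular
    // cv::DECOMP_NORMAL   - can be OR-ed with any of the above to solve via the normal equations
    cv::Mat x;
    cv::solve(A, B, x, method);
    return x;
}

// e.g. cv::Mat x = solveWith(A, B, cv::DECOMP_SVD);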


EDIT (response to comments)

When using the SVD in MATLAB, the signs of the left and right singular vectors in U and V are arbitrary (this really comes from the DGESVD LAPACK routine). In order to get consistent results, one convention is to require that the first element of each singular vector have a certain sign, multiplying each vector by +1 or -1 to flip the sign as appropriate. I would also suggest checking out eigenshuffle.
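
On the OpenCV side, the same convention could be applied to the output of cv::SVD::compute. A minimal sketch (note that OpenCV returns V already transposed, as vt):

#include <opencv2/core.hpp>

// Sketch: make the first entry of each left singular vector non-negative,
// flipping the matching column of u and row of vt together so that
// u * diag(w) * vt stays exactly the same.
void normalizeSvdSigns(cv::Mat& u, cv::Mat& vt)
{
    CV_Assert(u.type() == CV_64FC1 && vt.type() == CV_64FC1);
    for (int j = 0; j < u.cols; ++j)
    {
        if (u.at<double>(0, j) < 0)
        {
            for (int i = 0; i < u.rows; ++i)
                u.at<double>(i, j) = -u.at<double>(i, j);    // flip j-th column of u
            for (int k = 0; k < vt.cols; ++k)
                vt.at<double>(j, k) = -vt.at<double>(j, k);  // flip j-th row of vt
        }
    }
}

// usage:
//   cv::Mat w, u, vt;
//   cv::SVD::compute(A, w, u, vt);
//   normalizeSvdSigns(u, vt);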

One more time, here is a test I did to confirm that I get similar results for SVD in MATLAB and OpenCV:

N = 100;
r = zeros(N,2);
d = zeros(N,3);
for i=1:N
    % double precision, i.e CV_64F
    A = rand(19);

    % compute SVD in MATLAB, and apply sign convention
    [U1,S1,V1] = svd(A);
    sn = sign(U1(1,:));
    U1 = bsxfun(@times, sn, U1);
    V1 = bsxfun(@times, sn, V1);
    r(i,1) = norm(U1*S1*V1' - A);

    % compute SVD in OpenCV, and apply sign convention
    [S2,U2,V2] = cv.SVD.Compute(A);
    S2 = diag(S2);
    sn = sign(U2(1,:));
    U2 = bsxfun(@times, sn, U2);
    V2 = bsxfun(@times, sn', V2)';  % Note: V2 was transposed w.r.t V1
    r(i,2) = norm(U2*S2*V2' - A);

    % compare
    d(i,:) = [norm(V1-V2), norm(U1-U2), norm(S1-S2)];
end

Again, all results were very similar, with errors close to machine epsilon and negligible:

>> mean(r)
ans =
   1.0e-13 *
    0.3381    0.1215

>> mean(d)
ans =
   1.0e-13 *
    0.3113    0.3009    0.0578

One thing I'm not sure about on the OpenCV side: MATLAB's svd function returns the singular values sorted in decreasing order (unlike the eig function), with the columns of the singular vectors in corresponding order, but I don't know whether OpenCV guarantees the same.

Now, if the singular values in OpenCV are not guaranteed to be sorted for some reason, you would have to do it manually if you want to compare the results against MATLAB, as in:

% not needed in MATLAB
[U,S,V] = svd(A);
[S, ord] = sort(diag(S), 'descend');
S = diag(S);
U = U(:,ord);
V = V(:,ord);
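
And on the OpenCV side, in case the order there is ever not what you expect, the same manual reordering could look roughly like the sketch below (assuming the double-precision w, u, vt returned by cv::SVD::compute):

#include <opencv2/core.hpp>
#include <algorithm>
#include <numeric>
#include <vector>

// Sketch: reorder an SVD (w, u, vt) so the singular values are in descending order,
// permuting the columns of u and the rows of vt consistently.
void sortSvdDescending(cv::Mat& w, cv::Mat& u, cv::Mat& vt)
{
    const int n = static_cast<int>(w.total());
    std::vector<int> ord(n);
    std::iota(ord.begin(), ord.end(), 0);
    std::sort(ord.begin(), ord.end(),
              [&](int a, int b) { return w.at<double>(a) > w.at<double>(b); });

    cv::Mat wSorted(w.size(), w.type());
    cv::Mat uSorted(u.size(), u.type());
    cv::Mat vtSorted(vt.size(), vt.type());
    for (int i = 0; i < n; ++i)
    {
        wSorted.at<double>(i) = w.at<double>(ord[i]);
        u.col(ord[i]).copyTo(uSorted.col(i));    // i-th largest left singular vector
        vt.row(ord[i]).copyTo(vtSorted.row(i));  // matching right singular vector (row of vt)
    }
    w = wSorted; u = uSorted; vt = vtSorted;
}
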
  • Thanks for the response. Thinking about it ... it may be working correctly. I forget that OpenCV's SVD returns similar results (but with different signs) to Matlab's. As SVD is used in an earlier step its quite possible that those signs are messing me about ... – Goz Sep 24 '15 at 08:51
  • @Goz see my edit about SVD. One thing also worth noting is that starting with OpenCV 2.3, they moved away from using LAPACK for SVD, in favor of their own implementation (which they claimed to be faster for small matrices): http://code.opencv.org/issues/1498, http://code.opencv.org/projects/opencv/wiki/ChangeLog#23-beta – Amro Sep 24 '15 at 10:22