
I am trying to convert from CV_16SC1 to CV_32FC1.

my code is quite simple:

//print value of data.type() = 16
data.convertTo(data, CV_32FC1); 
//print new value of data.type() = 21

Is there a reason why the conversion does not happen? Maybe for some reason it's not possible to convert between those two types?

Because according to this post: Getting enum names (e.g. CV_32FC1) of OpenCV image types?

my result is CV_32SC1, which is very undesirable since I am trying to perform matrix multiplication on my matrix; that's why I want it to have a floating point type.
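For context, here is a minimal sketch of what I am ultimately trying to do (assuming that cv::gemm, which backs cv::Mat's operator*, only accepts floating-point matrices):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat data = cv::Mat::ones(3, 3, CV_16SC1);

    // operator* on cv::Mat requires a floating-point matrix,
    // hence the conversion first
    data.convertTo(data, CV_32FC1);

    cv::Mat product = data * data.t();   // matrix multiplication
    std::cout << product << "\n";
    return 0;
}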


1 Answer


I don't see any evidence of the conversion not working (nor any reason why it shouldn't).

#include <opencv2/opencv.hpp>
#include <iostream>


int main()
{
    cv::Mat data = cv::Mat::ones(2, 2, CV_16SC1);
    std::cout << "Initial data=" << data << "\n";
    std::cout << "Initial type= " << data.type() << "\n";
    std::cout << "Type is CV_16SC1: " << ((data.type() == CV_16SC1) ? "true" : "false") << "\n";

    cv::Mat converted;
    data.convertTo(converted, CV_32FC1);

    std::cout << "Converted" << converted << "\n";
    std::cout << "Converted type=" << converted.type() << "\n";
    std::cout << "Type is CV_32FC1: " << ((converted.type() == CV_32FC1) ? "true" : "false") << "\n";

    return 0;
}

And the transcript from the console:

Initial data=[1, 1;
  1, 1]
Initial type= 3
Type is CV_16SC1: true
Converted[1, 1;
  1, 1]
Converted type=5
Type is CV_32FC1: true

Looking at your snippet, I'm not even certain how you got those numbers; they decode as follows (see also the sketch after this list):

  • 16 = 2*8 + 0 -- 3 channels, 8-bit, unsigned (CV_8UC3)
  • 21 = 2*8 + 5 -- 3 channels, 32-bit, float (CV_32FC3)
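Those two values can also be decoded programmatically. A quick sketch using OpenCV's CV_MAT_DEPTH and CV_MAT_CN macros (the encoding is type = depth + ((channels - 1) << 3)):

#include <opencv2/opencv.hpp>
#include <iostream>

// Split an OpenCV type id into its depth and channel count.
static void describeType(int type)
{
    std::cout << "type " << type
              << " -> depth " << CV_MAT_DEPTH(type)
              << ", channels " << CV_MAT_CN(type) << "\n";
}

int main()
{
    describeType(16);  // depth 0 (CV_8U),  3 channels -> CV_8UC3
    describeType(21);  // depth 5 (CV_32F), 3 channels -> CV_32FC3
    return 0;
}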

You made an off-by-one error interpreting the resulting data type:

#define CV_32F  5
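For comparison, its neighbours among the depth constants in the same header:

#define CV_16S  3
#define CV_32S  4
#define CV_32F  5

Reading the table one entry too low is what turns CV_32F (5) into CV_32S (4).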