
I am using the Melexis MLX90640 32x24 thermal camera sensor connected to a Raspberry Pi 3 via I2C.

Using Pimoroni's fbuf example, I can show the camera data in false color on the screen through the framebuffer.

Since this is drawn directly on the framebuffer rather than exposed as a video stream or camera device, I am unable to read it in OpenCV. I want to use the video stream in OpenCV to count people in a room, but I do not know how to modify the fbuf code to output video.

It does not need to be true video, just an image stream that OpenCV can read continuously.

What I Tried

I installed v4l2loopback to create a virtual camera device on the Pi at /dev/video0. Then I used GStreamer to stream the specific area of the screen that the fbuf code was writing the IR camera's false color data to. This created a stream that OpenCV could read, but the thermal image data in the stream did not update. Sometimes the image data would partially come through, but it normally just showed the Pi desktop. The whole approach also feels inelegant and buggy, so I want a more solid solution.
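For reference, the OpenCV side of that attempt was just a plain capture loop reading the loopback device. This is a minimal sketch; the device index or path depends on how v4l2loopback is configured:

#include <opencv2/opencv.hpp>

int main() {
    // Open the v4l2loopback virtual device that GStreamer was feeding
    cv::VideoCapture cap("/dev/video0");   // or cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("thermal", frame);
        if (cv::waitKey(1) == 27) break;   // Esc to quit
    }
    return 0;
}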

Results So Far

Lepton has a working example for their sensor based on the ondemandcam example from v4l2loopback, but that is a different sensor and it communicates over SPI instead of I2C.

My goal is to combine that code with Pimoroni's fbuf frame-capture code to get a stable video stream from the sensor that I can import into OpenCV.

Lepton's code is based on the ondemandcam example from v4l2loopback. It adds its own sensor code to the grab_frame() function; the open_vpipe() function is identical to the one in the ondemandcam example.

If I could put the framebuffer code from fbuf into the grab_frame() function, then I think it would work, but I am unsure how to do that.
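Roughly what I imagine grab_frame() turning into is sketched below. This is only a sketch under my assumptions: pipe_fd would come from open_vpipe(), the loopback device is set up for raw 32x24 RGB24 frames as in the ondemandcam example, and a simple grey ramp stands in for the false-color mapping:

#include <stdint.h>
#include <unistd.h>

// Sketch: push one MLX90640 frame into the v4l2loopback pipe instead of the
// framebuffer. pipe_fd is assumed to come from open_vpipe().
static void grab_frame(int pipe_fd, const float *mlx90640To) {
    static uint8_t rgb[24 * 32 * 3];                 // one RGB24 pixel per sensor cell
    for (int y = 0; y < 24; y++) {
        for (int x = 0; x < 32; x++) {
            float val = mlx90640To[32 * (23 - y) + x];
            // Map 5..50 degC to 0..255 (the same range the fbuf heatmap uses);
            // real code would reuse the false-color mapping instead
            float t = (val - 5.0f) / 45.0f;
            if (t < 0.0f) t = 0.0f;
            if (t > 1.0f) t = 1.0f;
            uint8_t v = (uint8_t)(t * 255.0f);
            uint8_t *px = &rgb[(y * 32 + x) * 3];
            px[0] = px[1] = px[2] = v;
        }
    }
    write(pipe_fd, rgb, sizeof(rgb));                // one frame out to /dev/videoN
}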

fbuf Code Snippet

This for loop seems to be what I need to put in the grab_frame() function.

for(int y = 0; y < 24; y++){
    for(int x = 0; x < 32; x++){
        float val = mlx90640To[32 * (23-y) + x];
        put_pixel_false_colour((y*IMAGE_SCALE), (x*IMAGE_SCALE), val);
    }
}

fbuf Full Code

#include <stdint.h>
#include <iostream>
#include <cstring>
#include <fstream>
#include <chrono>
#include <thread>
#include <math.h>
#include "headers/MLX90640_API.h"
#include "lib/fb.h"

#define MLX_I2C_ADDR 0x33

#define IMAGE_SCALE 5

// Valid frame rates are 1, 2, 4, 8, 16, 32 and 64
// The i2c baudrate is set to 1mhz to support these
#define FPS 8
#define FRAME_TIME_MICROS (1000000/FPS)

// Despite the framerate being ostensibly FPS hz
// The frame is often not ready in time
// This offset is added to the FRAME_TIME_MICROS
// to account for this.
#define OFFSET_MICROS 850

void put_pixel_false_colour(int x, int y, double v) {
    // Heatmap code borrowed from:
    // http://www.andrewnoske.com/wiki/Code_-_heatmaps_and_color_gradients
    const int NUM_COLORS = 7;
    static float color[NUM_COLORS][3] = { {0,0,0}, {0,0,1}, {0,1,0}, {1,1,0}, {1,0,0}, {1,0,1}, {1,1,1} };
    int idx1, idx2;
    float fractBetween = 0;
    float vmin = 5.0;
    float vmax = 50.0;
    float vrange = vmax-vmin;
    v -= vmin;
    v /= vrange;
    if(v <= 0) {idx1=idx2=0;}
    else if(v >= 1) {idx1=idx2=NUM_COLORS-1;}
    else
    {
        v *= (NUM_COLORS-1);
        idx1 = floor(v);
        idx2 = idx1+1;
        fractBetween = v - float(idx1);
    }

    int ir, ig, ib;


    ir = (int)((((color[idx2][0] - color[idx1][0]) * fractBetween) + color[idx1][0]) * 255.0);
    ig = (int)((((color[idx2][1] - color[idx1][1]) * fractBetween) + color[idx1][1]) * 255.0);
    ib = (int)((((color[idx2][2] - color[idx1][2]) * fractBetween) + color[idx1][2]) * 255.0);

    for(int px = 0; px < IMAGE_SCALE; px++){
        for(int py = 0; py < IMAGE_SCALE; py++){
            fb_put_pixel(x + px, y + py, ir, ig, ib);
        }
    }
}

int main(){
    static uint16_t eeMLX90640[832];
    float emissivity = 1;
    uint16_t frame[834];
    static float image[768];
    static float mlx90640To[768];
    float eTa;
    static uint16_t data[768*sizeof(float)];

    auto frame_time = std::chrono::microseconds(FRAME_TIME_MICROS + OFFSET_MICROS);

    MLX90640_SetDeviceMode(MLX_I2C_ADDR, 0);
    MLX90640_SetSubPageRepeat(MLX_I2C_ADDR, 0);
    switch(FPS){
        case 1:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b001);
            break;
        case 2:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b010);
            break;
        case 4:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b011);
            break;
        case 8:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b100);
            break;
        case 16:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b101);
            break;
        case 32:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b110);
            break;
        case 64:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b111);
            break;
        default:
            printf("Unsupported framerate: %d", FPS);
            return 1;
    }
    MLX90640_SetChessMode(MLX_I2C_ADDR);

    paramsMLX90640 mlx90640;
    MLX90640_DumpEE(MLX_I2C_ADDR, eeMLX90640);
    MLX90640_ExtractParameters(eeMLX90640, &mlx90640);

    fb_init();

    while (1){
        auto start = std::chrono::system_clock::now();
        MLX90640_GetFrameData(MLX_I2C_ADDR, frame);
        MLX90640_InterpolateOutliers(frame, eeMLX90640);

        eTa = MLX90640_GetTa(frame, &mlx90640);
        MLX90640_CalculateTo(frame, &mlx90640, emissivity, eTa, mlx90640To);

        for(int y = 0; y < 24; y++){
            for(int x = 0; x < 32; x++){
                float val = mlx90640To[32 * (23-y) + x];
                put_pixel_false_colour((y*IMAGE_SCALE), (x*IMAGE_SCALE), val);
            }
        }
        auto end = std::chrono::system_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
        std::this_thread::sleep_for(std::chrono::microseconds(frame_time - elapsed));
    }

    fb_cleanup();
    return 0;
}

Update #1 with Modified Code

I added this at the top:

#include "opencv2/core/core.hpp"

using namespace cv;
using namespace std;

Then I modified the loop as suggested in the comments, but it will not compile.

Update #2

Now I only have one compile error.

error: no match for 'operator[]' (operand types are 'cv::Mat' and 'int')
test_mat[y,x] = val;
        ^
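For the record, the fix that eventually worked (per Mark Setchell's comments below) was to index with Mat::at, since cv::Mat has no comma-style operator[]. A one-line sketch, assuming test_mat was created as a 24-row by 32-column CV_32FC1 Mat:

// cv::Mat has no operator[](row, col); index with at<element type>(row, col)
test_mat.at<float>(y, x) = val;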

Update #3

Now the compile error is gone, but these linker errors appeared instead.

g++ -I. -std=c++11 -std=c++11   -c -o examples/fbuf.o examples/fbuf.cpp
g++ -L/home/pi/mlx90640-library examples/fbuf.o examples/lib/fb.o libMLX90640_API.a -o fbuf -lbcm2835
examples/fbuf.o: In function `cv::Mat::Mat(int, int, int, void*, unsigned int)':
fbuf.cpp:(.text._ZN2cv3MatC2EiiiPvj[_ZN2cv3MatC5EiiiPvj]+0x144): undefined reference to `cv::error(int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char const*, char const*, int)'
fbuf.cpp:(.text._ZN2cv3MatC2EiiiPvj[_ZN2cv3MatC5EiiiPvj]+0x21c): undefined reference to `cv::error(int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char const*, char const*, int)'
fbuf.cpp:(.text._ZN2cv3MatC2EiiiPvj[_ZN2cv3MatC5EiiiPvj]+0x2ac): undefined reference to `cv::Mat::updateContinuityFlag()'
examples/fbuf.o: In function `cv::Mat::~Mat()':
fbuf.cpp:(.text._ZN2cv3MatD2Ev[_ZN2cv3MatD5Ev]+0x3c): undefined reference to `cv::fastFree(void*)'
examples/fbuf.o: In function `cv::Mat::release()':
fbuf.cpp:(.text._ZN2cv3Mat7releaseEv[_ZN2cv3Mat7releaseEv]+0x68): undefined reference to `cv::Mat::deallocate()'
collect2: error: ld returned 1 exit status
Makefile:37: recipe for target 'fbuf' failed
make: *** [fbuf] Error 1

Update #4 - Compiles but No Visual Output from the Mat in OpenCV

Now the program compiles. I had to make additions to the Makefile.

I added:

CPPFLAGS = `pkg-config --cflags opencv`
LDLIBS = `pkg-config --libs opencv`

and appended $(I2C_LIBS), $(CPPFLAGS) and $(LDLIBS) to the following rule:

fbuf: examples/fbuf.o examples/lib/fb.o libMLX90640_API.a
    $(CXX) -L/home/pi/mlx90640-library $^ -o $@ $(I2C_LIBS) $(CPPFLAGS) $(LDLIBS)

Now my loop looks like this, but when I run the program there is no visual output. Since I am no longer using the false color function, how do I display the image from the Mat?
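As it turned out, the missing pieces were a normalize() so the float data is not rendered as all black, and a waitKey() after imshow() so HighGUI actually paints the window (see the comments and Update #5 below). A minimal display snippet, using the variable name from the final code:

Mat display;
normalize(IR_mat, display, 0, 1.0, NORM_MINMAX, CV_32FC1);  // scale the floats to 0..1 for imshow
imshow("IR Camera Window", display);
waitKey(1);  // give HighGUI a chance to draw the frame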

Update #5 - Solved

Thanks to Mark Setchell and the original code from Pimoroni, I now have code that imports the MLX90640 sensor data into OpenCV. I was also able to replace the false color function with OpenCV's built-in applyColorMap function.

Now I can start to use OpenCV to process the data. I used multiple Mat conversions to get to the final image; there is likely a more efficient way to do this (a possible simplification is sketched after the code below).

Full Working Code

#include <stdint.h>
#include <iostream>
#include <cstring>
#include <fstream>
#include <chrono>
#include <thread>
#include <math.h>
#include "headers/MLX90640_API.h"

#include "opencv2/opencv.hpp"

using namespace cv;
using namespace std;

#define MLX_I2C_ADDR 0x33

// Valid frame rates are 1, 2, 4, 8, 16, 32 and 64
// The i2c baudrate is set to 1mhz to support these
#define FPS 8
#define FRAME_TIME_MICROS (1000000/FPS)

// Despite the framerate being ostensibly FPS hz
// The frame is often not ready in time
// This offset is added to the FRAME_TIME_MICROS
// to account for this.
#define OFFSET_MICROS 850

int main(){
    static uint16_t eeMLX90640[832];
    float emissivity = 1;
    uint16_t frame[834];
    static float image[768];
    static float mlx90640To[768];
    float eTa;
    static uint16_t data[768*sizeof(float)];

    auto frame_time = std::chrono::microseconds(FRAME_TIME_MICROS + OFFSET_MICROS);

    MLX90640_SetDeviceMode(MLX_I2C_ADDR, 0);
    MLX90640_SetSubPageRepeat(MLX_I2C_ADDR, 0);
    switch(FPS){
        case 1:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b001);
            break;
        case 2:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b010);
            break;
        case 4:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b011);
            break;
        case 8:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b100);
            break;
        case 16:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b101);
            break;
        case 32:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b110);
            break;
        case 64:
            MLX90640_SetRefreshRate(MLX_I2C_ADDR, 0b111);
            break;
        default:
            printf("Unsupported framerate: %d", FPS);
            return 1;
    }
    MLX90640_SetChessMode(MLX_I2C_ADDR);

    paramsMLX90640 mlx90640;
    MLX90640_DumpEE(MLX_I2C_ADDR, eeMLX90640);
    MLX90640_ExtractParameters(eeMLX90640, &mlx90640);

    while (1){
        auto start = std::chrono::system_clock::now();
        MLX90640_GetFrameData(MLX_I2C_ADDR, frame);
        MLX90640_InterpolateOutliers(frame, eeMLX90640);

        eTa = MLX90640_GetTa(frame, &mlx90640);
        MLX90640_CalculateTo(frame, &mlx90640, emissivity, eTa, mlx90640To);

        // 32 rows x 24 cols, so the displayed image is rotated relative to the
        // sensor; the Mat is backed by the pre-allocated `data` buffer above
        Mat IR_mat(32, 24, CV_32FC1, data);

        for(int y = 0; y < 24; y++){
            for(int x = 0; x < 32; x++){
                float val = mlx90640To[32 * (23-y) + x];
                IR_mat.at<float>(x,y) = val;
            }
        }

        // Normalize the mat
        Mat normal_mat;
        normalize(IR_mat, normal_mat, 0,1.0, NORM_MINMAX, CV_32FC1);

        // Convert Mat to CV_U8 to use applyColorMap
        double minVal, maxVal;
        minMaxLoc(normal_mat, &minVal, &maxVal);
        Mat u8_mat;
        normal_mat.convertTo(u8_mat, CV_8U, 255.0/(maxVal - minVal), -minVal);

        // Resize the mat (the interpolation flag is the sixth argument of resize())
        Mat size_mat;
        resize(u8_mat, size_mat, Size(240,320), 0, 0, INTER_CUBIC);

        // Apply false color
        Mat falsecolor_mat;
        applyColorMap(size_mat, falsecolor_mat, COLORMAP_JET);

        // Display stream in window
        namedWindow("IR Camera Window");
        imshow("IR Camera Window", falsecolor_mat);
        waitKey(1);

        auto end = std::chrono::system_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
        std::this_thread::sleep_for(std::chrono::microseconds(frame_time - elapsed));
    }


    return 0;
}
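For anyone who wants to trim the conversion chain: the per-pixel copy and the intermediate Mats could probably be collapsed along these lines. This is an untested sketch; it wraps the temperature array directly (no copy) and produces the natural 24x32 orientation rather than the rotated one above:

// Untested sketch of a shorter conversion chain inside the while loop
Mat ir_raw(24, 32, CV_32FC1, mlx90640To);                  // wrap the array, no copy

Mat ir_flipped;
flip(ir_raw, ir_flipped, 0);                               // sensor rows are stored bottom-up

Mat ir_u8;
normalize(ir_flipped, ir_u8, 0, 255, NORM_MINMAX, CV_8U);  // scale and convert in one call

Mat ir_big, ir_colour;
resize(ir_u8, ir_big, Size(320, 240), 0, 0, INTER_CUBIC);
applyColorMap(ir_big, ir_colour, COLORMAP_JET);

imshow("IR Camera Window", ir_colour);
waitKey(1);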

[Image: MLX90640 false color thermal image in OpenCV]

Comments

  • Not sure I understand why you want to use v4l2, gstreamer and OpenCV's videoreader at all? Why not have a main loop that just reads an i2c frame into a Mat and process that? – Mark Setchell Dec 27 '18 at 20:46
  • I am unfamiliar with a Mat and did not know it was a possibility. I do not know how to read the I2C frame into a Mat, but I found [this guide](https://docs.opencv.org/2.4/doc/tutorials/core/mat_the_basic_image_container/mat_the_basic_image_container.html) from OpenCV. I will try to implement it. – lukecv Dec 27 '18 at 21:08
  • I would base my code on the last chunk of code you show. Remove the `fb_init()` stuff and in its place create a Mat. Decide whether you want to hold a float, uint16 or uint8 for each of the 32x24 pixels and create it with CV_32F or CV_8UC1 or whatever. Inside the double `for` loop, replace `put_pixel_false_colour()` with `YourMat[ x,y] = val` – Mark Setchell Dec 27 '18 at 21:21
  • I tried to make the code replacements like you suggested but I cannot get it to compile. I updated the question to show the modified code. – lukecv Dec 28 '18 at 02:56
  • `val` is a 4-byte `float` - you can't store that in CV_8UC1 which is an unsigned single-byte integer. Drop the IMAGE_SCALE stuff altogether. Either assign individual pixels to the Mat with square brackets (`[]`) or create the Mat from the float array like this https://stackoverflow.com/a/23722703/2836621 – Mark Setchell Dec 28 '18 at 08:46
  • Thank you. Now I only have one compile error but I have not been able to figure it out after many attempts. I keep getting the same error. I posted my current code in Update #2. – lukecv Dec 28 '18 at 16:17
  • Sorry, I was away from a computer and have been playing with too many languages and confused myself. Try `test_mat.at(y, x) = val;` – Mark Setchell Dec 28 '18 at 16:30
  • Now the compile error is gone but is replaced with a list of errors. – lukecv Dec 28 '18 at 16:41
  • You need to specify a bunch of flags to compile and link against OpenCV. What does the following command output? `pkg-config --libs opencv` – Mark Setchell Dec 28 '18 at 16:55
  • ```Package opencv was not found in the pkg-config search path. Perhaps you should add the directory containing `opencv.pc' to the PKG_CONFIG_PATH environment variable No package 'opencv' found``` I installed OpenCV 4 in a virtual environment according to [this guide](https://www.pyimagesearch.com/2018/09/26/install-opencv-4-on-your-raspberry-pi/) but I get the same error even if I am in the `workon cv` environment. I will try to add the package path to the `PKG_CONFIG_PATH` I know OpenCV is installed since I can run commands with it in Python, just having trouble with C++. – lukecv Dec 28 '18 at 17:08
  • See if you can find a file called `opencv.pc` like this `find / -name "opencv.pc" 2>/dev/null` – Mark Setchell Dec 28 '18 at 17:12
  • When you find that file, the following command will tell you the compiler and linker flags to use `pkg-config --libs --cflags "THATFILE"`. So your compilation command will become `g++ -std=c++11 $(pkg-config --libs --cflags "THATFILE") ...existing stuff` – Mark Setchell Dec 28 '18 at 17:22
  • Since I could not find the file `opencv.pc` I am going to reflash Raspbian and then recompile and install OpenCV without the virtual environment to see if the packages are installed right. – lukecv Dec 28 '18 at 17:58
  • I reinstalled Raspbian and recompiled OpenCV 4. It still did not have the file `opencv.pc`. After additional research, [this link](https://stackoverflow.com/questions/52489736/how-to-add-a-directory-to-the-pkg-config-path-environment-variable) has the answer: after running the command `sudo apt install libopencv-dev`, I can find `opencv.pc` at `/usr/lib/arm-linux-gnueabihf/pkgconfig/opencv.pc`, right where it is supposed to be. – lukecv Dec 28 '18 at 21:43
  • I finally got the program to compile by properly editing the makefile. Now I need to figure out how to do anything with the Mat information from the sensor. First, I am just trying to display the information to confirm that I am actually collecting the data from the sensor but have been unable to do so. I put the updated code in Update #4. – lukecv Dec 28 '18 at 23:23
  • You need a `waitKey()` after `imshow()` at least. Also, try adding a `normalize()` before displaying to ensure your data aren't too dark or light. – Mark Setchell Dec 28 '18 at 23:41
  • Thank you. Unbelievable, when I added a `waitKey(0);` after `imshow` I got a little window with all black pixels. I tried many times to add a `normalize()` but was not able to get it to compile. Where would the normalize go? What goes in the `()`? I need to study the syntax. – lukecv Dec 29 '18 at 00:02
  • You probably want `normalize(yourInputArray, NormalizedOutputArrayForDisplay, 0, 1.0, NORM_MINMAX, CV_32FC1)` – Mark Setchell Dec 29 '18 at 00:08
  • Thank you. I was able to normalize the output as well as convert it to CV_U8C1 to make a false color map. Could not have done it without your help. – lukecv Dec 29 '18 at 18:06
