
For a university project I need to compare two images I have taken and find the differences between them.

To be precise I monitor a 3d printing process where I take a picture after each printed layer. Afterwards I need to find the outlines of the newly printed part. The pictures look like this (left layer X, right layer X+1):

I have managed to extract the layer differences using structural similarity from scikit-image, following this question. The result looks like this: Differences between layer X and X+1

The recognized differences match the printed layer nearly 1:1 and seem to be a good starting point for drawing the contours. However, this is where I am currently stuck: I have tried several combinations of thresholding, blurring, findContours, Sobel, and Canny operations, but I am unable to produce an accurate outline of the newly printed layer.

Edit: This is what I am looking for: Image with outline

Edit2: I have uploaded the images in the original file size and format here:

Layer X Layer X+1 Difference between the layers

Are there any operations that I haven't tried yet/do not know about? Or is there a combination of operations that could help in my case?

Any help on how to solve this problem would be greatly appreciated!

  • what's "the outlines of the newly printed part" in detail? e.g. in the grid in the left of the image, what's your desired contour or outline? One outline around the whole region? One outline around every stripe? A pixel mask (or heatmap)? Can you prepare a manually drawn example for your current image pair? – Micka Nov 29 '21 at 12:16
  • The outline should contain the individual stripes. I have edited my post to add an example of what I am looking for. – frage12358 Nov 29 '21 at 12:29
  • illumination is key. you need lighting in such a way that the edges of the top layer become strongly visible. consider a laser module that shines a fan/plane of light into the scene. align with top layer. might get you further. -- also, I don't see even a theoretical approach that would work. this just doesn't work. I know everyone's yeeting undergrads at this, but none of the yeeters have any ideas how any of that is supposed to actually work (afaics). computed tomography is just not possible with a simple camera. 3d prints don't have the fine texture required for SFM either. – Christoph Rackwitz Nov 29 '21 at 13:05
  • can you create another pair of images without jpeg artifacts (png, pgm or similar)? What kind of camera do you use? – Micka Nov 29 '21 at 13:32
  • @Micka I have edited the post again to add links to the original (png) files. I am using an IDS U3-3800CP-M-GL camera. [Website with camera info](https://en.ids-imaging.com/store/products/cameras/u3-3800cp.html) – frage12358 Nov 29 '21 at 13:58
  • @ChristophRackwitz Unfortunately I cannot change the lighting of the object due to some other process parameters that are out of my control. And would you mind briefly expanding on your comment regarding computed tomography and SFM? Shouldn't it, at least in theory, be possible to use the two images to calculate the difference/outline of the new layer? – frage12358 Nov 29 '21 at 14:04
  • not in this case. imagine printing a solid cube. the big flat uniformly colored area at layer n looks _nearly identical_ to the same at layer n+1. only at the edges can you hope to see change. even just in principle, this approach can't work. I don't see a way. if your advisor thinks it's possible, they need to argue their case. -- anyway, you're new to computer vision so you can't even "fill in the blanks" of knowledge required to grasp the situation. your advisor probably isn't doing any better. chairs of computer graphics and computer vision are sometimes not even talking to each other. – Christoph Rackwitz Nov 29 '21 at 14:12
  • CT and SFM are approaches to derive 3D information (voxels and point clouds respectively) from 2D pictures. CT assumes that you can look through the object. if you can't look into it, you can only get a surface approximation (SFM). SFM requires "features" (texture) to latch onto, that the camera can see. it's stereo vision but extended. your eyes can't tell how far a flat textureless wall is. they require texture. layers of extruded plastic have no fine texture. for isolated extruded paths, yes, you could make out something... but that's not generally true of the whole layer. – Christoph Rackwitz Nov 29 '21 at 14:17
  • _maybe_, you could dust every layer, to get fine texture, and then scan it at extremely high resolution (literally flatbed scanner or pan the camera/microscope and photograph tiles). that takes time though. -- if the goal is simply to monitor a print, you could throw AI at it. spaghetti should be obvious to a DNN. sheared layers (lost steps) may be detectable... but intentional geometry may look similar. – Christoph Rackwitz Nov 29 '21 at 14:20
  • imho it depends a lot on the desired quality of the final result. Adding a layer by such a printer will introduce some difference because the layers aren't perfect and edges are changing, so simple frame differencing (cv::absdiff > 5) gives some nice results which could already be used as a heatmap. But results won't be perfect, so if the goal is to find small areas where the printer didn't place material (or placed extra material), then this won't work. – Micka Nov 29 '21 at 16:52

0 Answers