
I am going to split this question into three parts.

First, I've been given this problem and I don't know where to start. If you have solved a related problem, could you give me some hints and keywords to help me do more research?

I have done some research on my own.

So here are some 2D chest CT scans (sorry, due to the reputation rule I can't embed images directly):

Image1

Image2

Image3

All photos are taken from the same angle. So I think I can simply read each photo into a vector of pixels, do some thresholding so that all black and black-ish pixels become non-colored (empty) pixels. Next, I'll create a vector called vector_of_photo holding those per-image vectors; the index of each vector in vector_of_photo then becomes the Z index. Can I render a 3D image from that box of pixels?
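A minimal sketch of that stacking-plus-thresholding idea, using NumPy arrays instead of raw vectors (the function name, threshold value, and synthetic slices here are made up for illustration):

```python
import numpy as np

def stack_slices(slices, threshold=30):
    """slices: list of 2D uint8 grayscale arrays, one per CT image (same shape).
    Returns a 3D volume where dark (black-ish) pixels are zeroed out."""
    volume = np.stack(slices, axis=0)  # axis 0 becomes the Z index
    volume[volume < threshold] = 0     # treat black-ish pixels as empty voxels
    return volume

# Tiny synthetic example: three 4x4 "slices" of increasing brightness
slices = [np.full((4, 4), v, dtype=np.uint8) for v in (10, 100, 200)]
vol = stack_slices(slices)
print(vol.shape)     # (3, 4, 4) -> Z, Y, X
print(vol[0].max())  # 0: the darkest slice fell below the threshold
```

In a real pipeline you would load the actual images (e.g. with an imaging library) instead of synthesizing them, and the threshold would need tuning per dataset.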

Second, I'm having trouble understanding the raycasting algorithm.

I think the idea is: once I have a box of voxels, then every time I rotate the box, straight rays are cast from the camera at that angle into the box; each ray stops at the first colored voxel it hits and renders that voxel (more specifically, copies the pixel to the corresponding location on the image plane).

Did I understand it correctly?
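That understanding can be sketched as a single ray march through a voxel volume: step along the ray and stop at the first non-empty voxel. This is only an illustrative toy (the function name, step size, and test volume are made up), not an efficient renderer:

```python
import numpy as np

def cast_ray(volume, origin, direction, step=0.5, max_t=100.0):
    """March along a ray through a 3D voxel grid.
    Returns the value of the first non-empty voxel hit, or None on a miss."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)  # unit-length step direction
    t = 0.0
    while t < max_t:
        p = origin + t * direction
        idx = tuple(np.floor(p).astype(int))
        if all(0 <= i < s for i, s in zip(idx, volume.shape)):
            if volume[idx] > 0:           # first colored voxel stops the ray
                return volume[idx]
        t += step
    return None

vol = np.zeros((8, 8, 8), dtype=np.uint8)
vol[4, 4, 4] = 255  # a single bright voxel in the middle
print(cast_ray(vol, origin=(4.5, 4.5, 0.0), direction=(0, 0, 1)))  # 255
```

A real renderer would cast one such ray per screen pixel and usually run on the GPU (e.g. in a GLSL fragment shader sampling a 3D texture, as Spektre's comments suggest).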

Finally, OpenGL/C++ is just the option I'm considering for solving this problem, and I'm not sure whether it's a good idea, so please give me some hints about the programming language, libraries, or modules I should look at.

Spektre
Ade
  • In general, it's easier to interpret questions with question marks (?). Though you appear to have three potential questions, you only have a single question mark. Perhaps you could phrase your questions more clearly? – Richard Aug 20 '21 at 04:59
  • [Marching cubes](https://en.wikipedia.org/wiki/Marching_cubes), perhaps? – Neil Aug 20 '21 at 05:06
  • Sorry @Neil My project is about using raycasting algorithm to reconstructing 3d – Ade Aug 20 '21 at 05:20
  • So you have the images, taken at regular isosurfaces, and you want to build a model? And then you view that model using raycasting? – Neil Aug 20 '21 at 05:22
  • See the [archive](https://web.archive.org/web/20180618064202/https://stackoverflow.com/questions/48090782/how-to-best-write-a-voxel-engine-in-c-with-performance-in-mind/48092685#48092685) of this deleted QA https://stackoverflow.com/a/48092685. You can feed the CT scan directly into a 3D texture and use GLSL shaders to back-raytrace/raycast. However, the exact form of the raycast/raytrace rendering depends on what you want to achieve: there is transparency, SSS (sub-surface scattering), and other techniques for rendering 3D voxel objects (not just a boundary representation). – Spektre Aug 20 '21 at 07:34
  • For more on that see https://stackoverflow.com/a/45251335/2521214 ... to make it look more realistic you can compute normals from neighboring voxels to allow lighting (emphasizes shape). – Spektre Aug 20 '21 at 07:38

1 Answer


I happen to be working on the same problem in my spare time. Haha :)

Here is one approach to your problem:

  1. Load the images into your application, such that you get the 3D volumetric dataset that you describe
  2. Remove all points that don't fit within some range of values (e.g. 0.4/1.0 to 0.6/1.0 brightness). You may need to apply preprocessing and filtering.
  3. Fit a mesh to the resulting point cloud with open-source software. Here is a good blog post about that https://towardsdatascience.com/5-step-guide-to-generate-3d-meshes-from-point-clouds-with-python-36bad397d8ba
  4. Take the resulting mesh (probably an STL file) and visualize it in any software you want (Blender 3D, Unity 3D, Cinema 4D, a custom OpenGL application), anything really.
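Step 2 of the pipeline above can be sketched like this: select only the voxels whose normalized brightness falls inside the band, and keep their coordinates as a point cloud. The function name and the brightness band are illustrative assumptions, not part of any library:

```python
import numpy as np

def volume_to_point_cloud(volume, lo=0.4, hi=0.6):
    """Keep voxels whose normalized brightness lies within [lo, hi].
    Returns an (N, 3) array of (z, y, x) voxel coordinates."""
    norm = volume.astype(float) / 255.0      # normalize uint8 to [0, 1]
    mask = (norm >= lo) & (norm <= hi)       # brightness band of interest
    return np.argwhere(mask)

vol = np.zeros((4, 4, 4), dtype=np.uint8)
vol[1, 2, 3] = 128   # ~0.50 normalized -> inside the band
vol[0, 0, 0] = 255   # 1.00 normalized -> outside the band
pts = volume_to_point_cloud(vol)
print(pts)  # [[1 2 3]]
```

The resulting coordinate array is the kind of point cloud that mesh-fitting tools (such as the Python tools covered in the linked blog post) take as input for step 3.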

My own approach to this problem is very similar to the one you suggest in your question, and I have already made some headway. Therefore, I thought it would be good to suggest another route.

NOTE: Please be aware that what you are working on is not a trivial problem. It's a large project, and there are many commercial companies that have put years into doing just this. It's a great project for learning OpenGL, rendering, and other concepts. It's perfectly doable, but you may be looking at several months of work and lots of trial and error. Good luck!

It's not often that two people happen to work on the same problem, so if you want to discuss further, feel free to contact me over LinkedIn and/or post a comment below. www.linkedin.com/in/michael-sohnen-a2454b1b2

Michael Sohnen
  • Hello, I have finally worked my way through these steps (or at least gotten to know them). I found out that creating a mesh looks like marching cubes work, not the raycasting algorithm. – Ade Sep 27 '21 at 17:12
  • Here is a post I found [link](https://www.raddq.com/dicom-processing-segmentation-visualization-in-python/); it walks through all 4 steps above. – Ade Sep 27 '21 at 17:15
  • @Ade Thank you for sharing that post. I have since been following a similar approach to that article. Good luck and be sure to link a new question if you want some python debugging help. – Michael Sohnen Sep 29 '21 at 05:38