
I'm trying to extract all three views (Axial, Sagittal and Coronal) from a CTA in DICOM format, using the SimpleITK library.

I can correctly read the series from a given directory:

    ...
    import SimpleITK as sitk
    ...
    reader = sitk.ImageSeriesReader()
    dicom_names = reader.GetGDCMSeriesFileNames(input_dir)
    reader.SetFileNames(dicom_names)
    # Execute the reader
    image = reader.Execute()
    ...

and then, using numpy arrays as stated in this question, I'm able to extract and save the 3 views.

    ...
    image_array = sitk.GetArrayFromImage(image)
    ...
    for i in range(image_array.shape[0]):
        output_file_name = axial_out_dir + 'axial_' + str(i) + '.png'
        logging.debug('Saving image to ' + output_file_name)
        imageio.imwrite(output_file_name, convert_img(image_array[i, :, :], axial_min, axial_max), format='png')
    ...

The other two views are saved from image_array[:, i, :] and image_array[:, :, i], while convert_img(..) is a function that only converts the data type, so it does not alter any shape.
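
For reference, this is roughly what the other two loops look like (a sketch only: coronal_out_dir, sagittal_out_dir and the corresponding min/max values are assumed to mirror the axial case):

    ...
    # coronal slices: fix the row (y) index of the (z, y, x) array
    for i in range(image_array.shape[1]):
        output_file_name = coronal_out_dir + 'coronal_' + str(i) + '.png'
        imageio.imwrite(output_file_name, convert_img(image_array[:, i, :], coronal_min, coronal_max), format='png')
    # sagittal slices: fix the column (x) index
    for i in range(image_array.shape[2]):
        output_file_name = sagittal_out_dir + 'sagittal_' + str(i) + '.png'
        imageio.imwrite(output_file_name, convert_img(image_array[:, :, i], sagittal_min, sagittal_max), format='png')
    ...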

However, the coronal and sagittal views come out stretched, rotated and with wide black bands (in some slices they are very wide).

Here's the screenshot from Slicer3D:

Slicer3D screenshot

while this is the output of my code:

Axial

Sagittal

Coronal

The image shape is 512x512x1723, which results in the axial PNGs being 512x512 pixels and the coronal and sagittal ones being 512x1723, so that part seems correct.

Should I try using the PermuteAxes filter? The problem is that I was not able to find any documentation regarding its use in Python (nor in other languages, since the documentation page returns a 404).

Is there also a way to improve the contrast? I have used the AdaptiveHistogramEqualization filter from SimpleITK, but the result is much worse than the Slicer3D visualization, besides being very slow.

Any help is appreciated, thank you!

  • The problem is that you don't account for the voxel size, which is different in x/y and in z direction. You get the size in x/y from `Pixel Spacing`, and the z size from `Spacing Between Slices`, if available, otherwise from the difference in `Image Position Patient` between adjacent slices. – MrBean Bremen Dec 29 '20 at 18:30
  • Thank you for your answer. I will try to work something out from your hint. Can you kindly point me to some references? I'm new to processing DICOM images and it seems fairly complicated! – Ghesio Dec 29 '20 at 18:49
  • Hm, the reference is the [DICOM standard](http://dicom.nema.org/medical/dicom/current/output/html/part03.html), but it is large and needs some getting used to. You can check [this related question](https://stackoverflow.com/questions/65389506/how-to-calculate-a-voxel-size) about voxel size for a start. – MrBean Bremen Dec 29 '20 at 19:12
  • Thanks again, I'll start from there. I'm also going to try other libraries, because the SimpleITK documentation is pretty messy... – Ghesio Dec 29 '20 at 20:03

3 Answers


When you convert your SimpleITK image to a NumPy array, all the pixel spacing information is lost (as the comments above suggest). If you do everything in SimpleITK, it retains that spacing information.

It's very easy to extract slices in X, Y and Z from an image in SimpleITK using Python's array slicing:

    import SimpleITK as sitk

    # a blank test image
    img = sitk.Image([100, 101, 102], sitk.sitkUInt8)

    # non-uniform spacing, for illustration
    img.SetSpacing([1.0, 1.1, 1.2])

    # select the 42nd Z slice
    zimg = img[:, :, 42]

    # select the 0th X slice
    ximg = img[0, :, :]

    # select the 100th Y slice
    yimg = img[:, 100, :]

    # print the spacing to show it's retained
    print(yimg.GetSpacing())
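
As a side note, writing such a slice with sitk.WriteImage to a medical image format such as MetaImage keeps that spacing (the file name below is just an example):

    # write the slice to a format that preserves the spacing (example file name)
    sitk.WriteImage(yimg, 'y_slice.mha')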
Dave Chen
  • Thank you, as I can see from your example the spacing gets preserved, but that won't solve my problem. I edited the code without using numpy - i.e. for the coronal view `sitk.WriteImage(sitk.Cast(image[:, i, :], sitk.sitkUInt8), output_file_name)` - but got the same stretch, as well as very bad quality in the file (due, I think, to the uint8 cast) – Ghesio Dec 31 '20 at 11:34

Answering my own question in case someone needs it.

Since I need to use the slices in a deep learning framework and for data augmentation, I need them resampled to a new spacing of (1.0, 1.0, 1.0).

I solved it by using this function:

    import numpy as np
    import SimpleITK as sitk


    def resample_image(itk_image, out_spacing=(1.0, 1.0, 1.0)):
        """
        Resample itk_image to the new out_spacing
        :param itk_image: the input image
        :param out_spacing: the desired spacing
        :return: the resampled image
        """
        # get original spacing and size
        original_spacing = itk_image.GetSpacing()
        original_size = itk_image.GetSize()
        # calculate the new size, keeping the physical extent of the image
        out_size = [
            int(np.round(original_size[0] * (original_spacing[0] / out_spacing[0]))),
            int(np.round(original_size[1] * (original_spacing[1] / out_spacing[1]))),
            int(np.round(original_size[2] * (original_spacing[2] / out_spacing[2])))
        ]
        # instantiate the resample filter with its properties and execute it
        resample = sitk.ResampleImageFilter()
        resample.SetOutputSpacing(out_spacing)
        resample.SetSize(out_size)
        resample.SetOutputDirection(itk_image.GetDirection())
        resample.SetOutputOrigin(itk_image.GetOrigin())
        resample.SetTransform(sitk.Transform())
        resample.SetDefaultPixelValue(itk_image.GetPixelIDValue())
        resample.SetInterpolator(sitk.sitkNearestNeighbor)
        return resample.Execute(itk_image)

and then saving by using numpy arrays as stated in the original question.
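
For completeness, a minimal sketch of how this fits in with the reader code from the question (the variable names just mirror those above):

    ...
    # resample the whole series to isotropic 1 mm spacing before extracting slices
    image = resample_image(image)
    image_array = sitk.GetArrayFromImage(image)
    ...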

Ghesio

I might be late, but you could use TorchIO for this. I think a good solution for your case is to use the CLI tools that are installed with TorchIO:

    $ tiohd your_image.nii.gz
    ScalarImage(shape: (1, 512, 512, 1723); spacing: (0.50, 0.50, 1.00); orientation: RAS+; memory: 1.7 GiB; dtype: torch.ShortTensor)
    $ torchio-transform your_image.nii.gz Resample one_iso.nii.gz
    $ tiohd one_iso.nii.gz
    ScalarImage(shape: (1, 256, 256, 1723); spacing: (1.00, 1.00, 1.00); orientation: RAS+; memory: 430.8 MiB; dtype: torch.ShortTensor)

This works because 1 mm is the default target resolution for the Resample transform.

You can also manipulate your images using the normal Python interface for TorchIO, of course.
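
A minimal sketch of that Python interface, assuming the same file names as in the CLI example above:

    import torchio as tio

    # load the image and resample it to 1 mm isotropic spacing
    image = tio.ScalarImage('your_image.nii.gz')
    resampled = tio.Resample(1)(image)
    resampled.save('one_iso.nii.gz')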

Disclaimer: I'm the main developer of TorchIO.

fepegar
  • This seems nice, thank you for taking your time to answer my already-solved question. I found the examples in the notebooks using a 3D U-Net interesting; I may also try them as an approach – Ghesio Feb 08 '21 at 17:25