
I have a three-dimensional cell array that holds images (e.g. `images = cell(10,4,5)`), and the cell blocks hold images of different sizes. The exact sizes are not too important for what I'm trying to achieve. I would like to know if there is an efficient way to compute the sharpness of each of these cell blocks (total cell blocks = 10*4*5 = 200). I need to compute the sharpness of each block using the function shown below.

If it matters:

  • 40 cell blocks contain images of size 240 × 320
  • 40 cell blocks contain images of size 120 × 160
  • 40 cell blocks contain images of size 60 × 80
  • 40 cell blocks contain images of size 30 × 40
  • 40 cell blocks contain images of size 15 × 20

which totals 200 cell blocks.

  %% Sharpness Estimation From Image Gradients
  % Estimate sharpness using the gradient magnitude:
  % the sum of all gradient norms divided by the number of pixels gives us
  % the sharpness metric.
  function [sharpness] = get_sharpness(G)
     [Gx, Gy] = gradient(double(G));      % image gradients along x and y
     S = sqrt(Gx.*Gx + Gy.*Gy);           % per-pixel gradient magnitude
     sharpness = sum(sum(S))./(480*640);  % fixed normalization by 480*640 pixels
  end
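
Since the comment says "number of pixels", here is also a variant normalized by each image's own pixel count instead of the fixed 480*640, in case that is what I actually want (the name `get_sharpness_per_pixel` is just for illustration):

  % Variant of the metric normalized by each image's own pixel count
  % (numel) instead of the fixed 480*640 constant.
  function [sharpness] = get_sharpness_per_pixel(G)
     [Gx, Gy] = gradient(double(G));
     S = sqrt(Gx.*Gx + Gy.*Gy);
     sharpness = sum(S(:)) ./ numel(G);
  end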

Currently I am doing the following:

  % Preallocate and store one sharpness value per cell block
  sharpness = zeros(10, 4, 5);
  for i = 1 : 10
     for j = 1 : 4
        for k = 1 : 5
           sharpness(i,j,k) = get_sharpness(images{i,j,k});
        end
     end
  end

The sharpness function isn't anything fancy; I just have a lot of data, so it takes a long time to compute everything.

Currently I am using the nested for loop above, which iterates through each cell block. I hope someone can help me find a better solution.
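
For example, I have wondered whether replacing the loop with `cellfun` would help (a minimal sketch; I have not measured whether it is actually faster than the loop):

  % Apply get_sharpness to every cell block in one call; the result is a
  % 10x4x5 numeric array with one sharpness value per block.
  sharpness = cellfun(@get_sharpness, images);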

(P.S. This is my first time asking a question, so if anything is unclear please ask. THANK YOU)

  • Since the sharpness of each block is calculated independently, you can consider using `parfor` for this embarrassingly parallel problem. – edwinksl Jun 22 '16 at 06:20
  • In case the matrix `S` is a 2D matrix, you may get some speedup using `sum(S(:))./(480*640)`. It is just a micro-optimization, but it sounds as if every improvement is appreciated here. – patrik Jun 22 '16 at 06:46
  • Also, if you are interested in terminology, the thing you call sharpness is the total variation norm. But yeah, it is slow. If you have a CUDA-enabled GPU, GPU arrays will speed this up very significantly; otherwise, it's just slow. – Ander Biguri Jun 22 '16 at 09:19
  • I will say more: sharpness is a very bad name for this "feature". Small but uniformly distributed noise over the image will give you a higher sharpness than a HUGE edge change at a single point. To define "sharpness" see: https://stackoverflow.com/questions/7765810/is-there-a-way-to-detect-if-an-image-is-blurry/7767755#7767755 – Ander Biguri Jun 22 '16 at 09:23
  • Cells are also slow; you should try separating your batches of images into individual multi-dimensional arrays (joined along a new final dimension) for storage, and indexing those. It might not help, but it also might. – Andras Deak -- Слава Україні Jun 22 '16 at 12:21
  • FYI: this isn't the entire code; I am doing other things as well, and I only posted the code I need optimized. It may be a poor choice of words in terms of the function naming; however, in my case this function computes the sharpness value after the images have gone through a DWT process. – J Patel Jun 23 '16 at 07:04
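
A minimal sketch of the `parfor` suggestion above (this assumes the Parallel Computing Toolbox is available; linear indexing keeps the slicing simple and collects the results into an array the same size as `images`):

  % Each block's sharpness is independent, so the iterations can run on
  % parallel workers. Requires the Parallel Computing Toolbox.
  sharpness = zeros(size(images));
  parfor n = 1 : numel(images)
     sharpness(n) = get_sharpness(images{n});
  end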
