
I have a set of images and X-ray data generated from coupled scanning electron microscopy and energy dispersive spectroscopy. Here's my problem:

I imaged transects of a rock surface like this (purple box outlines transect region):

[image: SEM overview of the rock surface with the transect region outlined by a purple box]

I wanted really high resolution, so I captured each transect as 7 images at 3000X magnification and stitched them together with the Photomerge script in Photoshop. Here's an example of an individual image:

[image: one of the individual SEM images taken at 3000X magnification]

And its position in the photomerged image transect:

[image: the individual image's position within the photomerged transect]

At each of these 7 locations, I also collected X-ray data that generates an element map for each element detected and writes each map to a TIFF. I also want to stitch together the element map TIFFs so I can overlay them on the merged transect image of the rock. This is the result I want:

[image: stitched element map overlaid on the merged transect image]

The problem is that the element maps don't have enough features in them to be able to stitch them together with photomerge. It's basically binary - if an element is detected, the pixel is some color (like red for iron or yellow for sulfur in my example images), or black if the element is not detected. You can see there are large portions of the element maps that are mostly black.

I now have ~20 transects x 7 images each x ~10 elements, which results in ~1400 images that need to be put together, hence the need for automation.

My idea was to stitch together the rock images with Photomerge. The output of Photomerge is a smart object where each image is a layer. Then, I would use a script to get the top-left corner coordinates, the width, and the height of each of the 7 images in the photomerged image object. I would then place the corresponding element maps and assign them these same properties to generate the "merged" element maps to overlay on the image. I tried to work on this myself, but I'm not proficient in JavaScript and couldn't wrap my head around the Photoshop API.
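For reference, here is a minimal, untested ExtendScript sketch of the kind of layer-coordinate script described above. It assumes the photomerged result is open as the active document with each source image on its own layer (if the images are wrapped inside a Smart Object, open that object for editing first); layer.bounds reports the bounding box of each layer's pixels.

// Untested sketch: log the top-left corner, width, and height of every layer (in pixels)
// of the active Photoshop document. Run via File > Scripts > Browse...
var originalUnits = app.preferences.rulerUnits
app.preferences.rulerUnits = Units.PIXELS
var doc = app.activeDocument
var report = ""
for (var i = 0; i < doc.layers.length; i++) {
    var layer = doc.layers[i]
    var b = layer.bounds // [left, top, right, bottom] as pixel UnitValues
    var x = b[0].as("px")
    var y = b[1].as("px")
    var w = b[2].as("px") - x
    var h = b[3].as("px") - y
    report += layer.name + ": x=" + x + ", y=" + y + ", w=" + w + ", h=" + h + "\n"
}
app.preferences.rulerUnits = originalUnits
alert(report) // these x/y/w/h values can then be reapplied to the matching element-map TIFFs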

I uploaded an example dataset to GitHub here: https://github.com/CaitlinCasar/dataStitcher. The 7 transect positions are from left to right: -2, -1, 0, 1, 2, 3, 4. There are images of the rock and subdirectories with element data for each position.

Caitlin
  • Hi @Caitlin, I don't quite understand what exactly you need as the result? A Photoshop file with all layers? Or a set of exported images (with each image positioned at the correct location)? – mdomino Mar 13 '20 at 19:07
  • @Caitlin LGTM!! – Nosniw Mar 13 '20 at 19:10
  • Hi @mdomino, thanks for your comment and sorry I didn't make that clear! I would like a Photoshop file with all the layers, with each image positioned at the correct location. I've been manually typing in the coordinates etc. for each element image to match the coordinates of the rock images and then making groups from the layers, i.e. I have a group of iron (Fe) images, a group of sulfur (S) images, and a group of rock images. Let me know if I can clarify further! – Caitlin Mar 13 '20 at 19:18
  • Ok, just wanted to ask, because someone might know of a way to achieve this without Photoshop. But if you need all the files in Photoshop at the end, it has to be done in Photoshop, of course. Having written a lot of ExtendScript scripts in the past, I have to say that what you are asking is probably too big a task to be solved here on Stack Overflow. You are basically asking for an entire script, which would usually mean hiring someone, as you will need to load files by name, arrange them on the correct layers, position them by coordinates, and so on. This is quite involved. – mdomino Mar 13 '20 at 19:58
  • @mdomino Ah, I was hoping for a Photoshop file output for convenience, but exported images would definitely work as well! – Caitlin Mar 13 '20 at 20:01
  • Here's another blog post about StackOverflow that, sadly, compares it to the [Stanford prison experiment](https://yihui.org/en/2017/12/so-bounties/). @Caitlin it sounds like you're looking specifically and only for Photoshop script solutions to this problem? – Codebling Mar 17 '20 at 02:05
  • Thank you @Codebling! The reason I want to use Photoshop is its photomerge function, because it can merge my images into a panorama. All I really need is the XY coordinates from the layers produced by that function, as in: https://community.adobe.com/t5/photoshop/get-x-y-position-and-width-height-of-the-transform-box-of-a-layer-not-chopped-off-bounds/m-p/10986174?page=1#M315167 However, when I try to run that script I get an error, which you can see in my comment on that link. Otherwise, I have a workaround in R for the rest! – Caitlin Mar 17 '20 at 02:32
  • Perhaps @mdomino knows why this Photoshop script throws an error? I put the error issue in the comment at the bottom of the page here: https://community.adobe.com/t5/photoshop/get-x-y-position-and-width-height-of-the-transform-box-of-a-layer-not-chopped-off-bounds/m-p/10986174?page=1#M315167 – Caitlin Mar 17 '20 at 02:34
  • @Caitlin I understand, but if someone had another alternative method of merging the layers, would that be acceptable? – Codebling Mar 17 '20 at 03:04
  • @Codebling definitely! I tried messing around with OpenCV in Python and found a plugin for ImageJ, but the download link was dead. Apparently there's no way to do it in R; I searched the internet high and low! But I was able to reproduce the panorama in R from the images by converting them to rasters and then merging the rasters with manually assigned coordinates that I got from Photoshop. It would be great to get away from Photoshop entirely! – Caitlin Mar 17 '20 at 03:18
  • I added my script, which uses my example dataset, to my GitHub repo here: https://github.com/CaitlinCasar/dataStitcher – Caitlin Mar 17 '20 at 03:31
  • Have you solved your problem? – greg-tumolo Jul 10 '20 at 20:33
  • @greg-tumolo I did the panorama stitching in Photoshop using the SEM images, then got the coordinates for each image in the panorama. I then used these coordinates to stitch the corresponding element images into panoramic rasters in R, then stacked those into raster bricks. Would love to hear your ideas if you know how to stitch images into panoramas in R! – Caitlin Jul 10 '20 at 20:55
  • Yes, the coupled SEM/element images were collected from the exact same field of view. And yes, the adjacent images are shifted usually in both the X and Y directions. Due to sample drift at that magnification I was unable to shift in only the X or Y directions in most cases. – Caitlin Jul 10 '20 at 21:18
  • Oops... I tried to delete/repost my previous comment with "10" (elements) instead of "3" and lost it. You wrote "I then used these coordinates to stitch the corresponding element images into panoramic rasters in R...". So don't you already know how to stitch images into panoramas in R? – greg-tumolo Jul 10 '20 at 21:29
  • Correct, I had to do the panoramic stitching of the SEM images in Photoshop using the photomerge function. I couldn't find any packages for doing this in R, and writing that from scratch in R would have cost me too much time. – Caitlin Jul 10 '20 at 22:05
  • As I reread these comments, I noticed my answer does not satisfy what you want. I have implemented an alternative to photomerge; however, the alternative is probably much too slow. How much time does photomerge take to merge 7 SEM photos? – greg-tumolo Jul 13 '20 at 16:38
  • Photomerge on 7 SEM images takes under 1 minute. The process of stitching the corresponding rasterized element images into panoramic rasters in R and then combining them into a raster brick takes maybe 10 minutes for a given set of SEM/element images. How long did your script take to run? – Caitlin Jul 13 '20 at 16:54
  • Now it takes ~8 minutes with OVERLAP = 1.0 (as in my answer code). If you reduce OVERLAP to, say, 0.5, it will run faster; however, you will introduce an(other) failure mode: as can be seen in the example output (at the bottom of my answer), some images overlap by more than 40%! – greg-tumolo Jul 15 '20 at 21:19
  • Will test on my end ASAP! Thanks a ton for tackling this! :D – Caitlin Jul 15 '20 at 23:11
  • @DavidArenburg I don't know, but you can see my solution here: https://github.com/CaitlinCasar/dataStitcher/blob/master/dataStitchR.R – Caitlin Aug 03 '20 at 14:16
  • I hoped someone would translate my code from JavaScript to R for you. Alas! Did you test my code as it is? – greg-tumolo Aug 09 '20 at 02:13

1 Answer


I do not know Photoshop or R, so I did it in JavaScript:

const names = { // map from directory names to patterns (where "#" stands for position index) of names of images therein
 "SEM_images" : "pos# image.tif",
 "Al" : "Al Kα1 pos# map data.tif",
 "Ba" : "Ba Lα1 pos# map data.tif",
 "C"  : "C Kα1_2 pos# map data.tif",
 "Ca" : "Ca Kα1 pos# map data.tif",
 "Fe" : "Fe Kα1 pos# map data.tif",
 "Hg" : "Hg Lα1 pos# map data.tif",
 "Ir" : "Ir Lα1 pos# map data.tif",
 "K"  : "K Kα1 pos# map data.tif",
 "Mg" : "Mg Kα1_2 pos# map data.tif",
 "Mn" : "Mn Kα1 pos# map data.tif",
 "Na" : "Na Kα1_2 pos# map data.tif",
 "O"  : "O Kα1 pos# map data.tif",
 "Os" : "Os Lα1 pos# map data.tif",
 "P"  : "P Kα1 pos# map data.tif",
 "S"  : "S Kα1 pos# map data.tif",
 "Si" : "Si Kα1 pos# map data.tif",
 "Ti" : "Ti Kα1 pos# map data.tif"
}

const SCALE = 1/10 // scale of output images

const OVERLAP = 1.0 // minimum *tested* (horizontal) overlap of images relative to their width
const H_BOX = 0.1 // height of comparison box relative to height of images
const W_BOX = 0.1 // width  of comparison box relative to width  of images
const ADJUSTMENT = 0 // (vertical) adjustment of comparison box [pixels]
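// Note from the comments: with OVERLAP = 1.0, merging a 7-image transect takes roughly 8 minutes;
// lowering OVERLAP (e.g. to 0.5) runs faster but tests a narrower range of offsets and can mis-place heavily overlapping images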

/* Merge images given:
 * dataset - dataset address (URL or path) as String
 * directory - directory name for images as String; the filename pattern
 *             (where "#" stands for position index) is looked up in `names`
 * pos_min - minimum position index of images as Number
 * pos_max - maximum position index of images as Number
 */
function Merge(dataset, directory, pos_min, pos_max) {
  if (dataset[dataset.length - 1] != "/") dataset += "/"
  const images = []
  for (let pos = pos_min; pos <= pos_max; ++pos) {
    const image = images[pos - pos_min] = new Image
    image.crossOrigin = "anonymous" // allow getImageData() on the canvas when the images are served from another origin (with CORS headers)
    image.src = dataset + directory + "/" + names[directory].replace("#", pos)
  }
  merge(images, dataset, pos_min, pos_max)
}

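// Convert the image data to grayscale and return its discrete Laplacian; merge() compares
// the signs of these values to align images, so flat (featureless) regions contribute little.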
function Laplacian(imagedata) { // 5-point stencil approximation
  const data = imagedata.data
  const L = data.length/4
  const grayscale = new Float32Array(L)
  for (let i = 0; i < L; ++i) {
    const I = 4*i
    grayscale[i] = (data[I    ] + data[I + 1] + data[I + 2])/3
  }
  const Laplacian = new Float32Array(L)
  //const H = imagedata.height
  const Hm1 = imagedata.height - 1
  const W = imagedata.width
  const Wm1 = W - 1
  for (let r = 1; r < Hm1; ++r) {
    const R = r*W
    for (let c = 1; c < Wm1; ++c) {
      const i = R + c
      Laplacian[i] = grayscale[i - W] + grayscale[i + W] + grayscale[i - 1] + grayscale[i + 1] - 4*grayscale[i]
    }
  }
  for (let c = 1; c < Wm1; ++c) {
    //const i = c
    Laplacian[c] = grayscale[c + W] + grayscale[c - 1] + grayscale[c + 1] - 4*grayscale[c]
  }
  for (let c = 1; c < Wm1; ++c) {
    const i = Hm1*W + c
    Laplacian[i] = grayscale[i - W] + grayscale[i - 1] + grayscale[i + 1] - 4*grayscale[i]
  }
  for (let r = 1; r < Hm1; ++r) {
    const i = r*W
    Laplacian[i] = grayscale[i - W] + grayscale[i + W] + grayscale[i + 1] - 4*grayscale[i]
  }
  for (let r = 1; r < Hm1; ++r) {
    const i = r*W + Wm1
    Laplacian[i] = grayscale[i - W] + grayscale[i + W] + grayscale[i - 1] - 4*grayscale[i]
  }
  { // corner pixels
    const Lm1 = L - 1
    const LmW = L - W
    Laplacian[0  ] = grayscale[W      ] + grayscale[1      ] - 4*grayscale[0  ] // top left
    Laplacian[Wm1] = grayscale[Wm1 + W] + grayscale[Wm1 - 1] - 4*grayscale[Wm1] // top right
    Laplacian[LmW] = grayscale[LmW - W] + grayscale[LmW + 1] - 4*grayscale[LmW] // bottom left
    Laplacian[Lm1] = grayscale[Lm1 - W] + grayscale[Lm1 - 1] - 4*grayscale[Lm1] // bottom right
  }
  return Laplacian
}

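/* Estimate how far each image is offset from the previous one, then call Stitch().
 * For each image after the first, a comparison box (H_BOX x W_BOX of the image size,
 * vertically centered plus ADJUSTMENT) is taken near the left edge of the current image
 * and slid over the previous image; the position where the most Laplacian values agree
 * in sign is taken as the offset. OVERLAP limits which horizontal offsets are tested.
 */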
function merge(images, dataset, pos_min, pos_max) {
  for (const image of images) if (!image.complete) {
    setTimeout(merge, 1000, images, dataset, pos_min, pos_max) // wait 1000ms = 1s
    return
  }
  let Row, Col
  const Coords = [[Row = 0, Col = 0]]
  let index = 0
  let image = images[index]
  const H = image.naturalHeight
  const W = image.naturalWidth
  if (W*H == 0) return []
  const canvas = document.createElement("canvas")
  canvas.height = H
  canvas.width  = W
  const context = canvas.getContext('2d')
  context.drawImage(image, 0, 0)
  let prev = Laplacian(context.getImageData(0, 0, W, H))
  const length = images.length
  const h = Math.round(H_BOX*H)
  const Hmh = H - h
  const w = Math.round(W_BOX*W)
  const o = Math.max(Math.round((1 - OVERLAP)*W), w)
  const Wmw = W - w
  const row_offset = Math.round(Hmh/2) + ADJUSTMENT
  const offset = row_offset*W
  for (++index; index < length; ++index) {
    image = images[index]
    if (image.naturalHeight != H || image.naturalWidth != W) alert("Dimension mismatch: " + image.src)
    context.drawImage(image, 0, 0)
    const curr = Laplacian(context.getImageData(0, 0, W, H))
    let max = -1
    let row, col
    for (let r = 0; r < Hmh; ++r) {
      const R = r*W
      for (let c = o; c < Wmw; ++c) {
        let m = 0
        for (let i = 0; i < h; ++i) {
          const I = i*W
          const K = R + I + c
          const k = offset + I
          for (let j = 0; j < w; ++j) if (prev[K + j]*curr[k + j] > 0) ++m
        }
        if (m > max) {
          max = m
          row = r
          col = c
        }
      }
    }
    Coords[index] = [(Row += row - row_offset)/H, (Col += col)/W]
    prev = curr
  }
  Stitch(dataset, pos_min, pos_max, Coords)
}

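/* For every directory listed in `names`, load the images at the given positions and queue
 * them to be stitched with the offsets in Coords (fractions of image height/width relative
 * to the first image); each resulting panorama is appended to the page under a heading.
 */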
function Stitch(dataset, pos_min, pos_max, Coords) {
  if (dataset[dataset.length - 1] != "/") dataset += "/"
  document.body.appendChild(document.createElement("h1")).innerText = `${dataset} :[${pos_min},${pos_max}] @${JSON.stringify(Coords)}`
  const tasks = []
  for (const directory in names) {
    document.body.appendChild(document.createElement("h2")).innerText = directory
    const images = []
    for (let pos = pos_min; pos <= pos_max; ++pos) {
      const image = images[pos - pos_min] = new Image
      image.crossOrigin = "anonymous" // keep the stitching canvas untainted so toDataURL() works
      image.src = dataset + directory + "/" + names[directory].replace("#", pos)
    }
    const target = document.body.appendChild(document.createElement("img"))
    target.height = 0
    target.width  = 0
    tasks.push([images, target])
  }
  process(tasks, Coords)
}

const ROW = 0
const COL = 1
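// Draw the images onto one canvas at the offsets given in Coords and return the result as a data URL.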
function stitch(images, Coords) {
  let image
  let index
  for (index in images) {
    image = images[index]
    if (image.naturalHeight != 0 && image.naturalWidth != 0) break
  }
  const H = image.naturalHeight
  const W = image.naturalWidth
  const canvas = document.createElement("canvas")
  let r_min = 0
  let r_max = 0
  let c_min = 0
  let c_max = 0
  for (const coords of Coords) {
    const r = coords[ROW]
    const c = coords[COL]
    if (r < r_min) r_min = r
    if (r > r_max) r_max = r
    if (c < c_min) c_min = c
    if (c > c_max) c_max = c
  }
  canvas.height = (r_max - r_min + 1)*H
  canvas.width  = (c_max - c_min + 1)*W
  const context = canvas.getContext('2d')
  if (context == null) {
    let list = ""
    for (const image of images) list += "\n- " + image.src
    alert("Too large: stitching area required for:" + list)
    return
  }
  let coords = Coords[index]
  let row = (coords[ROW] - r_min)*H
  let col = (coords[COL] - c_min)*W
  context.drawImage(image, col, row)
  const length = images.length
  for (++index; index < length; ++index) {
    image = images[index]
    if (image.naturalHeight == 0 || image.naturalWidth == 0) continue
    if (image.naturalHeight != H || image.naturalWidth != W) alert("Dimension mismatch: " + image.src)
    coords = Coords[index]
    row = (coords[ROW] - r_min)*H // shift by r_min/c_min so placement is consistent with the first image drawn above
    col = (coords[COL] - c_min)*W
    context.drawImage(image, col, row)
  }
  return canvas.toDataURL()
}

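// Work through the queued [images, target] tasks: once a task's images have finished loading,
// stitch them and show the (scaled-down) result in the placeholder <img>; otherwise retry later.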
function process(tasks, Coords) {
  const task = tasks.shift()
  const images = task[0]
  for (const image of images) if (!image.complete) {
    tasks.push(task)
    setTimeout(process, 1000, tasks, Coords) // wait 1000ms = 1s
    return
  }
  const target = task[1]
  target.src = stitch(images, Coords)
  target.onload = function () {
    this.height = SCALE*this.naturalHeight
    this.width  = SCALE*this.naturalWidth
    this.style = "border: solid black 1px"
  }
  if (tasks.length > 0) process(tasks, Coords)
}

To run, do something like:

Merge("https://raw.githubusercontent.com/CaitlinCasar/dataStitcher/master/example_dataset/", "SEM_images", -2, 4)

Example output (SEM_images panorama with Fe overlay): [image]

greg-tumolo