Get image boundaries across stacked images in a 3D array efficiently

Here's one with argmax for efficiency -

import numpy as np

def get3Dboundaries(arr):
    # Collapse each slice to a per-row "any nonzero" mask; argmax gives the
    # first True row, and argmax on the reversed mask gives the last one.
    # The same idea along the other axis gives the column bounds.
    row_start = arr.any(2).argmax(1)
    row_end = arr.shape[1] - arr.any(2)[:, ::-1].argmax(1) - 1

    col_start = arr.any(1).argmax(1)
    col_end = arr.shape[2] - arr.any(1)[:, ::-1].argmax(1) - 1

    return np.c_[row_start, row_end, col_start, col_end]

Sample run -

In[61]: arr
Out[61]:
   array([
      [
         [0, 0, 0, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 0, 1, 0],
         [0, 1, 0, 0, 0],
         [0, 0, 0, 0, 0]
      ],

      [  # different second slice for variety
         [0, 0, 0, 0, 0],
         [1, 1, 1, 1, 0],
         [0, 1, 0, 1, 0],
         [0, 0, 0, 1, 0],
         [0, 0, 0, 1, 0]
      ]
   ])

In[62]: get3Dboundaries(arr)
Out[62]:
   array([
      [1, 3, 1, 3],
      [1, 4, 0, 3]
   ])
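
To see why the argmax trick works, reduce the first slice above to a per-row mask (an illustrative check, not part of the original answer):

mask = arr[0].any(1)                            # per-row "contains anything" mask
print(mask)                                     # [False  True  True  True False]
print(mask.argmax())                            # 1 -> first non-empty row
print(arr.shape[1] - mask[::-1].argmax() - 1)   # 3 -> last non-empty row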

We can make it handle the all-zeros case with an invalid specifier, say -1, like so -

def get3Dboundaries_v2(arr):
    row_start = arr.any(2).argmax(1)
    row_end = arr.shape[1] - arr.any(2)[:, ::-1].argmax(1) - 1

    col_start = arr.any(1).argmax(1)
    col_end = arr.shape[2] - arr.any(1)[:, ::-1].argmax(1) - 1

    out = np.c_[row_start, row_end, col_start, col_end]
    # Slices that are entirely zero get -1 for all four boundaries
    return np.where(arr.any((1, 2))[:, None], out, -1)
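
To see the all-zeros handling, here is a minimal check (the input array below is made up for illustration, not from the original answer):

arr0 = np.zeros((2, 5, 5), dtype=int)
arr0[0, 1:4, 1:4] = 1              # first slice has a 3x3 block, second is empty
print(get3Dboundaries_v2(arr0))
# first slice: [1, 3, 1, 3]; all-zero slice: [-1, -1, -1, -1]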

Suggestion : 2

This chapter is an introduction to handling and processing images. With extensive examples, it explains the central Python packages you will need for working with images. This chapter introduces the basic tools for reading images, converting and scaling images, computing derivatives, plotting or saving results, and so on. We will use these throughout the remainder of the book.

Gaussian blurring is used to define an image scale to work in, for interpolation, for computing interest points, and in many more applications.

Experiment with successive morphological operations on a thresholded image of your choice. When you have found some settings that produce good results, try the function center_of_mass in morphology to find the center coordinates of each object and plot them in the image.

The result should look something like Figure 1-14, which also shows a blurred version of the same image for comparison. As you can see, ROF de-noising preserves edges and image structures while at the same time blurring out the "noise."
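
The blurring / thresholding / center_of_mass exercise described above can be sketched roughly as follows with scipy.ndimage (a sketch only; the file name, sigma, and threshold are placeholders, not values from the book):

from PIL import Image
from numpy import array, arange
from scipy import ndimage

# read as a grayscale array and blur it (sigma is a placeholder)
im = array(Image.open('empire.jpg').convert('L'))
im_blur = ndimage.gaussian_filter(im, sigma=5)

# threshold, label the connected objects, and find each object's center
binary = im_blur < 128                      # placeholder threshold
labels, num_objects = ndimage.label(binary)
centers = ndimage.center_of_mass(binary, labels, arange(1, num_objects + 1))
print(num_objects)
print(centers[:3])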

from PIL import Image

pil_im = Image.open('empire.jpg')

from PIL import Image
import os

# filelist is assumed to be a list of image filenames to convert to JPEG
for infile in filelist:
    outfile = os.path.splitext(infile)[0] + ".jpg"
    if infile != outfile:
        try:
            Image.open(infile).save(outfile)
        except IOError:
            print("cannot convert", infile)

# create a thumbnail, crop a region, and resize
pil_im.thumbnail((128, 128))
box = (100, 100, 400, 400)
region = pil_im.crop(box)
out = pil_im.resize((128, 128))

from PIL import Image
from pylab import *

# read image to array
im = array(Image.open('empire.jpg'))

# plot the image
imshow(im)

# some points
x = [100, 100, 400, 400]
y = [200, 500, 200, 500]

# plot the points with red star markers
plot(x, y, 'r*')

# line plot connecting the first two points
plot(x[:2], y[:2])

# add title and show the plot
title('Plotting: "empire.jpg"')
show()

Suggestion : 3

Images are represented as numpy arrays. A single-channel, or grayscale, image is a 2D matrix of pixel intensities of shape (row, column). We can construct a 3D volume as a series of 2D planes, giving 3D images the shape (plane, row, column). Multichannel data adds a channel dimension in the final position containing color information.

skimage.exposure contains a number of functions for adjusting image contrast. These functions operate on pixel values. Generally, image dimensionality or pixel spacing does not need to be considered.

Image segmentation partitions images into regions of interest. Integer labels are assigned to each region to distinguish regions of interest.

Functions operating on connected components can remove small undesired elements while preserving larger shapes.
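
As a small illustration of the last two points, here is a minimal sketch of labeling regions and removing small connected components (the toy array and min_size are made up for this example):

import numpy as np
from skimage import measure, morphology

binary = np.zeros((10, 10), dtype=bool)
binary[1:5, 1:5] = True        # a large object
binary[7, 7] = True            # a tiny speck

cleaned = morphology.remove_small_objects(binary, min_size=4)   # drop the speck
labels = measure.label(cleaned)                                 # integer label per region
print(labels.max())            # 1 region remains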

%matplotlib inline
%config InlineBackend.figure_format = 'retina'
%gui qt
import time
time.sleep(5)

import numpy as np
from matplotlib import pyplot as plt
from scipy import ndimage as ndi
from skimage import (exposure, feature, filters, io, measure,
                     morphology, restoration, segmentation, transform,
                     util)
import napari

nuclei = io.imread('../images/cells.tif')
membranes = io.imread('../images/cells_membrane.tif')

print("shape: {}".format(nuclei.shape))
print("dtype: {}".format(nuclei.dtype))
print("range: ({}, {})".format(np.min(nuclei), np.max(nuclei)))
shape: (60, 256, 256)
dtype: float64
range: (0.0, 1.0)
# The microscope reports the following spacing (in µm)
original_spacing = np.array([0.2900000, 0.0650000, 0.0650000])

# We downsampled each slice 4x to make the data smaller
rescaled_spacing = original_spacing * [1, 4, 4]

# Normalize the spacing so that pixels are a distance of 1 apart
spacing = rescaled_spacing / rescaled_spacing[2]

print(f'microscope spacing: {original_spacing}')
print(f'after rescaling images: {rescaled_spacing}')
print(f'normalized spacing: {spacing}')
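
Downstream, this normalized spacing is typically passed as a per-axis scale so anisotropic voxels display with correct proportions; a minimal sketch, assuming napari's view_image and its scale parameter (not shown in the excerpt above):

# assumes nuclei and spacing from the snippet above
viewer = napari.view_image(nuclei, scale=spacing)  # z-axis scaled to match physical spacing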

Suggestion : 4

The result is a binary image, in which the individual objects still need to be identified and labeled. The function label generates an array where each object is assigned a unique number.

However, using the origin parameter instead of a larger kernel is more efficient. For multidimensional kernels, origin can be a number, in which case the origin is assumed to be equal along all axes, or a sequence giving the origin along each axis.

The inputs of this function are the array to which the transform is applied, and an array of markers that designate the objects by a unique label, where any non-zero value is a marker.

Here, a kernel footprint was specified that contains only two elements. Therefore, the filter function receives a buffer of length equal to two, which was multiplied with the proper weights and the result summed.

>>> from scipy.ndimage import correlate
>>> correlate(np.arange(10), [1, 2.5])
array([0, 2, 6, 9, 13, 16, 20, 23, 27, 30])
>>> correlate(np.arange(10), [1, 2.5], output=np.float64)
array([0., 2.5, 6., 9.5, 13., 16.5, 20., 23.5, 27., 30.5])

>>> footprint = np.array([[0, 1, 0],
...                       [1, 1, 1],
...                       [0, 1, 0]])
>>> footprint
array([[0, 1, 0],
       [1, 1, 1],
       [0, 1, 0]])

>>> from scipy.ndimage import correlate1d
>>> a = [0, 0, 0, 1, 0, 0, 0]
>>> correlate1d(a, [1, 1, 1])
array([0, 0, 1, 1, 1, 0, 0])
>>> a = [0, 0, 0, 1, 0, 0, 0]
>>> correlate1d(a, [1, 1, 1], origin=-1)
array([0, 1, 1, 1, 0, 0, 0])

>>> a = [0, 0, 1, 1, 1, 0, 0]
>>> correlate1d(a, [-1, 1])              # backward difference
array([0, 0, 1, 0, 0, -1, 0])
>>> correlate1d(a, [-1, 1], origin=-1)   # forward difference
array([0, 1, 0, 0, -1, 0, 0])
>>> correlate1d(a, [0, -1, 1])           # equivalent forward difference
array([0, 1, 0, 0, -1, 0, 0])
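
The labeling step mentioned at the start of this excerpt is not shown above; here is a minimal sketch on a toy binary image (the array itself is made up for illustration):

>>> from scipy import ndimage
>>> binary = np.array([[0, 1, 1, 0, 0],
...                    [0, 1, 1, 0, 1],
...                    [0, 0, 0, 0, 1]])
>>> labeled, num = ndimage.label(binary)   # each connected object gets a unique number
>>> print(labeled)
[[0 1 1 0 0]
 [0 1 1 0 2]
 [0 0 0 0 2]]
>>> num
2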