I prefer to implement the formula with NumPy vectorized operations rather than Python loops.
Here is the code:
# 2019/02/19 by knight-金
# https://stackoverflow.com/a/54757659/3547485
import numpy as np
import cv2

def reinhard(target, original):
    # cvtColor: COLOR_BGR2Lab
    lab_tar = cv2.cvtColor(target, cv2.COLOR_BGR2Lab)
    lab_ori = cv2.cvtColor(original, cv2.COLOR_BGR2Lab)
    # meanStdDev: calculate mean and standard deviation
    mean_tar, std_tar = cv2.meanStdDev(lab_tar)
    mean_ori, std_ori = cv2.meanStdDev(lab_ori)
    # implementing the formula
    # (Io - mo) / so * st + mt = Io * (st / so) + mt - mo * (st / so)
    ratio = (std_tar / std_ori).reshape(-1)
    offset = (mean_tar - mean_ori * std_tar / std_ori).reshape(-1)
    lab_tar = cv2.convertScaleAbs(lab_ori * ratio + offset)
    # convert back
    mapped = cv2.cvtColor(lab_tar, cv2.COLOR_Lab2BGR)
    return mapped

if __name__ == "__main__":
    ori = cv2.imread("ori.png")
    tar = cv2.imread("tar.png")
    mapped = reinhard(tar, ori)
    cv2.imwrite("mapped.png", mapped)
    mapped_inv = reinhard(ori, tar)
    cv2.imwrite("mapped_inv.png", mapped_inv)
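The algebraic refactoring in the comment above — (Io - mo) / so * st + mt = Io * (st / so) + (mt - mo * (st / so)) — can be sanity-checked with plain NumPy on synthetic data. The array names and target statistics below are illustrative, not taken from the answer:

```python
import numpy as np

rng = np.random.default_rng(0)
Io = rng.random((4, 4, 3)) * 255                  # stand-in for the source Lab image
mo, so = Io.mean(axis=(0, 1)), Io.std(axis=(0, 1))
mt = np.array([50.0, 128.0, 128.0])               # hypothetical target means
st = np.array([10.0, 5.0, 5.0])                   # hypothetical target std devs

direct = (Io - mo) / so * st + mt                 # formula as written in the question
ratio, offset = st / so, mt - mo * (st / so)
refactored = Io * ratio + offset                  # precomputed form used in the answer

print(np.allclose(direct, refactored))            # True: the two forms agree
```

Precomputing `ratio` and `offset` once turns the per-pixel work into a single multiply-add over the whole array, which is why the vectorized version is so much faster than the nested loop.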
I managed to figure it out after looking at the NumPy documentation. I just needed to replace my nested for loop with proper array indexing. It took less than a minute to process all 300 images this way.
lAB_tar[:, :, 0] = (lAB_img[:, :, 0] - mean[0]) / std[0] * std_tar[0] + mean_tar[0]
lAB_tar[:, :, 1] = (lAB_img[:, :, 1] - mean[1]) / std[1] * std_tar[1] + mean_tar[1]
lAB_tar[:, :, 2] = (lAB_img[:, :, 2] - mean[2]) / std[2] * std_tar[2] + mean_tar[2]
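The three per-channel assignments above can be collapsed further into a single broadcast expression, since the channel statistics reshape to one value per channel. A sketch, with random arrays standing in for the Lab images:

```python
import numpy as np

lAB_img = np.random.default_rng(1).random((8, 8, 3)) * 255
mean, std = lAB_img.mean(axis=(0, 1)), lAB_img.std(axis=(0, 1))
mean_tar = np.array([60.0, 130.0, 125.0])   # hypothetical target statistics
std_tar = np.array([12.0, 6.0, 7.0])

# One line instead of three: broadcasting applies the per-channel
# statistics across the whole (H, W, 3) array at once.
lAB_tar = (lAB_img - mean) / std * std_tar + mean_tar
```

Note that `cv2.meanStdDev` returns (3, 1) arrays, so in the OpenCV version you would reshape them (e.g. `.reshape(-1)`) before broadcasting like this.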
My supervisor told me that I could try using a matrix to apply the function all at once to improve the runtime, but I'm not exactly sure how to go about doing that. Here is my current implementation:
def reinhard(target, img):
    # converts image and target from BGR colorspace to l alpha beta
    lAB_img = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)
    lAB_tar = cv2.cvtColor(target, cv2.COLOR_BGR2Lab)
    # finds mean and standard deviation for each color channel across the entire image
    (mean, std) = cv2.meanStdDev(lAB_img)
    (mean_tar, std_tar) = cv2.meanStdDev(lAB_tar)
    # iterates over image implementing formula to map color normalized pixels to target image
    for y in range(512):
        for x in range(512):
            lAB_tar[x, y, 0] = (lAB_img[x, y, 0] - mean[0]) / std[0] * std_tar[0] + mean_tar[0]
            lAB_tar[x, y, 1] = (lAB_img[x, y, 1] - mean[1]) / std[1] * std_tar[1] + mean_tar[1]
            lAB_tar[x, y, 2] = (lAB_img[x, y, 2] - mean[2]) / std[2] * std_tar[2] + mean_tar[2]
    mapped = cv2.cvtColor(lAB_tar, cv2.COLOR_Lab2BGR)
    return mapped
"" " * Python libraries for learning and performing image processing.* "" " import numpy as np import skimage.io import skimage.viewer import matplotlib.pyplot as plt import ipympl
import skimage # form 1, load whole skimage library
import skimage.io # form 2, load skimage.io module only
from skimage.io
import imread # form 3, load only the imread
function
import numpy as np # form 4, load all of numpy into an object called np
% matplotlib widget
image = skimage.io.imread(fname = "data/eight.tif")
plt.imshow(image)
print(image.shape) print(image)
(5, 3)[[0. 0. 0.]
[0. 1. 0.]
[0. 0. 0.]
[0. 1. 0.]
[0. 0. 0.]]
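The "eight" image printed above is nothing more than a 5×3 array of grayscale values, so the same structure can be built directly in NumPy to see how pixel access works. The array below is a hypothetical reconstruction, not the actual data/eight.tif file:

```python
import numpy as np

# A 5x3 grayscale "eight": two white holes on a black background,
# matching the array printed above.
eight = np.zeros((5, 3))
eight[1, 1] = 1.0
eight[3, 1] = 1.0

print(eight.shape)   # (5, 3)
print(eight[1, 1])   # pixel access is plain array indexing: 1.0
```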
Remembering those codes can be somewhat difficult. To simplify this, the Color class includes a host of predefined colors. For example, to use the color teal, rather than needing to know that it is RGB (0, 128, 128), simply use:
from SimpleCV import Color
# An easy way to get the RGB triplet values for the color teal.
myPixel = Color.TEAL
Similarly, to look up the RGB values for a known color:
from SimpleCV import Color
# Prints (0, 128, 128)
print Color.TEAL
The information for an individual pixel can be extracted from an image in the same way an individual element of an array is referenced in Python. The next examples show how to extract the pixel at (120, 150) from the picture of the Portrait of a Halberdier painting, as demonstrated in Figure 4-2.
from SimpleCV import Image
img = Image('jacopo.png')
# Gets the information for the pixel located at
# x coordinate = 120, and y coordinate = 150
pixel = img[120, 150]
print pixel
The following example code does exactly the same thing, but uses the getPixel() function instead of the index of the array. This is the more object-oriented programming approach compared to extracting the pixel directly from the array.
from SimpleCV import Image
img = Image('jacopo.png')
# Uses getPixel() to get the information for the pixel located
# at x coordinate = 120, and y coordinate = 150
pixel = img.getPixel(120, 150)
print pixel
Accessing pixels by their index can sometimes create problems. In the example above, trying to use img[1000, 1000] will throw an error, and img.getPixel(1000, 1000) will give a warning, because the image is only 300×389. Because pixel indexes start at zero, not one, the dimensions must be in the range 0-299 on the x-axis and 0-388 on the y-axis. To avoid problems like this, use the width and height properties of an image to find its dimensions. For example:
from SimpleCV import Image
img = Image('jacopo.png')
# Print the pixel height of the image
# Will print 300
print img.height
# Print the pixel width of the image
# Will print 389
print img.width
Since only one pixel was changed, it is hard to see the difference, but now the pixel at (120, 150) is a dark red color. To make it easier to see, resize the image to five times its previous size by using the resize() function.
from SimpleCV import Image
img = Image('jacopo.png')
# Get the pixel and change the color
(red, green, blue) = img.getPixel(120, 150)
img[120, 150] = (red, 0, 0)
# Resize the image so it is 5 times bigger than its original size
bigImg = img.resize(img.width * 5, img.height * 5)
bigImg.show()
Right now, this looks like random fun with pixels with no actual purpose. However, pixel extraction is an important tool when trying to find and extract objects of a similar color. Most of these tricks are covered later in the book, but to provide a quick preview of how it is used, the following example looks at the color distance of other pixels compared with a given pixel, as shown in Figure 4-5.
from SimpleCV import Image
img = Image('jacopo.png')
# Get the color distance of all pixels compared to (120, 150)
distance = img.colorDistance(img.getPixel(120, 150))
# Show the resulting distances
distance.show()
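Conceptually, colorDistance is a per-pixel Euclidean distance in RGB space. A rough NumPy equivalent is sketched below; this approximates the idea, not SimpleCV's exact implementation or scaling, and the tiny test image is made up for illustration:

```python
import numpy as np

def color_distance(img, rgb):
    """Per-pixel Euclidean distance from an (H, W, 3) image to one RGB triplet."""
    diff = img.astype(float) - np.asarray(rgb, dtype=float)
    return np.sqrt((diff ** 2).sum(axis=2))

img = np.zeros((2, 2, 3))          # all-black toy image
img[0, 0] = (255, 0, 0)            # one pure-red pixel
dist = color_distance(img, (255, 0, 0))
print(dist[0, 0], dist[1, 1])      # 0.0 for the matching pixel, 255.0 for black
```

Pixels similar in color to the reference get small distances, which is why the resulting distance image highlights same-colored regions.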
The block of code above shows the next major concept with images: scaling. In the above example, both the width and the height were changed by taking the img.height and img.width parameters and multiplying them by 5. In this next case, rather than entering the new dimensions, the scale() function will resize the image with just one parameter: the scaling factor. For example, the following code resizes the image to five times its original size.
from SimpleCV import Image
img = Image('jacopo.png')
# Scale the image by a factor of 5
bigImg = img.scale(5)
bigImg.show()
from SimpleCV import Image
img = Image('jacopo.png')
# Resize the image, keeping the original height,
# but doubling the width
bigImg = img.resize(img.width * 2, img.height)
bigImg.show()
In this example, the image is stretched in the width dimension, but no change is made to the height, as demonstrated in Figure 4-6. To resolve this problem, use adaptive scaling with the adaptiveScale() function. It will create a new image with the dimensions requested. However, rather than wrecking the proportions of the original image, it will add padding. For example:
from SimpleCV import Image
# Load the image
img = Image('jacopo.png')
# Resize the image, but use the adaptiveScale() function to maintain
# the proportions of the original image
adaptImg = img.adaptiveScale((img.width * 2, img.height))
adaptImg.show()
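The fit-then-pad idea behind adaptive scaling can be sketched with a little arithmetic: scale the image by the largest factor that still fits inside the target canvas, then pad the remainder. This is a sketch of the geometry only, not SimpleCV's internal code, and the function name is made up:

```python
def fit_with_padding(w, h, target_w, target_h):
    """Scale a w x h image to fit inside target_w x target_h without distortion.

    Returns the scaled size and the leftover (pad_w, pad_h) to fill with padding.
    """
    scale = min(target_w / w, target_h / h)   # largest factor that still fits
    new_w, new_h = int(w * scale), int(h * scale)
    return new_w, new_h, (target_w - new_w, target_h - new_h)

# Doubling only the width of a 389x300 image: the image itself cannot grow
# without distortion, so the extra width becomes padding.
print(fit_with_padding(389, 300, 778, 300))   # (389, 300, (389, 0))
```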
from SimpleCV import Image, Color
img = Image('jacopo.png')
# Embiggen the image, put it on a green background, in the upper right
emb = img.embiggen((350, 400), Color.GREEN, (0, 0))
emb.show()
For example, to crop out just the bust in the picture, you could use the following code. The resulting image is shown in Figure 4-9:
from SimpleCV import Image
img = Image('jacopo.png')
# Crop starting at (50, 5) for an area 200 pixels wide by 200 pixels tall
cropImg = img.crop(50, 5, 200, 200)
cropImg.show()
When performing a crop, it is sometimes more convenient to specify the center of the region of interest rather than the upper left corner. To crop an image from the center, add one more parameter, centered=True, with the result shown in Figure 4-10.
from SimpleCV import Image
img = Image('jacopo.png')
# Crop the image starting at the center of the image
cropImg = img.crop(img.width / 2, img.height / 2, 200, 200, centered=True)
cropImg.show()
Crop regions can also be defined by image features. Many of these features are covered later in the book, but blobs were briefly introduced in previous chapters. As with other features, the SimpleCV framework can crop around a blob. For example, a blob detection can also find the torso in the picture.
from SimpleCV import Image
img = Image('jacopo.png')
blobs = img.findBlobs()
img.crop(blobs[-1]).show()
For the Python aficionados, it is also possible to do cropping by directly manipulating the two dimensional array of the image. Individual pixels could be extracted by treating the image like an array and specifying the (x, y) coordinates. Python can also extract ranges of pixels. For example, img[start_x:end_x, start_y:end_y] provides a cropped image from (start_x, start_y) to (end_x, end_y). Not including a value for one or more of the coordinates means that the border of the image will be used as the start or end point, so something like img[:, 300:] works. That will select all of the x values and all of the y values that are 300 or greater. In essence, any of Python's functions for extracting subsets of arrays will also work to extract parts of an image, and thereby return a new image. Because of this, images can be cropped using Python's slice notation instead of the crop function:
from SimpleCV import Image
img = Image('jacopo.png')
# Cropped image that is 200 pixels wide and 200 pixels tall, starting at (50, 5).
cropImg = img[50:250, 5:205]
cropImg.show()
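One caveat when transferring this idiom to plain NumPy: a raw NumPy image array indexes as [row, column], i.e. y first, which is the opposite of SimpleCV's [x, y] convention. A minimal sketch with a toy array standing in for a real image:

```python
import numpy as np

img = np.arange(300 * 389 * 3).reshape(300, 389, 3)   # toy (H, W, 3) array
# Crop a 200x200 region whose top-left corner is at x=50, y=5:
crop = img[5:205, 50:250]                             # rows (y) first, then columns (x)
print(crop.shape)                                     # (200, 200, 3)
```

Mixing the two conventions up silently crops the wrong region (or raises an IndexError), so it is worth checking which library owns the array you are slicing.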
The simplest operation is to rotate the image so that it is correctly oriented. This is accomplished with the rotate() function, which has only one required argument, angle. This value is the angle, in degrees, to rotate the image. Negative values for the angle rotate the image clockwise and positive values rotate it counterclockwise. To rotate the image 45 degrees counterclockwise:
from SimpleCV import Image
img = Image('jacopo.png')
# Rotate the image counterclockwise 45 degrees
rot = img.rotate(45)
rot.show()
Generally, rotation means to rotate around the center point. However, a different axis of rotation can be chosen by passing an argument to the point parameter. This parameter is a tuple of the (x, y) coordinate for the new point of rotation.
from SimpleCV import Image
img = Image('jacopo.png')
# Rotate the image around the coordinates (16, 16)
rot = img.rotate(45, point=(16, 16))
rot.show()
For example, to rotate the image without clipping off the corners:
from SimpleCV import Image
img = Image('jacopo.png')
# Rotate the image and then resize it so the content isn't cropped
rot = img.rotate(45, fixed=False)
rot.show()
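With fixed=False the canvas grows to fit the rotated content, and the required size follows from simple trigonometry: each output dimension is the sum of the projections of the original width and height onto that axis. This is a sketch of the geometry, not SimpleCV's internal code:

```python
import math

def rotated_bounds(w, h, angle_deg):
    """Size of the tightest box containing a w x h image rotated by angle_deg."""
    a = math.radians(angle_deg)
    new_w = abs(w * math.cos(a)) + abs(h * math.sin(a))
    new_h = abs(w * math.sin(a)) + abs(h * math.cos(a))
    return new_w, new_h

# A 45-degree rotation of a 100x100 image needs roughly a 141.4 x 141.4 canvas
# (100 * sqrt(2) on each side), which is why the corners get clipped otherwise.
print(rotated_bounds(100, 100, 45))
```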
// The image appears darker.
tint(100);
image(sunflower, 0, 0);
Algorithm: for each pixel, set the green and blue values to be the same as the red value. For this example, we'll write code to fix this image by copying the red value over to be used as the green and blue values. So for a pixel, if red is 27, set green and blue to also be 27. What is the code to do that? What will be the visual result?
// your code here
// Set green and blue values to be the same as the red value.
pixel.setGreen(pixel.getRed());
pixel.setBlue(pixel.getRed());
// Usually code combines setRed() with getRed(),
// but this code gets a value from one color
// and sets it into another color.
- Compute the average value of a pixel:
- Algorithm: add red+green+blue, then divide by 3
- Code below computes the average, stores it in a variable "avg"
- We'll use that line whenever we want to compute the average
avg = (pixel.getRed() + pixel.getGreen() + pixel.getBlue()) / 3;
// your code here
avg = (pixel.getRed() + pixel.getGreen() + pixel.getBlue()) / 3;
pixel.setRed(avg);
pixel.setGreen(avg);
pixel.setBlue(avg);
// For blue tint: pixel.setBlue(avg * 1.2);
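The average-and-set loop above uses a per-pixel API; the same grayscale conversion can be sketched vectorized in NumPy. The random array and the 1.2 tint factor are illustrative only:

```python
import numpy as np

rgb = np.random.default_rng(2).integers(0, 256, size=(4, 4, 3))

avg = rgb.mean(axis=2, keepdims=True)   # per-pixel average of red, green, blue
gray = np.repeat(avg, 3, axis=2)        # set all three channels to the average

# For a blue tint, boost just the blue channel, clipping at the valid maximum:
tinted = gray.copy()
tinted[:, :, 2] = np.clip(tinted[:, :, 2] * 1.2, 0, 255)
```

Because every pixel gets the same treatment, no explicit loop over x and y is needed.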
J = imadjust(RGB, [low_in high_in], ___) maps the values in truecolor image RGB to new values in J. Likewise, newmap = imadjust(cmap, [low_in high_in], ___) maps the values in colormap cmap to new values in newmap. In both cases, you can apply the same mapping or unique mappings for each color channel. If gamma is a 1-by-3 vector, then imadjust applies a unique gamma to each color component or channel. The adjusted image J has the same size and class as the input grayscale image I or truecolor image RGB.
I = imread('pout.tif');
imshow(I)
J = imadjust(I);
figure
imshow(J)
I = imread('pout.tif');
imshow(I);
K = imadjust(I, [0.3 0.7], []);
figure
imshow(K)
RGB = imread('football.jpg');
imshow(RGB)
RGB2 = imadjust(RGB, [.2 .3 0; .6 .7 1], []);
figure
imshow(RGB2)
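The [0.3 0.7] mapping above is a linear contrast stretch: intensities in [0.3, 0.7] are spread over the full output range, and values outside are clipped. A rough NumPy equivalent is sketched below; it approximates MATLAB's behavior with the default gamma of 1 and images normalized to [0, 1], and is not a drop-in replacement for imadjust:

```python
import numpy as np

def imadjust_like(img, low_in, high_in, low_out=0.0, high_out=1.0):
    """Linearly map [low_in, high_in] to [low_out, high_out], clipping outside."""
    out = (img - low_in) / (high_in - low_in) * (high_out - low_out) + low_out
    return np.clip(out, min(low_out, high_out), max(low_out, high_out))

img = np.array([0.0, 0.3, 0.5, 0.7, 1.0])
# Maps to [0, 0, 0.5, 1, 1]: the midpoint of the input window lands at 0.5,
# and everything at or beyond the window edges saturates.
print(imadjust_like(img, 0.3, 0.7))
```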