I am trying to take all the intensity values of one color channel from a certain image's pixels and transfer them to a 2D array (which I called testarray below). When I run the code, however, all the values across h for a given value of w seem to be equal (i.e., the "columns" of the data are all the same). Could someone help me find out why this is happening? My goal is for each pixel intensity value to be transferred directly from image to testarray for a specific color. test image used
#import packages
import cv2
import numpy as np
import matplotlib.pyplot as plt
#reads image
image = cv2.imread("nighttime.jpg", cv2.IMREAD_COLOR)
#gets height and width vals from image
h, w, c = image.shape
#creates the array which will act as the copy
testarray=[[0]*w]*h
#loop to traverse through image and transfer each value of image to testarray
for i in range(h):
    for j in range(w):
        testarray[i][j]=image[i][j][1] #[1] is used here in order to attain the pixel intensity value for green color channel
print(image[600][450][1]) #value returned is 27
print(testarray[600][450]) #value returned is 18 (not equal to previous line's value)
CodePudding user response:
The root cause is how testarray is built: testarray=[[0]*w]*h creates one inner list of length w and then repeats h references to that same list, so every "row" of testarray is actually the same object, and each assignment shows up in every row. A fix I found is to convert testarray (which is a Python list) to a NumPy array before running the for loop, since np.array() copies the data into independent rows.
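Here is a minimal, standalone snippet (not from the question, just an illustration) showing the row-aliasing behaviour of the list-multiplication pattern:
row = [0] * 3
grid = [row] * 2            # both entries reference the SAME inner list
grid[0][0] = 99
print(grid)                 # [[99, 0, 0], [99, 0, 0]] -> the "other" row changed too
print(grid[0] is grid[1])   # True: every row is the same object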
#------------------------Your code--------------------------
#creates the array which will act as the copy
testarray=[[0]*w]*h
#loop to traverse through image and transfer each value of image to testarray
for i in range(h):
#-------------------------The fix--------------------------
#creates the array which will act as the copy
testarray=[[0]*w]*h
testarray=np.array(testarray) #<----- The line I added
#loop to traverse through image and transfer each value of image to testarray
for i in range(h):
With that, you should get this
27
27
However, if your objective is to transfer the color intensity from image (for example, the green intensity in this case), a faster method is to use np.copy(). With this, you don't have to use the for loop.
#---------------------------------Your code------------------------------
#reads image
image = cv2.imread("nighttime.jpg", cv2.IMREAD_COLOR)
#gets height and width vals from image
h, w, c = image.shape
#creates the array which will act as the copy
testarray=[[0]*w]*h
testarray=np.array(testarray) #<----- The line I added
#loop to traverse through image and transfer each value of image to testarray
for i in range(h):
    for j in range(w):
        testarray[i][j]=image[i][j][1] #[1] is used here in order to attain the pixel intensity value for green color channel
print(image[600][450][1]) #value returned is 27
print(testarray[600][450]) #value returned is 18 (not equal to previous line's value)
#-----------------------------------With np.copy------------------
#reads image
image = cv2.imread("nighttime.jpg", cv2.IMREAD_COLOR)
testarray=np.copy(image[:,:,1])
print(image[600][450][1]) #value returned is 27
print(testarray[600][450]) #value returned is 27
np.copy() creates a new array as a copy of another array (and image is already a NumPy array).
Why image[:,:,1]? Because it selects every element whose channel index is 1 (the c dimension obtained from image.shape), in this case all the green intensities. It copies the second channel value of every pixel, which is exactly what your original for loop does, except that NumPy does it much faster.
Now, if you want to select the other channel intensities (like red or blue), you just have to change the channel index of image.
testarray=np.copy(image[:,:,0]) #---->Blue
testarray=np.copy(image[:,:,1]) #---->Green
testarray=np.copy(image[:,:,2]) #---->Red
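As a quick sanity check (an illustrative snippet only, assuming image has already been loaded as above), the channel slice keeps the height and width of the image and holds exactly the values the loop would have copied:
print(image.shape)                                     # (h, w, 3)
print(image[:, :, 1].shape)                            # (h, w): one green value per pixel
print(image[600][450][1] == image[:, :, 1][600][450])  # True: same pixel, same value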
EDIT #1:
If you decide to go for the first method (using the for loops and the testarray=np.array(testarray) command), I suggest you convert the elements of testarray to uint8
#--------------------------------Old code------------------------------
testarray=[[0]*w]*h
testarray=np.array(testarray) #<----- The line I added
# tic = time.time()
#loop to traverse through image and transfer each value of image to testarray
for i in range(h):
    for j in range(w):
        testarray[i][j]=image[i][j][1] #[1] is used here in order to attain the pixel intensity value for green color channel
#------------------------adding uint8 transformation----------------
testarray=[[0]*w]*h
testarray=np.array(testarray) #<----- The line I added
# tic = time.time()
#loop to traverse through image and transfer each value of image to testarray
for i in range(h):
    for j in range(w):
        testarray[i][j]=image[i][j][1] #[1] is used here in order to attain the pixel intensity value for green color channel
testarray=testarray.astype(np.uint8)#<----------------- New line I added
What happened is that I tried the imshow instruction to verify that the copy and the original pictures were the same, but I got an error when I tried imshow on testarray:
src_depth != CV_16F && src_depth != CV_32S in function 'convertToShow'
I ended up here and found that the solution was to convert the elements of testarray to uint8, which is possible with the testarray=testarray.astype(np.uint8) instruction (the array built with np.array from Python ints most likely defaults to a wider integer dtype, which imshow cannot display).
After that, I tried imshow again with testarray and it worked properly.
However, I still recommend using np.copy() to build the array from the other array; with np.copy(), the .astype(np.uint8) conversion is not necessary in this case, because the copy keeps the image's original uint8 dtype.
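For completeness, a small sketch of that verification step (the file name "nighttime.jpg" comes from the question; the window titles are just placeholders):
import cv2
import numpy as np

image = cv2.imread("nighttime.jpg", cv2.IMREAD_COLOR)
green = np.copy(image[:, :, 1])

print(image.dtype, green.dtype)      # uint8 uint8 -> np.copy keeps the original dtype
cv2.imshow("original", image)        # full BGR image
cv2.imshow("green channel", green)   # single-channel array is shown as grayscale
cv2.waitKey(0)
cv2.destroyAllWindows()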