sklearn PCA returns components arrays close to zero

I am trying to use sklearn's decomposition.PCA class:

The input is 100 human face images of size 4096x4096x3 (RGB), read with cv2 as uint8 numpy arrays with values in the [0, 255] range.

I converted each of them to a [1, 4096x4096x3] 2D shape, like:

[255. 128. 128. ... 255. 128. 128.]

Then I put all these arrays into sklearn's PCA() with n_components=20 in order to find the 20 main features.

The computation finished successfully, but all the components in pca.components_ are very similar to each other and close to arrays of zeros.

Here is my troubleshooting so far:

1. The input image matrix is not degenerate: about 24% of its entries differ by more than 10 (on the [0, 255] scale) when one input image is compared with another. I checked this with something like the sketch below.
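
A rough version of that check (assuming all_image is the stacked float matrix built in the code further down, and np is numpy):

# fraction of entries that differ by more than 10 between two input images
diff = np.abs(all_image[0] - all_image[1])
print((diff > 10).mean())  # roughly 0.24 for my data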

The pca.mean_ is very normal: it is an array that looks like the inputs:

[255. 128. 128. ... 255. 128. 128.]

and I can successfully reconstruct a human face image from it, roughly as sketched below.
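
A minimal sketch of that reconstruction (the output file name is made up):

# reshape the flat mean vector back into an image and save it
mean_face = pca.mean_.reshape(4096, 4096, 3)
cv2.imwrite("mean_face.png", np.clip(mean_face, 0, 255).astype(np.uint8))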

However, I find that all the components are arrays consisting of floats very close to 0, like:

[[ 1.4016776e-08  4.3943277e-08  2.7873748e-08]
 [ 4.1034184e-08 -1.2753417e-08  6.2264380e-09]
 [-6.7606822e-09  4.9416444e-09  5.4486654e-10]
 ...
 [-0.0000000e+00 -0.0000000e+00 -0.0000000e+00]
 [-0.0000000e+00 -0.0000000e+00 -0.0000000e+00]
 [-0.0000000e+00 -0.0000000e+00 -0.0000000e+00]]

In fact, none of them is greater than 1.

2. I tried using parameters like:

pca = PCA(n_components=20, svd_solver="randomized", whiten=True)

But the result turned out to be the same: still very similar, near-zero components.

Why is this the case, and how can I fix it? Thanks for any ideas!

Code:

from sklearn.decomposition import PCA
import numpy as np
import cv2
import os

folder_path = "./normal"
images = []
for i in range(1, 101):
    if i % 10 == 0: print("loading", i, "th image")  # progress print
    if i == 60: continue  # special case, should be skipped

    image_path = folder_path + f"/total_matrix_tangent {i}.png"
    img = cv2.imread(image_path)
    images.append(img.reshape(-1))
print("Loaded", len(images), "images")
# stack into a single numpy matrix: one flattened image per row
all_image = np.stack(images, axis=0)
# convert to float64 (values remain in the [0, 255] range)
all_image = all_image.astype(np.float64)

### shape: #_of_images x #_of_RGB_pixels_per_image (50331648 = 4096*4096*3 for the normal case)
# print(all_image)
# print(all_image.shape)

# PCA, keep 20 components
pca = PCA(n_components=20)
pca.fit(all_image)
print("finished PCA")

result = pca.components_
print("PCA mean:", pca.mean_)

result = result.reshape(-1, 4096, 4096, 3)
# result shape: #_of_components x 4096 x 4096 x 3
# print(result.shape)

# normalize each pixel's RGB vector to unit length
dst = result / np.linalg.norm(result, axis=3, keepdims=True)
saving_path = "./principle64"
for i in range(20):
    reconImage = dst[i]
    cv2.imwrite(os.path.join(saving_path, "p" + str(i) + ".png"), reconImage)
print("Saved", i + 1, "principal imgs")

CodePudding user response:

pca.components_ is not a list of transformed inputs - it is an array holding the principal axes themselves, of shape (n_components, n_features): in your case 20 rows, each of length 4096 * 4096 * 3.
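
You can check this directly; each row of components_ has unit L2 norm, which is also why its roughly 50 million individual entries are necessarily tiny (a quick sketch, reusing the pca object from the question):

import numpy as np

print(pca.components_.shape)
# (20, 50331648): one principal axis per row, not one row per input image

print(np.linalg.norm(pca.components_, axis=1))
# every row has norm ~1.0, so with ~5e7 entries per row the individual
# values are bound to be near zero - exactly what you are seeing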

To get the reduced-dimensionality images, you need to use the transform or fit_transform methods:

# PCA, keeps 20 features
pca = PCA(n_components=20)

# Transform all_image
result = pca.fit_transform(all_image)
# result shape: (num_of_images, 20)

Note that the transformation will reduce the number of dimensions from 4096 * 4096 * 3 to 20, so the subsequent reshape operations do not make sense and will not work.

If you wish to attempt to reconstruct the original images using the retained information, you can call inverse_transform, i.e.

reconImages = pca.inverse_transform(result)
# reconImages shape: (num_of_images, 4096 * 4096 * 3)

reconImages = reconImages.reshape(-1, 4096, 4096, 3)
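
For completeness, a rough sketch of how the results could be written out as images; min-max rescaling each component ("eigenface") to [0, 255] is one way to make its tiny values visible (the output paths here are made up):

import os
import numpy as np
import cv2

out_dir = "./pca_output"
os.makedirs(out_dir, exist_ok=True)

# save the reconstructed faces: clip to the valid pixel range and cast to uint8
for i in range(reconImages.shape[0]):
    img = np.clip(reconImages[i], 0, 255).astype(np.uint8)
    cv2.imwrite(os.path.join(out_dir, f"recon{i}.png"), img)

# visualize each principal component by rescaling it to [0, 255];
# writing the raw near-zero values directly would produce black images
comps = pca.components_.reshape(-1, 4096, 4096, 3)
for i in range(comps.shape[0]):
    c = comps[i]
    c = (c - c.min()) / (c.max() - c.min()) * 255.0
    cv2.imwrite(os.path.join(out_dir, f"component{i}.png"), c.astype(np.uint8))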