Applying PCA to one-dimensional array


For my university project, I was asked to analyse, discuss and improve an existing implementation of face recognition on images.

As input data, I got an n*m matrix where:

  • 'n' is the number of images, in my case 1500.
  • 'm' is a flattened (vectorised) pixel matrix, so just a one-dimensional array. Each image was converted from a 77*78 pixel matrix into a single list of 5236 grayscale values (0-255).

It looks like:

[[254. 254. 236. ...  15.  20.  21.]
 [ 49.  55.  61. ...  57.  69.  60.]
 [129. 137. 159. ...  15.  15.  15.]
 ...
 [ 44.  49.  60. ...   7.   8.   8.]
 [229. 221. 201. ...  16.  16.  16.]
 [120. 116. 112. ...   7.   7.   7.]]
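
For context, the flattening itself is just a reshape of the image stack. A rough sketch of what I mean, with a placeholder 64*64 image size rather than the real data:

import numpy as np

# placeholder stack of grayscale images: (n_images, height, width)
images = np.random.randint(0, 256, size=(1500, 64, 64)).astype(np.float64)

# flatten every image into one row -> (n_images, height * width)
X = images.reshape(len(images), -1)
print(X.shape)  # (1500, 4096); in my real data each row has 5236 values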

At some point in the given example, before training the model, PCA (Principal Component Analysis) is applied to this data to calculate its principal components and reduce the dimensionality.

from sklearn.decomposition import PCA

pca = PCA(n_components=200, whiten=True).fit(X_train)
X_train_pca = pca.transform(X_train)

PCA changes the shape of the matrix from 1500*5236 to 1500*200. Later on, an MLPClassifier is trained on the transformed data to test the accuracy, and with the output of the code above the model is very accurate.
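
Roughly, the whole pipeline looks like this. This is my own sketch, where X, y and the MLPClassifier settings are placeholders rather than the exact values from the given example:

from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# X: (1500, 5236) matrix of flattened images, y: (1500,) identity labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# fit PCA on the training images only, then project both splits into 200 dimensions
pca = PCA(n_components=200, whiten=True).fit(X_train)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)

# illustrative classifier settings, not the exact ones from the given example
clf = MLPClassifier(hidden_layer_sizes=(1024,), max_iter=500, random_state=42)
clf.fit(X_train_pca, y_train)
print(clf.score(X_test_pca, y_test))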

However, on the internet I've only seen examples that decrease an n*m matrix's dimension to (for example) n*1. I don't know if applying this algorithm to just a one-dimensional array is a good approach; I haven't found any example of that.

Should I instead reshape each image back into a matrix and then apply PCA on it? Or just leave it as it is? Is it a good approach at all? Maybe there are some alternatives to PCA in my case?

CodePudding user response:

However, on the internet I've only seen examples that decrease an n*m matrix's dimension to (for example) n*1. I don't know if applying this algorithm to just a one-dimensional array is a good approach; I haven't found any example of that.

You are not applying PCA to a one-dimensional array. You are applying it to a 2D matrix, 1500 x 5236, and reducing it to 1500 x 200; this is exactly what you see online: a 2D matrix reduced to a smaller feature space. Tutorials online will often do so in an extreme fashion (e.g. down to 1500 x 2) because one of the main uses of PCA is data visualisation, and plotting anything beyond 2 dimensions is ... hard ;)
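
For example, a typical tutorial snippet only keeps two components so the data can be plotted on a plane; a minimal sketch, assuming your X_train and some numeric label array y_train:

import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# project the 1500 x 5236 matrix onto its first two principal components
X_2d = PCA(n_components=2).fit_transform(X_train)

# every image becomes a single point; colour by identity label
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y_train, s=5)
plt.xlabel("first principal component")
plt.ylabel("second principal component")
plt.show()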

Should I instead reshape each image back into a matrix and then apply PCA on it?

No, there seems to be some confusion about what the matrix is. Your entire dataset is the matrix; if you kept each picture as a 2D image you would end up with a 3D tensor, not a matrix. PCA, as traditionally defined, cannot be applied to anything but a 2D matrix.
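
To make that concrete: if you stacked the images without flattening them, scikit-learn's PCA would not even accept the input. A quick sketch with hypothetical shapes:

import numpy as np
from sklearn.decomposition import PCA

# hypothetical 3D tensor of images: n_images x height x width
images = np.random.rand(1500, 64, 64)

# PCA(n_components=200).fit(images)   # raises ValueError: PCA only accepts 2D input
X = images.reshape(len(images), -1)   # flatten back into an n_images x (height*width) matrix
X_reduced = PCA(n_components=200).fit_transform(X)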

Or just leave it as it is? Is it a good approach at all? Maybe there are some alternatives to PCA in my case?

PCA is just a heuristic regularisation technique here. You are removing information from your data to avoid overfitting. There is absolutely no guarantee it will work or help. And there are many, many other regularisation techniques one can try:

  • regularisation losses, e.g. weight decay (a small sketch follows this list)
  • regularisation in the network itself, e.g. dropout
  • regularisation in the data itself, e.g. "data augmentation" (training on slight transformations of your pictures that do not affect the final label, e.g. rotations etc.)
  • regularisation in the way the loss is specified, e.g. through soft labels or mixup.
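
For instance, weight decay with the MLPClassifier you already use is just its L2 penalty alpha; a minimal sketch (the alpha value and layer size are arbitrary illustrations, not recommendations):

from sklearn.neural_network import MLPClassifier

# alpha is MLPClassifier's L2 penalty term (weight decay); larger values regularise harder
clf = MLPClassifier(hidden_layer_sizes=(1024,), alpha=1e-2, max_iter=500, random_state=42)
clf.fit(X_train_pca, y_train)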