I have a dataset that contains a large number of vectors, where each vector has 21,300 values. Naturally, I want to reduce the dimension of each vector, i.e. compress the vectors.
My dataset is not split into training and testing sets because I want all of the vectors to be compressed. I have already tried fitting the autoencoder on the first vector in my dataset and then using the resulting encoder to compress the remaining vectors; however, all of the compressed vectors ended up looking very similar to the first one. My question is: how can I go about compressing these vectors?
CodePudding user response:
Build the autoencoder model using all of the data, not just the first vector. An autoencoder fitted on a single sample essentially memorizes that sample, so every input gets reconstructed as roughly that one vector.
A minimal example in Python is sketched below.
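The sketch below assumes Keras is available and uses an illustrative latent size of 64 and placeholder data; the layer widths, epochs, and batch size are assumptions you would tune for your own dataset. The key point is that `fit` is called on the whole array of vectors, and the trained encoder is then used to compress every vector.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 21300   # length of each vector, as stated in the question
latent_dim = 64     # assumed compressed size; tune for your data

# X stands in for the full dataset, shape (n_vectors, 21300).
# Replace this placeholder with your real array of vectors.
X = np.random.rand(1000, input_dim).astype("float32")

# Encoder: maps a 21300-dim vector down to latent_dim values
inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(512, activation="relu")(inputs)
encoded = layers.Dense(latent_dim, activation="relu")(encoded)

# Decoder: reconstructs the original vector from the latent code
decoded = layers.Dense(512, activation="relu")(encoded)
decoded = layers.Dense(input_dim, activation="linear")(decoded)

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)

autoencoder.compile(optimizer="adam", loss="mse")

# Fit on the entire dataset, not a single vector
autoencoder.fit(X, X, epochs=50, batch_size=32, shuffle=True)

# Compress every vector with the trained encoder
compressed = encoder.predict(X)   # shape (n_vectors, latent_dim)
```

Since you want every vector compressed rather than held out for evaluation, training on the full dataset is fine here; a validation split would only be needed if you wanted to monitor reconstruction quality on unseen vectors.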