Can autoencoders be used to extract useful (not truthful) representations?


I'm looking for a neural network model that can extract useful information from an image. Here, "useful" is defined arbitrarily by the user, based on the specific task the autoencoder needs to be optimized for.

I'm very new to the field, and I know autoencoders are typically optimized to retain as much of the original information as possible. But would it make sense to modify the loss function so the autoencoder retains only the information that is relevant to the task at hand? Or would I be better off using a different kind of model?

CodePudding user response:

You are literally defining a regular MLP :)

Given an encoder f and a decoder g, the standard autoencoder objective is

L_{AE} = E || g(f(x)) - x ||^2

Now imagine we have an additional target of interest, y, and an extra mapping h onto this target space. Attaching h to the autoencoder's output gives

L_{AE,y} = E || h(g(f(x))) - y ||^2

which is equivalent to

L_{MLP} = E || MLP(x) - y ||^2

Of course, you can still mix the two objectives, e.g. minimize a weighted sum L = L_{MLP} + alpha * L_{AE}, and treat it as multi-task learning.
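
A minimal PyTorch sketch of that mixed objective, following the compositions above (all layer sizes, the target type, and the weight `alpha` are arbitrary placeholders for illustration, not anything prescribed by the answer):

```python
import torch
import torch.nn as nn

# Placeholder dimensions for this sketch.
INPUT_DIM, LATENT_DIM, TARGET_DIM = 784, 32, 10

f = nn.Sequential(nn.Linear(INPUT_DIM, 128), nn.ReLU(), nn.Linear(128, LATENT_DIM))  # encoder f
g = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, INPUT_DIM))  # decoder g
h = nn.Linear(INPUT_DIM, TARGET_DIM)                                                 # task mapping h

params = list(f.parameters()) + list(g.parameters()) + list(h.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
mse = nn.MSELoss()

alpha = 0.1  # weight on the reconstruction term; alpha = 0 reduces to a plain MLP trained on y


def training_step(x, y):
    """One gradient step on L = ||h(g(f(x))) - y||^2 + alpha * ||g(f(x)) - x||^2."""
    recon = g(f(x))                            # g(f(x)): reconstruction
    pred = h(recon)                            # h(g(f(x))): task prediction
    loss = mse(pred, y) + alpha * mse(recon, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Example usage with random data:
x = torch.randn(16, INPUT_DIM)
y = torch.randn(16, TARGET_DIM)
print(training_step(x, y))
```

With alpha = 0 this is exactly the L_{MLP} objective above; with alpha > 0 the encoder is pushed to keep both task-relevant and reconstructible information.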
