Dimensionality Reduction Using Neural Networks
Published: 2007
A neural network with multiple hidden layers was trained as an autoencoder using the steepest descent, scaled conjugate gradient, and Alopex algorithms. These algorithms were used in different combinations, with steepest descent and Alopex serving as pretraining algorithms followed by fine-tuning with scaled conjugate gradient; each algorithm was also used to train the autoencoders without any pretraining. Three datasets were used for training: USPS digits, MNIST digits, and Olivetti faces. The results were compared with those of Hinton and Salakhutdinov (2006) on the MNIST and Olivetti faces datasets. The results confirm that pretraining is important for obtaining good reconstructions, and that the pretraining approach of Hinton and Salakhutdinov achieves lower RMSE than the other methods tested; scaled conjugate gradient, however, was the fastest computationally.
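The training scheme described above — greedy layer-wise pretraining of a stacked autoencoder followed by end-to-end fine-tuning — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: plain gradient descent stands in for steepest descent, scaled conjugate gradient, and Alopex, the network sizes and synthetic data are placeholders for the image datasets, and all function names are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_shallow_ae(X, n_hidden, epochs=200, lr=0.1):
    """Pretrain one layer: a shallow autoencoder X -> tanh hidden -> linear X."""
    n = X.shape[1]
    W1 = rng.normal(0, 0.1, (n, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, n)); b2 = np.zeros(n)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # encode
        err = (H @ W2 + b2) - X             # linear decode, reconstruction error
        # backprop for the squared-error loss
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H**2)
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

def rmse(X, Y):
    return np.sqrt(np.mean((X - Y) ** 2))

# synthetic data standing in for USPS/MNIST/Olivetti images
X = rng.normal(0, 1, (256, 16))

# greedy pretraining: layer 1 on the data, layer 2 on layer-1 codes
W1, b1, V1, c1 = train_shallow_ae(X, 8)
H1 = np.tanh(X @ W1 + b1)
W2, b2, V2, c2 = train_shallow_ae(H1, 4)

# fine-tune the stacked encoder/decoder (16 -> 8 -> 4 -> 8 -> 16) end to end
params = [W1, b1, W2, b2, V2, c2, V1, c1]
lr = 0.05
for _ in range(300):
    H1 = np.tanh(X @ params[0] + params[1])     # encoder layer 1
    H2 = np.tanh(H1 @ params[2] + params[3])    # encoder layer 2 (code)
    D1 = np.tanh(H2 @ params[4] + params[5])    # decoder layer 1
    err = (D1 @ params[6] + params[7]) - X      # linear output layer
    # backprop through all four layers
    gV1 = D1.T @ err / len(X); gc1 = err.mean(0)
    dD1 = (err @ params[6].T) * (1 - D1**2)
    gV2 = H2.T @ dD1 / len(X); gc2 = dD1.mean(0)
    dH2 = (dD1 @ params[4].T) * (1 - H2**2)
    gW2 = H1.T @ dH2 / len(X); gb2 = dH2.mean(0)
    dH1 = (dH2 @ params[2].T) * (1 - H1**2)
    gW1 = X.T @ dH1 / len(X); gb1 = dH1.mean(0)
    for p, g in zip(params, [gW1, gb1, gW2, gb2, gV2, gc2, gV1, gc1]):
        p -= lr * g

# reconstruction RMSE after fine-tuning (the comparison metric used in the paper)
H1 = np.tanh(X @ params[0] + params[1])
H2 = np.tanh(H1 @ params[2] + params[3])
D1 = np.tanh(H2 @ params[4] + params[5])
recon = D1 @ params[6] + params[7]
print(rmse(X, recon))
```

The no-pretraining baselines in the abstract correspond to skipping the two `train_shallow_ae` calls and fine-tuning from random weights directly.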