Evolution of Stacked Autoencoders

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Standard

Evolution of Stacked Autoencoders. / Silhan, Tim; Oehmcke, Stefan; Kramer, Oliver.

2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings. Institute of Electrical and Electronics Engineers Inc., 2019. pp. 823-830, 8790182.


Harvard

Silhan, T, Oehmcke, S & Kramer, O 2019, Evolution of Stacked Autoencoders. in 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings, 8790182, Institute of Electrical and Electronics Engineers Inc., pp. 823-830, 2019 IEEE Congress on Evolutionary Computation, CEC 2019, Wellington, New Zealand, 10/06/2019. https://doi.org/10.1109/CEC.2019.8790182

APA

Silhan, T., Oehmcke, S., & Kramer, O. (2019). Evolution of Stacked Autoencoders. In 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings (pp. 823-830). [8790182] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CEC.2019.8790182

Vancouver

Silhan T, Oehmcke S, Kramer O. Evolution of Stacked Autoencoders. In 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings. Institute of Electrical and Electronics Engineers Inc. 2019. p. 823-830. 8790182. https://doi.org/10.1109/CEC.2019.8790182

Author

Silhan, Tim ; Oehmcke, Stefan ; Kramer, Oliver. / Evolution of Stacked Autoencoders. 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings. Institute of Electrical and Electronics Engineers Inc., 2019. pp. 823-830

BibTeX

@inproceedings{23e782d33f924a5eae57f46b24bb3088,
title = "Evolution of Stacked Autoencoders",
abstract = "Choosing the best hyperparameters for neural networks is a big challenge. This paper proposes a method that automatically initializes and adjusts hyperparameters during the training process of stacked autoencoders. A population of autoencoders is trained with gradient-descent-based weight updates, while hyperparameters are mutated and weights are inherited in a Lamarckian kind of way. The training is conducted layer-wise, while each new layer initiates a new neuroevolutionary optimization process. In the fitness function of the evolutionary approach a dimensionality reduction quality measure is employed. Experiments show the contribution of the most significant hyperparameters, while analyzing their lineage during the training process. The results confirm that the proposed method outperforms a baseline approach on MNIST, FashionMNIST, and the Year Prediction Million Song Database.",
keywords = "autoencoder, hyperparameter tuning, neuroevolution",
author = "Tim Silhan and Stefan Oehmcke and Oliver Kramer",
year = "2019",
doi = "10.1109/CEC.2019.8790182",
language = "English",
pages = "823--830",
booktitle = "2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
address = "United States",
note = "2019 IEEE Congress on Evolutionary Computation, CEC 2019 ; Conference date: 10-06-2019 Through 13-06-2019",

}

RIS

TY - GEN

T1 - Evolution of Stacked Autoencoders

AU - Silhan, Tim

AU - Oehmcke, Stefan

AU - Kramer, Oliver

PY - 2019

Y1 - 2019

N2 - Choosing the best hyperparameters for neural networks is a big challenge. This paper proposes a method that automatically initializes and adjusts hyperparameters during the training process of stacked autoencoders. A population of autoencoders is trained with gradient-descent-based weight updates, while hyperparameters are mutated and weights are inherited in a Lamarckian kind of way. The training is conducted layer-wise, while each new layer initiates a new neuroevolutionary optimization process. In the fitness function of the evolutionary approach a dimensionality reduction quality measure is employed. Experiments show the contribution of the most significant hyperparameters, while analyzing their lineage during the training process. The results confirm that the proposed method outperforms a baseline approach on MNIST, FashionMNIST, and the Year Prediction Million Song Database.

AB - Choosing the best hyperparameters for neural networks is a big challenge. This paper proposes a method that automatically initializes and adjusts hyperparameters during the training process of stacked autoencoders. A population of autoencoders is trained with gradient-descent-based weight updates, while hyperparameters are mutated and weights are inherited in a Lamarckian kind of way. The training is conducted layer-wise, while each new layer initiates a new neuroevolutionary optimization process. In the fitness function of the evolutionary approach a dimensionality reduction quality measure is employed. Experiments show the contribution of the most significant hyperparameters, while analyzing their lineage during the training process. The results confirm that the proposed method outperforms a baseline approach on MNIST, FashionMNIST, and the Year Prediction Million Song Database.

KW - autoencoder

KW - hyperparameter tuning

KW - neuroevolution

UR - http://www.scopus.com/inward/record.url?scp=85071314505&partnerID=8YFLogxK

U2 - 10.1109/CEC.2019.8790182

DO - 10.1109/CEC.2019.8790182

M3 - Article in proceedings

SP - 823

EP - 830

BT - 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings

PB - Institute of Electrical and Electronics Engineers Inc.

T2 - 2019 IEEE Congress on Evolutionary Computation, CEC 2019

Y2 - 10 June 2019 through 13 June 2019

ER -

ID: 227137612
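
The abstract above describes the training scheme only in prose. Below is a minimal, hypothetical Python sketch of what such a Lamarckian, layer-wise neuroevolution loop could look like; it is not the authors' implementation. All names are invented, only the learning rate is evolved, and plain reconstruction error stands in for the paper's dimensionality-reduction quality measure.

# Hypothetical sketch (not the authors' code): Lamarckian neuroevolution of one
# autoencoder layer. Only the learning rate is evolved here, and mean squared
# reconstruction error is used as a stand-in fitness measure.
import numpy as np

rng = np.random.default_rng(0)

def init_individual(n_in, n_hidden):
    # One linear autoencoder: encoder/decoder weights plus its own learning rate.
    return {
        "W_enc": rng.normal(0.0, 0.1, (n_in, n_hidden)),
        "W_dec": rng.normal(0.0, 0.1, (n_hidden, n_in)),
        "lr": float(10 ** rng.uniform(-3, -1)),   # evolved hyperparameter
    }

def train_step(ind, X):
    # One gradient-descent update of the reconstruction loss.
    H = X @ ind["W_enc"]                          # encode
    err = H @ ind["W_dec"] - X                    # decode and compare
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ ind["W_dec"].T) / len(X)
    ind["W_enc"] -= ind["lr"] * grad_enc
    ind["W_dec"] -= ind["lr"] * grad_dec

def fitness(ind, X):
    # Stand-in quality measure: reconstruction MSE (lower is better).
    return float(np.mean((X @ ind["W_enc"] @ ind["W_dec"] - X) ** 2))

def mutate(ind):
    # Child inherits the already-trained weights (Lamarckian) but perturbs the lr.
    return {
        "W_enc": ind["W_enc"].copy(),
        "W_dec": ind["W_dec"].copy(),
        "lr": ind["lr"] * float(np.exp(rng.normal(0.0, 0.3))),
    }

# Toy data and a small (mu + lambda)-style loop for a single layer of the stack.
X = rng.normal(size=(256, 20))
population = [init_individual(20, 5) for _ in range(6)]
for generation in range(20):
    for ind in population:
        for _ in range(10):                       # gradient training between selections
            train_step(ind, X)
    population.sort(key=lambda ind: fitness(ind, X))
    parents = population[:3]                      # keep the better half
    population = parents + [mutate(p) for p in parents]

best = population[0]
print("best reconstruction MSE:", fitness(best, X))
# For a stacked autoencoder, the codes of the best individual (X @ best["W_enc"])
# would become the input of the next layer's own evolutionary run (layer-wise training).

In the paper's actual setup, more hyperparameters than the learning rate are mutated, fitness is a dimensionality-reduction quality measure rather than raw reconstruction error, and each newly added layer starts a fresh neuroevolutionary run on the codes produced by the previous layer, which the closing comment above only hints at.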