Statistical inference with deep latent variable models
Author
- Najmeh Abiri
Summary, in English
Finding a suitable way to represent information in a dataset is one of the fundamental problems in Artificial Intelligence. With limited labeled information, unsupervised learning algorithms help to discover useful representations. One of the applications of such models is imputation, where missing values are estimated by learning the underlying correlations in a dataset. This thesis explores two of unsupervised techniques: stacked denoising autoencoders and variational autoencoders (VAEs). Using stacked denoising autoencoders, we developed a consistent framework to handle incomplete data with multi-type variables. This deterministic method improved missing data estimation compared to several state-of-the-art imputation methods.
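To make the idea concrete, here is a minimal, hypothetical sketch of denoising-autoencoder imputation in PyTorch (the names, layer sizes, and training settings are illustrative assumptions, not the framework developed in the thesis): missing entries are zeroed out as the corruption, the reconstruction loss is computed on observed entries only, and the trained network fills in the missing ones.

```python
# Minimal, hypothetical sketch of denoising-autoencoder imputation.
# Illustrative only; not the framework developed in the thesis.
import torch
import torch.nn as nn


class DenoisingAE(nn.Module):
    def __init__(self, n_features, n_hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))


def impute(model, x, mask, steps=500, lr=1e-3):
    """Train on observed entries only; `mask` is 1.0 where x is observed."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        corrupted = x * mask                                   # zero out missing entries
        recon = model(corrupted)
        loss = (((recon - x) ** 2) * mask).sum() / mask.sum()  # observed-only MSE
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():  # keep observed values, fill only the missing entries
        return x * mask + model(x * mask) * (1 - mask)
```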
Further, we explored variational autoencoders, a probabilistic form of autoencoders that jointly optimizes a neural network-based inference model and a generative model. Despite the promise of these techniques, a main difficulty is an uninformative latent space. We propose a flexible family, the Student's t-distribution, as the prior for VAEs to learn a more informative latent representation. By comparing different forms of the covariance matrix for both the Gaussian and Student's t-distributions, we conclude that a weakly informative prior, such as the Student's t with a small number of parameters, improves the ability of VAEs to approximate the true posterior.
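As a hedged sketch of this idea (the model class, layer sizes, and fixed degrees of freedom below are assumptions for illustration, not the thesis implementation), a VAE can place a heavy-tailed Student's t prior on the latent space. Because the KL term between a Gaussian posterior and a Student's t prior has no closed form, it is estimated by Monte Carlo from the reparameterized sample:

```python
# Hypothetical sketch of a VAE with a Student's t prior on the latent space.
import torch
import torch.nn as nn
from torch.distributions import Normal, StudentT


class TPriorVAE(nn.Module):
    def __init__(self, n_features, n_latent=2, df=3.0):
        super().__init__()
        self.enc = nn.Linear(n_features, 2 * n_latent)  # outputs mean and log-variance
        self.dec = nn.Linear(n_latent, n_features)
        self.prior = StudentT(df)                       # heavy-tailed, weakly informative

    def elbo(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        q = Normal(mu, (0.5 * logvar).exp())
        z = q.rsample()                                 # reparameterization trick
        log_lik = Normal(self.dec(z), 1.0).log_prob(x).sum(-1)
        # Monte Carlo KL: no closed form between a Gaussian q and a Student's t prior.
        kl = (q.log_prob(z) - self.prior.log_prob(z)).sum(-1)
        return (log_lik - kl).mean()                    # maximize this ELBO
```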
Finally, we used VAEs with both Gaussian and Student's t priors as multiple imputation methods on two datasets with missing values. Moreover, using the labels provided with these datasets, we trained a supervised network and evaluated the quality of the imputed variables on the downstream prediction task. In both cases, the VAEs showed improvements compared to other methods.
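For multiple imputation, one can draw several completed datasets from the fitted model and pool the downstream analyses (for example with Rubin's rules). A hypothetical helper, building on the `TPriorVAE` sketch above:

```python
# Hypothetical multiple-imputation helper for the TPriorVAE sketch above.
import torch
from torch.distributions import Normal


def multiple_impute(vae, x, mask, n_draws=10):
    """Return `n_draws` completed copies of x; `mask` is 1.0 where x is observed."""
    draws = []
    with torch.no_grad():
        for _ in range(n_draws):
            mu, logvar = vae.enc(x * mask).chunk(2, dim=-1)
            z = Normal(mu, (0.5 * logvar).exp()).sample()  # one posterior draw
            recon = vae.dec(z)
            draws.append(x * mask + recon * (1 - mask))    # fill only missing entries
    return draws  # analyze each completed dataset and pool the results
```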
Department/s
- Computational Biology and Biological Physics
Publishing year
2019
Language
English
Document type
Dissertation
Publisher
Lund University, Faculty of Science
Topic
- Bioinformatics (Computational Biology)
Keywords
- Deep Learning
- Generative Models
- Variational Inference
- Missing data
- Imputation
- Fysicumarkivet A:2019:Abiri
Status
Published
Supervisor
- Mattias Ohlsson
- Patrik Edén
- Carsten Peterson
ISBN/ISSN/Other
- ISBN: 978-91-7895-271-7
- ISBN: 978-91-7895-272-4
Defence date
31 October 2019
Defence time
10:15
Defence place
Rydbergsalen, Fysicum, Sölvegatan 14A, Lund
Opponent
- Ole Winther (Professor)