Statistical theory of deep generative models
Topic | Learning Theory |
---|---|
Format | Hybrid |
Location | SIMIS, Shanghai |
Speaker | Lin Lizhen (University of Maryland) |
Time (GMT+8) |
Abstract
Deep generative models are probabilistic models in which the generator is parameterized by a deep neural network.
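As a minimal illustration of the push-forward structure (a sketch, not the speaker's construction), the following PyTorch snippet maps a low-dimensional latent draw through a deep network into a high-dimensional ambient space; the dimensions and layer sizes are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

# Push-forward generator: latent z in R^d is mapped by a deep network
# g_theta into the ambient space R^D. The model distribution is the law
# of g_theta(z) with z ~ N(0, I_d).
# d, D, and the architecture below are illustrative assumptions.
d, D = 4, 100  # latent (intrinsic) and ambient dimensions

generator = nn.Sequential(
    nn.Linear(d, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, D),
)

z = torch.randn(128, d)  # 128 latent draws from N(0, I_d)
x = generator(z)         # samples in R^D, supported near a d-dimensional set
```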
They are popular models for high-dimensional data such as text, images, and speech, and have achieved impressive empirical success. Despite this empirical success, theoretical understanding of such models is largely lacking. We investigate statistical properties of deep generative models from a nonparametric distribution estimation viewpoint. In the considered model, data are assumed to be observed in a high-dimensional ambient space but to concentrate near a low-dimensional structure, such as a lower-dimensional manifold (a toy instance of this data model is sketched after the list below). This talk will explain, through the lens of statistical theory, why deep generative models can perform well. In particular, we will provide insights into
- how deep generative models can avoid the curse of dimensionality and outperform classical nonparametric estimators (a standard rate comparison is sketched after this list);
- how likelihood approaches work for high-dimensional distribution estimation, especially in adapting to the intrinsic geometry of the data (the textbook change-of-variables likelihood is recalled after this list).
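To make the assumed data model concrete, here is a NumPy toy: samples on a one-dimensional manifold (a circle) are linearly embedded in a high-dimensional ambient space and perturbed by small noise. The circle, the embedding map, the noise level, and all sizes are illustrative assumptions, not the talk's setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data model: points on a 1-dimensional manifold (a circle),
# embedded in R^D by a random linear map, plus small ambient noise.
n, D = 500, 100
t = rng.uniform(0, 2 * np.pi, size=n)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)    # intrinsic dimension 1
A = rng.standard_normal((2, D)) / np.sqrt(D)         # embedding into R^D
X = circle @ A + 0.01 * rng.standard_normal((n, D))  # concentrates near the manifold
```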
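On the first point, a standard benchmark from nonparametric estimation illustrates why ambient versus intrinsic dimension matters; the rates below are the classical ones for a $\beta$-smooth density, not necessarily the exact rates of the talk.

```latex
% Classical minimax rate in ambient dimension D versus a rate that
% adapts to the intrinsic dimension d of the supporting manifold.
\[
  \underbrace{n^{-\beta/(2\beta + D)}}_{\text{ambient-dimension rate}}
  \qquad \text{vs.} \qquad
  \underbrace{n^{-\beta/(2\beta + d)}}_{\text{intrinsic-dimension rate}},
  \qquad d \ll D.
\]
% E.g., with \beta = 2, D = 100, d = 4: the ambient rate n^{-2/104}
% is essentially flat in n, while the intrinsic rate n^{-2/8} = n^{-1/4}
% improves at a practical pace.
```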
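On the second point, the textbook likelihood for an invertible, full-dimensional generator is the change-of-variables identity below (as used, e.g., by normalizing flows); the talk's likelihood approach may differ, and the breakdown of this identity when the data are intrinsically lower-dimensional is exactly where adaptation to intrinsic geometry becomes relevant.

```latex
% Change-of-variables likelihood for an invertible generator
% g_\theta : R^D \to R^D with latent density p_Z (e.g., standard normal):
\[
  \log p_\theta(x)
  = \log p_Z\bigl(g_\theta^{-1}(x)\bigr)
  + \log\bigl|\det J_{g_\theta^{-1}}(x)\bigr|,
\]
% fitted by maximizing \sum_{i=1}^n \log p_\theta(x_i). If the data
% concentrate on a d-dimensional set with d < D, no density exists
% in R^D, so likelihood methods must adapt to the intrinsic geometry.
```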