Composing stochastic quasi-Newton-type algorithms
13.03.2018, 13:00-14:00
Haus 28, Raum 0.104
SFB-Seminar
Thomas Schön (Uppsala University, Sweden)
In this talk I will focus on one of our recent developments, where we show how the Gaussian process (GP) can be used to solve stochastic optimization problems. Our main motivation for studying these problems is that they arise when estimating unknown parameters in nonlinear state space models using sequential Monte Carlo (SMC). By the very nature of this problem, we can access the cost function (in this case the likelihood function) and its derivatives only via noisy observations, since no closed-form expressions are available. SMC methods do, however, provide unbiased estimates of the likelihood function. We start from the observation that many existing quasi-Newton algorithms can be formulated as learning algorithms, capable of learning local models of the cost function. Inspired by this, we can assemble stochastic quasi-Newton-type algorithms that are applicable when we only have access to noisy observations of the cost function and its derivatives. We will show how the GP model can be used to learn the Hessian, allowing these stochastic optimization problems to be solved efficiently. Additional motivation for studying the stochastic optimization problem stems from the fact that it arises in almost all large-scale supervised machine learning problems, not least in deep learning. I will very briefly mention some ongoing work in which we have removed the GP representation and scaled our ideas to much higher dimensions (in both the size of the dataset and the number of unknown parameters). If time permits, towards the end I will also show a few additional GP constructions that we have been developing recently.
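To make the flavor of the approach concrete, below is a minimal numpy sketch, not the algorithm presented in the talk: noisy gradient observations of a toy quadratic cost are fitted with a GP (squared-exponential kernel, fixed hyperparameters), the posterior mean is differentiated in closed form to obtain a Hessian estimate, and a damped Newton-type step is taken. The toy objective, the helper names (noisy_grad, gp_hessian, sqexp), and all tuning constants are illustrative assumptions.

    # Illustrative sketch: learn a Hessian from noisy gradients via GP
    # regression, then take damped Newton-type steps. All hyperparameters
    # and the toy quadratic cost are assumptions, not from the talk.
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_grad(x, noise=0.05):
        """Noisy gradient of the toy cost f(x) = 0.5 * x^T A x."""
        A = np.array([[3.0, 0.5], [0.5, 1.0]])
        return A @ x + noise * rng.standard_normal(x.shape)

    def sqexp(X1, X2, ell=1.0, sf=1.0):
        """Squared-exponential kernel matrix between rows of X1 and X2."""
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ell**2)

    def gp_hessian(X, G, xs, ell=1.0, sf=1.0, sn=0.05):
        """Estimate the Hessian at xs by differentiating the GP posterior
        mean fitted independently to each gradient component in G."""
        K = sqexp(X, X, ell, sf) + sn**2 * np.eye(len(X))
        alpha = np.linalg.solve(K, G)                   # (N, d)
        kx = sqexp(xs[None, :], X, ell, sf).ravel()     # k(xs, X), (N,)
        dk = -(xs[None, :] - X) / ell**2 * kx[:, None]  # d k(xs, X)/d xs
        H = dk.T @ alpha                                # (d, d)
        return 0.5 * (H + H.T)                          # symmetrize

    # Collect noisy gradients around the current iterate, then step.
    x = np.array([2.0, -1.5])
    for it in range(10):
        X = x + 0.3 * rng.standard_normal((15, 2))      # local design points
        G = np.array([noisy_grad(xi) for xi in X])
        H = gp_hessian(X, G, x)
        g = sqexp(x[None, :], X).ravel() @ np.linalg.solve(
            sqexp(X, X) + 0.05**2 * np.eye(len(X)), G)  # GP-smoothed gradient
        x = x - np.linalg.solve(H + 1e-3 * np.eye(2), g)  # damped Newton step
    print("final iterate:", x)

The appeal of the squared-exponential kernel here is that the posterior mean is smooth, so its derivative (and hence the Hessian estimate) is available in closed form; the GP simultaneously smooths the noise out of the gradient observations before the Newton-type step is taken.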