Future Work
Stochastic Optimization
● Goal: minimize a finite-sum objective f(x) = (1/n) Σ_i f_i(x)
● Stochastic gradient descent (SGD)
○ Approximates the full gradient with the gradient of a randomly sampled component
● Variance reduction methods
○ Variance of the gradient estimate approaches 0 as the iterates converge
○ Examples:
■ Stochastic variance reduced gradient (SVRG) [1]
■ Stochastic average gradient (SAG) [2]
● Use MOACV to reduce variance further
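As a rough illustration of the variance reduction idea (a toy sketch, not part of the proposed work), the SVRG estimator replaces the plain SGD gradient g_i(x) with g_i(x) − g_i(x̃) + ∇f(x̃) for a snapshot point x̃; both are unbiased, but the SVRG estimator's variance shrinks as x approaches x̃. The least-squares problem and all names below are illustrative:

```python
import numpy as np

# Toy finite-sum least-squares problem (illustrative):
# f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_i(x, i):
    # Gradient of the i-th component f_i(x) = 0.5 * (a_i^T x - b_i)^2
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    # Exact gradient of the finite-sum objective
    return A.T @ (A @ x - b) / n

# Current iterate x and a nearby snapshot point x_snap
x = rng.normal(size=d)
x_snap = x + 0.01 * rng.normal(size=d)
mu = full_grad(x_snap)  # full gradient at the snapshot

# Plain SGD estimator: g_i(x) for a uniformly random i
sgd_est = np.array([grad_i(x, i) for i in range(n)])

# SVRG estimator: g_i(x) - g_i(x_snap) + mu (still unbiased)
svrg_est = np.array([grad_i(x, i) - grad_i(x_snap, i) + mu for i in range(n)])

# Compare mean squared deviation from the true gradient
g = full_grad(x)
sgd_var = np.mean(np.sum((sgd_est - g) ** 2, axis=1))
svrg_var = np.mean(np.sum((svrg_est - g) ** 2, axis=1))
print(sgd_var, svrg_var)  # SVRG variance is far smaller near the snapshot
```

Because the snapshot is close to the current iterate, the correction term g_i(x) − g_i(x̃) nearly cancels the per-sample noise, which is the mechanism SVRG and SAG exploit.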
Thomas Dixon
[1] Johnson, R., and Zhang, T. (2013). "Accelerating stochastic gradient descent using predictive variance reduction"
[2] Le Roux, N., Schmidt, M., and Bach, F. (2012). "A stochastic gradient method with an exponential convergence rate for finite training sets"