Mathematics > Optimization and Control
[Submitted on 3 Nov 2020 (v1), last revised 1 May 2024 (this version, v3)]
Title: Multi-Iteration Stochastic Optimizers
Abstract: We introduce Multi-Iteration Stochastic Optimizers, a novel class of first-order stochastic optimizers in which the relative $L^2$ error is estimated and controlled using successive control variates along the path of iterations. By exploiting the correlation between iterates, control variates may reduce the estimator's variance so that an accurate estimation of the mean gradient becomes computationally affordable. We name the resulting estimator of the mean gradient the Multi-Iteration stochastiC Estimator (MICE). Being non-intrusive, MICE can in principle be coupled with any first-order stochastic optimizer. Our generic algorithm adaptively decides which iterates to keep in its index set. We present an error analysis of MICE and a convergence analysis of Multi-Iteration Stochastic Optimizers for different classes of problems, including some non-convex cases. In the smooth, strongly convex setting, we show that to approximate a minimizer with accuracy $\mathrm{tol}$, SGD-MICE requires, on average, $O(\mathrm{tol}^{-1})$ stochastic gradient evaluations, whereas SGD with adaptive batch sizes requires $O(\mathrm{tol}^{-1} \log(\mathrm{tol}^{-1}))$. Moreover, in a numerical evaluation, SGD-MICE reached the prescribed tolerance with less than 3% of the gradient evaluations needed by adaptive-batch SGD. The MICE estimator also provides a straightforward stopping criterion based on the gradient norm, which we validate in consistency tests. To assess the efficiency of MICE, we present several examples using SGD-MICE and Adam-MICE, including one based on a stochastic adaptation of the Rosenbrock function and logistic-regression training on various datasets. Compared to SGD, SAG, SAGA, SVRG, and SARAH, the Multi-Iteration Stochastic Optimizers reduced the gradient sampling cost in all cases tested, without the need to tune parameters for each example, while also being competitive in runtime in some cases.
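To make the control-variate idea concrete, here is a minimal, illustrative Python sketch of a MICE-like estimator coupled with SGD. It is an assumption-laden toy, not the authors' implementation: the objective (a stochastic quadratic), the per-level sample sizes, and the ever-growing index set are all hypothetical choices for illustration, whereas the actual MICE algorithm adaptively selects which iterates to keep and how many samples to draw so as to control the relative $L^2$ error of the gradient estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f(x, theta):
    # Stochastic gradient of a toy objective f(x; theta) = 0.5 * ||x - theta||^2,
    # so the mean gradient is x - E[theta].
    return x - theta

def sample_thetas(n, dim):
    # Hypothetical noise distribution with E[theta] = 1 in every coordinate.
    return rng.normal(loc=1.0, scale=2.0, size=(n, dim))

def mice_like_gradient(index_set, n_per_level, dim):
    # Telescoping control-variate estimator of the mean gradient at the newest
    # iterate x_k:
    #   E[grad(x_k)] ~= mean_j grad(x_0, theta_j)
    #                 + sum_i mean_j [ grad(x_i, theta_j) - grad(x_{i-1}, theta_j) ]
    # Successive iterates are highly correlated, so each difference term has
    # small variance and needs only a few samples.
    thetas = sample_thetas(n_per_level[0], dim)
    est = grad_f(index_set[0][None, :], thetas).mean(axis=0)
    for i in range(1, len(index_set)):
        thetas = sample_thetas(n_per_level[i], dim)
        diff = grad_f(index_set[i][None, :], thetas) \
             - grad_f(index_set[i - 1][None, :], thetas)
        est += diff.mean(axis=0)
    return est

# SGD driven by the MICE-like estimator (SGD-MICE in spirit).
dim, lr = 2, 0.1
x = np.zeros(dim)
index_set, n_per_level = [x.copy()], [256]  # large batch at the root iterate
for k in range(50):
    g = mice_like_gradient(index_set, n_per_level, dim)
    x = x - lr * g
    index_set.append(x.copy())
    n_per_level.append(16)  # assumption: small fixed batch per new level
print("final iterate:", x, "(true minimizer is E[theta] = [1, 1])")
```

In this sketch the cheap difference terms carry most of the per-iteration cost, which is the mechanism behind the claimed savings in gradient evaluations; the paper's algorithm additionally restarts or prunes the index set adaptively, which the fixed lists above do not attempt to reproduce.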
Submission history
From: Andre Gustavo Carlon
[v1] Tue, 3 Nov 2020 14:11:52 UTC (3,180 KB)
[v2] Thu, 12 Nov 2020 08:45:58 UTC (3,188 KB)
[v3] Wed, 1 May 2024 09:45:11 UTC (4,955 KB)