Jahn, T; Jin, B; (2020) On the discrepancy principle for stochastic gradient descent. Inverse Problems, 36(9), Article 095009. https://doi.org/10.1088/1361-6420/abaa58
Abstract
Stochastic gradient descent (SGD) is a promising numerical method for solving large-scale inverse problems. However, its theoretical properties remain largely underexplored through the lens of classical regularization theory. In this note, we study the classical discrepancy principle, one of the most popular a posteriori choice rules, as the stopping criterion for SGD, and prove the finite-iteration termination property and the convergence of the iterate in probability as the noise level tends to zero. The theoretical results are complemented with extensive numerical experiments.
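To make the stopping rule in the abstract concrete: for a linear system Ax = y with noisy data y^δ and noise level δ, the discrepancy principle terminates the iteration at the first iterate whose residual satisfies ||Ax_k − y^δ|| ≤ τδ for some τ > 1. The sketch below is a minimal illustration of this idea, not the authors' code; the function name, the choices of tau and eta0, and the step-size schedule are all illustrative assumptions.

```python
import numpy as np

def sgd_discrepancy(A, y_delta, delta, tau=1.5, eta0=1.0,
                    max_iter=100_000, seed=None):
    """SGD for A x = y_delta, stopped by the discrepancy principle:
    terminate at the first iterate with ||A x - y_delta|| <= tau * delta.
    All parameter names and values are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for k in range(max_iter):
        # Check the full residual before each update (O(n*d); fine for a sketch).
        if np.linalg.norm(A @ x - y_delta) <= tau * delta:
            return x, k
        i = rng.integers(n)            # sample one equation uniformly at random
        eta = eta0 / np.sqrt(k + 1)    # a polynomially decaying step size (assumed)
        x -= eta * (A[i] @ x - y_delta[i]) * A[i]
    return x, max_iter

# Tiny synthetic usage example.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
noise = 0.01 * rng.standard_normal(200)
y_delta, delta = A @ x_true + noise, np.linalg.norm(noise)
x_hat, stop_k = sgd_discrepancy(A, y_delta, delta)
```

As the noise level δ shrinks, the stopping index grows and the returned iterate approaches the exact solution, which is the convergence-in-probability behavior the paper establishes.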
| Field | Value |
|---|---|
| Type | Article |
| Title | On the discrepancy principle for stochastic gradient descent |
| Open access status | An open access version is available from UCL Discovery |
| DOI | 10.1088/1361-6420/abaa58 |
| Publisher version | https://doi.org/10.1088/1361-6420/abaa58 |
| Language | English |
| Additional information | © 2020 IOP Publishing. Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence (https://creativecommons.org/licenses/by/4.0/). |
| UCL classification | UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science |
| URI | https://discovery-pp.ucl.ac.uk/id/eprint/10106687 |