UCL Discovery Stage

A lifted Bregman formulation for the inversion of deep neural networks

Wang, Xiaoyu; Benning, Martin; (2023) A lifted Bregman formulation for the inversion of deep neural networks. Frontiers in Applied Mathematics and Statistics, 9, Article 1176850. 10.3389/fams.2023.1176850.


Abstract

We propose a novel framework for the regularized inversion of deep neural networks. The framework builds on the authors' recent work on training feed-forward neural networks without differentiating the activation functions. It lifts the parameter space into a higher-dimensional space by introducing auxiliary variables, and penalizes these variables with tailored Bregman distances. We propose a family of variational regularizations based on these Bregman distances, present theoretical results, and support their practical application with numerical examples. In particular, we present what is, to the best of our knowledge, the first convergence result for the regularized inversion of a single-layer perceptron that assumes only that the solution of the inverse problem lies in the range of the regularization operator, and we show that the regularized inverse provably converges to the true inverse as the measurement errors converge to zero.
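To give a rough sense of the idea, the following is a minimal, illustrative sketch (not the authors' implementation) of a lifted Bregman-type penalty for the special case of a ReLU activation; the penalty form, variable names, and the plain gradient-descent inversion below are assumptions made for illustration only.

```python
import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

def bregman_penalty(z, a):
    """Lifted Bregman-type penalty for the ReLU activation (illustrative).

    B(z, a) = 0.5*||z||^2 + 0.5*||relu(a)||^2 - <z, a>  (for z >= 0)
    is nonnegative and vanishes exactly when z = relu(a), yet its
    gradient in a is simply relu(a) - z, so no derivative of the
    non-smooth activation is ever needed.
    """
    return 0.5 * np.sum(z ** 2) + 0.5 * np.sum(relu(a) ** 2) - np.dot(z, a)

# Invert a single-layer ReLU perceptron y = relu(W @ x_true) by plain
# gradient descent on the convex, smooth objective x |-> B(y, W @ x).
rng = np.random.default_rng(0)
W = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
y = relu(W @ x_true)

x = np.zeros(5)
step = 1.0 / np.linalg.norm(W, 2) ** 2   # 1/L, with L the gradient Lipschitz constant
for _ in range(20000):
    x -= step * W.T @ (relu(W @ x) - y)  # gradient of B(y, W x) in x

print(bregman_penalty(y, W @ x))  # close to 0: relu(W x) matches y
```

Note that the update never differentiates the ReLU itself: the penalty's gradient in the pre-activation argument is `relu(W @ x) - y`, which is precisely the kind of structure the lifted formulation exploits.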

Type: Article
Title: A lifted Bregman formulation for the inversion of deep neural networks
Open access status: An open access version is available from UCL Discovery
DOI: 10.3389/fams.2023.1176850
Publisher version: https://doi.org/10.3389/fams.2023.1176850
Language: English
Additional information: Copyright © 2023 Wang and Benning. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) (http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Keywords: inverse problems, regularization theory, lifted network training, Bregman distance, perceptron, multi-layer perceptron, variational regularization, total variation regularization
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery-pp.ucl.ac.uk/id/eprint/10189896
