Author(s): Mengdi Wang | Dimitri P. Bertsekas
Journal: Stochastic Systems
ISSN 1946-5238
Volume: 3;
Issue: 1;
Start page: 39;
Date: 2013;
Keywords: Stochastic algorithm | singular system | Monte-Carlo estimation | simulation | proximal method | regularization | approximate dynamic programming
ABSTRACT
We consider the simulation-based solution of linear systems of equations, Ax = b, of various types frequently arising in large-scale applications, where A is singular. We show that the convergence properties of iterative solution methods are frequently lost when they are implemented with simulation (e.g., using sample average approximation), as is often done in important classes of large-scale problems. We focus on special cases of algorithms for singular systems, including some arising in least squares problems and approximate dynamic programming, where convergence of the residual sequence {Ax_k − b} may be obtained, while the sequence of iterates {x_k} may diverge. For some of these special cases, under additional assumptions, we show that the iterate sequence is guaranteed to converge. For situations where the iterates diverge but the residuals converge to zero, we propose schemes for extracting from the divergent sequence another sequence that converges to a solution of Ax = b.
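As a rough illustration of the phenomenon summarized above (and not of the paper's actual algorithms), the following Python sketch runs a simulation-based Richardson-type iteration on a small singular system. The matrix A, vector b, step size gamma, the 1/sqrt(k) error model for the simulated estimates, and the projection used at the end to extract a solution are all illustrative assumptions: the residual tends to zero while the iterate's nullspace component behaves like a random walk and need not converge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Singular, symmetric positive semidefinite A; its nullspace is spanned by (1, -1).
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([2.0, 2.0])   # consistent: every x with x_1 + x_2 = 2 solves Ax = b

gamma = 0.1                # step size (assumed; the range-space part then contracts)
sigma = 1.0                # scale of the simulation noise (assumed)
x = np.zeros(2)
null_dir = np.array([1.0, -1.0]) / np.sqrt(2.0)

for k in range(1, 50001):
    # Sample-average-style estimates of A and b whose errors shrink like 1/sqrt(k).
    A_k = A + (sigma / np.sqrt(k)) * rng.standard_normal((2, 2))
    b_k = b + (sigma / np.sqrt(k)) * rng.standard_normal(2)
    x = x - gamma * (A_k @ x - b_k)   # simulation-based Richardson-type iteration

print("residual ||Ax_k - b||:", np.linalg.norm(A @ x - b))   # tends to zero
print("nullspace component:  ", null_dir @ x)                # random walk; need not converge

# A naive "extraction" for illustration only (not one of the paper's schemes):
# project the possibly drifting iterate onto range(A), which for this symmetric A
# removes the nullspace component and leaves an approximate solution of Ax = b.
P_range = A @ np.linalg.pinv(A)
print("extracted iterate:    ", P_range @ x)                 # close to (1, 1)
```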