Authors
Halbert White
Publication date
1989/12/1
Journal
Journal of the American Statistical Association
Volume
84
Issue
408
Pages
1003-1013
Publisher
Taylor & Francis Group
Description
We investigate the properties of a recursive estimation procedure (the method of “back-propagation”) for a class of nonlinear regression models (single hidden-layer feedforward network models) recently developed by cognitive scientists. The results follow from more general results for a class of recursive m-estimators, obtained using theorems of Ljung (1977) and Walk (1977) for the method of stochastic approximation. Conditions are given ensuring that the back-propagation estimator converges almost surely to a parameter value that locally minimizes expected squared error loss (provided the estimator does not diverge) and that the back-propagation estimator is asymptotically normal when centered at this minimizer. This estimator is shown to be statistically inefficient, and a two-step procedure that has efficiency equivalent to that of nonlinear least squares is proposed. Practical issues are illustrated by a …
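The procedure studied in the abstract can be illustrated with a minimal sketch: online (per-observation) back-propagation for a single hidden-layer network fitted to squared error, with decreasing gains of the Robbins–Monro type used in stochastic approximation. All specifics below (logistic hidden units, the sin target, the gain sequence `a0 / t**0.7`, network sizes) are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W, b, v, c):
    """Single hidden-layer feedforward network with logistic hidden units."""
    h = 1.0 / (1.0 + np.exp(-(W @ x + b)))  # hidden activations
    return v @ h + c, h                     # linear output, hidden layer

def backprop_step(x, y, params, lr):
    """One recursive update: gradient step on 0.5 * (f(x) - y)^2."""
    W, b, v, c = params
    yhat, h = forward(x, W, b, v, c)
    e = yhat - y                    # residual
    dv = e * h                      # output-weight gradient
    dc = e                          # output-bias gradient
    dh = e * v * h * (1.0 - h)      # back-propagated through logistic units
    dW = np.outer(dh, x)            # hidden-weight gradient
    db = dh                         # hidden-bias gradient
    return (W - lr * dW, b - lr * db, v - lr * dv, c - lr * dc)

# Nonlinear regression target: E[y | x] = sin(x) with additive noise.
d_in, d_hid = 1, 8
params = (0.5 * rng.standard_normal((d_hid, d_in)),
          np.zeros(d_hid),
          0.5 * rng.standard_normal(d_hid),
          0.0)

a0 = 0.5
for t in range(1, 20001):
    x = rng.uniform(-2, 2, size=d_in)
    y = np.sin(x[0]) + 0.1 * rng.standard_normal()
    # Gains a0 / t^0.7 satisfy the usual stochastic-approximation
    # conditions (sum diverges, sum of squares converges).
    params = backprop_step(x, y, params, lr=a0 / t**0.7)

# Mean squared error of the fitted network against the noiseless target.
xs = np.linspace(-2, 2, 50)
mse = np.mean([(forward(np.array([xv]), *params)[0] - np.sin(xv)) ** 2
               for xv in xs])
```

Consistent with the abstract, such a recursive estimator converges (at best) to a local minimizer of expected squared error and is statistically inefficient relative to nonlinear least squares on the full sample; the decreasing-gain sequence is what places it in the stochastic-approximation framework of Ljung (1977) and Walk (1977).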
Total citations
[Per-year citation chart, 1988–2024, omitted]