Authors
David JC MacKay
Publication date
1995/8/1
Source
Network: Computation in Neural Systems
Volume
6
Issue
3
Pages
469–505
Publisher
IOP Publishing
Description
Bayesian probability theory provides a unifying framework for data modelling. In this framework the overall aims are to find models that are well matched to the data, and to use these models to make optimal predictions. Neural network learning is interpreted as an inference of the most probable parameters for the model, given the training data. The search in model space (i.e., the space of architectures, noise models, preprocessings, regularizers and weight decay constants) can then also be treated as an inference problem, in which we infer the relative probability of alternative models, given the data. The article describes practical techniques based on Gaussian approximations for implementing these powerful methods for controlling, comparing and using adaptive networks.
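A minimal sketch, in standard notation, of the two levels of inference the abstract describes (the symbols $\mathbf{w}$, $D$, $\alpha$ and $\mathcal{H}$ for the weights, the training data, a weight-decay constant and a candidate model are conventional choices, not taken from this record):

\[
% Level 1: infer the most probable weights given the data and the model;
% a quadratic weight-decay regularizer corresponds to a Gaussian prior p(w | alpha, H)
p(\mathbf{w} \mid D, \alpha, \mathcal{H}) \;=\; \frac{p(D \mid \mathbf{w}, \mathcal{H})\, p(\mathbf{w} \mid \alpha, \mathcal{H})}{p(D \mid \alpha, \mathcal{H})}
\]

\[
% Level 2: a Gaussian (Laplace) approximation around the most probable weights
% w_MP gives the normalizing constant, the "evidence", in closed form,
% which can then be used to rank alternative models:
p(D \mid \alpha, \mathcal{H}) \;\simeq\; p(D \mid \mathbf{w}_{\mathrm{MP}}, \mathcal{H})\, p(\mathbf{w}_{\mathrm{MP}} \mid \alpha, \mathcal{H})\,(2\pi)^{k/2}\,(\det \mathbf{A})^{-1/2},
\qquad
\mathbf{A} \;=\; -\nabla\nabla \ln p(\mathbf{w} \mid D, \alpha, \mathcal{H}),
\]

where $k$ is the number of weights. Models with higher evidence are preferred, and hyperparameters such as $\alpha$ are themselves inferred from the data rather than hand-tuned.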
Total citations
1996: 13   1997: 13   1998: 34   1999: 21   2000: 29
2001: 37   2002: 31   2003: 46   2004: 48   2005: 50
2006: 46   2007: 42   2008: 61   2009: 45   2010: 52
2011: 53   2012: 52   2013: 47   2014: 29   2015: 40
2016: 40   2017: 39   2018: 35   2019: 51   2020: 81
2021: 60   2022: 74   2023: 70   2024: 41