Authors
David JC MacKay
Publication date
1992/9
Journal
Neural computation
Volume
4
Issue
5
Pages
720-736
Publisher
MIT Press
Description
Three Bayesian ideas are presented for supervised adaptive classifiers. First, it is argued that the output of a classifier should be obtained by marginalizing over the posterior distribution of the parameters; a simple approximation to this integral is proposed and demonstrated. This involves a "moderation" of the most probable classifier's outputs, and yields improved performance. Second, it is demonstrated that the Bayesian framework for model comparison described for regression models in MacKay (1992a,b) can also be applied to classification problems. This framework successfully chooses the magnitude of weight decay terms, and ranks solutions found using different numbers of hidden units. Third, an information-based data selection criterion is derived and demonstrated within this framework.
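The "moderation" idea in the abstract can be sketched briefly. Assuming the posterior over the classifier's activation a is approximately Gaussian with most-probable value a_mp and variance s2, the marginal output ∫ σ(a) N(a | a_mp, s2) da is well approximated by σ(κ·a_mp) with κ = (1 + πs2/8)^(-1/2), which pulls confident predictions toward 0.5 when the parameter uncertainty is large. The function names below are illustrative, not from the paper:

```python
import numpy as np

def sigmoid(a):
    """Logistic output of the most probable classifier."""
    return 1.0 / (1.0 + np.exp(-a))

def moderated_output(a_mp, s2):
    """Approximate the marginalized prediction
        P(t = 1 | x, D) = ∫ sigmoid(a) N(a | a_mp, s2) da
    by sigmoid(kappa * a_mp), kappa = (1 + pi * s2 / 8)^(-1/2).
    a_mp : activation of the most probable parameters
    s2   : posterior variance of the activation
    """
    kappa = 1.0 / np.sqrt(1.0 + np.pi * s2 / 8.0)
    return sigmoid(kappa * a_mp)
```

With s2 = 0 the moderated output equals the most probable classifier's output; as s2 grows, predictions are moderated toward 0.5, which is the behavior the abstract credits with improved performance.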
Total citations
[citations-per-year chart, 1992–2024; per-year counts not recoverable]