Conference paper
Revisiting Boltzmann learning: parameter estimation in Markov random fields
This article presents a generalization of the Boltzmann machine that allows us to use the learning rule for a much wider class of maximum likelihood and maximum a posteriori problems, including both supervised and unsupervised learning. Furthermore, the approach allows us to discuss regularization and generalization in the context of Boltzmann machines.
We provide an illustrative example concerning parameter estimation in an inhomogeneous Markov random field. The regularized adaptation produces a parameter set that closely resembles the “teacher” parameters and hence yields segmentations that closely reproduce those of the inhomogeneous teacher network.
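The abstract's central ingredient, the Boltzmann learning rule, updates each weight by the difference between pairwise correlations measured with the data clamped and correlations under the free-running model, optionally damped by a regularizer. The paper itself is not reproduced here, so the following is only a minimal sketch of that rule on a toy, fully visible Boltzmann machine with hypothetical data; the network size, learning rate, and weight-decay strength are illustrative assumptions, and expectations are computed exactly by enumerating all states rather than by Gibbs sampling.

```python
import itertools
import numpy as np

n = 4  # fully visible Boltzmann machine over n binary units (toy size)

# Hypothetical "teacher" data whose pairwise statistics the model should match.
data = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 1],
                 [0, 0, 1, 1],
                 [0, 1, 1, 1]], dtype=float)

# All 2^n binary states, enumerated so expectations can be computed exactly.
states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

def model_probs(W):
    """Exact Boltzmann distribution p(s) ∝ exp(0.5 s^T W s)."""
    scores = 0.5 * np.einsum('si,ij,sj->s', states, W, states)
    p = np.exp(scores)
    return p / p.sum()

def avg_log_likelihood(W):
    """Mean log-likelihood of the data under the current weights."""
    p = model_probs(W)
    idx = [int(''.join(str(int(b)) for b in s), 2) for s in data]
    return float(np.mean(np.log(p[idx])))

W = np.zeros((n, n))
eta, weight_decay = 0.5, 0.01  # illustrative learning rate and L2 regularizer

# "Clamped" phase: pairwise correlations <s_i s_j> under the data.
clamped = data.T @ data / len(data)

for _ in range(200):
    p = model_probs(W)
    # "Free" phase: pairwise correlations <s_i s_j> under the model.
    free = states.T @ (states * p[:, None])
    # Boltzmann learning rule plus weight decay (regularized adaptation).
    grad = clamped - free - weight_decay * W
    grad = 0.5 * (grad + grad.T)
    np.fill_diagonal(grad, 0.0)  # symmetric weights, no self-connections/biases
    W += eta * grad
```

Because all units are visible and expectations are exact, the log-likelihood is concave in the weights and plain gradient ascent converges; the sampled ("wake/sleep") version of the rule replaces the exact `free` correlations with Monte Carlo estimates.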
| Language | English |
|---|---|
| Publisher | IEEE |
| Year | 1996 |
| Pages | 3394-3397 |
| Proceedings | 1996 IEEE International Conference on Acoustics, Speech and Signal Processing |
| ISBN | 0780331923 and 9780780331921 |
| ISSN | 2379-190X and 1520-6149 |
| Type | Conference paper |
| DOI | 10.1109/ICASSP.1996.550606 |
| ORCIDs | Hansen, Lars Kai and Larsen, Jan |
Keywords: Boltzmann learning, Boltzmann machines, cost function, image segmentation, machine learning, Markov processes, Markov random fields, inhomogeneous Markov field, maximum likelihood estimation, maximum a posteriori estimation, minimization methods, neural networks, parameter estimation, regularization, stochastic processes, learning (artificial intelligence), learning rule, supervised learning, unsupervised learning