
Article

Keywords:
exponentially distributed data
Summary:
Perceptron approximations of general Bayes decision rules are considered, with sufficient statistics taken as the network inputs. Particular attention is paid to Bayes discrimination and classification. For exponentially distributed data with a known model, it is shown that a perceptron with one hidden layer suffices and that learning is restricted to the synaptic weights of the output neuron. If only the dimension of the exponential model is known, the number of hidden layers increases by one, and the synaptic weights of the neurons in both hidden layers must be learned as well.
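A worked two-class instance may clarify why sufficient-statistic inputs allow such a shallow network; the canonical exponential-family notation below is standard but is not fixed by the summary itself. Writing the class densities as $p_{\theta}(x)=h(x)\exp\{\theta^{\top}T(x)-A(\theta)\}$ with prior probabilities $\pi_1,\pi_2$, the Bayes discriminant between the two classes is
$$
\log\frac{\pi_1\,p_{\theta_1}(x)}{\pi_2\,p_{\theta_2}(x)}
=(\theta_1-\theta_2)^{\top}T(x)-A(\theta_1)+A(\theta_2)+\log\frac{\pi_1}{\pi_2},
$$
an affine function of the sufficient statistic $T(x)$, i.e. exactly the pre-activation computed by a single neuron that receives $T(x)$ as input. When the model is known, such affine units can be wired into the hidden layer in advance, leaving only the synaptic weights of the output neuron to be learned, in line with the summary above.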