
Article

Title: Exploiting tensor rank-one decomposition in probabilistic inference (English)
Author: Savicky, Petr
Author: Vomlel, Jiří
Language: English
Journal: Kybernetika
ISSN: 0023-5954
Volume: 43
Issue: 5
Year: 2007
Pages: 747-764
Summary lang: English
.
Category: math
.
Summary: We propose a new additive decomposition of probability tables – tensor rank-one decomposition. The basic idea is to decompose a probability table into a series of tables whose sum equals the original table. Each table in the series has the same domain as the original table but can be expressed as a product of one-dimensional tables. Entries in the tables are allowed to be any real numbers, i.e., they may also be negative. The possibility of having negative numbers, in contrast to a multiplicative decomposition, opens new possibilities for a compact representation of probability tables. We show that tensor rank-one decomposition can be used to reduce the space and time requirements in probabilistic inference. We provide a closed-form solution for minimal tensor rank-one decomposition for some special tables and propose a numerical algorithm that can be used in cases when the closed-form solution is not known. (English)
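The idea in the summary – writing a table as a sum of terms, each of which is an outer product of one-dimensional tables, with negative entries permitted – can be sketched in a few lines of NumPy. This is a hypothetical illustration, not code from the paper; the table and factors are chosen for the example.

```python
import numpy as np

# A table over two binary variables: T[x, y] = 1 if x == y, else 0.
T = np.array([[1.0, 0.0],
              [0.0, 1.0]])

# An additive rank-one decomposition with two terms:
#   T = a1 (outer) b1 + a2 (outer) b2.
# Note that the second pair of factors contains negative entries,
# which a multiplicative decomposition would not allow.
a1, b1 = np.array([1.0, 1.0]), np.array([0.5, 0.5])
a2, b2 = np.array([1.0, -1.0]), np.array([0.5, -0.5])

reconstruction = np.outer(a1, b1) + np.outer(a2, b2)

# Each term has the same 2x2 domain as T but is stored via
# one-dimensional factors; the sum reproduces T exactly.
assert np.allclose(reconstruction, T)
```

During inference, only the one-dimensional factors need to be manipulated, which is the source of the space and time savings the paper describes.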
Keyword: graphical probabilistic models
Keyword: probabilistic inference
Keyword: tensor rank
MSC: 15A69
MSC: 62E15
MSC: 68T37
idZBL: Zbl 1148.68539
idMR: MR2376335
.
Date available: 2009-09-24T20:28:46Z
Last updated: 2012-06-06
Stable URL: http://hdl.handle.net/10338.dmlcz/135810
.
Reference: [1] Chavira M., Darwiche A.: Compiling Bayesian networks with local structure. In: Proc. 19th Internat. Joint Conference on Artificial Intelligence (IJCAI), Edinburgh 2005, pp. 1306–1312
Reference: [2] Darwiche A.: A differential approach to inference in Bayesian networks. J. Assoc. Comput. Mach. 50 (2003), 3, 280–305 MR 2146356
Reference: [3] De Lathauwer L., De Moor B.: From matrix to tensor: multilinear algebra and signal processing. In: 4th Internat. Conference on Mathematics in Signal Processing, Part I, IMA Conference Series, Warwick 1996, pp. 1–11
Reference: [4] De Lathauwer L., De Moor B., Vandewalle J.: On the best rank-1 and rank-$(R_1,R_2,\ldots ,R_N)$ approximation of higher-order tensors. SIAM J. Matrix Anal. Appl. 21 (2000), 4, 1324–1342 MR 1780276
Reference: [5] Díez F. J., Galán S. F.: An efficient factorization for the noisy MAX. Internat. J. Intell. Systems 18 (2003), 2, 165–177
Reference: [6] Golub G. H., Van Loan C. F.: Matrix Computations. Third edition. Johns Hopkins University Press, Baltimore 1996 MR 1417720
Reference: [7] Heckerman D.: A tractable inference algorithm for diagnosing multiple diseases. In: Proc. Fifth Annual Conference on Uncertainty in AI (M. Henrion, R. D. Shachter, L. N. Kanal, and J. F. Lemmer, eds.), August 18–21, 1989, Windsor, ON, pp. 163–171
Reference: [8] Heckerman D.: Causal independence for knowledge acquisition and inference. In: Proc. Ninth Conference on Uncertainty in AI (D. Heckerman and A. Mamdani, eds.), July 9–11, 1993, Washington, D.C., pp. 122–127
Reference: [9] Heckerman D., Breese J. S.: A new look at causal independence. In: Proc. Tenth Conference on Uncertainty in AI (R. Lopez de Mantaras and D. Poole, eds.), July 29–31, 1994, Seattle, WA, pp. 286–292
Reference: [10] Håstad J.: Tensor rank is NP-complete. J. Algorithms 11 (1990), 644–654 Zbl 0716.65043, MR 1079455
Reference: [11] Jensen F. V.: Bayesian Networks and Decision Graphs. (Statistics for Engineering and Information Science.) Springer-Verlag, New York – Berlin – Heidelberg 2001 MR 1876880
Reference: [12] Jensen F. V., Lauritzen S. L., Olesen K. G.: Bayesian updating in recursive graphical models by local computation. Computat. Statist. Quart. 4 (1990), 269–282 MR 1073446
Reference: [13] Lauritzen S. L.: Graphical Models. Clarendon Press, Oxford 1996 Zbl 1055.62126, MR 1419991
Reference: [14] Olesen K. G., Kjærulff U., Jensen F., Jensen F. V., Falck B., Andreassen S., Andersen S. K.: A MUNIN network for the median nerve – a case study on loops. Appl. Artif. Intell., Special issue: Towards Causal AI Models in Practice 3 (1989), 384–403
Reference: [15] Polak E.: Computational Methods in Optimization: A Unified Approach. Academic Press, New York 1971 Zbl 0301.90040, MR 0282511
Reference: [16] Takikawa M., D'Ambrosio B.: Multiplicative factorization of noisy-max. In: Proc. Fifteenth Conference on Uncertainty in AI (K. B. Laskey and H. Prade, eds.), July 30 – August 1, 1999, Stockholm, pp. 622–630
Reference: [17] Vomlel J.: Exploiting functional dependence in Bayesian network inference. In: Proc. Eighteenth Conference on Uncertainty in AI (UAI), Edmonton (Canada), Morgan Kaufmann, San Francisco 2002, pp. 528–535
Reference: [18] Zhang N. L., Poole D.: Exploiting causal independence in Bayesian network inference. J. Artif. Intell. Res. 5 (1996), 301–328 Zbl 0900.68384, MR 1426261
.