OPTEC Seminar on Tensors, Computing, Optimization and Signal Processing
Start date: 2/05/2012
End date: 3/05/2012
Location: CS 200A02.112 and ESAT 00.62
May 2-3, KU Leuven
Program (see below for detailed abstracts)
Dept. of Computer Science, 200A02.112
14h30-14h50: Lieven De Lathauwer, A Short Introduction to Tensor Decompositions
14h50-15h45: Eugene Tyrtyshnikov, Kronecker Representation of TT Decompositions, New Wavelet Transforms, Fast Tensor Interpolation
16h00-16h30: Laurent Sorber, State of the Art Overview of Algorithms for Tensor Decompositions
16h30-17h00: Nick Vannieuwenhoven, A New Truncation Strategy for the Higher-order Singular Value Decomposition
Dept. of Electrical Engineering (ESAT), 00.62
14h30-15h30: Boris N. Khoromskij, Efficient Numerical Approximation of Multi-dimensional PDEs in Quantized Tensor Spaces
15h30-15h45: Stefan Vandewalle, Overview Research NATW
16h00-16h30: Ignat Domanov, On the Uniqueness of the Canonical Polyadic Decomposition: New Sufficient Conditions Based on Properties of Khatri-Rao Products of Compound Matrices
16h30-17h00: Mikael Sorensen, Canonical Polyadic Decomposition with Orthogonality Constraints
17h00-17h30: Toon Van Waterschoot, Sensing Wave Fields in a Finite Element Framework
Lieven De Lathauwer, A Short Introduction to Tensor Decompositions
We give some basic definitions from multilinear algebra and briefly discuss some fundamental tensor decompositions. We give a quick preview of their interest for computing, optimization, signal processing and large-scale data analysis.
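As a concrete illustration of the rank-1 building blocks behind these decompositions, here is a small NumPy sketch (sizes and rank are chosen arbitrarily for illustration) that assembles a third-order tensor from R rank-1 terms, the model underlying the canonical polyadic decomposition:

```python
# Sketch (not from the talk): build a rank-2 third-order tensor as a sum of
# rank-1 terms a_r (outer) b_r (outer) c_r -- the CP model.
import numpy as np

rng = np.random.default_rng(0)
R = 2
A = rng.standard_normal((4, R))   # factor matrices (hypothetical sizes)
B = rng.standard_normal((5, R))
C = rng.standard_normal((6, R))

# T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)
print(T.shape)  # (4, 5, 6)
```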
Eugene Tyrtyshnikov, Kronecker Representation of TT Decompositions, New Wavelet Transforms, Fast Tensor Interpolation
Classical tensor decompositions (CP, Tucker) are important as data models, but they are not good enough as a basis for fast tensor algebra algorithms. The most relevant tensor decompositions for this purpose are TT (Tensor Train) and HT (Hierarchical Tucker); both result from one and the same scheme for the reduction of dimensionality. These decompositions are frequently applied to tensors after some, often ultimate, quantization of the original dimensions. This maximizes the number of modes and makes the mode sizes as small as possible, e.g. 2. We consider how multilevel matrices become tensor trains with the use of the Kronecker-product operation. Then we show how new wavelet transforms arise in the construction of tensor trains when the TT ranks are forced to be limited. We also discuss some examples and perspectives for applications. In the end, we present an ambitious research goal: the design of fast tabulation procedures using a new interpolation instrument based on a TT generalization of the skeleton (dyadic) decomposition from matrices to tensors.
References:
I. Oseledets, E. Tyrtyshnikov, TT-cross approximation for multidimensional arrays, Linear Algebra Appl., 432 (2010), pp. 70-88.
I. Oseledets, E. Tyrtyshnikov, Algebraic wavelet transform via quantics tensor train decomposition, SIAM J. Sci. Comp., vol. 31, no. 3 (2011), pp. 1315-1328.
I. Oseledets, E. Tyrtyshnikov, N. Zamarashkin, Tensor-train ranks for matrices and their inverses, Comput. Meth. Appl. Math., vol. 11, no. 3 (2011), pp. 375-384.
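The quantization step described above can be sketched in a few lines of NumPy. This is a plain TT-SVD with exact ranks, not the cross/interpolation machinery of the talk: a vector of length 2**d is reshaped into a d-mode tensor with mode size 2, and TT cores are peeled off by sequential SVDs.

```python
# Minimal quantized-TT sketch (assumption: dense NumPy, no rank truncation
# beyond a tiny relative threshold). Each core has shape (r_prev, 2, r_next).
import numpy as np

def tt_svd(x, d, eps=1e-12):
    cores, r = [], 1
    rem = x.reshape(r * 2, -1)            # fold off the first binary mode
    for _ in range(d - 1):
        U, s, Vt = np.linalg.svd(rem, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :rank].reshape(r, 2, rank))
        rem = (s[:rank, None] * Vt[:rank]).reshape(rank * 2, -1)
        r = rank
    cores.append(rem.reshape(r, 2, 1))    # last core absorbs the remainder
    return cores
```

For a smooth function sampled on a grid of 2**d points the observed TT ranks stay small, which is the source of the logarithmic storage cost.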
Laurent Sorber, State of the Art Overview of Algorithms for Tensor Decompositions
The canonical polyadic and rank-(Lr,Lr,1) block term decomposition (CPD and BTD, respectively) are two closely related tensor decompositions. The CPD is an important tool in psychometrics, chemometrics, neuroscience and data mining, while the rank-(Lr,Lr,1) BTD is an emerging decomposition in signal processing and, recently, blind source separation. We present a decomposition that generalizes these two and give an overview of the state of the art in algorithms for these decompositions, including their drawbacks and benefits. Among these algorithms are recently developed (complex) optimization-based methods such as limited-memory BFGS and matrix-free nonlinear least squares techniques. In the latter, we exploit the structure of the Jacobian's Gramian by means of efficient expressions for its matrix-vector product. Combined with an effective preconditioner, numerical experiments confirm that these methods are among the most efficient and robust currently available for computing the CPD, rank-(Lr,Lr,1) BTD and their generalized decomposition.
Nick Vannieuwenhoven, A New Truncation Strategy for the Higher-order Singular Value Decomposition
We present an alternative strategy to truncate the higher-order singular value decomposition (T-HOSVD), called the sequentially truncated HOSVD (ST-HOSVD). It requires fewer operations to compute and often improves the approximation error with respect to the T-HOSVD. Whenever a tensor is truncated to its multilinear rank, the bases computed by the T-HOSVD and ST-HOSVD coincide. In one experiment we performed, the results of a numerical simulation of a partial differential equation were compressed by T-HOSVD and ST-HOSVD. At the same truncation rank, the approximation errors were similar, and the bases computed by both algorithms were indistinguishable. The execution time, on the other hand, was reduced from 2 hours and 45 minutes for the T-HOSVD to one minute for the ST-HOSVD, a speedup of 133. In another experiment, we compare T-HOSVD and ST-HOSVD for constructing a multilinear model for classifying handwritten digits.
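The sequential idea can be sketched in NumPy as follows (a minimal sketch, not the paper's implementation): after truncating each mode, the tensor itself is shrunk, so every later SVD operates on a smaller array, which is where the speedup comes from.

```python
# ST-HOSVD sketch: truncate one mode at a time and immediately project the
# tensor onto the truncated basis before handling the next mode.
import numpy as np

def st_hosvd(T, ranks):
    core, factors = T, []
    for mode, r in enumerate(ranks):
        # mode-m unfolding of the *current* (already shrunk) core
        M = np.moveaxis(core, mode, 0).reshape(core.shape[mode], -1)
        U = np.linalg.svd(M, full_matrices=False)[0][:, :r]
        factors.append(U)
        # project this mode onto its truncated basis
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors
```

At exact multilinear rank the projection loses nothing, so expanding the core with the factor matrices recovers the original tensor.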
Boris N. Khoromskij, Efficient Numerical Approximation of Multi-dimensional PDEs in Quantized Tensor Spaces
Modern methods of tensor-product approximation by separation of variables allow an efficient low-parametric calculus of functions and operators in higher dimensions. The most common separable representations include the canonical, Tucker, tensor train (TT) and quantized-TT (QTT) decompositions. The QTT tensor format makes it possible to represent (approximate) multivariate functions, operators and dynamical systems in quantized tensor spaces with log-volume complexity scaling with respect to the size of the full tensor. This opens the way to the profound numerical simulation of high-dimensional PDEs, getting rid of the "curse of dimensionality" and of rigorous restrictions on the grid size. The numerical efficiency is justified by the remarkable QTT-approximation properties proven for a wide class of functions and operators [2, 3]. We focus on the approximation and complexity results in the quantized tensor formats applied to the solution of d-dimensional elliptic and parabolic equations. Numerical tests indicate the logarithmic computational complexity of the QTT tensor approximation for parametric elliptic PDEs, in computational quantum chemistry and in multi-dimensional dynamics.
[1] I.V. Gavrilyuk and B.N. Khoromskij. Quantized-TT-Cayley transform to compute dynamics and spectrum of high-dimensional Hamiltonians. Comp. Meth. in Applied Math., vol. 11 (2011), no. 3, 273-290.
[2] B.N. Khoromskij. O(d log N)-Quantics Approximation of N-d Tensors in High-Dimensional Numerical Modeling. Constr. Approx., vol. 34(2), 257-289 (2011).
[3] B.N. Khoromskij. Tensors-structured Numerical Methods in Scientific Computing: Survey on Recent Advances. Chemometrics and Intelligent Laboratory Systems, 110 (2012), 1-19.
[4] B.N. Khoromskij and Ch. Schwab. Tensor-Structured Galerkin Approximation of Parametric and Stochastic Elliptic PDEs. SIAM J. Sci. Comp., 33(1), 2011, 1-25.
[5] B.N. Khoromskij and I. Oseledets. DMRG+QTT approach to the computation of the ground state for the molecular Schrödinger operator. Preprint 68/2010, MPI MiS, Leipzig 2010 (Numer. Math., submitted).
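The Kronecker structure that tensor formats exploit for multi-dimensional PDE operators can be illustrated in a few lines (a generic example, not code from the talk): the d-dimensional finite-difference Laplacian is a Kronecker sum of 1-D operators, shown here for d = 2.

```python
# Kronecker-sum structure of the 2D Laplacian on an n x n grid:
# A2 = A (kron) I + I (kron) A, where A is the 1D three-point stencil.
import numpy as np

n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian stencil
I = np.eye(n)
A2 = np.kron(A, I) + np.kron(I, A)                    # (n**2) x (n**2) operator
```

The defining identity is that A2 acts on separable vectors mode by mode: A2 (x kron y) = (A x) kron y + x kron (A y), which is exactly the separation of variables that low-rank tensor formats preserve.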
Ignat Domanov, On the Uniqueness of the Canonical Polyadic Decomposition: New Sufficient Conditions Based on Properties of Khatri-Rao Products of Compound Matrices
The decomposition of a tensor as a minimal linear combination of rank-1 tensors is called its Canonical Polyadic Decomposition (CPD). The decomposition is also known as Canonical / Parallel Factor Decomposition (CANDECOMP/PARAFAC). The most well-known result concerning CPD uniqueness is due to J. Kruskal: let A, B, C be the factor matrices in a CPD of a tensor T, and let kA+kB+kC >= 2R+2, where kA, kB, kC denote the k-rank of A, B, C, respectively; then the CPD of T is unique. Other uniqueness conditions are due to T. Jiang and N.D. Sidiropoulos, and L. De Lathauwer. These conditions are on one hand more restrictive than Kruskal's in the sense that one of the factor matrices, say C, needs to have full column rank, which implies that the rank R is bounded by the number of rows of C, i.e., by the third tensor dimension. On the other hand, the results are an order of magnitude more relaxed than Kruskal's in terms of the conditions on A and B. The latter may be expressed as conditions on the Khatri-Rao product of the second compound matrices of A and B. In this talk we establish a theory for CPD uniqueness by working from the Jiang-Sidiropoulos-De Lathauwer results towards Kruskal's result, thereby extending work of A. Stegeman. We relax the condition on C, no longer requiring that it has full column rank, while making the conditions on A and B more restrictive. The latter are conditions on the Khatri-Rao product of m-th compound matrices of A and B, where m > 2. In the other direction, we present a condition on A and B that is more relaxed than the least restrictive Jiang-Sidiropoulos condition. Part of the study leads to new Kruskal-type conditions that are more relaxed than the original. For instance, we explain that, if kA+rB+rC >= 2R+2, and rA+kB+rC >= 2R+2, and rA+rB+kC >= 2R+2, where rA, rB, rC denote the rank of A, B, C, respectively, then the CPD of T is unique. We discuss both deterministic and generic conditions. We show INDSCAL variants. We also present results concerning uniqueness of one factor matrix, regardless of overall uniqueness.
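For small factor matrices, Kruskal's condition can be checked directly. A brute-force sketch (an illustration, not part of the talk; feasible only for a handful of columns, since it enumerates all column subsets):

```python
# Compute the k-rank of a matrix by exhaustion and test Kruskal's condition
# kA + kB + kC >= 2R + 2 for CPD uniqueness.
import numpy as np
from itertools import combinations

def k_rank(M):
    # largest k such that EVERY set of k columns is linearly independent
    R = M.shape[1]
    for k in range(R, 0, -1):
        if all(np.linalg.matrix_rank(M[:, list(c)]) == k
               for c in combinations(range(R), k)):
            return k
    return 0

def kruskal_unique(A, B, C):
    R = A.shape[1]
    return k_rank(A) + k_rank(B) + k_rank(C) >= 2 * R + 2
```

Note that a repeated column immediately drops the k-rank to 1, even though the ordinary rank can stay high, which is why the k-rank is the right notion here.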
Mikael Sorensen, Canonical Polyadic Decomposition with Orthogonality Constraints
The Canonical Polyadic Decomposition (CPD) of a higher-order tensor is an important tool in mathematical engineering. In several signal processing applications at least one of the matrix factors is constrained to be column-wise orthonormal. If the tensor admits a CPD where one of the factors is constrained to be column-wise orthonormal, then it can be shown that it admits an algebraic solution under very mild conditions. However, in practical signal processing applications noise is always present and consequently only a low-rank tensor approximation is of interest. Contrary to the unconstrained case, a low-rank approximation of a tensor by an orthogonality-constrained CPD always exists. We also explain that by taking this constraint into account, more efficient and noise-robust numerical algorithms for the computation of the constrained CPD can be obtained. In particular, orthogonality-constrained versions of the CPD methods based on simultaneous matrix diagonalization and alternating least squares are presented.
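In an alternating scheme, the update of the orthonormal factor reduces to an orthogonal Procrustes problem, solved by one SVD. A minimal sketch of that single step (an illustration under the constraint discussed above, not the talk's full algorithm):

```python
# One orthogonality-constrained ALS step: given A and B, the column-wise
# orthonormal factor C minimizing the fit is the polar (Procrustes) factor
# of the unconstrained least-squares target.
import numpy as np

def procrustes_factor(T, A, B):
    # target: mode-3 unfolding of T times the Khatri-Rao product of B and A
    M = np.einsum('ijk,ir,jr->kr', T, A, B)
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt  # nearest matrix with orthonormal columns
```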
Toon Van Waterschoot, Sensing Wave Fields in a Finite Element Framework
In this talk, we present a recently developed framework for sensing
wave fields through the combination of data-based and model-based
information. We consider two distinct ways of collecting wave field
information, either by recording data measurements obtained from
spatially distributed sensors or by assuming a physical field model in
the form of a partial differential equation. We will then show how these
two information sources can be jointly discretized in space and time by
constructing an appropriate spatiotemporal sampling grid and employing
the finite element method. This discretization leads to a
high-dimensional yet highly sparse and structured wave field
representation, which can then be used for a variety of applications: the
estimation of static and dynamic wave fields, the inverse problem of
source recovery, the localization of point sources, the identification
of a wave propagation model, etc. A crucial issue in all of these
applications is the possibility to formulate the related parameter
estimation problem as a large-scale convex optimization problem.
Furthermore, the sparse and structured wave field representation allows
for an efficient distributed optimization approach, which is useful in
applications featuring networked devices with local sensing, processing,
and communication capabilities.
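The estimation problem described above can be sketched as a sparse regularized least-squares problem. The following toy example is an assumption-laden stand-in: a 1-D grid, a random sensor-sampling matrix S, and a second-difference operator L playing the role of the discretized physical model.

```python
# Field estimation sketch: min_u ||S u - y||^2 + lam * ||L u||^2, where S
# samples the field at a few sensor locations and L encodes a smoothness /
# physics prior. The normal equations are sparse and structured.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
x = np.linspace(0.0, 1.0, n)
u_true = np.sin(2 * np.pi * x)                      # ground-truth field

rng = np.random.default_rng(0)
idx = rng.choice(n, size=30, replace=False)         # sparse sensor locations
S = sp.csr_matrix((np.ones(30), (np.arange(30), idx)), shape=(30, n))
y = u_true[idx] + 0.01 * rng.standard_normal(30)    # noisy measurements

L = sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))  # model prior
lam = 1e-3
u = spla.spsolve((S.T @ S + lam * L.T @ L).tocsc(), S.T @ y)
```

Because the normal matrix stays sparse, the same formulation scales to large grids and lends itself to the distributed solvers mentioned in the abstract.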