Introduction to Matrix Polynomial Regularity and Singularity

Exploring the Connection between Matrix Polynomial Regularity and Differential-Algebraic Equations

by Ashok Kumar Yadav*,

- Published in Journal of Advances in Science and Technology, E-ISSN: 2230-9659

Volume 3, Issue No. 5, May 2012

Published by: Ignited Minds Journals


ABSTRACT

A polynomial with matrix coefficients is called a matrix polynomial, or a polynomial matrix if we regard it as a matrix whose elements are polynomials. It is well known that matrix polynomials play an important role in the analytical theory of elementary divisors, i.e., the theory by which a square matrix can be reduced to certain normal forms (especially the Smith canonical form and the Jordan canonical form), of which important applications have been made to the analysis of differential and difference equations. The motivation for our study of the regularity and singularity of matrix polynomials comes mainly from two sources. One is the study of differential-algebraic equations, owing to the close connection, as presented in Chapter 2, between the regularity and singularity of a matrix polynomial and the properties of the solutions of the system of differential-algebraic equations associated with that matrix polynomial; the other is the study of polynomial eigenvalue problems.

KEYWORD

matrix polynomial, regularity, singularity, elementary divisors, Smith canonical form, Jordan canonical form, differential equations, difference equations, differential-algebraic equations, polynomial eigenvalue problems

Introduction to matrix polynomial regularity and singularity


A polynomial with matrix coefficients is called a matrix polynomial, or a polynomial matrix if we regard it as a matrix whose elements are polynomials. It is well known that matrix polynomials play an important role in the analytical theory of elementary divisors, i.e., the theory by which a square matrix can be reduced to certain normal forms (especially the Smith canonical form and the Jordan canonical form), of which important applications have been made to the analysis of differential and difference equations. The motivation for our study of the regularity and singularity of matrix polynomials comes mainly from two sources. One is the study of differential-algebraic equations, owing to the close connection, as presented in Chapter 2, between the regularity and singularity of a matrix polynomial and the properties of the solutions of the system of differential-algebraic equations associated with that matrix polynomial; the other is the study of the polynomial eigenvalue problem P(λ)x = 0 and y*P(λ) = 0, where P(λ) = A_0 + λ A_1 + ... + λ^l A_l is an n x n matrix polynomial of degree l and the nonzero vector x (respectively, y) is the right (respectively, left) eigenvector associated with the eigenvalue λ.

In the last section we saw that DAEs differ in many ways from ordinary differential equations. For instance, the circuit in figure 1.3 led to a DAE in which a differentiation process is involved when solving the equations. This differentiation needs to be carried out numerically, which is an unstable operation; thus some problems are to be expected when solving these systems. In this section we try to measure the difficulties arising in the theoretical and numerical treatment of a given DAE.

Modelling with differential-algebraic equations plays a vital role, among others, for constrained mechanical systems, electrical circuits and chemical reaction kinetics. In this section we give examples of how DAEs are obtained in these fields and point out important characteristics of differential-algebraic equations that distinguish them from ordinary differential equations. More information about differential-algebraic equations can be found, for example, in [2, 15].

As seen in the previous sections, a DAE can be assigned an index in several ways. In the case of linear equations with constant coefficients all index notions coincide with the Kronecker index. Apart from that, each index definition stresses different aspects of the DAE under consideration. While the differentiation index aims at finding possible reformulations in terms of ordinary differential equations, the tractability index is used to study DAEs without the use of derivative arrays. In this section we made use of the sequence (3.2), established in the context of the tractability index, in order to perform a refined analysis of linear DAEs with properly stated leading terms. We were able to find explicit expressions of (3.12) for these equations with index 1 and 2.

By construction the rows of the incidence matrix Aa are linearly dependent. However, after deleting one row, the remaining rows describe a set of linearly independent equations; the node corresponding to the deleted row will be denoted as the ground node.

Consider the mathematical pendulum in figure 1.1. Let m be the pendulum's mass, which is attached to a rod of length l [15]. In order to describe the pendulum in Cartesian coordinates we write down the potential energy U(x, y) = mgh = mgl - mgy, where (x(t), y(t)) is the position of the moving mass at time t, g is the earth's gravitational acceleration and h is the pendulum's height. If we denote the time derivatives of x and y by x' and y' respectively, the kinetic energy is T = (m/2)((x')^2 + (y')^2).
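From these two energies one can form the Lagrangian and enforce the rod constraint x^2 + y^2 = l^2 with a multiplier λ. The following is a brief sketch of this standard derivation (under the sign convention used above, in which y is measured downward so that gravity acts in the positive y direction); it shows how a differential-algebraic system arises:

m x'' = -2 λ x,
m y'' = m g - 2 λ y,
0 = x^2 + y^2 - l^2.

The first two equations are equations of motion, while the third is purely algebraic. The multiplier λ can only be recovered by repeatedly differentiating the constraint, which is precisely the kind of hidden differentiation that distinguishes this DAE from an ordinary differential equation.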
Some additional simple examples follow. Consider the (linear implicit) DAE system


E y' = A y + g(t), with consistent initial conditions, and apply the implicit Euler method:

E (y_{n+1} - y_n)/h = A y_{n+1} + g(t_{n+1}),

and rearrangement gives

y_{n+1} = (E - hA)^{-1} [E y_n + h g(t_{n+1})].

The true solution y(t_n) satisfies

E [(y(t_{n+1}) - y(t_n))/h + (h/2) y''(ξ)] = A y(t_{n+1}) + g(t_{n+1}),

and defining e_n = y(t_n) - y_n we obtain

e_{n+1} = (E - hA)^{-1} [E e_n - (h^2/2) E y''(ξ)],  with e_0 = 0 for known initial conditions.

(A minimal numerical sketch of this recursion is given below, after Example 4.1.)

In the circuit example, the columns of the incidence matrix Aa correspond to the voltage, resistive and capacitive branches, respectively. The rows represent the network's nodes, so that -1 and 1 indicate the nodes that are connected by each branch under consideration. Thus Aa assigns a polarity to each branch.

This detailed analysis led us to results about existence and uniqueness of solutions for DAEs with low index. We were able to determine precisely which initial conditions are to be posed, namely D(t_0)x(t_0) = D(t_0)x_0 in the index-1 case and D(t_0)P_1(t_0)x(t_0) = D(t_0)P_1(t_0)x_0 in the index-2 case. These initial conditions guarantee that solutions u of the inherent regular ODE (3.5) and (3.10) lie in the corresponding invariant subspace. Let us stress that only those solutions of the inherent regular ODE that lie in the invariant subspace are relevant for the DAE. Even if this subspace varies with t, we know the dynamical degrees of freedom to be rank G_0 and rank G_0 + rank G_1 - m for index 1 and 2, respectively.

Traditionally, for polynomial eigenvalue problems, especially those of degree greater than or equal to 2, most research results, including spectral analysis, canonical forms, linearization, Jordan pairs, etc., and numerical methods such as numerical algorithms, model reduction, and perturbation analysis (conditioning, backward error, pseudospectra), are mainly based on the regularity assumption that the matrix polynomial is regular, namely, that it is square and its determinant is not identically equal to zero. For more details see, for example, [17] and [44]. There are two major reasons for the regularity assumption. The first is that the regular case frequently occurs in applications. Take, for example, the quadratic eigenvalue problem associated with a gyroscopic system (cf. [44] and the references therein), Q(λ)x = (λ^2 M + λ G + K)x = 0, where M is symmetric positive definite, G is skew-symmetric, and K is symmetric. Since the leading coefficient matrix M is nonsingular, the determinant of the quadratic matrix polynomial Q(λ) is a polynomial in λ of degree 2n, and therefore Q(λ) is regular. Such regular polynomial eigenvalue problems with a nonsingular leading coefficient matrix frequently arise from the analysis of structural mechanical and acoustic systems, electrical circuit simulation, fluid mechanics, and the modeling of microelectromechanical systems; see [44] and the references therein. The second reason for the regularity assumption is that the study of regular matrix polynomials clearly shows the main features of the spectral theory. Take, for instance, the monograph of Lancaster [35], as well as that of Gohberg, Lancaster, and Rodman [17], which has regular matrix polynomials as its whole subject. However, there are applications from which singular polynomial eigenvalue problems of degree greater than or equal to 2, not to mention singular generalized eigenvalue problems, arise, as the following examples show.

Example 4.1 (Signal processing) [44]. Consider the symmetric quadratic eigenvalue problem arising in signal processing. Since its leading coefficient matrix is singular, and its last coefficient matrix may also be singular, the determinant of the corresponding quadratic matrix polynomial may be identically equal to zero. Therefore, this quadratic matrix polynomial may be singular.
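Returning to the implicit Euler recursion for E y' = A y + g(t) written out at the beginning of this section, the following is a minimal numerical sketch of one way it can be coded. The 2 x 2 data E, A and the forcing g below form a small semi-explicit index-1 example chosen purely for illustration (they do not come from the text); the update formula is the one derived above.

import numpy as np

# Minimal sketch of implicit Euler for the linear implicit DAE  E y' = A y + g(t).
# The data form a small semi-explicit index-1 example used for illustration only;
# the update is the recursion derived in the text:
#   y_{n+1} = (E - h A)^{-1} [E y_n + h g(t_{n+1})].

E = np.array([[1.0, 0.0],
              [0.0, 0.0]])             # singular E: the second equation is purely algebraic
A = np.array([[-1.0, 1.0],
              [ 0.0, -1.0]])

def g(t):
    return np.array([0.0, np.sin(t)])  # the algebraic equation reads 0 = -y2 + sin(t)

h = 1.0e-3                             # step size
T = 1.0
y = np.array([1.0, 0.0])               # consistent initial value: y2(0) = sin(0) = 0

t = 0.0
while t < T - 1e-12:
    t = t + h
    # one implicit Euler step: solve (E - h A) y_new = E y + h g(t)
    y = np.linalg.solve(E - h * A, E @ y + h * g(t))

print("approximate solution at t = 1:", y)

Note that, because E is singular in this example, (E - hA)^{-1} contains entries of order 1/h, so the limit h -> 0 is not harmless; this is one concrete way to see the numerical difficulties alluded to above.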
Example 4.2 (Vibration of rail tracks) [41] Consider the complex quadratic eigenvalue problem


where the leading coefficient matrix is singular. Since the leading coefficient matrix and the last coefficient matrix of the corresponding matrix polynomial are both singular, that matrix polynomial may be singular.

In addition, although the study of singular matrix pencils, which can be regarded as matrix polynomials of degree 1, has a long history (see, for example, Gantmacher [15], Chapter XII), some related theoretical and numerical aspects have not yet been completely clarified or solved, such as the geometrical characterization of singular matrix pencils (we shall return to this topic in Subsection 4.2.4), detecting regularity or singularity, and the nearness-to-singularity problem for regular matrix pencils (see Byers, He, and Mehrmann [4]). Thus, from a theoretical and/or numerical point of view, the following tasks naturally arise:

1. To obtain characterizations of the regularity and singularity of matrix polynomials.
2. To detect whether or not a given matrix polynomial is regular.
3. To find a solution of, or a useful characterization for, the nearness-to-singularity problem for a regular matrix polynomial.

The investigation of the above tasks will be carried out in this chapter. In Section 4.2 we present necessary and sufficient conditions for the singularity and regularity of matrix polynomials, which lay a theoretical foundation for the investigations conducted in the subsequent Sections 4.3 and 4.4. In addition, we present a simple necessary and sufficient geometrical characterization of the column-singularity of rectangular matrix pencils, as well as a canonical form, under the equivalence transformations (2.17), for 2 x 2 singular quadratic matrix polynomials. In Section 4.3 we present a natural approach to detecting the regularity or singularity of a given square matrix polynomial via the rank information of its coefficient matrices. Finally, Section 4.4 deals with the nearness-to-singularity problem for square, regular matrix polynomials: we give a definition, some general properties, and theoretical characterizations of the distance to the nearest singular matrix polynomial, and derive two types of lower bounds on this distance.
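As a small, self-contained illustration of the regularity definition used above (and not of the detection approach developed in Section 4.3), the following sketch applies two elementary observations to a square matrix polynomial P(λ) = A_0 + λ A_1 + ... + λ^l A_l: if all coefficient matrices share a common nonzero right null vector x, then P(λ)x = 0 for every λ, so det P(λ) is identically zero and P is singular; and if the leading coefficient A_l is nonsingular, then det P(λ) has degree l·n, so P is regular. The function names and the sample coefficients are illustrative assumptions, not taken from the text.

import numpy as np

def common_null_vector(coeffs, tol=1e-12):
    # Return an (approximate) common right null vector of all coefficient matrices, or None.
    # If such a vector x exists, P(lambda) x = 0 for every lambda, so det P(lambda) vanishes
    # identically and the (square) matrix polynomial is singular.
    stacked = np.vstack(coeffs)                  # stack A_0, ..., A_l on top of each other
    _, s, vh = np.linalg.svd(stacked)
    if s[-1] <= tol * max(s[0], 1.0):
        return vh[-1]                            # right singular vector for the smallest singular value
    return None

def leading_coefficient_nonsingular(coeffs, tol=1e-12):
    # A nonsingular leading coefficient forces det P(lambda) to have degree l*n,
    # which is sufficient (though not necessary) for regularity.
    s = np.linalg.svd(coeffs[-1], compute_uv=False)
    return s[-1] > tol * max(s[0], 1.0)

# Illustrative 2 x 2 quadratic P(lambda) = A0 + lambda*A1 + lambda^2*A2 whose coefficients
# all annihilate the vector (1, -1)^T, so this P is singular.
A0 = np.array([[1.0, 1.0], [2.0, 2.0]])
A1 = np.array([[0.0, 0.0], [3.0, 3.0]])
A2 = np.array([[4.0, 4.0], [0.0, 0.0]])
coeffs = [A0, A1, A2]

print("common right null vector:", common_null_vector(coeffs))
print("leading coefficient nonsingular:", leading_coefficient_nonsingular(coeffs))

These two checks are only one-way tests: a matrix polynomial can be singular without a common null vector, and regular with a singular leading coefficient, which is why the finer characterizations of Sections 4.2-4.4 are needed.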

REFERENCES

[1] U. M. Ascher, L. R. Petzold. Projected collocation for higher-order higher-index differential-algebraic equations. J. Comp. Appl. Math. 43 (1992) 243-259.

[2] K. Balla, R. März. A unified approach to linear differential algebraic equations and their adjoints. Z. Anal. Anwendungen, 21 (2002) 3, 783-802.

[3] K. E. Brenan, S. L. Campbell, L. R. Petzold. Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations. Classics in Applied Mathematics, Vol. 14, SIAM, 1996.

[4] R. Byers, C. He, V. Mehrmann. Where is the nearest non-regular pencil? Lin. Alg. Appl., 285: 81-105, 1998.

[5] E. A. Coddington, R. Carlson. Linear Ordinary Differential Equations. SIAM, Philadelphia, 1997.

[6] R. Courant, F. John. Introduction to Calculus and Analysis I. Springer-Verlag, New York, 1989.

[7] S. L. Campbell. Singular Systems of Differential Equations. Pitman, Boston, 1980.

[8] S. L. Campbell. Singular Systems of Differential Equations II. Pitman, Boston, 1982.

[9] E. K.-W. Chu. Perturbation of eigenvalues for matrix polynomials via the Bauer-Fike theorems. SIAM J. Matrix Anal. Appl. 25 (2003), pp. 551-573.

[10] E. A. Coddington, N. Levinson. Theory of Ordinary Differential Equations. McGraw-Hill Book Company, Inc., 1955.

[11] C. De Boor, H. O. Kreiss. On the condition of the linear systems associated with discretized BVPs of ODEs. SIAM J. Numer. Anal., Vol. 23, 1986, pp. 936-939.

[12] J. Demmel. Applied Numerical Linear Algebra. SIAM Press, 1997.

[13] J.-P. Dedieu, F. Tisseur. Perturbation theory for homogeneous polynomial eigenvalue problems. Lin. Alg. Appl., 358: 71-74, 2003.


[14] F. R. Gantmacher. The Theory of Matrices. Vol. I. Chelsea Publishing Company, New York, 1959.

[15] F. R. Gantmacher. The Theory of Matrices. Vol. II. Chelsea Publishing Company, New York, 1959.

[16] E. Griepentrog, M. Hanke and R. März. Toward a better understanding of differential algebraic equations (Introductory survey). Seminar Notes, edited by E. Griepentrog, M. Hanke and R. März, Berliner Seminar on Differential-Algebraic Equations, 1992. http://www.mathematik.hu-berlin.de/publ/SB-92-1/s Differential-Algebraic Equations.html

[17] I. Gohberg, P. Lancaster, L. Rodman. Matrix Polynomials. Academic Press, 1982.

[18] E. Griepentrog, R. März. Differential-Algebraic Equations and Their Numerical Treatment. Teubner-Texte zur Mathematik, Band 88, 1986.

[19] G. H. Golub, C. F. Van Loan. Matrix Computations, Third Edition. The Johns Hopkins University Press, 1996.

[20] N. J. Higham. Matrix nearness problems and applications. In M. J. C. Gover and S. Barnett, editors, Applications of Matrix Theory, pages 1-27, Oxford University Press, 1989.

[21] D. J. Higham, N. J. Higham. Structured backward error and condition of generalized eigenvalue problems. SIAM J. Matrix Anal. Appl. 20 (2) (1998) 493-512.

[22] N. J. Higham, F. Tisseur. More on pseudospectra for polynomial eigenvalue problems and applications in control theory. Lin. Alg. Appl., 351-352: 435-453, 2002.

[23] N. J. Higham, F. Tisseur. Bounds for eigenvalues of matrix polynomials. Lin. Alg. Appl., 358: 5-22, 2003.

[24] N. J. Higham, F. Tisseur, and P. M. Van Dooren. Detecting a definite Hermitian pair and a hyperbolic or elliptic quadratic eigenvalue problem, and associated nearness problems. Lin. Alg. Appl., 351-352: 455-474, 2002.

[25] R. A. Horn, C. R. Johnson. Matrix Analysis. Cambridge University Press, 1990.

[26] T.-M. Hwang, W.-W. Lin and V. Mehrmann. Numerical solution of quadratic eigenvalue problems with structure-preserving methods. SIAM J. Sci. Comput. 24: 1283-1302, 2003.

[27] P. Kunkel and V. Mehrmann. Smooth factorizations of matrix valued functions and their derivatives. Numer. Math. 60: 115-132, 1991.

[28] P. Kunkel and V. Mehrmann. Canonical forms for linear differential-algebraic equations with variable coefficients. J. Comp. Appl. Math. 56 (1994), 225-251.

[29] P. Kunkel and V. Mehrmann. A new look at pencils of matrix valued functions. Lin. Alg. Appl., 212/213: 215-248, 1994.

[30] P. Kunkel and V. Mehrmann. Local and global invariants of linear differential-algebraic equations and their relation. Electron. Trans. Numer. Anal. 4: 138-157, 1996.

[31] P. Kunkel and V. Mehrmann. A new class of discretization methods for the solution of linear differential-algebraic equations. SIAM J. Numer. Anal. 33: 1941-1961, 1996.