An Analysis of Numerical Practice Problems of Differential Equations of Multi-Linear Algebra

Exploring the Applications and Challenges of Numerical Multilinear Algebra in Modern Data Analysis

by Bhupendra Singh Gaur*,

- Published in Journal of Advances and Scholarly Researches in Allied Education, E-ISSN: 2230-7540

Volume 15, Issue No. 7, Sep 2018, Pages 767 - 771 (5)

Published by: Ignited Minds Journals


ABSTRACT

Classical multilinear algebra is the branch of mathematics concerned with tensor operations of modules over a commutative ring R. It treats the associated tensor algebra, the exterior algebra, the symmetric algebra, coalgebras, Hopf algebras and so on. Modern multi-way data analysis and signal processing, however, require further features of higher-order tensors. Although numerical multilinear algebra is an extension of numerical linear algebra, it has many essential differences and far greater difficulties. This paper presents an incomplete survey of the state of the art on this subject and points out new trends for further research; a detailed bibliography forms an important part of the survey. Numerical multilinear algebra is a new branch of computational mathematics. It deals with the numerical treatment of higher-order tensors, which replace matrices. It covers various computational issues related to higher-order tensors, such as tensor decompositions, computation of tensor rank, computation of tensor eigenvalues, low-rank tensor approximation, numerical stability, and perturbation analysis of tensor computations. This new branch has a strong practical background and broad applications in digital image restoration, psychometrics, chemometrics, econometrics and multi-way data analysis.

KEYWORDS

numerical practice problems, differential equations, multilinear algebra, tensor operations over a commutative ring, higher-order tensors, numerical linear algebra, state of the art, new trends, detailed bibliography, numerical multilinear algebra, computational issues, tensor decomposition, tensor rank computation, eigenvalue computation, low-rank tensor approximation, numerical stability, perturbation analysis of tensor computations, practical background, digital image restoration, psychometrics, chemometrics, econometrics, multiway data analysis

INTRODUCTION

Numerical multilinear algebra, in which higher-order tensors rather than matrices and vectors are treated from a numerical point of view, is a new branch of computational mathematics. Typical questions include how to decompose a tensor into a sum of outer products of vectors, how to approximate a tensor by one of lower rank, how to compute the eigenvalues and singular values of a tensor, and how to apply higher-order tensors in blind source separation (BSS). All of these belong to this new branch of computational mathematics, numerical multilinear algebra.

Differential equations can describe almost any system undergoing change. They are ubiquitous in science and engineering, economics, social science, biology, business, health care and elsewhere. Many mathematicians have studied the nature of these equations for hundreds of years, and many well-developed solution methods exist. Systems described by differential equations are, however, often so complex, or so large, that a purely analytical solution is intractable; computer simulations and numerical methods are then useful. Techniques for solving differential equations by numerical approximation were developed long before programmable computers existed. Before programmable computers it was also common to find rooms of people (usually women) working mechanical calculators to numerically solve systems of differential equations for military calculations, and analogies with electrical systems were used to build analogue computers for analysing mechanical, thermal or chemical systems. As programmable computers have become faster and cheaper, ever more complex systems of differential equations can be solved with simple programs written to run on a common PC. Today a desktop computer can address problems that were inaccessible to the fastest supercomputers only 5 to 10 years ago.

Although numerical multilinear algebra is a very young field, it has recently received much attention and seen dramatic developments, strongly motivated by its practical background and applications. Many experts in numerical linear algebra and engineering have put their energy into this field, and several international workshops and conferences on this new branch have been held in the USA, in France and in Switzerland. For instance, the Workshop on Tensor Decompositions was organised by Golub, Kolda, Nagy and Van Loan at the American Institute of Mathematics in Palo Alto, California, from 19 to 23 July 2004. Roughly 35 people attended: computer scientists, mathematicians and a wide variety of researchers who use tensor decompositions in their work. A workshop on tensor decompositions and applications, organised by De Lathauwer and Comon, was held from 29 August to 2 September 2005 and was attended by about 43 people from 13 nations. It discussed the major open problems, topological properties of tensor spaces, exact and approximate tensor decompositions, mathematical properties of tensor decompositions, independent component analysis, applications in telecommunications, pattern analysis and statistical modelling, diagnostics in data analysis, and sensor arrays. Golub, Mahoney, Drineas and Lim organised the Workshop on Modern Massive Data Sets at Stanford University, 21-24 June 2006, to bridge the gap between numerical linear algebra, theoretical computer science and data applications. The workshop attracted 232 participants, with 45 lectures and 24 poster presentations; tensor-based data applications were the subject of its last day.
In his concluding address, Golub said that a new branch of applied mathematics had been born at the workshop. More recently, the ICIAM Minisymposium on 'Numerical multilinear algebra: a new beginning' was organised by Golub, Comon, De Lathauwer and Lim in Zurich, Switzerland, from 16 to 20 July 2007. Golub wrote: "There is still no commonly accepted name for 'numerical multilinear algebra'. We broadly define this as the study and use in computational mathematics of tensors / multilinear algebra, symmetric tensors / symmetric algebra, alternating tensors / exterior algebra, and spinors / Clifford algebra. This minisymposium is a significant step towards defining and developing this new discipline of computational mathematics." It is our hope as well.

1. FIRST ORDER SYSTEMS

A first-order differential equation has the general form dy/dt = f(y, t), where dy/dt denotes the change in y with respect to time and f(y, t) is a function of both y and time. Note that the derivative of the variable y depends on y itself. Several different notations are used for the derivative d/dt; popular ones are ẏ and y′. One of the simplest differential equations is dy/dt = −y. We will focus on this equation in order to introduce the many definitions. The equation is convenient because its simple analytical solution allows us to verify the accuracy of our numerical methods. It also governs, among other things, the behaviour of heating and cooling processes, the radioactive decay of chemicals, drug absorption in the body, the charge on a capacitor, and population growth. To solve the equation analytically, we first rearrange it as dy/y = −dt and integrate once with respect to time to obtain ln y = −t + C, where C is an integration constant. Taking the exponential of the entire equation removes the natural-log term, and the solution can finally be written as y = C e^(−t). You can verify that this answer satisfies the equation by substituting the solution back into the original equation. Since we obtained the solution via integration, the integration constant still has to be established. This constant is determined by the initial conditions, i.e. the initial state of the system (C in our solution above). For simplicity of this analysis we will continue with a fixed initial condition y(0) = y0.
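As a quick numerical check of this analysis, the following minimal Python sketch (added for illustration; the step size h = 0.1 and the choice y(0) = 1 are our own assumptions, and the model equation dy/dt = −y is the one introduced above) integrates the equation with the forward Euler method and compares the result with the exact solution y = e^(−t).

import numpy as np

def euler(f, y0, t0, t1, h):
    # Integrate dy/dt = f(y, t) from t0 to t1 with fixed step h (forward Euler).
    ts = np.arange(t0, t1 + h / 2, h)
    ys = np.empty_like(ts)
    ys[0] = y0
    for k in range(len(ts) - 1):
        ys[k + 1] = ys[k] + h * f(ys[k], ts[k])   # y_{k+1} = y_k + h * f(y_k, t_k)
    return ts, ys

# Model problem from the text: dy/dt = -y with assumed initial condition y(0) = 1.
ts, ys = euler(lambda y, t: -y, y0=1.0, t0=0.0, t1=5.0, h=0.1)
print("max |numerical - exact| =", np.max(np.abs(ys - np.exp(-ts))))

Halving the step size h roughly halves the maximum error, illustrating the first-order accuracy of the Euler method.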

2. NOTATIONS

An N-way tensor is an N-way array, i.e. its entries are accessed via N indices, say i1, i2, ..., iN, with each ik ranging from 1 to Ik. For instance, a vector is a tensor of order 1 and a matrix is a tensor of order 2. In this paper, variables take their values in the real field unless stated otherwise; however, all statements remain valid in the complex field. Bold lowercase letters (e.g. u) denote vectors, while matrices are represented by uppercase letters (e.g. A). Calligraphic uppercase letters denote higher-order tensors. The entries of arrays are scalars and are denoted by plain letters, such as u_i or T_{i,j,...,l}. An Nth-order tensor has the multilinearity property under a change of coordinate system. To fix the ideas, consider a third-order tensor T with entries T_{i1 i2 i3} and a change of coordinates defined by three square invertible matrices A, B and C with entries A_{i1 j1}, B_{i2 j2} and C_{i3 j3}, respectively. The tensor T can then be written entrywise as

T_{i1 i2 i3} = Σ_{j1} Σ_{j2} Σ_{j3} A_{i1 j1} B_{i2 j2} C_{i3 j3} S_{j1 j2 j3},   (2.1)

or, in compact form, as

T = S ×1 A ×2 B ×3 C,   (2.2)

where S is the core tensor of the representation.

The Tucker model (or Tucker product) given by (2.1)-(2.2) is a tensor representation commonly used in factor analysis, multi-way data analysis and psychometrics. The outer product of two vectors u and v is defined entrywise by (u ∘ v)_{ij} = u_i v_j. Given two tensors A and B having the same first dimension, one can define their mode-1 contraction product (or inner product) by summing the products of their entries over the shared first index. This product stems from ordinary matrix multiplication: if A and B are matrices, the ordinary matrix product AB is defined when the number of columns of A equals the number of rows of B (= p), and its element (AB)_{ij} = Σ_{k=1}^{p} A_{ik} B_{kj} is a contraction over that shared dimension. Similarly, a mode-p contraction product can be defined as long as the tensors A and B have the same p-th dimension. The Tucker product (2.1) is therefore also a contraction product and is often written in the compact form (2.2), where ×k denotes summation over the k-th index of the core tensor. This representation also induces higher-order rank factorizations and the higher-order singular value decomposition. Given two tensors A and B of order N with the same dimensions, their Hadamard product is defined elementwise by (A ∗ B)_{i1 ... iN} = A_{i1 ... iN} B_{i1 ... iN}; hence the Hadamard product of two tensors is a tensor of the same order and the same dimensions as A and B. As usual, the Kronecker product of two vectors u and v, of sizes m × 1 and n × 1 respectively, is defined as the mn × 1 vector containing all possible cross-products, u ⊗ v = (u_1 v_1, ..., u_1 v_n, u_2 v_1, ..., u_m v_n)^T. Definitions of the supersymmetric tensor, the tensor scalar product and the tensor Frobenius norm can be given as straightforward generalizations of the matrix case.
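To make these operations concrete, here is a small illustrative sketch in Python/NumPy (it is not part of the original paper; the function name tucker_product, the dimensions and the use of numpy.einsum are our own choices) that evaluates the Tucker product (2.1)-(2.2), a Hadamard product and a Kronecker product.

import numpy as np

def tucker_product(S, A, B, C):
    # Tucker product T = S x1 A x2 B x3 C, i.e.
    # T[i1,i2,i3] = sum_{j1,j2,j3} A[i1,j1] * B[i2,j2] * C[i3,j3] * S[j1,j2,j3]  -- equation (2.1)
    return np.einsum('aj,bk,cl,jkl->abc', A, B, C, S)

rng = np.random.default_rng(0)
S = rng.standard_normal((2, 3, 4))      # core tensor
A = rng.standard_normal((2, 2))         # square change-of-coordinate matrices
B = rng.standard_normal((3, 3))
C = rng.standard_normal((4, 4))

T = tucker_product(S, A, B, C)          # third-order tensor of shape (2, 3, 4)
H = T * T                               # Hadamard product: same order and dimensions
u = rng.standard_normal(3)
v = rng.standard_normal(4)
k = np.kron(u, v)                       # Kronecker product: vector of length 3 * 4 = 12
print(T.shape, H.shape, k.shape)        # (2, 3, 4) (2, 3, 4) (12,)

The same einsum pattern extends directly to mode-k contraction products by summing over the appropriate shared index.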

3. NUMERICAL ANALYSIS PRACTICE PROBLEMS

The following problems are representative of the approaches treated in class and of the kind of problems that will be examined.

Solving Equations

Problem 1. Suppose that f : R → R is continuous and that for a < b ∈ R, f(a) · f(b) < 0. Show that there is a c with a < c < b such that f(c) = 0.

Problem 2. Solve the equation x^5 − 3x^4 + 2x^3 − x^2 + x = 3. Solve using the Bisection method. Solve using the Newton-Raphson method. How many solutions are there?

Problem 3. Solve the equation x = cos x using the Bisection method and the Newton-Raphson method. How many solutions are there? Then solve the equation sin x = cos x by the same two methods. How many solutions are there?

Problem 4. Let h : R^n → R^n be a continuous function and let x0 ∈ R^n. Suppose that h^n(x0) → z as n → ∞. Show that h(z) = z.

Problem 7. Show that the Newton-Raphson method converges quadratically. That is, suppose that the fixed point is z and that the error of the nth iterate is e_n = x_n − z; show that |e_{n+1}| is bounded by a constant times |e_n|^2 for e_n small enough.
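A short Python sketch of the two root-finding methods used above may be helpful (this is our own illustration, not part of the original problem set; the brackets, starting values and tolerances are arbitrary assumptions). It applies the Bisection and Newton-Raphson methods to Problem 2 and to the equation x = cos x from Problem 3.

import math

def bisection(f, a, b, tol=1e-10):
    # Requires f(a) and f(b) to have opposite signs (the situation of Problem 1).
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m              # the root lies in [a, m]
        else:
            a, fa = m, fm      # the root lies in [m, b]
    return 0.5 * (a + b)

def newton(f, df, x0, tol=1e-12, max_iter=50):
    # Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n).
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Problem 2: x^5 - 3x^4 + 2x^3 - x^2 + x - 3 = 0 (sign change on [0, 4])
p  = lambda x: x**5 - 3*x**4 + 2*x**3 - x**2 + x - 3
dp = lambda x: 5*x**4 - 12*x**3 + 6*x**2 - 2*x + 1
print(bisection(p, 0.0, 4.0), newton(p, dp, 3.0))

# Problem 3: x = cos x, rewritten as g(x) = x - cos x = 0 (sign change on [0, 1])
g  = lambda x: x - math.cos(x)
dg = lambda x: 1 + math.sin(x)
print(bisection(g, 0.0, 1.0), newton(g, dg, 0.5))

For example, since g′(x) = 1 + sin x ≥ 0, the function x − cos x is monotone and the equation x = cos x has exactly one real solution.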

Differential Equations

Problem 1. Solve the given differential equation. Solve using Picard iteration for five iterations. Solve using the Taylor method of orders 3, 4 and 5. Solve using the Euler, modified Euler, Heun and Runge-Kutta methods. Compare the answers and the errors for each of these methods.

Problem 2. How would you go about solving the given differential equation with each of the methods listed in the previous problem? What changes would need to be made in the programs? Solve this problem as a linear differential equation using the linearode program. Solve on the interval [0, 1].

Problem 3. Consider the following differential equation. Solve on the interval [0, 1] using h = 0.1.

Problem 4. Compare the Euler, Heun and Runge-Kutta methods on [0, 1] using h = 0.1.

Problem 5. Use the Euler method to solve the following differential equation.
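The differential equations referred to in these problems are not reproduced above, so the following sketch (our own hedged example; the test equation dy/dt = −y with y(0) = 1 is borrowed from the first-order systems section, and the function names and step size are arbitrary choices) simply compares the Euler, Heun and classical fourth-order Runge-Kutta methods on [0, 1] with h = 0.1, in the spirit of Problem 4.

import math

def step_euler(f, t, y, h):
    return y + h * f(t, y)

def step_heun(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)                      # predictor-corrector (Heun)

def step_rk4(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)  # classical 4th-order Runge-Kutta

def solve(stepper, f, y0, t0, t1, h):
    t, y = t0, y0
    while t < t1 - 1e-12:                               # march from t0 to t1 in steps of h
        y = stepper(f, t, y, h)
        t += h
    return y

f = lambda t, y: -y                                     # assumed test equation dy/dt = -y, y(0) = 1
exact = math.exp(-1.0)                                  # exact value y(1) = e^(-1)
for name, stepper in [("Euler", step_euler), ("Heun", step_heun), ("RK4", step_rk4)]:
    y1 = solve(stepper, f, 1.0, 0.0, 1.0, 0.1)
    print(name, "y(1) =", y1, "error =", abs(y1 - exact))

On this test problem the error shrinks by several orders of magnitude from Euler to Heun to Runge-Kutta, which is the kind of comparison Problems 1 and 4 ask for.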

CONCLUSION

In this study we have discussed the motivation, context and development of the key areas of numerical multilinear algebra. We have examined tensor decompositions (for example, higher-order decompositions, the higher-order singular value decomposition, and canonical and pseudo-canonical decompositions), best rank-1 and rank-r approximations, related higher-order eigenvalue problems, multivariate polynomial optimization, and typical applications of tensors (for example, BSS, BD, SDP and other optimization problems). The growth of this field is currently greatly stimulated by applications in digital image restoration, signal processing, wireless communications, psychometrics, multi-way data analysis and higher-order statistics. So far only the first steps have been taken on several topics in the field, and both the numerical analysis and the numerical experience available remain minimal.

REFERENCES

1. Alon N, de la Vega W F, Kannan R, et al. (2003). Random sampling and approximation of MAX-CSPs. J Comput System Sci, 67: pp. 212–243
2. Bergman G M (1969). Ranks of tensors and change of base field. J Algebra, 11: pp. 613–621
3. Bienvenu G, Kopp L (1983). Optimality of high-resolution array processing using the eigensystem approach. IEEE Trans ASSP, 31: pp. 1235–1248
4. Cao X R, Liu R W (1996). General approach to blind source separation. IEEE Trans Signal Processing, 44: pp. 562–570
5. Cardoso J F (1991). Super-symmetric decomposition of the fourth-order cumulant tensor. Blind identification of more sources than sensors. In: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'91), Toronto, Canada.
6. Cardoso J F (1999). High-order contrasts for independent component analysis. Neural Computation, 11: pp. 157–192

7. Carroll J D, Chang J J (1970). Analysis of individual differences in multidimensional scaling via an N-way generalization of "Eckart-Young" decomposition. Psychometrika, 35: pp. 283–319
8. Comon P (1994). Tensor diagonalization, a useful tool in signal processing. In: Blanke M, Soderstrom T, eds. IFAC-SYSID, 10th IFAC Symposium on System Identification (Copenhagen, Denmark, July 4-6, Invited Session). Vol. 1, pp. 77–82
9. Comon P (1994). Independent component analysis, a new concept? Signal Processing, Special Issue on Higher-Order Statistics, 36: pp. 287–314
10. Comon P, Mourrain B (1996). Decomposition of quantics in sums of powers of linear forms. Signal Processing, Special Issue on Higher-Order Statistics, 53: pp. 96–107
11. Comon P (2000). Block methods for channel identification and source separation. In: IEEE Symposium on Adaptive Systems for Signal Processing, Communications and Control (Lake Louise, Alberta, Canada, Invited Plenary). pp. 87–92
12. Comon P, Chevalier P (2000). Blind source separation: Models, concepts, algorithms, and performance. In: Haykin S, ed. Unsupervised Adaptive Filtering, Vol. 1. New York: John Wiley.
13. Comon P (2002). Tensor decompositions: State of the art and applications. In: McWhirter J G, Proudler I K, eds. Mathematics in Signal Processing V. Oxford: Oxford University Press.
14. Comon P (2004). Canonical tensor decompositions. In: ARCC Workshop on Tensor Decompositions, American Institute of Mathematics (AIM), Palo Alto, California, USA.
15. Comon P, Golub G, Lim L H, et al. (2007). Symmetric tensors and symmetric tensor rank. SIAM J Matrix Anal Appl (in press).

Corresponding Author Bhupendra Singh Gaur*

Assistant Professor, Mathematics, Thakur Yugraj Singh Mahavidyalaya, Fatehpur bhupendra201276@gmail.com