Applications and Advances in Linear Algebra Research


by Byula Parsa*, Dr. Shalu Garg

- Published in Journal of Advances and Scholarly Researches in Allied Education, E-ISSN: 2230-7540

Volume 19, Issue No. 5, Oct 2022, Pages 361-365

Published by: Ignited Minds Journals


ABSTRACT

Linear algebra is a crucial area of study in mathematics. Systems of linear equations, matrices, determinants, vector spaces, and linear transformations all fall under this major subfield, which is concerned with mathematical structures that are closed under addition and scalar multiplication. Vectors, matrices, and linear transformations are its primary objects of study. Compared with other areas of mathematics, where new ideas and unanswered problems often inject fresh energy, linear algebra is generally considered to be thoroughly understood. Its worth comes from the many domains in which it has been applied, including mathematical physics, modern algebra, engineering, medicine, and image processing and analysis. This paper gives a description of linear algebra and of the ideas and structures related to it, and discusses an innovative and essential use of linear algebra: the application of principal components analysis to the compression of medical images.

KEYWORDS

Linear algebra, Systems of linear equations, Matrices, Determinants, Vector spaces, Linear transformations, Vectors, Branch of mathematics, Mathematical physics, Modern algebra

INTRODUCTION

There are at least two reasons why students from a wide range of academic backgrounds should make time for linear algebra. First, few fields of study can claim to be applicable to such a broad spectrum of disciplines, including not only other branches of mathematics (such as multivariable calculus, differential equations, and probability), but also physics, biology, chemistry, economics, finance, psychology, and sociology, as well as all areas of engineering. Second, the subject offers the sophomore student an outstanding opportunity to gain experience in handling abstract ideas and concepts (Santo, 2012).

One of the most well-established areas of mathematics, linear algebra is notable both for the depth of its theoretical roots and for the breadth of its practical applications in science and engineering. Solving systems of linear equations and computing determinants are examples of fundamental problems in linear algebra that have been studied for a long time (Sudipto, 2014). Leibniz discovered the formula for determinants in 1693, and in 1750 Cramer published a method for solving systems of linear equations that is now known as Cramer's Rule. These discoveries laid the first stones in the foundation that would eventually support the development of linear algebra and matrix theory.

Matrix calculus received significant attention early in the development of digital computers. Alan Turing and John von Neumann, two of the most important pioneers of computer science, made contributions that were essential to the development of computational linear algebra. In 1947, von Neumann and Goldstine studied the effect of rounding errors on the solution of linear equations (Thierry Joffrain, 2006). A year later, Turing devised a method for factoring a matrix into the product of a lower triangular matrix and an echelon matrix. Computational linear algebra now receives a great deal of attention because it is recognized as an essential tool in many branches of computing, including computer graphics, geometric modeling, robotics, and other fields that require computations which are time-consuming and difficult to carry out correctly by hand (Stephen, 1993).

Linear Algebra

Linear algebra is one of the most essential foundational areas of mathematics, having at least as great an impact as calculus; indeed, it provides a significant part of the machinery required to generalize calculus to vector-valued functions of several variables. Whether considered within mathematics or applied inside or outside of it, many of the problems studied in linear algebra admit precise and even algorithmic solutions, which makes them implementable on computers. This explains why so much computational use of computers involves this kind of algebra, and why it is so widely used (Robert, 2011). The concepts of linear algebra are applied to the study of a variety of geometric topics, and the notion of a linear transformation is an algebraic interpretation of the idea of a geometric transformation. Finally, a significant number of modern abstract algebraic constructions are built on linear algebra, which also frequently provides good examples of general ideas.

The subject matter of linear algebra can be partially explained by the two words in its title. You will have a better understanding of the term "linear" by the end of this course, and in all honesty, reaching that appreciation may be considered one of its most important goals. Until then, you can take it to mean anything that is "flat" or "straight." For example, in the xy-plane you may be used to describing straight lines (are there any other kinds?) as the set of solutions of an equation of the form y = mx + b, where the slope m and the y-intercept b are constants that together describe the line. If you have studied multivariate calculus, then you have probably encountered planes (Paolo Bientinesi, 2005). Living in three dimensions, with coordinates described by triples (x, y, z), a plane can be thought of as the set of solutions of an equation of the form ax + by + cz = d, where a, b, c, d are constants that together determine the plane. While planes are described as flat, lines in three dimensions can be described as straight. You will recall from a course in multivariate calculus that lines are sets of points described by equations such as x = 3t − 4, y = −7t + 2, z = 9t, where t is a parameter that can take on any value.
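
To make the connection between these "flat" objects and linear equations concrete, here is a minimal numerical sketch (our own illustration, not from the paper): it samples points on the parametric line above and recovers two planes ax + by + cz = d whose intersection contains that line, using an SVD null-space computation.

```python
import numpy as np

# Sample points on the parametric line x = 3t - 4, y = -7t + 2, z = 9t.
t = np.linspace(-2.0, 2.0, 5)
points = np.column_stack([3 * t - 4, -7 * t + 2, 9 * t])

# A line in three dimensions is the intersection of two planes
# a*x + b*y + c*z = d. Writing a plane as (a, b, c, -d) . (x, y, z, 1) = 0,
# its coefficients lie in the null space of the matrix of [x, y, z, 1] rows.
M = np.column_stack([points, np.ones_like(t)])
_, _, vt = np.linalg.svd(M)
planes = vt[-2:]                      # rank(M) = 2, so the null space is 2-D

print(np.allclose(M @ planes.T, 0))  # True: every point lies on both planes
```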

Scalars:

First we explain what is meant by scalars, and only then move on to examine vectors. Scalars are "numbers" of various kinds, together with the algebraic operations used to combine them. The primary examples we shall investigate are denoted Q, R, and C: the rational numbers, the real numbers, and the complex numbers, respectively. Mathematicians nevertheless work with a variety of other fields, such as the finite fields (also known as Galois fields), which are crucial in coding theory. A field of scalars, or simply a field, consists of a set F, each member of which is called a scalar, together with two arithmetic operations, addition (+) and multiplication (×), which combine each pair of scalars x, y ∈ F to give new scalars x + y ∈ F and x × y ∈ F.
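
As a concrete illustration of a field beyond Q, R, and C, the following sketch implements the two field operations in the finite (Galois) field GF(5); the choice of the prime 5 and the helper names are our own illustrative assumptions.

```python
# Arithmetic in the finite (Galois) field GF(5), i.e. the integers mod 5.
P = 5

def add(x, y):
    return (x + y) % P          # scalar addition in GF(5)

def mul(x, y):
    return (x * y) % P          # scalar multiplication in GF(5)

def inv(x):
    # Every nonzero scalar has a multiplicative inverse; for prime P,
    # Fermat's little theorem gives it as x^(P-2) mod P.
    assert x % P != 0
    return pow(x, P - 2, P)

print(add(3, 4))       # 2, since 7 mod 5 = 2
print(mul(3, 4))       # 2, since 12 mod 5 = 2
print(mul(3, inv(3)))  # 1, confirming the inverse
```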

Vector Algebra:

Here we introduce a few useful operations defined for free vectors.

Multiplication by a scalar. If we multiply a vector A by a scalar α, the result is a vector B = αA with magnitude |B| = |α||A|. The vector B is parallel to A and points in the same direction if α > 0; for α < 0, the vector B is parallel to A but points in the opposite direction (antiparallel) (Margaret, 2017).

When an arbitrary vector A is multiplied by the reciprocal of its magnitude, 1/|A|, we obtain a unit vector parallel to A. Several notations are in common use for a unit vector, such as Â, e_A, etc. Thus we have Â = A/|A|, A = |A| Â, and |Â| = 1.
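
The following short NumPy sketch (an illustration of ours; the sample vector and scalar are arbitrary) checks the properties just stated: |αA| = |α||A|, and Â = A/|A| with |Â| = 1.

```python
import numpy as np

# Scalar multiplication and the unit vector; the sample vector is arbitrary.
A = np.array([3.0, 4.0, 0.0])
alpha = -2.0

B = alpha * A                      # antiparallel to A, since alpha < 0
print(np.linalg.norm(B))           # 10.0 = |alpha| * |A|

A_hat = A / np.linalg.norm(A)      # unit vector parallel to A
print(np.linalg.norm(A_hat))       # 1.0
print(np.allclose(A, np.linalg.norm(A) * A_hat))  # True: A = |A| * A_hat
```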

BASIC CONCEPTS OF ALGEBRA

Definition 1: Let A be a nonempty set. A binary operation * on A is a function from A × A into A. To put it another way, a binary operation * on a set A is a rule of correspondence that assigns to each ordered pair (a, b) ∈ A × A some element of the set A. Example: the set of all integers under the + operation is a group, as the sketch below checks.
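
As a quick illustration of this example (our own sketch, with an arbitrary finite sample), the following code verifies the group axioms for the integers under + on a small range of elements; closure is automatic, since the sum of two integers is again an integer.

```python
import itertools

# Check the group axioms for (Z, +) on a small sample of integers.
sample = range(-3, 4)

# Associativity: (a + b) + c == a + (b + c).
assert all((a + b) + c == a + (b + c)
           for a, b, c in itertools.product(sample, repeat=3))

# Identity element 0: a + 0 == a.
assert all(a + 0 == a for a in sample)

# Inverses: every a has -a with a + (-a) == 0.
assert all(a + (-a) == 0 for a in sample)

print("group axioms hold on the sample")
```
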
Introduction to Eigenvalues and Eigenvectors
What we want to find out is whether the situation described below is even possible. Is it conceivable that, rather than simply obtaining a brand-new vector as a result of the multiplication, one might instead obtain Aη = λη? In other words, is it possible, at least for certain λ and η, to have matrix multiplication be the same as just multiplying the vector by a constant? Of course, we probably wouldn't be talking about this if the answer were no. So it is possible for this to happen; however, it won't happen for just any value of λ or η. If we do happen to have a λ and η for which this works (and they will always come in pairs), then we call λ an eigenvalue of A and η an eigenvector of A (Kazushige, 2008).
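
A short NumPy sketch makes the definition concrete; the 2×2 matrix here is an illustrative choice of ours, not the paper's, and each printed pair confirms Aη = λη.

```python
import numpy as np

# For each eigenpair, multiplying by A is the same as scaling by lambda.
A = np.array([[2.0, 7.0],
              [-1.0, -6.0]])

lambdas, etas = np.linalg.eig(A)          # columns of etas are eigenvectors
for lam, eta in zip(lambdas, etas.T):
    print(lam, np.allclose(A @ eta, lam * eta))   # True for each pair
```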

Example 2:

Find the eigenvalues and eigenvectors of the following matrix.

Solution:

The first step is to find the eigenvalues. This means working with the matrix A − λI; specifically, we need to find the values of λ at which its determinant is zero, that is, to solve det(A − λI) = 0.

So, it looks like this matrix has two simple eigenvalues, λ1 and λ2. The next step is to find the eigenvector corresponding to each of them. Substituting each eigenvalue into the system (A − λI)η = 0 and solving gives the eigenvectors, so let's do that. In each case we need to solve the resulting homogeneous system; recall that the standard method is to row-reduce the corresponding augmented matrix.

It is important to note that we can solve this system for either of the two variables; the eigenvector corresponding to each eigenvalue is then written in terms of the remaining free variable.

Summarizing, we obtain one eigenvalue-eigenvector pair for each of the two eigenvalues.

Note that the two eigenvectors are linearly independent, as predicted.
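
Because the matrix of Example 2 is not reproduced above, the following sketch reruns the same procedure on the illustrative matrix from the previous section; each step (characteristic polynomial, null-space eigenvectors, independence check) mirrors the text.

```python
import numpy as np

# Rerun the steps of Example 2 on the illustrative matrix used above.
A = np.array([[2.0, 7.0],
              [-1.0, -6.0]])

# Step 1: the characteristic polynomial det(A - lambda*I) and its roots.
coeffs = np.poly(A)             # [1, 4, -5], i.e. lambda^2 + 4*lambda - 5
lambdas = np.roots(coeffs)      # two simple eigenvalues: -5 and 1

# Step 2: for each eigenvalue solve (A - lambda*I) eta = 0; the null space
# is spanned by the right-singular vector of the zero singular value.
etas = []
for lam in lambdas:
    _, _, vt = np.linalg.svd(A - lam * np.eye(2))
    etas.append(vt[-1])
    print(lam, vt[-1], np.allclose(A @ vt[-1], lam * vt[-1]))

# The two eigenvectors are linearly independent, as the text predicts.
print(abs(np.linalg.det(np.column_stack(etas))) > 1e-12)   # True
```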

Applications of Eigenvectors and Eigenvalues

Eigenvalues and, on occasion, eigenvectors are used in a wide variety of engineering and scientific applications that involve matrices. Control theory, vibration analysis, advanced dynamics, electric circuits, and quantum mechanics are just a few of the application areas that benefit from them (Inderjit, 2004). A substantial share of these applications exploit eigenvalues and eigenvectors in the process of converting a given matrix into a diagonal matrix (Howard, 2005). In this section we cover that diagonalization process, and then demonstrate how it is extraordinarily helpful for solving coupled differential equations, as well as how eigenvalues and eigenvectors are applied in Principal Components Analysis.
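
Here is a minimal sketch of the Principal Components Analysis application named above, built on an eigendecomposition of a covariance matrix; the random data merely stands in for real samples such as image blocks, and all names and sizes are our own assumptions.

```python
import numpy as np

# PCA via an eigendecomposition of the sample covariance matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))           # 200 samples, 5 features
Xc = X - X.mean(axis=0)                 # center the data

cov = (Xc.T @ Xc) / (len(Xc) - 1)       # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # symmetric, so eigh applies

# Keep the k eigenvectors with the largest eigenvalues (most variance)
# and project: this is the compression step used for e.g. medical images.
k = 2
order = np.argsort(eigvals)[::-1][:k]
components = eigvecs[:, order]
compressed = Xc @ components            # reduced (200, k) representation
print(compressed.shape)
```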

Diagonalization of a matrix with distinct eigenvalues

The term "diagonalization" refers to the process of converting a matrix that is not diagonal into an analogous matrix that is diagonal and is thus easier to work with. A matrix A with unique eigenvalues will have eigenvectors that are not dependent on one another linearly. It is possible to demonstrate that, if we construct a matrix P in which the columns are these eigenvectors, then, et P ≠0 so that P −1 exists.

Systems of linear differential equations-Real, distinct eigenvalue

It is time to begin solving the systems of differential equations that have been presented, that is, systems of the form x′ = Ax. We have seen that solutions of such a system are of the form x = η e^(λt), where λ is an eigenvalue of A and η is a corresponding nonzero eigenvector. We are going to start by looking at the case where the two eigenvalues, λ1 and λ2, are real and distinct; to put it another way, the eigenvalues are simple and real. It is important to keep in mind that the eigenvectors of simple eigenvalues, and hence the corresponding solutions, are linearly independent of one another, so the matrix X whose columns are these solutions is nonsingular, and the two solutions therefore constitute a fundamental set of solutions (Field G, 2015). The general solution of this problem is then x(t) = c1 e^(λ1 t) η1 + c2 e^(λ2 t) η2.
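
The following sketch, under the same illustrative-matrix assumption as above (the initial condition is also our own arbitrary choice), assembles this general solution and spot-checks that it satisfies x′ = Ax.

```python
import numpy as np

# General solution of x' = A x for the illustrative matrix from above,
# which has real, distinct eigenvalues; x0 is an arbitrary initial value.
A = np.array([[2.0, 7.0],
              [-1.0, -6.0]])
x0 = np.array([1.0, 1.0])

lambdas, etas = np.linalg.eig(A)   # eigenpairs; columns of etas independent
c = np.linalg.solve(etas, x0)      # constants c1, c2 matching x(0) = x0

def x(t):
    # Superposition c1*e^(lambda1*t)*eta1 + c2*e^(lambda2*t)*eta2.
    return etas @ (c * np.exp(lambdas * t))

# Spot-check x' = A x at t = 0.5 with a central finite difference.
t, h = 0.5, 1e-6
print(np.allclose((x(t + h) - x(t - h)) / (2 * h), A @ x(t), atol=1e-4))
```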

CONCLUSION

In general, in addition to its uses within mathematics, linear algebra has vast applications in the majority of engineering, medical, and biological fields. Because new mathematical problems are encountered and new mathematical abilities are required, the application of mathematics grows along with the expansion of the scientific and technical fields. In this regard, linear algebra has been especially responsive to the development of computer science, because it plays a substantial role in a variety of significant computer science endeavors. The close relationship between the discrete character of matrix mathematics and digital technology is reflected in the wide range of applications of linear algebra found in computer science. In this paper we have looked at one essential application of linear algebra: principal components analysis. This method is widely used in the medical industry for compressing medical images while preserving their necessary and desirable characteristics. Face morphing, face detection, image transformations such as blurring and edge detection, image perspective removal, classification of tumors as malignant or benign, integer factorization, error-correcting codes, and secret sharing are some of the further applications that can be carried out with the help of linear algebra.

REFERENCES

1. Brian C. Gunter and Robert A. van de Geijn, Parallel Out-of-Core Computation and Updating of the QR Factorization, ACM Transactions on Mathematical Software (TOMS), 2005.
2. David, P. (2010). Linear Algebra: A Modern Introduction. United Kingdom: Cengage - Brooks/Cole.
3. Ed Anderson, Zhaojun Bai, James Demmel, Jack J. Dongarra, Jeremy Du Croz, Anne Greenbaum, Sven Hammarling, Alan E. McKenney, Susan Ostrouchov, and Danny Sorensen, LAPACK Users' Guide, SIAM, Philadelphia, 1992.
4. Field G. Van Zee and Robert A. van de Geijn, BLIS: A Framework for Rapidly Instantiating BLAS Functionality, ACM Transactions on Mathematical Software, Vol. 41, No. 3, June 2015.
5. Field G. Van Zee and Tyler M. Smith, Implementing High-performance Complex Matrix Multiplication via the 3M and 4M Methods, ACM Transactions on Mathematical Software, Vol. 44, No. 1, pp. 7:1-7:36, July 2017.
6. Gene H. Golub and Charles F. Van Loan, Matrix Computations, Fourth Edition, Johns Hopkins Press, 2013.
7. Gilbert, S. (2009). Introduction to Linear Algebra (4th ed.). United Kingdom: Wellesley-Cambridge Press.
8. Glazman, I. M. & Ljubic, Ju. I. (2006). Finite-Dimensional Linear Analysis. United Kingdom: Dover Publications.
9. Howard, A. (2005). Elementary Linear Algebra (Applications Version). USA: Wiley International.
10. Inderjit S. Dhillon and Beresford N. Parlett, Multiple Representations to Compute Orthogonal Eigenvectors of Symmetric Tridiagonal Matrices, Lin. Alg. Appl., Vol. 387, 2004.
11. Katta, G. (2014). Computational and Algorithmic Linear Algebra and n-Dimensional Geometry. USA: World Scientific Publishing.
12. Kazushige Goto and Robert van de Geijn, Anatomy of High-Performance Matrix Multiplication, ACM Transactions on Mathematical Software, Vol. 34, No. 3, Article 12, May 2008.
13. Margaret E. Myers and Robert A. van de Geijn, LAFF-On Programming for Correctness, self-published at ulaff.net, 2017.
14. Margaret E. Myers and Robert A. van de Geijn, Linear Algebra: Foundations to Frontiers, ulaff.net, 2014. A Massive Open Online Course offered on edX.
15. Jonathan, S. G. (2007). The Linear Algebra a Beginning Graduate Student Ought to Know. Germany: Springer.
16. Paolo Bientinesi, John A. Gunnels, Margaret E. Myers, Enrique S. Quintana-Orti, and Robert A. van de Geijn, The Science of Deriving Dense Linear Algebra Algorithms, ACM Transactions on Mathematical Software (TOMS), 2005.
17. Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, June M. Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, Charles Romine, and Henk Van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM Press, 1993.
18. Robert van de Geijn and Kazushige Goto, BLAS (Basic Linear Algebra Subprograms), Encyclopedia of Parallel Computing, Part 2, pp. 157-164, 2011.
20. Stephen J. Wright, A Collection of Problems for Which Gaussian Elimination with Partial Pivoting is Unstable, SIAM Journal on Scientific Computing, Vol. 14, No. 1, 1993.
21. Sudipto, B., & Anindya, R. (2014). Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science. USA: Chapman and Hall/CRC.
22. Thierry Joffrain, Tze Meng Low, Enrique S. Quintana-Orti, Robert van de Geijn, and Field G. Van Zee, Accumulating Householder Transformations, Revisited, ACM Transactions on Mathematical Software, Vol. 32, No. 2, 2006.

Corresponding Author: Byula Parsa*

Research Scholar, Shridhar University