Theoretical Analysis of Linear Algebra and Linear Equation: A Review
Applications and Importance of Linear Algebra in Mathematics and Various Fields
by Rahul Hooda*,
- Published in Journal of Advances and Scholarly Researches in Allied Education, E-ISSN: 2230-7540
Volume 16, Issue No. 4, Mar 2019, Pages 775 - 780 (6)
Published by: Ignited Minds Journals
ABSTRACT
Linear algebra is a principal branch of mathematics concerned with mathematical structures closed under the operations of addition and scalar multiplication; it includes the theory of systems of linear equations, matrices, determinants, vector spaces, and linear transformations. Linear algebra deals with vectors and matrices and, more generally, with vector spaces and linear maps. Unlike other parts of mathematics that are frequently invigorated by new ideas and unsolved problems, linear algebra is very well understood. Its value lies in its many applications, from mathematical physics to modern algebra, and in its use in the engineering and medical fields, for example in image processing and analysis.
KEYWORD
linear algebra, linear equation, mathematics, vectors, matrices, determinants, vector spaces, linear transformations, applications, image processing
INTRODUCTION
Linear algebra is the branch of mathematics concerned with the study of vectors, vector spaces (also called linear spaces), linear maps (also called linear transformations), and systems of linear equations. Vector spaces are a central theme in modern mathematics; thus, linear algebra is widely used in both abstract algebra and functional analysis. Linear algebra also has a concrete representation in analytic geometry and it is generalized in operator theory. It has extensive applications in the natural sciences and the social sciences, since nonlinear models can often be approximated by linear ones. Linear algebra had its beginnings in the study of vectors in Cartesian 2-space and 3-space. A vector, here, is a directed line segment, characterized by both its magnitude, represented by length, and its direction. Vectors can be used to represent physical entities such as forces, and they can be added to each other and multiplied with scalars, thus forming the first example of a real vector space. Modern linear algebra has been extended to consider spaces of arbitrary or infinite dimension. A vector space of dimension n is called an n-space. Most of the useful results from 2- and 3-space can be extended to these higher dimensional spaces. Although people cannot easily visualize vectors in n-space, such vectors or n-tuples are useful in representing data. Since vectors, as n-tuples, are ordered lists of n components, it is possible to summarize and manipulate data efficiently in this framework. For example, in economics, one can create and use, say, 8-dimensional vectors or 8-tuples to represent the Gross National Product of 8 countries. One can decide to display the GNP of 8 countries for a particular year, where the countries' order is specified, for example, (United States, United Kingdom, France, Germany, Spain, India, Japan, Australia), by using a vector (v1, v2, v3, v4, v5, v6, v7, v8) where each country's GNP is in its respective position. 
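The 8-tuple representation described above can be sketched in code; the country order follows the text, but the GNP figures and the helper name `component` are illustrative placeholders, not data from the source.

```python
# Represent the GNP of 8 countries (in an agreed, fixed order) as an 8-tuple.
# The numeric values below are hypothetical, for illustration only.
countries = ("United States", "United Kingdom", "France", "Germany",
             "Spain", "India", "Japan", "Australia")
gnp = (21.4, 2.8, 2.7, 3.8, 1.4, 2.9, 5.1, 1.4)

# Because the order is fixed, component-wise operations are meaningful,
# e.g. scaling a year's vector or adding two years' GNP vectors.
gnp_next_year = tuple(x * 1.02 for x in gnp)  # assumed 2% uniform growth
gnp_two_year_total = tuple(a + b for a, b in zip(gnp, gnp_next_year))

def component(vec, country):
    """Look up a country's component by its position in the fixed order."""
    return vec[countries.index(country)]
```

The point of the sketch is that an n-tuple only carries meaning together with the agreed ordering; the vector operations themselves are order-agnostic.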
A vector space (or linear space), as a purely abstract concept about which theorems are proved, is part of abstract algebra, and is well integrated into this discipline. Some striking examples of this are the group of invertible linear maps or matrices, and the ring of linear maps of a vector space. Linear algebra also plays an important part in analysis, notably in the description of higher order derivatives in vector analysis and the study of tensor products and alternating maps. In this abstract setting, the scalars with which an element of a vector space can be multiplied need not be numbers. The only requirement is that the scalars form a mathematical structure, called a field. In applications, this field is usually the field of real numbers or the field of complex numbers. Linear maps take elements from a linear space to another (or to itself), in a manner that is compatible with the addition and scalar multiplication given on the vector space(s). The set of all such transformations is itself a vector space. If a basis for a vector space is fixed, every linear transformation can be represented by a table of numbers called a matrix. The detailed study of the properties of and algorithms acting on matrices, including determinants and eigenvectors, is considered to be part of linear algebra. For example, differential calculus does a great deal with linear approximation to functions. The difference from nonlinear problems is very important in practice. The general method of finding a linear way to look at a problem, expressing this in terms of linear algebra, and solving it, if need be by matrix calculations, is one of the most generally applicable in mathematics.
LINEAR ALGEBRA
A line passing through the origin (blue, thick) in R^3 is a linear subspace, a common object of study in linear algebra. Linear algebra is a branch of mathematics concerned with the study of vectors, vector spaces (also called linear spaces), linear maps (also called linear transformations), and systems of linear equations. Vector spaces are a central theme in modern mathematics; thus, linear algebra is widely used in both abstract algebra and functional analysis. Linear algebra also has a concrete representation in analytic geometry and it is generalized in operator theory. It has extensive applications in the natural sciences and the social sciences, since nonlinear models can often be approximated by linear ones. Many of the basic tools of linear algebra, particularly those concerned with the solution of systems of linear equations, date to antiquity (see, e.g., the history of Gaussian elimination), although many objects were not isolated and considered in their own right until the 1600s and 1700s (see the history of determinants). The method of least squares, first used by Gauss in the 1790s, is an early and significant application of the ideas of linear algebra. The subject began to take its modern form in the mid-19th century, which saw many notions and methods of previous centuries abstracted and generalized as the beginnings of abstract algebra. Matrices and tensors were introduced as abstract mathematical objects and well studied by the turn of the 20th century. The use of these objects in special relativity, statistics, and quantum mechanics did much to spread the subject beyond pure mathematics. Linear algebra had its beginnings in the study of vectors in Cartesian 2-space and 3-space. A vector, here, is a directed line segment, characterized by both its magnitude (also called length or norm) and its direction. The zero vector is an exception; it has zero magnitude and no direction.
Vectors can be used to represent physical entities such as forces, and they can be added to each other and multiplied by scalars, thus forming the first example of a real vector space, where a distinction is made between "scalars", in this case real numbers, and "vectors". Modern linear algebra extends these ideas to spaces of arbitrary or infinite dimension, and most of the useful results from 2- and 3-space carry over to these higher-dimensional spaces. Although people cannot easily visualize vectors in n-space, such vectors or n-tuples are useful in representing data. Since vectors, as n-tuples, consist of n ordered components, data can be efficiently summarized and manipulated in this framework. For example, in economics, one can create and use, say, 8-dimensional vectors or 8-tuples to represent the gross national product of 8 countries. One can decide to display the GNP of 8 countries for a particular year, where the countries' order is specified, for example, (United States, United Kingdom, Armenia, Germany, Brazil, India, Japan, Bangladesh), by using a vector (v1, v2, v3, v4, v5, v6, v7, v8) where each country's GNP is in its respective position. A vector space (or linear space), as a purely abstract concept about which theorems are proved, is part of abstract algebra, and is well integrated into this discipline. Some striking examples of this are the group of invertible linear maps or matrices, and the ring of linear maps of a vector space. Linear algebra also plays an important part in analysis, notably in the description of higher order derivatives in vector analysis and the study of tensor products and alternating maps. In this abstract setting, the scalars with which an element of a vector space can be multiplied need not be numbers. The only requirement is that the scalars form a mathematical structure, called a field. In applications, this field is usually the field of real numbers or the field of complex numbers.
Linear maps take elements from a linear space to another (or to itself), in a manner that is compatible with the addition and scalar multiplication given on the vector space(s). The set of all such transformations is itself a vector space. If a basis for a vector space is fixed, every linear transform can be represented by a table of numbers called a matrix. The detailed study of the properties of and algorithms acting on matrices, including determinants and eigenvectors, is considered to be part of linear algebra. One can say quite simply that the linear problems of mathematics - those that exhibit linearity in their behavior - are those most likely to be solved. For example differential calculus does a great deal with linear approximation to functions. The difference from nonlinear problems is very important in practice.
Some useful theorems-
• Every vector space has a basis.[1]
• A matrix is invertible if and only if its determinant is nonzero.
• A matrix is invertible if and only if the linear map represented by the matrix is an isomorphism.
• If a square matrix has a left inverse or a right inverse then it is invertible (see invertible matrix for other equivalent statements).
• A matrix is positive semidefinite if and only if each of its eigenvalues is greater than or equal to zero.
• A matrix is positive definite if and only if each of its eigenvalues is greater than zero.
• An n×n matrix A is diagonalizable (i.e. there exists an invertible matrix P and a diagonal matrix D such that A = PDP^(-1)) if and only if it has n linearly independent eigenvectors.
• The spectral theorem states that a matrix is orthogonally diagonalizable if and only if it is symmetric.
For more information regarding the invertibility of a matrix, consult the invertible matrix article.
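The determinant criterion in the first theorem can be made concrete for the 2×2 case; the helper names `det2` and `inverse2` below are our own, and the sketch uses the standard closed-form inverse of a 2×2 matrix.

```python
# For a 2x2 matrix [[a, b], [c, d]], the determinant is ad - bc, and the
# matrix is invertible iff this is nonzero; the inverse is then
# (1/det) * [[d, -b], [-c, a]].
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def inverse2(m):
    d = det2(m)
    if d == 0:
        raise ValueError("matrix is singular (determinant is zero)")
    (a, b), (c, dd) = m
    return [[dd / d, -b / d], [-c / d, a / d]]

A = [[4.0, 7.0], [2.0, 6.0]]
A_inv = inverse2(A)  # det(A) = 4*6 - 7*2 = 10, so A is invertible
```

For larger matrices one would use an elimination-based algorithm rather than a cofactor formula, but the invertibility criterion is the same.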
LINEAR EQUATION
A linear equation is an algebraic equation in which each term is either a constant or the product of a constant and (the first power of) a single variable. If you plot a linear equation, the graph will be a straight line. If it is not a straight line, the equation is nonlinear. Linear equations can have one or more variables. Linear equations occur with great regularity in applied mathematics. While they arise quite naturally when modeling many phenomena, they are particularly useful since many non-linear equations may be reduced to linear equations by assuming that quantities of interest vary to only a small extent from some "background" state.
Linear equations in two variables- A common form of a linear equation in the two variables x and y is

y = mx + b

where m and b designate constants. The origin of the name "linear" comes from the fact that the set of solutions of such an equation forms a straight line in the plane. In this form, m is the slope of the line and b is the y-coordinate of the point where the line crosses the y-axis, otherwise known as the y-intercept. Since the terms of a linear equation cannot contain products of distinct or equal variables, nor any power (other than 1) or other function of a variable, equations involving terms such as xy, x^2, y^(1/3), and sin(x) are nonlinear. Forms for 2D linear equations – Linear equations can be rewritten using the laws of elementary algebra into several different forms. These equations are often referred to as the "equations of the straight line". In what follows x, y and t are variables; other letters represent constants (fixed numbers).
General form –

Ax + By + C = 0

where A and B are not both equal to zero. The equation is usually written so that A ≥ 0, by convention. The graph of the equation is a straight line, and every straight line can be represented by an equation in the above form. If A is nonzero, then the x-intercept, that is the x-coordinate of the point where the graph crosses the x-axis (y is zero), is −C/A. If B is nonzero, then the y-intercept, that is the y-coordinate of the point where the graph crosses the y-axis (x is zero), is −C/B, and the slope of the line is −A/B.
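The intercept and slope formulas above can be checked with a short sketch; the helper name `line_properties` and the sample coefficients are illustrative, not from the source.

```python
# From the general form A*x + B*y + C = 0, recover the intercepts and slope:
#   x-intercept = -C/A  (if A != 0)
#   y-intercept = -C/B, slope = -A/B  (if B != 0)
def line_properties(A, B, C):
    props = {}
    if A != 0:
        props["x_intercept"] = -C / A
    if B != 0:
        props["y_intercept"] = -C / B
        props["slope"] = -A / B
    return props

p = line_properties(2.0, -3.0, 6.0)  # the line 2x - 3y + 6 = 0
```

For a vertical line B = 0 and the slope is undefined, which is why the function only reports a slope when B is nonzero.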
Standard form –

Ax + By = C

where A, B, and C are integers whose greatest common factor is 1, A and B are not both equal to zero, and A is non-negative (and if A = 0 then B has to be positive). The standard form can be converted to the general form, but not always to all the other forms if A or B is zero. It is worth noting that, while this form occurs frequently in school-level US textbooks, the integer restriction makes limited mathematical sense since most lines cannot be described by such equations. For instance, the line x + y = √2 cannot be described by a linear equation with integer coefficients since √2 is irrational.
METHODS
In mathematics, a matrix (plural matrices, or less commonly matrixes) is a rectangular array of numbers. Matrices consisting of only one column or row are called vectors, while higher-dimensional, e.g. three-dimensional, arrays of numbers are called tensors. Matrices can be added and subtracted entrywise, and multiplied according to a rule corresponding to composition of linear transformations. A common use of matrices is to represent linear transformations, which are higher-dimensional analogs of linear functions of the form f(x) = cx, where c is a constant. Matrices can also keep track of the coefficients in a system of linear equations. For a square matrix, the determinant and inverse matrix (when it exists) govern the behavior of solutions to the corresponding system of linear equations, and eigenvalues and eigenvectors provide insight into the geometry of the associated linear transformation. Matrices find many applications. Physics makes use of them in various domains, for example in geometrical optics and matrix mechanics. The latter also led to studying in more detail matrices with an infinite number of rows and columns. Matrices encoding distances between nodes in a graph, such as cities connected by roads, are used in graph theory, and computer graphics uses matrices to encode projections of three-dimensional space onto a two-dimensional screen. Matrix calculus generalizes classical analytical notions such as derivatives of functions or exponentials to matrices. The latter is a recurring need in solving ordinary differential equations. Serialism and dodecaphony are musical movements of the 20th century that utilize a square mathematical matrix to determine the pattern of musical intervals.
Definition-
A matrix is a rectangular arrangement of numbers.[1] It may be denoted using box brackets or, alternatively, parentheses. The horizontal and vertical lines in a matrix are called rows and columns, respectively. The numbers in the matrix are called its entries. To specify a matrix's size, a matrix with m rows and n columns is called an m-by-n matrix or m × n matrix, while m and n are called its dimensions. A matrix where one of the dimensions equals one is also called a vector, and may be interpreted as an element of real coordinate space. An m × 1 matrix (one column and m rows) is called a column vector and a 1 × n matrix (one row and n columns) is called a row vector. Most of this article focuses on real and complex matrices, i.e., matrices whose entries are real or complex numbers. More general types of entries are discussed below.
Notation-
The entry that lies in the i-th row and the j-th column of a matrix is typically referred to as the i,j, (i,j), or (i,j)th entry of the matrix. For example, the (2,3) entry of a matrix X is the entry in the second row and third column of X. Matrices are usually denoted using upper-case letters, while the corresponding lower-case letters, with two subscript indices, represent the entries. For example, the (i, j)th entry of a matrix A is most commonly written as ai,j. Alternative notations for that entry are A[i,j] or Ai,j. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface upright (non-italic), to further distinguish matrices from other variables. An asterisk is commonly used to refer to all of the rows or columns in a matrix. For example, ai,∗ refers to the i-th row of A, and a∗,j refers to the j-th column of A. The set of all m-by-n matrices is denoted M(m, n). A common shorthand is A = [ai,j]i=1,...,m; j=1,...,n or, more briefly, A = [ai,j]m×n to define an m × n matrix A. In this case, the entries ai,j are defined separately for all integers 1 ≤ i ≤ m and 1 ≤ j ≤ n; for example, the 2-by-2 matrix with entries ai,j = i − j is specified by A = [i − j]i=1,2; j=1,2. Some programming languages start the numbering of rows and columns at zero, in which case the entries of an m-by-n matrix are indexed by 0 ≤ i ≤ m − 1 and 0 ≤ j ≤ n − 1.[2] This article follows the enumeration starting from 1.
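The shorthand A = [i − j] can be made concrete with a small sketch; the helper `build_matrix` is our own illustration of defining entries by a formula in 1-based indices.

```python
# Build an m x n matrix whose (i, j) entry is f(i, j), with indices
# starting at 1, matching the text's convention.
def build_matrix(m, n, f):
    return [[f(i, j) for j in range(1, n + 1)] for i in range(1, m + 1)]

# The text's example: the 2-by-2 matrix A = [i - j] for i, j = 1, 2.
A = build_matrix(2, 2, lambda i, j: i - j)
# A == [[0, -1], [1, 0]]
```

Note that Python lists are themselves 0-indexed, so the 1-based convention lives only in the formula's arguments, which is exactly the indexing caveat the text raises.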
Basic operations-
There are a number of operations that can be applied to modify matrices called matrix addition, scalar multiplication and transposition.[3] These form the basic techniques to deal with matrices.
Familiar properties of numbers extend to these operations on matrices: for example, addition is commutative, i.e. the matrix sum does not depend on the order of the summands: A + B = B + A.[4] The transpose is compatible with addition and scalar multiplication, as expressed by (cA)T = c(AT) and (A + B)T = AT + BT. Finally, (AT)T = A.
Linear equations-
A particular case of matrix multiplication is tightly linked to linear equations: if x designates a column vector (i.e. an n×1 matrix) of n variables x1, x2, ..., xn, and A is an m-by-n matrix, then the matrix equation Ax = b, where b is some m×1 column vector, is equivalent to the system of linear equations

A1,1x1 + A1,2x2 + ... + A1,nxn = b1
...
Am,1x1 + Am,2x2 + ... + Am,nxn = bm.[8]

This way, matrices can be used to compactly write and deal with multiple linear equations, i.e. systems of linear equations.
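The equivalence between Ax = b and the system above can be sketched by computing the matrix-vector product row by row; the coefficients and the helper name `matvec` below are illustrative.

```python
# Row i of A holds the coefficients of equation i, so multiplying A by x
# reproduces the left-hand sides of the system.
def matvec(A, x):
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1.0, 2.0], [3.0, -1.0]]   # coefficient matrix (illustrative)
x = [2.0, 1.0]                  # a candidate assignment of the variables
b = matvec(A, x)                # left-hand sides: [1*2 + 2*1, 3*2 - 1*1]
```

Checking whether a given x solves the system then amounts to comparing matvec(A, x) with b, which is the compactness the text describes.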
RESULTS
In mathematics, a system of linear equations (or linear system) is a collection of linear equations involving the same set of variables; for example, three equations in the three variables x, y, and z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied, that is, such that every equation in the system becomes valid at once. Algorithms for finding the solutions are an important part of numerical linear algebra, and such methods play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
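As a sketch of such a solution algorithm, here is a minimal Gaussian-elimination solver with partial pivoting for a square system Ax = b; it is an illustration under the assumption of a unique solution, not production numerical code.

```python
# Solve the square linear system Ax = b by Gaussian elimination with
# partial pivoting, followed by back substitution.
def solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for col in range(n):
        # Partial pivoting: move the largest-magnitude entry in this
        # column onto the diagonal to improve numerical stability.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        if M[col][col] == 0:
            raise ValueError("singular system")
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

A = [[2.0, 1.0], [1.0, 3.0]]  # the system 2x + y = 5, x + 3y = 10
b = [5.0, 10.0]
x = solve(A, b)
```

Real numerical libraries add scaling, error estimates, and factorization reuse on top of this basic scheme, but the elimination-then-substitute structure is the same.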
CONCLUSION
Linear algebra has broad uses and applications in the engineering, medical, and biological fields. As science and engineering disciplines grow, so does the use of mathematics, since new mathematical problems are encountered and new mathematical skills are required. In this respect, linear algebra has been particularly responsive to computer science, as it plays a significant role in many important computer science undertakings. The broad utility of linear algebra to computer science reflects the deep connection between the discrete nature of matrix mathematics and digital technology. In this review we have seen one important application of linear algebra, principal component analysis. This technique is used broadly in the medical field for compressing medical images while keeping the good and needed features, but it is far from the only application of linear algebra in that field. Linear algebra provides many other concepts that are crucial to many areas of computer science, including graphics, image processing, cryptography, machine learning, computer vision, optimization, graph algorithms, quantum computation, computational biology, information retrieval and web search. Among these applications are face morphing, face detection, image transformations such as blurring and edge detection, image perspective removal, classification of tumors as malignant or benign, integer factorization, error-correcting codes, and secret-sharing.
REFERENCES
1. Baker, Andrew J. (2003). Matrix Groups: An Introduction to Lie Group Theory. Berlin, New York: Springer-Verlag. ISBN 978-1-85233-470-3
2. Bau III, David; Trefethen, Lloyd N. (1997). Numerical Linear Algebra. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 978-0-89871-361-9
4. Bretscher, Otto (2005). Linear Algebra with Applications (3rd ed.). Prentice Hall.
5. Britton, S. & Henderson, J. (2009). Linear algebra revisited: An attempt to understand students' conceptual difficulties. International Journal of Mathematical Education in Science and Technology, 40(7): pp. 963-974.
6. David, C. (2005). Linear Algebra and Its Applications. USA: Addison Wesley.
7. Dorier, J.-L., editor (2000). On the Teaching of Linear Algebra. Dordrecht: Kluwer Academic Publishers.
8. Gerald, F. & Dianne, H. (2004). Practical Linear Algebra: A Geometry Toolbox. United Kingdom: AK Peters.
9. Godsil, Chris; Royle, Gordon (2004). Algebraic Graph Theory, Graduate Texts in Mathematics, 207. Berlin, New York: Springer-Verlag. ISBN 978-0-387-95220-8
10. Henry, R. (2010). A Modern Introduction to Linear Algebra. New York, NY: CRC Press.
11. Howard, A. (2005). Elementary Linear Algebra (Applications Version). USA: Wiley International.
12. Johnathan, S. G. (1995). Foundations of Linear Algebra. Netherlands: Kluwer.
13. Johnathan, S. G. (2007). The Linear Algebra a Beginning Graduate Student Ought to Know. Germany: Springer.
14. Katta, G. (2014). Computational and Algorithmic Linear Algebra and n-Dimensional Geometry. USA: World Scientific Publishing.
15. Lang, Serge (2002). Algebra, Graduate Texts in Mathematics, 211 (Revised 3rd ed.). New York: Springer-Verlag. MR1878556. ISBN 978-0-387-95385-4
16. Mirsky, Leonid (1990). An Introduction to Linear Algebra. Courier Dover Publications. ISBN 978-0-486-66434-7. http://books.google.de/books?id=ULMmheb26ZcC&pg=PA1&dq=linear+algebra+determinant&lr=lang_en&as_brr=3&as_pt=ALLTYPES#PPA16,M1
17. ISBN 978-0-8218-4153-2
18. Sheldon, A. (2004). Linear Algebra Done Right. Germany: Springer.
19. Stephen, H.; Insel, A.J. & Lawrence, E. (2002). Linear Algebra. New York, NY: Prentice Hall.
20. Sudipto, B. & Anindya, R. (2014). Linear Algebra and Matrix Analysis for Statistics, Texts in Statistical Science. USA: Chapman and Hall/CRC.
21. Thomas, S. (2006). Applied Linear Algebra and Matrix Analysis, Undergraduate Texts in Mathematics. Germany: Springer.
Corresponding Author Rahul Hooda*
Assistant Professor of Mathematics, A.I.J.H.M. College Rohtak, Haryana, India rahulshooda88@gmail.com