The Estimation For Recent Trends In Special Partial Differential Equations and Its Applications

Advancements in solving partial differential equations

by Aabid Mushtaq*,

- Published in Journal of Advances in Science and Technology, E-ISSN: 2230-9659

Volume 8, Issue No. 15, Nov 2014

Published by: Ignited Minds Journals


ABSTRACT

In the last ten years, there has been significant improvement and growth in tools that aid the development of finite element methods for solving partial differential equations. These tools assist the user in transforming a weak form of a differential equation into a computable solution. Despite these advancements, solving a differential equation remains challenging. Not only are there many possible weak forms for a particular problem, but the most accurate or most efficient form depends on the problem's structure. Requiring a user to generate a weak form by hand creates a significant hurdle for someone who understands a model, but does not know how to solve it. In this article a symmetry group of scaling transformations is determined for a partial differential equation of fractional order α, containing among its particular cases the diffusion equation, the wave equation, and the fractional diffusion-wave equation. The conventional differential quadrature (DQ) method is limited in its application to regular regions by using functional values along a mesh line to approximate derivatives. In this work, we extend the idea of the DQ method to a general case. In other words, any spatial derivative is approximated by a linear weighted sum of all the functional values in the whole physical domain.

KEYWORDS

finite element methods, partial differential equations, weak form, differential quadrature method, scaling transformations

INTRODUCTION

The key defining property of a partial differential equation (PDE) is that there is more than one independent variable. There is a dependent variable u that is an unknown function of these variables. We will often denote its derivatives by subscripts; thus u_x = \partial u/\partial x, u_{xy} = \partial^2 u/\partial x \partial y, and so on. A PDE is an identity that relates the independent variables, the dependent variable u, and the partial derivatives of u. It can be written as

F(x, y, u, u_x, u_y) = 0        (1)

This is the most general PDE in two independent variables of first order. The order of an equation is the highest derivative that appears. The most general second-order PDE in two independent variables is

F(x, y, u, u_x, u_y, u_{xx}, u_{xy}, u_{yy}) = 0        (2)

A solution of a PDE is a function u(x, y, ...) that satisfies the equation identically, at least in some region of the variables. When solving an ordinary differential equation (ODE), one sometimes reverses the roles of the independent and the dependent variables, for instance for the separable ODE du/dx = u^3. For PDEs, the distinction between the independent variables and the dependent variable (the unknown) is always maintained. Some examples of PDEs (all of which occur in physical theory) are:

1. u_x + u_y = 0 (transport)
2. u_x + y u_y = 0 (transport)
3. u_x + u u_y = 0 (shock wave)
4. u_{xx} + u_{yy} = 0 (Laplace's equation)
5. u_{tt} - u_{xx} + u^3 = 0 (wave with interaction)
6. u_t + u u_x + u_{xxx} = 0 (dispersive wave)
7. u_{tt} + u_{xxxx} = 0 (vibrating bar)
8. u_t - i u_{xx} = 0 (i = \sqrt{-1}) (quantum mechanics)

Each of these has two independent variables, written either as x and y or as x and t. Examples 1 to 3 have order one; 4, 5, and 8 have order two; 6 has order three; and 7 has order four. Examples 3, 5, and 6 are distinguished from the others in that they are not "linear." We shall now explain this concept. Linearity means the following. Write the equation in the form \mathcal{L}u = 0, where \mathcal{L} is an operator. That is, if v is any function, \mathcal{L}v is a new function. For instance, \mathcal{L} = \partial/\partial x is the operator that takes v into its partial derivative v_x. In Example 2, the operator is \mathcal{L} = \partial/\partial x + y \partial/\partial y (so that \mathcal{L}u = u_x + y u_y). The definition we want for linearity is

\mathcal{L}(u + v) = \mathcal{L}u + \mathcal{L}v, \qquad \mathcal{L}(cu) = c\,\mathcal{L}u        (3)

for any functions u, v and any constant c. Whenever (3) holds (for all choices of u, v, and c), \mathcal{L} is called a linear operator. The equation

\mathcal{L}u = 0        (4)

is called linear if \mathcal{L} is a linear operator. Equation (4) is called a homogeneous linear equation. The equation

\mathcal{L}u = g        (5)

where g \neq 0 is a given function of the independent variables, is called an inhomogeneous linear equation. For instance, the equation

(\cos xy^2)\,u_x - y^2 u_y = \tan(x^2 + y^2)        (6)

is an inhomogeneous linear equation. As you can easily verify, five of the eight equations above are linear as well as homogeneous. Example 5, on the other hand, is not linear because although (u + v)_{tt} and (u + v)_{xx} satisfy property (3), the cubic term does not: (u + v)^3 = u^3 + 3u^2v + 3uv^2 + v^3 \neq u^3 + v^3. The advantage of linearity for the equation \mathcal{L}u = 0 is that if u and v are both solutions, so is u + v. If u_1, ..., u_n are all solutions, so is any linear combination c_1u_1 + ... + c_nu_n with constant coefficients. (This is sometimes called the superposition principle.) Another consequence of linearity is that if you add a homogeneous solution (a solution of (4)) to an inhomogeneous solution (a solution of (5)), you get an inhomogeneous solution. (Why?) The mathematical structure that deals with linear combinations and linear operators is the vector space. We'll study, almost exclusively, linear systems with constant coefficients. Recall that for ODEs the general solution is such a linear combination, in which the coefficients are the arbitrary constants; for an ODE of order m, you get m arbitrary constants.
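The superposition principle and the failure of linearity in Example 5 can be checked symbolically. The following sketch is an illustration added here (using the SymPy library; the particular harmonic functions are arbitrary choices): it verifies property (3) for the Laplace operator and shows that the cubic term violates it.

    # Illustrative check: superposition holds for the linear Laplace operator,
    # while the cubic term of Example 5 violates the linearity property (3).
    import sympy as sp

    x, y, t, c = sp.symbols('x y t c')

    def laplace(u):
        # L(u) = u_xx + u_yy  (linear operator)
        return sp.diff(u, x, 2) + sp.diff(u, y, 2)

    def wave_cubic(u):
        # N(u) = u_tt - u_xx + u^3  (Example 5, nonlinear because of the cube)
        return sp.diff(u, t, 2) - sp.diff(u, x, 2) + u**3

    # Two solutions of Laplace's equation.
    u1 = x**2 - y**2
    u2 = sp.exp(x) * sp.sin(y)

    # Any linear combination of solutions is again a solution.
    print(sp.simplify(laplace(u1 + c*u2)))                                     # 0

    # Property (3) fails for the nonlinear operator:
    print(sp.simplify(wave_cubic(u1 + u2) - wave_cubic(u1) - wave_cubic(u2)))  # nonzero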

BASIC DEFINITIONS

To start with, partial differential equations, just like ordinary differential or integral equations, are functional equations. That means that the unknown, or unknowns, we are trying to determine are functions. In the case of partial differential equations (PDE) these functions are to be determined from equations which involve, in addition to the usual operations of addition and multiplication, partial derivatives of the functions. The simplest example, which has already been described in this compendium, is the Laplace equation in R^3,

\Delta u = 0        (7)

where \Delta u = \partial_x^2 u + \partial_y^2 u + \partial_z^2 u is the Laplacian of u. The other two examples, described among the fundamental mathematical definitions, are the heat equation, with k = 1,

\partial_t u - \Delta u = 0        (8)

and the wave equation with k = 1,

\partial_t^2 u - \Delta u = 0        (9)

In these last two cases one is asked to find a function u, depending on the variables t, x, y, z, which verifies the corresponding equations. Observe that both (8) and (9) involve the symbol \Delta, which has the same meaning as in the first equation; it acts only on the spatial variables x, y, z.


Both equations are called evolution equations, simply because they are supposed to describe the change relative to the time parameter t of a particular physical object. Observe that (7) can be interpreted as a particular case of both (8) and (9): indeed, solutions of either (8) or (9) which are independent of t verify (7). A variation of (9), important in modern particle physics, is the Klein-Gordon equation, describing the free evolution, i.e. in the absence of interactions, of a massive particle,

\partial_t^2 u - \Delta u + m^2 u = 0        (10)

Another basic equation of mathematical physics, which describes the time evolution of a quantum particle, is the Schrödinger equation,

i\,\partial_t u + k\,\Delta u = 0        (11)

with u a function of the same variables (t, x, y, z) with values in the complex numbers, and where k = \hbar/2m, with \hbar corresponding to the Planck constant and m to the mass of the particle. As with our other two evolution equations, (8) and (9) above, we simplify our discussion by taking k = 1.
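What it means for a function to verify one of these equations identically can be checked directly. The short sketch below is an illustration added here (restricted to one spatial dimension, with candidate solutions chosen only for the example): it confirms that u(t, x) = e^{-t} sin x solves the heat equation u_t = u_{xx} and that u(t, x) = sin(x - t) solves the wave equation u_{tt} = u_{xx}.

    # Illustrative check in one spatial dimension (k = 1): substitute candidate
    # solutions into the heat and wave equations and verify the residuals vanish.
    import sympy as sp

    t, x = sp.symbols('t x')

    u_heat = sp.exp(-t) * sp.sin(x)      # candidate solution of u_t = u_xx
    u_wave = sp.sin(x - t)               # candidate solution of u_tt = u_xx

    heat_residual = sp.diff(u_heat, t) - sp.diff(u_heat, x, 2)
    wave_residual = sp.diff(u_wave, t, 2) - sp.diff(u_wave, x, 2)

    print(sp.simplify(heat_residual))    # 0
    print(sp.simplify(wave_residual))    # 0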

Observe that all three PDE mentioned above satisfy the following simple property, called the principle of superposition: if u_1, u_2 are solutions of the equation, so is any linear combination \lambda_1 u_1 + \lambda_2 u_2, where \lambda_1 and \lambda_2 are arbitrary real numbers. Such equations are called linear. The following equation, called the minimal surfaces equation, is manifestly not linear. It refers to functions u = u(x, y) which verify

(1 + u_y^2)\,u_{xx} - 2\,u_x u_y\,u_{xy} + (1 + u_x^2)\,u_{yy} = 0        (12)

Here u_x and u_y are shorthand notations for the partial derivatives \partial u/\partial x and \partial u/\partial y. The equations we have encountered so far can be written in the form P(u) = 0, where P is a differential operator applied to u. A differential operator is simply a rule which takes functions u, defined in R^n or an open subset of it, into functions P(u) by performing the following operations:

• We can take partial derivatives \partial_i u = \partial u/\partial x_i relative to the variables x = (x_1, x_2, ..., x_n). One also allows higher partial derivatives of u, such as the mixed second partials \partial_i \partial_j u. The associated differential operator for (8) is P = \partial_t - \Delta and that of (9) is P = \partial_t^2 - \Delta.

• We can add and multiply u and its partial derivatives between themselves, as well as with given functions of the variables x. Composition with given functions may also appear.

In the case of the equation (7) the associated differential operator is \Delta = \partial_1^2 + \partial_2^2 + \partial_3^2 = e^{ij}\partial_i\partial_j, where e^{ij} is the diagonal 3x3 matrix with entries (1, 1, 1) corresponding to the Euclidean scalar product of vectors X, Y in R^3,

\langle X, Y \rangle = X_1 Y_1 + X_2 Y_2 + X_3 Y_3        (13)

The associated differential operators for (8), (9) and (11) are, respectively, \partial_t - \Delta, \partial_t^2 - \Delta and i\,\partial_t + \Delta, acting on functions of the variables (t, x_1, x_2, x_3) in R^{1+3}. In the particular case of the wave equation (9) it pays to denote the variable t by x_0. The wave operator can then be written in the form

\Box = m^{\alpha\beta}\partial_\alpha\partial_\beta = -\partial_0^2 + \partial_1^2 + \partial_2^2 + \partial_3^2        (14)

where m^{\alpha\beta} is the diagonal 4x4 matrix with entries (-1, 1, 1, 1), corresponding to the Minkowski scalar product in R^{1+3}. This latter is defined, for 4-vectors X = (X_0, X_1, X_2, X_3) and Y = (Y_0, Y_1, Y_2, Y_3), by m(X, Y) = -X_0Y_0 + X_1Y_1 + X_2Y_2 + X_3Y_3. The differential operator \Box is called the D'Alembertian, after the name of the French mathematician who first introduced it in connection with the equation of a vibrating string. Observe that the operators associated to (7)-(11) are all linear, i.e. P(\lambda_1 u_1 + \lambda_2 u_2) = \lambda_1 P(u_1) + \lambda_2 P(u_2) for any functions u_1, u_2 and real numbers \lambda_1, \lambda_2. The following is another simple example of a linear differential operator,

L(u) = a_1(x)\,\partial_1 u + a_2(x)\,\partial_2 u        (15)

where x = (x_1, x_2) and a_1, a_2 are given functions of x. They are called the coefficients of the linear operator. An equation of the form L(u) = f, corresponding to a linear differential operator L and a given function f, is called linear even though, for f \neq 0, the principle of superposition of solutions does not hold. In the case of the equation (12) the differential operator P can be written, relative to the variables x_1 = x and x_2 = y, in the form P(u) = (1 + u_y^2)\,u_{xx} - 2\,u_x u_y\,u_{xy} + (1 + u_x^2)\,u_{yy}. Clearly P is not linear in this case. We call it a nonlinear operator; the corresponding equation (12) is said to be a nonlinear equation.

An important property of both linear and nonlinear differential operators is locality. This means that whenever we apply P to a function u which vanishes in some open set D, the resulting function P(u) also vanishes in D. Observe also that our equations (7)-(11) are translation invariant. This means, in the case of (7) for example, that whenever the function u = u(x) is a solution, so is the translated function u_{x_0}(x) = u(x - x_0), where x_0 is any fixed translation vector. On the other hand the equation L(u) = 0, corresponding to the operator L defined by (15), is not translation invariant unless the coefficients a_1, a_2 are constant. Clearly the set of invertible transformations T which map any solution u of an equation to another solution u \circ T forms a group, called the invariance group of the equation. The Laplace equation (7) is invariant not only with respect to translations but also rotations, i.e. linear transformations O: R^3 \to R^3 which preserve the Euclidean scalar product, \langle OX, OY \rangle = \langle X, Y \rangle for all vectors X, Y. So far we have tacitly assumed that our equations hold in the whole space (R^3 for the first, R^{1+3} for the evolution equations and R^2 for the last example). In reality one is often restricted to a domain of the corresponding space.

Partial differential equations are ubiquitous throughout Mathematics and Science. They provide the basic mathematical framework for some of the most important physical theories, such as Elasticity, Hydrodynamics, Electromagnetism, General Relativity and Non-relativistic Quantum Mechanics. The more modern relativistic quantum field theories lead, in principle, to equations in an infinite number of unknowns, which lie beyond the scope of partial differential equations. Yet, even in that case, the basic equations preserve the locality property of PDE. Moreover, the starting point of a quantum field theory is always a classical field theory, described by systems of PDEs. This is the case, for example, of the Standard Model of weak and strong interactions, based on a Yang-Mills-Higgs field theory. If we also include the ordinary differential equations of Classical Mechanics, which can be viewed as one-dimensional PDEs, we see that, essentially, all of Physics is described by differential equations. As examples of partial differential equations underlying some of our most basic physical theories we refer to the articles of the compendium in which the Maxwell, Yang-Mills, Einstein, Euler and Navier-Stokes equations are introduced.
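The rotation invariance of the Laplace equation noted above can also be verified symbolically. The sketch below is an illustration added here (the harmonic function and the rotation angle are arbitrary choices, and the check is done in the plane for simplicity): composing a harmonic function with a rotation yields another harmonic function.

    # Illustrative check: composing a harmonic function with a rotation
    # produces another solution of Laplace's equation.
    import sympy as sp

    x, y, theta = sp.symbols('x y theta', real=True)

    def u(a, b):
        return a**3 - 3*a*b**2           # harmonic: the real part of (a + i b)^3

    def laplacian(f):
        return sp.diff(f, x, 2) + sp.diff(f, y, 2)

    # Rotate the arguments by an arbitrary angle theta.
    u_rot = u(sp.cos(theta)*x - sp.sin(theta)*y,
              sp.sin(theta)*x + sp.cos(theta)*y)

    print(sp.simplify(laplacian(u(x, y))))   # 0
    print(sp.simplify(laplacian(u_rot)))     # 0, so u composed with the rotation is again a solution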

PARTIAL DIFFERENTIAL EQUATION OF FRACTIONAL ORDER

In this paper we present the group-invariant solutions of a partial differential equation of fractional order, containing among its particular cases the diffusion equation, the wave equation, and the so-called fractional diffusion-wave equation. This equation is obtained by replacing the first- or second-order time-derivative in the diffusion or wave equation, respectively, by a generalized derivative of order \alpha, defined in the sense of the Riemann-Liouville fractional calculus:

\frac{\partial^\alpha u}{\partial t^\alpha} = \frac{\partial^2 u}{\partial x^2}, \qquad \frac{\partial^\alpha u}{\partial t^\alpha}(x, t) = \frac{1}{\Gamma(n - \alpha)}\,\frac{\partial^n}{\partial t^n} \int_0^t \frac{u(x, \tau)}{(t - \tau)^{\alpha + 1 - n}}\, d\tau, \quad n - 1 < \alpha \le n        (16)

with 0 < \alpha \le 2 (so that n = 1 or n = 2). Such equations have already appeared in texts in both physics and mathematics. Mathematical aspects of the boundary-value problems for this equation, and for more general ones, as well as their applications in physics, have been treated in papers by Engler and by Saichev and Zaslavsky. In a series of papers, and the references cited therein, two problems for this equation were


considered: a) the Cauchy problem and b) the signaling problem. By using integral transforms (of Laplace, Fourier or Mellin type) the Green's functions for these problems were expressed in terms of some special functions (of Wright or Mittag-Leffler type, or the Fox H-function) with a similarity argument. We will explain this fact in our article and determine, by using the method of group analysis, that Eq. (16) is in fact invariant under a symmetry group of scaling transformations. The method of group analysis of differential equations began with the work of Sophus Lie more than a hundred years ago. Roughly speaking, a symmetry group of a system of differential equations is a group which transforms solutions of the system to other solutions. For partial differential equations one can determine special types of solutions which are invariant under some subgroup of the full symmetry group of the system. These 'group-invariant' solutions are found by solving a reduced system of equations having fewer independent variables than the original system. In recent years the ideas of the Lie group approach have been extended to difference equations and also to integro-differential equations. One can find the full symmetry groups of the diffusion and the wave equation. Here we will focus on the so-called similarity method, developed by G. D. Birkhoff in the 1930s, and consider the special case of scaling transformations. In the general case 0 < \alpha \le 2, \alpha \neq 1, 2, one cannot use the chain rule for the operation of differentiation to get a reduced equation for the scale-invariant solutions of (16), as in the case of partial differential equations of integer order. In spite of this we will transform Eq. (16) into an ordinary differential equation of fractional order with a new independent (similarity) variable. The derivative appearing in the reduced equation is then an Erdélyi-Kober derivative depending on a parameter. For \alpha = 1 and \alpha = 2 (the diffusion and the wave equations) this reduced equation coincides with the one obtained by the classical similarity method.
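To give a computable picture of the fractional time-derivative appearing in (16), the sketch below approximates it numerically through the Grünwald-Letnikov difference, which agrees with the Riemann-Liouville derivative for sufficiently well-behaved functions; the test function t^2, the order \alpha = 0.5, and the number of terms are illustrative choices only, not values taken from the paper.

    # Hedged numerical sketch: Gruenwald-Letnikov approximation of a fractional
    # derivative of order alpha, which coincides with the Riemann-Liouville
    # derivative for sufficiently smooth functions on [0, t].
    import math

    def gl_fractional_derivative(f, t, alpha, n=2000):
        """Approximate D^alpha f at time t with n Gruenwald-Letnikov terms."""
        h = t / n
        coeff, total = 1.0, 0.0
        for k in range(n + 1):
            if k > 0:
                # recurrence for the coefficients (-1)^k * binomial(alpha, k)
                coeff *= (k - 1 - alpha) / k
            total += coeff * f(t - k * h)
        return total / h**alpha

    # Check against the known formula D^alpha t^2 = Gamma(3)/Gamma(3 - alpha) * t^(2 - alpha).
    alpha, t = 0.5, 1.0
    approx = gl_fractional_derivative(lambda s: s**2, t, alpha)
    exact = math.gamma(3) / math.gamma(3 - alpha) * t**(2 - alpha)
    print(approx, exact)   # the two values agree to a few digits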

SOLUTION OF PARTIAL DIFFERENTIAL EQUATIONS BY A GLOBAL RADIAL BASIS FUNCTION BASED DQ METHOD

The differential quadrature (DQ) method was introduced by Richard Bellman and his associates in the early 1970s, following the idea of integral quadrature. The basic idea of the DQ method is that any derivative at a mesh point can be approximated by a weighted linear sum of all the functional values along a mesh line. The key procedure in the DQ method is the determination of the weighting coefficients. As shown by Shu and Richards, when the solution of a partial differential equation (PDE) is approximated by a high-order polynomial, the weighting coefficients can be computed by a simple algebraic formulation or by a recurrence relationship. Later, Shu and Chew also showed that when the solution of the PDE is approximated by a Fourier series expansion, the weighting coefficients of the first- and second-order derivatives can be computed explicitly by algebraic formulations. The details of the polynomial-based and Fourier-series-expansion-based DQ methods can be found in the book of Shu. Currently, the DQ method has been extensively applied in engineering for the rapid and accurate solution of various linear and nonlinear differential equations.

On the other hand, it is noted that the function approximation (polynomial or Fourier series expansion) in the DQ method is along a straight line. This means that numerical discretization of derivatives by the DQ method is also along a straight line. Due to this feature, the DQ method can be directly applied to regular regions, such as rectangular and circular domains. For complex geometries, the DQ method cannot be directly applied; one has to rely on the coordinate transformation technique. In this technique, the irregular domain in the physical space is first mapped to a regular domain in the computational space. Then the differential equations and their associated boundary conditions are transformed into relevant forms in the computational space, and the numerical discretization is made only in the computational space by the DQ method. Although this technique can obtain very good results for problems with complex geometry, we have to admit that the process is very complicated, and the approach is not as flexible as the finite element method. Practically, there is a demand to develop a more efficient method for solving complex problems.

It was found that the need for coordinate transformation in the DQ method arises because the function approximation, and hence the discretization of derivatives, is one-dimensional (along a straight line). It is expected that if a two-dimensional polynomial is used to approximate a function, the DQ approximation of a derivative can involve any point on the two-dimensional plane, and as a consequence, no coordinate transformation is needed. This is the idea of the differential cubature (DC) method. Unfortunately, due to the oscillatory behaviour of high-order polynomials, the DC method can only obtain a stable solution of a PDE with a limited number of mesh points. It seems that a multi-dimensional polynomial approximation used as the test function may not be a good choice in the DQ approximation. As will be shown in this paper, the radial basis functions (RBFs), which have a 'truly' mesh-free property and are insensitive to the spatial dimension, could be a good choice in the DQ approximation.
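The polynomial-based DQ method described above can be made concrete with a short sketch. The first-order weighting coefficients below follow the explicit algebraic formulation for polynomial-based DQ (as in Shu and Richards); the mesh, the test function, and the use of a matrix product for the second-order coefficients are illustrative choices made here.

    # Sketch of polynomial-based differential quadrature (DQ) on a 1-D mesh line:
    # u'(x_i) is approximated by sum_j a_ij u(x_j), with a_ij computed from the
    # explicit algebraic formulation for a polynomial of degree N-1.
    import numpy as np

    def dq_first_derivative(x):
        n = len(x)
        # M(x_i) = prod_{k != i} (x_i - x_k)
        M = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i])
                      for i in range(n)])
        A = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if i != j:
                    A[i, j] = M[i] / ((x[i] - x[j]) * M[j])
            A[i, i] = -A[i].sum()          # each row of coefficients sums to zero
        return A

    # Illustrative mesh: Chebyshev-Gauss-Lobatto points on [0, 1].
    N = 11
    x = 0.5 * (1.0 - np.cos(np.pi * np.arange(N) / (N - 1)))
    D1 = dq_first_derivative(x)
    D2 = D1 @ D1                            # second-order weighting coefficients

    u = np.sin(np.pi * x)
    print(np.max(np.abs(D1 @ u - np.pi * np.cos(np.pi * x))))      # small error
    print(np.max(np.abs(D2 @ u + np.pi**2 * np.sin(np.pi * x))))   # small error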
RBFs have been under intensive research as a technique for multivariate data and function interpolation over the past decades, especially in multi-dimensional applications. Their performance demonstrates that RBFs constitute a powerful framework for interpolating or approximating data on nonuniform grids. Furthermore, Buhmann and Micchelli showed that RBFs are attractive for pre-wavelet construction due to their exceptional rates of convergence and infinite differentiability. Since RBFs have excellent performance for function approximation, many researchers have turned to exploring their ability to solve PDEs. As the spatial dimension of the problem increases, the convergence order also increases, and hence far fewer scattered collocation points are needed to maintain the same accuracy as compared with conventional finite difference, finite element and finite volume methods. This shows the applicability of RBFs for solving high-dimensional problems. The solution process of such global collocation approaches is, however, very complicated, especially for nonlinear problems. For the nonlinear case, special techniques such as the numerical continuation and bifurcation approach proposed by Fedoseyev et al. have to be used to solve the resultant nonlinear equations. Since these techniques are very complicated, it is not easy to apply them to practical problems such as fluid dynamics, which usually require a large number of mesh points for an accurate solution.

As will be shown in this paper, the advantages of the DQ approximation and of RBFs can be combined to provide an efficient discretization method, which is a derivative approximation approach and is mesh-free. In our proposed method, the RBFs are taken as the test functions in the DQ approximation to compute the weighting coefficients. Once the weighting coefficients are computed, the solution process for a PDE by the new method is exactly the same as in the conventional DQ method and finite difference schemes. Moreover, the new method can be applied consistently to linear and nonlinear problems. Our numerical results show that the method not only retains the advantages of the conventional DQ method, such as high accuracy and efficient computation, but also owns the merits of RBFs, such as the mesh-free feature and easy extension to high dimensions. This article is the first of a series of works. We hope to present a new framework for applying the DQ method to practical problems.
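A minimal sketch of the combination just described: the x-derivative at one scattered node is written as a weighted sum of the function values at all nodes, and the weights are determined by requiring the sum to be exact for every multiquadric RBF centred at those nodes. The node set, the multiquadric form and its shape parameter c, and the test function are assumptions made for this illustration rather than values taken from the paper.

    # Hedged sketch of RBF-based DQ: weighting coefficients for the x-derivative
    # at a reference node are obtained by demanding exactness for all multiquadric
    # radial basis functions centred at the scattered nodes.
    import numpy as np

    rng = np.random.default_rng(0)
    pts = rng.random((40, 2))                # scattered nodes in the unit square
    c = 0.4                                  # multiquadric shape parameter (assumed)

    def mq(p, q):
        return np.sqrt(np.sum((p - q)**2) + c**2)

    def mq_dx(p, q):
        # derivative of the multiquadric with respect to the first coordinate of p
        return (p[0] - q[0]) / mq(p, q)

    i = 0                                    # node where the derivative is approximated
    N = len(pts)
    Phi = np.array([[mq(pts[j], pts[k]) for k in range(N)] for j in range(N)])
    d = np.array([mq_dx(pts[i], pts[k]) for k in range(N)])

    w = np.linalg.solve(Phi, d)              # RBF-DQ weighting coefficients for node i

    # Test on a smooth function: du/dx of sin(x) cos(y) at node i.
    u = np.sin(pts[:, 0]) * np.cos(pts[:, 1])
    print(w @ u, np.cos(pts[i, 0]) * np.cos(pts[i, 1]))   # the two values should be close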

SOLUTION OF A PARTIAL DIFFERENTIAL EQUATION FROM THE STRONG FORM

The variety of algorithms used to solve a partial differential equation has been both an asset and a burden. On one hand, we have an assortment of extremely sophisticated tools that allow us to solve a diverse set of problems. On the other, the heterogeneous nature of these algorithms makes it challenging to design a general modeling tool. Within the realm of finite element methods, there has been considerable progress towards this goal. Modeling tools such as deal.II, FEniCS, FreeFEM, GetDP, and Sundance allow the user to specify the weak form of a differential equation by hand. Then, given a specific kind of element, these tools either assist in or automate the construction of the linear system that arises from the discretization. In spite of their usefulness, these tools assume that their user possesses the technical expertise to find the weak form of a differential equation. Unfortunately, this can be a difficult task. Ideally, we would like a system that can transform the original strong form of the differential equation into a computable solution. This would allow a user with far less technical knowledge to solve a problem than is currently possible. While it is doubtful that such a perfect mechanism exists for all differential equations, we focus on a system that can achieve this goal for a relatively broad class of problems. Specifically, we automate a first-order system least-squares algorithm using triangular Bézier patches as our shape functions. Neither our choice of the straightforward least-squares algorithm nor our choice of Bézier patches is unique. Nonetheless, we combine these pieces in such a way that we can automate the construction and solution of any polynomial differential equation where every function can be adequately approximated by a surface composed of several Bézier patches. This includes all smooth functions as well as, in a practical sense, some discontinuous functions. We do not intend nor claim that this system will provide the best possible solution in all cases. Simply, it provides a smooth solution given relatively little analytical work by the user. In this way, we view it as a tool that allows an end user to rapidly prototype a problem and then determine whether further investigation into an alternative algorithm is necessary.
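As a generic illustration of the kind of reformulation such a system performs (a standard first-order system least-squares setup for Poisson's equation, given here as an added example rather than the authors' exact formulation), consider the strong form -\Delta u = f in \Omega with u = g on \partial\Omega. Introducing the flux variable \sigma = \nabla u turns it into the first-order system

\sigma - \nabla u = 0 \quad \text{in } \Omega, \qquad -\nabla\cdot\sigma = f \quad \text{in } \Omega, \qquad u = g \quad \text{on } \partial\Omega,

and the discrete solution is obtained by minimizing the least-squares functional

J(u, \sigma) = \|\sigma - \nabla u\|_{L^2(\Omega)}^2 + \|\nabla\cdot\sigma + f\|_{L^2(\Omega)}^2

over the chosen approximation space, for example a space of surfaces composed of triangular Bézier patches.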


We define the Bernstein polynomial of degree k over the jth simplex within the set t as B^k_{j,\lambda}(x) = \binom{k}{\lambda}\, b_j(x)^{\lambda}, where \lambda = (\lambda_0, ..., \lambda_p) is a multi-index with |\lambda| = k, \binom{k}{\lambda} = k!/(\lambda_0! \cdots \lambda_p!), and b_j(x) denotes the solution y of the (p + 1) x (p + 1) linear system

\begin{pmatrix} 1 & \cdots & 1 \\ v_0 & \cdots & v_p \end{pmatrix} y = \begin{pmatrix} 1 \\ x \end{pmatrix},

where v_0, ..., v_p denote the corners of the simplex; in other words, b_j(x) gives the barycentric coordinates of x. Based on these polynomials, we form Bézier patches by taking the sum over all possible polynomials of degree k. We form a surface by summing over all simplices.
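A minimal sketch of these ingredients (the triangle, the degree, and the control coefficients below are chosen purely for illustration): the barycentric coordinates come from the (p + 1) x (p + 1) system above, and a patch is a sum of degree-k Bernstein polynomials weighted by control coefficients. Since the Bernstein polynomials of a fixed degree form a partition of unity, a patch with all control coefficients equal to one evaluates to one inside the simplex.

    # Hedged sketch: Bernstein polynomials over a triangle (p = 2) and a Bezier
    # patch of degree k built from them; triangle, degree and control values are
    # illustrative only.
    import itertools
    import math
    import numpy as np

    def barycentric(vertices, x):
        """Solve the (p+1) x (p+1) system [[1, ..., 1], [v_0, ..., v_p]] y = [1, x]."""
        p = len(vertices) - 1
        A = np.vstack([np.ones(p + 1), np.array(vertices, dtype=float).T])
        b = np.concatenate([[1.0], np.asarray(x, dtype=float)])
        return np.linalg.solve(A, b)

    def bernstein(lam, b):
        """B_lam^k(b) = k!/(lam_0! ... lam_p!) * prod_i b_i^lam_i with |lam| = k."""
        coeff = math.factorial(sum(lam))
        for l in lam:
            coeff //= math.factorial(l)
        return coeff * np.prod([bi**li for bi, li in zip(b, lam)])

    def bezier_patch(vertices, control, x):
        """Sum of all Bernstein polynomials of degree k weighted by control coefficients."""
        b = barycentric(vertices, x)
        return sum(cv * bernstein(lam, b) for lam, cv in control.items())

    # Example: one triangle and degree k = 2 (six multi-indices with |lam| = 2).
    tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
    k = 2
    lams = [lam for lam in itertools.product(range(k + 1), repeat=3) if sum(lam) == k]
    control = {lam: 1.0 for lam in lams}

    print(bezier_patch(tri, control, (0.25, 0.3)))   # ~1.0 by the partition of unity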

REFERENCES

  • Bellman, R. E., Casti, J.: Differential quadrature and long-term integration. J. Math. Anal. Appl. 34 (1971), 235-238.
  • Bellman, R. E., Kashef, B. G., Casti, J.: Differential quadrature: a technique for the rapid solution of nonlinear partial differential equations. J. Comput. Phys. 10 (1972), 40-52.
  • Boyce, W., DiPrima, R.: Elementary Differential Equations and Boundary Value Problems. 8th ed., Wiley, New York, 2004.
  • Brezis, H., Browder, F.: Partial differential equations in the 20th century. Advances in Math. 135 (1998), 76-144.
  • Buhmann, M. D., Micchelli, C. A.: Multiquadric interpolation improved. Computers Math. Appl. 24 (1992), 21-25.
  • Dular, P., Geuzaine, C., Henrotte, F., Legros, W.: A general environment for the treatment of discrete problems and its application to the finite element method. IEEE Transactions on Magnetics 34(5) (1998), 3395-3398.
  • Engler, H.: Similarity solutions for a class of hyperbolic integrodifferential equations. Differential Integral Eqns. 10, No. 5 (1997), 815-840.
  • Evans, L. C.: Partial Differential Equations. Amer. Math. Soc., Providence, 1998.
  • Fedoseyev, A. I., Friedman, M. J., Kansa, E. J.: Continuation for nonlinear elliptic partial differential equations discretized by the multiquadric method.
  • Folland, G.: Introduction to Partial Differential Equations. 2nd ed., Princeton University Press, Princeton, NJ, 1996.
  • Ibragimov, N. H. (Ed.): CRC Handbook of Lie Group Analysis of Differential Equations, Vol. 3: New Trends in Theoretical Developments and Computational Methods. CRC Press, Boca Raton, 1996.
  • John, F.: Partial Differential Equations. 4th ed., Springer.
  • Logg, A.: Automating the finite element method. Sixth Winter School in Computational Mathematics, March 2006.
  • Saichev, A., Zaslavsky, G.: Fractional kinetic equations: solutions and applications. Chaos 7, No. 4 (1997), 753-764.
  • Shu, C.: Differential Quadrature and its Application in Engineering. Springer, London, 2000.
  • Shu, C., Chen, W., Du, H.: Free vibration analysis of curvilinear quadrilateral plates by the differential quadrature method. J. Comput. Phys. 163 (2000), 452-466.
  • Shu, C., Chew, Y. T.: Fourier expansion-based differential quadrature and its application to Helmholtz eigenvalue problems. Commun. Numer. Meth. Engng. 13 (1997), 643-653.
  • Shu, C., Richards, B. E.: Application of generalized differential quadrature to solve two-dimensional incompressible Navier-Stokes equations. Int. J. Numer. Meth. Fluids 15 (1992), 791-798.
  • Zheng, J., Sederberg, T. W., Johnson, R. W.: Least squares methods for solving differential equations using Bézier control points. Applied Numerical Mathematics 48 (2004), 237-252.