
Systems of differential equations: methods of integration. Solving systems of differential equations using the matrix method.

Matrix representation of a system of ordinary differential equations (SODE) with constant coefficients

Linear homogeneous SODE with constant coefficients $\left\{\begin{array}{l} \frac{dy_{1}}{dx} =a_{11}\cdot y_{1} +a_{12}\cdot y_{2} +\ldots +a_{1n}\cdot y_{n} \\ \frac{dy_{2}}{dx} =a_{21}\cdot y_{1} +a_{22}\cdot y_{2} +\ldots +a_{2n}\cdot y_{n} \\ \ldots \\ \frac{dy_{n}}{dx} =a_{n1}\cdot y_{1} +a_{n2}\cdot y_{2} +\ldots +a_{nn}\cdot y_{n} \end{array}\right. $,

where $y_{1}(x),\; y_{2}(x),\; \ldots,\; y_{n}(x)$ are the unknown functions of the independent variable $x$ and the coefficients $a_{jk}$, $1\le j,k\le n$, are given real numbers. We represent the system in matrix notation:

  1. the matrix of unknown functions $Y=\left(\begin{array}{c} y_{1}(x) \\ y_{2}(x) \\ \ldots \\ y_{n}(x) \end{array}\right)$;
  2. the matrix of derivatives $\frac{dY}{dx} =\left(\begin{array}{c} \frac{dy_{1}}{dx} \\ \frac{dy_{2}}{dx} \\ \ldots \\ \frac{dy_{n}}{dx} \end{array}\right)$;
  3. the SODE coefficient matrix $A=\left(\begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{array}\right)$.

Now, by the rule of matrix multiplication, this SODE can be written as the matrix equation $\frac{dY}{dx} =A\cdot Y$.

General method for solving SODE with constant coefficients

Let there be a matrix of numbers $\alpha =\left(\begin{array}{c} \alpha_{1} \\ \alpha_{2} \\ \ldots \\ \alpha_{n} \end{array}\right)$.

We seek a solution of the SODE in the form $y_{1} =\alpha_{1}\cdot e^{k\cdot x}$, $y_{2} =\alpha_{2}\cdot e^{k\cdot x}$, \dots, $y_{n} =\alpha_{n}\cdot e^{k\cdot x}$, or, in matrix form, $Y=\left(\begin{array}{c} y_{1} \\ y_{2} \\ \ldots \\ y_{n} \end{array}\right)=e^{k\cdot x}\cdot \left(\begin{array}{c} \alpha_{1} \\ \alpha_{2} \\ \ldots \\ \alpha_{n} \end{array}\right)$.

From here we get: $\frac{dY}{dx} =k\cdot e^{k\cdot x}\cdot \alpha$.

Now the matrix equation of this SODE can be given the form: $k\cdot e^{k\cdot x}\cdot \alpha =A\cdot e^{k\cdot x}\cdot \alpha$.

Cancelling the nonzero factor $e^{k\cdot x}$, the resulting equation can be represented as follows: $A\cdot \alpha =k\cdot \alpha$.

The last equality shows that the matrix $A$ transforms the vector $\alpha$ into the parallel vector $k\cdot \alpha$. This means that $\alpha$ is an eigenvector of the matrix $A$ corresponding to the eigenvalue $k$.

The number $k$ can be determined from the equation $\left|\begin{array}{cccc} a_{11}-k & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22}-k & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn}-k \end{array}\right|=0$.

This equation is called the characteristic equation.
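Numerically, the roots $k$ of the characteristic equation $\left|A-k\cdot E\right|=0$ are exactly the eigenvalues of the matrix $A$. A minimal sketch with NumPy (the matrix values here are an illustrative assumption, not taken from the text):

```python
import numpy as np

# Illustrative coefficient matrix (an assumption, not from the text)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# The roots k of det(A - k*I) = 0 are exactly the eigenvalues of A
k = np.linalg.eigvals(A)
print(np.sort(k.real))   # prints approximately [1. 2. 4.]
```

For small symbolic matrices the same roots can be obtained by expanding the determinant by hand, as is done in the worked example below.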

Let all roots $k_{1}, k_{2}, \ldots, k_{n}$ of the characteristic equation be distinct. For each value $k_{i}$, the system $\left(\begin{array}{cccc} a_{11}-k_{i} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22}-k_{i} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn}-k_{i} \end{array}\right)\cdot \left(\begin{array}{c} \alpha_{1} \\ \alpha_{2} \\ \ldots \\ \alpha_{n} \end{array}\right)=0$ determines a matrix of values $\left(\begin{array}{c} \alpha_{1}^{(i)} \\ \alpha_{2}^{(i)} \\ \ldots \\ \alpha_{n}^{(i)} \end{array}\right)$.

One of the values in this matrix is chosen arbitrarily; the rest are then determined from the system.

Finally, the solution to this system in matrix form is written as follows:

$\left(\begin{array}{c} y_{1} \\ y_{2} \\ \ldots \\ y_{n} \end{array}\right)=\left(\begin{array}{cccc} \alpha_{1}^{(1)} & \alpha_{1}^{(2)} & \ldots & \alpha_{1}^{(n)} \\ \alpha_{2}^{(1)} & \alpha_{2}^{(2)} & \ldots & \alpha_{2}^{(n)} \\ \ldots & \ldots & \ldots & \ldots \\ \alpha_{n}^{(1)} & \alpha_{n}^{(2)} & \ldots & \alpha_{n}^{(n)} \end{array}\right)\cdot \left(\begin{array}{c} C_{1}\cdot e^{k_{1}\cdot x} \\ C_{2}\cdot e^{k_{2}\cdot x} \\ \ldots \\ C_{n}\cdot e^{k_{n}\cdot x} \end{array}\right)$,

where $C_(i) $ are arbitrary constants.

Task

Solve the system of DEs $\left\{\begin{array}{l} \frac{dy_{1}}{dx} =5\cdot y_{1} +4\cdot y_{2} \\ \frac{dy_{2}}{dx} =4\cdot y_{1} +5\cdot y_{2} \end{array}\right. $.

We write the system matrix: $A=\left(\begin{array}{cc} 5 & 4 \\ 4 & 5 \end{array}\right)$.

In matrix form, this SODE is written as follows: $\left(\begin{array}{c} \frac{dy_{1}}{dx} \\ \frac{dy_{2}}{dx} \end{array}\right)=\left(\begin{array}{cc} 5 & 4 \\ 4 & 5 \end{array}\right)\cdot \left(\begin{array}{c} y_{1} \\ y_{2} \end{array}\right)$.

We obtain the characteristic equation:

$\left|\begin{array}{cc} 5-k & 4 \\ 4 & 5-k \end{array}\right|=0$, that is, $k^{2} -10\cdot k+9=0$.

The roots of the characteristic equation are $k_{1} =1$ and $k_{2} =9$.

Let's set up the system for computing $\left(\begin{array}{c} \alpha_{1}^{(1)} \\ \alpha_{2}^{(1)} \end{array}\right)$ for $k_{1} =1$:

\[\left(\begin{array}{cc} 5-k_{1} & 4 \\ 4 & 5-k_{1} \end{array}\right)\cdot \left(\begin{array}{c} \alpha_{1}^{(1)} \\ \alpha_{2}^{(1)} \end{array}\right)=0,\]

that is, $\left(5-1\right)\cdot \alpha_{1}^{(1)} +4\cdot \alpha_{2}^{(1)} =0$, $4\cdot \alpha_{1}^{(1)} +\left(5-1\right)\cdot \alpha_{2}^{(1)} =0$.

Putting $\alpha_{1}^{(1)} =1$, we obtain $\alpha_{2}^{(1)} =-1$.

Let's set up the system for computing $\left(\begin{array}{c} \alpha_{1}^{(2)} \\ \alpha_{2}^{(2)} \end{array}\right)$ for $k_{2} =9$:

\[\left(\begin{array}{cc} 5-k_{2} & 4 \\ 4 & 5-k_{2} \end{array}\right)\cdot \left(\begin{array}{c} \alpha_{1}^{(2)} \\ \alpha_{2}^{(2)} \end{array}\right)=0,\]

that is, $\left(5-9\right)\cdot \alpha_{1}^{(2)} +4\cdot \alpha_{2}^{(2)} =0$, $4\cdot \alpha_{1}^{(2)} +\left(5-9\right)\cdot \alpha_{2}^{(2)} =0$.

Putting $\alpha_{1}^{(2)} =1$, we obtain $\alpha_{2}^{(2)} =1$.

We obtain the solution to SODE in matrix form:

\[\left(\begin{array}{c} y_{1} \\ y_{2} \end{array}\right)=\left(\begin{array}{cc} 1 & 1 \\ -1 & 1 \end{array}\right)\cdot \left(\begin{array}{c} C_{1}\cdot e^{x} \\ C_{2}\cdot e^{9\cdot x} \end{array}\right).\]

In the usual form, the solution of the SODE is $\left\{\begin{array}{l} y_{1} =C_{1}\cdot e^{x} +C_{2}\cdot e^{9\cdot x} \\ y_{2} =-C_{1}\cdot e^{x} +C_{2}\cdot e^{9\cdot x} \end{array}\right. $.
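The answer can be checked directly: with arbitrary constants $C_1, C_2$, the formulas for $y_1, y_2$ must satisfy $\frac{dY}{dx}=A\cdot Y$ identically. A short NumPy sketch (the constants and the test point are arbitrary choices):

```python
import numpy as np

# Checking the answer: with arbitrary constants C1, C2 the formulas
# for y1, y2 should satisfy dY/dx = A*Y identically.
A = np.array([[5.0, 4.0],
              [4.0, 5.0]])
C1, C2 = 2.0, -3.0                     # arbitrary constants

def y(x):
    return np.array([ C1*np.exp(x) + C2*np.exp(9*x),
                     -C1*np.exp(x) + C2*np.exp(9*x)])

def dy(x):
    # derivative of the formulas above, term by term
    return np.array([ C1*np.exp(x) + 9*C2*np.exp(9*x),
                     -C1*np.exp(x) + 9*C2*np.exp(9*x)])

x = 0.37                               # any test point
print(np.allclose(dy(x), A @ y(x)))    # True: the solution checks out
```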

Basic concepts and definitions

The simplest problem of the dynamics of a point leads to a system of differential equations: the forces acting on a material point are given; it is required to find the law of motion, i.e. the functions $x = x(t)$, $y = y(t)$, $z = z(t)$ expressing the dependence of the coordinates of the moving point on time. In general the resulting system (1) resolves the second derivatives of the coordinates with respect to time in terms of the coordinates, the velocities and the time. Here $x, y, z$ are the coordinates of the moving point, $t$ is time, and $f, g, h$ are known functions of their arguments.

A system of type (1) is called canonical. More generally, a system of $m$ differential equations in $m$ unknown functions of the argument $t$ is called canonical if it is resolved with respect to the highest derivatives. A system of first-order equations resolved with respect to the derivatives of the unknown functions is called normal. By taking new auxiliary functions for the lower-order derivatives, the general canonical system (2) can be replaced by an equivalent normal system. It is therefore sufficient to consider only normal systems. For example, a single equation is a special case of a canonical system: putting the first derivative equal to a new function $y$, by virtue of the original equation we obtain a normal system of equations equivalent to the original equation.

Definition 1. A solution of the normal system (3) on an interval $(a, b)$ of the argument $t$ is any system of $n$ functions, differentiable on that interval, that turns the equations of system (3) into identities with respect to $t$ on $(a, b)$.
The Cauchy problem for system (3) is formulated as follows: find a solution (4) of the system satisfying the initial conditions at $t = t_0$.

Theorem 1 (existence and uniqueness of the solution of the Cauchy problem). Let a normal system of differential equations be given, and let the functions $f_i$ be defined in some $(n+1)$-dimensional domain $D$ of the variables $t, x_1, x_2, \ldots, x_n$. If there is a neighborhood of the initial point in which the functions $f_i$ are continuous in the set of arguments and have bounded partial derivatives with respect to the variables $x_1, x_2, \ldots, x_n$, then there is an interval of variation of $t$ around $t_0$ on which there exists a unique solution of the normal system (3) satisfying the initial conditions.

Definition 2. A system of $n$ functions depending on $t$ and on $n$ arbitrary constants is called a general solution of the normal system (3) in some region $\Pi$ of existence and uniqueness of the solution of the Cauchy problem if: 1) for any admissible values of the constants, the system of functions (6) turns equations (3) into identities; 2) in the region $\Pi$, the functions (6) solve any Cauchy problem. Solutions obtained from the general solution for specific values of the constants are called particular solutions.

For clarity, let us turn to the normal system of two equations. We treat the values $t, x_1, x_2$ as rectangular Cartesian coordinates of a point in three-dimensional space referred to the coordinate system $Otx_1x_2$. The solution of system (7) taking given values at $t = t_0$ defines in this space a line passing through the point $M_0(t_0, x_1^0, x_2^0)$. This line is called an integral curve of the normal system (7). The Cauchy problem for system (7) then receives the following geometric formulation: in the space of the variables $t, x_1, x_2$, find the integral curve passing through the given point $M_0(t_0, x_1^0, x_2^0)$ (Fig. 1). Theorem 1 establishes the existence and uniqueness of such a curve.
The normal system (7) and its solution can also be given the following interpretation: regard the independent variable $t$ as a parameter and the solution of the system as parametric equations of a curve in the plane $x_1Ox_2$. This plane of the variables $x_1, x_2$ is called the phase plane. In the phase plane, the solution of system (7) taking the initial values $x_1^0, x_2^0$ at $t = t_0$ is depicted by the curve $AB$ passing through the corresponding point. This curve is called a trajectory of the system (a phase trajectory). A trajectory of system (7) is the projection of an integral curve onto the phase plane. An integral curve determines the phase trajectory uniquely, but not conversely.

§ 2. Methods for integrating systems of differential equations

2.1. Method of elimination

One method of integration is the method of elimination. A special case of a canonical system is a single equation of the $n$-th order resolved with respect to the highest derivative. Introducing new functions for the successive derivatives, we replace this single $n$-th order equation by an equivalent normal system of $n$ equations. The converse statement also holds: generally speaking, a normal system of $n$ first-order equations is equivalent to a single equation of order $n$. This is the basis of the elimination method for integrating systems of differential equations.

It is done as follows. Let a normal system of differential equations (2) be given. Differentiate the first of equations (2) with respect to $t$ and replace the derivatives on the right-hand side using the system itself; this gives equation (3) for the second derivative of $x_1$. Equation (3) is then differentiated again with respect to $t$.
Taking system (2) into account, we obtain analogous expressions for the higher derivatives. Continuing this process, we find expressions for all derivatives of $x_1$ up to order $n$. Assume that the determinant (the Jacobian of the corresponding system of functions) is nonzero for the values under consideration. Then the system composed of the first equation of system (2) and the equations for the intermediate derivatives is solvable with respect to the unknowns $x_2, \ldots, x_n$, which are expressed through $t, x_1$ and the derivatives of $x_1$. Substituting the found expressions into the equation for the $n$-th derivative, we obtain a single equation of the $n$-th order (5). From the very method of its construction it follows that if there are solutions to system (2), then the function $x_1(t)$ is a solution of equation (5). Conversely, let $x_1(t)$ be a solution of equation (5). Differentiating this solution with respect to $t$, we compute the derivatives and substitute the found values as known functions; by assumption, the resulting system can be resolved with respect to $x_2, \ldots, x_n$ as functions of $t$. It can be shown that the system of functions constructed in this way constitutes a solution of the system of differential equations (2).

Example. It is required to integrate the system. Differentiating the first equation of the system and then using the second equation, we obtain a second-order linear differential equation with constant coefficients in one unknown function, whose general solution is known. By virtue of the first equation of the system, we then find the second function. The found functions $x(t)$, $y(t)$, as is easily verified, satisfy the given system for any values of $C_1$ and $C_2$. The functions can be represented in a form from which it is seen that the integral curves of system (6) are helical lines with a common axis $x = y = 0$, which is itself also an integral curve (Fig. 3). Eliminating the parameter in formulas (7), we obtain an equation showing that the phase trajectories of the given system are circles centered at the origin of coordinates, the projections of the helical lines onto the plane. For $A = 0$ the phase trajectory consists of a single point, called a rest point of the system.
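The elimination step can be sketched with SymPy on a small illustrative system (an assumption, not the text's example): for $x' = y$, $y' = -x$, differentiating the first equation and substituting the second gives the single second-order equation $x'' + x = 0$, and `dsolve` integrates the pair directly.

```python
import sympy as sp

t = sp.symbols('t')
x, y = sp.symbols('x y', cls=sp.Function)

# Illustrative system (not the text's example): x' = y, y' = -x.
# Elimination would give x'' + x = 0; dsolve handles the pair at once.
eqs = [sp.Eq(x(t).diff(t), y(t)),
       sp.Eq(y(t).diff(t), -x(t))]
sol = sp.dsolve(eqs)
print(sol)
```

The returned pair of equations expresses $x(t)$ and $y(t)$ through sines and cosines with two arbitrary constants, matching the circular phase trajectories described above.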
It may turn out that the functions $x_2, \ldots, x_n$ cannot be expressed through $t, x_1$ and the derivatives of $x_1$; then we will not obtain an $n$-th order equation equivalent to the original system. Here is a simple example: a system of two first-order equations in which each equation contains only its own unknown function cannot be replaced by an equivalent second-order equation for $x_1$ or $x_2$, since each of the two equations is integrated independently.

2.2. Method of integrable combinations

Integration of normal systems of differential equations is sometimes carried out by the method of integrable combinations. An integrable combination is a differential equation that is a consequence of equations (8) but is already easily integrable.

Example. Integrate the system. Adding the given equations term by term, we find one integrable combination; subtracting the second equation of the system from the first term by term, we obtain a second integrable combination. From these we find two finite equations, from which the general solution of the system is easily determined.

One integrable combination makes it possible to obtain one finite equation connecting the independent variable $t$ and the unknown functions. Such a finite equation is called a first integral of system (8). In other words: a first integral of a system of differential equations (8) is a differentiable function that is not identically constant but maintains a constant value on any integral curve of this system. If $n$ first integrals of system (8) are found and they are all independent, that is, the Jacobian of the system of functions is nonzero, then the general solution is determined from them.

§ 3. Systems of linear differential equations

A system of differential equations is called linear if it is linear with respect to the unknown functions and their derivatives entering the equations.
A system of $n$ linear equations of the first order written in normal form has the form (1), or, in matrix form, (2).

Theorem 2. If all the coefficient functions are continuous on an interval, then in a sufficiently small neighborhood of each point of the corresponding domain the conditions of the existence and uniqueness theorem for the Cauchy problem are satisfied; consequently, through each such point there passes a unique integral curve of system (1). Indeed, in this case the right-hand sides of system (1) are continuous in the set of arguments $t, x_1, x_2, \ldots, x_n$, and their partial derivatives with respect to these variables are bounded, since these derivatives are equal to the coefficients, which are continuous on the interval.

We introduce a linear operator; then system (2) is written in operator form. If the matrix $F$ of right-hand sides is zero on the interval $(a, b)$, then system (2) is called linear homogeneous. Let us present some theorems establishing the properties of solutions of linear systems.

Theorem 3. If $X(t)$ is a solution of a linear homogeneous system, then $cX(t)$, where $c$ is an arbitrary constant, is a solution of the same system.

Theorem 4. The sum of two solutions of a homogeneous linear system of equations is a solution of the same system.

Corollary. A linear combination, with arbitrary constant coefficients, of solutions of a linear homogeneous system of differential equations is a solution of the same system.

Theorem 5. If $X(t)$ is a solution of a linear inhomogeneous system and $X_0(t)$ is a solution of the corresponding homogeneous system, then the sum $X(t)+X_0(t)$ is a solution of the inhomogeneous system. Indeed, by hypothesis each satisfies its own system; using the additivity of the operator, we conclude that the sum is a solution of the inhomogeneous system of equations.

Definition. Vectors are said to be linearly dependent on an interval if there are constant numbers, not all zero, such that the corresponding linear combination vanishes identically on the interval.
If identity (5) is valid only for zero coefficients, then the vectors are said to be linearly independent on $(a, b)$. Note that the single vector identity (5) is equivalent to $n$ scalar identities. The determinant composed of the components of the vectors is called the Wronskian of the system of vectors.

Definition. Let there be a linear homogeneous system (6), where the matrix has elements $a_{ij}(t)$. A system of $n$ solutions of the linear homogeneous system (6) that are linearly independent on the interval is called fundamental.

Theorem 6. The Wronskian $W(t)$ of a system of solutions of the linear homogeneous system (6), fundamental on an interval, with coefficients $a_{ij}(t)$ continuous on the interval $(a, b)$, is nonzero at all points of the interval $(a, b)$.

Theorem 7 (on the structure of the general solution of a linear homogeneous system). The general solution, in the domain considered, of a linear homogeneous system with coefficients continuous on an interval is a linear combination, with arbitrary constant coefficients, of $n$ solutions of system (6) linearly independent on the interval.

Example. The system has, as is easy to verify, certain solutions; these solutions are linearly independent, since their Wronskian is nonzero, and the general solution of the system is their linear combination with arbitrary constants.

3.1. Fundamental matrix

A square matrix whose columns are linearly independent solutions of system (6) is called a fundamental matrix of this system. It is easy to verify that the fundamental matrix satisfies the corresponding matrix differential equation. If $X(t)$ is a fundamental matrix of system (6), then the general solution of the system can be represented in the form $X(t)C$, where $C$ is a constant column matrix with arbitrary elements. Setting $t = t_0$, we determine $C$ from the initial value; the resulting matrix $X(t)X^{-1}(t_0)$ is called the Cauchy matrix.
With its help, the solution of system (6) can be represented accordingly.

Theorem 8 (on the structure of the general solution of a linear inhomogeneous system of differential equations). The general solution, in the domain considered, of a linear inhomogeneous system of differential equations with coefficients continuous on an interval and right-hand sides $f_i(t)$ is equal to the sum of the general solution of the corresponding homogeneous system and some particular solution $X(t)$ of the inhomogeneous system (2).

3.2. Method of variation of constants

If the general solution of the linear homogeneous system (6) is known, then a particular solution of the inhomogeneous system can be found by the method of variation of constants (the Lagrange method). Let there be a general solution of the homogeneous system (6) whose constituent solutions are linearly independent. We look for a particular solution of the inhomogeneous system in the same form, but with unknown functions of $t$ in place of the constants. Differentiating and substituting into the system, we obtain for the derivatives of these functions a linear algebraic system (10) whose determinant is the Wronskian $W(t)$ of the fundamental system of solutions. This determinant is nonzero everywhere on the interval, so system (10) has a unique solution, with known continuous functions on the right-hand side. Integrating the last relations, we find the unknown functions and, substituting these values, a particular solution of system (2) (here each integral is understood as one of the antiderivatives of the corresponding function).

§ 4. Systems of linear differential equations with constant coefficients

Consider a linear system of differential equations in which all coefficients are constant. Most often such a system is integrated by reducing it to a single equation of higher order, and this equation will also be linear with constant coefficients. Another effective method for integrating systems with constant coefficients is the Laplace transform method.
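For a constant-coefficient system with distinct eigenvalues, the fundamental matrix and the Cauchy matrix described in § 3 can be sketched numerically; the matrix values below are illustrative (they reuse the worked example from the first part of the page).

```python
import numpy as np

# A fundamental matrix for constant A with distinct eigenvalues:
# the columns V[:, i] * exp(lam_i * t) are independent solutions.
A = np.array([[5.0, 4.0],
              [4.0, 5.0]])
lam, V = np.linalg.eig(A)
X = lambda t: V * np.exp(lam * t)          # fundamental matrix X(t)

# Cauchy matrix K(t) = X(t) @ inv(X(t0)) propagates any initial value
t0 = 0.0
K = lambda t: X(t) @ np.linalg.inv(X(t0))

x0 = np.array([2.0, 0.0])
print(np.allclose(K(t0) @ x0, x0))         # K(t0) is the identity
```

The Wronskian here is simply $\det X(t)$, and Theorem 6 corresponds to `np.linalg.det(X(t))` staying nonzero for all $t$.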
We shall also consider Euler's method of integrating linear homogeneous systems of differential equations with constant coefficients. It consists of the following. We look for a solution of the system in the form of exponentials with constant coefficients. Substituting this form (2) into system (1), cancelling the exponential factor and carrying all terms to one side of the equality, we obtain system (3). In order for this system (3) of linear homogeneous algebraic equations in $n$ unknowns to have a nontrivial solution, it is necessary and sufficient that its determinant be equal to zero (4). Equation (4) is called characteristic; its left-hand side is a polynomial of degree $n$ in $\lambda$. From this equation we determine the values of $\lambda$ for which system (3) has nontrivial solutions. If all the roots of the characteristic equation (4) are distinct, then substituting them in turn into system (3), we find the corresponding nontrivial solutions of that system and, consequently, $n$ solutions of the original system of differential equations (1), in a notation where the second index indicates the number of the solution and the first the number of the unknown function. The $n$ particular solutions of the linear homogeneous system (1) constructed in this way form, as can be verified, a fundamental system of solutions of this system. Consequently, the general solution of the homogeneous system of differential equations (1) is their linear combination with arbitrary constants. We shall not consider the case when the characteristic equation has multiple roots.
Example. We look for a solution in exponential form; the characteristic equation and system (3) for determining $\alpha_1, \alpha_2$ are written out, the roots are substituted, and the general solution of the system is obtained.

Let us also present the matrix method of integrating the homogeneous system (1). We write system (1) in matrix form with a constant real matrix $A$ with elements $a_{ij}$. Let us recall some concepts from linear algebra. A vector $g \ne 0$ is called an eigenvector of the matrix $A$ if $Ag = \lambda g$. The number $\lambda$ is called an eigenvalue of the matrix $A$ corresponding to the eigenvector $g$ and is a root of the characteristic equation $\det(A - \lambda I) = 0$, where $I$ is the identity matrix. We shall assume that all eigenvalues $\lambda_i$ of the matrix $A$ are distinct. In this case the eigenvectors are linearly independent, and there exists an $n \times n$ matrix $T$ reducing the matrix $A$ to diagonal form, i.e. such that the columns of $T$ are the coordinates of the eigenvectors.

Let us also introduce the following notions. Let $B(t)$ be an $n \times n$ matrix whose elements $b_{ij}(t)$ are functions of the argument $t$ defined on a set. The matrix $B(t)$ is called continuous if all its elements $b_{ij}(t)$ are continuous, and differentiable if all its elements are differentiable; in this case the derivative of the matrix $B(t)$ is the matrix whose elements are the derivatives of the corresponding elements of $B(t)$. If $B$ is a column vector, then, taking into account the rules of matrix algebra, direct verification confirms the product rule for differentiation; in particular, if $B$ is a constant matrix, its derivative is the null matrix.

Theorem 9.
If the eigenvalues of the matrix $A$ are distinct, then the general solution of system (7) has the form (10), where the columns are the eigenvectors of the matrix and the coefficients are arbitrary constant numbers.

Let us introduce a new unknown column vector by the formula $X = TY$, where $T$ is the matrix reducing $A$ to diagonal form. Substituting, multiplying both sides of the last relation on the left by $T^{-1}$ and taking into account that $T^{-1}AT$ is diagonal, we arrive at a system of $n$ independent equations, which is easily integrated: (12), where the coefficients are arbitrary constant numbers. Introducing the unit $n$-dimensional column vectors, the solution can be represented in the form (13). Since the columns of the matrix $T$ are the eigenvectors of the matrix $A$, substituting (13) into (11) we obtain formula (10). Thus, if the matrix $A$ of the system of differential equations (7) has distinct eigenvalues, then to obtain the general solution of this system: 1) find the eigenvalues of the matrix as the roots of the algebraic characteristic equation; 2) find all the eigenvectors; 3) write out the general solution of the system of differential equations (7) using formula (10).

Example 2. Solve the system by the matrix method. The matrix $A$ of the system is written out. 1) Compose the characteristic equation and find its roots. 2) Find the eigenvectors: for $\lambda = 4$ we obtain a system determining the first eigenvector; similarly, for $\lambda = 1$ we find the second. 3) Using formula (10), we obtain the general solution of the system of differential equations.

The roots of the characteristic equation can be real or complex. Since, by assumption, the coefficients $a_{ij}$ of system (7) are real, the characteristic equation has real coefficients. Therefore, along with a complex root $\lambda$, it also has the root $\lambda^*$, complex conjugate to $\lambda$.
It is easy to show that if $g$ is an eigenvector corresponding to the eigenvalue $\lambda$, then $\lambda^*$ is also an eigenvalue, to which corresponds the eigenvector $g^*$, complex conjugate to $g$. For complex $\lambda$ the solution of system (7) will also be complex. The real part and the imaginary part of this solution are themselves solutions of system (7). The eigenvalue $\lambda^*$ gives the same pair of real solutions as the eigenvalue $\lambda$. Thus, a pair $\lambda, \lambda^*$ of complex conjugate eigenvalues corresponds to a pair of real solutions of the system (7) of differential equations. If some eigenvalues are real and the rest are complex, then any real solution of system (7) has the form of a linear combination of the corresponding real solutions with arbitrary constants $c_i$.

Example 3. Solve the system. The system matrix, the characteristic equation, its roots and the eigenvectors of the matrix are written out, giving a solution of the system with arbitrary complex constants. Let us find the real solutions of the system: using Euler's formula, we obtain them, so that any real solution of the system is a combination of them with arbitrary real numbers.

Exercises. Integrate systems using the method of elimination; integrate systems using the method of integrable combinations; integrate systems using the matrix method.
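The three-step recipe of the matrix method (eigenvalues, eigenvectors, formula (10)) can be sketched in a few lines of NumPy; the matrix values are illustrative, reusing the example from the first part of the page.

```python
import numpy as np

# Three-step matrix method for distinct eigenvalues (illustrative matrix).
A = np.array([[5.0, 4.0],
              [4.0, 5.0]])

lam, T = np.linalg.eig(A)          # steps 1-2: eigenvalues, eigenvector columns

def x(t, c):
    # step 3: x(t) = sum_i c_i * g_i * exp(lam_i * t)  (formula (10))
    return T @ (np.asarray(c) * np.exp(lam * t))

# The ansatz indeed satisfies x' = A x (central finite-difference check)
c = [1.0, 2.0]
h, t0 = 1e-6, 0.3
lhs = (x(t0 + h, c) - x(t0 - h, c)) / (2 * h)
print(np.allclose(lhs, A @ x(t0, c), rtol=1e-4))   # True
```

For a matrix with complex conjugate eigenvalues the same code works, and taking real and imaginary parts of the complex solutions yields the pair of real solutions discussed above.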


1. Introduction

2. Systems of differential equations of the 1st order

3. Systems of linear differential equations of the 1st order

4. Systems of linear homogeneous differential equations with constant coefficients

5. Systems of inhomogeneous differential equations of the 1st order with constant coefficients

Laplace transform

6. Introduction

7. Properties of the Laplace transform

8. Applications of the Laplace transform

Introduction to Integral Equations

9. Introduction

10. Elements of the general theory of linear integral equations

11. The concept of iterative solution of Fredholm integral equations of the 2nd kind

12. The Volterra equation

13. Solving Volterra equations with a difference kernel using the Laplace transform


Systems of ordinary differential equations

Introduction

Systems of ordinary differential equations consist of several equations containing derivatives of unknown functions of one variable. In general, such a system has the form

where the unknowns are functions of the independent variable $t$, the right-hand sides are some given functions, and the index numbers the equations in the system. Solving such a system means finding all the functions that satisfy it.

As an example, consider Newton's equation, which describes the motion of a massive body under the influence of a force:

where the radius vector is drawn from the origin to the current position of the body. In a Cartesian coordinate system its components are functions of time, so equation (1.2) reduces to three second-order differential equations

To find these functions at each moment of time, one obviously needs to know the initial position of the body and its velocity at the initial moment of time, a total of 6 initial conditions (corresponding to a system of three second-order equations):

Equations (1.3) together with the initial conditions (1.4) form the Cauchy problem, which, as is clear from physical considerations, has a unique solution giving the specific trajectory of the body, provided the force satisfies reasonable smoothness criteria.

It is important to note that this problem can be reduced to a system of six first-order equations by introducing new functions. Keep the coordinate functions $x(t)$, $y(t)$, $z(t)$ and introduce three new functions, the velocity components, defined as follows:

$$v_x = \dot{x}, \qquad v_y = \dot{y}, \qquad v_z = \dot{z}.$$

System (1.3) can now be rewritten in the form

$$\dot{x} = v_x,\quad \dot{y} = v_y,\quad \dot{z} = v_z,\quad m\dot{v}_x = F_x,\quad m\dot{v}_y = F_y,\quad m\dot{v}_z = F_z.$$

Thus, we have arrived at a system of six first-order differential equations for the functions $x, y, z, v_x, v_y, v_z$. The initial conditions for this system have the form

$$x(0) = x_0,\quad y(0) = y_0,\quad z(0) = z_0,\quad v_x(0) = v_{x0},\quad v_y(0) = v_{y0},\quad v_z(0) = v_{z0}.$$

The first three initial conditions give the initial coordinates of the body, the last three give the projections of the initial velocity onto the coordinate axes.

Example 1.1. Reduce a system of two 2nd order differential equations

to a system of four 1st order equations.

Solution. Let us introduce the following notation:

In this case, the original system will take the form

Two more equations follow from the introduced notation:

Finally, we assemble a system of first-order differential equations equivalent to the original system of second-order equations.

These examples illustrate the general situation: any system of differential equations can be reduced to a system of first-order equations. Hence, in what follows we may confine ourselves to studying systems of first-order differential equations.
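This reduction is exactly what numerical solvers exploit. As a minimal sketch (using the oscillator $\ddot{x} = -x$ as an illustrative second-order equation, not one from the examples above), we rewrite it as the first-order pair $\dot{x} = v$, $\dot{v} = -x$ and integrate it with a classical Runge-Kutta step written from scratch:

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta (RK4) step for the vector ODE y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# x'' = -x rewritten as the first-order system x' = v, v' = -x
def oscillator(t, y):
    x, v = y
    return [v, -x]

t, h = 0.0, 0.01
state = [1.0, 0.0]                # x(0) = 1, x'(0) = 0; exact solution x = cos t
for _ in range(628):              # integrate up to t = 6.28
    state = rk4_step(oscillator, t, state, h)
    t += h

print(state)                      # close to [cos(6.28), -sin(6.28)]
```

The same two-step recipe (rewrite, then feed the first-order system to a one-step integrator) works for any of the higher-order systems discussed here.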

Systems of differential equations of the 1st order

In general, a system of $n$ first-order differential equations can be written as follows:

$$\frac{dy_i}{dt} = f_i(t, y_1, \ldots, y_n), \qquad i = 1, \ldots, n, \qquad (2.1)$$

where $y_1(t), \ldots, y_n(t)$ are the unknown functions of the independent variable $t$ and $f_i$ are given functions. The general solution of system (2.1) contains $n$ arbitrary constants, i.e. it has the form

$$y_i = y_i(t, C_1, \ldots, C_n), \qquad i = 1, \ldots, n.$$

When real problems are described by systems of differential equations, a specific solution, or particular solution, of the system is obtained from the general solution by imposing initial conditions. An initial condition is written for each function; for a system of $n$ first-order equations they look like this:

$$y_i(t_0) = y_{i0}, \qquad i = 1, \ldots, n. \qquad (2.2)$$

A solution determines, in the space of the variables $t, y_1, \ldots, y_n$, a line called an integral curve of system (2.1).

Let us formulate a theorem of existence and uniqueness of solutions for systems of differential equations.

Cauchy's theorem. The system of first-order differential equations (2.1) together with the initial conditions (2.2) has a unique solution (i.e. a unique set of constants is determined from the general solution) if the functions $f_i$ and their partial derivatives with respect to all arguments are bounded in a neighborhood of these initial conditions.

Naturally, we are talking about a solution in some domain of the variables $t, y_1, \ldots, y_n$.

A solution of a system of differential equations can be viewed as a vector function $X$ whose components are the functions $y_i(t)$, with the set of right-hand sides viewed as a vector function $F$:

$$X = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, \qquad F = \begin{pmatrix} f_1 \\ \vdots \\ f_n \end{pmatrix}.$$

Using such notation, we can briefly rewrite the original system (2.1) and the initial conditions (2.2) in the so-called vector form:

$$\frac{dX}{dt} = F(t, X), \qquad X(t_0) = X_0.$$

One method for solving a system of differential equations is to reduce it to a single equation of higher order. From equations (2.1), together with equations obtained by differentiating them, one can derive a single equation of order $n$ for any one of the unknown functions. Integrating it yields that unknown function; the remaining unknown functions are then obtained from the equations of the original system and the intermediate equations produced by differentiation.
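As a minimal sketch of this elimination method (on an illustrative system, not the one from any example below), take $\dot{x} = y$, $\dot{y} = -x$:

```latex
\dot{x} = y, \qquad \dot{y} = -x
\quad\Longrightarrow\quad
\ddot{x} = \dot{y} = -x
\quad\Longrightarrow\quad
\ddot{x} + x = 0,
```

whence $x = C_1\cos t + C_2\sin t$, and the second unknown is recovered without integration as $y = \dot{x} = -C_1\sin t + C_2\cos t$.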

Example 2.1. Solve a system of two first-order differential equations

Solution. Let's differentiate the second equation:

Let us express the derivative through the first equation

From the second equation

We have obtained a linear homogeneous differential equation of the 2nd order with constant coefficients. Its characteristic equation

from which we obtain the roots. The general solution of this differential equation will then be

We have found one of the unknown functions of the original system of equations. Using the expression obtained earlier, the second one can be found:

Let us solve the Cauchy problem under initial conditions

Let's substitute them into the general solution of the system

and find the integration constants:

Thus, the solution to the Cauchy problem will be the functions

The graphs of these functions are shown in Figure 1.

Fig. 1. Particular solution of the system of Example 2.1 on the interval

Example 2.2. Solve the system

reducing it to a single 2nd order equation.

Solution. Differentiating the first equation, we get

Using the second equation, we arrive at a second-order equation for x:

It is not difficult to obtain its solution, and then to find the second function by substituting the result into the equation obtained earlier. As a result, we have the following solution of the system:

Comment. We found the second function from the equation obtained by differentiation. At first glance it seems that the same solution could be obtained by substituting the function already known into the second equation of the original system

and integrating it. But if it is found this way, a third, extra constant appears in the solution:

However, as is easy to check, the resulting function satisfies the original system not for an arbitrary value of this constant, but only for one particular value. Thus, the second function should be determined without integration.

Let us add the squares of the two functions:

The resulting equation describes a family of concentric circles centered at the origin (see Figure 2). These parametric curves are called phase curves, and the plane in which they lie is called the phase plane.

Substituting any particular initial conditions into the general solution yields definite values of the integration constants, and hence a circle of a definite radius in the phase plane; thus each set of initial conditions corresponds to a specific phase curve. Take, for example, the initial conditions given above: substituting them into the general solution fixes the constants and gives the particular solution. As the parameter runs over its interval, we traverse the phase curve clockwise: starting from the point of the initial condition on one coordinate axis, passing in turn through the points where the curve crosses the axes, and returning to the starting point.
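The circular phase curves can be checked numerically. Assuming the system has the form $\dot{x} = y$, $\dot{y} = -x$ (an assumption consistent with clockwise circular phase curves; the actual right-hand sides of Example 2.2 are not reproduced above), the quantity $x^2 + y^2$ should stay constant along any trajectory:

```python
import math

def step(x, y, h):
    """One RK4 step for the assumed system x' = y, y' = -x."""
    def f(x, y):
        return y, -x
    k1 = f(x, y)
    k2 = f(x + h/2*k1[0], y + h/2*k1[1])
    k3 = f(x + h/2*k2[0], y + h/2*k2[1])
    k4 = f(x + h*k3[0],   y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

h = 0.01
x, y = 1.0, 0.0                   # start on the positive x axis
x, y = step(x, y, h)
first_y = y                       # y < 0 right away: the motion is clockwise
radii = [math.hypot(x, y)]
for _ in range(999):
    x, y = step(x, y, h)
    radii.append(math.hypot(x, y))

print(first_y < 0)
print(max(radii) - min(radii))    # the radius stays (numerically) constant
```

The trajectory traces a circle of radius 1 clockwise, exactly as the phase-plane picture predicts.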

Many systems of differential equations, both homogeneous and inhomogeneous, can be reduced to a single equation for one unknown function. Let us demonstrate the method with examples.

Example 3.1. Solve the system

Solution. 1) Differentiating the first equation with respect to $t$ and using the second and third equations to replace the derivatives that arise, we find

We differentiate the resulting equation with respect to $t$ once again and, together with the original equations, form a system.

From the first two equations of the system we express the other two unknown functions through the first:

Substituting the found expressions into the third equation of the system, we obtain, for the remaining function, a third-order differential equation with constant coefficients.

2) We integrate the last equation by the standard method: we compose the characteristic equation, find its roots, and construct the general solution as a linear combination of exponentials, taking into account the multiplicity of one of the roots.

3) Next, to find the two remaining functions, we differentiate the function just obtained twice. Using the relations (3.1) between the functions of the system, we recover the remaining unknowns.

Answer.

It may turn out that all unknown functions except one are eliminated from a third-order system after just a single differentiation. In this case the order of the differential equation for that function will be less than the number of unknown functions in the original system.

Example 3.2. Integrate the system

(3.2)

Solution. 1) Differentiating the first equation with respect to $t$, we find

Eliminating the other variables by means of the equations,

we obtain a second-order equation for the remaining function

(3.3)

2) From the first equation of system (3.2) we have

(3.4)

Substituting into the third equation of system (3.2) the found expressions (3.3) and (3.4), we obtain a first-order differential equation for determining the remaining function

Integrating this inhomogeneous first-order equation with constant coefficients, we find the function. Using (3.4), we then find the remaining function.
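The integration step here relies on the standard integrating-factor formula for a first-order linear equation, quoted in general form (the concrete equation of this example is not reproduced above):

```latex
y' + p(t)\,y = q(t)
\quad\Longrightarrow\quad
y(t) = e^{-\int p\,dt}\left(\int q(t)\,e^{\int p\,dt}\,dt + C\right).
```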

Answer.

Task 3.1. Solve homogeneous systems by reducing them to one differential equation.

3.1.1. 3.1.2.

3.1.3. 3.1.4.

3.1.5. 3.1.6.

3.1.7. 3.1.8.

3.1.9. 3.1.10.

3.1.11. 3.1.12.

3.1.13. 3.1.14.

3.1.15. 3.1.16.

3.1.17. 3.1.18.

3.1.19. 3.1.20.

3.1.21. 3.1.22.

3.1.23. 3.1.24.

3.1.25. 3.1.26.

3.1.27. 3.1.28.

3.1.29.
3.1.30.

3.2. Solving systems of linear homogeneous differential equations with constant coefficients by finding a fundamental system of solutions

The general solution to a system of linear homogeneous differential equations can be found as a linear combination of the fundamental solutions of the system. In the case of systems with constant coefficients, linear algebra methods can be used to find fundamental solutions.
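The linear-algebra step can be sketched from scratch for a 2×2 constant-coefficient system $X' = AX$: the eigenvalues are the roots of $\lambda^2 - \operatorname{tr}(A)\,\lambda + \det(A) = 0$, and each eigenvector comes from one row of $A - \lambda E$. The matrix below is an illustrative choice, not the matrix of Example 3.3:

```python
import math

# Illustrative coefficient matrix for X' = A X (not the matrix of Example 3.3)
A = [[1.0, 2.0],
     [2.0, 1.0]]

# Eigenvalues of a 2x2 matrix are the roots of
#   lambda^2 - tr(A)*lambda + det(A) = 0.
tr   = A[0][0] + A[1][1]
det  = A[0][0]*A[1][1] - A[0][1]*A[1][0]
disc = tr*tr - 4*det              # assumed positive here: two distinct real roots
lam1 = (tr + math.sqrt(disc)) / 2
lam2 = (tr - math.sqrt(disc)) / 2

def eigenvector(lam):
    """A nontrivial solution of (A - lam*E)v = 0 for the 2x2 matrix A."""
    a, b = A[0][0] - lam, A[0][1]
    # The first row reads a*v1 + b*v2 = 0, so (b, -a) is a solution.
    return (b, -a) if (a, b) != (0.0, 0.0) else (1.0, 0.0)

v1, v2 = eigenvector(lam1), eigenvector(lam2)
print(lam1, lam2)
print(v1, v2)
# General solution: X(t) = C1*exp(lam1*t)*v1 + C2*exp(lam2*t)*v2
```

For this matrix the eigenvalues come out as 3 and -1, and the last comment line shows how the two fundamental solutions combine into the general solution.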

Example 3.3. Solve the system

(3.5)

Solution. 1) We rewrite the system in matrix form:

(3.6)

2) We look for a fundamental solution of the system in the form of a vector of exponentials, $Y = \gamma e^{\lambda x}$. Substituting these functions into (3.6) and cancelling the common factor $e^{\lambda x}$, we get

$(A - \lambda E)\,\gamma = 0$, (3.7)

that is, the number $\lambda$ must be an eigenvalue of the matrix $A$, and the vector $\gamma$ a corresponding eigenvector.

3) From the course of linear algebra it is known that system (3.7) has a non-trivial solution if its determinant is equal to zero:

$\det(A - \lambda E) = 0$.

From this equation we find the eigenvalues.

4) We find the corresponding eigenvectors. Substituting the first eigenvalue into (3.7), we obtain a system for the first eigenvector. It gives a relation between the components of the vector, and it is enough to choose any one non-trivial solution: setting one component, we obtain the eigenvector of the first eigenvalue, and the corresponding vector of functions is a fundamental solution of the given system of differential equations (3.5). Similarly, substituting the second root into (3.7), we obtain a matrix equation for the second eigenvector and the relation between its components, which gives the second fundamental solution.

5) The general solution of system (3.5) is constructed as a linear combination of the two fundamental solutions obtained,

$Y = C_1 Y_1 + C_2 Y_2$,

or in coordinate form.

Answer.

Task 3.2. Solve systems by finding the fundamental system of solutions.

It's a sultry time outside, poplar fluff is flying, and the weather is conducive to relaxation. Everyone has built up fatigue over the school year, but the anticipation of summer vacation should inspire you to pass your exams and tests successfully. By the way, teachers wilt in this season too, so soon I will also take a time-out to unload my brain. For now there's coffee, the rhythmic hum of the system unit, a few dead mosquitoes on the windowsill and a perfectly working mood... ...oh, damn it... I'm turning into a poet.

To the point. Incidentally, today is June 1st for me, and we will look at another typical problem of complex analysis: finding a particular solution to a system of differential equations by the method of operational calculus. What do you need to know and be able to do in order to learn to solve it? First of all, I highly recommend referring to the lesson How to solve a DE using the operational method. Read the introductory part, understand the general statement of the topic, the terminology and notation, and work through at least two or three examples. The fact is that with DE systems everything will be almost the same, and even simpler!

Of course, you must also understand what a system of differential equations is, and what it means to find a general solution and a particular solution of such a system.

Let me remind you that a system of differential equations can also be solved in the "traditional" way: by elimination or with the help of the characteristic equation. The method of operational calculus discussed here is applicable to a DE system when the problem is formulated as follows:

Find the particular solution of a homogeneous system of differential equations corresponding to the given initial conditions.

Alternatively, the system can be inhomogeneous, with "add-ons" in the form of functions on the right-hand sides:

But in both cases you need to pay attention to two fundamental points of the condition:

1) It is a matter of a particular solution only.
2) The parentheses of the initial conditions contain strictly zeros, and nothing else.

The general course and algorithm will be very similar to solving a differential equation using the operational method. From the reference materials you will need the same table of originals and images.
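The table entries can always be spot-checked numerically from the definition $F(s) = \int_0^\infty f(t)\,e^{-st}\,dt$. The sketch below verifies two standard pairs, $1 \mapsto 1/s$ and $e^{-t} \mapsto 1/(s+1)$, with a plain trapezoidal rule truncated at a large upper limit (the function names and tolerances are my own choices, not part of any standard table):

```python
import math

def laplace_numeric(f, s, upper=40.0, n=200_000):
    """Trapezoidal approximation of F(s) = integral_0^upper f(t)*exp(-s*t) dt."""
    h = upper / n
    total = 0.5 * (f(0.0) + f(upper) * math.exp(-s * upper))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 2.0
F_one = laplace_numeric(lambda t: 1.0, s)              # table: 1 -> 1/s
F_exp = laplace_numeric(lambda t: math.exp(-t), s)     # table: e^{-t} -> 1/(s+1)
print(F_one)   # ~ 1/2
print(F_exp)   # ~ 1/3
```

This is only a sanity check, of course; in the problems below the table is used symbolically.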

Example 1


, ,

Solution: The beginning is trivial: using the table of Laplace transforms, we pass from the originals to the corresponding images. In a problem with DE systems this transition is usually simple:

Using tabular formulas No. 1, 2, taking into account the initial condition, we obtain:

What to do with the y's? Mentally change the x's in the table to y's. Using the same transforms Nos. 1 and 2, taking into account the initial condition, we find:

Let us substitute the found images into the original equations:

Now, on the left-hand side of each equation, we need to collect all the terms containing the unknown images; the remaining terms are moved to the right-hand sides:

Next, on the left side of each equation we factor out the images,

placing the terms with the first image in the first positions and those with the second image in the second:

The resulting system of two equations in two unknowns is usually solved by Cramer's formulas. Let us calculate the main determinant of the system:

Calculating the determinant yields a polynomial.

Important technique! It is better to try to factor this polynomial at once. For this purpose one could solve the corresponding quadratic equation, but many readers with a trained second-year eye will spot the factorization directly.

Thus, our main determinant of the system is:
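As a side check, this determinant-and-factoring step can be reproduced with bare-bones polynomial arithmetic in the variable s (coefficient lists, lowest degree first). The 2×2 operator matrix below is an illustrative one I made up, not the matrix of Example 1:

```python
import math

def poly_mul(p, q):
    """Multiply polynomials stored as coefficient lists, lowest degree first."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_sub(p, q):
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

# Illustrative 2x2 operator matrix (NOT the one from this example):
#   [  s      -1  ]          [0, 1] encodes 0 + 1*s, etc.
#   [  2     s-3  ]
m11, m12 = [0.0, 1.0], [-1.0]
m21, m22 = [2.0], [-3.0, 1.0]

# Main determinant: m11*m22 - m12*m21, a polynomial in s
delta = poly_sub(poly_mul(m11, m22), poly_mul(m12, m21))
print(delta)                      # s^2 - 3s + 2, i.e. [2, -3, 1]

# Factor the quadratic c + b*s + a*s^2 via the discriminant
c, b, a = delta
disc = b*b - 4*a*c
roots = sorted([(-b - math.sqrt(disc)) / (2*a), (-b + math.sqrt(disc)) / (2*a)])
print(roots)                      # [1.0, 2.0]: delta = (s - 1)(s - 2)
```

The roots of the determinant are precisely the poles that later show up in the partial-fraction decomposition.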

The further unravelling of the system, thanks to Cramer, is standard:

As a result we get operator solution of the system:

The advantage of this kind of problem is that the fractions usually turn out to be simple, and dealing with them is much easier than with the fractions in problems of finding a particular solution of a DE by the operational method. Your premonition did not deceive you: next comes the good old method of undetermined coefficients, with whose help we decompose each fraction into elementary fractions:
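For simple (non-repeated) real poles, the undetermined-coefficients step collapses to the cover-up rule: the coefficient of $1/(s-p)$ equals $N(p)$ divided by the product of $(p-q)$ over the other poles $q$. A sketch with exact rational arithmetic follows; the fraction decomposed is an illustrative one, not taken from Example 1:

```python
from fractions import Fraction

def cover_up(numer, poles):
    """Decompose N(s)/prod(s - p) into a sum A_p/(s - p), distinct poles only.

    numer: coefficients of N(s), lowest degree first, with deg N < len(poles).
    Returns a dict {pole: coefficient}.
    """
    def N(s):
        return sum(c * s**k for k, c in enumerate(numer))
    coeffs = {}
    for p in poles:
        denom = Fraction(1)
        for q in poles:
            if q != p:
                denom *= (p - q)
        coeffs[p] = N(p) / denom
    return coeffs

# Illustrative fraction (not from Example 1): (s + 3) / ((s + 1)(s + 2))
poles = [Fraction(-1), Fraction(-2)]
coeffs = cover_up([Fraction(3), Fraction(1)], poles)
print(coeffs)
# The decomposition is 2/(s + 1) - 1/(s + 2); the inverse transform
# would then give the original 2*e^{-t} - e^{-2t}.
```

Using `fractions.Fraction` instead of floats keeps the coefficients exact, which matters when you still have to read the originals off the table afterwards.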

1) Let's deal with the first fraction:

Thus:

2) We decompose the second fraction by a similar scheme, but it is more correct to use different letters for the undetermined coefficients:

Thus:


I advise beginners to write down the decomposed operator solution in the following form:

this will make the final stage, the inverse Laplace transform, clearer.

Using the right column of the table, let's move from the images to the corresponding originals:


According to the rules of good mathematical manners, we will tidy up the result a little:

Answer:

The answer is checked according to the standard scheme discussed in detail in the lesson How to solve a system of differential equations. Always try to carry out the check in order to add a big plus to your task.

Example 2

Using operational calculus, find a particular solution to a system of differential equations that corresponds to the given initial conditions.
, ,

This is an example for you to solve on your own. An approximate sample of the final version of the solution and the answer are at the end of the lesson.

Solving an inhomogeneous system of differential equations is algorithmically no different, except that technically it will be a little more complicated:

Example 3

Using operational calculus, find a particular solution to a system of differential equations that corresponds to the given initial conditions.
, ,

Solution: Using the Laplace transform table, taking into account the initial conditions , let's move from the originals to the corresponding images:

But that's not all: there are lonely constants on the right-hand sides of the equations. What should you do when a constant stands completely on its own? This was already discussed in the lesson How to solve a DE using the operational method. Let us repeat: single constants should be mentally multiplied by one, and the standard transform $1 \mapsto 1/s$ applied to the units:

Let's substitute the found images into the original system:

Let us move the terms containing the unknown images to the left-hand sides and place the remaining terms on the right-hand sides:

On the left-hand sides we factor out the images; in addition, we bring the right-hand side of the second equation to a common denominator:

Let us calculate the main determinant of the system, not forgetting that it is advisable to try to factor the result at once:

Since it is non-zero, the system has a unique solution.

Let's move on:



Thus, the operator solution of the system is:

Sometimes one or even both fractions can be reduced, occasionally so successfully that there is nothing left to expand at all! And in some cases you get a freebie right away; the next example in the lesson, by the way, will be an illustrative one.

Using the method of indefinite coefficients we obtain the sums of elementary fractions.

Let's break down the first fraction:

And then the second:

As a result, the operator solution takes the form we need:

Using the right-hand column of the table of originals and images, we carry out the inverse Laplace transform:

Substituting the resulting originals, we write down the solution of the system:

Answer: the particular solution is:

As you can see, an inhomogeneous system requires more labor-intensive calculations than a homogeneous one. Let us consider a couple more examples with sines and cosines, and that will be enough, since almost all types of the problem and most of the nuances of the solution will have been covered.

Example 4

Using the operational calculus method, find a particular solution to a system of differential equations with given initial conditions,

Solution: I will work through this example myself as well, but the comments will concern only special moments. I assume you are already well versed in the solution algorithm.

Let's move on from the originals to the corresponding images:

Let us substitute the found images into the original DE system:

Let us solve the system using Cramer's formulas:

Since the determinant is non-zero, the system has a unique solution.

The resulting polynomial cannot be factored. What should you do in such cases? Absolutely nothing: it will do as it is.

As a result, the operator solution of the system is:

Here's a lucky ticket! There is no need for the method of undetermined coefficients at all! The only thing is that, in order to apply the table transforms, we rewrite the solution in the following form:

Let's move on from the images to the corresponding originals:

Substituting the resulting originals, we write down the solution of the system: