
An example of a linearly dependent system of vectors. Linear dependence and linear independence of vectors

Let L be a linear space over the field R, and let a₁, a₂, …, aₙ (*) be a finite system of vectors from L. A vector b = α₁a₁ + α₂a₂ + … + αₙaₙ (16) is called a linear combination of the vectors (*); we also say that the vector b is linearly expressed through the system of vectors (*).

Definition 14. The system of vectors (*) is called linearly dependent if there exists a set of coefficients α₁, α₂, …, αₙ, not all zero, such that α₁a₁ + α₂a₂ + … + αₙaₙ = 0. If α₁a₁ + α₂a₂ + … + αₙaₙ = 0 ⇔ α₁ = α₂ = … = αₙ = 0, then the system (*) is called linearly independent.
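For concrete numeric vectors, such a set of coefficients (if it exists) can be found by computing the null space of the matrix whose columns are the vectors. A minimal sketch with sympy; the vectors here are made-up illustrations, not from the text:

```python
from sympy import Matrix

# Columns are the vectors a1 = (1, 2, 3), a2 = (2, 4, 6), a3 = (0, 1, 1).
# The system is linearly dependent iff x1*a1 + x2*a2 + x3*a3 = 0 has a
# solution with some x_i != 0, i.e. iff the null space is nontrivial.
A = Matrix([[1, 2, 0],
            [2, 4, 1],
            [3, 6, 1]])

null = A.nullspace()
if null:
    print("linearly dependent, e.g. coefficients:", list(null[0]))  # [-2, 1, 0]
else:
    print("linearly independent")
```

Here a₂ = 2a₁, and the code indeed reports the nontrivial combination (−2)·a₁ + 1·a₂ + 0·a₃ = 0.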

Properties of linear dependence and independence.

1°. If a system of vectors contains a zero vector, then it is linearly dependent.

Indeed, if in the system (*) the vector a₁ = 0, then 1·0 + 0·a₂ + … + 0·aₙ = 0 is a nontrivial linear combination equal to the zero vector.

2°. If a system of vectors contains two proportional vectors, then it is linearly dependent.

Indeed, let a₁ = λa₂. Then 1·a₁ − λ·a₂ + 0·a₃ + … + 0·aₙ = 0.

3°. A finite system of vectors (*) with n ≥ 2 is linearly dependent if and only if at least one of its vectors is a linear combination of the other vectors of this system.

⇒ Let (*) be linearly dependent. Then there is a set of coefficients α₁, α₂, …, αₙ, not all zero, such that α₁a₁ + α₂a₂ + … + αₙaₙ = 0. Without loss of generality we may assume that α₁ ≠ 0. Then a₁ = (−α₂/α₁)a₂ + … + (−αₙ/α₁)aₙ. So the vector a₁ is a linear combination of the remaining vectors.

⇐ Let one of the vectors of (*) be a linear combination of the others. We may assume it is the first vector, i.e. a₁ = β₂a₂ + … + βₙaₙ. Hence (−1)·a₁ + β₂a₂ + … + βₙaₙ = 0, i.e. (*) is linearly dependent.

Comment. Using the last property, one can define the linear dependence and independence of an infinite system of vectors.

Definition 15. A system of vectors a₁, a₂, …, aₙ, … (**) is called linearly dependent if at least one of its vectors is a linear combination of some finite number of the other vectors. Otherwise the system (**) is called linearly independent.

4°. A finite system of vectors is linearly independent if and only if none of its vectors can be linearly expressed through the other vectors of the system.

5°. If a system of vectors is linearly independent, then any of its subsystems is also linearly independent.

6°. If some subsystem of a given system of vectors is linearly dependent, then the whole system is also linearly dependent.

Let two systems of vectors be given: a₁, a₂, …, aₙ, … (16) and b₁, b₂, …, bₛ, … (17). If each vector of system (16) can be represented as a linear combination of a finite number of vectors of system (17), then we say that system (16) is linearly expressed through system (17).

Definition 16. The two systems of vectors are called equivalent , if each of them is linearly expressed in terms of the other.

Theorem 9 (basic theorem on linear dependence).

Let a₁, a₂, …, aₙ and b₁, b₂, …, bₛ be two finite systems of vectors from L. If the first system is linearly independent and is linearly expressed through the second, then n ≤ s.

Proof. Suppose, to the contrary, that n > s. Consider the equality

x₁a₁ + x₂a₂ + … + xₙaₙ = 0. (18)

Since the system a₁, …, aₙ is linearly independent, equality (18) holds only when x₁ = x₂ = … = xₙ = 0. By hypothesis each vector aᵢ is linearly expressed through b₁, …, bₛ: aᵢ = αᵢ₁b₁ + αᵢ₂b₂ + … + αᵢₛbₛ. Substituting these expressions into (18) gives

x₁(α₁₁b₁ + … + α₁ₛbₛ) + … + xₙ(αₙ₁b₁ + … + αₙₛbₛ) = 0, (19)

and, after collecting the coefficient of each bⱼ,

(α₁₁x₁ + … + αₙ₁xₙ)b₁ + … + (α₁ₛx₁ + … + αₙₛxₙ)bₛ = 0. (20)

Conditions (18), (19), and (20) are obviously equivalent. But (18) is satisfied only when x₁ = x₂ = … = xₙ = 0. Let us find when equality (20) is true. If all its coefficients are equal to zero, it is obviously true. Equating them to zero, we obtain the homogeneous system

α₁₁x₁ + … + αₙ₁xₙ = 0, …, α₁ₛx₁ + … + αₙₛxₙ = 0. (21)

This system has the zero solution, so it is consistent. Since the number of equations s is less than the number of unknowns n, the system has infinitely many solutions; therefore it has a non-zero solution x₁⁰, x₂⁰, …, xₙ⁰. For these values equality (18) holds with coefficients not all zero, which contradicts the linear independence of the system a₁, …, aₙ. So our assumption is wrong, and consequently n ≤ s.
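The pivotal step of the proof, that a homogeneous system with more unknowns than equations always has a nonzero solution, is easy to check numerically. A sketch with sympy, using an arbitrary made-up 2×3 coefficient matrix:

```python
from sympy import Matrix

# s = 2 equations in n = 3 unknowns (n > s), as in the proof of Theorem 9.
# Such a homogeneous system is guaranteed to have a nonzero solution.
B = Matrix([[1, 2, 3],
            [4, 5, 6]])

solutions = B.nullspace()          # basis of the solution space
assert solutions, "n > s guarantees a nontrivial solution"
print("nonzero solution:", list(solutions[0]))   # [1, -2, 1]
```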

Corollary. If two equivalent systems of vectors are finite and linearly independent, then they contain the same number of vectors.

Definition 17. A system of vectors is called a maximal linearly independent system of the linear space L if it is linearly independent but becomes linearly dependent once any vector of L not already in the system is added to it.

Theorem 10. Any two finite maximal linearly independent systems of vectors from L contain the same number of vectors.

The proof follows from the fact that any two maximal linearly independent systems of vectors are equivalent.

It is easy to prove that any linearly independent system of vectors of the space L can be extended to a maximal linearly independent system of vectors of this space.

Examples:

1. In the set of all collinear geometric vectors, any system consisting of one nonzero vector is maximal linearly independent.

2. In the set of all coplanar geometric vectors, any two noncollinear vectors constitute a maximal linearly independent system.

3. In the set of all possible geometric vectors of three-dimensional Euclidean space, any system of three non-coplanar vectors is maximal linearly independent.

4. In the set of all polynomials of degree at most n with real (complex) coefficients, the system of polynomials 1, x, x², …, xⁿ is maximal linearly independent.

5. In the set of all polynomials with real (complex) coefficients, examples of a maximal linearly independent system are

a) 1, x, x², …, xⁿ, …;

b) 1, (1 − x), (1 − x)², …, (1 − x)ⁿ, …

6. The set of matrices of size m × n is a linear space (check this). An example of a maximal linearly independent system in this space is the system of matrix units E₁₁, E₁₂, …, E_mn, where E_ij is the matrix with 1 in position (i, j) and zeros elsewhere.

Let a system of vectors c₁, c₂, …, c_m (*) be given. A subsystem of vectors from (*) is called a maximal linearly independent subsystem of the system (*) if it is linearly independent but becomes linearly dependent when any other vector of this system is added to it. If the system (*) is finite, then all of its maximal linearly independent subsystems contain the same number of vectors. (Prove this yourself.) The number of vectors in a maximal linearly independent subsystem of the system (*) is called the rank of this system. Obviously, equivalent systems of vectors have equal ranks.
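The rank of a concrete numeric system, together with one maximal linearly independent subsystem, can be read off from the reduced row echelon form. A sketch with sympy; the vectors are illustrative, not from the text:

```python
from sympy import Matrix

# Vectors of the system written as the columns of a matrix.
C = Matrix([[1, 2, 3, 1],
            [0, 1, 1, 1],
            [1, 3, 4, 2]])

rref, pivot_cols = C.rref()
print("rank of the system:", len(pivot_cols))          # 2
# The pivot columns index a maximal linearly independent subsystem.
print("maximal independent subsystem: vectors", [i + 1 for i in pivot_cols])  # [1, 2]
```

In this example the third and fourth columns are c₁ + c₂ and c₂ − c₁, so the rank is 2 and the first two vectors form a maximal linearly independent subsystem.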

Linear dependence and independence of vectors

Definitions of linearly dependent and independent systems of vectors

Definition 22

Suppose we have a system of n vectors a₁, a₂, …, aₙ and a set of numbers λ₁, λ₂, …, λₙ. Then the vector

λ₁a₁ + λ₂a₂ + … + λₙaₙ (11)

is called a linear combination of the given system of vectors with the given set of coefficients.

Definition 23

A system of vectors a₁, a₂, …, aₙ is called linearly dependent if there is a set of coefficients λ₁, λ₂, …, λₙ, at least one of which is not equal to zero, such that the linear combination of the given system with this set of coefficients equals the zero vector:

λ₁a₁ + λ₂a₂ + … + λₙaₙ = 0. (12)

Definition 24 ( through the representation of one vector of the system as a linear combination of the others)

A system of vectors a₁, a₂, …, aₙ is called linearly dependent if at least one of the vectors of this system can be represented as a linear combination of the other vectors of this system.

Statement 3

Definitions 23 and 24 are equivalent.

Definition 25 (via the zero linear combination)

A system of vectors a₁, a₂, …, aₙ is called linearly independent if a zero linear combination of this system is possible only when all the coefficients λ₁, λ₂, …, λₙ are equal to zero.

Definition 26 (via the impossibility of representing one vector of the system as a linear combination of the rest)

A system of vectors a₁, a₂, …, aₙ is called linearly independent if none of the vectors of this system can be represented as a linear combination of the other vectors of this system.

Properties of linearly dependent and independent systems of vectors

Theorem 2 (zero vector in the system of vectors)

If there is a zero vector in the system of vectors, then the system is linearly dependent.

Proof. Let a₁ = 0. Then for the set of coefficients 1, 0, …, 0 we get

1·a₁ + 0·a₂ + … + 0·aₙ = 0,

a zero linear combination with a nonzero coefficient. Therefore, by the definition of a linearly dependent system of vectors through a zero linear combination (12), the system is linearly dependent. ∎

Theorem 3 (dependent subsystem in the system of vectors)

If a system of vectors has a linearly dependent subsystem, then the entire system is linearly dependent.

Proof. Let a₁, …, a_k be a linearly dependent subsystem of a₁, …, aₙ. Then there is a set of coefficients λ₁, …, λ_k, among which at least one is not equal to zero, such that λ₁a₁ + … + λ_k·a_k = 0. Appending the remaining vectors with zero coefficients gives

λ₁a₁ + … + λ_k·a_k + 0·a_{k+1} + … + 0·aₙ = 0.

Hence, by Definition 23, the system is linearly dependent. ∎

Theorem 4

Any subsystem of a linearly independent system is linearly independent.

Proof, by contradiction. Let the system be linearly independent and suppose it has a linearly dependent subsystem. Then, by Theorem 3, the entire system would also be linearly dependent, a contradiction. Therefore a subsystem of a linearly independent system cannot be linearly dependent. ∎

Geometric meaning of linear dependence and independence of a system of vectors

Theorem 5

Two vectors a and b are linearly dependent if and only if they are collinear: a ∥ b.

Necessity. Let a and b be linearly dependent: λ₁a + λ₂b = 0 with, say, λ₂ ≠ 0. Then b = −(λ₁/λ₂)a, i.e. a ∥ b.

Sufficiency. Let a ∥ b. Then b = λa for some number λ, so λa + (−1)·b = 0 is a nontrivial zero linear combination, and a and b are linearly dependent. ∎

Corollary 5.1

The zero vector is collinear to any vector.

Corollary 5.2

For two vectors a and b to be linearly independent, it is necessary and sufficient that they be non-collinear.
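Numerically, the collinearity criterion of Theorem 5 can be tested with a cross product: two 3-D vectors are linearly dependent exactly when their cross product is the zero vector. A small numpy sketch with made-up data:

```python
import numpy as np

def are_collinear(a, b, eps=1e-12):
    # a ∥ b  <=>  a × b = 0  <=>  {a, b} is linearly dependent
    return np.linalg.norm(np.cross(a, b)) < eps

print(are_collinear([1, 2, 3], [2, 4, 6]))   # True: b = 2a
print(are_collinear([1, 2, 3], [1, 0, 0]))   # False
```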

Theorem 6

For a system of three vectors to be linearly dependent, it is necessary and sufficient that these vectors be coplanar.

Necessity. Let a, b, c be linearly dependent; then one vector, say c, can be represented as a linear combination of the other two:

c = λa + μb, (13)

where λ and μ are numbers. By the parallelogram rule, c is the diagonal of the parallelogram with sides λa and μb. A parallelogram is a plane figure, so a, b, and c are coplanar.

Sufficiency. Let a, b, c be coplanar. Apply the three vectors at a common point O. (The original figure of the parallelogram construction is omitted here.) If a and b are collinear, they are already linearly dependent and so is the whole triple. Otherwise, decompose c along the directions of a and b by the parallelogram rule: c = λa + μb. Hence c is a linear combination of a and b, and the system is linearly dependent. ∎

Corollary 6.1

The zero vector is coplanar to any pair of vectors.

Corollary 6.2

For the vectors a, b, c to be linearly independent, it is necessary and sufficient that they be non-coplanar.

Corollary 6.3

Any plane vector can be represented as a linear combination of any two non-collinear vectors of the same plane.
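Corollary 6.3 in computational form: the coefficients of the decomposition c = αa + βb are found from a 2×2 linear system. A sketch with made-up plane vectors:

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 1.0])          # a and b are non-collinear
c = np.array([5.0, 5.0])          # arbitrary vector of the same plane

# Solve [a b] @ (alpha, beta) = c for the decomposition coefficients.
alpha, beta = np.linalg.solve(np.column_stack([a, b]), c)
print(alpha, beta)                # 2.0 1.0, i.e. c = 2a + b
```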

Theorem 7

Any four vectors in space are linearly dependent.

Proof. If some three of the four vectors a, b, c, d are coplanar (in particular, if two of them are collinear or one of them is zero), then this triple is linearly dependent and, by Theorem 3, so is the whole system of four vectors. It remains to consider the main case: a, b, c non-coplanar.

Apply all four vectors at a common point O and draw planes through the pairs of vectors (a, b), (b, c), (a, c). Then draw the planes passing through the end point D of the vector d parallel to these pairs of vectors, respectively. The six planes cut out a parallelepiped with vertex O whose edges lie on the lines of a, b, c; denote these edges OA₁, OB₁, OC₁, so that OD = d is the diagonal.

Since OA₁, OB₁, OC₁ are collinear with a, b, c respectively, by Theorem 1 there exist numbers x, y, z such that OA₁ = x·a, OB₁ = y·b, OC₁ = z·c. Applying the parallelogram rule twice (first in the base face, then in the diagonal section of the parallelepiped) gives

d = OA₁ + OB₁ + OC₁ = x·a + y·b + z·c.

Thus d is a linear combination of a, b, c, and by Definition 24 the system of four vectors is linearly dependent. ∎

Corollary 7.1

The sum of three non-coplanar vectors in space is the vector coinciding with the diagonal of the parallelepiped built on these three vectors applied to a common origin; the beginning of the sum vector coincides with the common origin of the three vectors.

Corollary 7.2

If three non-coplanar vectors are taken in space, then any vector of this space can be decomposed into a linear combination of these three vectors.
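Corollary 7.2 amounts to solving a 3×3 linear system for the decomposition coefficients. A numpy sketch with illustrative values:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])
c = np.array([1.0, 1.0, 1.0])     # non-coplanar triple: det != 0
d = np.array([2.0, 3.0, 4.0])     # arbitrary vector of the space

M = np.column_stack([a, b, c])
x, y, z = np.linalg.solve(M, d)   # d = x*a + y*b + z*c
print(x, y, z)                    # -1.0 -1.0 4.0
```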

Vectors, their properties and actions with them

Vectors, actions with vectors, linear vector space.

A vector is an ordered collection of a finite number of real numbers.

Operations: 1. Multiplying a vector by a number: λ·x = (λx₁, λx₂, …, λxₙ). For example, 3·(3, 4, 0.07) = (9, 12, 0.21).

2. Addition of vectors (belonging to the same vector space): x + y = (x₁ + y₁, x₂ + y₂, …, xₙ + yₙ).

3. The zero vector: 0 = (0, 0, …, 0) ∈ Eⁿ, the n-dimensional linear space; x + 0 = x for any vector x.
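These three operations have a direct one-line rendering in code; a minimal numpy sketch using the numbers from the example above:

```python
import numpy as np

x = np.array([3.0, 4.0, 0.07])
y = np.array([1.0, 1.0, 1.0])

print(3 * x)        # scalar multiple: [9.   12.    0.21]
print(x + y)        # componentwise sum: [4.   5.    1.07]
zero = np.zeros(3)  # the zero vector of E^3
print(np.array_equal(x + zero, x))   # x + 0 = x -> True
```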

Theorem. In order for a system of n vectors in an n-dimensional linear space to be linearly dependent, it is necessary and sufficient that one of the vectors be a linear combination of the others.

Theorem. Any set of n + 1 vectors of an n-dimensional linear space is linearly dependent.

Addition of vectors, multiplication of vectors by numbers. Subtraction of vectors.

The sum of two vectors a and b is the vector directed from the beginning of a to the end of b, provided that the beginning of b coincides with the end of a (the triangle rule). If the vectors are given by their expansions in basis vectors, then adding the vectors adds their respective coordinates.

Let us check this in a Cartesian coordinate system. Let a = (ax; ay; az) and b = (bx; by; bz). Then a + b = (ax + bx; ay + by; az + bz); Figure 3 (omitted here) illustrated this componentwise addition.

The sum of any finite number of vectors can be found using the polygon rule (Fig. 4): to construct the sum of a finite number of vectors, it is enough to match the beginning of each subsequent vector with the end of the previous one and construct a vector connecting the beginning of the first vector with the end of the last one.

Properties of the vector addition operation:

a + b = b + a (commutativity); (a + b) + c = a + (b + c) (associativity); m(a + b) = ma + mb; (m + n)a = ma + na; m(na) = (mn)a. In these expressions m, n are numbers.

The difference a − b of vectors is the vector a + (−b). The second term, −b, is the vector opposite to b in direction but equal to it in length. Thus the vector subtraction operation is replaced by the addition operation: a − b = a + (−b).

The vector whose beginning is at the origin of coordinates and whose end is at the point A(x₁, y₁, z₁) is called the radius vector of the point A. Since its coordinates coincide with the coordinates of the point A, its expansion in terms of the unit vectors i, j, k has the form r = x₁i + y₁j + z₁k.

A vector starting at the point A(x₁, y₁, z₁) and ending at the point B(x₂, y₂, z₂) can be written as AB = r₂ − r₁, where r₂ is the radius vector of the point B and r₁ is the radius vector of the point A. Therefore the expansion of the vector AB in terms of the unit vectors has the form

AB = (x₂ − x₁)i + (y₂ − y₁)j + (z₂ − z₁)k.

Its length equals the distance between the points A and B: |AB| = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²).
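In code, the vector AB and the distance |AB| are one subtraction and one norm; a sketch with made-up points:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])     # point A(x1, y1, z1)
B = np.array([4.0, 6.0, 3.0])     # point B(x2, y2, z2)

AB = B - A                        # r2 - r1
print(AB)                         # [3. 4. 0.]
print(np.linalg.norm(AB))         # distance |AB| = 5.0
```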

MULTIPLICATION

In the case of a plane problem, the product of the vector a = (ax; ay) and the number b is found by the formula

a · b = (ax · b; ay · b)

Example 1. Find the product of the vector a = (1; 2) by 3.

3 · a = (3 · 1; 3 · 2) = (3; 6)

In the case of a spatial problem, the product of the vector a = (ax; ay; az) and the number b is found by the formula

a · b = (ax · b; ay · b; az · b)

Example 2. Find the product of the vector a = (1; 2; −5) by 2.

2 · a = (2 · 1; 2 · 2; 2 · (−5)) = (2; 4; −10)

The dot (scalar) product of vectors a and b is the number a · b = |a| · |b| · cos φ, where φ is the angle between a and b; if a = 0 or b = 0, then a · b = 0.

From the definition of the scalar product it follows that a · b = |a| · pr_a b = |b| · pr_b a, where, for example, pr_b a is the value of the projection of the vector a onto the direction of the vector b.

Scalar square of a vector: a · a = |a|².

Dot product properties: a · b = b · a; (λa) · b = λ(a · b); (a + b) · c = a · c + b · c; a · a ≥ 0, and a · a = 0 only for a = 0.

Dot product in coordinates: if a = (ax; ay; az) and b = (bx; by; bz), then a · b = ax · bx + ay · by + az · bz.

Angle between vectors

The angle between vectors is the angle between the directions of these vectors (the smallest such angle, 0 ≤ φ ≤ π). It is found from cos φ = (a · b) / (|a| · |b|).
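Both formulas, the coordinate form of the dot product and the angle it yields, fit in a few lines; a numpy sketch with illustrative vectors:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])

dot = a @ b                                        # ax*bx + ay*by + az*bz
cos_phi = dot / (np.linalg.norm(a) * np.linalg.norm(b))
phi = np.degrees(np.arccos(cos_phi))
print(dot, phi)                                    # 1.0 45.0
```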

Vector product (the cross product of two vectors) is a pseudovector perpendicular to the plane spanned by the two factors; it is the result of the binary operation "vector multiplication" over vectors in three-dimensional Euclidean space. The product is neither commutative nor associative (it is anticommutative) and differs from the dot product of vectors. In many engineering and physics problems one needs to build a vector perpendicular to two existing ones, and the vector product provides this possibility. The cross product is also useful for "measuring" the perpendicularity of vectors: the length of the cross product of two vectors equals the product of their lengths if they are perpendicular, and decreases to zero if the vectors are parallel or anti-parallel.

Vector product is defined only in three-dimensional and seven-dimensional spaces. The result of the vector product, like the scalar product, depends on the metric of the Euclidean space.

Unlike the formula for calculating the scalar product from the coordinates of the vectors in a three-dimensional rectangular coordinate system, the formula for the vector product depends on the orientation of the rectangular coordinate system, in other words, on its "chirality".
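The "perpendicularity measuring" behavior described above is easy to confirm numerically; a sketch with made-up vectors:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 0.0])

n = np.cross(a, b)
print(n)                 # [0. 0. 2.]: perpendicular to both factors
print(n @ a, n @ b)      # 0.0 0.0
# |a x b| = |a||b| for perpendicular a, b; 0 for parallel vectors:
print(np.linalg.norm(np.cross(a, 3 * a)))   # 0.0
```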

Collinearity of vectors.

Two non-zero (not equal to 0) vectors are called collinear if they lie on parallel lines or on the same line. A synonym, "parallel" vectors, is admissible but not recommended. Collinear vectors can be directed in the same direction ("co-directed") or oppositely directed (in the latter case they are sometimes called "anticollinear" or "antiparallel").

The mixed product of vectors (a, b, c) is the scalar product of the vector a and the vector product of the vectors b and c:

(a, b, c) = a · (b × c).

It is sometimes called the triple scalar product of the vectors, apparently because the result is a scalar (more precisely, a pseudoscalar).

Geometric meaning: the modulus of the mixed product is numerically equal to the volume of the parallelepiped formed by the vectors a, b, c.

Geometric meaning: The modulus of the mixed product is numerically equal to the volume of the parallelepiped formed by the vectors (a,b,c) .

Properties

The mixed product is skew-symmetric with respect to all its arguments: a permutation of any two factors changes the sign of the product. It follows that the mixed product in a right Cartesian coordinate system (in an orthonormal basis) equals the determinant of the matrix composed of the vectors a, b, c written in rows.

The mixed product in a left Cartesian coordinate system (in an orthonormal basis) equals the same determinant taken with a minus sign.

In particular, if any two of the vectors are parallel, then with any third vector they form a mixed product equal to zero.

If three vectors are linearly dependent (i.e., coplanar, lie in the same plane), then their mixed product is zero.

Geometric meaning: the mixed product in absolute value equals the volume of the parallelepiped (figure omitted here) formed by the vectors a, b, c; the sign depends on whether this triple of vectors is right-handed or left-handed.
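A sketch computing the mixed product as a determinant and reading off the parallelepiped volume and orientation sign; the vectors are illustrative:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 2.0, 0.0])
c = np.array([0.0, 0.0, 3.0])

mixed = np.linalg.det(np.array([a, b, c]))      # (a, b, c) = a . (b x c)
print(mixed)               # 6.0: right-handed triple, volume 6
print(abs(mixed))          # volume of the parallelepiped
# Coplanar (linearly dependent) vectors give a zero mixed product:
print(np.linalg.det(np.array([a, b, a + b])))   # 0.0
```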

Coplanarity of vectors.

Three vectors (or more) are called coplanar if, being reduced to a common origin, they lie in the same plane.

Coplanarity properties

If at least one of the three vectors is zero, then the three vectors are also considered coplanar.

A triple of vectors containing a pair of collinear vectors is coplanar.

The mixed product of coplanar vectors is equal to zero. This is a criterion for the coplanarity of three vectors.

Coplanar vectors are linearly dependent. This is also a criterion for coplanarity.

In 3-dimensional space, three non-coplanar vectors form a basis.

Linearly dependent and linearly independent vectors.

Linearly dependent and independent systems of vectors. Definition. A system of vectors is called linearly dependent if there is at least one non-trivial linear combination of these vectors equal to the zero vector. Otherwise, i.e. if only the trivial linear combination of the given vectors equals the zero vector, the vectors are called linearly independent.

Theorem (linear dependence criterion). For a system of vectors in a linear space to be linearly dependent, it is necessary and sufficient that at least one of these vectors be a linear combination of the others.

1) If there is at least one zero vector among the vectors, then the entire system of vectors is linearly dependent.

Indeed, if, for example, a₁ = 0, then, taking λ₁ = 1 and λ₂ = … = λₙ = 0, we have the non-trivial linear combination 1·a₁ + 0·a₂ + … + 0·aₙ = 0. ▲

2) If some of the vectors form a linearly dependent system, then the entire system is linearly dependent.

Indeed, let the vectors a₁, …, a_k be linearly dependent. Then there exists a non-trivial linear combination λ₁a₁ + … + λ_k·a_k = 0 equal to the zero vector. But then, taking zero coefficients for the remaining vectors, we also obtain the non-trivial linear combination λ₁a₁ + … + λ_k·a_k + 0·a_{k+1} + … + 0·aₙ = 0 equal to the zero vector.

2. Basis and dimension. Definition. A linearly independent system of vectors of a vector space is called a basis of this space if any vector of the space can be represented as a linear combination of the vectors of this system, i.e. for each vector x there are real numbers λ₁, …, λₙ such that x = λ₁e₁ + … + λₙeₙ. This equality is called the decomposition of the vector x in the basis, and the numbers λ₁, …, λₙ are called the coordinates of the vector relative to the basis (or in the basis).

Theorem (on the uniqueness of the expansion in terms of the basis). Each vector of the space can be expanded in the basis in a unique way, i.e. the coordinates of each vector in the basis are determined unambiguously.
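Finding the coordinates of a vector in a basis is again a linear solve; uniqueness corresponds to the basis matrix being invertible. A sketch with an assumed basis of R³ (illustrative, not from the text):

```python
import numpy as np

# Columns of E are an assumed basis e1, e2, e3 of R^3; v is the vector
# to decompose.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([2.0, 3.0, 1.0])

coords = np.linalg.solve(E, v)    # unique since det(E) != 0
print(coords)                     # [0. 2. 1.], i.e. v = 0*e1 + 2*e2 + 1*e3
```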

Example 1. Find out whether the following system of vectors is linearly dependent or linearly independent:

a₁ = {3, 5, 1, 4}, a₂ = {−2, 1, −5, −7}, a₃ = {−1, −2, 0, −1}.

Solution. We look for a general solution of the system of equations

a₁x₁ + a₂x₂ + a₃x₃ = Θ

by the Gaussian method. To do this, we write the homogeneous system in coordinates:

3x₁ − 2x₂ − x₃ = 0
5x₁ + x₂ − 2x₃ = 0
x₁ − 5x₂ = 0
4x₁ − 7x₂ − x₃ = 0

After elimination, the reduced system (r_A = 2, n = 3) is consistent and underdetermined. Its general solution (x₂ a free variable): x₃ = 13x₂; 3x₁ − 2x₂ − 13x₂ = 0, hence x₁ = 5x₂, so X₀ = (5x₂; x₂; 13x₂). The existence of a non-zero particular solution, for example (5; 1; 13), shows that the vectors a₁, a₂, a₃ are linearly dependent.
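The particular solution found above is easy to verify in code: substituting x₁ = 5, x₂ = 1, x₃ = 13 must return the zero vector.

```python
import numpy as np

a1 = np.array([3, 5, 1, 4])
a2 = np.array([-2, 1, -5, -7])
a3 = np.array([-1, -2, 0, -1])

print(5 * a1 + 1 * a2 + 13 * a3)   # [0 0 0 0] -> linearly dependent
```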

Example 2

Find out whether the following system of vectors is linearly dependent or linearly independent:

a₁ = {−20, −15, −4}, a₂ = {−7, −2, −4}, a₃ = {3, −1, −2}.

Solution. Consider the homogeneous system of equations a₁x₁ + a₂x₂ + a₃x₃ = Θ, or, expanded by coordinates:

−20x₁ − 7x₂ + 3x₃ = 0
−15x₁ − 2x₂ − x₃ = 0
−4x₁ − 4x₂ − 2x₃ = 0

The system is homogeneous. If its matrix is non-degenerate, the system has a unique solution, which for a homogeneous system is the zero (trivial) solution; in that case the system of vectors is independent. If the matrix is degenerate, the system has non-zero solutions and the vectors are dependent.

Checking the system for degeneracy:

Δ = −80 − 28 + 180 − 48 + 80 − 210 = −106 ≠ 0.

The matrix is non-degenerate, and therefore the vectors a₁, a₂, a₃ are linearly independent.
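The same non-degeneracy test in code: a nonzero determinant of the coefficient matrix means only the trivial solution exists, i.e. independence. A sketch using the vectors as printed in the example:

```python
import numpy as np

a1 = [-20, -15, -4]
a2 = [-7, -2, -4]
a3 = [3, -1, -2]

d = np.linalg.det(np.array([a1, a2, a3]))
print("dependent" if abs(d) < 1e-10 else "independent")   # independent
```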

Tasks. Find out whether the given systems of vectors are linearly dependent or linearly independent:

1. a 1 = { -4, 2, 8 }, a 2 = { 14, -7, -28 }.

2. a 1 = { 2, -1, 3, 5 }, a 2 = { 6, -3, 3, 15 }.

3. a 1 = { -7, 5, 19 }, a 2 = { -5, 7 , -7 }, a 3 = { -8, 7, 14 }.

4. a 1 = { 1, 2, -2 }, a 2 = { 0, -1, 4 }, a 3 = { 2, -3, 3 }.

5. a 1 = { 1, 8 , -1 }, a 2 = { -2, 3, 3 }, a 3 = { 4, -11, 9 }.

6. a 1 = { 1, 2 , 3 }, a 2 = { 2, -1 , 1 }, a 3 = { 1, 3, 4 }.

7. a 1 = {0, 1, 1 , 0}, a 2 = {1, 1 , 3, 1}, a 3 = {1, 3, 5, 1}, a 4 = {0, 1, 1, -2}.

8. a 1 = {-1, 7, 1 , -2}, a 2 = {2, 3 , 2, 1}, a 3 = {4, 4, 4, -3}, a 4 = {1, 6, -11, 1}.

9. Prove that a system of vectors will be linearly dependent if it contains:

a) two equal vectors;

b) two proportional vectors.
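All of the tasks above reduce to one rank computation; a reusable checker, sketched and shown on task 1:

```python
import numpy as np

def is_linearly_dependent(vectors):
    # A system of p vectors is dependent iff rank < p.
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) < len(vectors)

# Task 1: a2 = -3.5 * a1, so the pair is dependent.
print(is_linearly_dependent([[-4, 2, 8], [14, -7, -28]]))   # True
```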


The concepts of linear dependence and independence of a system of vectors are very important in the study of vector algebra, since the concepts of dimension and basis of a space rest on them. In this article we give definitions, consider the properties of linear dependence and independence, obtain an algorithm for studying a system of vectors for linear dependence, and analyze the solutions of examples in detail.


Definition of linear dependence and linear independence of a system of vectors.

Consider a set of p n-dimensional vectors; denote them a₁, a₂, …, a_p. Compose a linear combination of these vectors with arbitrary numbers λ₁, λ₂, …, λ_p (real or complex): λ₁a₁ + λ₂a₂ + … + λ_p·a_p. Based on the definition of operations on n-dimensional vectors, as well as the properties of the operations of adding vectors and multiplying a vector by a number, it can be asserted that the written linear combination is some n-dimensional vector b, that is, b = λ₁a₁ + λ₂a₂ + … + λ_p·a_p.

So we came to the definition of the linear dependence of the system of vectors.

Definition.

If a linear combination can be the zero vector when among the numbers λ₁, λ₂, …, λ_p there is at least one different from zero, then the system of vectors a₁, a₂, …, a_p is called linearly dependent.

Definition.

If the linear combination is the zero vector only when all the numbers λ₁, λ₂, …, λ_p are equal to zero, then the system of vectors is called linearly independent.

Properties of linear dependence and independence.

Based on these definitions, we formulate and prove properties of linear dependence and linear independence of a system of vectors.

    If several vectors are added to a linearly dependent system of vectors, then the resulting system will be linearly dependent.

    Proof.

    Since the system of vectors a₁, a₂, …, a_p is linearly dependent, the equality λ₁a₁ + λ₂a₂ + … + λ_p·a_p = 0 is possible with at least one non-zero number among λ₁, λ₂, …, λ_p. Let λ₁ ≠ 0.

    Let us add s more vectors a_{p+1}, …, a_{p+s} to the original system, obtaining the system a₁, …, a_p, a_{p+1}, …, a_{p+s}. Since λ₁ ≠ 0, the linear combination of the vectors of this system of the form

    λ₁a₁ + … + λ_p·a_p + 0·a_{p+1} + … + 0·a_{p+s}

    is the zero vector with a non-zero coefficient λ₁. Therefore the resulting system of vectors is linearly dependent.

    If several vectors are excluded from a linearly independent system of vectors, then the resulting system will be linearly independent.

    Proof.

    We assume that the resulting system is linearly dependent. Adding all the discarded vectors to this system of vectors, we get the original system of vectors. By condition, it is linearly independent, and due to the previous property of linear dependence, it must be linearly dependent. We have arrived at a contradiction, hence our assumption is wrong.

    If a system of vectors has at least one zero vector, then such a system is linearly dependent.

    Proof.

    Let the vector a_k in this system of vectors be zero. Assume that the original system of vectors is linearly independent. Then the vector equality λ₁a₁ + … + λ_p·a_p = 0 would be possible only when all coefficients are zero. However, if we take any non-zero λ_k and all other coefficients zero, the equality still holds, since λ_k·a_k = λ_k·0 = 0. Therefore our assumption is wrong, and the original system of vectors is linearly dependent.

    If a system of vectors is linearly dependent, then at least one of its vectors is linearly expressed in terms of the others. If the system of vectors is linearly independent, then none of the vectors can be expressed in terms of the others.

    Proof.

    Let us first prove the first assertion.

    Let the system of vectors be linearly dependent; then there is at least one non-zero number λ_k, and the equality λ₁a₁ + … + λ_p·a_p = 0 is true. This equality can be resolved with respect to a_k, since λ_k ≠ 0; in this case we have

    a_k = (−λ₁/λ_k)a₁ + … + (−λ_{k−1}/λ_k)a_{k−1} + (−λ_{k+1}/λ_k)a_{k+1} + … + (−λ_p/λ_k)a_p.

    Consequently, the vector a_k is linearly expressed through the remaining vectors of the system, which was to be proved.

    Now we prove the second assertion.

    Since the system of vectors is linearly independent, the equality λ₁a₁ + … + λ_p·a_p = 0 is possible only for λ₁ = λ₂ = … = λ_p = 0.

    Suppose that some vector of the system is expressed linearly through the others. Let this vector be a_k, so that a_k = μ₁a₁ + … + μ_{k−1}a_{k−1} + μ_{k+1}a_{k+1} + … + μ_p·a_p. This equality can be rewritten as μ₁a₁ + … + (−1)·a_k + … + μ_p·a_p = 0; on its left side is a linear combination of the vectors of the system, and the coefficient in front of the vector a_k is non-zero, which indicates a linear dependence of the original system of vectors. So we have come to a contradiction, which means that the property is proved.

An important statement follows from the last two properties:
if a system of vectors contains vectors a and λa, where λ is an arbitrary number, then it is linearly dependent.

Study of the system of vectors for linear dependence.

Let us set the task: we need to establish the linear dependence or linear independence of a given system of vectors a₁, a₂, …, a_p.

The logical question is: “how to solve it?”

Something useful from a practical point of view can be derived from the above definitions and properties of linear dependence and independence of a system of vectors. These definitions and properties allow us to establish a linear dependence of a system of vectors in the following cases:

What about in other cases, which are the majority?

Let's deal with this.

Recall the formulation of the theorem on the rank of a matrix, cited earlier.

Theorem.

Let r be the rank of a matrix A of size p by n, r ≤ min(p, n). Let M be a basis minor of the matrix A. All rows (all columns) of the matrix A that do not participate in the formation of the basis minor M are linearly expressed through the rows (columns) of the matrix that generate the basis minor M.

And now let us explain the connection of the theorem on the rank of a matrix with the study of a system of vectors for a linear dependence.

Let us compose a matrix A whose rows are the vectors a₁, a₂, …, a_p of the system under study.

What will the linear independence of the system of vectors mean?

From the fourth property of the linear independence of a system of vectors, we know that none of the vectors of the system can be expressed in terms of the others. In other words, no row of the matrix A will be linearly expressed in terms of other rows, therefore, linear independence of the system of vectors will be equivalent to the condition Rank(A)=p.

What will the linear dependence of the system of vectors mean?

Everything is very simple: at least one row of the matrix A will be linearly expressed through the rest; therefore, linear dependence of the system of vectors is equivalent to the condition Rank(A) < p.

So, the problem of studying a system of vectors for a linear dependence is reduced to the problem of finding the rank of a matrix composed of the vectors of this system.

It should be noted that for p > n the system of vectors is always linearly dependent, since the rank cannot exceed n.

Comment: when compiling matrix A, the system vectors can be taken not as rows, but as columns.

Algorithm for studying a system of vectors for linear dependence.

The algorithm follows directly from the above: compose the matrix A whose rows (or columns) are the given vectors; find Rank(A); if Rank(A) equals the number of vectors p, the system is linearly independent, and if Rank(A) < p, it is linearly dependent. Let us analyze the algorithm with examples.
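A direct rendering of these steps as a small function; the two test systems are made-up illustrations:

```python
import numpy as np

def study_system(vectors):
    A = np.array(vectors, dtype=float)     # rows are the vectors
    p = A.shape[0]
    r = np.linalg.matrix_rank(A)
    return "linearly independent" if r == p else "linearly dependent"

print(study_system([[1, 0, 0], [0, 1, 0], [1, 1, 0]]))   # linearly dependent
print(study_system([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # linearly independent
```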

Examples of studying a system of vectors for linear dependence.

Example.

Given a system of vectors a, b, c, where the vector c is the zero vector. Examine it for linear dependence.

Solution.

Since the vector c is zero, the original system of vectors is linearly dependent due to the third property.

Answer:

The system of vectors is linearly dependent.

Example.

Examine the system of vectors for linear dependence.

Solution.

It is not difficult to see that the coordinates of the vector c are equal to the corresponding coordinates of another vector of the system multiplied by 3. Therefore, the original system of vectors is linearly dependent.

Answer: the system of vectors is linearly dependent.