Lemma 1: If at least one row (column) of an n×n matrix is zero, then the rows (columns) of the matrix are linearly dependent.

Proof: Let the first row A₁ be zero. Then

1·A₁ + 0·A₂ + … + 0·Aₙ = 0,

where the coefficient α₁ = 1 ≠ 0, so this is a non-trivial linear combination of the rows equal to the zero row, and the rows are linearly dependent. That is what was required.

Definition: A matrix whose elements located below the main diagonal are equal to zero is called triangular:

aᵢⱼ = 0 for i > j.

Lemma 2: The determinant of a triangular matrix is equal to the product of the elements of the main diagonal.

The proof is easy to carry out by induction on the dimension of the matrix.
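A sketch of that induction step, written for a matrix with zeros below the main diagonal (the minor notation A₁₁ is ours):

```latex
% Induction step (sketch): expand det A along the first column.
% For a triangular matrix only a_{11} is non-zero in that column, and the
% minor A_{11} is again triangular of size n-1, so the induction hypothesis applies.
\det A = a_{11}(-1)^{1+1}\det A_{11}
       = a_{11}\,\bigl(a_{22}a_{33}\cdots a_{nn}\bigr)
       = a_{11}a_{22}\cdots a_{nn}.
```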

Theorem on the linear dependence of vectors: the columns (rows) of a square matrix are linearly dependent if and only if its determinant D equals zero.

a) Necessity: if the columns are linearly dependent, then D = 0.

Proof: Let the columns be linearly dependent; that is, there are numbers αⱼ, j = 1, …, n, not all equal to zero, such that α₁A₁ + α₂A₂ + … + αₙAₙ = 0, where Aⱼ are the columns of the matrix A. Let, for example, αₙ ≠ 0.

Put αⱼ* = αⱼ/αₙ for j ≤ n − 1; then α₁*A₁ + α₂*A₂ + … + αₙ₋₁*Aₙ₋₁ + Aₙ = 0.

Let us replace the last column of the matrix A by the column

Aₙ* = α₁*A₁ + α₂*A₂ + … + αₙ₋₁*Aₙ₋₁ + Aₙ = 0.

According to the above-proven property of the determinant (it does not change if another column multiplied by a number is added to some column of the matrix), the determinant of the new matrix is equal to the determinant of the original one. But in the new matrix one column is zero, so, expanding the determinant along this column, we get D = 0, Q.E.D.
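A quick numerical illustration of the two determinant facts used here (a sketch assuming NumPy; the matrix is an arbitrary example of ours, not from the text):

```python
import numpy as np

# Illustrative 3x3 matrix (made up for this check).
A = np.array([[2., 1., 3.],
              [0., 4., 1.],
              [5., 2., 2.]])

# Adding a multiple of one column to another column does not change the determinant.
B = A.copy()
B[:, 2] += 7 * B[:, 0]                                    # col3 := col3 + 7*col1
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))     # True

# If the columns are linearly dependent, a zero column can be produced by such
# operations, and a matrix with a zero column has determinant 0.
C = A.copy()
C[:, 2] = 2 * C[:, 0] - C[:, 1]                           # make the columns dependent
print(np.isclose(np.linalg.det(C), 0.0))                  # True
```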

b) Sufficiency: an n×n matrix with linearly independent rows can always be reduced to triangular form by transformations that do not change the absolute value of the determinant. Moreover, from the linear independence of the rows of the original matrix it follows that its determinant is not equal to zero.

1. If in an n×n matrix with linearly independent rows the element a₁₁ equals zero, swap the first column with a column whose element a₁ⱼ ≠ 0; by Lemma 1 such an element exists (otherwise the first row would be zero). The determinant of the transformed matrix may differ from the determinant of the original matrix only in sign.

2. From each row with number i > 1 subtract the first row multiplied by aᵢ₁/a₁₁. After this, the first column has zero elements in all rows with numbers i > 1.

3. Now compute the determinant of the resulting matrix by expanding it along the first column. Since all elements in it except the first are equal to zero,

D′ = a′₁₁·(−1)¹⁺¹·D′₁₁,

where D′ is the determinant of the transformed matrix and D′₁₁ is the determinant of a matrix of smaller size (the minor of the element a′₁₁).

Next, to calculate the determinant D′₁₁, repeat steps 1, 2, 3 until the last determinant turns out to be the determinant of a 1×1 matrix. Since step 1 only changes the sign of the determinant of the matrix being transformed, and step 2 does not change the value of the determinant at all, in the end we obtain, up to sign, the determinant of the original matrix. Moreover, since by the linear independence of the rows of the original matrix the non-zero element required at step 1 always exists, all elements of the main diagonal turn out to be non-zero. Thus, the determinant computed by the described algorithm equals the product of non-zero elements on the main diagonal. Therefore, the determinant of the original matrix is not equal to zero. Q.E.D.
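The three steps above amount to Gaussian elimination with column swaps. A minimal sketch of the procedure, assuming NumPy (the function name det_by_elimination and the test matrix are ours):

```python
import numpy as np

def det_by_elimination(A: np.ndarray) -> float:
    """Determinant via the triangularization described above (steps 1-3)."""
    M = A.astype(float).copy()
    n = M.shape[0]
    sign = 1.0
    for k in range(n):
        if np.isclose(M[k, k], 0.0):
            # Step 1: swap in a column with a non-zero entry in row k
            # (this only flips the sign of the determinant).
            cols = np.where(~np.isclose(M[k, k:], 0.0))[0]
            if cols.size == 0:
                return 0.0        # row k is zero from column k on: rows are dependent
            j = k + cols[0]
            M[:, [k, j]] = M[:, [j, k]]
            sign = -sign
        # Step 2: eliminate below the pivot (does not change the determinant).
        for i in range(k + 1, n):
            M[i, :] -= (M[i, k] / M[k, k]) * M[k, :]
    # Step 3: the matrix is triangular, so by Lemma 2 the determinant is the
    # product of the diagonal, restored to the correct sign.
    return sign * float(np.prod(np.diag(M)))

A = np.array([[0., 2., 1.],
              [1., 3., 4.],
              [2., 0., 7.]])
print(det_by_elimination(A), np.linalg.det(A))   # both values are -4 (up to rounding)
```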


Appendix 2

Let L be a linear space over the field R, and let A₁, A₂, …, Aₙ (*) be a finite system of vectors from L. The vector B = α₁A₁ + α₂A₂ + … + αₙAₙ (16) is called a linear combination of the vectors (*), and one says that the vector B is linearly expressed through the system of vectors (*).

Definition 14. The system of vectors (*) is called linearly dependent if there is a set of coefficients α₁, α₂, …, αₙ, not all equal to zero, such that α₁A₁ + α₂A₂ + … + αₙAₙ = 0. If α₁A₁ + α₂A₂ + … + αₙAₙ = 0 ⇔ α₁ = α₂ = … = αₙ = 0, then the system (*) is called linearly independent.

Properties of linear dependence and independence.

1°. If a system of vectors contains the zero vector, then it is linearly dependent.

Indeed, if in the system (*) the vector A₁ = 0, then 1·A₁ + 0·A₂ + … + 0·Aₙ = 0 is a non-trivial combination equal to the zero vector.

2°. If a system of vectors contains two proportional vectors, then it is linearly dependent.

Indeed, let A₁ = λ·A₂. Then 1·A₁ − λ·A₂ + 0·A₃ + … + 0·Aₙ = 0.

3°. A finite system of vectors (*) with n ≥ 2 is linearly dependent if and only if at least one of its vectors is a linear combination of the remaining vectors of this system.

⇒ Let (*) be linearly dependent. Then there is a set of coefficients α₁, α₂, …, αₙ, not all equal to zero, for which α₁A₁ + α₂A₂ + … + αₙAₙ = 0. Without loss of generality we may assume that α₁ ≠ 0. Then A₁ = (−α₂/α₁)·A₂ + … + (−αₙ/α₁)·Aₙ, i.e. the vector A₁ is a linear combination of the remaining vectors.

⇐ Let one of the vectors of (*) be a linear combination of the others. We may assume that it is the first vector, i.e. A₁ = β₂A₂ + … + βₙAₙ. Hence (−1)·A₁ + β₂A₂ + … + βₙAₙ = 0, i.e. (*) is linearly dependent.
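In coordinates this criterion can be checked numerically: a system is linearly dependent exactly when the rank of the matrix formed by the vectors is smaller than the number of vectors. A small sketch assuming NumPy (names and sample vectors are ours):

```python
import numpy as np

def is_linearly_dependent(vectors) -> bool:
    """The rows of the matrix are linearly dependent iff its rank is
    smaller than the number of rows."""
    M = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(M) < M.shape[0]

# A1 is a linear combination of A2 and A3 (property 3°), so the system is dependent.
A2, A3 = np.array([1., 0., 2.]), np.array([0., 1., 1.])
A1 = 3 * A2 - 2 * A3
print(is_linearly_dependent([A1, A2, A3]))   # True
print(is_linearly_dependent([A2, A3]))       # False
```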

Comment. Using the last property, we can define the linear dependence and independence of an infinite system of vectors.

Definition 15. A system of vectors A₁, A₂, …, Aₙ, … (**) is called linearly dependent if at least one of its vectors is a linear combination of some finite number of the remaining vectors. Otherwise, the system (**) is called linearly independent.

4°. A finite system of vectors is linearly independent if and only if none of its vectors can be linearly expressed through the remaining vectors of the system.

5°. If a system of vectors is linearly independent, then any of its subsystems is also linearly independent.

6°. If some subsystem of a given system of vectors is linearly dependent, then the entire system is also linearly dependent.

Let two systems of vectors be given: A₁, A₂, …, Aₙ, … (16) and B₁, B₂, …, Bₛ, … (17). If each vector of system (16) can be represented as a linear combination of a finite number of vectors of system (17), then system (16) is said to be linearly expressed through system (17).

Definition 16. Two systems of vectors are called equivalent if each of them is linearly expressed through the other.

Theorem 9 (basic linear dependence theorem).

Let A₁, A₂, …, Aₙ and B₁, B₂, …, Bₛ be two finite systems of vectors from L. If the first system is linearly independent and is linearly expressed through the second, then n ≤ s.

Proof. Suppose that n > s. By the conditions of the theorem, each vector Aᵢ is expressed through B₁, …, Bₛ:

Aᵢ = c₁ᵢB₁ + c₂ᵢB₂ + … + cₛᵢBₛ, i = 1, …, n.

Consider the equality x₁A₁ + x₂A₂ + … + xₙAₙ = 0 (18). Since the system A₁, …, Aₙ is linearly independent, equality (18) ⇔ x₁ = x₂ = … = xₙ = 0. Let us substitute into (18) the expressions of the vectors Aᵢ: x₁(c₁₁B₁ + … + cₛ₁Bₛ) + … + xₙ(c₁ₙB₁ + … + cₛₙBₛ) = 0 (19). Hence (c₁₁x₁ + … + c₁ₙxₙ)B₁ + … + (cₛ₁x₁ + … + cₛₙxₙ)Bₛ = 0 (20). Conditions (18), (19) and (20) are obviously equivalent. But (18) is satisfied only when x₁ = x₂ = … = xₙ = 0. Let us find when equality (20) is true. It certainly holds if all its coefficients are zero; equating them to zero, we obtain the system

c₁₁x₁ + … + c₁ₙxₙ = 0, …, cₛ₁x₁ + … + cₛₙxₙ = 0. (21)

Since this homogeneous system has the zero solution, it is consistent. Since the number of its equations s is less than the number of unknowns n, the system has infinitely many solutions; therefore it has a non-zero solution x₁⁰, x₂⁰, …, xₙ⁰. For these values equality (20), and hence (18), is true, which contradicts the fact that the system of vectors A₁, …, Aₙ is linearly independent. So our assumption is wrong. Hence n ≤ s.

Corollary. If two equivalent systems of vectors are finite and linearly independent, then they contain the same number of vectors.

Definition 17. A system of vectors is called a maximal linearly independent system of vectors of the linear space L if it is linearly independent, but adding to it any vector from L not contained in this system makes it linearly dependent.

Theorem 10. Any two finite maximal linearly independent systems of vectors from L contain the same number of vectors.

Proof follows from the fact that any two maximal linearly independent systems of vectors are equivalent .

It is easy to prove that any linearly independent system of vectors of the space L can be extended to a maximal linearly independent system of vectors of this space.

Examples:

1. In the set of all collinear geometric vectors any system consisting of one nonzero vector is maximally linearly independent.

2. In the set of all coplanar geometric vectors, any two non-collinear vectors constitute a maximal linearly independent system.

3. In the set of all possible geometric vectors of three-dimensional Euclidean space, any system of three non-coplanar vectors is maximally linearly independent.

4. In the set of all polynomials of degree not higher than n with real (complex) coefficients, the system of polynomials 1, x, x², …, xⁿ is a maximal linearly independent system.

5. In the set of all polynomials with real (complex) coefficients, examples of a maximal linearly independent system are

a) 1, x, x², …, xⁿ, …;

b) 1, (1 − x), (1 − x)², …, (1 − x)ⁿ, …

6. The set of matrices of size m×n is a linear space (check this). An example of a maximal linearly independent system in this space is the system of matrix units E₁₁, E₁₂, …, Eₘₙ, where Eᵢⱼ is the matrix with 1 in position (i, j) and zeros elsewhere.

Let a system of vectors C₁, C₂, …, C_f (*) be given. A subsystem of vectors from (*) is called a maximal linearly independent subsystem of the system (*) if it is linearly independent, but adding to it any other vector of this system makes it linearly dependent. If the system (*) is finite, then any two of its maximal linearly independent subsystems contain the same number of vectors (prove this yourself). The number of vectors in a maximal linearly independent subsystem of the system (*) is called the rank of this system. Obviously, equivalent systems of vectors have equal ranks.
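A maximal linearly independent subsystem (and hence the rank of a finite system given in coordinates) can be extracted greedily. A sketch assuming NumPy, with made-up sample vectors:

```python
import numpy as np

def maximal_independent_subsystem(vectors):
    """Keep a vector only if it raises the rank of the already selected ones;
    the number of kept vectors equals the rank of the system."""
    kept = []
    for v in vectors:
        candidate = kept + [v]
        if np.linalg.matrix_rank(np.array(candidate, dtype=float)) == len(candidate):
            kept.append(v)
    return kept

C = [np.array([1., 0., 0.]),
     np.array([2., 0., 0.]),          # proportional to the first vector
     np.array([0., 1., 0.]),
     np.array([1., 1., 0.])]          # sum of the first and third vectors
subsystem = maximal_independent_subsystem(C)
print(len(subsystem))                  # 2 = rank of the system
```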

The following are several criteria for linear dependence and, accordingly, linear independence of systems of vectors.

Theorem. (Necessary and sufficient condition for linear dependence of vectors.)

A system of vectors is linearly dependent if and only if one of the vectors of the system is linearly expressed through the other vectors of this system.

Proof. Necessity. Let the system A₁, A₂, …, Aₙ be linearly dependent. Then, by definition, it represents the zero vector non-trivially, i.e. there is a non-trivial linear combination of this system of vectors equal to the zero vector:

α₁A₁ + α₂A₂ + … + αₙAₙ = 0,

where at least one of the coefficients of this linear combination is not equal to zero. Let, say, αₖ ≠ 0.

Divide both sides of the previous equality by this non-zero coefficient (i.e. multiply by 1/αₖ):

Aₖ = −(α₁/αₖ)A₁ − … − (αₖ₋₁/αₖ)Aₖ₋₁ − (αₖ₊₁/αₖ)Aₖ₊₁ − … − (αₙ/αₖ)Aₙ.

Denote βⱼ = −αⱼ/αₖ for j ≠ k; then Aₖ = β₁A₁ + … + βₖ₋₁Aₖ₋₁ + βₖ₊₁Aₖ₊₁ + … + βₙAₙ,

i.e. one of the vectors of the system is linearly expressed through the other vectors of this system, as required.

Sufficiency. Let one of the vectors of the system, say Aₖ, be linearly expressed through the other vectors of this system:

Aₖ = β₁A₁ + … + βₖ₋₁Aₖ₋₁ + βₖ₊₁Aₖ₊₁ + … + βₙAₙ.

Move the vector Aₖ to the right-hand side of this equality:

β₁A₁ + … + βₖ₋₁Aₖ₋₁ + (−1)Aₖ + βₖ₊₁Aₖ₊₁ + … + βₙAₙ = 0.

Since the coefficient of the vector Aₖ equals −1 ≠ 0, we have a non-trivial representation of zero by the system of vectors, which means that this system of vectors is linearly dependent, as required.

The theorem has been proven.

Corollary.

1. A system of vectors of a vector space is linearly independent if and only if none of the vectors of the system is linearly expressed through the other vectors of this system.

2. A system of vectors containing a zero vector or two equal vectors is linearly dependent.

Proof.

1) Necessity. Let the system be linearly independent. Assume the opposite: there is a vector of the system that is linearly expressed through the other vectors of this system. Then, according to the theorem, the system is linearly dependent, and we arrive at a contradiction.

Sufficiency. Let none of the vectors of the system be expressed through the others. Assume the opposite: let the system be linearly dependent. Then it follows from the theorem that there is a vector of the system that is linearly expressed through the other vectors of this system, and we again come to a contradiction.

2a) Let the system contain the zero vector. Assume for definiteness that A₁ = 0. Then the equality

A₁ = 0·A₂ + 0·A₃ + … + 0·Aₙ

is obvious, i.e. one of the vectors of the system is linearly expressed through the other vectors of this system. It follows from the theorem that such a system of vectors is linearly dependent, as required.

Note that this fact can also be proven directly from the definition of a linearly dependent system of vectors.

Since A₁ = 0, the following equality is obvious:

1·A₁ + 0·A₂ + … + 0·Aₙ = 0.

This is a non-trivial representation of the zero vector, which means the system is linearly dependent.

2b) Let the system contain two equal vectors. Assume for definiteness that A₁ = A₂. Then the equality

A₁ = 1·A₂ + 0·A₃ + … + 0·Aₙ

is obvious, i.e. the first vector is linearly expressed through the remaining vectors of the same system. It follows from the theorem that this system is linearly dependent, as required.

As before, this statement can also be proven directly from the definition of a linearly dependent system: with A₁ = A₂ the system represents the zero vector non-trivially,

1·A₁ + (−1)·A₂ + 0·A₃ + … + 0·Aₙ = 0,

whence the linear dependence of the system follows.

The corollary is proven.

Corollary. A system consisting of one vector is linearly independent if and only if this vector is nonzero.

3.3. Linear independence of vectors. Basis.

A linear combination of a system of vectors a₁, a₂, …, aₙ is a vector of the form

b = α₁a₁ + α₂a₂ + … + αₙaₙ,

where α₁, α₂, …, αₙ are arbitrary numbers.

If all αᵢ = 0, then the linear combination is called trivial. In this case, obviously, b = 0.

Definition 5.

If for a system of vectors a₁, a₂, …, aₙ

there is a non-trivial linear combination (at least one αᵢ ≠ 0) equal to the zero vector:

α₁a₁ + α₂a₂ + … + αₙaₙ = 0, (1)

then the system of vectors is called linearly dependent.

If equality (1) is possible only in the case when all αᵢ = 0, then the system of vectors is called linearly independent.

Theorem 2 (conditions of linear dependence). A system of vectors is linearly dependent if and only if at least one of its vectors can be represented as a linear combination of the others.

Definition 6. A basis in space is an ordered triple of non-coplanar vectors e₁, e₂, e₃.

From Theorem 3 it follows that if a basis e₁, e₂, e₃ is given in space, then by adding an arbitrary vector a to it we obtain a linearly dependent system of vectors. By Theorem 2 (1), one of them (it can be shown that it is the vector a) can be represented as a linear combination of the others:

a = x·e₁ + y·e₂ + z·e₃.

Definition 7.

The numbers x, y, z are called the coordinates of the vector a in the basis e₁, e₂, e₃

(denoted a = (x, y, z)).

If the vectors are considered in the plane, then a basis is an ordered pair of non-collinear vectors e₁, e₂,

and the coordinates of a vector in this basis are a pair of numbers: a = (x, y).

Note 3. It can be shown that for a given basis the coordinates of a vector are determined uniquely. From this, in particular, it follows that if vectors are equal, then their corresponding coordinates are equal, and vice versa.

Thus, if a basis is given in a space, then each vector of the space corresponds to an ordered triple of numbers (coordinates of the vector in this basis) and vice versa: each triple of numbers corresponds to a vector.

On the plane, a similar correspondence is established between vectors and pairs of numbers.

Theorem 4 (Linear operations through vector coordinates).

If in some basis

a = (x₁, y₁, z₁), b = (x₂, y₂, z₂),

and λ is an arbitrary number, then in this basis

a + b = (x₁ + x₂, y₁ + y₂, z₁ + z₂), λ·a = (λx₁, λy₁, λz₁).

In other words:

when a vector is multiplied by a number, its coordinates are multiplied by that number;

when vectors are added, their corresponding coordinates are added.

Example 1. In some basis the vectors a₁, a₂, a₃ and b have given coordinates.

Show that the vectors a₁, a₂, a₃ form a basis and find the coordinates of the vector b in this basis.

The vectors form a basis if they are non-coplanar, i.e. (in accordance with Theorem 3 (2)) linearly independent.

By Definition 5 this means that the equality

x·a₁ + y·a₂ + z·a₃ = 0

is only possible if x = y = z = 0.
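The computation reduces to a 3×3 linear system. A sketch assuming NumPy; since the concrete coordinates of Example 1 are not reproduced in the text, the numbers below are made up for illustration:

```python
import numpy as np

# Hypothetical coordinates (not the ones from Example 1).
a1 = np.array([1., 1., 0.])
a2 = np.array([0., 1., 1.])
a3 = np.array([1., 0., 1.])
b  = np.array([2., 3., 1.])

A = np.column_stack([a1, a2, a3])

# a1, a2, a3 form a basis iff they are non-coplanar, i.e. det A != 0.
print(not np.isclose(np.linalg.det(A), 0.0))   # True: they form a basis

# The coordinates (x, y, z) of b in this basis solve x*a1 + y*a2 + z*a3 = b.
x, y, z = np.linalg.solve(A, b)
print(x, y, z)                                  # approximately 2.0 1.0 0.0
```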

Def. A set W is called a linear space, and its elements vectors, if:

* a law (+) is given by which any two elements x, y from W are associated with an element from W called their sum, denoted x + y;

* a law (multiplication by a number α) is given by which an element x from W and a number α are associated with an element from W called the product of x by α, denoted αx;

* the following requirements (axioms) are fulfilled:

Consequences of the axioms:

c1. The zero vector is unique. (Indeed, suppose there are two zero vectors 0₁ and 0₂. By a3: 0₂ + 0₁ = 0₂ and 0₁ + 0₂ = 0₁. By a1: 0₁ + 0₂ = 0₂ + 0₁ => 0₁ = 0₂.)

c2. For every vector x the opposite vector is unique (proved similarly, using a4).

c3. 0·x = 0 (the zero vector) for every vector x (a7).

c4. α·0 = 0 for every number α (a6, c3).

c5. x·(−1) is the vector opposite to x, i.e. (−1)x = −x (a5, a6).

c6. In W the operation of subtraction is defined: the vector x is called the difference of the vectors b and a if x + a = b, and it is denoted x = b − a.

The number n is called the dimension of the linear space L if in L there is a system of n linearly independent vectors, while any system of n + 1 vectors is linearly dependent; one writes dim L = n, and the space L is called n-dimensional.

An ordered collection of n linearly independent vectors of an n-dimensional linear space is called a basis.

Theorem. Each vector x can be represented in a unique way as a linear combination of the basis vectors.

Let e₁, …, eₙ (1) be a basis of an n-dimensional linear space V, i.e. a collection of linearly independent vectors. The system of vectors x, e₁, …, eₙ is linearly dependent, because it contains n + 1 vectors.

That is, there are numbers λ₀, λ₁, …, λₙ, not all equal to zero simultaneously, such that λ₀x + λ₁e₁ + … + λₙeₙ = 0, and moreover λ₀ ≠ 0 (otherwise the vectors (1) would be linearly dependent).

Then x = ξ₁e₁ + … + ξₙeₙ, where ξᵢ = −λᵢ/λ₀ (*), which is the decomposition of the vector x in the basis (1).

This expression is unique: if another expression x = ξ₁′e₁ + … + ξₙ′eₙ (**) exists, then subtracting equality (**) from (*)

we get 0 = (ξ₁ − ξ₁′)e₁ + … + (ξₙ − ξₙ′)eₙ.

Because e₁, …, eₙ are linearly independent, all ξᵢ − ξᵢ′ = 0, i.e. ξᵢ = ξᵢ′. Q.E.D.

Theorem. If e₁, …, eₙ are linearly independent vectors of the space V and each vector x from V can be represented through e₁, …, eₙ, then these vectors form a basis of V.

Proof: e₁, …, eₙ (1) is linearly independent, so it remains to prove that any system of m > n vectors a₁, …, aₘ from V is linearly dependent. By assumption, each vector aₛ is expressed through (1): aₛ = c₁ₛe₁ + … + cₙₛeₙ, s = 1, …, m. Consider the matrix C = (cᵢₛ); rank C ≤ n, so among its m columns no more than n are linearly independent, but m > n => the m columns are linearly dependent.

That is, the vectors a₁, …, aₘ are linearly dependent.

Thus, the space V is n-dimensional and (1) is its basis.

№4. Def. A subset L of a linear space V is called a linear subspace of this space if, with respect to the operations (+) and (·α) defined in V, the subset L is itself a linear space.

Theorem. A set L of vectors of the space V is a linear subspace of this space if and only if (1) the sum of any two vectors from L belongs to L and (2) the product of any vector from L by any number belongs to L.

(Sufficiency.) Let (1) and (2) be satisfied. For L to be a subspace of V it remains to prove that all the axioms of a linear space hold in L.

The existence of the zero vector and of the opposite vector follows from (2): −x = (−1)x belongs to L and −x + x = 0 belongs to L.

The identity-type axioms, such as α(x + y) = αx + αy, hold in L because they hold in V.

(Necessity.) Let L be a linear subspace of this space; then (1) and (2) are satisfied by virtue of the definition of a linear space.

Def. The collection of all possible linear combinations of some elements (xⱼ) of a linear space is called their linear span (linear hull).

Theorem. The set of all linear combinations of vectors of V with real coefficients is a linear subspace of V (the linear span of a given system of vectors of a linear space is a linear subspace of this space).

Def. A non-empty subset L of vectors of a linear space V is called a linear subspace if:

a) the sum of any vectors from L belongs to L;

b) the product of any vector from L by any number belongs to L.

The sum of two subspaces L₁ and L₂ of L is again a subspace of L.

1) Let y₁, y₂ ∈ (L₁ + L₂), i.e. y₁ = x₁ + x₂, y₂ = x′₁ + x′₂, where x₁, x′₁ ∈ L₁ and x₂, x′₂ ∈ L₂. Then y₁ + y₂ = (x₁ + x₂) + (x′₁ + x′₂) = (x₁ + x′₁) + (x₂ + x′₂), where x₁ + x′₁ ∈ L₁ and x₂ + x′₂ ∈ L₂ => the first condition of a linear subspace is satisfied.

2) αy₁ = αx₁ + αx₂, where αx₁ ∈ L₁ and αx₂ ∈ L₂ => since (y₁ + y₂) ∈ (L₁ + L₂) and (αy₁) ∈ (L₁ + L₂), both conditions are met => L₁ + L₂ is a linear subspace.

The intersection of two subspaces L₁ and L₂ of a linear space L is also a subspace of this space.

Consider two arbitrary vectors x, y belonging to the intersection of the subspaces and two arbitrary numbers α, β.

By the definition of the intersection of sets, x, y ∈ L₁ and x, y ∈ L₂;

=> by the definition of a subspace of a linear space, αx + βy ∈ L₁ and αx + βy ∈ L₂.

Since the vector αx + βy belongs both to the set L₁ and to the set L₂, it belongs, by definition, to the intersection of these sets. Thus: αx + βy ∈ L₁ ∩ L₂, and L₁ ∩ L₂ is a subspace.

Def. One says that V is the direct sum of its subspaces L₁ and L₂ if a) every vector x from V can be decomposed as x = m + n with m ∈ L₁, n ∈ L₂, and b) this decomposition is unique.

b′) Let us show that b) is equivalent to b′): L₁ and L₂ intersect only in the zero vector.

When b) is true, b′) holds: suppose ∃ z ∈ L₁ ∩ L₂, z ≠ 0; then x = m + n = (m + z) + (n − z) gives two different decompositions of x, with m + z ∈ L₁ and n − z ∈ L₂ — a contradiction.

The converse is also true: let L₁ ∩ L₂ = {0} and x = m + n = m′ + n′; then m − m′ = n′ − n lies both in L₁ and in L₂, hence equals 0, and the decomposition is unique; assuming otherwise leads to a contradiction.

Theorem. For (*) (V to be the direct sum of L₁ and L₂) it is necessary and sufficient that the union of bases of L₁ and L₂ forms a basis of the space V.

(Necessity.) Let (*) hold and let the vectors e₁, …, eₖ and f₁, …, fₗ be bases of the subspaces L₁ and L₂. Every x from V has a decomposition x = m + n with m ∈ L₁, n ∈ L₂, and m, n are expanded over the corresponding bases, so x is expanded over the union of the bases. To state that this union constitutes a basis, it remains to prove its linear independence. Suppose a linear combination of these vectors equals 0; it splits into a part from L₁ plus a part from L₂, i.e. 0 = 0 + 0. Due to the uniqueness of the expansion of 0, both parts are zero; due to the linear independence of each basis, all coefficients are zero => the union is a basis.

(Sufficiency.) Let the union of the bases form a basis of V. Then every x has a unique decomposition over this basis (**); grouping its terms gives a decomposition x = m + n, so at least one decomposition (*) exists, and the uniqueness of (**) implies the uniqueness of (*).

Comment. The dimension of the direct sum is equal to the sum of the dimensions of the subspaces.

Any non-singular square matrix can serve as a transition matrix from one basis to another.

Let there be two bases e₁, …, eₙ and e′₁, …, e′ₙ in an n-dimensional linear space V.

(e′₁, …, e′ₙ) = (e₁, …, eₙ)·A (1), where A is the transition matrix; the elements of the rows (e₁, …, eₙ) and (e′₁, …, e′ₙ) are not numbers but vectors, and we extend certain operations on numeric matrices to such rows.

Here det A ≠ 0, because otherwise the vectors e′₁, …, e′ₙ would be linearly dependent.

Conversely, if det A ≠ 0, then the columns of A are linearly independent => the vectors e′₁, …, e′ₙ form a basis.

The coordinates x₁, …, xₙ of a vector in the old basis and its coordinates x′₁, …, x′ₙ in the new basis are related by the relation xᵢ = Σⱼ aᵢⱼ x′ⱼ, where aᵢⱼ are the elements of the transition matrix.

Indeed, let the decomposition of the elements of the "new" basis over the "old" one be known: e′ⱼ = Σᵢ aᵢⱼ eᵢ.

Then for a vector v = Σᵢ xᵢeᵢ = Σⱼ x′ⱼe′ⱼ the equalities Σᵢ xᵢeᵢ = Σⱼ x′ⱼ(Σᵢ aᵢⱼeᵢ) = Σᵢ (Σⱼ aᵢⱼx′ⱼ)eᵢ are true.

But if a linear combination of linearly independent elements is 0, then all its coefficients are 0 => xᵢ − Σⱼ aᵢⱼx′ⱼ = 0, i.e. xᵢ = Σⱼ aᵢⱼx′ⱼ.
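In coordinates this is a single matrix-vector product. A small sketch assuming NumPy, with a made-up 2×2 transition matrix:

```python
import numpy as np

# "New" basis vectors expressed in the old one as the columns of A
# (hypothetical numbers for illustration): e1' = e1, e2' = e1 + 2*e2.
A = np.array([[1., 1.],
              [0., 2.]])            # det A = 2 != 0, so this is a valid transition matrix

# A vector with coordinates x' in the new basis has coordinates x = A @ x' in the old one.
x_new = np.array([3., 1.])
x_old = A @ x_new
print(x_old)                         # [4. 2.]

# Conversely, old coordinates are converted to new ones with the inverse matrix.
print(np.linalg.solve(A, x_old))     # [3. 1.]
```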

Basic Linear Dependence Theorem

If the linearly independent system (*) is linearly expressed through (**), then n ≤ m.

Let us prove it by induction on m.

m = 1: then every vector of (*) is a multiple of the single vector of (**); if n > 1, the system (*) would contain the zero vector or two proportional vectors and could not be linearly independent — impossible, so n ≤ 1 = m.

Let the statement be true for m = k − 1;

let us prove it for m = k.

It may turn out that 1) the vectors of the subsystem (1) are linear combinations of the vectors of the system (2) b₁, …, bₖ₋₁ alone. The system (1) is linearly independent, because it is part of the linearly independent system (*). Since the system (2) contains only k − 1 vectors, by the induction hypothesis we obtain that the number of vectors in (1) does not exceed k − 1.