Chapter 3. Linear vector spaces

Topic 8. Linear vector spaces

Definition linear space. Examples of linear spaces

In §2.1 we defined the operation of addition of free vectors from R3 and the operation of multiplying vectors by real numbers, and listed the properties of these operations. Extending these operations and their properties to a set of objects (elements) of arbitrary nature leads to a generalization of the linear space of geometric vectors from R3 defined in §2.1. Let us formulate the definition of a linear vector space.

Definition 8.1. A set V of elements x, y, z, … is called a linear vector space if:

there is a rule that assigns to every two elements x and y from V a third element from V, called the sum of x and y and denoted x + y;

there is a rule that assigns to each element x and each real number λ an element from V, called the product of the element x by the number λ and denoted λx.

Moreover, the sum x + y of any two elements and the product λx of any element by any number must satisfy the following requirements — the axioms of a linear space:

1°. x + y = y + x (commutativity of addition).

2°. (x + y) + z = x + (y + z) (associativity of addition).

3°. There is an element 0, called zero, such that

x + 0 = x for every x ∈ V.

4°. For any x there is an element (−x), called the opposite of x, such that

x + (−x) = 0.

5°. λ(μx) = (λμ)x for all x ∈ V and λ, μ ∈ R.

6°. 1·x = x for all x ∈ V.

7°. (λ + μ)x = λx + μx for all x ∈ V and λ, μ ∈ R.

8°. λ(x + y) = λx + λy for all x, y ∈ V and λ ∈ R.

We will call the elements of a linear space vectors, regardless of their nature.

From axioms 1°–8° it follows that in any linear space V the following properties hold:

1) there is a unique zero vector;

2) for each vector x there is exactly one opposite vector (−x), and (−x) = (−1)x;

3) for any vector x the equality 0·x = 0 holds.

Let us prove, for example, property 1). Suppose that the space V contains two zeros: 0₁ and 0₂. Setting x = 0₁, 0 = 0₂ in axiom 3°, we get 0₁ + 0₂ = 0₁. Similarly, setting x = 0₂, 0 = 0₁, we get 0₂ + 0₁ = 0₂. Taking into account axiom 1°, we obtain 0₁ = 0₂.

Let us give examples of linear spaces.

1. The set of real numbers forms a linear space R. Axioms 1°–8° are obviously satisfied in it.

2. The set of free vectors in three-dimensional space, as shown in §2.1, also forms a linear space, denoted R3. The zero of this space is the zero vector.


The sets of vectors on the line and in the plane are also linear spaces. We will denote them R1 and R2, respectively.

3. A generalization of the spaces R1, R2 and R3 is the space Rn, n ∈ N, called the arithmetic n-dimensional space, whose elements (vectors) are ordered collections of n arbitrary real numbers (x1, …, xn), i.e.

Rn = {(x1, …, xn) | xi ∈ R, i = 1, …, n}.

It is convenient to use the notation x = (x1, …, xn); the number xi is called the i-th coordinate (component) of the vector x.

For x, y ∈ Rn and λ ∈ R we define addition and multiplication by a number by the following formulas:

x + y = (x1 + y1, …, xn + yn);

λx = (λx1, …, λxn).

The zero element of the space Rn is the vector 0 = (0, …, 0). Equality of two vectors x = (x1, …, xn) and y = (y1, …, yn) from Rn means, by definition, equality of the corresponding coordinates, i.e. x = y ⇔ x1 = y1 & … & xn = yn.

The fulfillment of axioms 1°–8° is obvious here.
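As a quick illustration (a minimal sketch, assuming Python with numpy is available; the variable names are ours, not the text's), one can spot-check axioms 1°–8° on random vectors from Rn. This is a numerical illustration of the componentwise definitions above, not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x, y, z = rng.standard_normal((3, n))   # three random vectors from R^n
lam, mu = 2.0, -3.5                     # two real numbers

assert np.allclose(x + y, y + x)                      # 1° commutativity
assert np.allclose((x + y) + z, x + (y + z))          # 2° associativity
assert np.allclose(x + np.zeros(n), x)                # 3° zero vector
assert np.allclose(x + (-x), np.zeros(n))             # 4° opposite vector
assert np.allclose(lam * (mu * x), (lam * mu) * x)    # 5°
assert np.allclose(1.0 * x, x)                        # 6°
assert np.allclose((lam + mu) * x, lam * x + mu * x)  # 7°
assert np.allclose(lam * (x + y), lam * x + lam * y)  # 8°
print("axioms 1°-8° hold on this sample")
```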

4. Let C[a; b] be the set of real functions f: [a; b] → R continuous on the interval [a; b].

The sum of functions f and g from C[a; b] is the function h = f + g defined by the equality

h = f + g ⇔ h(x) = (f + g)(x) = f(x) + g(x), ∀x ∈ [a; b].

The product of a function f ∈ C[a; b] by a number λ ∈ R is defined by the equality

u = λf ⇔ u(x) = (λf)(x) = λf(x), ∀x ∈ [a; b].

Thus, the introduced operations of adding two functions and multiplying a function by a number turn the set C[a; b] into a linear space whose vectors are functions. Axioms 1°–8° are obviously satisfied in this space. The zero vector of this space is the identically zero function, and equality of two functions f and g means, by definition, the following:

f = g ⇔ f(x) = g(x), ∀x ∈ [a; b].
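A minimal sketch of this function space in Python (the helper names add and scale are ours): functions are the vectors, and the operations are defined pointwise, exactly as in the two equalities above:

```python
import math

def add(f, g):
    """Sum of two functions: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(lam, f):
    """Product by a number: (lam*f)(x) = lam*f(x)."""
    return lambda x: lam * f(x)

zero = lambda x: 0.0          # the zero vector: the identically zero function

h = add(math.sin, math.cos)   # h = sin + cos, again a continuous function
u = scale(3.0, math.sin)      # u = 3*sin

assert abs(h(0.5) - (math.sin(0.5) + math.cos(0.5))) < 1e-12
assert abs(add(math.sin, zero)(0.5) - math.sin(0.5)) < 1e-12   # f + 0 = f
assert abs(u(0.5) - 3.0 * math.sin(0.5)) < 1e-12
```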

Lecture 6. Vector space.

Basic questions.

1. Vector linear space.

2. Basis and dimension of space.

3. Space orientation.

4. Decomposition of a vector by basis.

5. Vector coordinates.

1. Vector linear space.

A set consisting of elements of any nature, in which linear operations are defined — the addition of two elements and the multiplication of an element by a number — is called a space, and its elements are called vectors of this space and are denoted in the same way as vector quantities in geometry. The vectors of such abstract spaces, as a rule, have nothing in common with ordinary geometric vectors: the elements of abstract spaces can be functions, systems of numbers, matrices, etc., and, in a particular case, ordinary vectors. Therefore such spaces are usually called vector spaces.

Examples of vector spaces are the set of collinear vectors, denoted V1, the set of coplanar vectors V2, and the set of vectors of ordinary (real) space V3.

For this particular case, we can give the following definition of a vector space.

Definition 1. A set of vectors is called a vector space if a linear combination of any vectors of the set is also a vector of this set. The vectors themselves are called elements of the vector space.

More important, both in theory and in applications, is the general (abstract) concept of a vector space.

Definition 2. A set R of elements, in which the sum x + y is defined for any two elements x and y, and the product λx is defined for any element x and any number λ, is called a vector (or linear) space, and its elements are called vectors, if the operations of adding vectors and multiplying a vector by a number satisfy the following conditions (axioms):

1) addition is commutative: x + y = y + x;

2) addition is associative: (x + y) + z = x + (y + z);

3) there is an element 0 (the zero vector) such that x + 0 = x for any x;

4) for any x there is an opposite vector −x such that x + (−x) = 0;

5) for any vectors x and y and any number λ the equality λ(x + y) = λx + λy holds;

6) for any vector x and any numbers λ and µ the equality (λ + µ)x = λx + µx holds;

7) for any vector x and any numbers λ and µ the equality λ(µx) = (λµ)x holds;

8) 1·x = x for any x.

From the axioms defining a vector space there follow the simplest consequences:

1. In a vector space there is only one zero element — the zero vector.

2. In a vector space each vector has a unique opposite vector.

3. For each element x the equality 0·x = 0 holds.

4. For any real number λ and the zero vector 0 the equality λ·0 = 0 holds.

5. (−1)·x = −x for every vector x.

6. The difference x − y is the vector z that satisfies the equality z + y = x, i.e. z = x + (−y).

So, indeed, the set of all geometric vectors is a linear (vector) space, since for the elements of this set operations of addition and multiplication by a number are defined that satisfy the formulated axioms.

2. Basis and dimension of space.

The essential concepts of a vector space are the concepts of basis and dimension.

Definition. A set of linearly independent vectors, taken in a certain order, through which any vector of the space can be linearly expressed, is called a basis of this space. The vectors composing a basis of the space are called basis vectors.

As a basis of the set of vectors lying on an arbitrary line one can take any single nonzero vector collinear with this line.

A basis in the plane is any pair of non-collinear vectors of this plane, taken in a certain order.

If the basis vectors are pairwise perpendicular (orthogonal), then the basis is called orthogonal, and if these vectors have a length equal to one, then the basis is called orthonormal .

The largest number of linearly independent vectors of a space is called the dimension of this space; i.e., the dimension of the space coincides with the number of basis vectors of this space.

So, according to these definitions:

1. The one-dimensional space V1 is a line; its basis consists of one nonzero vector collinear with this line.

2. The plane is the two-dimensional space V2, whose basis consists of two non-collinear vectors taken in a certain order.

3. Ordinary space is the three-dimensional space V3, whose basis consists of three ordered non-coplanar vectors.

From this we see that the number of basis vectors on a line, in a plane, and in real space coincides with what in geometry is usually called the number of dimensions (the dimension) of the line, plane, and space. Therefore it is natural to introduce a more general definition.

Definition. A vector space R is called n-dimensional if it contains at most n linearly independent vectors; it is denoted Rn. The number n is called the dimension of the space.

According to their dimension, spaces are divided into finite-dimensional and infinite-dimensional. The dimension of the null space is by definition equal to zero.

Note 1. In each space one can specify as many bases as desired, but all bases of a given space consist of the same number of vectors.

Note 2. In an n-dimensional vector space, a basis is any ordered collection of n linearly independent vectors.
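In coordinates these notions are easy to check numerically. A small sketch (Python with numpy; the example vectors are ours): the dimension of the space spanned by a family of vectors equals the rank of the matrix having these vectors as columns, so a basis contains exactly that many vectors:

```python
import numpy as np

# Three vectors in R^3, written as the columns of a matrix.
v1, v2, v3 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (2.0, 3.0, 0.0)
M = np.column_stack([v1, v2, v3])

# v3 = 2*v1 + 3*v2, so only two of the three are linearly independent:
print(np.linalg.matrix_rank(M))         # 2
print(np.linalg.matrix_rank(M[:, :2]))  # 2 -> v1, v2 form a basis of the span
```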

3. Space orientation.

Let the basis vectors in the space V3 have a common origin and be ordered, i.e. it is indicated which vector is considered first, which second, and which third. For example, in a basis the vectors may be ordered by their indices.

To orient the space, it is necessary to choose some basis and declare it positive.

It can be shown that the set of all bases of the space splits into two classes, that is, into two disjoint subsets:

a) all bases belonging to one subset (class) have the same orientation (like-named bases);

b) any two bases belonging to different subsets (classes) have opposite orientation (unlike-named bases).

If one of the two classes of bases of a space is declared positive and the other negative, then the space is said to be oriented.

Often, when orienting a space, some bases are called right and others left.

An ordered basis of three non-coplanar vectors is called right if, when observed from the end of the third vector, the shortest rotation from the first vector to the second is seen counterclockwise (Fig. 1.8, a).


Fig. 1.8. Right basis (a) and left basis (b)

Usually the right basis of the space is declared to be the positive basis.

The right (left) basis of space can also be determined using the rule of a “right” (“left”) screw or gimlet.

By analogy, the concept of right and left triples of non-coplanar vectors is introduced; the triples must be ordered (Fig. 1.8).

Thus, in the general case, two ordered triples of non-coplanar vectors have the same orientation (are like-named) in the space V3 if they are both right or both left, and have opposite orientation (are unlike-named) if one of them is right and the other is left.

The same is done in the case of the space V2 (the plane).
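A computational way to classify an ordered triple (a sketch in Python with numpy, under the usual convention that the standard basis is declared the positive, right class): the triple is right when the determinant of the matrix with the three vectors as columns is positive, and swapping two vectors flips the orientation:

```python
import numpy as np

e1, e2, e3 = np.eye(3)                   # the standard (right) basis of V3

right = np.column_stack([e1, e2, e3])    # ordered triple (e1, e2, e3)
left  = np.column_stack([e2, e1, e3])    # the first two vectors swapped

print(np.sign(np.linalg.det(right)))     # 1.0  -> same class as (e1, e2, e3)
print(np.sign(np.linalg.det(left)))      # -1.0 -> the opposite class
```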

4. Decomposition of a vector by basis.

For simplicity of reasoning, let us consider this question using the example of a three-dimensional vector space R3 .

Let e1, e2, e3 be a basis of this space and a an arbitrary vector of this space. Then a can be represented, and uniquely so, as a linear combination of the basis vectors: a = x1e1 + x2e2 + x3e3. The numbers x1, x2, x3 are called the coordinates of the vector a in this basis.

Euclidean space

A Euclidean vector space is a vector space over the field of real numbers equipped with a scalar product; this definition is taken below as the starting point.

The n-dimensional Euclidean space is usually denoted Eⁿ; the notation Rⁿ is also often used when it is clear from the context that the space carries a natural Euclidean structure.

Formal definition

To define a Euclidean space, the easiest way is to take the scalar product as the basic concept. A Euclidean vector space is defined as a finite-dimensional vector space over the field of real numbers, on whose pairs of vectors a real-valued function (·, ·) is defined, having the following three properties:

1) bilinearity: (x, y) is linear in each argument;

2) symmetry: (x, y) = (y, x);

3) positive definiteness: (x, x) ≥ 0, and (x, x) = 0 only for x = 0.

An example of a Euclidean space is the coordinate space Rⁿ, consisting of all possible tuples of real numbers (x1, x2, …, xn), with the scalar product defined by the formula (x, y) = Σᵢ₌₁ⁿ xᵢyᵢ = x1y1 + x2y2 + ⋯ + xnyn.

Lengths and angles

The scalar product given on a Euclidean space is sufficient to introduce the geometric concepts of length and angle. The length of a vector u is defined as √(u, u) and is denoted |u|. The positive definiteness of the scalar product guarantees that the length of a nonzero vector is nonzero, and it follows from bilinearity that |au| = |a||u|, that is, the lengths of proportional vectors are proportional.

The angle between vectors x and y is defined by the formula φ = arccos((x, y)/(|x||y|)). It follows from the law of cosines that for a two-dimensional Euclidean space (the Euclidean plane) this definition of the angle coincides with the usual one. Orthogonal vectors, as in three-dimensional space, can be defined as vectors the angle between which equals π/2.

The Cauchy–Bunyakovsky–Schwarz inequality and the triangle inequality

There is one gap left in the definition of the angle given above: in order for arccos((x, y)/(|x||y|)) to be defined, it is necessary that the inequality |(x, y)/(|x||y|)| ≤ 1 hold. This inequality does hold in an arbitrary Euclidean space and is called the Cauchy–Bunyakovsky–Schwarz inequality. From this inequality, in turn, follows the triangle inequality: |u + v| ≤ |u| + |v|. The triangle inequality, together with the length properties listed above, means that the length of a vector is a norm on the Euclidean vector space, and the function d(x, y) = |x − y| defines the structure of a metric space on the Euclidean space (this function is called the Euclidean metric). In particular, the distance between elements (points) x and y of the coordinate space Rⁿ is given by the formula d(x, y) = ‖x − y‖ = √(Σᵢ₌₁ⁿ (xᵢ − yᵢ)²).
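These facts are easy to observe numerically. A small sketch (Python with numpy; random data, names are ours) checking the Cauchy–Bunyakovsky–Schwarz inequality, the triangle inequality, the angle, and the Euclidean metric on R⁴:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal((2, 4))       # two random vectors from R^4

dot = x @ y
nx, ny = np.linalg.norm(x), np.linalg.norm(y)

assert abs(dot) <= nx * ny + 1e-12                  # Cauchy-Bunyakovsky-Schwarz
assert np.linalg.norm(x + y) <= nx + ny + 1e-12     # triangle inequality

phi = np.arccos(dot / (nx * ny))                    # the angle is well defined
d = np.linalg.norm(x - y)                           # Euclidean metric d(x, y)
print(phi, d)
```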

Algebraic properties


Conjugate spaces and operators

Any vector x of a Euclidean space defines a linear functional x* on this space, defined as x*(y) = (x, y). This correspondence is an isomorphism between the Euclidean space and its dual space, and allows them to be identified without compromising calculations. In particular, adjoint operators can be considered as acting on the original space rather than on its dual, and self-adjoint operators can be defined as operators coinciding with their adjoints. In an orthonormal basis, the matrix of the adjoint operator is the transpose of the matrix of the original operator, and the matrix of a self-adjoint operator is symmetric.
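A numerical illustration (a sketch in Python with numpy; the matrices are random examples of ours): in an orthonormal basis the identity (Ax, y) = (x, Aᵀy) holds, and A + Aᵀ yields a self-adjoint (symmetric) operator:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))   # matrix of an operator in an orthonormal basis
x, y = rng.standard_normal((2, 4))

# Adjoint operator: (Ax, y) = (x, A^T y) for all x, y.
assert np.isclose((A @ x) @ y, x @ (A.T @ y))

S = A + A.T                       # a self-adjoint operator: S = S^T
assert np.allclose(S, S.T)
```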

Movements of Euclidean space

Motions of a Euclidean space are metric-preserving transformations (also called isometries). An example of a motion is the parallel translation by a vector v, which takes a point p to the point p + v. It is easy to see that any motion is a composition of a parallel translation and a transformation keeping one point fixed. Choosing a fixed point as the origin of coordinates, any such motion can be considered as an orthogonal transformation.

4.3.1 Definition of linear space

Let ā, b̄, c̄ be elements of some set L and λ, μ real numbers, λ, μ ∈ R.

The set L is called a linear or vector space if two operations are defined:

1°. Addition. To each pair of elements of this set there corresponds an element of the same set, called their sum:

ā + b̄ = c̄.

2°. Multiplication by a number. To each real number λ and element ā ∈ L there corresponds an element of the same set λā ∈ L, and the following properties are satisfied:

1. ā + b̄ = b̄ + ā;

2. ā + (b̄ + c̄) = (ā + b̄) + c̄;

3. there exists a zero element 0̄ such that ā + 0̄ = ā;

4. there exists an opposite element −ā such that ā + (−ā) = 0̄.

If λ, μ are real numbers, then:

5. λ(μā) = (λμ)ā;

6. 1·ā = ā;

7. λ(ā + b̄) = λā + λb̄;

8. (λ + μ)ā = λā + μā.

The elements of a linear space ā, b̄, … are called vectors.

Exercise. Show on your own that the following sets form linear spaces:

1) the set of geometric vectors in the plane;

2) the set of geometric vectors in three-dimensional space;

3) the set of polynomials of degree at most some fixed n;

4) the set of matrices of the same dimensions.

4.3.2 Linearly dependent and independent vectors. Dimension and basis of space

A linear combination of vectors ā1, ā2, …, ān ∈ L is a vector of the same space of the form

b̄ = λ1ā1 + λ2ā2 + … + λnān,

where the λi are real numbers.

Vectors ā1, …, ān are called linearly independent if their linear combination equals the zero vector only when all λi are equal to zero, that is,

λ1ā1 + … + λnān = 0̄ ⇒ λi = 0 (i = 1, …, n).

If a linear combination equals the zero vector while at least one of the λi differs from zero, then the vectors are called linearly dependent. The latter means that at least one of the vectors can be represented as a linear combination of the others. Indeed, let, for example, λn ≠ 0. Then

ān = μ1ā1 + … + μn−1ān−1, where μi = −λi/λn.

A maximal linearly independent ordered system of vectors is called a basis of the space L. The number of basis vectors is called the dimension of the space.

Suppose that there are n linearly independent vectors; then the space is called n-dimensional. The other vectors of the space can be represented as linear combinations of the n basis vectors. As a basis of an n-dimensional space one can take any n linearly independent vectors of this space.
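A small computational sketch of both claims (Python with numpy; the three vectors are our example): the rank test certifies linear independence, and a linear solve produces the coefficients expressing any other vector through the basis:

```python
import numpy as np

a1 = np.array([1.0, 0.0, 1.0])
a2 = np.array([0.0, 1.0, 1.0])
a3 = np.array([1.0, 1.0, 0.0])
B = np.column_stack([a1, a2, a3])

# Full rank certifies that a1, a2, a3 are linearly independent in R^3 ...
assert np.linalg.matrix_rank(B) == 3

# ... so any vector b is their linear combination; solve for the coefficients.
b = np.array([2.0, 3.0, 4.0])
lam = np.linalg.solve(B, b)   # b = lam[0]*a1 + lam[1]*a2 + lam[2]*a3
assert np.allclose(lam[0] * a1 + lam[1] * a2 + lam[2] * a3, b)
print(lam)
```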

Example 17. Find a basis and the dimension of the following linear spaces:

a) the set of vectors lying on a line (collinear to some line);

b) the set of vectors belonging to a plane;

c) the set of vectors of three-dimensional space;

d) the set of polynomials of degree at most two.

Solution.

a) Any two vectors lying on a line are linearly dependent, since they are collinear: if ā and b̄ lie on one line, then b̄ = λā, where λ is a scalar. Consequently, a basis of this space is a single (any) nonzero vector.

Usually this space is denoted R; its dimension is 1.

b) Any two non-collinear vectors ē1 and ē2 are linearly independent, while any three vectors in the plane are linearly dependent. For any vector ā there are numbers λ1 and λ2 such that ā = λ1ē1 + λ2ē2. The space is called two-dimensional and is denoted R2.

A basis of a two-dimensional space is formed by any two non-collinear vectors.

c) Any three non-coplanar vectors are linearly independent; they form a basis of the three-dimensional space R3.

d) As a basis of the space of polynomials of degree at most two we can choose the following three vectors: ē1 = x², ē2 = x, ē3 = 1

(1 is the polynomial identically equal to one). This space is three-dimensional.
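In coordinates relative to this basis, the operations on such polynomials become componentwise operations on triples of numbers. A sketch (the sample polynomials are ours):

```python
import numpy as np

# Coordinates in the basis e1 = x^2, e2 = x, e3 = 1.
p = np.array([1.0, -2.0, 3.0])   # p(x) = x^2 - 2x + 3
q = np.array([0.0,  4.0, 1.0])   # q(x) = 4x + 1

s = p + q                        # s(x) = x^2 + 2x + 4
t = 2.0 * p                      # t(x) = 2x^2 - 4x + 6
print(s, t)
```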

CHAPTER 8. LINEAR SPACES § 1. Definition of a linear space

Generalizing the concept of a vector known from school geometry, we will define algebraic structures (linear spaces) in which one can construct n-dimensional geometry, a special case of which is analytic geometry.

Definition 1. Let a set L = {a, b, c, …} and a field P = {α, β, …} be given. Suppose that an algebraic operation of addition is defined on L, and that multiplication of elements of L by elements of the field P is defined.

The set L is called a linear space over the field P if the following requirements (the axioms of a linear space) are met:

1. L is a commutative group with respect to addition;

2. α(βa) = (αβ)a ∀α, β ∈ P, a ∈ L;

3. α(a + b) = αa + αb ∀α ∈ P, a, b ∈ L;

4. (α + β)a = αa + βa ∀α, β ∈ P, a ∈ L;

5. ∀a ∈ L the following equality holds: 1·a = a (where 1 is the unit of the field P).

The elements of the linear space L are called vectors (we note once again that we denote them by Latin letters a, b, c, …), and the elements of the field P are called numbers (we denote them by Greek letters α, β, …).

Remark 1. We see that the well-known properties of “geometric” vectors are taken as axioms of linear space.

Remark 2. Some well-known algebra textbooks use different notations for numbers and vectors.

Basic examples of linear spaces

1. R1 is the set of all vectors on a certain line.

In what follows we will call such vectors segment vectors on a line. If we take R as P, then, obviously, R1 is a linear space over the field R.

2. R2, R3 — segment vectors in the plane and in three-dimensional space. It is easy to see that R2 and R3 are linear spaces over R.

3. Let P be an arbitrary field. Consider the set P(n) of all ordered sets of n elements of the field P:

P(n) = {(α1, α2, α3, …, αn) | αi ∈ P, i = 1, 2, …, n}.

The set a = (α1, α2, …, αn) will be called an n-dimensional row vector. The numbers αi will be called the components of the vector a.

For vectors from P(n), by analogy with geometry, we naturally introduce the operations of addition and multiplication by a number, setting for any (α1, α2, …, αn) ∈ P(n), (β1, β2, …, βn) ∈ P(n) and λ ∈ P:

(α1, α2, …, αn) + (β1, β2, …, βn) = (α1 + β1, α2 + β2, …, αn + βn),

λ(α1, α2, …, αn) = (λα1, λα2, …, λαn).

From the definition of addition of row vectors it is clear that it is performed componentwise. It is easy to check that P(n) is a linear space over P.

The vector 0 = (0, …, 0) is the zero vector (a + 0 = a ∀a ∈ P(n)), and the vector −a = (−α1, −α2, …, −αn) is the opposite of a (since a + (−a) = 0).

Linear space P(n) is called the n-dimensional space of row vectors, or n-dimensional arithmetic space.

Remark 3. Sometimes we will also denote by P(n) the n-dimensional arithmetic space of column vectors, which differs from P(n) only in the way the vectors are written.
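Since P may be any field, the componentwise formulas work verbatim over, for example, the finite field Z/5Z. A hypothetical minimal model in Python (the helper names are ours):

```python
p = 5   # P = Z/5Z, the field of residues modulo 5

def vec_add(a, b):
    """Componentwise addition in P(n)."""
    return tuple((x + y) % p for x, y in zip(a, b))

def vec_scale(lam, a):
    """Componentwise multiplication by lam in P(n)."""
    return tuple((lam * x) % p for x in a)

a = (1, 4, 2)
b = (3, 3, 0)
print(vec_add(a, b))                     # (4, 2, 2)
print(vec_scale(3, a))                   # (3, 2, 1)
print(vec_add(a, vec_scale(p - 1, a)))   # the opposite vector: (0, 0, 0)
```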

4. Consider the set Mn(P) of all matrices of order n with elements from the field P. It is a linear space over P, in which the zero matrix is the matrix all of whose elements are zeros.

5. Consider the set P[x] of all polynomials in the variable x with coefficients from the field P. It is easy to verify that P[x] is a linear space over P. We call it the space of polynomials.

6. Let Pn[x] = {α0xⁿ + … + αn | αi ∈ P, i = 0, 1, …, n} be the set of all polynomials of degree at most n, together with 0. It is a linear space over the field P. We will call Pn[x] the space of polynomials of degree at most n.

7. Let us denote by Φ the set of all functions of a real variable with the same domain of definition. Then Φ is a linear space over R.

Inside this space one can find other linear spaces, for example the space of linear functions, of differentiable functions, of continuous functions, etc.

8. Every field is a linear space over itself.

Some corollaries from the axioms of linear space

Corollary 1. Let L be a linear space over a field P. Then L contains the zero element 0, and for every a ∈ L we have (−a) ∈ L (since L is a group under addition).

In what follows, the zero element of the field P and that of the linear space L will be denoted identically by 0. This usually causes no confusion.

Corollary 2. 0·a = 0 ∀a ∈ L (here 0 ∈ P on the left side and 0 ∈ L on the right side).

Proof. Consider α·a, where α is any number from P. We have: α·a = (α + 0)a = α·a + 0·a; adding −(α·a) to both sides, we obtain 0·a = 0.

Corollary 3. α·0 = 0 ∀α ∈ P.

Proof. Consider α·a = α(a + 0) = α·a + α·0; hence α·0 = 0. Corollary 4. α·a = 0 if and only if either α = 0 or a = 0.

Proof. Sufficiency is proven in Corollaries 2 and 3.

Let us prove necessity. Let α·a = 0 (2). Suppose that α ≠ 0. Then, since α ∈ P, there exists α⁻¹ ∈ P. Multiplying (2) by α⁻¹, we obtain:

α⁻¹(α·a) = α⁻¹·0. By Corollary 3, α⁻¹·0 = 0, i.e. α⁻¹(α·a) = 0. (3)

On the other hand, using axioms 2 and 5 of a linear space, we have: α⁻¹(α·a) = (α⁻¹α)a = 1·a = a. (4)

From (3) and (4) it follows that a = 0. The corollary is proven.

We present the following statements without proof (their validity is easily verified).

Corollary 5. (−α)a = −(αa) ∀α ∈ P, a ∈ L.

Corollary 6. α(−a) = −(αa) ∀α ∈ P, a ∈ L.

Corollary 7. α(a − b) = αa − αb ∀α ∈ P, a, b ∈ L.

§ 2. Linear dependence of vectors

Let L be a linear space over the field P and let a1, a2, …, as (1) be some finite set of vectors from L.

The set a1, a2, …, as will be called a system of vectors.

If b = α1a1 + α2a2 + … + αsas (αi ∈ P), then we say that the vector b is linearly expressed through system (1), or is a linear combination of the vectors of system (1).

As in analytic geometry, in a linear space one can introduce the concepts of linearly dependent and linearly independent systems of vectors. We shall do this in two ways.

Definition I. A finite system of vectors (1) with s ≥ 2 is called linearly dependent if at least one of its vectors is a linear combination of the others. Otherwise (i.e. when none of its vectors is a linear combination of the others) it is called linearly independent.

Definition II. A finite system of vectors (1) is called linearly dependent if there is a set of numbers α1, α2, …, αs, αi ∈ P, at least one of which is not equal to 0 (such a set is called nonzero), such that the equality α1a1 + … + αsas = 0 (2) holds.

From Definition II one can obtain several equivalent definitions of a linearly independent system:

Definition 2.

a) system (1) is linearly independent if from (2) it follows that α1 = … = αs = 0;

b) system (1) is linearly independent if equality (2) holds only when all αi = 0 (i = 1, …, s);

c) system (1) is linearly independent if any nontrivial linear combination of the vectors of this system is different from 0, i.e. if β1, …, βs is any nonzero set of numbers, then β1a1 + … + βsas ≠ 0.

Theorem 1. For s ≥ 2, Definitions I and II of linear dependence are equivalent.

Proof.

I) Let (1) be linearly dependent according to Definition I. Then, without loss of generality, we may assume that as = α1a1 + … + αs−1as−1. Add the vector (−as) to both sides of this equality. We obtain:

0 = α1a1 + … + αs−1as−1 + (−1)as (3)

(since, by Corollary 5, (−as) = (−1)as). In equality (3) the coefficient (−1) ≠ 0, and therefore system (1) is linearly dependent according to Definition II.

II) Let system (1) be linearly dependent according to Definition II, i.e. there is a nonzero set α1, …, αs satisfying (2). Without loss of generality we may assume that αs ≠ 0. Add (−αsas) to both sides of (2). We obtain:

α1a1 + α2a2 + … + αsas − αsas = −αsas, whence α1a1 + … + αs−1as−1 = −αsas (4).

Since αs ≠ 0, there exists αs⁻¹ ∈ P. Multiply both sides of equality (4) by (−αs⁻¹) and use some axioms of a linear space. We obtain:

(−αs⁻¹)(−αsas) = (−αs⁻¹)(α1a1 + … + αs−1as−1), whence (−αs⁻¹α1)a1 + … + (−αs⁻¹αs−1)as−1 = as.

Let us introduce the notation β1 = −αs⁻¹α1, …, βs−1 = −αs⁻¹αs−1. Then the equality obtained above is rewritten as:

as = β1a1 + … + βs−1as−1.

Since s ≥ 2, there is at least one vector ai on the right-hand side. Thus system (1) is linearly dependent according to Definition I.

The theorem has been proven.

By virtue of Theorem 1, for s ≥ 2 we may, as needed, use either of the above definitions of linear dependence.

Remark 1. If a system consists of a single vector a1, then only Definition II is applicable to it.

Let a1 = 0; then 1·a1 = 0. Since 1 ≠ 0, a1 = 0 is a linearly dependent system.

Let a1 ≠ 0; then α1a1 ≠ 0 for any α1 ≠ 0. This means that a nonzero vector a1 forms a linearly independent system.

There are important connections between the linear dependence of a system of vectors and its subsystems.

Theorem 2. If some subsystem (i.e. part) of a finite system of vectors is linearly dependent, then the entire system is linearly dependent.

The proof of this theorem is not difficult to do on your own. It can be found in any algebra or analytical geometry textbook.

Corollary 1. All subsystems of a linearly independent system are linearly independent. Obtained from Theorem 2 by contradiction.

Remark 2. It is easy to see that linearly dependent systems can have linearly independent subsystems.

Corollary 2. If a system contains 0 or two proportional (equal) vectors, then it is linearly dependent (since a subsystem consisting of 0 or of two proportional vectors is linearly dependent).

§ 3. Maximal linearly independent subsystems

Definition 3. Let a1, a2, …, ak, … (1) be a finite or infinite system of vectors of a linear space L. A finite subsystem ai1, ai2, …, air (2) of it is called a basis of system (1), or a maximal linearly independent subsystem of this system, if the following two conditions are met:

1) subsystem (2) is linearly independent;

2) if any vector aj of system (1) is appended to subsystem (2), then the resulting system ai1, ai2, …, air, aj (3) is linearly dependent.

Example 1. In the space Pn[x], consider the system of polynomials 1, x, …, xⁿ (4). Let us prove that (4) is linearly independent. Let α0, α1, …, αn be numbers from P such that α0·1 + α1x + … + αnxⁿ = 0. Then, by the definition of equality of polynomials, α0 = α1 = … = αn = 0. This means that the system of polynomials (4) is linearly independent.

Let us now prove that system (4) is a basis of the linear space Pn [x].

For any f(x) ∈ Pn[x] we have: f(x) = β0xⁿ + … + βn·1 ∈ Pn[x]; hence f(x) is a linear combination of the vectors of (4); then the system 1, x, …, xⁿ, f(x) is linearly dependent (by Definition I). Thus, (4) is a basis of the linear space Pn[x].

Example 2. In Fig. 1, a1, a3 and a2, a3 are bases of the system of vectors a1, a2, a3.

Theorem 3. A subsystem (2) ai1, …, air of a finite or infinite system (1) a1, a2, …, as, … is a maximal linearly independent subsystem (a basis) of system (1) if and only if:

a) (2) is linearly independent; b) any vector of (1) is linearly expressed through (2).

Necessity. Let (2) be a maximal linearly independent subsystem of system (1). Then the two conditions of Definition 3 are satisfied:

1) (2) is linearly independent;

2) for any vector aj from (1) the system ai1, …, air, aj (5) is linearly dependent. We must prove that statements a) and b) hold.

Condition a) coincides with 1); therefore, a) is satisfied.

Further, by virtue of 2), there is a nonzero set α1, …, αr, β ∈ P (6) such that α1ai1 + … + αrair + βaj = 0 (7). Let us prove that β ≠ 0 (8). Suppose that β = 0 (9). Then from (7) we obtain: α1ai1 + … + αrair = 0 (10). Since the set (6) is nonzero and β = 0, the set α1, …, αr is nonzero. But then from (10) it follows that (2) is linearly dependent, which contradicts condition a). This proves (8).

Adding the vector (−βaj) to both sides of equality (7), we obtain: −βaj = α1ai1 + … + αrair. Since β ≠ 0, there exists β⁻¹ ∈ P; multiplying both sides of the last equality by (−β⁻¹), we obtain:

(−β⁻¹α1)ai1 + … + (−β⁻¹αr)air = aj. Introducing

the notation γ1 = −β⁻¹α1, …, γr = −β⁻¹αr, we obtain: γ1ai1 + … + γrair = aj; thus the fulfillment of condition b) is proven.

Necessity is proven.

Sufficiency. Let conditions a) and b) from Theorem 3 be satisfied. It is necessary to prove that conditions 1) and 2) from Definition 3 are satisfied.

Since condition a) coincides with condition 1), then 1) is satisfied.

Let us prove that 2) holds. By condition b), any vector aj of (1) is linearly expressed through (2). Consequently, (5) is linearly dependent (by Definition I), i.e. 2) is fulfilled.

The theorem has been proven.

Comment. Not every linear space has a basis. For example, there is no basis in the space P[x] (otherwise the degrees of all polynomials in P[x] would, as follows from part b) of Theorem 3, be bounded in aggregate).

§ 4. The main theorem about linear dependence. Its consequences

Definition 4. Let two finite systems of vectors of a linear space L be given: a1, a2, …, al (1) and

b1, b2, …, bs (2).

If each vector of system (1) is linearly expressed through (2), then we say that system (1) is linearly expressed through (2). Examples:

1. Any subsystem of a system a1, …, ai, …, ak is linearly expressed through the entire system, since

ai = 0·a1 + … + 1·ai + … + 0·ak.

2. Any system of segment vectors from R2 is linearly expressed through a system consisting of two non-collinear plane vectors.

Definition 5. If two finite systems of vectors are linearly expressed through each other, then they are called equivalent.

Note 1. The number of vectors in two equivalent systems may be different, as can be seen from the following examples.

3. Each system is equivalent to its basis (this follows from Theorem 3 and Example 1).

4. Any two systems of segment vectors from R2, each of which contains two non-collinear vectors, are equivalent.

The following theorem is one of the most important statements in the theory of linear spaces.

Main theorem on linear dependence. Let two systems of vectors be given in a linear space L over a field P:

a1, a2, …, al (1) and b1, b2, …, bs (2), where (1) is linearly independent and linearly expressed through (2). Then l ≤ s (3). Proof. We must prove inequality (3). Suppose the contrary: let l > s (4).

By condition, each vector ai from (1) is linearly expressed through system (2):

a1 = α11b1 + α12b2 + … + α1sbs,
a2 = α21b1 + α22b2 + … + α2sbs,

…………………... (5)

al = αl1b1 + αl2b2 + … + αlsbs.

Let us form the following equation: x1a1 + x2a2 + … + xlal = 0 (6), where the xi are unknowns taking values in the field P (i = 1, …, l).

Multiplying each of the equalities (5) by x1, x2, …, xl respectively, substituting into (6) and collecting the terms containing b1, then b2, …, and finally bs, we obtain:

x1a1 + … + xlal = (α11x1 + α21x2 + … + αl1xl)b1

+ (α12x1 + α22x2 + … + αl2xl)b2 + … + (α1sx1 + α2sx2 + … + αlsxl)bs = 0. (7)

Let us try to find a nonzero solution of equation (6). To do this, we equate to zero all the coefficients of bi (i = 1, 2, …, s) and compose the following system of equations:

α11x1 + α21x2 + … + αl1xl = 0

α12x1 + α22x2 + … + αl2xl = 0

…………………….

α1sx1 + α2sx2 + … + αlsxl = 0.

(8) is a homogeneous system of s equations in the unknowns x1, …, xl. It is always consistent.

Due to inequality (4), in this system the number of unknowns is greater than the number of equations, and therefore, as follows from the Gauss method, it reduces to trapezoidal form. This means that system (8) has nonzero

solutions. Let us denote one of them by x1⁰, x2⁰, …, xl⁰ (9), xi⁰ ∈ P (i = 1, 2, …, l).

Substituting the numbers (9) into the left-hand side of (7), we obtain: x1⁰a1 + x2⁰a2 + … + xl⁰al = 0·b1 + 0·b2 + … + 0·bs = 0. (10)

So, (9) is a nonzero solution of equation (6). Hence system (1) is linearly dependent, and this contradicts the hypothesis. Therefore our assumption (4) is false, and l ≤ s.

The theorem has been proven.
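The computational core of the proof — a homogeneous system with more unknowns than equations always has a nonzero solution — can be observed directly (a sketch in Python with numpy; one nonzero solution is read off the SVD null space):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 4))    # s = 2 equations, l = 4 unknowns, l > s

_, _, Vt = np.linalg.svd(A)
x0 = Vt[-1]                        # a vector from the null space of A
assert np.linalg.norm(x0) > 0      # it is nonzero (a unit vector, in fact)
assert np.allclose(A @ x0, 0, atol=1e-10)
print(x0)
```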

Corollaries of the main theorem on linear dependence.

Corollary 1. Two finite equivalent linearly independent systems of vectors consist of the same number of vectors.

Proof. Let the systems of vectors (1) and (2) be equivalent and linearly independent. To prove this, we apply the main theorem twice.

Since system (2) is linearly independent and is linearly expressed through (1), by the main theorem s ≤ l (11).

On the other hand, (1) is linearly independent and is linearly expressed through (2), and by the main theorem l ≤ s (12).

From (11) and (12) it follows that s = l. The statement is proven.

Corollary 2. If in some system of vectors a1 ,…,as ,… (13) (finite or infinite) there are two bases, then they consist of the same number of vectors.

Proof. Let ai1 ,…,ail (14) and aj1 ,..ajk (15) be the bases of system (13). Let us show that they are equivalent.

According to Theorem 3, each vector of system (13) is linearly expressed through its basis (15), in particular, any vector of system (14) is linearly expressed through system (15). Similarly, system (15) is linearly expressed through (14). This means that systems (14) and (15) are equivalent and by Corollary 1 we have: l=k.

The statement has been proven.

Definition 6. The number of vectors in an arbitrary basis of a finite (infinite) system of vectors is called the rank of this system (if there are no bases, then the rank of the system does not exist).

By Corollary 2, if system (13) has at least one basis, its rank is unique.

Remark 2. If a system consists only of zero vectors, then we assume that its rank is 0. Using the concept of rank, we can strengthen the main theorem.

Corollary 3. Given two finite systems of vectors (1) and (2), and (1) is linearly expressed through (2). Then the rank of system (1) does not exceed the rank of system (2).

Proof . Let us denote the rank of system (1) by r1, the rank of system (2) by r2. If r1 =0, then the statement is true.

Let r1 ≠ 0. Then r2 ≠ 0, since (1) is linearly expressed through (2). This means that systems (1) and (2) have bases.

Let a1 ,…,ar1 (16) be the basis of system (1) and b1 ,…,br2 (17) be the basis of system (2). They are linearly independent by definition of the basis.

Since (16) is linearly independent and is linearly expressed through (17) (each vector of (16) is expressed through system (2), and (2) through its basis (17)), the main theorem can be applied to the pair of systems (16), (17). According to this

theorem, r1 ≤ r2. The statement is proven.

Corollary 4. Two finite equivalent systems of vectors have the same ranks. To prove this statement, we need to apply Corollary 3 twice.

Remark 3. Note that the rank of a linearly independent system of vectors equals the number of its vectors (since in a linearly independent system its only basis coincides with the system itself). Therefore Corollary 1 is a special case of Corollary 4. But without proving this special case we could not prove Corollary 2, introduce the concept of the rank of a system of vectors, and obtain Corollary 4.

§ 5. Finite-dimensional linear spaces

Definition 7. A linear space L over a field P is called finite-dimensional if there is at least one basis in L.

Basic examples of finite-dimensional linear spaces:

1. Segment vectors on a line, in a plane, and in space (the linear spaces R1, R2, R3).

2. The n-dimensional arithmetic space P(n). Let us show that P(n) has the following basis:

e1 = (1, 0, …, 0)

e2 = (0, 1, …, 0) (1)

…

en = (0, 0, …, 1).

Let us first prove that (1) is a linearly independent system. Form the equation x1e1 + x2e2 + … + xnen = 0 (2).

Using the form of the vectors (1), we rewrite equation (2) as follows: x1(1, 0, …, 0) + x2(0, 1, …, 0) + … + xn(0, 0, …, 1) = (x1, x2, …, xn) = (0, 0, …, 0).

By the definition of equality of row vectors, it follows:

x1 =0, x2 =0,…, xn =0 (3). Therefore, (1) is a linearly independent system. Let us prove that (1) is a basis of the space P(n) using Theorem 3 on bases.

For any a = (α1, α2, …, αn) ∈ P(n) we have:

a = (α1, α2, …, αn) = (α1, 0, …, 0) + (0, α2, …, 0) + … + (0, 0, …, αn) = α1e1 + α2e2 + … + αnen.

This means that any vector in the space P(n) can be linearly expressed through (1). Consequently, (1) is a basis of the space P(n), and therefore P(n) is a finite-dimensional linear space.

3. The linear space Pn[x] = {α0xⁿ + … + αn | αi ∈ P}.

It is easy to verify that a basis of the space Pn[x] is the system of polynomials 1, x, …, xⁿ. So Pn[x] is a finite-dimensional linear space.

4. The linear space Mn(P). One can verify that the set of matrices of the form Eij, in which the only nonzero element 1 stands at the intersection of the i-th row and the j-th column (i, j = 1, …, n), constitutes a basis of Mn(P).

Corollaries from the main theorem on linear dependence for finite-dimensional linear spaces

Along with Corollaries 1–4 of the main theorem on linear dependence, several other important statements can be obtained from this theorem.

Corollary 5. Any two bases of a finite-dimensional linear space consist of the same number of vectors.

This statement is a special case of Corollary 2 of the main linear dependence theorem applied to the entire linear space.

Definition 8. The number of vectors in an arbitrary basis of a finite-dimensional linear space L is called the dimension of this space and is denoted by dim L.

By Corollary 5, every finite-dimensional linear space has a uniquely determined dimension. Definition 9. If a linear space L has dimension n, then it is called an n-dimensional

linear space. Examples:

1. dim R1 = 1;

2. dim R2 = 2;

3. dim P(n) = n, i.e. P(n) is an n-dimensional linear space, since above, in Example 2, it was shown that (1) is a basis of P(n);

4. dim Pn[x] = n + 1, since, as is easy to check, 1, x, x², …, xⁿ is a basis of n + 1 vectors of this space;

5. dim Mn(P) = n², since there are exactly n² matrices of the form Eij indicated in Example 4.

Corollary 6. In an n-dimensional linear space L, any n+1 vectors a1 ,a2 ,…,an+1 (3) constitute a linearly dependent system.

Proof. By the definition of the dimension of a space, L has a basis of n vectors: e1, e2, …, en (4). Consider the pair of systems (3) and (4).

Suppose that (3) is linearly independent. Since (4) is a basis of L, any vector of the space L is linearly expressed through (4) (by Theorem 3 of §3). In particular, system (3) is linearly expressed through (4). By assumption, (3) is linearly independent; then the main theorem on linear dependence can be applied to the pair of systems (3) and (4). We obtain n + 1 ≤ n, which is impossible. The contradiction proves that (3) is linearly dependent.

The corollary is proven.

Remark 1. From Corollary 6 and Theorem 2 from §2 we obtain that in an n-dimensional linear space any finite system of vectors containing more than n vectors is linearly dependent.

From this remark there follows

Corollary 7. In an n-dimensional linear space, any linearly independent system contains at most n vectors.

Remark 2. Using this statement we can establish that some linear spaces are not finite-dimensional.

Example. Consider the space of polynomials P[x] and prove that it is not finite-dimensional. Suppose that dim P[x] = m, m ∈ N. Consider 1, x, …, x^m — a set of m + 1 vectors from P[x]. As noted above, this system of vectors is linearly independent, which contradicts the assumption that the dimension of P[x] equals m.

Similarly (using P[x]) it is easy to check that the space of all functions of a real variable, the space of continuous functions, etc., are not finite-dimensional linear spaces.

Corollary 8. Any finite linearly independent system of vectors a1 , a2 ,…,ak (5) of a finite-dimensional linear space L can be supplemented to the basis of this space.

Proof. Let n=dim L. Let's consider two possible cases.

1. If k = n, then a1, a2, …, ak is a linearly independent system of n vectors. By Corollary 7, for any b ∈ L the system a1, a2, …, ak, b is linearly dependent, i.e. (5) is a basis of L.

2. Let k < n. Then system (5) is not a basis of L, hence there exists a vector ak+1 ∈ L such that a1, a2, …, ak, ak+1 (6) is a linearly independent system. If k + 1 < n, we repeat the same argument for system (6).

By Corollary 7, this process ends after a finite number of steps. We obtain a basis a1 , a2 ,…,ak , ak+1 ,…,an of the linear space L, containing (5).

The corollary is proven.
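The proof of Corollary 8 is effectively an algorithm. A sketch of it for the case P(n) = Rⁿ (Python with numpy; the function name and the candidate set are ours): keep appending candidates that preserve linear independence until n vectors are collected:

```python
import numpy as np

def extend_to_basis(independent, candidates):
    """Append candidates that keep the system independent until n are collected."""
    basis = [np.asarray(v, dtype=float) for v in independent]
    n = len(basis[0])
    for c in candidates:
        if len(basis) == n:
            break
        if np.linalg.matrix_rank(np.column_stack(basis + [c])) == len(basis) + 1:
            basis.append(np.asarray(c, dtype=float))
    return basis

start = [np.array([1.0, 1.0, 0.0])]                # a linearly independent system
basis = extend_to_basis(start, list(np.eye(3)))    # candidates: e1, e2, e3
print(np.column_stack(basis))                      # a nonsingular 3x3 matrix
```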

From Corollary 8 there follows

Corollary 9. Any nonzero vector of a finite-dimensional linear space L is contained in some basis of L (since such a vector forms a linearly independent system).

It follows that if P is an infinite field, then a finite-dimensional linear space over the field P (of nonzero dimension) has infinitely many bases (since L contains infinitely many vectors of the form αa, a ≠ 0, α ∈ P\{0}).

§ 6. Isomorphism of linear spaces

Definition 10. Two linear spaces L and L` over the same field P are called isomorphic if there is a bijection φ: L → L` satisfying the following conditions:

1. φ(a + b) = φ(a) + φ(b) ∀a, b ∈ L;

2. φ(αa) = αφ(a) ∀α ∈ P, a ∈ L.

Such a mapping φ itself is called an isomorphism, or an isomorphic mapping.

Properties of isomorphisms.

1. Under an isomorphism the zero vector goes to the zero vector.

Proof. Let a ∈ L and let φ: L → L` be an isomorphism. Since a = a + 0, we have φ(a) = φ(a + 0) = φ(a) + φ(0).

Since φ(L) = L`, the last equality shows that φ(0) (we denote it by 0`) is the zero vector of L`.

2. Under an isomorphism a linearly dependent system goes to a linearly dependent system. Proof. Let a1, a2, …, as (2) be some linearly dependent system from L. Then there exists

a nonzero set of numbers α1, …, αs (3) from P such that α1a1 + … + αsas = 0. Apply the isomorphic mapping φ to both sides of this equality. Taking into account the definition of an isomorphism, we obtain:

α1φ(a1) + … + αsφ(as) = φ(0) = 0` (we used property 1). Since the set (3) is nonzero, it follows from the last equality that φ(a1), …, φ(as) is a linearly dependent system.

3. If φ: L → L` is an isomorphism, then φ⁻¹: L` → L is also an isomorphism.

Proof. Since φ is a bijection, there is a bijection φ⁻¹: L` → L. We must prove that if a`, b` ∈ L`, then φ⁻¹(a` + b`) = φ⁻¹(a`) + φ⁻¹(b`). Put a = φ⁻¹(a`), b = φ⁻¹(b`); then a` = φ(a), b` = φ(b) (5).

Since φ is an isomorphism, a` + b` = φ(a) + φ(b) = φ(a + b). It follows from this that:

a + b = φ⁻¹(φ(a + b)) = φ⁻¹(φ(a) + φ(b)) = φ⁻¹(a` + b`) (6).

From (5) and (6) we have φ⁻¹(a` + b`) = a + b = φ⁻¹(a`) + φ⁻¹(b`).

Similarly one checks that φ⁻¹(αa`) = αφ⁻¹(a`). So φ⁻¹ is an isomorphism.

The property is proven.

4. Under an isomorphism a linearly independent system goes to a linearly independent system. Proof. Let φ: L → L` be an isomorphism and a1, a2, …, as (2) a linearly independent system. We must

prove that φ(a1), φ(a2), …, φ(as) (7) is also linearly independent.

Suppose that (7) is linearly dependent. Then under the mapping φ⁻¹ it goes to the system a1, …, as.

By property 3, φ⁻¹ is an isomorphism, and then by property 2 the system (2) would also be linearly dependent, which contradicts the hypothesis. Therefore our assumption is false.

The property is proven.

5. Under an isomorphism a basis of any system of vectors goes to a basis of the system of its images. Proof. Let a1, a2, …, as, … (8) be a finite or infinite system of vectors of the linear

space L and let φ: L → L` be an isomorphism. Let system (8) have a basis ai1, …, air (9). Let us show that the system

φ(a1), …, φ(ak), … (10) has the basis φ(ai1), …, φ(air) (11).

Since (9) is linearly independent, by property 4 the system (11) is linearly independent. Append to (11) any vector from (10); we obtain: φ(ai1), …, φ(air), φ(aj) (12). Consider the system ai1, …, air, aj (13). It is linearly dependent, since (9) is a basis of system (8). But under the isomorphism (13) goes to (12). Since (13) is linearly dependent, by property 2 the system (12) is also linearly dependent. This means that (11) is a basis of system (10).

Applying property 5 to an entire finite-dimensional linear space L, we obtain

Statement 1. Let L be an n-dimensional linear space over the field P and φ: L → L` an isomorphism. Then L` is also a finite-dimensional space, and dim L` = dim L = n.

In particular, the following Statement 2 is true: if finite-dimensional linear spaces are isomorphic, then their dimensions are equal.

Comment. In §7 the validity of the converse to this statement will also be established.

§ 7. Vector coordinates

Let L be a finite-dimensional linear space over the field P and let e1, …, en (1) be some basis of L.

Definition 11. Let a ∈ L. Express the vector a through basis (1), i.e. a = α1e1 + … + αnen (2), αi ∈ P (i = 1, …, n). The column (α1, …, αn)ᵗ (3) is called the coordinate column of the vector a in basis (1).

The coordinate column of the vector a in the basis e is also denoted by [a], [a]e, or [α1, …, αn].

As in analytic geometry, one proves the uniqueness of the expression of a vector through the basis, i.e. the uniqueness of the coordinate column of a vector in a given basis.

Note 1. In some textbooks, coordinate rows are considered instead of coordinate columns. In that case the formulas obtained there look different from those written in the language of coordinate columns.

Theorem 4. Let L be an n-dimensional linear space over the field P and (1) some basis of L. Consider the mapping φ: a → (α1, …, αn)ᵗ, which associates with every vector a from L its coordinate column in basis (1). Then φ is an isomorphism of the spaces L and P(n) (P(n) being the n-dimensional arithmetic space of column vectors).

Proof. The mapping φ is well defined by the uniqueness of the coordinates of a vector. It is easy to check that φ is a bijection and that φ(αa) = αφ(a), φ(a) + φ(b) = φ(a + b). This means that φ is an isomorphism.

The theorem is proven.

Corollary 1. A system of vectors a1, a2, …, as of a finite-dimensional linear space L is linearly dependent if and only if the system consisting of the coordinate columns of these vectors in some basis of the space L is linearly dependent.

The validity of this statement follows from Theorem 4 and the second and fourth properties of isomorphisms. Remark 2. Corollary 1 allows us to reduce the study of the linear dependence of systems of vectors

in a finite-dimensional linear space to the same question for the columns of a certain matrix.

Theorem 5 (criterion for isomorphism of finite-dimensional linear spaces). Two finite-dimensional linear spaces L and L` over the same field P are isomorphic if and only if they have the same dimension.

Necessity. Let L ≅ L`. By Statement 2 of §6, the dimension of L coincides with the dimension of L`.

Sufficiency. Let dim L = dim L` = n. Then, by Theorem 4, we have: L ≅ P(n)

and L` ≅ P(n). From here

it is not difficult to obtain that L ≅ L`.

The theorem is proven.

Note. In what follows, we will often denote an n-dimensional linear space by Ln.

§ 8. Transition matrix

Definition 12. Let two bases be given in the linear space Ln:

e = (e1, …, en) and e` = (e1`, …, en`) (the old and the new one).

Let us expand the vectors of the basis e` in the basis e:

e1` = t11e1 + … + tn1en

…………………..

en` = t1ne1 + … + tnnen. (1)

The matrix

T = | t11 … t1n |
    | …………… |
    | tn1 … tnn |

is called the transition matrix from the basis e to the basis e`.

Note that it is convenient to write equalities (1) in matrix form as follows: e` = eT (2). This equality is equivalent to defining the transition matrix.

Remark 1. Let us formulate the rule for constructing the transition matrix: to construct the transition matrix from a basis e to a basis e`, for all vectors ej` of the new basis e` we must find their coordinate columns in the old basis e and write them as the corresponding columns of the matrix T.

Note 2. In some textbooks the transition matrix is composed row by row (from the coordinate rows of the vectors of the new basis in the old one).

Theorem 6. The transition matrix from one basis of the n-dimensional linear space Ln over the field P to its other basis is a non-degenerate matrix of nth order with elements from the field P.

Proof. Let T be the transition matrix from the basis e to the basis e`. By Definition 12, the columns of the matrix T are the coordinate columns of the vectors of the basis e` in the basis e. Since e` is a linearly independent system, by Corollary 1 of Theorem 4 the columns of T are linearly independent, and therefore |T| ≠ 0.

The theorem has been proven.

The converse is also true.

Theorem 7. Any non-degenerate square matrix of the nth order with elements from the field P serves as a transition matrix from one basis of the n-dimensional linear space Ln over the field P to some other basis Ln.

Proof. Let a basis e = (e1, …, en) of the linear space Ln be given, together with a nondegenerate square matrix

T = | t11 … t1n |
    | …………… |
    | tn1 … tnn |

of order n with elements from the field P. In the linear space Ln, consider the ordered system of vectors e` = (e1`, …, en`) for which the columns of the matrix T serve as the coordinate columns in the basis e.

The system of vectors e` consists of n vectors and, by virtue of Corollary 1 of Theorem 4, is linearly independent, since the columns of a non-singular matrix T are linearly independent. Therefore, this system is the basis of the linear space Ln, and due to the choice of system vectors e` the equality e`=eT holds. This means that T is the transition matrix from basis e to basis e`.

The theorem has been proven.

Relationship between the coordinates of vector a in different bases

Let bases e = (e1, …, en) and e` = (e1`, …, en`) be given in the linear space Ln, with transition matrix T from the basis e to the basis e`, i.e. (2) holds. In the bases e and e` the vector a has the coordinate columns [a]e = (α1, …, αn)ᵗ and [a]e` = (α1`, …, αn`)ᵗ, i.e. a = e[a]e and a = e`[a]e`.

Then, on the one hand, a = e[a]e, and on the other, a = e`[a]e` = (eT)[a]e` = e(T[a]e`) (we used equality (2)). From these equalities we obtain: a = e[a]e = e(T[a]e`). Hence, by the uniqueness of the expansion of a vector in a basis,

there follows the equality [a]e = T[a]e` (3), or, in coordinates,

αi = ti1α1` + … + tinαn`, i = 1, …, n. (4)

Relations (3) and (4) are called the coordinate transformation formulas for a change of basis of a linear space. They express the old coordinates of a vector in terms of the new ones. These formulas can be solved for the new coordinates by multiplying (3) on the left by T⁻¹ (such a matrix exists, since T is nondegenerate).

Then we obtain: [a]e` = T⁻¹[a]e. Using this formula, knowing the coordinates of a vector in the old basis e of the linear space Ln, one can find its coordinates in the new basis e`.
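A small worked sketch (Python with numpy; the two bases are our example): the columns of T are the coordinate columns of the new basis vectors in the old basis, and the coordinates transform by [a]e = T[a]e` and [a]e` = T⁻¹[a]e:

```python
import numpy as np

# Old basis e is the standard basis of R^2, so coordinate columns in e
# coincide with the vectors themselves.
e1p = np.array([1.0, 0.0])            # e1' = e1
e2p = np.array([1.0, 1.0])            # e2' = e1 + e2

T = np.column_stack([e1p, e2p])       # transition matrix from e to e'
assert np.linalg.det(T) != 0          # Theorem 6: T is nondegenerate

a_old = np.array([3.0, 2.0])          # [a]_e
a_new = np.linalg.solve(T, a_old)     # [a]_e' = T^{-1} [a]_e  -> (1, 2)
assert np.allclose(T @ a_new, a_old)  # back again: [a]_e = T [a]_e'
print(a_new)
```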

§ 9. Subspaces of linear space

Definition 13. Let L be a linear space over the field P and H ⊆ L. If H is itself a linear space over P with respect to the same operations as in L, then H is called a subspace of the linear space L.

Statement 1. A subset H of a linear space L over a field P is a subspace of L if the following conditions are satisfied:

1. h1 + h2 ∈ H for any h1, h2 ∈ H;

2. αh ∈ H for any h ∈ H and α ∈ P.

Proof. If conditions 1 and 2 hold in H, then addition and multiplication by elements of the field P are defined on H. The validity of most of the axioms of a linear space for H follows from their validity for L. Let us check some of them:

a) 0 = 0·h ∈ H (by condition 2);

b) for every h ∈ H we have (−h) = (−1)h ∈ H (by condition 2).

The statement has been proven.

1. The subspaces of any linear space L are {0} and L.

2. R1 is a subspace of the space R2 of segment vectors in the plane.

3. The space of functions of a real variable has, in particular, the following subspaces:

a) the linear functions of the form ax + b;

b) the continuous functions; c) the differentiable functions.

One universal way of producing subspaces of any linear space is connected with the concept of a linear hull.

Definition 14. Let a1, …, as (1) be an arbitrary finite system of vectors of a linear space L. The linear hull of this system is the set {α1a1 + … + αsas | αi ∈ P} = ⟨a1, …, as⟩. The linear hull of system (1) is also denoted L(a1, …, as).

Theorem 8. The linear hull H of any finite system of vectors (1) of a linear space L is a finite-dimensional subspace of the linear space L. A basis of system (1) is also a basis of H, and the dimension of H is equal to the rank of system (1).

Proof. Let H = ⟨a1, …, as⟩. From the definition of the linear hull it follows easily that conditions 1 and 2 of Statement 1 are satisfied. By virtue of that statement, H is a subspace of the linear space L. Let ai1, …, air (2) be a basis of system (1). Then any vector h ∈ H is linearly expressed through (1) — by the definition of the linear hull — and (1) is linearly expressed through its basis (2). Since (2) is a linearly independent system, it is a basis of H. But the number of vectors in (2) equals the rank of system (1). Hence dim H = r.

Remark 1. If H is a finite-dimensional subspace of a linear space L and h1, …, hm is a basis of H, then it is easy to see that H = ⟨h1, …, hm⟩. Hence linear hulls are a universal way of constructing finite-dimensional subspaces of linear spaces.

Definition 15. Let A and B be two subspaces of a linear space L over a field P. Their sum A + B is the following set: A + B = {a + b | a ∈ A, b ∈ B}.

Example. R2 is the sum of the subspaces OX (the vectors of the OX axis) and OY (the vectors of the OY axis). It is easy to prove the following

Statement 2. The sum and the intersection of two subspaces of a linear space L are subspaces of L (it suffices to check conditions 1 and 2 of Statement 1).

The following theorem holds.

Theorem 9. If A and B are two finite-dimensional subspaces of a linear space L, then dim(A + B) = dim A + dim B − dim(A ∩ B).

The proof of this theorem can be found in standard textbooks.

Remark 2. Let A and B be two finite-dimensional subspaces of a linear space L. To find their sum A + B, it is convenient to use the representation of A and B as linear hulls. Let A = ⟨a1, …, am⟩, B = ⟨b1, …, bs⟩. Then it is easy to show that A + B = ⟨a1, …, am, b1, …, bs⟩. The dimension of A + B, by Theorem 8 proven above, equals the rank of the system a1, …, am, b1, …, bs. Therefore, finding a basis of this system, we also find dim(A + B).
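A sketch of this recipe (Python with numpy; the generating vectors are ours): dim(A + B) is the rank of the combined generating system, and Theorem 9 then yields dim(A ∩ B):

```python
import numpy as np

a1, a2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])   # A = <a1, a2>
b1, b2 = np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])   # B = <b1, b2>

dim_A = np.linalg.matrix_rank(np.column_stack([a1, a2]))
dim_B = np.linalg.matrix_rank(np.column_stack([b1, b2]))
dim_sum = np.linalg.matrix_rank(np.column_stack([a1, a2, b1, b2]))
dim_int = dim_A + dim_B - dim_sum    # Theorem 9: dim(A ∩ B)
print(dim_sum, dim_int)              # 3 1
```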