Linear hull is the intersection M of all subspaces L containing X.

The linear hull is also called the subspace generated by X; it is usually denoted L(X). It is also said that the linear hull is spanned by the set X.



This article describes the basics of linear algebra: linear spaces and their properties, the concept of a basis, the dimension of a space, the linear hull, and the connection between linear spaces and the rank of matrices.

Linear space

A set L is called a linear space if for all its elements the operations of addition of two elements and multiplication of an element by a number are defined and satisfy Group I of Weyl's axioms. The elements of a linear space are called vectors. This is a complete definition; more briefly, we can say that a linear space is a set of elements for which the operations of adding two elements and multiplying an element by a number are defined.

Weyl's axioms.

Hermann Weyl suggested that in geometry we have two types of objects (vectors and points), whose properties are described by the following axioms, which formed the basis of the subject of linear algebra. It is convenient to divide the axioms into three groups.

Group I

  1. for any vectors x and y the equality x + y = y + x holds;
  2. for any vectors x, y and z the equality x + (y + z) = (x + y) + z holds;
  3. there is a vector o such that for any vector x the equality x + o = x holds;
  4. for any vector x there is a vector (−x) such that x + (−x) = o;
  5. for any vector x the equality 1x = x holds;
  6. for any vectors x and y and any number λ the equality λ(x + y) = λx + λy holds;
  7. for any vector x and any numbers λ and μ the equality (λ + μ)x = λx + μx holds;
  8. for any vector x and any numbers λ and μ the equality λ(μx) = (λμ)x holds.

Group II

Group I defines the concepts of a linear combination of vectors, linear dependence, and linear independence. This allows us to formulate two more axioms:

  1. there exist n linearly independent vectors;
  2. any n + 1 vectors are linearly dependent.

For planimetry n=2, for stereometry n=3.

Group III

This group assumes that there is a scalar multiplication operation that assigns to a pair of vectors x and y a number (x, y). Moreover:

  1. for any vectors x and y the equality (x, y) = (y, x) holds;
  2. for any vectors x, y and z the equality (x + y, z) = (x, z) + (y, z) holds;
  3. for any vectors x and y and any number λ the equality (λx, y) = λ(x, y) holds;
  4. for any vector x the inequality (x, x) ≥ 0 holds, and (x, x) = 0 if and only if x = o.

Properties of linear space

The basic properties of a linear space follow from Weyl's axioms:

  1. The vector o, whose existence is guaranteed by Axiom 3, is determined uniquely;
  2. The vector (−x), whose existence is guaranteed by Axiom 4, is determined uniquely;
  3. For any two vectors a and b belonging to a space L, there is exactly one vector x, also belonging to L, that solves the equation a + x = b; it is called the difference b − a.

Definition. A subset L′ of a linear space L is called a linear subspace of L if it is itself a linear space in which the sum of vectors and the product of a vector by a number are defined in the same way as in L.

Definition. The linear hull L(x1, x2, …, xk) of vectors x1, x2, …, xk is the set of all linear combinations of these vectors. About the linear hull we can say that

– the linear hull is a linear subspace;

– the linear hull is the minimal linear subspace containing the vectors x1, x2, …, xk.
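To make the definition concrete, here is a small numerical sketch (Python with NumPy; the vectors and test points are made up for the example): a vector b belongs to the linear hull L(x1, …, xk) exactly when appending b to the system does not increase the rank of the matrix whose columns are the xi.

```python
import numpy as np

def in_hull(vectors, b, tol=1e-10):
    """Check whether b is a linear combination of the given vectors,
    i.e. whether b lies in their linear hull."""
    A = np.column_stack(vectors)          # columns span the hull
    rank_A = np.linalg.matrix_rank(A, tol)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]), tol)
    return rank_A == rank_Ab              # rank unchanged => b in hull

x1, x2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
print(in_hull([x1, x2], np.array([2.0, 3.0, 5.0])))   # True: 2*x1 + 3*x2
print(in_hull([x1, x2], np.array([0.0, 0.0, 1.0])))   # False: not in the plane
```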

Definition. A linear space is called n-dimensional if it satisfies Group II of the Weyl axiom system. The number n is called the dimension of the linear space, and one writes dim L = n.

A basis is any ordered system of n linearly independent vectors of the space. The point of a basis is that any vector of the space can be described in terms of the vectors that make up the basis.

Theorem. Any n linearly independent vectors of an n-dimensional space L form a basis.

Isomorphism.

Definition. Linear spaces L and L′ are called isomorphic if a one-to-one correspondence x ↔ x′ can be established between their elements such that:

  1. if x ↔ x′ and y ↔ y′, then x + y ↔ x′ + y′;
  2. if x ↔ x′, then λx ↔ λx′.

This correspondence itself is called an isomorphism. Isomorphism allows us to make the following statements:

  • if two spaces are isomorphic, then their dimensions are equal;
  • any two linear spaces over the same field and of the same dimension are isomorphic.
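For example, the space of polynomials of degree at most 2 and the space R³ are both three-dimensional over the field of real numbers, so by the last statement they are isomorphic; the correspondence sends a polynomial to its vector of coefficients. A minimal sketch (Python with NumPy; the sample polynomials are made up):

```python
import numpy as np
from numpy.polynomial import Polynomial

def to_coords(p):
    """Coordinate map: a0 + a1*t + a2*t^2  |->  (a0, a1, a2) in R^3."""
    c = np.zeros(3)
    c[: len(p.coef)] = p.coef
    return c

p = Polynomial([1.0, 2.0])         # 1 + 2t
q = Polynomial([0.0, 0.0, 3.0])    # 3t^2

# The correspondence preserves both operations, so it is an isomorphism:
assert np.allclose(to_coords(p + q), to_coords(p) + to_coords(q))
assert np.allclose(to_coords(5.0 * p), 5.0 * to_coords(p))
print("p + q has coordinates", to_coords(p + q))   # [1. 2. 3.]
```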

A vector (or linear) space is a mathematical structure: a set of elements, called vectors, for which the operations of addition with each other and of multiplication by a number (a scalar) are defined. These operations are subject to eight axioms. The scalars can be elements of the real, complex, or any other number field. A special case of such a space is the ordinary three-dimensional Euclidean space, whose vectors are used, for example, to represent physical forces. It should be noted that a vector, as an element of a vector space, does not have to be given as a directed segment. Generalizing the concept of a "vector" to an element of a vector space of any nature not only does not cause confusion of terms, but also makes it possible to understand or even predict a number of results valid for spaces of arbitrary nature.

Vector spaces are the subject of linear algebra. One of the main characteristics of a vector space is its dimension. The dimension is the maximum number of linearly independent elements of the space, that is, resorting to a rough geometric interpretation, the number of directions inexpressible through each other using only the operations of addition and multiplication by a scalar. A vector space can be endowed with additional structures, such as a norm or an inner product. Such spaces appear naturally in mathematical analysis, primarily in the form of infinite-dimensional function spaces, where the functions play the role of vectors. Many problems of analysis require finding out whether a sequence of vectors converges to a given vector. Consideration of such questions is possible in vector spaces with additional structure, in most cases a suitable topology, which allows one to define the concepts of proximity and continuity. Such topological vector spaces, in particular Banach and Hilbert spaces, admit deeper study.

The first works that anticipated the introduction of the concept of a vector space date back to the 17th century. It was then that analytic geometry, the theory of matrices, systems of linear equations, and Euclidean vectors began to develop.

Definition

A linear or vector space V(F) over a field F is an ordered quadruple (V, F, +, ·), where

  • V is a non-empty set of elements of arbitrary nature, which are called vectors;
  • F is a field whose elements are called scalars;
  • an operation of vector addition V × V → V is defined, which associates with each pair of elements x, y of the set V a unique element of V called their sum and denoted x + y;
  • an operation of multiplication of vectors by scalars F × V → V is defined, which associates with each element λ of the field F and each element x of the set V a unique element of V, denoted λ·x or λx.

Vector spaces defined on the same set of elements but over different fields are different vector spaces (for example, the set of pairs of real numbers R² can be a two-dimensional vector space over the field of real numbers, or a one-dimensional one over the field of complex numbers).
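A quick sketch of the parenthetical example (Python; identifying the pair (a, b) with the complex number a + bi): over C a single basis vector already spans all of R², which is why the same set is one-dimensional over the larger field.

```python
# Identify the pair (a, b) in R^2 with the complex number a + bi.
# Over R we need two basis vectors: (a, b) = a*(1, 0) + b*(0, 1).
# Over C the single vector 1 (the pair (1, 0)) suffices:
a, b = 3.0, 4.0
z = complex(a, b)                # the pair (3, 4)
lam = complex(a, b)              # a complex scalar
print(lam * complex(1, 0) == z)  # True: every pair is a C-multiple of 1
```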

The simplest properties

  1. A vector space is an Abelian group under addition.
  2. The neutral element 0 ∈ V is unique, as follows from the group properties.
  3. 0·x = 0 for any x ∈ V.
  4. For any x ∈ V the opposite element −x ∈ V is unique, as follows from the group properties.
  5. 1·x = x for any x ∈ V.
  6. (−α)·x = α·(−x) = −(αx) for any α ∈ F and x ∈ V.
  7. α·0 = 0 for any α ∈ F.

Related definitions and properties

Subspace

Algebraic definition: a linear subspace or vector subspace is a non-empty subset K of a linear space V such that K is itself a linear space with respect to the operations of addition and multiplication by a scalar defined in V. The set of all subspaces is usually denoted Lat(V). For a subset to be a subspace it is necessary and sufficient that:

  1. for any vectors x, y ∈ K, the vector x + y also belongs to K;
  2. for any vector x ∈ K and any scalar α ∈ F, the vector αx also belongs to K.

The last two statements are equivalent to the following:

For all vectors x, y ∈ K, the vector αx + βy also belongs to K for any α, β ∈ F.

In particular, a vector space consisting of only the zero vector is a subspace of any space; every space is a subspace of itself. Subspaces that do not coincide with these two are called proper or non-trivial.

Properties of subspaces

Linear combinations

A finite sum of the form

α1x1 + α2x2 + … + αnxn

is called a linear combination of the vectors x1, x2, …, xn with coefficients α1, α2, …, αn.

Basis. Dimension

Vectors x1, x2, …, xn are called linearly dependent if there is a nontrivial linear combination of them whose value is equal to zero; that is,

α1x1 + α2x2 + … + αnxn = 0

for some coefficients α1, α2, …, αn ∈ F, at least one of which is different from zero.

Otherwise these vectors are called linearly independent.

This definition admits the following generalization: an infinite set of vectors from V is called linearly dependent if some finite subset of it is linearly dependent, and linearly independent if every finite subset of it is linearly independent.
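In coordinates, checking this definition reduces to a rank computation: the vectors are linearly independent exactly when the matrix having them as columns has full column rank. A sketch (Python with NumPy; the example vectors are made up):

```python
import numpy as np

def linearly_independent(vectors, tol=1e-10):
    """Vectors are independent iff no nontrivial combination gives zero,
    i.e. iff the matrix of columns has full column rank."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A, tol) == A.shape[1]

print(linearly_independent([np.array([1.0, 0.0]), np.array([0.0, 1.0])]))  # True
print(linearly_independent([np.array([1.0, 2.0]), np.array([2.0, 4.0])]))  # False
```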

Properties of the basis: any vector x of the space has a unique expansion in the basis vectors x1, x2, …, xn:

x = α1x1 + α2x2 + … + αnxn.

Linear hull

The linear hull of a subset X of a linear space V is the intersection of all subspaces of V containing X.

The linear hull is a subspace of V.

The linear hull is also called the subspace generated by X. It is also said that the linear hull L(X) is the space spanned by the set X.

Let a1, a2, …, ak be a system of vectors from V. The linear hull of this system of vectors is the set of all linear combinations of the vectors of the given system, i.e. the set of all vectors of the form α1a1 + α2a2 + … + αkak.

Properties of a linear hull: if x, y ∈ L(A), then x + y ∈ L(A) and λx ∈ L(A) for any λ.

That is, a linear hull is closed with respect to the linear operations (the operations of addition and multiplication by a number).

A subset of a space that is closed with respect to the operations of addition and multiplication by numbers is called a linear subspace of the space.

The linear hull of a system of vectors is a linear subspace of the space.

A system of vectors from V is called a basis if:

  1. any vector of the space can be expressed as a linear combination of the basis vectors;
  2. the system of vectors is linearly independent.

Lemma. The expansion coefficients of a vector in the basis are determined uniquely.

The vector composed of the expansion coefficients of a vector in the basis is called the coordinate vector of that vector in the given basis.

In notation the basis is indicated explicitly; this emphasizes that the coordinates of the vector depend on the basis.
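Numerically, finding the coordinate vector means solving one linear system: if the basis vectors form the columns of a matrix E, the coordinates c of a vector x in that basis satisfy E·c = x. A sketch (Python with NumPy; the basis is made up):

```python
import numpy as np

E = np.array([[1.0, 1.0],     # basis e1 = (1, 0), e2 = (1, 1) as columns
              [0.0, 1.0]])
x = np.array([3.0, 2.0])

c = np.linalg.solve(E, x)     # coordinates of x in the basis (e1, e2)
print(c)                      # [1. 2.]  since x = 1*e1 + 2*e2
assert np.allclose(E @ c, x)  # the expansion reconstructs x

# In the standard basis the same vector has coordinates (3, 2):
# the coordinates depend on the basis, the vector itself does not.
```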

Linear spaces

Definitions

Let a set V of elements of arbitrary nature be given, and let two operations be defined for the elements of this set: addition and multiplication by an arbitrary real number, with the set closed with respect to these operations (the sum of two elements and any scalar multiple of an element again belong to the set). Let these operations obey the axioms:

  1. x + y = y + x for any x, y (commutativity of addition);
  2. x + (y + z) = (x + y) + z for any x, y, z (associativity of addition);
  3. there is a zero vector o with the property x + o = x for any x;
  4. for each x there is an inverse vector −x with the property x + (−x) = o;
  5. 1·x = x for any x;
  6. λ(x + y) = λx + λy for any x, y and any number λ;
  7. (λ + μ)x = λx + μx for any x and any numbers λ, μ;
  8. λ(μx) = (λμ)x for any x and any numbers λ, μ.

Then such a set is called a linear (vector) space, its elements are called vectors, and, to emphasize their difference from vectors, the numbers are called scalars. A space consisting of only one zero vector is called trivial.

If in axioms 6-8 we allow multiplication by complex scalars, then such a linear space is called complex. To simplify the reasoning, in what follows we will consider only real spaces.

A linear space is a group with respect to the operation of addition, and moreover an Abelian group.

The uniqueness of the zero vector and the uniqueness of the vector inverse to a given vector x are easily proved; the inverse vector is usually denoted −x.

A subset of a linear space that is itself a linear space (that is, closed under addition of vectors and under multiplication by an arbitrary scalar) is called a linear subspace of the space. The trivial subspaces of a linear space are the space itself and the space consisting of the single zero vector.

Example. The space of ordered triples of real numbers (x1, x2, x3) with the operations defined by the equalities:

(x1, x2, x3) + (y1, y2, y3) = (x1 + y1, x2 + y2, x3 + y3), λ(x1, x2, x3) = (λx1, λx2, λx3).

The geometric interpretation is obvious: a vector in space, "tied" to the origin, can be specified by the coordinates of its end. A typical subspace of this space is a plane passing through the origin; more precisely, its elements are the vectors that begin at the origin and end at points of the plane. The closedness of such a set with respect to the addition of vectors and their scaling is obvious.

Based on this geometric interpretation, a vector of an arbitrary linear space is often spoken of as a point in space. Sometimes this point is called the "end of the vector". Apart from the convenience of associative perception, these words are given no formal meaning: the concept of "end of a vector" is absent from the axiomatics of linear space.

Example. Based on the same example, we can give a different interpretation of vector space (embedded, by the way, in the very origin of the word "vector"): it defines the set of "shifts" of the points of space. These shifts, or parallel translations of any spatial figure, are chosen parallel to the plane.

Generally speaking, with such interpretations of the concept of a vector, everything is not so simple. Attempts to appeal to its physical meaning, as an object that has magnitude and direction, provoke a fair rebuke from strict mathematicians. The definition of a vector as an element of a vector space is very reminiscent of the episode with the sepulkas from the famous science-fiction story by Stanisław Lem. Let's not get hung up on formalism, but explore this fuzzy object in its particular manifestations.

Example. A natural generalization is the space Rⁿ: the space of row or column vectors (x1, …, xn). One way to specify a subspace in Rⁿ is to specify a set of constraints.

Example. The set of solutions of a system of linear homogeneous equations

a11x1 + a12x2 + … + a1nxn = 0, …, am1x1 + am2x2 + … + amnxn = 0

forms a linear subspace of the space Rⁿ. In fact, if

x = (x1, …, xn)

is a solution of the system, then

λx = (λx1, …, λxn)

is the same kind of solution for any λ. And if

y = (y1, …, yn)

is another solution of the system, then

x + y = (x1 + y1, …, xn + yn)

will also be a solution.

Why does the set of solutions of a system of inhomogeneous equations not form a linear subspace?
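A sketch of such a solution subspace in coordinates (Python with NumPy; the coefficient matrix is made up): the solution set of A·x = 0 is the null space of A, and a basis for it can be read off the singular value decomposition.

```python
import numpy as np

A = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, -2.0]])   # rank 1: second equation is twice the first

# Rows of Vt whose singular values vanish span the solution set of A x = 0.
U, s, Vt = np.linalg.svd(A)
rank = int((s > 1e-10).sum())
null_basis = Vt[rank:]              # here: a 2-dimensional subspace of R^3

for v in null_basis:
    assert np.allclose(A @ v, 0)    # each basis vector solves the system
print(null_basis.shape[0], "basis vectors of the solution subspace")
```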

Example. Generalizing further, we can consider the space of "infinite" rows, i.e. sequences (x1, x2, …); this is a usual object of mathematical analysis, arising when sequences and series are considered. One can also consider rows (sequences) "infinite in both directions"; they are used in SIGNAL THEORY.

Example. The set of m×n matrices with real elements, with the operations of matrix addition and multiplication by real numbers, forms a linear space.

In the space of square matrices of order n, two subspaces can be distinguished: the subspace of symmetric matrices and the subspace of skew-symmetric matrices. In addition, each of the following sets forms a subspace: upper triangular, lower triangular, and diagonal matrices.
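The first two subspaces in fact decompose the whole space: every square matrix splits into a symmetric and a skew-symmetric part, A = (A + Aᵀ)/2 + (A − Aᵀ)/2, anticipating the direct sums discussed below. A sketch (Python with NumPy; the matrix is made up):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [5.0, 3.0]])

S = (A + A.T) / 2          # symmetric part: S == S.T
K = (A - A.T) / 2          # skew-symmetric part: K == -K.T

assert np.allclose(S, S.T)
assert np.allclose(K, -K.T)
assert np.allclose(S + K, A)   # the decomposition recovers A
print(S, K, sep="\n")
```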

Example. The set of polynomials of one variable of degree exactly equal to n with coefficients from A (where A is any of the sets R or C), with the usual operations of addition of polynomials and multiplication by a number from A, does not form a linear space. Why? Because it is not closed under addition: the sum of two polynomials of degree n may fail to be a polynomial of degree n. But the set of polynomials of degree at most n does form a linear space; only to this set we must also add the identically zero polynomial. The obvious subspaces are the sets of polynomials of degree at most k for k < n. In addition, the set of even and the set of odd polynomials of degree at most n are subspaces. The set of all possible polynomials (without restrictions on degree) also forms a linear space.
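The failure of closure is easy to exhibit numerically (Python with NumPy; the polynomials are made up): the sum of two polynomials of degree exactly 2 can drop to degree 1, leaving the set of degree-exactly-2 polynomials while remaining inside the set of polynomials of degree at most 2.

```python
from numpy.polynomial import Polynomial

p = Polynomial([0.0, 1.0, 1.0])    # t + t^2, degree exactly 2
q = Polynomial([0.0, 0.0, -1.0])   # -t^2, degree exactly 2

s = (p + q).trim()                 # = t: the leading terms cancel
print(s.degree())                  # 1 -- outside "degree exactly 2",
                                   # but still inside "degree at most 2"
```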

Example. A generalization of the previous case is the space of polynomials of several variables of degree at most n with coefficients from A. For example, the set of linear polynomials

a0 + a1x1 + … + amxm

forms a linear space. The set of homogeneous polynomials (forms) of degree n (with the identically zero polynomial added to this set) is also a linear space.

In terms of the above definition, the set of rows with integer components, considered with respect to the operations of componentwise addition and multiplication by integer scalars, is not a linear space (the scalars must come from a field, and the integers do not form a field). However, all of axioms 1-8 will be satisfied if we allow multiplication by integer scalars only. In this section we will not focus on this object, but it is quite useful in discrete mathematics, for example in CODING THEORY. Linear spaces over finite fields are considered separately.

The space of quadratic forms in n variables is isomorphic to the space of symmetric matrices of order n. The isomorphism is established by the correspondence between a form and its coefficient matrix; for n = 2 the form a11x1² + 2a12x1x2 + a22x2² corresponds to the symmetric matrix with rows (a11, a12) and (a12, a22).

The concept of isomorphism is introduced in order to study objects that arise in different areas of algebra but have "similar" properties of operations, using one sample space, working out results on it that can then be cheaply replicated. Which linear space exactly should we take "as a sample"? See the end of the next paragraph.

Let A = a1, a2, …, ak be a system of vectors from a vector space V over a field P.

Definition 2. The linear hull L of a system A is the set of all linear combinations of vectors of the system A. Notation: L(A).

It can be shown that for any two systems A and B:

A is linearly expressed through B if and only if L(A) ⊆ L(B). (1)

A is equivalent to B if and only if L(A) = L(B). (2)

The proof of (2) follows from the previous property.

Property 3. The linear hull of any system of vectors is a subspace of the space V.

Proof.

Take any two vectors x and y from L(A), with the following expansions in the vectors of A: x = α1a1 + … + αkak, y = β1a1 + … + βkak. Let us check that conditions 1) and 2) of the subspace criterion hold:

x + y = (α1 + β1)a1 + … + (αk + βk)ak belongs to L(A), since it is a linear combination of the vectors of the system A.

λx = (λα1)a1 + … + (λαk)ak belongs to L(A), since it is also a linear combination of the vectors of the system A. ∎

Let us now consider an m×n matrix A. The linear hull of the rows of the matrix A is called the row space of the matrix and is denoted Lr(A). The linear hull of the columns of the matrix A is called the column space and is denoted Lc(A). Note that the row space and the column space of the matrix A are subspaces of different arithmetic spaces, Pⁿ and Pᵐ respectively. Using statement (2), we can come to the following conclusion:

Theorem 3. If one matrix is obtained from another by a chain of elementary transformations, then the row spaces of such matrices coincide.
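A numerical check of Theorem 3 (Python with NumPy; the matrix and the transformations are made up): two matrices have the same row space exactly when each has the same rank as the two of them stacked together.

```python
import numpy as np

def same_row_space(A, B, tol=1e-10):
    """Row spaces coincide iff stacking the matrices does not raise the rank."""
    r = np.linalg.matrix_rank(np.vstack([A, B]), tol)
    return r == np.linalg.matrix_rank(A, tol) == np.linalg.matrix_rank(B, tol)

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])

B = A.copy()
B[0] += 3 * B[1]        # elementary transformation: add 3 x (row 2) to row 1
B[1] *= -2.0            # elementary transformation: scale row 2

print(same_row_space(A, B))   # True: elementary row operations preserve Lr(A)
```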

Sum and intersection of subspaces

Let L and M be two subspaces of a space R.

The sum L + M is the set of vectors x + y, where x ∈ L and y ∈ M. Obviously, any linear combination of vectors from L + M belongs to L + M; hence L + M is a subspace of the space R (which may coincide with R itself).

The intersection L ∩ M of the subspaces L and M is the set of vectors that belong simultaneously to the subspaces L and M (it may consist only of the zero vector).

Theorem 6.1. The sum of the dimensions of arbitrary subspaces L and M of a finite-dimensional linear space R is equal to the dimension of the sum of these subspaces plus the dimension of their intersection:

dim L+dim M=dim(L+M)+dim(L∩M).

Proof. Denote F = L + M and G = L ∩ M, and let G be a g-dimensional subspace. Choose a basis e1, …, eg in it. Since G ⊆ L and G ⊆ M, this basis of G can be extended to a basis of L and to a basis of M. Let e1, …, eg, f1, …, fl be the basis of the subspace L, and e1, …, eg, h1, …, hm the basis of the subspace M. Let us show that the vectors

e1, …, eg, f1, …, fl, h1, …, hm (6.1)

constitute a basis of F = L + M. For the vectors (6.1) to form a basis of the space F, they must be linearly independent, and any vector of the space F must be representable as a linear combination of the vectors (6.1).

Let us prove the linear independence of the vectors (6.1). Let the zero vector of the space F be represented as a linear combination of the vectors (6.1) with some coefficients:

α1e1 + … + αgeg + β1f1 + … + βlfl + γ1h1 + … + γmhm = 0, (6.2)

whence

α1e1 + … + αgeg + β1f1 + … + βlfl = −(γ1h1 + … + γmhm). (6.3)

The left side of (6.3) is a vector of the subspace L, and the right side is a vector of the subspace M. Therefore the vector

v = −(γ1h1 + … + γmhm) (6.4)

belongs to the subspace G = L ∩ M. On the other hand, the vector v can be represented as a linear combination of the basis vectors of the subspace G:

v = δ1e1 + … + δgeg. (6.5)

From equations (6.4) and (6.5) we have:

δ1e1 + … + δgeg + γ1h1 + … + γmhm = 0.

But the vectors e1, …, eg, h1, …, hm form the basis of the subspace M, therefore they are linearly independent, and all δi = 0 and γi = 0. Then (6.2) takes the form:

α1e1 + … + αgeg + β1f1 + … + βlfl = 0.

Due to the linear independence of the basis of the subspace L we have αi = 0 and βi = 0.

Since all the coefficients in equation (6.2) turned out to be zero, the vectors (6.1) are linearly independent. Moreover, any vector z from F (by the definition of the sum of subspaces) can be represented as a sum x + y, where x ∈ L, y ∈ M. In turn, x is a linear combination of the vectors e1, …, eg, f1, …, fl, and y is a linear combination of the vectors e1, …, eg, h1, …, hm. Hence the vectors (6.1) generate the subspace F. We have found that the vectors (6.1) form a basis of F = L + M.

Counting the basis vectors of the subspaces L and M and of the subspace F = L + M, we have: dim L = g + l, dim M = g + m, dim(L + M) = g + l + m. Hence:

dim L + dim M − dim(L ∩ M) = dim(L + M). ∎
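A numerical illustration of Theorem 6.1 (Python with NumPy; the subspaces are made up, given by bases with independent columns): dim(L + M) is the rank of the concatenated bases, and dim(L ∩ M) can be computed from the null space of [BL | −BM], since each null vector (a, b) encodes one common vector BL·a = BM·b.

```python
import numpy as np

rank = lambda X: np.linalg.matrix_rank(X)

BL = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0],
               [0.0, 0.0]])      # L = the x1-x2 coordinate plane in R^4
BM = np.array([[0.0, 0.0],
               [1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])      # M = the x2-x3 coordinate plane in R^4

dim_L, dim_M = rank(BL), rank(BM)
dim_sum = rank(np.hstack([BL, BM]))          # dim(L + M)

# Each null vector (a, b) of [BL | -BM] encodes a common vector BL @ a = BM @ b,
# and this encoding is one-to-one when both bases have independent columns.
C = np.hstack([BL, -BM])
dim_int = C.shape[1] - rank(C)               # dim(L ∩ M)

print(dim_L + dim_M == dim_sum + dim_int)    # True: 2 + 2 == 3 + 1
```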

Direct sum of subspaces

Definition 6.2. A space F is the direct sum of subspaces L and M if each vector x of the space F can be represented in exactly one way as a sum x = y + z, where y ∈ L and z ∈ M.

The direct sum is denoted L ⊕ M. If F = L ⊕ M, one says that F decomposes into the direct sum of its subspaces L and M.

Theorem 6.2. For an n-dimensional space R to be the direct sum of subspaces L and M, it is sufficient that the intersection of L and M contain only the zero element and that the dimension of R be equal to the sum of the dimensions of the subspaces L and M.

Proof. Choose some basis e1, …, el in the subspace L and some basis f1, …, fm in the subspace M. Let us prove that

e1, …, el, f1, …, fm (6.11)

is a basis of the space R. By the hypothesis of the theorem, the dimension n of the space R is equal to the sum of the dimensions of the subspaces L and M (n = l + m). It is therefore enough to prove the linear independence of the vectors (6.11). Let the zero vector of the space R be represented as a linear combination of the vectors (6.11) with some coefficients:

α1e1 + … + αlel + β1f1 + … + βmfm = 0, (6.12)

whence

α1e1 + … + αlel = −(β1f1 + … + βmfm). (6.13)

Since the left side of (6.13) is a vector of the subspace L, the right side is a vector of the subspace M, and L ∩ M = 0, we get

α1e1 + … + αlel = 0, β1f1 + … + βmfm = 0. (6.14)

But the vectors e1, …, el and f1, …, fm are the bases of the subspaces L and M respectively. Therefore they are linearly independent, and

α1 = … = αl = β1 = … = βm = 0. (6.15)

It has been established that (6.12) holds only under condition (6.15), and this proves the linear independence of the vectors (6.11). Therefore they form a basis in R.

Let x ∈ R. Expand it in the basis (6.11):

x = α1e1 + … + αlel + β1f1 + … + βmfm. (6.16)

Setting

x1 = α1e1 + … + αlel ∈ L, (6.17)

x2 = β1f1 + … + βmfm ∈ M, (6.18)

from (6.16) we have x = x1 + x2. From (6.17) and (6.18) it follows that any vector from R can be represented as a sum of vectors x1 ∈ L and x2 ∈ M. It remains to prove that this representation is unique. Let, in addition to the representation x = x1 + x2, there be another representation:

x = y1 + y2, y1 ∈ L, y2 ∈ M. (6.19)

Subtracting (6.19) from the previous representation, we obtain

x1 − y1 = y2 − x2. (6.20)

Since x1 − y1 ∈ L, y2 − x2 ∈ M, and L ∩ M = 0, we get x1 − y1 = 0 and y2 − x2 = 0. Hence x1 = y1 and x2 = y2. ∎
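A companion sketch for Theorem 6.2 (Python with NumPy; the bases are made up): once the combined bases (6.11) are independent and their count equals the dimension, the decomposition x = x1 + x2 is obtained from one linear solve, and its uniqueness is exactly the uniqueness of the expansion coefficients.

```python
import numpy as np

BL = np.array([[1.0], [0.0], [0.0]])            # basis of L in R^3
BM = np.array([[0.0, 0.0],
               [1.0, 0.0],
               [0.0, 1.0]])                     # basis of M in R^3

E = np.hstack([BL, BM])                          # combined system (6.11)
assert np.linalg.matrix_rank(E) == 3             # independent => R^3 = L (+) M

x = np.array([5.0, -2.0, 7.0])
c = np.linalg.solve(E, x)                        # expansion coefficients, unique
x1 = BL @ c[:1]                                  # component in L
x2 = BM @ c[1:]                                  # component in M
assert np.allclose(x1 + x2, x)
print(x1, x2)
```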

Theorem 8.4 (on the dimension of the sum of subspaces). If L1 and L2 are subspaces of a finite-dimensional linear space, then the dimension of the sum of the subspaces is equal to the sum of their dimensions minus the dimension of their intersection (Grassmann's formula):

dim(L1 + L2) = dim L1 + dim L2 − dim(L1 ∩ L2). (8.13)

In fact, let e1, …, eg be a basis of the intersection L1 ∩ L2. Let us supplement it with an ordered set of vectors f1, …, fk up to a basis of the subspace L1, and with an ordered set of vectors h1, …, hr up to a basis of the subspace L2. Such a supplement is possible by Theorem 8.2. From these three sets of vectors let us form the ordered set e1, …, eg, f1, …, fk, h1, …, hr. These vectors are generators of the space L1 + L2: indeed, any vector of this space is represented as a linear combination of the vectors of this ordered set.

Hence the set generates L1 + L2. To prove that these generators are linearly independent, and therefore form a basis of the space L1 + L2, form a linear combination of these vectors and equate it to the zero vector; repeating the argument from the proof of Theorem 6.1, one shows that all coefficients of such an expansion are zero.

The orthogonal complement of a subspace L of a vector space with a bilinear form is the set of all vectors orthogonal to each vector from L. This set is a vector subspace, which is usually denoted L⊥.
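For the standard inner product on Rⁿ this set is straightforward to compute (Python with NumPy; the subspace is made up): v is orthogonal to every vector of L exactly when Bᵀ·v = 0, where the columns of B span L, so L⊥ is the null space of Bᵀ.

```python
import numpy as np

B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])              # columns span L, a plane in R^3

# v is orthogonal to all of L  <=>  B.T @ v = 0, so L-perp = null space of B.T.
U, s, Vt = np.linalg.svd(B.T)
r = int((s > 1e-10).sum())
perp_basis = Vt[r:]                      # basis of the orthogonal complement

for v in perp_basis:
    assert np.allclose(B.T @ v, 0)      # v is orthogonal to every column of B
print(perp_basis)                        # one vector, ~ (1, 1, -1)/sqrt(3) up to sign
```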