Let's continue the conversation about operations on matrices. In this lecture you will learn how to find the inverse of a matrix. You will learn it even if math is hard for you.

What is an inverse matrix? Here we can draw an analogy with reciprocal numbers: consider, for example, the number 5 and its reciprocal 1/5. The product of these numbers equals one: 5 · (1/5) = 1. Everything is similar with matrices! The product of a matrix and its inverse equals E, the identity matrix, which is the matrix analogue of the number one. However, first things first: let's start with the important practical question and learn how to find this inverse matrix.

What do you need to know and be able to do to find an inverse matrix? You must be able to calculate determinants, and you must understand what a matrix is and be able to perform basic operations on matrices.

There are two main methods for finding the inverse matrix:
using algebraic complements (cofactors) and using elementary transformations.

Today we will study the first, simpler method.

Let's start with the scariest and most incomprehensible part. Consider a square matrix A. The inverse matrix can be found using the formula

A⁻¹ = (1/det A) · (A*)ᵀ,

where det A is the determinant of the matrix and (A*)ᵀ is the transposed matrix of algebraic complements (cofactors) of the corresponding elements of the matrix.

The concept of an inverse matrix exists only for square matrices: "two by two", "three by three", and so on.

Notation: as you may have already noticed, the inverse matrix is denoted by the superscript −1: the inverse of A is written A⁻¹.

Let's start with the simplest case, a two-by-two matrix. In practice, of course, "three by three" is required most often, but I nevertheless strongly recommend studying the simpler problem first in order to master the general principle of the solution.

Example:

Find the inverse of a matrix

Let's solve it. It is convenient to break the solution down into steps.

1) First we find the determinant of the matrix.

If your grasp of this operation is shaky, read the article How to calculate the determinant?

Important! If the determinant of the matrix is equal to ZERO, the inverse matrix DOES NOT EXIST.

In the example under consideration the determinant turned out to be nonzero, which means everything is in order.

2) Find the matrix of minors.

To solve our problem, it is not necessary to know what a minor is, however, it is advisable to read the article How to calculate the determinant.

The matrix of minors has the same dimensions as the original matrix, in this case two by two.
All that remains is to find four numbers and put them in place of the asterisks.

Let's return to our matrix
Let's look at the top left element first:

How do we find its minor?
Like this: MENTALLY cross out the row and column containing this element:

The remaining number is the minor of this element, which we write into our matrix of minors:

Consider the following matrix element:

Mentally cross out the row and column in which this element appears:

What remains is the minor of this element, which we write in our matrix:

Similarly, we consider the elements of the second row and find their minors:


Ready.

3) Find the matrix of algebraic complements.

It's simple: in the matrix of minors you need to CHANGE THE SIGNS of two numbers:

These are the numbers that I circled!

– the matrix of algebraic complements of the corresponding elements of the matrix.

And that's all there is to it.

4) Find the transposed matrix of algebraic complements.

– transposed matrix of algebraic complements of the corresponding elements of the matrix.

5) Answer.

Let's recall our formula: everything in it has now been found!

So the inverse matrix is:

It is better to leave the answer in this form. There is NO NEED to divide each element of the matrix by 2, since this produces fractional numbers. This nuance is discussed in more detail in the article Operations with matrices.

How to check the solution?

You need to perform the matrix multiplication A·A⁻¹ or A⁻¹·A.

Examination:

The result is the already mentioned identity matrix: a matrix with ones on the main diagonal and zeros everywhere else.

Thus, the inverse matrix is ​​found correctly.

If you carry out the multiplication in the other order, the result will also be the identity matrix. This is one of the few cases where matrix multiplication is commutative; more details can be found in the article Properties of operations on matrices. Matrix expressions. Also note that during the check, the constant (the fraction) is factored out and handled at the very end, after the matrix multiplication. This is a standard technique.
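The check described above is easy to carry out numerically. A minimal sketch with NumPy; the 2×2 matrix here is hypothetical, since the concrete matrix of the example did not survive, so it only illustrates the swap-and-negate rule and the two-sided check:

```python
import numpy as np

# Hypothetical 2x2 matrix (the example's own matrix was lost in extraction)
A = np.array([[2.0, 1.0],
              [4.0, 3.0]])

det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # ad - bc
# Swap the diagonal, negate the off-diagonal, divide by the determinant
A_inv = np.array([[ A[1, 1], -A[0, 1]],
                  [-A[1, 0],  A[0, 0]]]) / det

# The check: both products must give the identity matrix
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))
```

Both assertions pass, confirming that for this matrix A·A⁻¹ = A⁻¹·A = E.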

Let's move on to a more common case in practice - the three-by-three matrix:

Example:

Find the inverse of a matrix

The algorithm is exactly the same as for the “two by two” case.

We find the inverse matrix using the formula A⁻¹ = (1/det A) · (A*)ᵀ, where (A*)ᵀ is the transposed matrix of algebraic complements of the corresponding elements of the matrix.

1) Find the determinant of the matrix.


Here the determinant is expanded along the first row.

Also, don't forget that det A ≠ 0, which means everything is fine: the inverse matrix exists.

2) Find the matrix of minors.

The matrix of minors has dimensions "three by three", and we need to find nine numbers.

I'll look at a couple of minors in detail:

Consider the following matrix element:

MENTALLY cross out the row and column in which this element is located:

We write the remaining four numbers into a "two by two" determinant.

This two-by-two determinant is the minor of the element. It needs to be calculated:


That’s it, the minor has been found, we write it in our matrix of minors:

As you have probably guessed, you need to calculate nine two-by-two determinants. The process is tedious, of course, but this is not the worst case; it can be harder.

Well, to consolidate the idea, here is the calculation of another minor in pictures:

Try to calculate the remaining minors yourself.

Final result:
– matrix of minors of the corresponding elements of the matrix.

The fact that all the minors turned out negative is pure coincidence.

3) Find the matrix of algebraic complements.

In the matrix of minors the SIGNS must be CHANGED for exactly those elements whose row and column numbers add up to an odd number (the checkerboard pattern of signs):

In this case:
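All of this minors-and-signs bookkeeping can be sketched in a few lines of NumPy. The two helper names below are my own, not from the lecture; `minors_matrix` crosses out a row and a column for each element, and `cofactor_matrix` applies the checkerboard of signs:

```python
import numpy as np

def minors_matrix(A):
    """M[i, j] = determinant of A with row i and column j crossed out."""
    n = A.shape[0]
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            M[i, j] = np.linalg.det(sub)
    return M

def cofactor_matrix(A):
    """Apply the checkerboard of signs (-1)**(i + j) to the matrix of minors."""
    n = A.shape[0]
    signs = (-1.0) ** (np.arange(n)[:, None] + np.arange(n)[None, :])
    return signs * minors_matrix(A)
```

Transposing the cofactor matrix and dividing by det A then gives the inverse, exactly as in the formula above.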

We will not consider finding the inverse of a "four by four" matrix, since only a sadistic teacher assigns such a task (the student would have to calculate one "four by four" determinant and 16 "three by three" determinants). In my practice there was only one such case, and the customer of that assignment paid quite dearly for my torment =).

In a number of textbooks and manuals you can find a slightly different approach to finding the inverse matrix, but I recommend using the solution algorithm outlined above. Why? Because the likelihood of getting confused in the calculations and signs is much lower.

1. Find the determinant of the original matrix. If det A = 0, the matrix is singular and no inverse matrix exists. If det A ≠ 0, the matrix is non-singular and the inverse matrix exists.

2. Find the transposed matrix Aᵀ.

3. Find the algebraic complements of the elements and compose the adjoint matrix from them.

4. Compose the inverse matrix using the formula A⁻¹ = (1/det A) · Ã, where Ã is the adjoint matrix.

5. Check the correctness of the calculation, based on the definition of the inverse matrix: A·A⁻¹ = E.
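The five steps above can be sketched directly as one function. This is only an illustration under the assumption of a well-conditioned matrix; the function name and the tolerance are my own choices:

```python
import numpy as np

def inverse_via_adjugate(A, tol=1e-12):
    """Steps 1-5: determinant test, transpose, cofactors, adjoint matrix, check."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    det = np.linalg.det(A)                      # step 1
    if abs(det) < tol:
        raise ValueError("matrix is singular: no inverse exists")
    AT = A.T                                    # step 2
    adj = np.empty_like(A)                      # step 3: cofactors of A^T
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(AT, i, axis=0), j, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    A_inv = adj / det                           # step 4
    assert np.allclose(A @ A_inv, np.eye(n))    # step 5: A * A^{-1} = E
    return A_inv
```

Note that taking the cofactors of Aᵀ is the same as transposing the cofactor matrix of A, so steps 2 and 3 together produce the adjoint matrix in one pass.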

Example. Find the inverse of the given matrix.

Solution.

1) The determinant of the matrix:

.

2) Find the algebraic complements of the matrix elements and compose the adjoint matrix from them:

3) Calculate the inverse matrix:

,

4) Check:

№4 Matrix rank. Linear independence of matrix rows

The concept of matrix rank is important for solving and studying a number of mathematical and applied problems.

In a matrix of size m × n, by deleting rows and columns one can isolate square submatrices of order k, where k ≤ min(m, n). The determinants of such submatrices are called minors of order k of the matrix.

For example, from a given matrix one can obtain submatrices of the 1st, 2nd and 3rd order.

Definition. The rank of a matrix is the highest order of the nonzero minors of that matrix. Notation: rank(A) or r(A).

From the definition it follows:

1) The rank of a matrix does not exceed the smaller of its dimensions, i.e. r(A) ≤ min(m, n).

2) r(A) = 0 if and only if all elements of the matrix are equal to zero, i.e. A = O.

3) For a square matrix of order n, r(A) = n if and only if the matrix is non-singular.

Since directly enumerating all possible minors of a matrix, starting with those of the largest size, is difficult (time-consuming), one instead uses elementary matrix transformations, which preserve the rank of the matrix.

Elementary matrix transformations:

1) Discarding the zero row (column).

2) Multiplying all elements of a row (column) by a nonzero number.

3) Changing the order of rows (columns) of the matrix.

4) Adding to each element of one row (column) the corresponding elements of another row (column), multiplied by any number.

5) Matrix transposition.

Definition. A matrix B obtained from a matrix A by elementary transformations is called equivalent to A and is denoted A ∼ B.

Theorem. The rank of the matrix does not change during elementary matrix transformations.

Using elementary transformations, a matrix can be reduced to so-called echelon (step) form, in which calculating its rank presents no difficulty.

A matrix is called an echelon matrix if it has the form:

Obviously, the rank of an echelon matrix equals the number of its non-zero rows, since there is a minor of that order which is not equal to zero:

.

Example. Determine the rank of a matrix using elementary transformations.

The rank of the matrix is equal to the number of non-zero rows of the resulting echelon form.
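The reduction to echelon form can be sketched programmatically. This is only an illustrative sketch (the function name, the tolerance, and the pivoting choice are my own); it uses exactly the elementary transformations listed above, a row swap (3) and adding a multiple of one row to another (4):

```python
import numpy as np

def rank_by_elimination(A, tol=1e-10):
    """Reduce A to echelon form by row operations; rank = number of nonzero rows."""
    M = np.array(A, dtype=float)
    rows, cols = M.shape
    r = 0                                     # index of the next pivot row
    for c in range(cols):
        if r == rows:
            break
        pivot = r + int(np.argmax(np.abs(M[r:, c])))
        if abs(M[pivot, c]) < tol:
            continue                          # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]         # row swap (transformation 3)
        M[r + 1:] -= np.outer(M[r + 1:, c] / M[r, c], M[r])  # transformation 4
        r += 1
    return r
```

For example, `rank_by_elimination([[1, 2], [2, 4]])` returns 1, since the second row is twice the first.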

№5 Linear independence of matrix rows

Consider a matrix of size m × n.

Let us denote the rows of the matrix by e₁, e₂, …, e_m.

Two rows are called equal if their corresponding elements are equal.

Let us introduce the operations of multiplying a string by a number and adding strings as operations carried out element-by-element:

Definition. A row e is called a linear combination of the rows of a matrix if it equals the sum of the products of these rows by arbitrary real numbers: e = λ₁e₁ + λ₂e₂ + … + λ_m·e_m.

Definition. The rows of a matrix are called linearly dependent if there exist numbers λ₁, …, λ_m, not all equal to zero, such that a linear combination of the rows equals the zero row:

λ₁e₁ + λ₂e₂ + … + λ_m·e_m = 0, where 0 = (0, 0, …, 0). (1.1)

Linear dependence of the rows of a matrix means that at least one row of the matrix is a linear combination of the others.

Definition. If the linear combination of rows (1.1) equals zero if and only if all the coefficients λᵢ are zero, then the rows are called linearly independent.
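In practice, linear independence of rows is checked through the rank: the rows are linearly independent exactly when the rank equals the number of rows. A minimal sketch (the function name is my own):

```python
import numpy as np

def rows_linearly_independent(rows):
    """Rows are linearly independent iff no nontrivial combination of them
    gives the zero row, i.e. iff the rank equals the number of rows."""
    M = np.asarray(rows, dtype=float)
    return np.linalg.matrix_rank(M) == M.shape[0]
```

For example, the rows (1, 0) and (0, 1) are independent, while (1, 2) and (2, 4) are dependent, since the second is twice the first.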

Matrix rank theorem. The rank of a matrix is equal to the maximum number of its linearly independent rows (or columns), through which all its other rows (columns) can be linearly expressed.

The theorem plays a fundamental role in matrix analysis, in particular in the study of systems of linear equations.

№6 Solving a system of linear equations with n unknowns

Systems of linear equations are widely used in economics.

A system of m linear equations with n variables has the form:

a₁₁x₁ + a₁₂x₂ + … + a₁ₙxₙ = b₁,
a₂₁x₁ + a₂₂x₂ + … + a₂ₙxₙ = b₂,
…
a_m1·x₁ + a_m2·x₂ + … + a_mn·xₙ = b_m,

where aᵢⱼ and bᵢ (i = 1, …, m; j = 1, …, n) are arbitrary numbers, called the coefficients of the variables and the free terms of the equations, respectively.

In brief notation: Σⱼ aᵢⱼxⱼ = bᵢ (i = 1, 2, …, m).

Definition. A solution of the system is a set of values of the variables whose substitution turns each equation of the system into a true equality.

1) A system of equations is called consistent if it has at least one solution, and inconsistent if it has no solutions.

2) A consistent system of equations is called determinate if it has exactly one solution, and indeterminate if it has more than one solution.

3) Two systems of equations are called equivalent if they have the same set of solutions.

An inverse matrix is similar in many of its properties to the reciprocal of a number.


Properties of an inverse matrix

  • det A⁻¹ = 1 / det A, where det denotes the determinant.
  • (AB)⁻¹ = B⁻¹A⁻¹ for any two invertible square matrices A and B.
  • (Aᵀ)⁻¹ = (A⁻¹)ᵀ, where ᵀ denotes the transpose.
  • (kA)⁻¹ = k⁻¹A⁻¹ for any coefficient k ≠ 0.
  • E⁻¹ = E.
  • If it is necessary to solve a system of linear equations Ax = b (b is a non-zero vector), where x is the unknown vector, and A⁻¹ exists, then x = A⁻¹b. Otherwise, either the dimension of the solution space is greater than zero, or there are no solutions at all.
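These properties are easy to verify numerically. A sketch using random test matrices (random real matrices are invertible with probability one, so the hypothetical seeded examples below are safe to invert):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # almost surely invertible
B = rng.standard_normal((3, 3))
k = 2.5
b = rng.standard_normal(3)

assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)
assert np.allclose(np.linalg.inv(k * A), np.linalg.inv(A) / k)
assert np.allclose(np.linalg.inv(np.eye(3)), np.eye(3))

x = np.linalg.inv(A) @ b          # solution of A x = b
assert np.allclose(A @ x, b)
```

Note the order reversal in (AB)⁻¹ = B⁻¹A⁻¹: since matrix multiplication is not commutative, the factors must be undone in the opposite order.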

Methods for finding the inverse matrix

If the matrix is ​​invertible, then to find the inverse matrix you can use one of the following methods:

Exact (direct) methods

Gauss-Jordan method

Take two matrices: A itself and the identity matrix E. Reduce A to the identity matrix using the Gauss-Jordan method, applying transformations along the rows (transformations along the columns may also be used, but not mixed with row transformations). After applying each operation to the first matrix, apply the same operation to the second. When the reduction of the first matrix to the identity is complete, the second matrix will equal A⁻¹.

When using the Gaussian method, the first matrix is multiplied on the left by a sequence of elementary matrices Λᵢ (each a transvection or a diagonal matrix with ones on the main diagonal except in one position):

Λ₁ ⋯ Λₙ · A = Λ·A = E  ⇒  Λ = A⁻¹.

Λ_m = [ 1 … 0  −a₁ₘ/aₘₘ      0 … 0
        …
        0 … 1  −a₍ₘ₋₁₎ₘ/aₘₘ  0 … 0
        0 … 0   1/aₘₘ        0 … 0
        0 … 0  −a₍ₘ₊₁₎ₘ/aₘₘ  1 … 0
        …
        0 … 0  −aₙₘ/aₘₘ      0 … 1 ]

After applying all the operations, the second matrix will equal Λ, that is, it will be the desired inverse. The complexity of the algorithm is O(n³).
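The Gauss-Jordan procedure on the augmented block [A | E] can be sketched as follows. This is an illustrative implementation with partial pivoting added for numerical stability (the pivoting and the tolerance are my own additions, not part of the description above):

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Row-reduce [A | E] until the left block is the identity;
    the right block is then A^{-1}."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])                 # [A | E]
    for c in range(n):
        pivot = c + int(np.argmax(np.abs(aug[c:, c])))  # partial pivoting
        if abs(aug[pivot, c]) < 1e-12:
            raise ValueError("matrix is singular")
        aug[[c, pivot]] = aug[[pivot, c]]
        aug[c] /= aug[c, c]                         # scale the pivot row to 1
        for r in range(n):
            if r != c:
                aug[r] -= aug[r, c] * aug[c]        # clear the rest of the column
    return aug[:, n:]
```

Every row operation is applied to the whole augmented row, so the right block accumulates exactly the product of the elementary matrices Λ.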

Using the algebraic complement matrix

The matrix inverse of a matrix A can be represented in the form

A⁻¹ = adj(A) / det(A),

where adj(A) is the adjoint (adjugate) matrix.

The complexity of the algorithm depends on the complexity O_det of the algorithm for calculating the determinant and equals O(n²)·O_det.

Using LU/LUP Decomposition

The matrix equation AX = Iₙ for the inverse matrix X can be considered as a collection of n systems of the form Ax = b. Denote the i-th column of the matrix X by Xᵢ; then AXᵢ = eᵢ for i = 1, …, n, because the i-th column of Iₙ is the unit vector eᵢ. In other words, finding the inverse matrix comes down to solving n equations with the same matrix and different right-hand sides. After performing the LUP decomposition (O(n³) time), solving each of the n equations takes O(n²) time, so this part of the work also requires O(n³) time.

If the matrix A is non-singular, the LUP decomposition PA = LU can be calculated for it. Let PA = B and B⁻¹ = D. Then from the properties of the inverse matrix we can write D = U⁻¹L⁻¹. Multiplying this equality by U and by L yields two equalities: UD = L⁻¹ and DL = U⁻¹. The first of these is a system of n² linear equations for n(n+1)/2 unknowns whose right-hand sides are known (from the properties of triangular matrices). The second is a system of n² linear equations for n(n−1)/2 unknowns whose right-hand sides are also known (again from the properties of triangular matrices). Together they form a system of n² equalities, from which all n² elements of the matrix D can be determined recursively. Then from the equality (PA)⁻¹ = A⁻¹P⁻¹ = B⁻¹ = D we obtain A⁻¹ = DP.

In the case of using the LU decomposition, no permutation of the columns of the matrix D is required, but the solution may diverge even if the matrix A is nonsingular.

The complexity of the algorithm is O(n³).
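The column-by-column idea, solving AXᵢ = eᵢ for each unit vector, can be sketched with NumPy. A real LUP implementation would factor A once and reuse the factors for all n solves; `np.linalg.solve` hides that sharing, so this is a sketch of the idea rather than of the exact algorithm:

```python
import numpy as np

def inverse_via_columns(A):
    """Solve A x_i = e_i for each unit vector e_i;
    the solutions x_i are the columns of A^{-1}."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # iterating over np.eye(n) yields the unit vectors e_1, ..., e_n
    return np.column_stack([np.linalg.solve(A, e) for e in np.eye(n)])
```

Each solve costs O(n²) once a factorization is available, giving the O(n³) total mentioned above.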

Iterative methods

Schulz method

{ Ψₖ = E − A·Uₖ,
  Uₖ₊₁ = Uₖ · Σᵢ₌₀ⁿ Ψₖⁱ }

Error estimate

Selecting an Initial Approximation

The problem of choosing the initial approximation in the iterative matrix-inversion processes considered here does not allow us to treat them as independent universal methods competing with direct inversion methods based, for example, on the LU decomposition. There are recommendations for choosing U₀ that ensure the condition ρ(Ψ₀) < 1 (the spectral radius of the matrix is less than one), which is necessary and sufficient for the convergence of the process. However, first, one then needs an upper estimate for the spectrum of the matrix A being inverted, or of the matrix AAᵀ. Namely, if A is a symmetric positive definite matrix with ρ(A) ≤ β, one can take U₀ = αE, where α ∈ (0, 2/β); if A is an arbitrary non-singular matrix with ρ(AAᵀ) ≤ β, one takes U₀ = αAᵀ, where again α ∈ (0, 2/β). One can, of course, simplify matters and use the fact that ρ(AAᵀ) ≤ ‖AAᵀ‖, putting U₀ = Aᵀ/‖AAᵀ‖. Second, with such a choice of initial matrix there is no guarantee that ‖Ψ₀‖ will be small (it may even turn out that ‖Ψ₀‖ > 1), and the high order of the rate of convergence will not show itself immediately.
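The scheme above, in its simplest n = 1 form (the Newton-Schulz iteration), together with the simplified initial guess U₀ = Aᵀ/‖AAᵀ‖, can be sketched as follows. The iteration count is an arbitrary assumption; with this U₀ the convergence is eventually quadratic but can start slowly, exactly as the caveat above warns:

```python
import numpy as np

def schulz_inverse(A, iters=50):
    """Newton-Schulz iteration: U_{k+1} = U_k (2E - A U_k),
    i.e. the n = 1 case of the scheme above."""
    A = np.asarray(A, dtype=float)
    E = np.eye(A.shape[0])
    # Simplified initial guess guaranteeing rho(Psi_0) < 1 for nonsingular A
    U = A.T / np.linalg.norm(A @ A.T)
    for _ in range(iters):
        Psi = E - A @ U
        U = U @ (E + Psi)       # sum_{i=0}^{1} Psi^i = E + Psi
    return U
```

Since Ψₖ₊₁ = E − A·Uₖ₊₁ = Ψₖ², the error is squared at every step once ‖Ψₖ‖ drops below one, which is the quadratic convergence referred to above.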

Examples

Matrix 2x2

A⁻¹ = [[a, b], [c, d]]⁻¹ = (1/det A) · [[d, −b], [−c, a]] = (1/(ad − bc)) · [[d, −b], [−c, a]].

Inversion of a 2×2 matrix is possible only under the condition that ad − bc = det A ≠ 0.
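The 2×2 formula is simple enough to write out directly. A minimal sketch (the function name is my own):

```python
def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]]: swap a and d, negate b and c,
    divide everything by ad - bc."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: the matrix has no inverse")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]
```

For example, `inverse_2x2(2, 1, 4, 3)` returns `[[1.5, -0.5], [-2.0, 1.0]]`, since here ad − bc = 2.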

Typically, inverse operations are used to simplify complex algebraic expressions. For example, if a problem involves dividing by a fraction, you can replace that with multiplying by the reciprocal of the fraction, which is the inverse operation. Matrices cannot be divided at all, so instead you multiply by the inverse matrix. Calculating the inverse of a 3×3 matrix by hand is quite tedious, but you need to be able to do it. You can also find the inverse using a good graphing calculator.

Steps

Using the adjoint matrix

Transpose the original matrix. Transposition is the replacement of rows with columns relative to the main diagonal of the matrix; that is, you need to swap the elements (i,j) and (j,i). The elements of the main diagonal (which starts in the upper left corner and ends in the lower right corner) do not change.

  • To change rows to columns, write the elements of the first row in the first column, the elements of the second row in the second column, and the elements of the third row in the third column. The order of changing the position of the elements is shown in the figure, in which the corresponding elements are circled with colored circles.
  • Find the 2x2 matrix associated with each element. Every element of any matrix, including a transposed one, is associated with a corresponding 2x2 matrix. To find the 2x2 matrix that corresponds to a specific element, cross out the row and column in which the given element is located; that is, cross out five elements of the original 3x3 matrix. The four uncrossed elements form the corresponding 2x2 matrix.

    • For example, to find a 2x2 matrix for the element that is located at the intersection of the second row and the first column, cross out the five elements that are in the second row and first column. The remaining four elements are elements of the corresponding 2x2 matrix.
    • Find the determinant of each 2x2 matrix. To do this, subtract the product of the elements of the secondary diagonal from the product of the elements of the main diagonal (see figure).
    • Detailed information about 2x2 matrices corresponding to specific elements of a 3x3 matrix can be found on the Internet.
  • Create a cofactor matrix. Write the results obtained earlier in the form of a new cofactor matrix. To do this, write the found determinant of each 2x2 matrix where the corresponding element of the 3x3 matrix was located. For example, if you are considering a 2x2 matrix for element (1,1), write its determinant in position (1,1). Then change the signs of the corresponding elements according to a certain scheme, which is shown in the figure.

    • Scheme for changing signs: the sign of the first element of the first line does not change; the sign of the second element of the first line is reversed; the sign of the third element of the first line does not change, and so on line by line. Please note that the “+” and “-” signs that are shown in the diagram (see figure) do not indicate that the corresponding element will be positive or negative. In this case, the “+” sign indicates that the sign of the element does not change, and the “-” sign indicates a change in the sign of the element.
    • Detailed information about cofactor matrices can be found on the Internet.
    • This way you will find the adjoint matrix of the original matrix. It is also called the adjugate (classical adjoint) matrix. Such a matrix is denoted adj(M).
  • Divide each element of the adjoint matrix by the determinant of the original matrix. The determinant of the matrix M was calculated at the very beginning to check that the inverse matrix exists. Now divide each element of the adjoint matrix by this determinant. Write the result of each division operation where the corresponding element is located. This way you will find the matrix inverse to the original one.

    • The determinant of the matrix which is shown in the figure is 1. Thus, here the adjoint matrix is ​​the inverse matrix (because when any number is divided by 1, it does not change).
    • In some sources, the division operation is replaced by the operation of multiplication by 1/det(M). However, the final result does not change.
  • Write down the result. The matrix obtained after the division is the inverse of the original matrix.

    Using a calculator

      Choose a calculator that works with matrices. It is not possible to find the inverse of a matrix using simple calculators, but it can be done on a good graphing calculator such as the Texas Instruments TI-83 or TI-86.

      Enter the original matrix into the calculator's memory. To do this, click the Matrix button, if available. For a Texas Instruments calculator, you may need to press the 2nd and Matrix buttons.

      Select the Edit menu. Do this using the arrow buttons or the appropriate function button located at the top of the calculator's keyboard (the location of the button varies depending on the calculator model).

      Enter the matrix name. Most graphing calculators can work with 3-10 matrices, which are designated by the letters A-J. Typically, just select [A] to designate the original matrix. Then press the Enter button.

      Enter the matrix size. This article talks about 3x3 matrices. But graphic calculators can work with large matrices. Enter the number of rows, press Enter, then enter the number of columns and press Enter again.

      Enter each matrix element. A matrix will be displayed on the calculator screen. If you have previously entered a matrix into the calculator, it will appear on the screen. The cursor will highlight the first element of the matrix. Enter the value for the first element and press Enter. The cursor will automatically move to the next element of the matrix.

    Let there be a square matrix of order n.

    Matrix A⁻¹ is called the inverse of matrix A if A·A⁻¹ = E, where E is the identity matrix of order n.

    The identity matrix is a square matrix in which all the elements on the main diagonal (running from the upper left corner to the lower right corner) are ones, and all other elements are zeros, for example:

    An inverse matrix can exist only for square matrices, i.e. for matrices in which the number of rows and columns coincide.

    Theorem for the existence condition of an inverse matrix

    In order for a matrix to have an inverse matrix, it is necessary and sufficient that it be non-singular.

    The matrix A = (A₁, A₂, … Aₙ) is called non-singular if its column vectors are linearly independent. The number of linearly independent column vectors of a matrix is called the rank of the matrix. Therefore, we can say that for an inverse matrix to exist, it is necessary and sufficient that the rank of the matrix equal its dimension, i.e. r = n.

    Algorithm for finding the inverse matrix

    1. Write matrix A into the table for solving systems of equations using the Gaussian method and assign matrix E to it on the right (in place of the right-hand sides of the equations).
    2. Using Jordan transformations, reduce matrix A to a matrix consisting of unit columns; in this case, it is necessary to simultaneously transform the matrix E.
    3. If necessary, rearrange the rows (equations) of the last table so that under the matrix A of the original table you get the identity matrix E.
    4. Write down the inverse matrix A -1, which is located in the last table under the matrix E of the original table.
    Example 1

    For matrix A, find the inverse matrix A -1

    Solution: We write matrix A and assign the identity matrix E to the right. Using Jordan transformations, we reduce matrix A to the identity matrix E. The calculations are given in Table 31.1.

    Let's check the correctness of the calculations by multiplying the original matrix A and the inverse matrix A -1.

    As a result of matrix multiplication, the identity matrix was obtained. Therefore, the calculations were made correctly.

    Answer:

    Solving matrix equations

    Matrix equations can look like:

    AX = B, XA = B, AXB = C,

    where A, B, C are the specified matrices, X is the desired matrix.

    Matrix equations are solved by multiplying both sides of the equation by the appropriate inverse matrices.

    For example, to find the matrix X from the equation AX = B, multiply the equation by A⁻¹ on the left: A⁻¹AX = A⁻¹B, hence X = A⁻¹B.

    Therefore, to find the solution of this equation, you need to find the inverse matrix and multiply it by the matrix on the right-hand side of the equation.

    Other equations are solved similarly.
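The two basic cases can be sketched numerically. The matrices here are hypothetical placeholders (the example's concrete matrices did not survive extraction); the point is which side the inverse is multiplied on:

```python
import numpy as np

# Hypothetical data, for illustration only
A = np.array([[1.0, 2.0],
              [3.0, 5.0]])
B = np.array([[1.0, 0.0],
              [2.0, 1.0]])

X1 = np.linalg.inv(A) @ B    # AX = B  =>  X = A^{-1} B  (multiply on the left)
X2 = B @ np.linalg.inv(A)    # XA = B  =>  X = B A^{-1}  (multiply on the right)

assert np.allclose(A @ X1, B)
assert np.allclose(X2 @ A, B)
```

Because matrix multiplication is not commutative, X1 and X2 generally differ: the side on which you multiply by A⁻¹ matters.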

    Example 2

    Solve the equation AX = B if

    Solution: Since the inverse matrix is already known (see Example 1), X = A⁻¹B:

    Matrix method in economic analysis

    Along with other methods, matrix methods are also used in economic analysis. These methods are based on linear and vector-matrix algebra. They are applied to the analysis of complex and multidimensional economic phenomena, most often when a comparative assessment of the performance of organizations and their structural divisions is needed.

    In the process of applying matrix analysis methods, several stages can be distinguished.

    At the first stage, a system of economic indicators is formed, and on its basis a matrix of initial data is compiled: a table whose rows correspond to the numbers of the systems being compared (i = 1, 2, …, n) and whose columns correspond to the numbers of the indicators (j = 1, 2, …, m).

    At the second stage, for each column the largest of the available indicator values is identified and taken as one.

    After this, all values in the column are divided by this largest value, forming a matrix of standardized coefficients.

    At the third stage, all components of the matrix are squared. If the indicators differ in significance, each matrix indicator is assigned a weighting coefficient k, whose value is determined by expert judgment.

    At the fourth and final stage, the rating values Rⱼ found are sorted in increasing or decreasing order.

    The matrix methods outlined should be used, for example, in the comparative analysis of various investment projects, as well as in assessing other economic indicators of organizations.
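The four stages can be sketched with NumPy. The indicator matrix and weights below are hypothetical, and the rating formula used, Rᵢ = sqrt(Σⱼ kⱼ·xᵢⱼ²), is one common variant consistent with the "square and weight" description above, not necessarily the exact formula the original text had in mind:

```python
import numpy as np

# Hypothetical indicator matrix: rows = organizations, columns = indicators
X = np.array([[3.0, 40.0, 0.8],
              [5.0, 30.0, 0.9],
              [4.0, 50.0, 0.7]])
weights = np.array([1.0, 1.0, 1.0])   # expert weights k_j (all equal here)

# Stage 2: divide each column by its maximum -> standardized coefficients
x = X / X.max(axis=0)

# Stage 3: square the standardized coefficients and apply the weights;
# one common rating variant: R_i = sqrt(sum_j k_j * x_ij^2)
R = np.sqrt((weights * x ** 2).sum(axis=1))

# Stage 4: sort organizations by rating, best first
ranking = np.argsort(-R)
```

Each standardized coefficient lies in (0, 1], so an organization that leads in every indicator gets the maximum possible rating.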