Definition of the linear dependence of the rows (columns) of a matrix. Linear independence

Suppose that some row e_s of the matrix A is a linear combination of its other rows:

e_s = λ_1 e_1 + … + λ_(s-1) e_(s-1) + λ_(s+1) e_(s+1) + … + λ_m e_m, (3.3.1)

where λ_1, …, λ_m are some numbers (some or even all of these numbers can be equal to zero). This means that the following equalities hold between the elements of the rows in every column:

a_sj = λ_1 a_1j + … + λ_(s-1) a_(s-1)j + λ_(s+1) a_(s+1)j + … + λ_m a_mj, j = 1, 2, …, n.

From (3.3.1) it follows that

λ_1 e_1 + … + λ_(s-1) e_(s-1) + (-1) e_s + λ_(s+1) e_(s+1) + … + λ_m e_m = 0, (3.3.2)

where 0 is the null row.

Definition. The rows e_1, e_2, …, e_m of the matrix A are called linearly dependent if there exist numbers λ_1, λ_2, …, λ_m, not all equal to zero at the same time, such that

λ_1 e_1 + λ_2 e_2 + … + λ_m e_m = 0. (3.3.3)

If equality (3.3.3) holds if and only if λ_1 = λ_2 = … = λ_m = 0, then the rows are called linearly independent. Relation (3.3.2) shows that if one of the rows is linearly expressed in terms of the others, then the rows are linearly dependent.

The converse is also easy to see: if the rows are linearly dependent, then one of them is a linear combination of the other rows.

Let, for example, λ_s ≠ 0 in (3.3.3); then e_s = -(λ_1/λ_s) e_1 - … - (λ_(s-1)/λ_s) e_(s-1) - (λ_(s+1)/λ_s) e_(s+1) - … - (λ_m/λ_s) e_m, i.e. the row e_s is a linear combination of the remaining rows.

Definition. Let some minor M of order r be selected in the matrix A, and let a minor M' of order (r+1) of the same matrix entirely contain the minor M. In this case we say that the minor M' borders the minor M (or that M' is a bordering minor for M).

We now prove an important lemma.

Lemma (on bordering minors). If a minor M of order r of the matrix A = (a_ij) is nonzero and all the minors bordering it are equal to zero, then any row (column) of the matrix A is a linear combination of the rows (columns) that make up M.

Proof. Without loss of generality, we assume that the nonzero minor M of order r stands in the upper left corner of the matrix A = (a_ij):

M = | a_11 … a_1r |
    | …   …   …  |
    | a_r1 … a_rr | ≠ 0.

For the first r rows of the matrix A the statement of the lemma is obvious: it suffices to include in the linear combination the row itself with a coefficient equal to one and all the remaining rows with coefficients equal to zero.

Let us now prove that the remaining rows of the matrix A are linearly expressed in terms of the first r rows. To do this, we construct a minor of order (r+1) by adding to M the k-th row (r < k ≤ m) and the l-th column (1 ≤ l ≤ n):

D = | a_11 … a_1r a_1l |
    | …   …   …    …  |
    | a_r1 … a_rr a_rl |
    | a_k1 … a_kr a_kl |.

The resulting minor D is equal to zero for all k and l. If l ≤ r, then it is equal to zero as a determinant containing two identical columns. If l > r, then D is a bordering minor for M and therefore is equal to zero by the hypothesis of the lemma.

Let us expand the minor D in terms of the elements of the last, l-th column:

a_1l A_1l + a_2l A_2l + … + a_rl A_rl + a_kl A_kl = 0, (3.3.4)

where A_1l, …, A_rl, A_kl are the algebraic complements (cofactors) of the elements a_1l, …, a_rl, a_kl. The cofactor A_kl is the minor M of the matrix A, therefore A_kl = M ≠ 0. Dividing (3.3.4) by A_kl, we express a_kl in terms of a_1l, …, a_rl:

a_kl = c_1 a_1l + c_2 a_2l + … + c_r a_rl, (3.3.5)

where c_i = -A_il / A_kl, i = 1, 2, …, r; note that the cofactors A_il, and hence the coefficients c_i, do not depend on l.

Setting l = 1, 2, …, n in (3.3.5), we get

e_k = c_1 e_1 + c_2 e_2 + … + c_r e_r. (3.3.6)

Expression (3.3.6) means that the k-th row of the matrix A is linearly expressed in terms of the first r rows.

Since the values of the minors of a matrix do not change under transposition (by the corresponding property of determinants), everything proved is true for columns as well. The lemma is proved.

Corollary I. Any row (column) of a matrix is a linear combination of its basis rows (columns). Indeed, the basis minor of the matrix is nonzero, and all the minors bordering it are equal to zero.
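A minimal numeric sketch of Corollary I (in Python with NumPy; the matrix is a hypothetical example): the third row below is built as 2·(row 1) + 1·(row 2), and the coefficients of that combination are recovered from the basis rows by least squares.

```python
# Corollary I, numerically: every row is a combination of the basis rows.
import numpy as np

A = np.array([[1., 2., 3.],
              [0., 1., 4.],
              [2., 5., 10.]])            # row 3 = 2*row 1 + 1*row 2, so rank A = 2

print(np.linalg.matrix_rank(A))           # 2: the basis minor has order 2

basis_rows = A[:2]                         # two rows containing a nonzero 2x2 minor
target = A[2]
# Find coefficients c with c @ basis_rows == target (least squares is exact here).
c, residual, *_ = np.linalg.lstsq(basis_rows.T, target, rcond=None)
print(c)                                   # approximately [2. 1.]
print(np.allclose(c @ basis_rows, target)) # True
```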

Corollary II. A determinant of order n is equal to zero if and only if its rows (columns) are linearly dependent. The sufficiency (linearly dependent rows or columns imply that the determinant equals zero) was proved earlier as a property of determinants.

Let us prove the necessity. Let a square matrix of order n be given whose determinant, its only minor of order n, is equal to zero. It follows that the rank of this matrix is less than n, i.e. there is at least one row that is a linear combination of the basis rows of this matrix; hence the rows are linearly dependent.

Let us prove one more theorem about the rank of a matrix.

Theorem. The maximum number of linearly independent rows of a matrix is equal to the maximum number of its linearly independent columns and is equal to the rank of this matrix.

Proof. Let the rank of the matrix A = (a_ij) be equal to r. Then its r basis rows are linearly independent, otherwise the basis minor would be equal to zero. On the other hand, any r+1 or more rows are linearly dependent. Assuming the opposite, we could, by Corollary II of the previous lemma, find a nonzero minor of order greater than r. The latter contradicts the fact that the maximum order of nonzero minors is equal to r. Everything proved for rows is also true for columns. The theorem is proved.

In conclusion, we present one more method for finding the rank of a matrix. The rank of a matrix can be determined by finding a minor of maximum order that is different from zero.

At first glance, this requires the calculation of a finite, but perhaps a very large number of minors of this matrix.

The following theorem allows, however, significant simplifications to be made.

Theorem. If a minor M of order r of the matrix A is different from zero and all the minors bordering it are equal to zero, then the rank of the matrix is equal to r.

Proof. It suffices to show that, under the conditions of the theorem, any system of S > r rows of the matrix is linearly dependent (from this it will follow that r is the maximum number of linearly independent rows of the matrix, and that all of its minors of order greater than r are equal to zero).

Let us assume the opposite: let some S rows f_1, f_2, …, f_S of the matrix (S > r) be linearly independent. By the lemma on bordering minors, each of them is linearly expressed in terms of the r rows in which the minor M is located and which, since M is different from zero, are themselves linearly independent; denote these rows by e_1, e_2, …, e_r. Then

f_p = k_p1 e_1 + k_p2 e_2 + … + k_pr e_r, p = 1, 2, …, S. (3.3.7)

Consider the matrix K of the coefficients of the linear expressions (3.3.7):

K = | k_11 k_12 … k_1r |
    | …    …   …   …  |
    | k_S1 k_S2 … k_Sr |.

The rows of this matrix will be denoted by k_1, k_2, …, k_S. They are linearly dependent, since the rank of the matrix K, i.e. the maximum number of its linearly independent rows, does not exceed r < S. Therefore, there exist numbers c_1, c_2, …, c_S, not all equal to zero, such that

c_1 k_1 + c_2 k_2 + … + c_S k_S = 0.

Passing to the equality of the components, we obtain

c_1 k_1j + c_2 k_2j + … + c_S k_Sj = 0, j = 1, 2, …, r. (3.3.8)

Now consider the following linear combination:

c_1 f_1 + c_2 f_2 + … + c_S f_S,

or, substituting (3.3.7) and collecting the coefficients of e_1, …, e_r with the help of (3.3.8),

c_1 f_1 + c_2 f_2 + … + c_S f_S = (c_1 k_11 + … + c_S k_S1) e_1 + … + (c_1 k_1r + … + c_S k_Sr) e_r = 0.

Thus a nontrivial linear combination of the rows f_1, …, f_S is equal to the zero row, which contradicts their linear independence. The theorem is proved.

Matrix rank.

Consider an arbitrary, not necessarily square, matrix A of size m×n.

The concept of the rank of a matrix is related to the concept of linear dependence (independence) of the rows (columns) of the matrix. We consider this concept for rows; for columns everything is analogous.

Denote the rows of the matrix A by

e_1 = (a_11, a_12, …, a_1n); e_2 = (a_21, a_22, …, a_2n); …; e_m = (a_m1, a_m2, …, a_mn).

Two rows are equal, e_k = e_s, if a_kj = a_sj, j = 1, 2, …, n.

Arithmetic operations on matrix rows (addition, multiplication by a number) are defined element by element:

λe_k = (λa_k1, λa_k2, …, λa_kn);

e_k + e_s = (a_k1 + a_s1, a_k2 + a_s2, …, a_kn + a_sn).

A row e is called a linear combination of the rows e_1, e_2, …, e_k if it is equal to the sum of the products of these rows by arbitrary real numbers:

e = λ_1 e_1 + λ_2 e_2 + … + λ_k e_k.

The rows e_1, e_2, …, e_m are called linearly dependent if there exist real numbers λ_1, λ_2, …, λ_m, not all equal to zero, such that the linear combination of these rows is equal to the zero row:

λ_1 e_1 + λ_2 e_2 + … + λ_m e_m = 0, where 0 = (0, 0, …, 0). (1)

If the linear combination is equal to the zero row only when all the coefficients λ_i are equal to zero (λ_1 = λ_2 = … = λ_m = 0), then the rows e_1, e_2, …, e_m are called linearly independent.

Theorem 1. For the rows e_1, e_2, …, e_m to be linearly dependent, it is necessary and sufficient that one of these rows be a linear combination of the others.

Proof. Necessity. Let the rows e_1, e_2, …, e_m be linearly dependent and, for definiteness, let λ_m ≠ 0 in (1). Then

e_m = -(λ_1/λ_m) e_1 - (λ_2/λ_m) e_2 - … - (λ_(m-1)/λ_m) e_(m-1).

Thus the row e_m is a linear combination of the remaining rows, which was to be proved.

Sufficiency. Let one of the rows, for example e_m, be a linear combination of the others. Then there exist numbers λ_1, λ_2, …, λ_(m-1) such that e_m = λ_1 e_1 + λ_2 e_2 + … + λ_(m-1) e_(m-1), which can be rewritten as

λ_1 e_1 + λ_2 e_2 + … + λ_(m-1) e_(m-1) + (-1) e_m = 0,

where at least one of the coefficients, namely (-1), is nonzero. That is, the rows are linearly dependent, which was to be proved.
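A small numeric check of Theorem 1 (the rows here are hypothetical examples): one row is built as a combination of the other two, the rank drops below the number of rows, and the corresponding nontrivial combination equals the zero row.

```python
# Theorem 1, numerically: dependence <=> one row is a combination of the others.
import numpy as np

e1 = np.array([1., 0., 2.])
e2 = np.array([0., 1., 1.])
e3 = 3.0 * e1 - 2.0 * e2                   # e3 is a linear combination of e1, e2

rows = np.vstack([e1, e2, e3])
print(np.linalg.matrix_rank(rows))          # 2 < 3, so the rows are linearly dependent

# The nontrivial combination 3*e1 - 2*e2 + (-1)*e3 is the zero row.
print(3.0 * e1 - 2.0 * e2 - e3)             # [0. 0. 0.]
```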

Definition. A minor of order k of a matrix A of size m×n is the determinant of order k with elements lying at the intersection of any k rows and any k columns of the matrix A (k ≤ min(m, n)).

Example. For a matrix of order 3, the minors of the 1st order are its individual elements, the minors of the 2nd order are the determinants of its 2×2 submatrices, and the single minor of the 3rd order is the determinant of the matrix itself.

A matrix of order 3 therefore has 9 minors of the 1st order, 9 minors of the 2nd order and 1 minor of the 3rd order (the determinant of this matrix).
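These counts are easy to check directly. The sketch below (the 3×3 matrix is an arbitrary example) enumerates all minors of each order by choosing k rows and k columns and taking the determinant of the resulting submatrix.

```python
# Enumerating the minors of a 3x3 matrix: 9 of order 1, 9 of order 2, 1 of order 3.
import numpy as np
from itertools import combinations

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])

def minors(A, k):
    """All k-th order minors: determinants of k x k submatrices of A."""
    m, n = A.shape
    return [np.linalg.det(A[np.ix_(rows, cols)])
            for rows in combinations(range(m), k)
            for cols in combinations(range(n), k)]

for k in (1, 2, 3):
    print(k, len(minors(A, k)))             # 1 9, 2 9, 3 1
```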

Definition. The rank of a matrix A is the highest order of the nonzero minors of this matrix. Notation: rg A or r(A).

Matrix rank properties.

1) The rank of an m×n matrix A does not exceed the smaller of its dimensions, i.e.

r(A) ≤ min(m, n).

2) r(A) = 0 if and only if all elements of the matrix are equal to 0, i.e. A = 0.

3) For a square matrix A of order n, r(A) = n if and only if A is nonsingular.



(The rank of a diagonal matrix is ​​equal to the number of its non-zero diagonal elements).

4) If the rank of a matrix is ​​r, then the matrix has at least one minor of order r that is not equal to zero, and all minors of higher orders are equal to zero.

For the ranks of matrices, the following relations hold:

1) r(A + B) ≤ r(A) + r(B);

2) r(AB) ≤ min(r(A), r(B));

3) r(A + B) ≥ |r(A) - r(B)|;

4) r(A^T A) = r(A);

5) r(AB) = r(A) if B is a square nonsingular matrix;

6) r(AB) ≥ r(A) + r(B) - n, where n is the number of columns of the matrix A (equal to the number of rows of the matrix B).
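These relations can be spot-checked numerically; the sketch below verifies several of them on random integer matrices (an illustration only, not a proof).

```python
# Numeric spot-check of some rank relations.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 5)).astype(float)
B = rng.integers(-3, 4, size=(4, 5)).astype(float)
C = rng.integers(-3, 4, size=(5, 3)).astype(float)

r = np.linalg.matrix_rank
print(r(A + B) <= r(A) + r(B))              # True
print(r(A @ C) <= min(r(A), r(C)))          # True
print(abs(r(A) - r(B)) <= r(A + B))         # True
print(r(A.T @ A) == r(A))                   # True
print(r(A @ C) >= r(A) + r(C) - 5)          # True (n = 5 columns of A = rows of C)
```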

Definition. A nonzero minor of order r(A) is called a basis minor. (A matrix A may have several basis minors.) The rows and columns at whose intersection a basis minor stands are called basis rows and basis columns, respectively.

Theorem 2 (on the basis minor). The basis rows (columns) are linearly independent. Any row (any column) of the matrix A is a linear combination of the basis rows (columns).

Proof (for rows). If the basis rows were linearly dependent, then by Theorem 1 one of these rows would be a linear combination of the other basis rows; subtracting this linear combination from that row does not change the value of the basis minor but produces a zero row, which contradicts the fact that the basis minor is different from zero. Thus, the basis rows are linearly independent.

Let us prove that any row of the matrix A is a linear combination of the basis rows. Since under permutations of rows (columns) a determinant retains the property of being equal to zero (or different from zero), we may assume without loss of generality that the basis minor is located in the upper left corner of the matrix A, i.e. on the first r rows and the first r columns. Let 1 ≤ j ≤ n, 1 ≤ i ≤ m. Consider the determinant D of order (r+1) obtained by bordering the basis minor with the elements of the i-th row and the j-th column.

If j ≤ r or i ≤ r, then this determinant is equal to zero, because it has two identical columns or two identical rows.

If j > r and i > r, then this determinant is a minor of order (r+1) of the matrix A. Since the rank of the matrix is r, every minor of higher order is equal to 0.

Expanding D along the elements of the last (added) column, we get

a_1j A_1j + a_2j A_2j + … + a_rj A_rj + a_ij A_ij = 0,

where the last algebraic complement A_ij coincides with the basis minor M_r and therefore A_ij = M_r ≠ 0.

Dividing the last equality by A_ij, we can express the element a_ij as a linear combination:

a_ij = c_1 a_1j + c_2 a_2j + … + c_r a_rj, where c_s = -A_sj / A_ij, s = 1, 2, …, r.

Fix a value of i (i > r). The cofactors A_sj, and hence the coefficients c_s, do not depend on j, so for every j (j = 1, 2, …, n) the elements of the i-th row e_i are linearly expressed in terms of the elements of the rows e_1, e_2, …, e_r, i.e. the i-th row is a linear combination of the basis rows:

e_i = c_1 e_1 + c_2 e_2 + … + c_r e_r. Q.E.D.

Theorem 3 (necessary and sufficient condition for a determinant to be equal to zero). For the determinant D of order n to be equal to zero, it is necessary and sufficient that its rows (columns) be linearly dependent.

Proof. Necessity. If the determinant D of order n is equal to zero, then the basis minor of its matrix has order r < n. Hence at least one row is a linear combination of the basis rows, and therefore of the other rows. Then, by Theorem 1, the rows of the determinant are linearly dependent.

Sufficiency. If the rows of D are linearly dependent, then by Theorem 1 one row a_i is a linear combination of the other rows. Subtracting this linear combination from the row a_i without changing the value of D, we obtain a zero row. Therefore, by the properties of determinants, D = 0. Q.E.D.

Theorem 4. Under elementary transformations, the rank of the matrix does not change.

Proof. As was shown when considering the properties of determinants, under elementary transformations of square matrices their determinants either do not change, or are multiplied by a nonzero number, or change sign. Hence the highest order of the nonzero minors of the original matrix is preserved, i.e. the rank of the matrix does not change. Q.E.D.

If r(A) = r(B) and the matrices A and B have the same size, then A and B are called equivalent: A ~ B.

Theorem 5. Using elementary transformations, any matrix can be reduced to step form. A matrix is called a step matrix if it has the form

A = | a_11 a_12 … a_1r … a_1k |
    | 0    a_22 … a_2r … a_2k |
    | …    …    …  …    …  …  |
    | 0    0    … a_rr … a_rk |
    | 0    0    …  0   …  0   |
    | …    …    …  …    …  …  |
    | 0    0    …  0   …  0   |,

where a_ii ≠ 0, i = 1, 2, …, r; r ≤ k.

The condition r ≤ k can always be achieved by transposition.

Theorem 6. The rank of a step matrix is equal to the number of its nonzero rows.

Indeed, the rank of the step matrix is r, because there is a nonzero minor of order r: the minor standing in the first r rows and the first r columns is triangular and equals the product a_11 a_22 … a_rr ≠ 0, while every minor of order greater than r contains a zero row and is therefore equal to zero.
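A minimal sketch of this fact: the function below reduces a matrix to step form by elementary row transformations (row swaps and subtracting multiples of rows) and returns the number of nonzero rows, which by Theorem 6 is the rank. The example matrix is hypothetical.

```python
# Rank via reduction to step (echelon) form.
import numpy as np

def rank_by_row_reduction(A, tol=1e-10):
    M = np.array(A, dtype=float)
    m, n = M.shape
    row = 0
    for col in range(n):
        # choose a pivot in the current column (if any rows are left)
        pivot = row + np.argmax(np.abs(M[row:, col])) if row < m else None
        if pivot is None or abs(M[pivot, col]) < tol:
            continue
        M[[row, pivot]] = M[[pivot, row]]                      # swap rows
        M[row + 1:] -= np.outer(M[row + 1:, col] / M[row, col], M[row])
        row += 1
    return row                                                 # number of nonzero rows

A = [[1, 2, 3, 4],
     [2, 4, 6, 8],
     [0, 1, 1, 1]]
print(rank_by_row_reduction(A))                      # 2
print(np.linalg.matrix_rank(np.array(A, float)))     # 2, for comparison
```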

  • Inverse matrix, algorithm for calculating the inverse matrix.
  • System of linear algebraic equations (SLAE), basic properties of an SLAE, homogeneity and inhomogeneity, consistency and inconsistency, definiteness of an SLAE, matrix notation of an SLAE and of its solutions
  • Square systems, Cramer's method
  • Elementary transformations of an SLAE. The Gauss method for studying an SLAE.
  • Consistency criterion for an SLAE, the Kronecker-Capelli theorem, geometric interpretation on the example of 2 equations with 2 unknowns.
  • Homogeneous SLAEs. Properties of solutions, the fundamental system of solutions, the theorem on the general solution of a homogeneous system. Criterion for the existence of a nontrivial solution.
  • Inhomogeneous SLAEs. Theorem on the structure of the solution of an inhomogeneous SLAE. Algorithm for solving an inhomogeneous SLAE.
  • Definition of a linear (vector) space. Examples of linear spaces.
  • Linearly dependent and linearly independent systems of vectors. Criterion of linear dependence.
  • Sufficient conditions for linear dependence and linear independence of systems of vectors in a linear space. Examples of linearly independent systems in the spaces of rows, polynomials, matrices.
  • Isomorphism of linear spaces. Criterion for the isomorphism of linear spaces.
  • Subspaces of a linear space and linear spans of systems of vectors. Dimension of a linear span.
  • Basis completion theorem
  • Intersection and sum of subspaces, direct sum of subspaces. Theorem on the dimension of the sum of subspaces.
  • The subspace of solutions of a homogeneous SLAE, its dimension and basis. Expression of the general solution of a homogeneous SLAE in terms of the fundamental system of solutions.
  • The matrix of transition from one basis of a linear space to another and its properties. Transformation of the coordinates of a vector upon transition to another basis.
  • Definition and examples of linear operators, linear mappings and linear transformations
  • Linear operator matrix, finding the coordinates of the image of a vector
  • Operations with linear operators. The linear space of linear operators.
  • Theorem on the isomorphism of the set of linear transformations to the set of square matrices
  • The matrix of the product of linear transformations. Examples of finding matrices of operators.
  • Definition and properties of the inverse operator, its matrix.
  • A criterion for the invertibility of a linear operator. Examples of reversible and irreversible operators.
  • Transformation of the matrix of a linear operator when passing to another basis.
  • Determinant and characteristic polynomial of a linear operator, their invariance with respect to basis transformations.
  • Kernel and image of a linear operator. Theorem on the sum of the dimensions of the kernel and the image. Finding the kernel and image of a linear operator in a fixed basis. Rank and defect of a linear operator.
  • Theorem on the invariance of the kernel and the image of a linear operator A with respect to a linear operator that commutes with A.
  • Algebraic and geometric multiplicity of eigenvalues ​​and their relationship.
  • Criterion for the diagonalizability of the matrix of a linear operator, sufficient conditions for the diagonalizability of a linear operator.
  • Hamilton-Cayley theorem
  • Linear algebra

    Theory of systems of linear algebraic equations (SLAE)

    1. Matrices, operations with matrices, inverse matrix. Matrix equations and their solutions.

    Matrix - a rectangular table of arbitrary numbers arranged in a certain order, of size m × n (rows by columns). The elements of a matrix are denoted a_ij, where i is the row number and j is the column number.

    Addition (subtraction) of matrices is defined only for matrices of the same size. The sum (difference) of matrices is the matrix whose elements are, respectively, the sums (differences) of the elements of the original matrices.

    Multiplication (division) by a number - multiplication (division) of each element of the matrix by this number.

    Matrix multiplication is defined only for matrices in which the number of columns of the first factor is equal to the number of rows of the second.

    The product of the matrices A and B is the matrix C = AB whose elements are given by the formula c_ij = a_i1 b_1j + a_i2 b_2j + … + a_in b_nj (the sum of the products of the elements of the i-th row of A by the corresponding elements of the j-th column of B).

    Matrix transposition produces the matrix B = A^T whose rows (columns) are the columns (rows) of the original matrix A, i.e. b_ij = a_ji.

    inverse matrix

    Matrix equations - equations of the form A·X = B, where A and B are known matrices and X is the unknown matrix. If A is invertible, the solution is found by the rule X = A^(-1)·B (for an equation of the form X·A = B the rule is X = B·A^(-1)).
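    A short sketch of this rule on a hypothetical pair of matrices:

```python
# Solving the matrix equation A*X = B by the rule X = A^(-1)*B.
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])
B = np.array([[1., 0.],
              [4., 5.]])

X = np.linalg.inv(A) @ B        # X = A^(-1) * B
print(np.allclose(A @ X, B))    # True

# For an equation of the form X*A = B one uses X = B * A^(-1) instead.
```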

    2. Linear dependence and independence of the columns (rows) of a matrix. Criterion of linear dependence, sufficient conditions for linear dependence of the columns (rows) of a matrix.

    A system of rows (columns) A_1, …, A_n is called linearly independent if its linear combination a_1 A_1 + … + a_n A_n is equal to the zero row (column) only in the trivial case, i.e. only when a_1 = … = a_n = 0; here A_1, …, A_n are the columns (rows) and a_1, …, a_n are the expansion coefficients.

    Criterion: for a system of vectors to be linearly dependent, it is necessary and sufficient that at least one of the vectors of the system is linearly expressed in terms of the other vectors of the system.

    Sufficient condition:

    3. Matrix determinants and their properties

    The determinant of a matrix is a number that, for a square matrix A, can be calculated from the elements of the matrix by expansion along the first row:

    det A = a_11 M_11 - a_12 M_12 + … + (-1)^(1+n) a_1n M_1n,

    where M_1j is the complementary minor of the element a_1j (the determinant obtained by deleting the first row and the j-th column).
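    The expansion along the first row can be turned directly into a recursive computation (purely illustrative and very inefficient; the example matrix is arbitrary):

```python
# det A = sum_j (-1)^(1+j) * a_1j * M_1j, where M_1j is the complementary minor of a_1j.
import numpy as np

def det_by_expansion(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # complementary minor M_1j
        total += (-1) ** j * A[0, j] * det_by_expansion(minor)
    return total

A = [[1, 2, 3],
     [0, 4, 5],
     [1, 0, 6]]
print(det_by_expansion(A))                   # 22.0
print(np.linalg.det(np.array(A, float)))     # ~22.0
```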

    Properties:

    4. Inverse matrix, algorithm for calculating the inverse matrix.

    The inverse matrix is a square matrix X which, together with a square matrix A of the same order, satisfies the condition A·X = X·A = E, where E is the identity matrix of the same order as A. Any square matrix with nonzero determinant has exactly one inverse matrix. It is found by the method of elementary transformations or by the formula A^(-1) = (1/det A)·adj A, where adj A is the transposed matrix of the cofactors A_ij of the elements of A.
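    A sketch of the cofactor (adjugate) formula on a hypothetical 2×2 example; the result is checked against the defining property A·A^(-1) = E.

```python
# A^(-1) = (1/det A) * adj A, where adj A is the transposed matrix of cofactors.
import numpy as np

def inverse_by_cofactors(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("a singular matrix has no inverse")
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)    # cofactor A_ij
    return C.T / d                                              # adjugate divided by det

A = np.array([[2., 1.],
              [5., 3.]])
print(inverse_by_cofactors(A))                                  # [[ 3. -1.] [-5.  2.]]
print(np.allclose(inverse_by_cofactors(A) @ A, np.eye(2)))      # True
```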

      The concept of the rank of a matrix. Basis minor theorem. Criterion for the matrix determinant to be equal to zero. Elementary transformations of matrices. Calculations of the rank by the method of elementary transformations. Calculation of the inverse matrix by the method of elementary transformations.

    Matrix rank - the order of a basis minor (denoted rg A).

    Basis minor - a nonzero minor of order r such that all minors of order r+1 and higher are equal to zero or do not exist.

    Basis minor theorem. In an arbitrary matrix A, each column (row) is a linear combination of the columns (rows) in which the basis minor is located.

    Proof. Let the basis minor be located in the first r rows and the first r columns of a matrix A of size m×n. Consider the determinant D obtained by adjoining to the basis minor of the matrix A the corresponding elements of the s-th row and the k-th column.

    Note that for any s and k this determinant is equal to zero. If s ≤ r or k ≤ r, then the determinant D contains two identical rows or two identical columns. If s > r and k > r, then the determinant D is equal to zero, since it is a minor of order (r+1). Expanding the determinant along the last row, we obtain a_s1 A_1 + a_s2 A_2 + … + a_sr A_r + a_sk A_k = 0, where A_1, …, A_r, A_k are the algebraic complements of the elements of the last row. Note that A_k ≠ 0, since it is the basis minor. Therefore a_sk = λ_1 a_s1 + λ_2 a_s2 + … + λ_r a_sr, where λ_j = -A_j / A_k does not depend on s. Writing the last equality for s = 1, 2, …, m, we obtain that the k-th column (for any k) is a linear combination of the columns of the basis minor, which was to be proved.

    Criterion det A = 0: the determinant is equal to zero if and only if its rows (columns) are linearly dependent.

    Elementary transformations:

    1) multiplication of a row by a nonzero number;

    2) adding to the elements of one row the elements of another row;

    3) permutation of rows;

    4) deletion of one of the identical rows (columns);

    5) transposition;

    Rank calculation. It follows from the basis minor theorem that the rank of the matrix A is equal to the maximum number of linearly independent rows (columns) of the matrix; therefore the task of the elementary transformations is to find all the linearly independent rows (columns).

    Inverse matrix calculation. The transformations can be implemented by multiplying the matrix A by some matrix T, which is the product of the corresponding elementary matrices: TA = E.

    This equation means that the transformation matrix T is the inverse of the matrix A: T = A^(-1). Therefore, applying to the identity matrix E the same elementary transformations that reduce A to E, we obtain A^(-1).
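    A minimal Gauss-Jordan sketch of this idea: the same row operations that turn the augmented block (A | E) into (E | T) produce T = A^(-1). The example matrix is hypothetical.

```python
# Inversion by elementary row transformations (Gauss-Jordan).
import numpy as np

def inverse_by_elementary_transformations(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])            # augmented matrix (A | E)
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if abs(M[pivot, col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]    # swap rows
        M[col] /= M[col, col]                # scale the pivot row to make the pivot 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]   # eliminate the column elsewhere
    return M[:, n:]                          # the right half is now A^(-1)

A = np.array([[2., 1.],
              [5., 3.]])
print(inverse_by_elementary_transformations(A))   # [[ 3. -1.] [-5.  2.]]
```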

    Note that the rows and columns of a matrix can be viewed as arithmetic vectors of dimensions n and m, respectively. Thus, an m×n matrix can be interpreted as a set of m n-dimensional row vectors or of n m-dimensional column vectors. By analogy with geometric vectors, we introduce the concepts of linear dependence and linear independence of the rows and columns of a matrix.

    4.8.1. Definition. A row s is called a linear combination of the rows e_1, e_2, …, e_k with coefficients λ_1, λ_2, …, λ_k if for all elements of this row the equality

    s_j = λ_1 e_1j + λ_2 e_2j + … + λ_k e_kj, j = 1, 2, …, n,

    holds.

    4.8.2. Definition.

    The rows e_1, e_2, …, e_m are called linearly dependent if there exists a nontrivial linear combination of them equal to the zero row, i.e. there are numbers λ_1, λ_2, …, λ_m, not all equal to zero, such that

    λ_1 e_1 + λ_2 e_2 + … + λ_m e_m = 0.

    4.8.3. Definition.

    The rows e_1, e_2, …, e_m are called linearly independent if only their trivial linear combination is equal to the zero row, i.e. if

    λ_1 e_1 + λ_2 e_2 + … + λ_m e_m = 0 only when λ_1 = λ_2 = … = λ_m = 0.

    4.8.4. Theorem. (Criterion of linear dependence of matrix rows)

    For the rows to be linearly dependent, it is necessary and sufficient that at least one of them be a linear combination of the others.

    Proof:

    Necessity. Let the rows e_1, e_2, …, e_m be linearly dependent; then there is a nontrivial linear combination of them equal to the zero row:

    λ_1 e_1 + λ_2 e_2 + … + λ_m e_m = 0.

    Without loss of generality, we assume that the first of the coefficients of the linear combination is nonzero (otherwise we can renumber the rows). Dividing this relation by λ_1, we get

    e_1 = -(λ_2/λ_1) e_2 - … - (λ_m/λ_1) e_m,

    that is, the first row is a linear combination of the others.

    Sufficiency. Let one of the rows, for example e_m, be a linear combination of the others:

    e_m = λ_1 e_1 + λ_2 e_2 + … + λ_(m-1) e_(m-1).

    Then

    λ_1 e_1 + λ_2 e_2 + … + λ_(m-1) e_(m-1) + (-1) e_m = 0,

    i.e. there is a nontrivial linear combination of the rows e_1, e_2, …, e_m equal to the zero row, which means that the rows are linearly dependent, which was to be proved.

    Comment.

    Similar definitions and assertions can be formulated for the columns of a matrix.

    §4.9. Matrix rank.

    4.9.1. Definition. A minor of order k of a matrix of size m×n is a determinant of order k with elements located at the intersection of some k of its rows and some k of its columns.

    4.9.2. Definition. A nonzero minor of order r of a matrix of size m×n is called a basis minor if all minors of order r+1 of the matrix are equal to zero.

    Comment. A matrix can have several basis minors. Obviously, they are all of the same order. It is also possible that a minor of order min(m, n) of an m×n matrix is different from zero while minors of order min(m, n) + 1 do not exist; in this case that minor is a basis minor.

    4.9.3. Definition. The rows (columns) that form the basis minor are called basic rows (columns).

    4.9.4. Definition. The rank of a matrix is the order of its basis minor. The rank of a matrix A is denoted rg A or r(A).

    Comment.

    Note that, since a determinant does not change under transposition, the rank of a matrix does not change when the matrix is transposed.

    4.9.5. Theorem. (Matrix rank invariance under elementary transformations)

    The rank of a matrix does not change under its elementary transformations.

    Without proof.

    4.9.6. Theorem. (On the basic minor).

    The basis rows (columns) are linearly independent. Any row (column) of a matrix can be represented as a linear combination of its basis rows (columns).

    Proof:

    We carry out the proof for rows. The proof of the assertion for columns is analogous.

    Let the rank of a matrix A of size m×n be equal to r, and let M be a basis minor. Without loss of generality, we assume that the basis minor is located in the upper left corner (otherwise the matrix can be brought to this form by elementary transformations):

    A = | a_11 … a_1r … a_1n |
        | …   …   …   …   …  |
        | a_r1 … a_rr … a_rn |
        | …   …   …   …   …  |
        | a_m1 … a_mr … a_mn |.

    Let us first prove the linear independence of the basis rows. We argue by contradiction. Assume that the basis rows are linearly dependent. Then, according to Theorem 4.8.4, one of the rows can be represented as a linear combination of the remaining basis rows. Therefore, if we subtract this linear combination from that row, we obtain a zero row, which means that the minor M is equal to zero. This contradicts the definition of the basis minor; the contradiction proves the linear independence of the basis rows.

    Let us now prove that any row of the matrix can be represented as a linear combination of the basis rows. If the number k of the row in question is from 1 to r, then obviously it can be represented as a linear combination with a coefficient equal to 1 for the row e_k itself and zero coefficients for the remaining basis rows. Let us now show that if the row number k is from r+1 to m, the row can also be represented as a linear combination of the basis rows. Consider the minor M' of the matrix, obtained from the basis minor M by adding the k-th row and an arbitrary j-th column:

    M' = | a_11 … a_1r a_1j |
         | …   …   …    …  |
         | a_r1 … a_rr a_rj |
         | a_k1 … a_kr a_kj |.

    Let us show that this minor M' is equal to zero for any row number k from r+1 to m and for any column number j from 1 to n.

    Indeed, if the column number j is from 1 to r, then we have a determinant with two identical columns, which is obviously equal to zero. If the column number j is from r+1 to n (and the row number k is from r+1 to m), then M' is a minor of the original matrix of order greater than the order of the basis minor, which means that it is equal to zero by the definition of the basis minor. Thus, the minor M' is equal to zero for any row number k from r+1 to m and for any column number j from 1 to n. Expanding it along the last column, we get:

    a_1j A_1j + a_2j A_2j + … + a_rj A_rj + a_kj A_kj = 0.

    Here A_1j, …, A_rj, A_kj are the corresponding algebraic complements. Note that A_kj ≠ 0, since A_kj = M is the basis minor. Therefore the elements of the row k can be represented as a linear combination of the corresponding elements of the basis rows with coefficients independent of the column number j:

    a_kj = c_1 a_1j + c_2 a_2j + … + c_r a_rj, where c_i = -A_ij / A_kj, i = 1, 2, …, r.

    Thus, we have proved that an arbitrary row of the matrix can be represented as a linear combination of its basis rows. The theorem has been proven.

    Lecture 13

    4.9.7. Theorem. (On the rank of a nondegenerate square matrix)

    For a square matrix to be nondegenerate, it is necessary and sufficient that the rank of the matrix is ​​equal to the size of this matrix.

    Proof:

    Necessity. Let a square matrix of size n be nondegenerate; then det A ≠ 0, and therefore the determinant of the matrix is a basis minor, i.e. r = n.

    Sufficiency. Let r = n; then the order of the basis minor is equal to the size of the matrix, hence the basis minor is the determinant of the matrix A, i.e. det A ≠ 0 by the definition of the basis minor.

    Consequence.

    For a square matrix to be nondegenerate, it is necessary and sufficient that its rows be linearly independent.

    Proof:

    Necessity. Since a square matrix is nondegenerate, its rank is equal to the size of the matrix, r = n, that is, the determinant of the matrix is a basis minor. Therefore, by Theorem 4.9.6 on the basis minor, the rows of the matrix are linearly independent.

    Sufficiency. Since all n rows of the matrix are linearly independent, its rank is not less than the size of the matrix, which means that r = n; therefore, by the previous Theorem 4.9.7, the matrix is nondegenerate.

    4.9.8. The method of bordering minors for finding the rank of a matrix.

    Note that part of this method has already been implicitly described in the proof of the basis minor theorem.

    4.9.8.1. Definition. A minor M' is called a bordering minor for the minor M if it is obtained from the minor M by adding one new row and one new column of the original matrix.

    4.9.8.2. The procedure for finding the rank of a matrix by the method of bordering minors.

      Find any nonzero minor of the matrix and take it as the current minor.

      Calculate all the minors bordering it.

      If they are all equal to zero, then the current minor is a basis minor, and the rank of the matrix is equal to the order of the current minor.

      If among the bordering minors there is at least one different from zero, then it is taken as the current minor and the procedure is continued.

    Example. Using the method of bordering minors, we find the rank of the matrix A.

    It is easy to point out a current second-order minor different from zero.

    We calculate the minors bordering it.

    Since all the bordering minors of the third order are equal to zero, the chosen second-order minor is a basis minor, that is, rg A = 2.
    Comment. It can be seen from the considered example that the method is quite laborious. Therefore, in practice, the method of elementary transformations, which will be discussed below, is much more often used.
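    For completeness, here is a rough brute-force sketch of the bordering-minors procedure (a teaching illustration, not an efficient algorithm; the example matrix is hypothetical).

```python
# Rank of a matrix by the method of bordering minors.
import numpy as np

def rank_by_bordering_minors(A, tol=1e-10):
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    idx = np.argwhere(np.abs(A) > tol)       # start from any nonzero order-1 minor
    if idx.size == 0:
        return 0
    rows, cols = [idx[0][0]], [idx[0][1]]
    while True:
        k = len(rows)
        if k == min(m, n):
            return k
        # look for a nonzero minor of order k+1 bordering the current one
        for i in range(m):
            if i in rows:
                continue
            for j in range(n):
                if j in cols:
                    continue
                sub = A[np.ix_(rows + [i], cols + [j])]
                if abs(np.linalg.det(sub)) > tol:
                    rows.append(i); cols.append(j)
                    break
            else:
                continue
            break
        else:
            return k          # all bordering minors are zero: the current minor is basic

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
print(rank_by_bordering_minors(A))        # 2
print(np.linalg.matrix_rank(A))           # 2
```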

    4.9.9. Finding the rank of a matrix by the method of elementary transformations.

    Based on Theorem 4.9.5, we can state that the rank of a matrix does not change under elementary transformations (that is, the ranks of equivalent matrices are equal). Therefore, the rank of the matrix is ​​equal to the rank of the step matrix obtained from the original matrix by elementary transformations. The rank of a step matrix is ​​obviously equal to the number of its non-zero rows.

    Example. Determine the rank of the matrix A by the method of elementary transformations.

    We reduce the matrix to step form by elementary row transformations.

    The number of nonzero rows of the resulting step matrix is three; therefore rg A = 3.

    4.9.10. The rank of a system of vectors in a linear space.

    Consider a system of vectors a_1, a_2, …, a_k of some linear space V. If it is linearly dependent, then a linearly independent subsystem can be singled out in it.

    4.9.10.1. Definition. The rank of a system of vectors a_1, a_2, …, a_k of a linear space is the maximum number of linearly independent vectors of this system. The rank of the system of vectors is denoted rg(a_1, a_2, …, a_k).

    Comment. If a system of vectors is linearly independent, then its rank is equal to the number of vectors in the system.

    Let us formulate a theorem showing the relationship between the concepts of the rank of a system of vectors in a linear space and the rank of a matrix.

    4.9.10.2. Theorem. (On the rank of a system of vectors in a linear space)

    The rank of a system of vectors in a linear space is equal to the rank of a matrix whose columns or rows are the coordinates of vectors in some basis of the linear space.

    Without proof.

    Consequence.

    For a system of vectors in a linear space to be linearly independent, it is necessary and sufficient that the rank of a matrix whose columns or rows are the coordinates of vectors in some basis is equal to the number of vectors in the system.

    The proof is obvious.
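    A small sketch of this corollary: a system of vectors (given here by hypothetical coordinate rows) is linearly independent exactly when the rank of the coordinate matrix equals the number of vectors.

```python
# Linear independence test via matrix rank.
import numpy as np

def is_linearly_independent(vectors):
    M = np.array(vectors, dtype=float)            # rows = coordinates of the vectors
    return np.linalg.matrix_rank(M) == len(vectors)

print(is_linearly_independent([[1, 0, 0], [0, 1, 0], [1, 1, 0]]))   # False
print(is_linearly_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # True
```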

    4.9.10.3. Theorem (On the dimension of a linear span).

    The dimension of the linear span of a system of vectors a_1, a_2, …, a_k of a linear space is equal to the rank of this system of vectors:

    dim L(a_1, a_2, …, a_k) = rg(a_1, a_2, …, a_k).
    Without proof.

    Let a matrix A of size (m; n) have k rows and k columns chosen arbitrarily (k ≤ min(m; n)). The matrix elements located at the intersection of the selected rows and columns form a square matrix of order k, whose determinant is called a minor of order k of the matrix A.

    The rank of a matrix is the maximum order r of the nonzero minors of the matrix A, and any nonzero minor of order r is called a basis minor. Notation: rank A = r. If rank A = rank B and the sizes of the matrices A and B are the same, then the matrices A and B are called equivalent. Notation: A ~ B.

    The main methods for calculating the rank of a matrix are the method of bordering minors and the method of elementary transformations.

    Method of bordering minors

    The essence of the method of bordering minors is as follows. Let a nonzero minor of order k already be found in the matrix. Then only those minors of order k+1 are considered that contain (i.e. border) the found nonzero minor of order k. If they are all equal to zero, then the rank of the matrix is equal to k; otherwise, among the bordering minors of order (k+1) there is a nonzero one, and the whole procedure is repeated.

    Linear independence of rows (columns) of a matrix

    The concept of the rank of a matrix is ​​closely related to the concept of the linear independence of its rows (columns).

    The rows e_1, e_2, …, e_k of a matrix are called linearly dependent if there exist numbers λ_1, λ_2, …, λ_k, not all equal to zero, such that the equality λ_1 e_1 + λ_2 e_2 + … + λ_k e_k = 0 holds.

    The rows of the matrix A are called linearly independent if this equality is possible only in the case when all the numbers λ_1 = λ_2 = … = λ_k = 0.

    The linear dependence and independence of the columns of the matrix A are defined in a similar way.

    If a row (a_l) of the matrix A (where (a_l) = (a_l1, a_l2, …, a_ln)) can be represented as (a_l) = c_1 (a_1) + … + c_k (a_k) for some other rows (a_1), …, (a_k) and numbers c_1, …, c_k, then the row (a_l) is called a linear combination of these rows.

    The notion of a linear combination of columns is defined similarly. The following theorem on the basis minor is valid.

    The basis rows and basis columns are linearly independent. Any row (or column) of the matrix A is a linear combination of the basis rows (columns), that is, of the rows (columns) intersecting the basis minor. Thus, the rank of the matrix A, rank A = r, is equal to the maximum number of linearly independent rows (columns) of the matrix A.

    That is, the rank of a matrix is the order of the largest square submatrix with nonzero determinant among the submatrices formed by arbitrarily chosen rows and columns of the matrix. If the original matrix is square and its determinant is nonzero, this submatrix is the matrix itself; otherwise square submatrices of smaller order are considered.

    Besides determinants, the rank of a matrix can be computed as the number of linearly independent rows (or columns) of the matrix: by the theorem above, the maximum number of linearly independent rows equals the maximum number of linearly independent columns, and both are equal to the rank of the matrix.

    Examples of finding the rank of a matrix

    Find the rank of the matrix by the method of bordering minors

    Solution. A minor M_2 of the second order is different from zero. The minor M_3 of the third order bordering M_2 is also different from zero. However, both minors of the fourth order bordering M_3

    are equal to zero. Therefore, the rank of the matrix A is 3, and a basis minor is, for example, the minor M_3 presented above.

    The method of elementary transformations is based on the fact that elementary transformations of a matrix do not change its rank. Using these transformations, one can bring the matrix to a form in which all of its elements, except a_11, a_22, …, a_rr (r ≤ min(m, n)), are equal to zero, which means that rank A = r. Note that if a matrix of order n has the form of an upper triangular matrix, that is, a matrix in which all elements below the main diagonal are equal to zero, then its determinant is equal to the product of the elements on the main diagonal. This property can be used when calculating the rank by the method of elementary transformations: the matrix is reduced to a triangular (step) form, and then the rank equals the number of nonzero elements on the main diagonal.

    Example. Using the method of elementary transformations, find the rank of the matrix A.

    Solution. Denote the i-th row of the matrix A by the symbol α_i. At the first stage, we perform elementary transformations on these rows.

    At the second stage, we continue the transformations until the matrix takes step form.

    As a result, we obtain a step matrix, and the rank of A is equal to the number of its nonzero rows.