Projection Operators

1. Projection operators and ring idempotents

Let the vector space V be the direct sum of subspaces W and L: V = W ⊕ L. By the definition of a direct sum, this means that every vector v ∈ V is uniquely representable as v = w + l, where w ∈ W and l ∈ L.

Definition 1. If V = W ⊕ L, so that v = w + l, then the map P that associates with each vector v ∈ V its component (projection) w ∈ W is called the projector of the space V onto the subspace W. It is also called the projection operator.

Obviously, if w ∈ W, then P(w) = w. It follows that P has the remarkable property P^2 = P.

Definition 2. An element e of a ring K is called an idempotent (from Latin idem, "the same", and potens, "power") if e^2 = e.

There are only two idempotents in the ring of integers: 0 and 1. The situation is different in the ring of matrices: for example, the matrix diag(1, 0) is an idempotent. Matrices of projection operators are also idempotents. The corresponding operators are called idempotent operators.
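A quick numerical check of these properties (a minimal sketch; the matrix and vector are illustrative, not from the text):

```python
import numpy as np

# The matrix projecting R^3 onto the e1 axis is an idempotent: P @ P == P.
P = np.diag([1.0, 0.0, 0.0])
assert np.allclose(P @ P, P)

v = np.array([3.0, -2.0, 5.0])
print(P @ v)  # [3. 0. 0.] -- the component (projection) of v in W = span(e1)
```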

Consider now the direct sum of n subspaces of the space V:

V = W_1 ⊕ W_2 ⊕ … ⊕ W_n.

Then, similarly to the case of a direct sum of two subspaces, we obtain n projection operators P_1, P_2, …, P_n. They have the property P_i P_j = P_j P_i = 0 for i ≠ j.

Definition 3. Idempotents e_i and e_j (i ≠ j) are called orthogonal if e_i e_j = e_j e_i = 0. Thus P_i and P_j are orthogonal idempotents.

From the fact that Iv = v for every v ∈ V and from the addition rule for linear operators it follows that

I = P_1 + P_2 + … + P_n.

This decomposition is called the decomposition of unity into a sum of idempotents.
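For instance, the three coordinate projectors in R^3 give such a decomposition (an illustrative sketch):

```python
import numpy as np

# The coordinate projectors P_1, P_2, P_3 in R^3 are pairwise orthogonal
# idempotents whose sum is the identity operator.
P = [np.diag(row) for row in np.eye(3)]

for i in range(3):
    for j in range(3):
        expected = P[i] if i == j else np.zeros((3, 3))
        assert np.allclose(P[i] @ P[j], expected)  # P_i P_j = 0 for i != j

assert np.allclose(sum(P), np.eye(3))  # I = P_1 + P_2 + P_3
```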

Definition 4. An idempotent e is said to be minimal if it cannot be represented as a sum of idempotents other than e and 0.

2. Canonical decomposition of a representation

Definition 5. The canonical decomposition of a representation T(g) is its decomposition of the form T(g) = n_1 T_1(g) + n_2 T_2(g) + … + n_t T_t(g), in which the equivalent irreducible representations T_i(g) are combined together, and n_i is the multiplicity with which the irreducible representation T_i(g) occurs in the decomposition of T(g).

Theorem 1. The canonical decomposition of a representation is determined using projection operators of the form

P_i = (m_i / |G|) Σ_{g∈G} χ_i(g)* T(g),  i = 1, 2, …, t,  (31)

where |G| is the order of the group G; m_i, i = 1, 2, …, t, are the degrees of the irreducible representations T_i(g); and χ_i(g), i = 1, 2, …, t, are the characters of the irreducible representations T_i(g). The multiplicity n_i is determined by the formula

n_i = (1/|G|) Σ_{g∈G} χ_i(g)* χ(g),  (32)

where χ(g) is the character of the representation T(g).
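As a minimal sketch of formula (31) (the group, its action, and all names are assumptions chosen for illustration): for G = Z_2 acting on R^2 by swapping coordinates, both irreducible representations are one-dimensional, and the projectors extract the symmetric and antisymmetric parts.

```python
import numpy as np

# T(g): Z_2 = {e, s} acting on R^2, with s swapping the two coordinates.
T = {"e": np.eye(2), "s": np.array([[0.0, 1.0], [1.0, 0.0]])}
chars = {"trivial": {"e": 1, "s": 1}, "sign": {"e": 1, "s": -1}}

for name, chi in chars.items():
    # Formula (31): P_i = (m_i/|G|) * sum_g conj(chi_i(g)) T(g), with m_i = 1.
    P = sum(np.conj(chi[g]) * T[g] for g in T) / len(T)
    assert np.allclose(P @ P, P)  # each P_i is an idempotent
    print(name, "\n", P)
```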

3. Projection operators associated with matrices of irreducible group representations

Formulas (31) can only be used to obtain the canonical decomposition of the representation. In the general case, it is necessary to use matrices of irreducible representations, which allow one to construct the corresponding projection operators.

Theorem 2. Let t_jk^(r)(g) be the matrix elements of an irreducible representation T_r(g) of the group G. An operator of the form

P_jk^(r) = (m_r / |G|) Σ_{g∈G} t_jk^(r)(g)* T(g)  (33)

is a projection operator and is called the Wigner operator. In expression (33), m_r is the dimension of the representation T_r(g).

4. Decomposition of a representation into a direct sum of irreducible representations using the Wigner operator

Denote by M the module associated with the representation T. Let the irreducible representations T_1, T_2, …, T_t from the canonical decomposition of the representation, obtained by the method described earlier (see § 4), correspond to the irreducible submodules M_1, M_2, …, M_t. A decomposition of the module M of the form

M = n_1 M_1 ⊕ n_2 M_2 ⊕ … ⊕ n_t M_t  (34)

is called the canonical decomposition of the module M. Denote n_i M_i = L_i, so that

M = L_1 ⊕ L_2 ⊕ … ⊕ L_t.  (35)

Denote the irreducible submodules of the modules L_i by M_i^(s), so that

L_i = M_i^(1) ⊕ M_i^(2) ⊕ … ⊕ M_i^(n_i),  i = 1, 2, …, t.  (36)

We need to find these modules.

Let us assume that the problem is solved. Then in each of the modules M_i^(s) (s = 1, 2, …, n_i) an orthonormal basis e_ij^(s) has been found in which the operator T(g) is represented by the matrix T_i(g) of the irreducible representation T_i; the action of the operator T(g) on the basis (according to the rule from § 3) is given by the formula

T(g) e_ij^(s) = Σ_k t_kj^(i)(g) e_ik^(s),  j = 1, 2, …, m_i.  (37)

In this expression m_i is the dimension of the irreducible representation T_i (i = 1, 2, …, t), and e_ij^(s) are the basis elements with number j from the irreducible submodule M_i^(s). Let us now arrange the elements of the basis of L_i for fixed i as follows:

e_i1^(1)   e_i2^(1)   …  e_i,m_i^(1)
e_i1^(2)   e_i2^(2)   …  e_i,m_i^(2)
  …          …             …
e_i1^(n_i) e_i2^(n_i) …  e_i,m_i^(n_i)   (38)

The rows of expression (38) are the bases of the modules M_i^(1), M_i^(2), …, M_i^(n_i). Letting i range from 1 to t, we obtain the desired basis of the entire module M, consisting of m_1 n_1 + m_2 n_2 + … + m_t n_t elements.

Consider now the operator

P_jj^(i) = (m_i / |G|) Σ_{g∈G} t_jj^(i)(g)* T(g),  (39)

acting in the module M (j is fixed). According to Theorem 2, P_jj^(i) is a projection operator. It leaves unchanged all the basis elements e_ij^(s) (s = 1, 2, …, n_i) located in the j-th column of expression (38), and annihilates all the other vectors of the basis. Denote by M_ij the vector space spanned by the orthogonal system of vectors in the j-th column of expression (38). Then P_jj^(i) is the projection operator onto the space M_ij. The operator P_jj^(i) is known, since the diagonal elements of the matrices of the irreducible representations of the group are known, as is the operator T(g).

Now we can solve our problem.

We choose n_i arbitrary basis vectors in M and act on them with the projection operator P_jj^(i). The resulting vectors lie in the space M_ij and are linearly independent, though not necessarily orthogonal and normalized. Let us orthonormalize the resulting system of vectors according to the rule from § 2 and denote it by e_ij^(s), in accordance with the notation adopted under the assumption that the problem is solved. As already indicated, here j is fixed and s = 1, 2, …, n_i. Denote by e_if^(s) (f = 1, 2, …, j−1, j+1, …, m_i) the remaining elements of the basis of the module M_i of dimension n_i m_i, and introduce the operator

P_fj^(i) = (m_i / |G|) Σ_{g∈G} t_fj^(i)(g)* T(g).  (40)

It follows from the orthogonality relations for the matrices of irreducible representations that this operator makes it possible to obtain e_if^(s) by the formula

e_if^(s) = P_fj^(i) e_ij^(s),  s = 1, 2, …, n_i;  i = 1, 2, …, t.  (41)

All of the above can be expressed in the form of the following algorithm.

In order to find a basis of the module M consisting of elements that transform according to the irreducible representations T_i contained in the representation T associated with the module M, it is necessary:

1. Using formula (32), find the dimensions of the subspaces M_ij corresponding to the j-th component of the irreducible representation T_i.

2. Using the projection operator (39), find all the subspaces M_ij.

3. In each subspace M_ij, choose an arbitrary orthonormal basis.

4. Using formula (41), find all the elements of the basis that transform according to the remaining components of the irreducible representation T_i.
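Here is a minimal numpy sketch of this algorithm for the permutation representation of S_3 on R^3 (the realization of the two-dimensional irreducible representation and all names are assumptions made for illustration, not taken from the text):

```python
import numpy as np
from itertools import permutations

# T(g): the permutation representation of S_3 on R^3.
perms = list(permutations(range(3)))
T = {p: np.eye(3)[list(p)] for p in perms}

# Matrix elements t_fj(g) of the 2-dimensional irreducible representation,
# realized on the invariant plane x + y + z = 0 (orthonormal basis U).
U = np.array([[1.0, 1.0], [-1.0, 1.0], [0.0, -2.0]]) / np.sqrt([2.0, 6.0])
t = {p: U.T @ T[p] @ U for p in perms}

def wigner(f, j, m=2, order=6):
    # Formulas (39)/(40): P_fj = (m_i/|G|) * sum_g conj(t_fj(g)) T(g)
    return m / order * sum(np.conj(t[p][f, j]) * T[p] for p in perms)

e1 = wigner(0, 0) @ np.array([1.0, 0.0, 0.0])  # project onto the subspace M_i1
e1 /= np.linalg.norm(e1)                       # orthonormalize (here: normalize)
e2 = wigner(1, 0) @ e1                         # formula (41): the partner vector
print(e1, e2)  # an orthonormal pair transforming by the 2-dim representation
```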

Let a linear operator A act in the Euclidean space E_n, transforming this space into itself.

Let us introduce a definition: the operator A* is called the adjoint of the operator A if for any two vectors x, y from E_n the following equality of scalar products holds:

(Ax, y) = (x, A*y).

One more definition: a linear operator is called self-adjoint if it is equal to its adjoint operator, i.e. the following equality is true:

(Ax, y) = (x, Ay),

or, in particular, (Ax, x) = (x, Ax).

A self-adjoint operator has a number of properties. Let us mention some of them:

    1. The eigenvalues of a self-adjoint operator are real (without proof).

    2. The eigenvectors of a self-adjoint operator corresponding to different eigenvalues are orthogonal. Indeed, if x_1 and x_2 are eigenvectors and λ_1 and λ_2 are their eigenvalues, then Ax_1 = λ_1 x_1 and Ax_2 = λ_2 x_2; from (Ax_1, x_2) = (x_1, Ax_2) we get λ_1(x_1, x_2) = λ_2(x_1, x_2). Since λ_1 and λ_2 are different, it follows that (x_1, x_2) = 0, which was to be proved.

    3. In Euclidean space there exists an orthonormal basis of eigenvectors of the self-adjoint operator A. That is, the matrix of a self-adjoint operator can always be reduced to diagonal form in some orthonormal basis composed of its eigenvectors.
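These properties are easy to confirm numerically (a sketch; the random matrix is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                            # a symmetric (self-adjoint) matrix

w, Q = np.linalg.eigh(A)                     # eigh targets symmetric matrices
print(w)                                     # the eigenvalues are real
assert np.allclose(Q.T @ Q, np.eye(4))       # eigenvectors are orthonormal
assert np.allclose(Q.T @ A @ Q, np.diag(w))  # diagonal form in that basis
```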

Another definition: a self-adjoint operator acting in a Euclidean space is called a symmetric operator. Consider the matrix of a symmetric operator. Let us prove the following statement: for an operator to be symmetric, it is necessary and sufficient that its matrix in an orthonormal basis be symmetric.

Let A be a symmetric operator, i.e.

(Ax, y) = (x, Ay).

If A is the matrix of the operator A, and X and Y are the coordinate columns of the vectors x and y in some orthonormal basis, then

(x, y) = X^T Y = Y^T X,

and we have

(Ax, y) = (AX)^T Y = X^T A^T Y,
(x, Ay) = X^T (AY) = X^T A Y,

i.e. X^T A^T Y = X^T A Y. For arbitrary column matrices X, Y this equality is possible only when A^T = A, which means that the matrix A is symmetric.

Consider some examples of linear operators.

The projection operator. Let it be required to find the matrix of a linear operator that projects three-dimensional space onto the coordinate axis e_1 in the basis e_1, e_2, e_3. The matrix of a linear operator is a matrix whose columns contain the images of the basis vectors e_1 = (1,0,0), e_2 = (0,1,0), e_3 = (0,0,1). These images are obviously

Ae_1 = (1,0,0),
Ae_2 = (0,0,0),
Ae_3 = (0,0,0).

Therefore, in the basis e_1, e_2, e_3 the matrix of the desired linear operator has the form

A = | 1 0 0 |
    | 0 0 0 |
    | 0 0 0 |

Let us find the kernel of this operator. By definition, the kernel is the set of vectors X for which AX = 0, that is, the vectors with x_1 = 0. Thus the kernel of the operator is the set of vectors lying in the plane of e_2, e_3. The dimension of the kernel is n − rank A = 2.

The set of images of this operator is obviously the set of vectors collinear with e_1. The dimension of the image space is equal to the rank of the linear operator and equals 1, which is less than the dimension of the preimage space; i.e. the operator A is degenerate. The matrix A is also degenerate.
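A short numerical check of the rank, kernel and image (illustrative sketch):

```python
import numpy as np

A = np.diag([1.0, 0.0, 0.0])             # the projector onto the e1 axis

print(np.linalg.matrix_rank(A))          # 1: dim(image) = 1
print(3 - np.linalg.matrix_rank(A))      # 2: dim(kernel) = n - rank A

# Any vector in the e2,e3 plane is annihilated; images are collinear with e1.
assert np.allclose(A @ np.array([0.0, 2.0, -3.0]), 0.0)
```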

Another example: find the matrix of the linear operator realizing in the space V_3 (basis i, j, k) the linear transformation of symmetry with respect to the origin. We have

Ai = −i,
Aj = −j,
Ak = −k.

That is, the desired matrix is

A = | −1  0  0 |
    |  0 −1  0 |
    |  0  0 −1 |

Consider the linear transformation of symmetry about the plane y = x:

Ai = j = (0,1,0),
Aj = i = (1,0,0),
Ak = k = (0,0,1).

The operator matrix will be

A = | 0 1 0 |
    | 1 0 0 |
    | 0 0 1 |

Another example is the already familiar matrix that relates the coordinates of a vector when the coordinate axes are rotated. Let us call the operator that performs the rotation of the coordinate axes the rotation operator. Suppose a rotation is made through an angle φ:

Ai = cos φ · i + sin φ · j,
Aj = −sin φ · i + cos φ · j.

The rotation operator matrix:

A = | cos φ  −sin φ |
    | sin φ   cos φ |

Recall the formulas for transforming the coordinates of a point when changing the basis (replacing coordinates on the plane when the basis changes):

x = x* cos φ − y* sin φ,
y = x* sin φ + y* cos φ.

These formulas can be considered in two ways. Previously we considered them so that the point stands still while the coordinate system rotates. But they can also be read so that the coordinate system remains the same while the point moves from position M* to position M; the coordinates of M and M* are defined in the same coordinate system.

All of the above allows us to approach a problem that programmers dealing with computer graphics have to solve. Suppose it is necessary to rotate, on the computer screen, some flat figure (for example, a triangle) about the point O' with coordinates (a, b) through some angle φ. The rotation of coordinates is described by the formulas

x′ = x cos φ − y sin φ,
y′ = x sin φ + y cos φ.

Parallel translation gives the relations

x′ = x + a,
y′ = y + b.

In order to solve such a problem, an artificial trick is usually used: the so-called "homogeneous" coordinates of a point on the XOY plane are introduced: (x, y, 1). Then the matrix that performs a parallel translation can be written as

T(a, b) = | 1 0 a |
          | 0 1 b |
          | 0 0 1 |

Indeed:

T(a, b) (x, y, 1)^T = (x + a, y + b, 1)^T.

And the rotation matrix:

R(φ) = | cos φ  −sin φ  0 |
       | sin φ   cos φ  0 |
       |   0       0    1 |

The problem under consideration can be solved in three steps:

1st step: parallel translation by the vector (−a, −b) to align the center of rotation with the origin: T(−a, −b);

2nd step: rotation through the angle φ: R(φ);

3rd step: parallel translation by the vector (a, b) to return the center of rotation to its previous position: T(a, b).

The desired linear transformation in matrix form will look like

(x′, y′, 1)^T = T(a, b) R(φ) T(−a, −b) (x, y, 1)^T.  (**)
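A compact sketch of this three-step composition (column-vector convention assumed; the sample point and angle are illustrative):

```python
import numpy as np

def translation(a, b):
    T = np.eye(3)
    T[:2, 2] = [a, b]          # homogeneous-coordinate translation matrix
    return T

def rotation(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

a, b, phi = 2.0, 1.0, np.pi / 2
M = translation(a, b) @ rotation(phi) @ translation(-a, -b)  # formula (**)

p = np.array([3.0, 1.0, 1.0])  # the point (3, 1) in homogeneous coordinates
print(M @ p)                   # [2. 2. 1.]: (3,1) rotated 90° about (2,1)
```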

Dirac bra and ket vectors are remarkable in that they can be used to write various types of products.

The product of a bra vector and a ket vector is called the scalar product or inner product. In fact, this is a standard matrix product by the row-by-column rule. Its result is a complex number.

The product of a ket vector and another ket vector no longer gives a number, but another ket vector. It is also represented as a column vector, but with the number of components equal to the product of the dimensions of the original vectors. Such a product is called a tensor product or a Kronecker product.

The same is true for the product of two bra vectors. We get a large row vector.

The last option is to multiply a ket vector by a bra vector, that is, to multiply a column by a row. Such a product is also called the tensor or outer product. As a result, a matrix is obtained, that is, an operator.
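All four products are one-liners in numpy (a sketch with illustrative 2-component vectors):

```python
import numpy as np

ket = np.array([[1.0 + 0.0j], [2.0]])  # |v>: a column vector
bra = ket.conj().T                     # <v|: its Hermitian conjugate, a row

print(bra @ ket)          # inner product <v|v>: a (complex) number
print(np.kron(ket, ket))  # tensor (Kronecker) product: a 4-component ket
print(np.kron(bra, bra))  # tensor product of two bras: a long row vector
print(ket @ bra)          # outer product |v><v|: a matrix, i.e. an operator
```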

Let's consider an example of using such operators.

Let us take some arbitrary Hermitian operator A. According to the postulates, some observable quantity corresponds to it. The eigenvectors of a Hermitian operator form a basis, so the most general state vector can be expanded in this basis, that is, represented as a sum of basis vectors with certain complex coefficients. This fact is known as the principle of superposition. Let us rewrite the expansion using the summation sign:

|ψ⟩ = Σ_i c_i |a_i⟩.

But the coefficients in the expansion of the vector in terms of the basis vectors are the probability amplitudes, that is, the scalar products of the state vector with the corresponding basis vectors: c_i = ⟨a_i|ψ⟩. Let us write this amplitude to the right of the vector:

|ψ⟩ = Σ_i |a_i⟩⟨a_i|ψ⟩.

The expression under the sum sign can be viewed as the multiplication of the ket vector |a_i⟩ by a complex number, the probability amplitude. On the other hand, it can be viewed as the product of the matrix |a_i⟩⟨a_i|, obtained by multiplying a ket vector by a bra vector, with the original ket vector |ψ⟩. The ket vector |ψ⟩ can be taken out from under the summation sign, and the same psi vector then appears on both sides of the equals sign. This means that the whole sum does nothing to the vector and is therefore equal to the identity matrix:

Σ_i |a_i⟩⟨a_i| = I.

This formula is very useful when manipulating expressions with products of bra and ket vectors, since the identity can be inserted anywhere in a product.

Let us see what the matrices entering the sum look like; each is obtained as the tensor product of a basis ket vector with its Hermitian conjugate. Again, for clarity, let us draw an analogy with ordinary vectors in three-dimensional space.

Let us choose unit basis vectors e_x, e_y and e_z coinciding in direction with the coordinate axes. The tensor product of the vector e_x with its conjugate is represented by the matrix

P_x = e_x e_x^T = | 1 0 0 |
                  | 0 0 0 |
                  | 0 0 0 |

Take an arbitrary vector v. What happens when this matrix is multiplied by the vector? The matrix simply zeroes out all components of the vector except the x-component. The result is a vector directed along the x-axis, that is, the projection of the original vector onto the basis vector e_x. It turns out that our matrix is nothing more than a projection operator.

The remaining two projection operators, onto the basis vectors e_y and e_z, are represented by similar matrices and perform a similar function: they set all but one component of the vector to zero.

What happens when we sum projection operators? Let us add, for example, the operators P_x and P_y. Such a matrix will zero out only the z-component of a vector, so the resulting vector will always lie in the x-y plane. That is, we obtain the projection operator onto the x-y plane.

Now it is clear why the sum of all the projection operators onto the basis vectors is equal to the identity matrix. In our example we get the projection of a three-dimensional vector onto the three-dimensional space itself; the identity matrix is essentially the projection of a vector onto itself.
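In numpy these outer-product projectors look as follows (an illustrative sketch):

```python
import numpy as np

ex, ey, ez = np.eye(3)
Px, Py, Pz = (np.outer(e, e) for e in (ex, ey, ez))

v = np.array([1.0, 2.0, 3.0])
print(Px @ v)         # [1. 0. 0.]: projection onto the x-axis
print((Px + Py) @ v)  # [1. 2. 0.]: projection onto the x-y plane

assert np.allclose(Px + Py + Pz, np.eye(3))  # the sum is the identity
```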

It turns out that specifying a projection operator is equivalent to specifying a subspace of the original space. In the considered case of three-dimensional Euclidean space, this can be a one-dimensional line defined by a single vector or a two-dimensional plane defined by a pair of vectors.

Returning to quantum mechanics with its state vectors in Hilbert space, we can say that projection operators define a subspace and project the state vector onto this subspace of the Hilbert space.

Let us present the main properties of projection operators.

  1. Successive application of the same projection operator is equivalent to a single application: this property is usually written as P^2 = P. Indeed, if the first operator has projected a vector into a subspace, the second will do nothing with it, since the vector is already in that subspace.
  2. Projection operators are Hermitian operators; accordingly, in quantum mechanics they correspond to observable quantities.
  3. The eigenvalues of projection operators of any dimension are only the numbers one and zero: either the vector is in the subspace or it is not. Because of this binarity, the observable described by a projection operator can be formulated as a question whose answer is "yes" or "no". For example: does the spin of the first electron in the singlet state point up along the z-axis? Such a question can be put in correspondence with a projection operator, and quantum mechanics allows one to calculate the probabilities of the answer "yes" and the answer "no".

In what follows, we will talk more about projection operators.