
Unit 15: Simultaneous Triangulation and Simultaneous Diagonalization




Proof: It is no loss of generality to assume that ℱ contains only a finite number of operators, because of this observation. Let {T₁, ..., Tᵣ} be a maximal linearly independent subset of ℱ, i.e., a basis for the subspace spanned by ℱ. If α is a vector such that (b) holds for each Tᵢ, then (b) will hold for every operator which is a linear combination of T₁, ..., Tᵣ; for if each Tᵢα lies in the subspace spanned by W and α, then so does every linear combination of the vectors Tᵢα.
By the lemma before Theorem 1 of unit 14 (this lemma in the case of a single operator), we can find a vector β₁ (not in W) and a scalar c₁ such that (T₁ − c₁I)β₁ is in W. Let V₁ be the collection of all vectors β in V such that (T₁ − c₁I)β is in W. Then V₁ is a subspace of V which is properly larger than W. Furthermore, V₁ is invariant under ℱ, for the following reason. If T commutes with T₁, then

          (T₁ − c₁I)(Tβ) = T(T₁ − c₁I)β.

If β is in V₁, then (T₁ − c₁I)β is in W. Since W is invariant under each T in ℱ, we have T(T₁ − c₁I)β in W, i.e., Tβ in V₁, for all β in V₁ and all T in ℱ.
Now W is a proper subspace of V₁. Let U₂ be the linear operator on V₁ obtained by restricting T₂ to the subspace V₁. The minimal polynomial for U₂ divides the minimal polynomial for T₂. Therefore, we may apply the lemma before Theorem 1 of unit 14 to that operator and the invariant subspace W. We obtain a vector β₂ in V₁ (not in W) and a scalar c₂ such that (T₂ − c₂I)β₂ is in W. Note that

(a)  β₂ is not in W;
(b)  (T₁ − c₁I)β₂ is in W;
(c)  (T₂ − c₂I)β₂ is in W.
Let V₂ be the set of all vectors β in V₁ such that (T₂ − c₂I)β is in W. Then V₂ is invariant under ℱ. Apply the lemma before Theorem 1 of unit 14 to U₃, the restriction of T₃ to V₂. If we continue in this way, we shall reach a vector α = βᵣ (not in W) such that (Tⱼ − cⱼI)α is in W, j = 1, ..., r.
Theorem 1: Let V be a finite-dimensional vector space over the field F. Let ℱ be a commuting family of triangulable linear operators on V. There exists an ordered basis for V such that every operator in ℱ is represented by a triangular matrix in that basis.

Proof: Given the lemma which we just proved, this theorem has the same proof as does Theorem 1 of unit 14, if one replaces T by ℱ.
          Corollary: Let  be a commuting family of n × n matrices over an algebraically closed field F.
          There exists a non-singular n × n matrix P with entries in F such that P AP is upper-triangular,
                                                                   –1
          for every matrix A in .
Theorem 2: Let ℱ be a commuting family of diagonalizable linear operators on the finite-dimensional vector space V. There exists an ordered basis for V such that every operator in ℱ is represented in that basis by a diagonal matrix.
Proof: We could prove this theorem by adapting the lemma before Theorem 1 of this unit to the diagonalizable case, just as we adapted the lemma before Theorem 1 of unit 14 to the diagonalizable case in order to prove Theorem 2 of unit 14. However, at this point it is easier to proceed by induction on the dimension of V.
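The induction carried out below is also effectively an algorithm: split V into the characteristic spaces of one member of ℱ that is not a scalar multiple of the identity, restrict the whole family to each characteristic space, and recurse. As a purely illustrative numerical sketch (numpy again, with the invented name simultaneous_diagonalize and an arbitrary tolerance), it looks like this.

import numpy as np

def simultaneous_diagonalize(mats, tol=1e-8):
    # Returns P whose columns are common characteristic vectors, so that
    # P^{-1} M P is diagonal for every M in the commuting family `mats`
    # (each member assumed diagonalizable, as in Theorem 2).
    n = mats[0].shape[0]
    # Find a member T that is not a scalar multiple of the identity.
    T = next((M for M in mats if not np.allclose(M, M[0, 0] * np.eye(n))), None)
    if T is None:
        return np.eye(n, dtype=complex)          # every member is scalar: done
    evals, evecs = np.linalg.eig(T)
    # Collect the distinct characteristic values c_1, ..., c_k of T.
    reps = []
    for ev in evals:
        if all(abs(ev - c) > tol for c in reps):
            reps.append(ev)
    columns = []
    for c in reps:
        W = evecs[:, np.abs(evals - c) < tol]    # basis of W_i = null(T - c I)
        # Restrict every operator in the family to the invariant subspace W_i
        # and diagonalize the restricted family (the induction hypothesis).
        restricted = [np.linalg.lstsq(W, M @ W, rcond=None)[0] for M in mats]
        columns.append(W @ simultaneous_diagonalize(restricted, tol))
    return np.column_stack(columns)

# Example: A1 and A2 commute and are individually diagonalizable.
A1 = np.diag([1.0, 1.0, 2.0])
A2 = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 5.0]])
P = simultaneous_diagonalize([A1, A2])
for M in (A1, A2):
    D = np.linalg.inv(P) @ M @ P
    print(np.allclose(D, np.diag(np.diag(D))))   # True: D is diagonal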

If dim V = 1, there is nothing to prove. Assume the theorem for vector spaces of dimension less than n, and let V be an n-dimensional space. Choose any T in ℱ which is not a scalar multiple of the identity. Let c₁, ..., cₖ be the distinct characteristic values of T, and (for each i) let Wᵢ be the null space of T − cᵢI. Fix an index i. Then Wᵢ is invariant under every operator which commutes with T. Let ℱᵢ be the family of linear operators on Wᵢ obtained by restricting the operators in ℱ to the (invariant) subspace Wᵢ. Each operator in ℱᵢ is diagonalizable, because its minimal polynomial divides the minimal polynomial for the corresponding operator in ℱ. Since dim Wᵢ < dim V, the operators in ℱᵢ can be simultaneously diagonalized. In other words, Wᵢ has a
