The Group Algebra of A Finite Group and Maschke's Theorem

source link: https://desvl.xyz/2022/01/18/group-algebra/

In this post, the reader is assumed to have a background in elementary representation theory; chapter one of any book on representation theory will do the trick. I also have to assume some background in ring theory and module theory. Although we may allow a ring to be non-commutative, it is always considered unitary (having $1$).

Let $G$ be a finite group and let $k$ be a commutative ring. The group algebra of $G$ over $k$ is denoted by $k[G]$, which firstly is an algebra over $k$. A basis of $k[G]$ is given by $(e_g)_{g \in G}$. The product rule on $k[G]$ is induced by

$$e_g e_h = e_{gh}.$$

With this being said, given $x = \sum_{g \in G} a_g e_g$ and $y = \sum_{g \in G} b_g e_g$, we have

$$xy = \sum_{g, h \in G} a_g b_h e_{gh} = \sum_{g \in G} \left( \sum_{h \in G} a_h b_{h^{-1}g} \right) e_g.$$

For example, take $G = C_3 = \{e, g, g^2\}$, the cyclic group of three elements. If (say) $x = e + 2g$ and $y = g + g^2$, then

$$xy = (e + 2g)(g + g^2) = g + g^2 + 2g^2 + 2e = 2e + g + 3g^2.$$
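This multiplication is just a convolution of coefficients, which is easy to check numerically. Below is a minimal sketch for $k[C_3]$, representing an element as a coefficient list `[c0, c1, c2]` standing for $c_0 e + c_1 g + c_2 g^2$ (the helper name `convolve_c3` and the sample elements are mine, not from the post):

```python
def convolve_c3(a, b):
    """Multiply two elements of k[Z/3] written as coefficient lists:
    (a*b)_k = sum over i + j = k (mod 3) of a_i * b_j."""
    out = [0, 0, 0]
    for i in range(3):
        for j in range(3):
            out[(i + j) % 3] += a[i] * b[j]
    return out

# (e + 2g)(g + g^2) = 2e + g + 3g^2
print(convolve_c3([1, 2, 0], [0, 1, 1]))  # [2, 1, 3]
```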

As one will notice, the structure of this algebra is determined by both $k$ and $G$, although we don't know exactly how at this moment. If we take $k$ to be a field, then everything is very simple, and a lot of things from elementary linear algebra can be recovered here. That is part of the mission of this blog post. Before we dive in, we need to look into group algebras in a general setting first. It is not common to see group algebras and representation theory treated together, but let's try it. While the majority of this post is (non-commutative) ring theory and module theory, we encourage the reader to use representation theory as a source of examples. Standalone examples may drive us too far, and we may not have enough space for them.

Basic Facts of Group Algebra And Its Connection to Representation Theory

First of all, we list some very obvious facts that do not even need proof.

  1. $k[G]$ is a free $k$-module with basis $(e_g)_{g \in G}$, hence of rank $|G|$.

  2. $k[G]$ is itself a ring. The commutativity of $k[G]$ is determined by $G$: it is commutative if and only if $G$ is abelian.

However, one fact is easily overlooked:

Proposition 1. If $G$ is nontrivial, then $k[G]$ is not a division ring.

Proof. Pick $g \in G$ that is not the identity, say of order $m > 1$. Then $e_g - 1$ is a zero-divisor, because if we take $y = 1 + e_g + e_{g^2} + \cdots + e_{g^{m-1}}$, then

$$(e_g - 1)y = e_{g^m} - 1 = 1 - 1 = 0.$$

But in a division ring, there is no zero-divisor.
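The zero-divisor in the proof can be verified numerically. Here is a sketch for $G = C_3$ with $g$ a generator, reusing the coefficient-list representation of $k[C_3]$ (the helper name is my own):

```python
def convolve_c3(a, b):
    """Multiply in k[Z/3]; elements are coefficient lists for e, g, g^2."""
    out = [0, 0, 0]
    for i in range(3):
        for j in range(3):
            out[(i + j) % 3] += a[i] * b[j]
    return out

g_minus_1 = [-1, 1, 0]            # e_g - 1
y = [1, 1, 1]                     # 1 + e_g + e_{g^2}
print(convolve_c3(g_minus_1, y))  # [0, 0, 0]: a zero-divisor indeed
```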

As a ring, we certainly can consider modules over $k[G]$, which brings us to the following section.

Simple and Semisimple Modules

Simplicity

Let $R$ be a ring (not assumed to be commutative here). A nonzero $R$-module $S$ is called simple if it has no nontrivial submodule, i.e. no submodule other than $0$ and $S$. This may remind you of irreducible or simple representations of a group. We will see the connection later. Following the definition, we immediately have a special version of Schur's lemma:

Proposition 2 (Schur's Lemma). Let $S, T$ be two simple $R$-modules. Every nontrivial homomorphism $f: S \to T$ is an isomorphism.

Proof. Note $\ker f$ and $f(S)$ are submodules of $S$ and $T$ respectively. Since $f$ is nontrivial and $S, T$ are simple, we have $\ker f = 0$ and $f(S) = T$, which is to say that $f$ is an isomorphism.

Corollary 1. If $S$ is a simple $R$-module, then $\operatorname{End}_R(S)$ is a division ring.

Proof. If $f \in \operatorname{End}_R(S)$ is nontrivial, then according to Schur's lemma it is an isomorphism, hence it has an inverse.

This definitely reminds you of irreducible representations. But just as not every representation is irreducible, not every module is simple. Recall Maschke's theorem from representation theory: every representation of a finite group over $\mathbb{C}$ having positive dimension is completely reducible. For modules, we have a similar statement.

Semisimplicity

Definition-Proposition 3. Let $M$ be an $R$-module. Then the following three conditions are equivalent:

SS 1. $M$ is a sum of simple $R$-modules.

SS 2. $M$ is a direct sum of simple $R$-modules.

SS 3. For every submodule $N$ of $M$, there is another submodule $N'$ such that $M = N \oplus N'$, i.e. every submodule is a direct summand.

If $M$ satisfies the three conditions above, then $M$ is called semisimple. A ring $R$ is semisimple if it is a semisimple left module over itself.

Proof. Assume SS 1, say we have $M = \sum_{i \in I} S_i$ with each $S_i$ simple. Let $J$ be a maximal subset of $I$ such that $N = \sum_{j \in J} S_j$ is a direct sum (this exists by Zorn's lemma). Pick any $i \in I$. Then $S_i \cap N$ is a submodule of $S_i$, which can either be $0$ or $S_i$. If $S_i \cap N = S_i$ then $S_i \subset N$. If the intersection were $0$ however, the sum $S_i + N$ would be direct, which is to say that $J \cup \{i\}$ is a bigger subset of $I$ yielding a direct sum. A contradiction. Hence $S_i \subset N$ holds for all $i \in I$, i.e. $M = N = \bigoplus_{j \in J} S_j$.

Next we assume SS 2 and we have $M = \bigoplus_{i \in I} S_i$. Pick any submodule $N \subset M$. Let $J$ be a maximal subset of $I$ such that the sum $N + \sum_{j \in J} S_j$ is direct. In the same manner we see $S_i \subset N \oplus \bigoplus_{j \in J} S_j$ for all $i \in I$, which proves SS 3 with $N' = \bigoplus_{j \in J} S_j$.

Finally we assume SS 3. Let $N$ be the sum of all simple submodules of $M$. Then there is a submodule $N'$ of $M$ such that $M = N \oplus N'$. Assume $N' \ne 0$; then $N'$ has a simple submodule, which contradicts the definition of $N$. Hence $N' = 0$ and $M = N$. The reason why a nontrivial $N'$ must have a simple submodule is contained in the following lemma.

Lemma 4. Let $M$ be an $R$-module satisfying SS 3; then every nontrivial submodule of $M$ has a simple submodule.

Proof. It suffices to show that every nontrivial cyclic submodule has a simple submodule. Indeed, for any nontrivial submodule $N$, we pick a nonzero $v \in N$; then $Rv \subset N$.

Let $I$ be the kernel of the morphism

$$R \to Rv, \quad a \mapsto av.$$

Then $I$ is a left ideal, which is contained in a maximal left ideal $\mathfrak{m}$ of $R$. It follows that $\mathfrak{m}v$ is a maximal submodule of $Rv$, because $\mathfrak{m}$ is a maximal left ideal of $R$ and we have the isomorphism

$$Rv \cong R/I.$$

By SS 3, we can find a submodule $N''$ such that

$$M = \mathfrak{m}v \oplus N'',$$

which gives

$$Rv = \mathfrak{m}v \oplus (N'' \cap Rv).$$

We claim that $S = N'' \cap Rv$ is simple. Pick any proper submodule $P \subsetneq S$; then $\mathfrak{m}v \oplus P$ is a submodule of $Rv$ containing $\mathfrak{m}v$, which has to be $\mathfrak{m}v$ itself, i.e. $P = 0$, because of the maximality of $\mathfrak{m}v$. This proves our statement.

Proposition 5. Let $M$ be a semisimple $R$-module; then every nontrivial submodule and every quotient module of $M$ is semisimple.

Proof. Write $M = \bigoplus_{i \in I} S_i$ with each $S_i$ simple, and pick a submodule $N$ of $M$. Let $J$ be a maximal subset of $I$ such that the sum

$$N + \sum_{j \in J} S_j$$

is direct. As in the proof of Definition-Proposition 3, this direct sum is actually $M$, so $M = N \oplus N'$ where $N' = \bigoplus_{j \in J} S_j$. Therefore $M/N \cong N'$ is a direct sum of simple modules, i.e. every quotient module of $M$ is semisimple. In particular, since $N \cong M/N'$ is itself a quotient of $M$, the submodule $N$ is semisimple as well.

Corollary 6. $R$ is a semisimple ring if and only if every $R$-module is semisimple.

Proof. By the universal property of free modules, every $R$-module is a factor module of a free $R$-module, while a free $R$-module is a direct sum of copies of $R$. Hence if $R$ is semisimple, then every $R$-module is semisimple by Proposition 5. Conversely, if every $R$-module is semisimple, then $R$ is semisimple because it is a left module over itself.

Jacobson Radical and Semisimplicity

Let $A$ be a ring. We say it is a finite dimensional algebra if it is also a finite dimensional vector space over some field $k$. We study the Jacobson radical $J = J(A)$, the intersection of all maximal left ideals of $A$, in this subsection; it will be used in the next section.

We summarise what we want to prove in the following proposition.

Proposition 7 (Jacobson Radical). Let $A$ be a ring (not necessarily commutative) and let $J$ be the Jacobson radical of $A$. Then

  1. $J$ is a two-sided ideal containing every nilpotent left ideal of $A$.

  2. For every simple $A$-module $S$ we have $JS = 0$. More precisely,

$$J = \bigcap_{S \text{ simple}} \operatorname{Ann}(S).$$

  3. Suppose $A$ is a finite dimensional algebra (or, more generally, that $A$ is Artinian). Then $A/J$ is semisimple, and if $I$ is a two-sided ideal such that $A/I$ is semisimple, then $I \supset J$. It follows that $A$ is semisimple if and only if $J$ is trivial.

  4. Assumptions being as above, $J$ is nilpotent.

Proof. We first prove 2. Pick any $a$ that annihilates all simple $A$-modules. For any maximal left ideal $\mathfrak{m}$, $A/\mathfrak{m}$ is simple. Therefore $a(A/\mathfrak{m}) = 0$, which implies that $a \in \mathfrak{m}$. Therefore $a \in \bigcap_{\mathfrak{m}} \mathfrak{m} = J$.

Conversely, suppose $a \in J$ but $as \ne 0$ for some simple $S$ and some $s \in S$. Since $Aas$ is a nonzero submodule of $S$ and $S$ is simple, we have $Aas = S$. More precisely, there exists some $b \in A$ such that $bas = s$. Therefore $1 - ba$ is in the annihilator $\operatorname{Ann}(s)$, which is a left ideal not equal to $A$ (as $s \ne 0$), hence contained in a maximal left ideal $\mathfrak{m}$. But we also have $a \in J \subset \mathfrak{m}$, hence $ba \in \mathfrak{m}$. Therefore $1 = (1 - ba) + ba \in \mathfrak{m}$ and $\mathfrak{m} = A$, which is absurd. Hence 2 is proved.

Next we prove 1. By definition $J$ is a left ideal. Now pick any $a \in J$ and $b \in A$. It follows that $abS = 0$ for all simple $S$. Indeed, if $s \in S$, then $bs \in S$ and therefore $a(bs) = 0$. Hence $ab \in J$ by 2, and $J$ is two-sided. If $L$ is a nilpotent left ideal and $S$ is simple, then $LS = 0$. If not, then $LS$ is a nonzero submodule of $S$, say $LS = S$, and then $L^m S = S$ for all $m$, while $L^m = 0$ for some $m$. A contradiction. Therefore $L \subset J$ by 2, and 1 is proved as well.

To prove 3, we first note that $A$ is Artinian: every descending chain of left ideals must stabilise. (For a finite dimensional algebra this is forced by the dimension over $k$.) It follows that $J$ is the intersection of finitely many maximal left ideals, for the descending chain

$$\mathfrak{m}_1 \supset \mathfrak{m}_1 \cap \mathfrak{m}_2 \supset \mathfrak{m}_1 \cap \mathfrak{m}_2 \cap \mathfrak{m}_3 \supset \cdots$$

must stabilise after finitely many steps. Therefore we can write $J = \bigcap_{i=1}^{n} \mathfrak{m}_i$ for some maximal left ideals of $A$. Now consider the map

$$A/J \to \prod_{i=1}^{n} A/\mathfrak{m}_i, \quad a + J \mapsto (a + \mathfrak{m}_1, \dots, a + \mathfrak{m}_n).$$

Since $J = \bigcap_{i} \mathfrak{m}_i$, this map is injective. Hence $A/J$ is a submodule of the semisimple module $\prod_{i} A/\mathfrak{m}_i$ (each $A/\mathfrak{m}_i$ is simple), and is therefore semisimple by Proposition 5. We are done.

Now suppose $I$ is a two-sided ideal such that $A/I$ is semisimple. By definition we can write

$$A/I = \bigoplus_{t \in T} S_t$$

for some simple $A/I$-modules $S_t$, which are simple $A$-modules as well. Pick any $a \in J$; we have $aS_t = 0$ for all $t$, therefore $a(A/I) = 0$, which implies that $a(1 + I) = a + I = 0$, i.e. $a \in I$. (In fact, according to the structure theorem of semisimple rings, $T$ is finite.)

If $J = 0$, then $A = A/J$ is semisimple. Conversely, if $A$ is semisimple, then $I = 0$ is a two-sided ideal such that $A/I$ is semisimple. Hence $J \subset I = 0$ has to be trivial as well.

To prove 4, we work on the descending chain $J \supset J^2 \supset J^3 \supset \cdots$. Let $J^m$ be the ideal where the chain stops shrinking, so that $J \cdot J^m = J^{m+1} = J^m$. Then according to Nakayama's lemma ($J^m$ is finitely generated since $A$ is Artinian), $J^m = 0$, which implies that $J$ is nilpotent.
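As a concrete illustration of 3 and 4 (my own example, not from the post): when $\operatorname{char} k = p$ divides $|G|$, the radical of $k[G]$ is nonzero and nilpotent. For $A = \mathbb{F}_3[\mathbb{Z}/3]$ one can check numerically that $n = e_g - 1$ is nilpotent with $n^3 = 0$:

```python
def convolve_c3_mod3(a, b):
    """Multiply in F_3[Z/3]; elements are coefficient lists mod 3."""
    out = [0, 0, 0]
    for i in range(3):
        for j in range(3):
            out[(i + j) % 3] = (out[(i + j) % 3] + a[i] * b[j]) % 3
    return out

n = [2, 1, 0]                # e_g - 1 = -1 + e_g, with -1 = 2 in F_3
n2 = convolve_c3_mod3(n, n)  # (e_g - 1)^2 = 1 + e_g + e_{g^2} in F_3
n3 = convolve_c3_mod3(n2, n)
print(n2, n3)                # [1, 1, 1] [0, 0, 0]
```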

Group algebra and representation

Let $k$ be a commutative ring and $G$ a finite group. Let $V$ be a $k$-module. We can study the representation

$$\rho: G \to \operatorname{Aut}_k(V),$$

and we can also study the ring homomorphism

$$\tilde{\rho}: k[G] \to \operatorname{End}_k(V).$$

We show that they are the same thing. Given $\tilde{\rho}$, then for any $g \in G$, $\tilde{\rho}(e_g)$ is an automorphism because $\tilde{\rho}(e_g)\tilde{\rho}(e_{g^{-1}}) = \tilde{\rho}(e_{gg^{-1}}) = \operatorname{id}_V$. Therefore $\tilde{\rho}$ gives rise to a representation $\rho(g) = \tilde{\rho}(e_g)$.

Conversely, for a representation $\rho$ and any $g \in G$, $\rho(g)$ is automatically an endomorphism, and therefore we have a map

$$\tilde{\rho}: k[G] \to \operatorname{End}_k(V), \quad \sum_{g \in G} a_g e_g \mapsto \sum_{g \in G} a_g \rho(g).$$

Therefore, the study of group representations can also be transferred into the study of group algebras. For simplicity we call such a module $V$ together with a representation $\rho$ a $G$-module, which you may have known. Note such a $G$-module can also be considered as a module over $k[G]$ in the usual sense. Conversely, a $k[G]$-module is a $G$-module. When the context is clear, we write $gv$ in place of $\rho(g)v$.

Semisimplicity of Group Algebra Over A Field

We generalise Maschke's theorem to an arbitrary field $k$.

Theorem 8 (Maschke). Let $G$ be a finite group of order $n$. Let $k$ be a field; then $k[G]$ is semisimple if and only if the characteristic of $k$ does not divide $n$ (the characteristic can also be $0$).

In introductory representation theory, we study the case when $k = \mathbb{C}$ or $k = \mathbb{R}$, whose characteristic is definitely $0$.

Proof. Let $V$ be a $k[G]$-module, and let $W$ be a $k[G]$-submodule. We show that $W$ is a direct summand of $V$, i.e., there exists some $W'$ such that $V = W \oplus W'$; this establishes SS 3. It is natural to think about a projection $p: V \to W$, where $p(w) = w$ for all $w \in W$. It is seemingly clear that $W' = \ker p$ is what we want, but we can't do this yet: we only know that $p$ is a $k$-linear map, and we have no idea whether it is a $k[G]$-linear map. To work around this problem, we modify the projection into a $k[G]$-linear map.

To do this, we average over conjugation. To be precise, we consider the map

$$p_0: V \to W, \quad v \mapsto \frac{1}{n} \sum_{g \in G} g \, p(g^{-1} v).$$

This map is $k[G]$-linear. We therefore can write $V = W \oplus \ker p_0$, because $p_0$ is a left inverse of the inclusion $W \hookrightarrow V$. Indeed, for any $w \in W$, we have

$$p_0(w) = \frac{1}{n} \sum_{g \in G} g \, p(g^{-1} w) = \frac{1}{n} \sum_{g \in G} g \, g^{-1} w = w.$$

Note, since $W$ is a $k[G]$-module, we have $g^{-1}w \in W$ and therefore $p(g^{-1}w) = g^{-1}w$. Also, the fact that $\operatorname{char} k \nmid n$ is used here: if the characteristic divided $n$, then $n = 0$ in $k$ and therefore $\frac{1}{n}$ would not be defined.

Next we suppose that $\operatorname{char} k = p$ divides $n$. Consider the element

$$s = \sum_{g \in G} e_g.$$

Note $e_h s = s$ for all $h \in G$ and therefore

$$s^2 = \sum_{h \in G} e_h s = ns = 0$$

because $p \mid n$. Therefore $s$ is a nonzero nilpotent element, and $k[G]s = ks$ is a nonzero nilpotent left ideal, from which it follows that the Jacobson radical of $k[G]$ is nonzero and $k[G]$ is not semisimple according to Proposition 7.

In other words, if $V$ is a finite dimensional representation over $k$ of the group $G$, and the characteristic of $k$ does not divide $|G|$, then $V$ is completely reducible. Recall that we also have the block decomposition of a matrix representation. But this is not very easy to generalise. To work with it we need a clearer look at semisimple rings.
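The averaging step of the proof can be watched in action. Below is a minimal numerical sketch (the matrices are my own choice, not from the post), assuming $G = \mathbb{Z}/2$ acts on $k^2$ by swapping coordinates and $W$ is the line spanned by $(1, 1)$: the naive projection $p$ is not $G$-linear, but its average $p_0 = \frac{1}{2}(p + \sigma p \sigma^{-1})$ is, and it is still a projection onto $W$.

```python
import numpy as np

sigma = np.array([[0., 1.], [1., 0.]])  # the swap; note sigma = sigma^{-1}
p = np.array([[1., 0.], [1., 0.]])      # p(x, y) = (x, x): k-linear projection onto W

# p is not G-linear: sigma p != p sigma
print(np.allclose(sigma @ p, p @ sigma))    # False

# average over G = {1, sigma}
p0 = (p + sigma @ p @ sigma) / 2
print(p0)                                   # [[0.5 0.5], [0.5 0.5]]
print(np.allclose(p0 @ p0, p0))             # True: still a projection
print(np.allclose(sigma @ p0, p0 @ sigma))  # True: now G-linear
```

The kernel of $p_0$ is the line spanned by $(1, -1)$, which is the invariant complement $W'$ the proof produces.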

Structure of Semisimple Group Algebras

It would be great if, given a matrix representation, we could decompose it into a block diagonal matrix, with each block being a subrepresentation. But it would not be an easy job: we need to know whether the field is algebraically closed, its characteristic, et cetera. Perhaps we would need some Galois theory, but that has gone too far from this post. Anyway, we need to see through the structure to know how to work with it.

Structure theorem of semisimple rings

In this section we study the structure of a semisimple ring $R$ in a more detailed way. We say a ring is simple if it is semisimple and all of its simple left ideals are isomorphic. A left ideal is called simple if it is a simple left $R$-module.

Theorem 9 (Structure theorem of semisimple rings). Let $R$ be a semisimple ring. Then the set of isomorphism classes of simple left ideals of $R$ is finite. Say it is represented by $L_1, \dots, L_s$. If $R_i = \sum_{L \cong L_i} L$ (the sum of all simple left ideals isomorphic to $L_i$), then $R_i$ is a two-sided ideal, and $R_i$ is a simple ring. One can write $R$ as a product

$$R = R_1 \times \cdots \times R_s.$$

Besides, $R$ admits a Peirce decomposition with respect to these $R_i$. There are elements $u_i \in R_i$ such that $1 = u_1 + \cdots + u_s$. The $u_i$ are idempotent ($u_i^2 = u_i$) and orthogonal ($u_i u_j = 0$ if $i \ne j$). As a ring, $u_i$ is the multiplication identity of $R_i$, and $R_i = Ru_i$.

Proof. To begin with we first study the behaviour of simple left ideals.

Lemma 10. Let $L$ be a simple left ideal of $R$ and let $S$ be a simple $R$-module; then $LS = 0$ unless $L \cong S$.

Proof of the lemma. Since $S$ is simple, $LS = 0$ or $LS = S$. If $LS = S$, then there exists some $s \in S$ such that $Ls \ne 0$, hence $Ls = S$ (again by the simplicity of $S$). Therefore the map

$$L \to S, \quad a \mapsto as$$

is surjective. It is injective because the kernel is a proper submodule of the simple module $L$ and hence has to be trivial.

According to this lemma, $R_i R_j = 0$ whenever $i \ne j$. This will be frequently used. For the time being we can write $R = \sum_{i \in I} R_i$, although we don't know yet whether $I$ is finite. Firstly we show that $R_i$ is also a right ideal (since it is a sum of left ideals, it is by default a left ideal):

$$R_i \subset R_i R = R_i \sum_{j \in I} R_j = R_i R_i \subset R_i.$$

Therefore $R_i$ is also a right ideal for all $i \in I$. But before we proceed we need to explain the relation above. Since $R$ contains the unit, we must have $R_i \subset R_i R$. We have $R_i R = R_i R_i$ because $R_i R_j = 0$ for all $j \ne i$ and $R$ is the sum of all $R_j$ over $j \in I$; therefore all other terms are eliminated. Finally, we have $R_i R_i \subset R_i$ simply because $R_i$ is a left ideal.

Also note that $R_i \cap \sum_{j \ne i} R_j = 0$ for all $i$, because every simple submodule of $\sum_{j \ne i} R_j$ belongs to an isomorphism class distinct from that of $L_i$. Therefore we can write $R = \bigoplus_{i \in I} R_i$ for the time being.

Now consider $1 = \sum_{i \in I} u_i$ with $u_i \in R_i$. This sum is finite (by the definition of direct sum, all but finitely many terms vanish). Let $I'$ be the finite subset of $I$ such that $u_i \ne 0$ for all $i \in I'$. It follows that $R_j = 0$ for all $j \notin I'$, because

$$R_j = R_j \cdot 1 = \sum_{i \in I'} R_j u_i = 0,$$

since $R_j u_i \subset R_j R_i = 0$ for every $i \in I'$ (note $j \notin I'$). We can therefore write $R = \bigoplus_{i \in I'} R_i$; all other direct summands are trivial. Since each $R_i$ represents an isomorphism class of simple left ideals, the set of classes of simple left ideals is finite, say $|I'| = s$.

Now we study the relation of $R_i$, $u_i$ and $R$. For any $x = \sum_{i} x_i \in R$ with $x_i \in R_i$, we have

$$x u_j = x_j u_j, \quad u_j x = u_j x_j, \quad x = x \cdot 1 = \sum_{i} x_i u_i,$$

hence $x_j u_j = x_j$ and likewise $u_j x_j = x_j$. Therefore $u_j$ is the unit in $R_j$ (it follows automatically that $u_j^2 = u_j$). For any $x \in R$, we put $x_i = u_i x$; then there is a unique decomposition

$$x = \sum_{i=1}^{s} u_i x.$$

This gives us a projection $\pi_i: R \to R_i$, $x \mapsto u_i x$. We also have $u_i x = 0$ if $x \in R_j$ with $j \ne i$. Since $R_i R_j = 0$ for $i \ne j$, we can safely write $R = \prod_{i=1}^{s} R_i$ as rings. Each $R_i$ is simple because (1) it is semisimple ($R_i$ is a sum of simple left ideals, and each of them is also a simple $R_i$-module) and (2) all simple left ideals of $R_i$ are isomorphic. To show (2), assume that $L \subset R_i$ is a simple left ideal of $R_i$ that is not isomorphic to $L_i$. Since $RL = R_i L \subset L$, $L$ is also a simple left ideal of $R$. But this contradicts the definition of $R_i$.

Let's extract more information from this theorem. First of all, the sum defining each $R_i$ can also be taken finite; hence each $R_i$ is a finite direct sum of simple left ideals. To be precise,

Theorem 11. Every simple ring $R$ admits a finite direct sum decomposition into simple left ideals:

$$R = \bigoplus_{t=1}^{m} L^{(t)}.$$

Proof. Since $R$ is semisimple, it is a sum of simple left ideals, the collection of which can be chosen to be direct. Say we have $R = \bigoplus_{t \in T} L^{(t)}$.

Consider the decomposition of the unit:

$$1 = \sum_{t \in T} e_t,$$

where $e_t \in L^{(t)}$. This sum is finite, say we have $e_t = 0$ for all $t \notin \{1, \dots, m\}$. Then for any $x \in R$,

$$x = x \cdot 1 = \sum_{t=1}^{m} x e_t \in \bigoplus_{t=1}^{m} L^{(t)}.$$

This proves our assertion.

Combining theorems 9 and 11, we see

Corollary 12. Every semisimple ring $R$ admits a decomposition

$$R = \bigoplus_{i=1}^{s} \bigoplus_{j=1}^{m_i} L_i^{(j)},$$

where $\bigoplus_{j=1}^{m_i} L_i^{(j)}$ denotes a direct sum of $m_i$ mutually isomorphic simple left ideals $L_i^{(j)} \cong L_i$. This direct sum is unique in the following sense: the $L_i$ are unique up to isomorphism, and the multiplicities $m_i$ are unique up to a permutation.

This must remind you of the isotypic decomposition of a representation into irreducible representations. They are the same thing: there we used the semisimplicity of $\mathbb{C}[G]$, and here we are talking about the semisimplicity of an arbitrary ring.

We include here an elementary ring theory result that really doesn't need a proof here.

Proposition 13. Let $R_1, \dots, R_n$ be rings with units. The direct product

$$R = R_1 \times \cdots \times R_n$$

has the following property: every ideal (no matter left, right or two-sided) of each $R_i$ is an ideal of $R$; every minimal ideal of each $R_i$ is a minimal ideal of $R$; and every minimal ideal of $R$ is an ideal of some $R_i$.

The proof is quite similar to how we proved that $R_i$ is simple in our proof of theorem 9. This actually shows that

Corollary 14. If $R_1, \dots, R_n$ are semisimple rings, then so is $R_1 \times \cdots \times R_n$.

Wedderburn-Artin Ring Theory

We want to work with matrices, i.e., we want to work with linear algebra. This becomes possible because of Wedderburn-Artin theory. We don't know what can happen yet, so we can only try to generalise things very carefully.

When talking about matrices, we can talk about endomorphisms as well. So our first step is to find a bridge to endomorphisms. We now need to consider $R$ as a left module over itself and study $\operatorname{End}_R(R)$.

The most immediate candidate is multiplication. For $a \in R$, we may consider the left multiplication induced by $a$:

$$\lambda_a: R \to R, \quad x \mapsto ax.$$

It may look natural, but unfortunately it is not necessarily an endomorphism of the left $R$-module $R$. The reason is simple: we have $\lambda_a(rx) = arx \ne rax = r\lambda_a(x)$ in general. However, we can define the right multiplication

$$\rho_a: R \to R, \quad x \mapsto xa.$$

Now $\rho_a(rx) = rxa = r\rho_a(x)$ holds naturally. We can show that every endomorphism is defined in this way. Consider the map $\rho: R \to \operatorname{End}_R(R)$, $a \mapsto \rho_a$. We have

  1. $\rho$ is an anti-homomorphism. Indeed, $\rho_{ab}(x) = x(ab) = (xa)b = \rho_b(\rho_a(x))$ for all $x$, hence $\rho_{ab} = \rho_b \circ \rho_a$.

  2. $\rho$ is surjective (as a function, not a homomorphism). For any $f \in \operatorname{End}_R(R)$, we have $f(x) = f(x \cdot 1) = x f(1)$. Therefore $f = \rho_{f(1)}$.

  3. $\rho$ is injective. If $\rho_a = \rho_b$, then in particular $a = \rho_a(1) = \rho_b(1) = b$.

We can call $\rho$ an anti-isomorphism, but that causes headaches. Instead, if we consider the opposite ring $R^{op}$, where addition is the same as in $R$ and multiplication $*$ is given by

$$a * b = ba,$$

then we have

Proposition 14. Let $R$ be a ring. There is a natural isomorphism $R^{op} \to \operatorname{End}_R(R)$ given by $a \mapsto \rho_a$.

Note $(R^{op})^{op} = R$, so we may be able to take the opposite, decompose $\operatorname{End}_R(R)$, and take the opposite again.
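The anti-homomorphism property $\rho_{ab} = \rho_b \circ \rho_a$ is nothing but associativity, and it is easy to sanity-check with matrices (the sample matrices are arbitrary choices of mine):

```python
import numpy as np

def rho(a):
    """Right multiplication by a: an endomorphism of R as a left module."""
    return lambda x: x @ a

a = np.array([[1, 2], [3, 4]])
b = np.array([[0, 1], [1, 1]])
x = np.array([[2, 0], [5, 7]])

lhs = rho(a @ b)(x)       # rho_{ab}(x) = x(ab)
rhs = rho(b)(rho(a)(x))   # (rho_b . rho_a)(x) = (xa)b
print(np.array_equal(lhs, rhs))  # True
```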

Now write $R = \bigoplus_{i=1}^{s} \bigoplus_{j=1}^{m_i} L_i^{(j)}$ as in corollary 12. We therefore have

$$R^{op} \cong \operatorname{End}_R(R) \cong \prod_{i=1}^{s} \operatorname{End}_R\left( \bigoplus_{j=1}^{m_i} L_i^{(j)} \right),$$

where the cross terms vanish because $\operatorname{Hom}_R(L_i, L_j) = 0$ for $i \ne j$. However, by Schur's lemma, $D_i = \operatorname{End}_R(L_i)$ is a division ring (we don't necessarily have a field here). Therefore

$$\operatorname{End}_R\left( \bigoplus_{j=1}^{m_i} L_i^{(j)} \right) \cong M_{m_i}(D_i).$$

For each $f \in \operatorname{End}_R\left( \bigoplus_{j=1}^{m_i} L_i^{(j)} \right)$, we have a corresponding matrix $(f_{uv})$:

$$f_{uv} = \pi_u \circ f \circ \iota_v \in \operatorname{Hom}_R(L_i^{(v)}, L_i^{(u)}) \cong D_i,$$

where $\iota_v$ is the inclusion $L_i^{(v)} \hookrightarrow \bigoplus_j L_i^{(j)}$ and $\pi_u$ is the projection onto $L_i^{(u)}$. This is to say, the isomorphism is given by

$$f \mapsto (f_{uv})_{1 \le u, v \le m_i}.$$

The verification is a matter of linear algebra and techniques frequently used in this post.

Therefore we have

$$R^{op} \cong \prod_{i=1}^{s} M_{m_i}(D_i).$$

Taking the opposite again we have

$$R \cong \prod_{i=1}^{s} M_{m_i}(D_i)^{op} \cong \prod_{i=1}^{s} M_{m_i}(D_i^{op}).$$

The isomorphism $M_n(D)^{op} \cong M_n(D^{op})$ is given by the transpose of a matrix. However, the opposite ring of a division ring is still a division ring; we therefore have a decomposition

$$R \cong \prod_{i=1}^{s} M_{m_i}(\Delta_i),$$

where each $\Delta_i = D_i^{op}$ is a division ring.

Conversely, rings of the form above are semisimple. This is easy: by the lemma below, for $M_n(\Delta)$ with $\Delta$ a division ring, the only proper two-sided ideal is trivial, hence the Jacobson radical is also trivial; since $M_n(\Delta)$ is Artinian, it is semisimple by Proposition 7.

Lemma. Let $\Delta$ be a ring. All two-sided ideals of $M_n(\Delta)$ are of the form $M_n(I)$, where $I$ is a two-sided ideal of $\Delta$.

Proof. If $I$ is a two-sided ideal of $\Delta$, then clearly $M_n(I)$ is a two-sided ideal of $M_n(\Delta)$. Conversely, suppose $\mathfrak{A}$ is a two-sided ideal of $M_n(\Delta)$; we show that $\mathfrak{A} = M_n(I)$ for some $I$. To be precise, put

$$I = \{a \in \Delta : a \text{ is an entry of some } A \in \mathfrak{A}\}.$$

Then $I$ is a two-sided ideal of $\Delta$. Now pick some $A = (a_{ij}) \in \mathfrak{A}$. Let $e_{ij}$ be the matrix whose entry is $1$ on its $(i, j)$-th place and $0$ everywhere else. For any matrix $A$, we have

$$e_{ki} A e_{jl} = a_{ij} e_{kl}.$$

Therefore if $A \in \mathfrak{A}$, then $e_{ki} A e_{jl} \in \mathfrak{A}$ and in particular

$$a_{ij} e_{kl} \in \mathfrak{A}$$

for all $k, l$. Therefore $\mathfrak{A} \subset M_n(I)$ by the definition of $I$. Conversely, for any $a \in I$, we can find $A \in \mathfrak{A}$ and indices $i, j$ such that $a = a_{ij}$. Now $a e_{kl} = e_{ki} A e_{jl} \in \mathfrak{A}$. Note a matrix in $M_n(I)$ can be written in the form $\sum_{k, l} b_{kl} e_{kl}$ where $b_{kl} \in I$. This proves that $M_n(I) \subset \mathfrak{A}$, hence $\mathfrak{A} = M_n(I)$.

It follows that a matrix algebra over a division ring or a field is semisimple. But let's head back to where we were.
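The key identity $e_{ki} A e_{jl} = a_{ij} e_{kl}$ in the proof is easy to check numerically (the indices and the sample matrix are my own choice; numpy indices are 0-based):

```python
import numpy as np

def E(i, j, n=3):
    """The matrix unit e_{ij}: 1 at position (i, j), 0 elsewhere."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

A = np.zeros((3, 3))
A[1, 1] = 5.0    # a_{11} = 5, the only nonzero entry

# e_{21} A e_{10} should equal a_{11} e_{20}
out = E(2, 1) @ A @ E(1, 0)
print(np.array_equal(out, 5.0 * E(2, 0)))  # True
```

This is exactly why a single nonzero matrix generates all of $M_n(\Delta)$ as a two-sided ideal when $\Delta$ is a division ring.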

A direct sum (or product, because it is finite) of matrix algebras over division rings is therefore semisimple by Corollary 14.

To conclude, we have the Wedderburn-Artin theorem.

Theorem 15 (Wedderburn-Artin). $R$ is a semisimple ring if and only if it can be written as a direct sum (or product, because they are the same when finite) of matrix algebras over some division rings:

$$R \cong \prod_{i=1}^{s} M_{m_i}(\Delta_i).$$

Since the opposite of a division ring is a division ring, we also have

Corollary 16. A ring $R$ is semisimple if and only if $R^{op}$ is.

Back to representation theory

Now back to representation theory. But in general it can be extremely hard: we have no idea about the division rings $\Delta_i$. However, when the field is algebraically closed, there is no problem. Note some authors also use "skew field" in place of "division ring".

Proposition 17. Let $k$ be an algebraically closed field and let $D$ be a finite dimensional division ring over $k$; then $D = k$.

Proof. Pick any $a \in D$. Note the map $x \mapsto ax$ is a $k$-linear map $D \to D$. Since $k$ is algebraically closed, this map has at least one eigenvalue, say $\lambda \in k$. It follows that

$$(a - \lambda \cdot 1)x = 0$$

for some nonzero $x \in D$, where $1$ is the unit of $D$. Since $D$ is a division ring, we have $a = \lambda \cdot 1$. We have actually established an isomorphism $k \to D$, $\lambda \mapsto \lambda \cdot 1$, and therefore $D = k$.

If you have studied Banach algebra theory, you will realise that this is nothing but the Gelfand-Mazur theorem (see any book on functional analysis that discusses Banach algebras, for example, Functional Analysis by W. Rudin). In the infinite dimensional case we have to consider the topology of the field and the algebra.

Therefore we can now state Maschke's theorem in the finest way possible:

Theorem 18 (Maschke). Let $G$ be a finite group, and let $k$ be an algebraically closed field whose characteristic does not divide the order of $G$. Then

$$k[G] \cong \prod_{i=1}^{s} M_{n_i}(k).$$

The $n_i$ are uniquely determined up to a permutation. In particular, $\sum_{i=1}^{s} n_i^2 = |G|$.
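As a sanity check (my own example, using the standard fact from character theory, not proved in this post, that over $\mathbb{C}$ the number of factors $s$ equals the number of conjugacy classes of $G$): for $G = S_3$ we have $|G| = 6$ and three conjugacy classes, and the only way to write $6$ as a sum of three squares is $1 + 1 + 4$, so $\mathbb{C}[S_3] \cong \mathbb{C} \times \mathbb{C} \times M_2(\mathbb{C})$.

```python
from itertools import permutations

G = list(permutations(range(3)))  # S_3 as tuples

def compose(p, q):
    """(p * q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

# conjugacy classes of S_3: {h g h^{-1} : h in G} for each g
classes = {frozenset(compose(compose(h, g), inverse(h)) for h in G) for g in G}
print(len(classes))                  # 3
print(sum(n * n for n in (1, 1, 2)))  # 6 = |S_3|
```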

References

  • Serge Lang, Algebra, Revised Third Edition.

  • Pierre Antoine Grillet, Abstract Algebra.

  • Jean-Pierre Serre, Linear Representations of Finite Groups.

