KnowMSG v1.4

First things first: those of you actually reading this on my blog, rather than via an RSS feed, have probably noticed I changed the theme. I’m not sure if I’ll stick with it, but for now, I like it.

Now that summer has started, I’ve had time to work on the ever-growing list of “Things to do if I had more time.” One project I started a while ago was KnowMSG: Know More Simple Groups.

Anyone who has taken an introductory algebra course is intimately familiar with problems of the form “Prove there are no simple groups of order BLANK.” There are a few basic tricks, usually coming from the Sylow theorems, and you learn how to use them. This fall I was assigned to do this for all numbers below 200. Instead of hand-writing all of those proofs, I wrote KnowMSG.

The idea is, you input a number and it either returns a simple group of that order or returns a proof that no such simple group can exist. Well, that’s overstating it. While it’s true that we have a classification theorem for all finite simple groups, it’s not something I pretend to understand, so I don’t think it’s fair to cite it. Instead, KnowMSG tries to find an elementary proof. If it can’t, it cites the classification theorem, Burnside’s theorem, or the Feit-Thompson theorem. I consider any of those cases failures, as their proofs are not “elementary” for any reasonable definition of the word.

That being said, KnowMSG doesn’t fail too frequently. The smallest number (at the time of writing) that it fails on is 1008. The smallest odd number it fails on is 4455 (odd numbers are easier, it seems). It never fails on anything that is 2 modulo 4. Of the numbers less than 1,000,000, it fails on only 8493. I set a goal, now accomplished, of handling every order less than 1000. The goal of handling all orders by elementary means is surely unattainable; if I could, I would have an elementary proof of the classification theorem.

Here you can find a list of all the numbers below 1,000,000 which I do not have satisfactory proofs for. If you find a neat technique that can solve a few cases, let me know, and I’ll implement it.

People have been asking how KnowMSG works, so let me give you a basic overview. You can also find all the code hosted on github, but I can assure you it is messy, and not well-commented. If you’re feeling ambitious, have at it.

Say you plug in n. First, it’s easiest to check whether there is a simple group of that order. KnowMSG literally has the classification theorem coded into it, so it can do this check relatively quickly.
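
To give a flavor of that step, here’s a minimal sketch (Python rather than KnowMSG’s actual JavaScript; the function names and the hard-coded list are mine): by the classification, a simple group of order n exists exactly when n is prime (giving the cyclic group) or when n is the order of a non-abelian simple group, and for small n those orders can simply be listed.

```python
# A sketch only: KnowMSG itself is JavaScript; this Python mock-up just
# illustrates the idea of having "the classification coded in" for small orders.

# Orders of the non-abelian finite simple groups below 10,000
# (A5, PSL(2,7), A6, PSL(2,8), ...).
NONABELIAN_SIMPLE_ORDERS = {60, 168, 360, 504, 660, 1092, 2448, 2520,
                            3420, 4080, 5616, 6048, 6072, 7800, 7920, 9828}

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def simple_group_of_order_exists(n):
    """A simple group of order n exists iff n is prime (the cyclic group)
    or n is the order of a non-abelian simple group."""
    return is_prime(n) or n in NONABELIAN_SIMPLE_ORDERS

print([n for n in range(2, 200) if simple_group_of_order_exists(n)])
# the primes below 200, plus 60 and 168
```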

If there isn’t one, the idea is to first write down, for each prime p dividing n, the possible numbers of Sylow p-subgroups. By the Sylow theorems, each such number must be congruent to 1 modulo p and must divide n. Sometimes, for some prime, 1 is the only option, and we can use this to deduce the existence of a normal subgroup (the unique Sylow p-subgroup). Otherwise, we proceed by contradiction.
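
In code, listing those candidates is just a divisor filter; here’s a quick illustrative sketch (again Python, with a made-up function name):

```python
def sylow_candidates(n, p):
    """Possible values of n_p: divisors of n congruent to 1 mod p."""
    return [d for d in range(1, n + 1) if n % d == 0 and d % p == 1]

# Example: for a group of order 30, n_5 is 1 or 6, and n_3 is 1 or 10.
print(sylow_candidates(30, 5))  # [1, 6]
print(sylow_candidates(30, 3))  # [1, 10]
```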

If there were no proper nontrivial normal subgroups, then 1 could not be an option for any prime. Moreover, we can knock out some of the smaller options, because if there are n_p Sylow p-subgroups, then our group acts on them by conjugation, giving us a map

\phi:G\to S_{n_p}

The kernel of this map is a normal subgroup, so it must be either trivial or all of G. By construction, the action is nontrivial (G permutes its n_p > 1 Sylow p-subgroups transitively), so the kernel is not all of G, and the map must be injective. Sometimes this alone gives us a contradiction, say, if n (the size of G) does not divide n_p!.
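
That last divisibility check is easy to automate. A small sketch (the function name is mine, not KnowMSG’s):

```python
from math import factorial

def can_embed_in_symmetric_group(n, n_p):
    """Necessary condition for an injection G -> S_{n_p}: |G| divides n_p!."""
    return factorial(n_p) % n == 0

# Example: a simple group of order 80 would need n_2 = 5, but 80 does not
# divide 5! = 120, so no such group exists.
print(can_embed_in_symmetric_group(80, 5))  # False
```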

Now it gets complicated. There are approximately 10 tricks KnowMSG knows, and for each prime p and each possible value of n_p, it tries each of the tricks in turn, looking for a contradiction for that possibility. If it can get a contradiction for every possible n_p for some prime, it prints out a case-by-case proof. Otherwise, it just dumps to the screen any information it could deduce, along with a notice that it doesn’t have a proof.
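
Schematically, and glossing over a lot, the driver loop looks something like the sketch below. The tricks here are placeholders for whatever actually lives in technique.js, and all the names are mine:

```python
from math import factorial

def prime_factors(n):
    """Distinct prime factors of n."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def try_to_prove_not_simple(n, tricks):
    """Assume a simple group G of order n and look for a contradiction.
    Each trick takes (n, p, n_p) and returns a sentence of proof, or None."""
    for p in prime_factors(n):
        # Candidates for n_p > 1 (n_p = 1 would give a normal Sylow p-subgroup).
        candidates = [d for d in range(2, n + 1) if n % d == 0 and d % p == 1]
        cases, stuck = [], False
        for n_p in candidates:
            proof = None
            for trick in tricks:
                proof = trick(n, p, n_p)
                if proof is not None:
                    break
            if proof is None:
                stuck = True  # couldn't rule this case out; try another prime
                break
            cases.append(f"If n_{p} = {n_p}: {proof}")
        if not stuck and cases:
            return cases  # a complete case-by-case proof for this prime
    return None  # no luck: dump what we know and admit defeat

# Example with a single trick: the "n must divide n_p!" argument.
def factorial_trick(n, p, n_p):
    return f"{n} does not divide {n_p}!" if factorial(n_p) % n != 0 else None

print(try_to_prove_not_simple(80, [factorial_trick]))
```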

There are three numbers below 1000, needed to reach my goal, for which none of this worked: 720, 756, and 840. These cases are somehow harder than the rest, and here I feel like I cheated slightly. KnowMSG simply checks whether you input one of those numbers and spits out a memorized proof. It’s not really doing any thinking.

Anyway, that’s the basic idea. If you want to know more about how these tricks work, you can

  • Look at the file technique.js on github,
  • Try plugging in these numbers. I believe all of the cases are covered by these examples: 2, 27, 90, 180, 240, 252, 540, 576, 720, 756, 840, 1716, and 808017424794512875886459904961710757005754368000000000

A proof via Lagrange’s theorem

Let’s assume for the sake of contradiction that there are finitely many primes, and let p denote the biggest one. Since 2^p-1>p and p is the biggest prime, 2^p-1 cannot itself be prime, so it must be divisible by some other prime q. Another way to say this is that

2^p\equiv1\pmod q

This means that the order of 2 in the multiplicative group (\mathbb Z/q\mathbb Z)^\times must be a divisor of p. BUT p IS PRIME! The order can’t be 1 (that would mean q divides 2-1=1), so 2 must have order exactly p. But then Lagrange’s theorem says that p must be a divisor of the size of the group (which is q-1). In particular, p<q, contradicting the “biggest-ness” of p.
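
Just to watch the mechanism work, here’s a purely illustrative numerical check (mine, not part of the argument): for each prime p, every prime factor q of 2^p-1 comes out congruent to 1 modulo p, and in particular bigger than p.

```python
def prime_factor_list(n):
    """Prime factors of n, with multiplicity."""
    ps, d = [], 2
    while d * d <= n:
        while n % d == 0:
            ps.append(d)
            n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

for p in [3, 5, 7, 11, 13]:
    qs = prime_factor_list(2**p - 1)
    print(p, qs, all(q % p == 1 for q in qs))
# e.g. 2^11 - 1 = 2047 = 23 * 89, and 23 and 89 are both 1 mod 11
```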

Applying Yoneda’s Lemma

Let’s write down Yoneda’s lemma one more time. For a locally small category, an object A in it, and a functor F from that category into \textsc{Set},

\mbox{Nat}(\hom(A,-),F)\cong F(A).

One instance of Yoneda’s lemma is to take the second functor F to be \hom(B,-). Then we have a nice, symmetric-looking statement:

\mbox{Nat}(\hom(A,-),\hom(B,-))\cong \hom(B,A).

This is called the Yoneda embedding. Now let’s do some magic. Let G be a group. We can regard G as a category with one object (the elements of the group are the morphisms in the category). Call that object *. If we take A and B both to be *, then we get

\mbox{Nat}(\hom(*,-),\hom(*,-))\cong \hom(*,*).

The right-hand side is just the group itself. The morphisms from * to * form a group; that’s how we set things up. Okay, technically, it’s the set of all of those morphisms, but the correspondence we’ll get from Yoneda will give it a group structure.

On the left-hand side, we need to understand \hom(*,-). As we defined it, it composes morphisms. That is, \hom(*,f) is “composition on the left with f.” So I’m looking for the ways to transform “composition on the left” to “composition on the left” that make the relevant naturality square commute. This is difficult to picture, but these natural transformations turn out to correspond to the permutations of the set \hom(*,*): each one pushes the elements around by composing them with a fixed element of \hom(*,*).
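
If a concrete computation helps, here’s a tiny sketch (Python, with everything in it made up for illustration): take a small group, turn each element into the permutation of the underlying set given by composing with it, and check that this gives an injective homomorphism into the permutations of that set.

```python
# A small group: Z/3 under addition mod 3, playing the role of hom(*,*).
elements = [0, 1, 2]

def compose(g, h):
    return (g + h) % 3

def as_permutation(g):
    """The permutation of the underlying set obtained by composing with g."""
    return tuple(compose(g, x) for x in elements)

perms = {g: as_permutation(g) for g in elements}

# Distinct elements give distinct permutations (the map is injective) ...
assert len(set(perms.values())) == len(elements)
# ... and composing elements matches composing permutations (a homomorphism).
assert all(
    tuple(perms[g][perms[h][x]] for x in elements) == perms[compose(g, h)]
    for g in elements for h in elements
)
print(perms)  # each group element, realized as a permutation
```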

If I’ve done my explaining correctly (I most certainly have not), this should be a rough sketch of a proof of Cayley’s theorem in group theory. Indeed, the Yoneda lemma is a generalization of this theorem. Cayley is easy to prove without Yoneda, but the correspondence is an important one to see. At least for me, it makes Yoneda’s lemma slightly easier to understand. I think of Yoneda as Cayley’s big brother.

Galois Group

Last time we started with a subgroup of the automorphism group of a field k and asked about its fixed field, the field of elements which are fixed by every automorphism in the subgroup. We noticed that if we took a big group, we would get a small fixed field. If we picked a small group, we would get a big fixed field.

Today I want to do the same thing but in the other direction. Instead of starting with a subgroup and finding its fixed field, I want to start with a field and compute the group that fixes it. Of course, we always need to do this relative to some base field, so really what we’re asking is: if E is a field extension of k, what are the automorphisms of E which fix k? We think of E as being set in stone, and k as being allowed to vary.

This collection of automorphisms is called the Galois group of the extension E/k (this is notation for saying that E is an extension of k). We denote it by \mbox{Gal}(E/k). More technically,

\mbox{Gal}(E/k)=\{\sigma\in\mbox{Aut}E\mid\sigma|_k=\mbox{id}_k\}

Of course, we really need to check that this is a group, but it’s not so bad. If two automorphisms both fix k, then so does their composition (the first one fixes k, and then the second fixes it again). It’s similarly easy to check for inverses and the identity. This proof is neither difficult nor enlightening. I’d rather get to the math that is both.

Let’s work out a simple example. Let k be a field. What is \mbox{Gal}(k/k)? These are the automorphisms of k which fix k. There’s exactly one of those: the identity (or lame) automorphism.

There’s something else you may notice. k/k is a particularly small extension. However, I’m thinking of it as E/k where E=k, and I’m thinking of E as set in stone. So the subfield k is particularly big inside E. This is the correct way to look at it, because I really do want to think of the bigger field as unchanging. So a very big subfield (as big as they come) gives us a very small Galois group (as small as they come). This is true in general, too: small subfields give big Galois groups, and big subfields give small Galois groups.
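
For a slightly less lame example (my own illustration): take E=\mathbb Q(\sqrt 2) and the smaller subfield k=\mathbb Q. Now the Galois group has two elements: the identity and the map sending a+b\sqrt 2 to a-b\sqrt 2. If you like, you can spot-check that the second map really is an automorphism of E fixing \mathbb Q:

```python
from fractions import Fraction
from itertools import product

# Represent a + b*sqrt(2) in Q(sqrt 2) as a pair (a, b) of rationals.
def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):
    # (a + b sqrt2)(c + d sqrt2) = (ac + 2bd) + (ad + bc) sqrt2
    return (x[0] * y[0] + 2 * x[1] * y[1], x[0] * y[1] + x[1] * y[0])

def sigma(x):
    """The candidate automorphism: a + b*sqrt(2)  ->  a - b*sqrt(2)."""
    return (x[0], -x[1])

# Spot-check that sigma respects addition and multiplication, and fixes
# the rationals (the elements with b = 0).
samples = [(Fraction(a), Fraction(b)) for a, b in product(range(-2, 3), repeat=2)]
assert all(sigma(add(x, y)) == add(sigma(x), sigma(y)) for x in samples for y in samples)
assert all(sigma(mul(x, y)) == mul(sigma(x), sigma(y)) for x in samples for y in samples)
assert all(sigma((Fraction(a), Fraction(0))) == (Fraction(a), Fraction(0)) for a in range(-5, 6))
print("sigma passes every spot-check")
```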

This idea of matching groups with fields in such a way that big groups get matched with small fields and vice versa is incredibly important and incredibly powerful. We can now start exploring just how deep the rabbit hole goes.