Prime Number Theorem

So, I lied. I decided not to give a proof of the prime number theorem. Here’s a brief overview of what it says (without proof). If $\pi(x)$ denotes the number of primes less than or equal to $x$, then the prime number theorem says

$\pi(x)\sim \frac x{\log x}$

In other words, if I asked you to estimate the number of primes less than $10^6$, a good guess would be $10^6/\log(10^6)\approx 72382.4$. In actuality, the number is $78498$, so we’d be off by about six thousand. This may seem like a lot, but compared to one million it’s not that bad: only about $0.6\%$ error. In fact, here’s a table that might be of interest:

| $x$ | $x/\log x$ | $\pi(x)$ | error (as a fraction of $x$) |
|---|---|---|---|
| $10^2$ | $21.7$ | $25$ | $3.3\%$ |
| $10^3$ | $144.8$ | $168$ | $2.3\%$ |
| $10^4$ | $1085.7$ | $1229$ | $1.4\%$ |
| $10^5$ | $8685.9$ | $9592$ | $0.9\%$ |
| $10^6$ | $72382.4$ | $78498$ | $0.6\%$ |

When we say $\pi(x)\sim\frac x{\log x}$, we essentially mean that as $x$ gets big, the percent error drops to zero (which we see numerically in the table). More technically, we mean that

$\displaystyle\lim_{x\to\infty}\frac{\pi(x)\log x}x=1$.
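If you want to watch this limit happen, here’s a quick sketch in Python (the sieve and the cutoffs are just convenient choices; `prime_pi` is my name for the counting function):

```python
import math

def prime_pi(limit):
    """pi(limit): count the primes <= limit with a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, math.isqrt(limit) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return sum(sieve)

for x in (10**3, 10**4, 10**5, 10**6):
    estimate = x / math.log(x)
    ratio = prime_pi(x) * math.log(x) / x
    print(x, prime_pi(x), round(estimate, 1), round(ratio, 4))
```

The last column creeps toward $1$, exactly as the limit says it should; slowly, though, since even at $10^6$ it’s still around $1.08$.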

So that about wraps up what I want to say about primes. I have no idea what’s coming next, but stay tuned!

Another quick one

I know we’ve done a bunch of these types of proofs already, but this one is particularly beautiful in its simplicity:

Any prime divisor of $n!+1$ must be larger than $n$. (After all, every prime $p\le n$ divides $n!$, so it can’t divide $n!+1$. Since $n!+1>1$ has some prime divisor, this hands us a prime above every $n$.)
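This one is easy to poke at numerically; a minimal sketch by trial division (`smallest_prime_factor` is my own helper name):

```python
from math import factorial

def smallest_prime_factor(m):
    """Return the least prime dividing m (for m >= 2), by trial division."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # no divisor below sqrt(m), so m itself is prime

for n in range(2, 11):
    p = smallest_prime_factor(factorial(n) + 1)
    assert p > n  # every prime divisor of n! + 1 exceeds n
    print(n, factorial(n) + 1, p)
```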

We’re pretty much done with the infinitude of the primes. Next week I intend to prove the prime number theorem, and then we’ll be done with primes altogether for a while.

A proof via Lagrange’s theorem

Let’s assume for the sake of contradiction that there are finitely many primes, and let $p$ denote the biggest one. Since $2^p-1>p$ and $p$ is the biggest prime, $2^p-1$ cannot itself be prime. It must be divisible by some prime $q$. Another way to say this is that

$2^p\equiv1\pmod q$

This means that the order of $2$ in the multiplicative group $(\mathbb Z/q\mathbb Z)^\times$ must be a divisor of $p$. BUT $p$ IS PRIME! The order can’t be $1$ (that would mean $q$ divides $2-1=1$), so $2$ must have order exactly $p$. But then Lagrange’s theorem says that $p$ must be a divisor of the size of the group (which is $q-1$). In particular, $p<q$, contradicting the “biggest-ness” of $p$.
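To see the machinery on a concrete (if rigged) example, pretend $p=11$ were the biggest prime. Then $2^{11}-1=2047=23\cdot89$, and the order of $2$ modulo $23$ is indeed $11$, which divides $23-1=22$. A sketch:

```python
def multiplicative_order(a, q):
    """The order of a in (Z/qZ)^x: the least k >= 1 with a**k == 1 (mod q)."""
    k, x = 1, a % q
    while x != 1:
        x = (x * a) % q
        k += 1
    return k

p, q = 11, 23                  # q divides 2**p - 1 = 2047 = 23 * 89
assert (2**p - 1) % q == 0
order = multiplicative_order(2, q)
print(order)                   # -> 11: the order divides the prime p, so it equals p
assert order == p
assert (q - 1) % p == 0        # Lagrange: the order divides the group size q - 1
```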

A nonstandard proof

We’re going to use a nonstandard model of the real numbers to make this work. The construction is a bit wacky: Take a non-principal ultrapower of $\mathbb R$. Call this the hyperreals. Here’s another way to put it:

A hypernumber is a sequence of regular numbers $(a_1,a_2,\dots)$. To make decisions about hypernumbers, we put it to a vote:

Is it true that $(3,0,0,\dots)=(0,0,0,\dots)$? Well, the first position votes no, because $0\ne 3$. The rest of them all vote yes, however, so the vote is infinity to one. They’re equal. Wackier things such as $(0,1,0,1,\dots)$ we won’t get into. If you’re interested, look here. (The basic idea is that the ultrafilter decides whether the evens or the odds win the vote. The ultrafilter always decides which sets of positions are more important. The fact that it’s non-principal implies that it prefers cofinite sets to finite ones.)

It’s easy to see that the standard real numbers sit inside the hyperreals: a real number $x$ embeds as the constant sequence $(x,x,\dots)$.

Nonstandard models are fun because now we really do have “infinitesimal numbers.” The hypernumber $(1,\frac12,\frac13,\dots)$ is smaller than every standard positive number $(x,x,x,\dots)$, but it’s still bigger than $(0,0,0,\dots)$. We can also have infinite numbers $(1,2,3,\dots)$, and even larger infinite numbers $(2,3,4,\dots)$.

Lots of things you can do in the hyperreals are essentially the same in the standard real numbers. Some people call this the “transfer principle.” I never liked that name. All it says is that, in model-theoretic terms, the standard real numbers and the hyperreals are elementarily equivalent: they have the same first-order theories. This means that anything you can write down using the quantifiers $\forall$ and $\exists$, where you only quantify over elements (not subsets of elements), is true in one model if and only if it is true in the other.

Anyway, enough logic. Let $P$ be the set of hyperprimes. These are the hypernatural numbers which aren’t hyperdivisible by any hypernatural numbers other than hyper-one and themselves. That sentence was fun.

Let $N=(1!,2!,3!,\dots)$. Note that $N$ is hyperdivisible by every standard natural number. This means that $N+1=(1!+1,2!+1,3!+1,\dots)$ cannot be divisible by any standard prime. But there does have to be some hyperprime dividing any hypernatural number greater than one. This is because every standard natural number greater than one is divisible by a standard prime, and since divisibility is a first-order property, the same statement holds for hypernaturals and hyperprimes. This tells us that $P$, the set of hyperprimes, contains some nonstandard element.

Now I just want to argue why a set which contains nonstandard elements must have an infinite standard counterpart. The idea is that, if it were finite, I could define the set with a first-order sentence: $n$ is in $P$ if and only if $n=p_1$ or $n=p_2$ or … or $n=p_k$. (I can’t do this if $P$ is infinite, because the sentences I write down must be finite.) Then, when I transfer to the hyperreals, the transferred set would satisfy this same sentence, so it would contain exactly the same standard elements and nothing more.

The fact that $P$ contains a nonstandard hyperprime tells us that there must be infinitely many primes. I left out many details, but hopefully the sketch gives you a reasonable idea of what is going on.

At some level, this is no different from Euclid’s proof, but it just seems so much more badass. I mean, we used the axiom of choice (to construct the non-principal ultrafilter) in a number theory proof. That is decidedly awesome.

Holy primes between n and 2n, Bertrand!

While we’re on the Erdős binge, here’s another proof of his. Warning, this post is long. We’re going to prove that:

For every positive integer $n$, there is always a prime $p$ such that $n<p\le 2n$.

Clearly, this implies there are infinitely many primes. In fact, it’s much stronger, and says something about how quickly the prime numbers grow. There are shorter proofs than the one I’m presenting here, but they use complex analysis. This one is in some sense more elementary.

Here’s the idea. Pick an $n$. Suppose we wanted to find a prime $p$ between $n$ and $2n$. We could find it as a divisor of $(2n)!$. We certainly wouldn’t find it in $n!$. So in particular, it would be a divisor of

$\displaystyle\binom{2n}n=\frac{(2n)!}{n!n!}$.

So the goal is to estimate these central binomial coefficients in two ways. One will give us an upper bound, and the other a lower bound. If we assume that Bertrand’s postulate is false, we’ll get that the lower bound is actually bigger than the upper bound, a contradiction.

I’ll do the more complicated one first (the upper bound). For each prime $p$, let $e_p$ denote the exponent of $p$ in $\binom{2n}n$. That is,

$\displaystyle\binom{2n}n=\prod_{p\le 2n}p^{e_p}$

Throughout the entire post, $p$ will always denote a prime. We’re going to break the product up into four pieces depending on the size of the primes. I’ll tag the pieces with colors (red, green, blue, yellow, from the smallest primes up to the biggest) to make them easier to refer to.

$\displaystyle\binom{2n}n=\left(\prod_{p\le\sqrt{2n}}p^{e_p}\right)\left(\prod_{\sqrt{2n}<p\le 2n/3}p^{e_p}\right)\left(\prod_{2n/3<p\le n}p^{e_p}\right)\left(\prod_{n<p\le 2n}p^{e_p}\right)$

I’m implicitly assuming here that $\sqrt{2n}<2n/3$. This isn’t always the case. However, it is the case for $n\ge5$. So for $n\le4$ I’ll have to check it some other way. Not a big loss.

Yellow ($n$ to $2n$):

We’re going to assume for the sake of contradiction that Bertrand’s postulate is false. That is, we’re going to assume there aren’t any primes between $n$ and $2n$, so that the yellow product is empty. We can replace the yellow product with $1$.

Blue ($2n/3$ to $n$):

I’ve been told that Erdős thought this step was the most important in the entire proof. Suppose I had a prime $p$ such that $2n/3<p\le n$. How many times does $p$ show up in $(2n)!$? Only twice: there’s $p$ and $2p$, because $3p>3\cdot 2n/3=2n$. But clearly $p$ shows up in $n!$ exactly once, hence in $n!\,n!$ twice. This means that $\binom{2n}n$ can’t be divisible by any primes between $2n/3$ and $n$, so the blue product is just $1$.

Green ($\sqrt{2n}$ to $2n/3$):

This is probably the hardest part. If $p$ is a prime such that $\sqrt{2n}<p\le 2n/3$, then $e_p\le1$. This is just because $p^2>2n$. So the green product is at most the product of the primes between $\sqrt{2n}$ and $2n/3$. That is certainly at most the product of all primes up to $2n/3$, so the green product is bounded by $(2n/3)\#$. The $\#$ is the “primorial” function. It’s like the factorial, but it is the product of just the primes up to a given number. So we want to estimate $(2n/3)\#$. We’ll show that $k\#<4^k$, so we can bound the green product by $4^{2n/3}$.

The proof is done by induction on $k$. It’s obvious for $k=1$ or $k=2$. If $k>2$ is even, then $k$ is not prime, so $k\#=(k-1)\#$, which is less than $4^{k-1}<4^k$ by induction. If $k$ is odd, let $k=2m+1$. Clearly

$4^m=\frac12(1+1)^{2m+1}\ge\binom{2m+1}m\ge\displaystyle\prod_{m+1<p\le 2m+1}p=\frac{(2m+1)\#}{(m+1)\#}$

Now, by induction, $(m+1)\#<4^{m+1}$, so $(2m+1)\#<4^m\cdot4^{m+1}=4^{2m+1}$, giving us the desired result.
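If you’d rather spot-check the bound $k\#<4^k$ than trust the induction, here’s a sketch (`primorial` is my name for $k\#$):

```python
def is_prime(m):
    return m >= 2 and all(m % d for d in range(2, int(m**0.5) + 1))

def primorial(k):
    """k#: the product of all primes <= k."""
    result = 1
    for p in range(2, k + 1):
        if is_prime(p):
            result *= p
    return result

# The primorial bound, checked directly for the first couple hundred k.
assert all(primorial(k) < 4**k for k in range(1, 200))
```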

Red ($0$ to $\sqrt{2n}$):

At this point, I’ve become bored of estimating, so I won’t try to do it well. Each prime power $p^{e_p}$ for $p\le\sqrt{2n}$ is at most $2n$ (in fact $p^{e_p}\le 2n$ for every prime, which falls out of Legendre’s formula for $e_p$), and there are at most $\sqrt{2n}$ such primes, so the red product is at most $(2n)^{\sqrt{2n}}$.
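The blue, green, and red claims can all be verified mechanically from Legendre’s formula, which gives $e_p=\sum_{k\ge1}\left(\lfloor 2n/p^k\rfloor-2\lfloor n/p^k\rfloor\right)$. A sketch, with $n=1000$ as an arbitrary test case:

```python
from math import isqrt

def primes_upto(m):
    return [p for p in range(2, m + 1)
            if all(p % d for d in range(2, isqrt(p) + 1))]

def exponent_in_binom(p, n):
    """e_p: the exponent of the prime p in C(2n, n), via Legendre's formula."""
    e, pk = 0, p
    while pk <= 2 * n:
        e += (2 * n) // pk - 2 * (n // pk)
        pk *= p
    return e

n = 1000
for p in primes_upto(2 * n):
    e = exponent_in_binom(p, n)
    assert p**e <= 2 * n            # red: every prime power in C(2n, n) is at most 2n
    if p > isqrt(2 * n):
        assert e <= 1               # green: primes above sqrt(2n) appear at most once
    if 2 * n / 3 < p <= n:
        assert e == 0               # blue: no primes between 2n/3 and n
```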

Okay. Now for the lower bound. I claim:

$\displaystyle\frac{4^n}{2n+1}<\binom{2n}n$.

This comes from the binomial theorem in the following way:

$\displaystyle 4^n=(1+1)^{2n}=\sum_{k=0}^{2n}\binom{2n}k$.

Since the central binomial coefficient is the largest, $\binom{2n}k\le\binom{2n}n$ for every $k$ (strictly, for $k=0$). That is,

$\displaystyle 4^n<\sum_{k=0}^{2n}\binom{2n}n=(2n+1)\binom{2n}n,$

which is what we wanted.
Here’s why this is bad. The lower bound is less than the upper bound, so we have

$\displaystyle\frac{4^n}{2n+1}<\left((2n)^{\sqrt{2n}}\right)\left(4^{2n/3}\right)(1)(1)$

Asymptotically, the left-hand side grows faster than the right-hand side, so for big enough $n$ this inequality fails, which is a contradiction. The failure kicks in around $n=500$, so for every $n>500$ there is always a prime between $n$ and $2n$. For everything else, we give a sequence of primes, each less than twice the previous. It’s a worthwhile exercise to think about why this handles the remaining cases.
$2, 3, 5,7,13,23,43,83,163,317,503$
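Here’s the check that this list really covers every $n\le500$ (a sketch):

```python
chain = [2, 3, 5, 7, 13, 23, 43, 83, 163, 317, 503]

# Each prime is less than twice its predecessor...
assert all(q < 2 * p for p, q in zip(chain, chain[1:]))

# ...so for any n <= 500, the smallest chain prime above n lies in (n, 2n].
for n in range(1, 501):
    q = min(p for p in chain if p > n)
    assert n < q <= 2 * n
```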

Another proof due to Erdős

In fact, we can do better than just “there are infinitely many primes.” Sometimes there are infinitely many numbers, but the sum of their reciprocals is convergent because they get big fast. For instance, there are infinitely many factorials $0!, 1!, 2!, 3!,\dots$, but the sum of the reciprocals is still convergent:

$\sum\frac{1}{k!}=e=2.71828\dots$.
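(A two-line numerical sanity check of that sum, just because it’s satisfying; twenty terms already pin $e$ down to machine precision.)

```python
from math import e, factorial

# Partial sum of 1/k! for k = 0..19; the tail is smaller than 1/20!.
approx = sum(1 / factorial(k) for k in range(20))
assert abs(approx - e) < 1e-12
print(approx)
```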

But certainly, if the sum were divergent, there would have to be infinitely many terms in the sum. Let $S$ denote

$S=\sum\frac1p$

the sum of the reciprocals of the primes. We’re going to show that $S=\infty$. If it were convergent, we could throw away some initial number of terms and be left with a tail summing to less than, say, $1/2$:

$S_N=\sum_{p\ge N}\frac1p<\frac12$.

Call a prime big if it appears in the sum $S_N$. Call it small if it doesn’t.

Now pick your favorite number $M$, and partition $\{1,2,\dots,M\}$ into two sets. The first set, $A$, will be all the numbers less than or equal to $M$ divisible by small primes only. The second set, $B$, will be those numbers divisible by some big prime.

Let’s deal with the big primes first. For any prime $p$, there are no more than $M/p$ numbers in $\{1,2,\dots,M\}$ divisible by $p$. Ignoring the gratuitous amounts of double counting we’ll be doing, we can very safely bound the size of $B$ by

$\left|B\right|\le\displaystyle\sum_{p\ge N}\frac Mp<\frac M2$.

For the set $A$, we use a bound similar to what we used last time. For each $a\in A$, write $a=b^2c$ where $c$ is square-free (i.e., not divisible by any perfect square other than $1$). Clearly $c$ is a product of small primes, of which there are some fixed number, say $k$. This gives us $2^k$ choices for $c$. As for $b$, it certainly can’t be any bigger than $\sqrt M$, so

$\left|A\right|\le 2^k\sqrt M$.

But our choice of $M$ was totally independent of our choices of $N$ and $k$. So let’s just pick $M$ to be super big. By super big, I mean big enough so that $2^k\sqrt M<\frac M2$. Something like $M>2^{2^{2^k}}\cdot(k!)!^{k!^2}$ should probably do the trick. But now we have a problem:

$M=\left|A\right|+\left|B\right|<\frac M2+\frac M2=M.$

This is a contradiction, so the sum of the reciprocals of the primes diverges, and therefore, there are infinitely many of them.
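For a sense of just how glacial this divergence is, here are some partial sums (a sketch; each extra order of magnitude in the cutoff buys less and less):

```python
from math import isqrt

def primes_upto(m):
    """All primes <= m, by a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (m + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, isqrt(m) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, alive in enumerate(sieve) if alive]

for bound in (10**2, 10**3, 10**4, 10**5):
    partial = sum(1 / p for p in primes_upto(bound))
    print(bound, round(partial, 3))
```

The partial sums grow like $\log\log$ of the cutoff, which is why the divergence is essentially invisible numerically.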

A proof from Erdős

I have a friend from HCSSiM who reminded me of this proof, and a few others, that he put together for MIT Splash. Thanks Ben!

Let $\{p_1,\dots,p_n\}$ be the set of primes less than some number $N$. Pick a number $x$ which is less than $N$. We can break up $x$ into two parts:

• Let $a^2$ be the largest square dividing $x$
• Let $b$ be $x/a^2$.
Though I didn’t seem to mention factoring $x$, we are using unique factorization here in a fundamental way. We know a few things. We know $x$ is only divisible by the primes $p_1,\dots,p_n$, since all others are bigger than $N$. We know $b$ is not divisible by any perfect squares, lest $a^2$ not be maximal.

How many possibilities are there for $a$ and $b$? Since $a^2\le x<N$, there are certainly no more than $\sqrt N$ choices for $a$. For $b$, there can be no more than $2^n$, as there are $2^n$ choices for the set of primes which divide $b$.

Each number less than $N$ works like this, and so we get the inequality

$N\le 2^n\sqrt N$.

Solving for $n$, we get

$n\ge\frac12\log_2N$.

We now have a lower bound on the number of primes less than $N$. This lower bound grows towards infinity as $N$ gets big, and therefore the number of primes must do so as well.
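For comparison, here’s the bound next to the true counts (a sketch; the bound is comically weak, but it does march off to infinity):

```python
from math import isqrt, log2

def prime_pi(m):
    """Count the primes <= m by trial division (slow, but fine for small m)."""
    return sum(all(k % d for d in range(2, isqrt(k) + 1)) for k in range(2, m + 1))

for N in (10**2, 10**3, 10**4):
    lower_bound = 0.5 * log2(N)
    print(N, prime_pi(N), round(lower_bound, 2))
    assert prime_pi(N) >= lower_bound
```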