Fundamental theorem of algebra

I really think this is a misnomer. The theorem has very little to do with algebra: it is an assertion about the complex numbers, which are very much an analytic object. Anyway, the proof is pretty short, so here goes.

Theorem 1 Every non-constant polynomial with coefficients in {{\mathbb C}} has at least one root in {{\mathbb C}}.

Proof: Let {P:{\mathbb C}\rightarrow{\mathbb C}} be a non-constant polynomial. Assume for the sake of contradiction that {P} has no roots in {{\mathbb C}}. Then, in fact, there is some small open set around zero that must not be in the image of {P}. Why? If {P(z)=a_n\cdot z^n+\cdots+a_1\cdot z+a_0} with {a_n\ne0}, then

\displaystyle \lim_{z\rightarrow\infty} \frac{P(z)}{z^n}=a_n,

so for {z} far enough away from zero, {P(z)/z^n} must be close to {a_n}. In particular, for large enough {|z|} we can make {|P(z)|/|z|^n} at least {|a_n|/2}. Hence, for large {|z|}, {|P(z)|} must be at least {(|a_n|/2)\cdot|z|^n}, and hence far away from zero. Let {R} be any value which is large enough to guarantee that for {|z|>R}, we have the result just described.

But what about the small complex numbers? Couldn’t those get close to zero? Alas, no. The set {\{z\mid |z|\le R\}} is compact, so the continuous function {|P(z)|} attains its minimum on it; if {P(z)} got arbitrarily close to zero on this set, it would in fact hit zero. By assumption, {P(z)} is never zero, and so it must be bounded away from zero. So let’s say that if {|w|<\varepsilon}, then {w} is not in the image of {P}. In other words, for each {z\in{\mathbb C}}, {|P(z)|\ge\varepsilon}.

Now if {P} is never zero, then {f=1/P} is defined on all of {{\mathbb C}}. But remember that {|P(z)|\ge\varepsilon}. This means that {|f(z)|\le 1/\varepsilon}. But now {f} is bounded and entire (holomorphic on all of {{\mathbb C}}), so by Liouville’s theorem, {f} must be constant, implying that {P} must be a constant polynomial. This contradicts our assumption that {P} was non-constant, so it must be that every non-constant polynomial has a root. \Box
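As a sanity check (not a proof!), here’s a small numerical illustration of the growth estimate above, using a sample polynomial of my own choosing: the ratio {P(z)/z^n} really does settle down to the leading coefficient as {|z|} grows.

```python
import cmath

# Sample polynomial P(z) = z^5 - 2z + 7 (an arbitrary choice), so n = 5
# and the leading coefficient a_n is 1.
def P(z):
    return z**5 - 2*z + 7

# As |z| grows, P(z)/z^5 approaches a_n = 1, so |P(z)| is eventually at
# least |z|^5 / 2 -- the key growth estimate from the proof.
for R in [10, 100, 1000]:
    z = R * cmath.exp(0.7j)           # a point of modulus R in an arbitrary direction
    print(R, abs(P(z) / z**5 - 1))    # shrinks toward 0 as R grows
```

The printed error is roughly {2/R^4}, exactly the lower-order terms being swamped by the leading one.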

This is just one of many proofs. You can read a bunch of them on Adam Azzam’s blog (which you should already be reading).

Liouville’s theorem

We can use the Cauchy estimates from last time to prove a very beautiful theorem:

Theorem 1 If {f\in{\mathcal H}({\mathbb C})} is bounded (i.e., {|f|<M} for some {M\in{\mathbb R}}), then {f} is constant.

Proof: Expand {f} as a power series at 0. We know that this representation is valid everywhere in {{\mathbb C}}. Write {f(z)=a_0+a_1z+a_2z^2+\cdots}. Recall that

\displaystyle a_n=\frac{f^{(n)}(0)}{n!}.

The Cauchy estimates tell us that

\displaystyle |a_n|=\frac{|f^{(n)}(0)|}{n!}\le\frac{M}{r^n}

for every positive {r} less than the radius of convergence of {f}. Of course, the radius of convergence is infinite, so this estimate is valid for all {r>0}. Thus, for {n>0}, {|a_n|} can be bounded by arbitrarily small positive numbers. That is, {a_n=0} for all {n>0}. In other words, {f(z)=a_0}, a constant. \Box

It’s worth mentioning that trigonometric functions such as {\sin} and {\cos} are holomorphic on all of {{\mathbb C}} (such functions are often called entire). Hopefully this doesn’t bother you, even in light of Liouville’s theorem. If it does, you’re remembering that {\sin x} is always between {-1} and {1}. You are correct for {x\in{\mathbb R}}, but Liouville’s theorem doesn’t say anything about functions on {{\mathbb R}}. Indeed, if you plug in complex numbers, you’ll see that {\sin z} can be arbitrarily large. In fact, look at {\sin(ix)} for {x\in{\mathbb R}}. You can use Wolfram Alpha to see that on the imaginary axis, {\sin z} takes on arbitrarily large values.
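If you don’t have Wolfram Alpha handy, Python’s cmath will do. This sketch checks that {|\sin(ix)|=\sinh(x)}, which blows up as {x} grows:

```python
import cmath
import math

# On the imaginary axis, sin(ix) = i*sinh(x), so |sin(ix)| = sinh(x),
# which is unbounded -- consistent with Liouville's theorem, since sin
# is entire but NOT bounded on all of C.
for x in [1, 5, 10]:
    print(x, abs(cmath.sin(1j * x)), math.sinh(x))  # the last two columns agree
```

Already at {x=10}, {|\sin(ix)|} is over eleven thousand.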

Cauchy estimates

We used a trick a few posts ago that I wanted to expound upon. Let {f\in{\mathcal H}(U)}, and {z_0\in U}. If we have a circle {\beta_r} of radius {r} centered at {z_0}, then since the circle is compact, there is some maximum value of {|f(z)|} on {\beta_r}. Call this {M_r}. (Last time we applied this with {f(z)-f(z_0)} in place of {f(z)}, and argued that in that case {M_r\rightarrow 0} as {r\rightarrow 0} because {f} is continuous.) I want to generalize this result slightly:

Theorem 1 (Cauchy Estimates) In the setup as above,

\displaystyle \left|f^{(n)}(z_0)\right| \le \frac{n!}{r^n}\cdot M_r

You can see that, for {n=0}, the result is exactly what we already used.

Proof: Recall from the generalized Cauchy integral formula that

\displaystyle f^{(n)}(z_0)=\displaystyle\frac{n!}{2\pi i}\int_{\beta_r}\frac{f(w)}{(w-z_0)^{n+1}}dw

Then we have the estimates:

\displaystyle \begin{array}{rcl} |f^{(n)}(z_0)| &=& \left|\displaystyle\frac{n!}{2\pi i}\int_{\beta_r}\frac{f(z)}{(z-z_0)^{n+1}}dz\right|\\ &\le& \displaystyle\frac{n!}{2\pi}\int_{\beta_r}\left|\frac{f(z)}{(z-z_0)^{n+1}}\right|\cdot\left|dz\right|\\ &=& \displaystyle\frac{n!}{2\pi}\int_{\beta_r}\frac{\left|f(z)\right|}{r^{n+1}}\cdot\left|dz\right|\\ &\le& \displaystyle\frac{n!}{2\pi}\cdot \frac{M_r}{r^{n+1}}\cdot 2\pi r\\ &=& \displaystyle\frac{n!}{r^n}\cdot M_r \end{array}

\Box
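Here’s a hedged numerical check of the estimate for {f=\exp} at {z_0=0}, where {f^{(n)}(0)=1} for every {n}. The maximum {M_r} is approximated by sampling the circle (an approximation of the true maximum, which for exp happens to be hit exactly at the sample point {\theta=0}):

```python
import cmath
import math

# Approximate M_r = max |exp(z)| on the circle of radius r by sampling.
def M(r, samples=1000):
    return max(abs(cmath.exp(r * cmath.exp(2j * math.pi * k / samples)))
               for k in range(samples))

# For f = exp, |f^(n)(0)| = 1, so the Cauchy estimate predicts
# 1 <= (n! / r^n) * M_r for every n and r.
for n in [0, 1, 2, 5]:
    for r in [0.5, 1.0, 2.0]:
        bound = math.factorial(n) / r**n * M(r)
        print(n, r, bound)            # every printed bound is >= 1
```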

Cauchy integral formula again

We’ve already proved the Cauchy integral formula:

Theorem 1 (Cauchy integral formula) For {f\in{\mathcal H}(U)}, {z\in U}, and any counterclockwise circle {\gamma} in {U} about {z} (or any loop homotopic to it in {U-\{z\}}),

\displaystyle f(z)=\frac1{2\pi i}\int_\gamma \frac{f(w)}{w-z}dw.

It’s really quite amazing. Thinking about it slightly more algebraically, consider the operator

\displaystyle F\mapsto\left(z\mapsto\int_\gamma \frac{F(w)}{w-z}dw\right).

Yes, this looks complicated, but the idea is you plug in a function {F}, and it spits out another function depending on {F} which you obtain by integrating that thing in a loop around the input. You shouldn’t expect this operator to behave in any reasonable way, but if you plug in a holomorphic function {F}, it spits out {2\pi i\cdot F}. That is,

Theorem 2 (Cauchy integral formula) Holomorphic functions are eigenfunctions of the above operator with eigenvalue {2\pi i}.

Thank you to Jordy Greenblatt for pointing this out to me. This result is what makes complex analysis so nice. Remember how eigenvectors were really nice for linear transformations. This is basically the same thing.

After that, we used this to prove that holomorphic functions were analytic. I’ll use this fact to prove the following extension:

Theorem 3 (Cauchy integral formula) Let {f\in{\mathcal H}(U)}, and {z_0\in U}, and {\gamma} a loop about {z_0}, as above. Then

\displaystyle f^{(n)}(z_0)=\frac{n!}{2\pi i}\int_\gamma\frac{f(w)}{(w-z_0)^{n+1}}dw.

Proof: Expand {f} as a power series about {z_0}. Then

\displaystyle f(z)=a_0+a_1(z-z_0)+a_2(z-z_0)^2+\cdots

Taking the {n}th derivative and evaluating at {z_0} yields

\displaystyle f^{(n)}(z_0)=n!a_n

Recalling from our proof that holomorphic functions are analytic,

\displaystyle f^{(n)}(z_0)=n!a_n=\frac{n!}{2\pi i}\int_\gamma\frac{f(w)}{(w-z_0)^{n+1}}dw.

\Box
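To see the formula in action, here’s a small numerical sketch: approximate the contour integral by a Riemann sum around the unit circle for {f=\exp} at {z_0=0}, where {f^{(n)}(0)=1} for every {n}.

```python
import cmath
import math

# Approximate f^(n)(0) = (n! / (2*pi*i)) * integral of exp(w)/w^(n+1) dw
# over the unit circle, using a Riemann sum with N sample points.
def nth_derivative_at_zero(n, N=2000):
    total = 0
    for k in range(N):
        w = cmath.exp(2j * math.pi * k / N)   # point on the unit circle
        dw = 2j * math.pi * w / N             # w'(t) * dt for this step
        total += cmath.exp(w) / w**(n + 1) * dw
    return math.factorial(n) / (2j * math.pi) * total

for n in range(4):
    print(n, nth_derivative_at_zero(n))       # each value is numerically ~1
```

(For periodic integrands like this one, the equally-spaced Riemann sum is astonishingly accurate, so the answers agree with 1 to many digits.)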

The theorem

We are finally ready to prove “the theorem” that I’ve been talking about for so long:

Theorem 1 Let {f:U\rightarrow{\mathbb C}} for {U} open in {{\mathbb C}}. Then the following are equivalent:

  1. {f} is holomorphic.
  2. {f} is infinitely differentiable.
  3. {f} is analytic.

Proof:

(2) implies (1). Duh.

(3) implies (2). Check it out here.

(1) implies (3). Finally, here we go:

Pick any point {z_0} in {U}. There is a small open ball {B} around {z_0} contained entirely in {U}. Pick a small circle around {z_0} of radius {r} contained in {B}, and call it {\beta_r}. We’ll show that {f} is given by a power series centered at {z_0}, at least within the ball of radius {r}. We’ll use the Cauchy integral formula to compute what the coefficients of the power series have to be. Actually, I already know what they are, so let

\displaystyle a_k=\frac1{2\pi i}\int_{\beta_r}\frac{f(w)}{(w-z_0)^{k+1}}dw.

Can you see why this might work? If I had such a power series {f(z)=a_0+a_1(z-z_0)+a_2(z-z_0)^2+\cdots}, and I plugged in {z_0}, all the terms would go away, except {a_0}. By my definition,

\displaystyle a_0=\displaystyle\frac{1}{2\pi i}\int_{\beta_r}\frac{f(w)}{w-z_0}dw,

which by the Cauchy integral formula, is just {f(z_0)}. Pretty cool, huh?

Okay, so here goes. For {z} inside the circle {\beta_r}, the circle {\beta_r} (traversed counter-clockwise) is homotopic in {U-\{z\}} to a small counter-clockwise circle about {z}, so the Cauchy integral formula gives:

\displaystyle \begin{array}{rcl} f(z) &=& \displaystyle\frac{1}{2\pi i}\int_{\beta_r}\frac{f(w)}{w-z}dw\\ &=& \displaystyle\frac{1}{2\pi i}\int_{\beta_r}\frac{1}{w-z_0}\cdot\frac{w-z_0}{(w-z_0)-(z-z_0)}f(w)dw\\ &=& \displaystyle\frac{1}{2\pi i}\int_{\beta_r}\frac{1}{w-z_0}\cdot\frac{1}{1-\frac{z-z_0}{w-z_0}}f(w)dw\\ &=& \displaystyle\frac{1}{2\pi i}\int_{\beta_r}\frac{1}{w-z_0}\cdot\sum_{k=0}^\infty\left(\frac{z-z_0}{w-z_0}\right)^kf(w)dw\\ &=& \displaystyle\frac{1}{2\pi i}\int_{\beta_r}\sum_{k=0}^\infty\frac{(z-z_0)^k}{(w-z_0)^{k+1}}f(w)dw \end{array}

We’re getting very close. Can you see how this might work? The {(z-z_0)^k} will have the coefficient involving an integral that has {(w-z_0)^{k+1}} in the denominator, just as {a_k} does.

The next step is to interchange the sum and the integral. If the sum were finite, we’d be allowed to do this, but because integrals are really limiting operations, and infinite sums are as well, we don’t just get to exchange them willy-nilly. I had a professor in undergrad who claimed “more than 80 percent of analysis is figuring out when you can interchange limits.” I thought that sounded lame at the time, but it’s definitely helped me remember to always justify such interchanges. Anyway, we can use the Weierstrass M-test. Recall how we came up with the bound {M_r} last time (by compactness of the circle). We’ll use the same trick here, taking {M_r} to be the maximum of {|f(w)/(w-z_0)|} on {\beta_r}, so that {|f(w)/(w-z_0)|\le M_r}. Now we’re left with just

\displaystyle \left|\frac{(z-z_0)^k}{(w-z_0)^{k+1}}f(w)\right|\le M_r\cdot \left|\frac{z-z_0}{w-z_0}\right|^k.

Recall that {w} is always chosen on {\beta_r}, so {|w-z_0|=r}, and {z} is chosen inside the circle, so {|z-z_0|<r}. Thus, the entire fraction is less than 1 in absolute value. The sum over all {k} of these is a convergent geometric series, and so the Weierstrass {M}-test tells us that the sum of the integrals is the integral of the sum. Thus

\displaystyle \begin{array}{rcl} f(z) &=& \displaystyle\frac{1}{2\pi i}\int_{\beta_r}\sum_{k=0}^\infty\frac{(z-z_0)^k}{(w-z_0)^{k+1}}f(w)dw\\ &=& \displaystyle\sum_{k=0}^\infty\frac{1}{2\pi i}\int_{\beta_r}\frac{(z-z_0)^k}{(w-z_0)^{k+1}}f(w)dw\\ &=& \displaystyle\sum_{k=0}^\infty(z-z_0)^k\frac{1}{2\pi i}\int_{\beta_r}\frac{f(w)}{(w-z_0)^{k+1}}dw\\ &=& \displaystyle\sum_{k=0}^\infty(z-z_0)^k\cdot a_k. \end{array}

\Box
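Here’s the whole construction run numerically for the holomorphic function {f(z)=1/(1-z)}: compute a few coefficients {a_k} by a Riemann-sum approximation of the contour integral over a circle of radius {1/2} about {z_0=0}, and check that the resulting partial sums reproduce {f}. The radius and test point are arbitrary choices for this sketch.

```python
import cmath
import math

# f(z) = 1/(1 - z) is holomorphic on the unit disc, with power series
# 1 + z + z^2 + ... about z_0 = 0, so every coefficient a_k should be ~1.
def a(k, r=0.5, N=2000):
    total = 0
    for j in range(N):
        w = r * cmath.exp(2j * math.pi * j / N)   # point on the circle of radius r
        dw = 2j * math.pi * w / N                 # w'(t) * dt for this step
        total += (1 / (1 - w)) / w**(k + 1) * dw
    return total / (2j * math.pi)

coeffs = [a(k) for k in range(8)]
print(coeffs)                          # all numerically ~1

z = 0.25                               # a test point inside the circle
partial_sum = sum(c * z**k for k, c in enumerate(coeffs))
print(partial_sum, 1 / (1 - z))        # partial sum is close to f(z)
```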

Yellow Pigs Day

We’re tantalizingly close to “the theorem,” but today we take a break from complex analysis to talk about something much more exciting: Yellow Pigs Day.

Yellow Pigs Day is a holiday celebrated at Hampshire College Summer Studies in Mathematics (HCSSiM) every July 17th. For those who don’t know, HCSSiM is a 6-week summer program for high school students. There’s a lot of time spent doing math, but not exclusively. In past summers there have been bands, frisbee teams, bridge clubs, and the like. Those familiar with the program will undoubtedly realize that this explanation doesn’t do HCSSiM justice. It’s so much more than “just a summer of math,” but we’ll get back to it.

Yellow Pigs Day usually consists of classes (as usual), a talk on the social/historical significance of the number 17, a cake shaped/colored like a yellow pig, singing of math songs (a.k.a. yellow pig carols), and an ultimate frisbee game between the current students and the staff/alumni. That’s right: many alumni come back to the program to celebrate. Alumni (such as yours truly) tend to be quite fond of the program.

No doubt you think this program, and in particular, Yellow Pigs Day, is quite odd. You are correct. One of the many things I love about HCSSiM is how readily everyone embraces these oddities.

Thus far, I’m sure you don’t really understand the pure joy HCSSiM can induce in its participants (students and staff). Not being particularly eloquent myself, I doubt I can give you a good explanation, but let me attempt anyway.

The focus of the program is not learning math, but creating it. The staff are around to guide the students in a right direction (notice “a,” not “the,” as there can be many correct directions to pursue). Of course, the students aren’t creating new theorems that no one has seen before, but that doesn’t take away from their excitement as they discover it for themselves.

Sometimes, math can seem quite dry, but not at HCSSiM. Staff (and often students) take pride in being able to explain the mathematics in an exciting manner. This might involve a motivating story, or a humorous analogy. Let me give you a few examples:

  • The proof of Euler’s formula (for planar graphs) is often described by pirates trying to attack a castle, displayed diagrammatically by the graph. How many walls must they destroy to be able to reach every room in the castle?
  • Just yesterday, one workshop described Hall’s theorem with the following analogy: Ollivander has 100 wands he wants to give away, and 100 incoming Hogwarts students to give them to. Of course, not every wand will work well with every student. Obviously, if there’s any hope of a matching, every collection of N students must be able to use at least N wands. Is this enough?
  • Students sometimes decide to use their own notation. This year, they wrote \binom{n}{k} as a pacman symbol with the n inside the pacman, and the k in the mouth. They pronounced the symbol “n chews k.”
I’m really not explaining it well. The point is, HCSSiM has a way of making math fun and goofy without losing track of the mathematics. For your daily dose of HCSSiM, start reading Cathy O’Neil’s blog.

I can give nothing but praise to this program. As a student, it changed my outlook on mathematics. As a staff member, it challenged me to become a better teacher, to be creative (while still being accurate and appropriate) with my explanations. If you missed out on HCSSiM in high school, you should seriously consider applying to be staff. You don’t want to miss out on the best summer experience of your life.

Happy Yellow Pigs Day, all! Next year in Amherst!

Cauchy integral formula

Suppose I have a function {f} which is holomorphic on an open set {U} in {{\mathbb C}} (you should be used to this setup by now). Pick some point {a\in U}. Now what about the function {f(z)/(z-a)}? It’s easy to check that it’s holomorphic on {U-\{a\}}. You can do this by applying the multiplication rule and the chain rule to one of the example functions we saw here.

Now let {\gamma} denote a loop which goes around {a}, such as the one to the left. The Cauchy integral theorem says that the integral around a loop is zero, so long as the region the loop bounds lies in the domain, but this one doesn’t (it’s missing the point {a}). But we can be tricky and make a different loop composed of four parts. Take a look at the picture to the right.

The four parts are {\gamma} (the loop we started with), a straight line segment which we’ll call {\sigma}, a small circle going around {a} which we’ll call {\beta}, and the same segment {\sigma} traversed in the opposite direction. The whole thing is the boundary of the shaded region. Notice that if {\gamma} is traversed counter-clockwise, then {\beta} is traversed clockwise. Integrating over the entire curve yields zero. Since we go across {\sigma} once forwards and once backwards, it contributes zero. Thus, it follows that

\displaystyle 0=\int_\gamma\frac{f(z)}{z-a}dz+\int_\beta \frac{f(z)}{z-a}dz.

But this is where {\beta} is traversed clockwise. If we went counterclockwise, the two integrals (around {\gamma} and around {\beta}) would be equal. Moreover, I never said how small a circle {\beta} was. Indeed, it can be as small as we like. We’ll use this to prove the following nice result:

Theorem 1 (Cauchy integral formula) Let {f}, {a}, and {\gamma} be as above. Then

\displaystyle f(a)=\frac1{2\pi i}\displaystyle\int_\gamma\frac{f(z)}{z-a}dz.

Proof: Like we said, it doesn’t matter which loop we take, so take a loop {\beta_r} which is a counter-clockwise circle of radius {r} about {a}, and compute

\displaystyle \frac1{2\pi i}\int_{\beta_r}\frac{f(z)}{z-a}dz-f(a).

Let {z} be a function of {\theta} given by {z=a+r\cdot e^{i\theta}}. Then {dz=re^{i\theta}\cdot id\theta=(z-a)\cdot id\theta}, and

\displaystyle \begin{array}{rcl} \frac1{2\pi i}\displaystyle\int_{\beta_r}\frac{f(z)}{z-a}dz-f(a) &=& \frac1{2\pi i}\displaystyle\int_0^{2\pi}f(z)\cdot id\theta-f(a)\\ &=& \frac1{2\pi }\displaystyle\int_0^{2\pi}f(z)\cdot d\theta-f(a) \end{array}

The next step is to move {f(a)} into the integral. To do so, we need to multiply it by {2\pi} because of the {1/(2\pi)} outside the integral. But we also need to divide by {2\pi} because we’re integrating a constant from {0} to {2\pi}. In other words, the fraction on the outside “makes up” for the integral, and we can pull {f(a)} straight through. This gives us

\displaystyle \frac1{2\pi}\int_0^{2\pi}\left(f(z)-f(a)\right)d\theta.

Now how big can {f(z)-f(a)} get? I don’t have the slightest clue, but I can tell you that for the values {z} we pick on the circle of radius {r} about {a}, {f(z)-f(a)} cannot be arbitrarily large in absolute value. This is simply because the circle is a compact set, and every continuous real-valued function (such as {|f(z)-f(a)|}) attains its maximum on a compact set. Let {M_r} be the maximum magnitude of {f(z)-f(a)} attained on the circle of radius {r}. Then

\displaystyle \begin{array}{rcl} \left|\frac1{2\pi}\int_0^{2\pi}\left(f(z)-f(a)\right)d\theta\right| &\le& \frac1{2\pi}\int_0^{2\pi}\left|f(z)-f(a)\right|d\theta\\ &\le& \frac1{2\pi}\int_0^{2\pi}M_r d\theta\\ &=& M_r \end{array}

As we mentioned earlier, it doesn’t matter what {r} we pick, and since {f} is continuous, very small choices of {r} yield very small differences between {f(z)} and {f(a)}, so we can make {M_r} as small as we like. Altogether, this says that the magnitude of

\displaystyle \frac{1}{2\pi i}\displaystyle\int_\gamma\frac{f(z)}{z-a}dz-f(a)

is zero, meaning the quantity itself is zero, which proves the theorem. \Box
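A numerical spot-check of the formula, with {f=\exp} and an arbitrarily chosen point {a} inside the unit circle, approximating the integral by a Riemann sum:

```python
import cmath
import math

# Check f(a) = (1/(2*pi*i)) * integral of f(z)/(z - a) dz over the unit
# circle, for f = exp and a point a chosen inside the circle.
a = 0.3 + 0.2j
N = 2000
total = 0
for k in range(N):
    z = cmath.exp(2j * math.pi * k / N)   # point on the unit circle
    dz = 2j * math.pi * z / N             # z'(t) * dt for this step
    total += cmath.exp(z) / (z - a) * dz

print(total / (2j * math.pi))             # the approximated integral...
print(cmath.exp(a))                       # ...matches f(a)
```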

Cauchy Integral Theorem

Suppose I’m thinking of a path between two complex numbers {z_0} and {z_1}, and I have a holomorphic function {f:U\rightarrow{\mathbb C}} that I want you to integrate on the path. The problem is, I’m obnoxious, and I didn’t tell you what the path was. Can you do it?

The short answer is unfortunately no. The slightly longer answer is: it depends on the open set {U}. If {U} has no holes, then you can do it. Sometimes, you can even if {U} does have holes, but that’s not what we’re going to worry about. The claim is that if I can continuously morph one path {\gamma_0} into another path {\gamma_1}, then

\displaystyle \int_{\gamma_0} f(z)dz=\int_{\gamma_1}f(z)dz.

Such a “continuous morphing” is called a homotopy. Maybe you can see, roughly speaking, how a hole could be an obstruction to morphing one path into another. The intuition: if you tie down the two ends of a string and stick a big “pole” in the ground, you can’t get the string to go around the other side of the pole (without lifting it off the ground).

Here’s another way to look at it. Suppose I first integrate over the curve {\gamma_0}, and then I integrate over the curve {\gamma_1}, but I do it backwards. The result would be the difference between the two integrals. If {\beta} is the curve given by {\gamma_0} and then {\gamma_1} backwards, then

\displaystyle \int_\beta f(z)dz=\int_{\gamma_0} f(z)dz-\int_{\gamma_1}f(z)dz.

I’d like to show that this integral is zero. One way to do this would be to show that if I integrate over any loop which is homotopic (can be shrunk down) to a point, then the integral must be zero.

Indeed these two things are equivalent, but I’m not going to prove either. To avoid some messy details, and sweep some others under the rug, I’m going to prove a result that is slightly weaker, though I hope strong enough to give you the right idea. The details I’m leaving out aren’t complex analysis. They’re topology, and in my opinion, boring. I’ll prove this weaker result, and then use the stronger one. If you’re interested, Ahlfors has a proof of the stronger result.

This weaker result is about loops which enclose a simply-connected region. That is, it’s the same result as I want to prove, but only for loops which are the boundary of some region.

Theorem 1 Let {R} be a simply-connected region contained in an open set {U\subseteq{\mathbb C}}, and let {\gamma:[0,1]\rightarrow\partial R} be a parameterization of its boundary. Then for {f\in{\mathcal H}(U)},

\displaystyle \int_\gamma f(z)dz=0.

Proof: Let {f=u+iv}, and let {z=x+iy}. Then we have the relation on 1-forms {dz=dx+idy}. As before, we’ll be using the notation {u_x} as shorthand for {\frac{\partial u}{\partial x}}. Now

\displaystyle \int_\gamma fdz=\int_\gamma(udx-vdy)+i\int_\gamma(vdx+udy).

From Green’s theorem (which we won’t prove because it is real analysis), we know that

\displaystyle \int_\gamma (udx-vdy)=\iint_R(-v_x-u_y)dxdy,

and that

\displaystyle \int_\gamma(vdx+udy)=\iint_R(u_x-v_y)dxdy.

Of course, the Cauchy-Riemann equations tell us that {-v_x-u_y=0} and {u_x-v_y=0}, so

\displaystyle \int_\gamma f(z)dz=0,

as desired. \Box
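As a quick numerical illustration, here is the theorem for {f=\exp} integrated around the unit circle (which bounds a disc, certainly a simply-connected region), approximated by a Riemann sum:

```python
import cmath
import math

# The Cauchy integral theorem predicts this integral is 0: exp is
# holomorphic on all of C, and the unit circle bounds a disc.
N = 2000
total = 0
for k in range(N):
    z = cmath.exp(2j * math.pi * k / N)   # point on the unit circle
    dz = 2j * math.pi * z / N             # z'(t) * dt for this step
    total += cmath.exp(z) * dz

print(abs(total))                         # numerically ~0
```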

A discussion with some friends revealed that this is sort of a hack. I just swept the difficult part of the theorem into Green’s theorem and then didn’t bother to prove it. I know. But I like it anyway, and the reason I like it is because I had never seen the connection between the Cauchy-Riemann equations and the result of Green’s theorem. In some sense, one might say that complex analysis is the study of a very special class of functions that behave in remarkably nice ways with respect to Green’s theorem.

Complex integration

It’s now time to introduce the integral. Again, I’ll assume we already understand real integrals. As you might expect, if {h:[a,b]\rightarrow{\mathbb C}}, then we will say that {h} is integrable if {{\mbox{Re}}(h)} and {{\mbox{Im}}(h)} are both integrable. For the pedantic, here I suppose I mean Riemann integrable. Anyway, if so, then we can define

\displaystyle \int_a^bh(t)dt=\int_a^b({\mbox{Re}} h)(t)dt+i\int_a^b({\mbox{Im}} h)(t)dt.

This is nice, but not exactly what we want. We want path integrals. If {\gamma:[a,b]\rightarrow{\mathbb C}} is a path, and {f} is defined on some open set containing the image of {\gamma}, then

\displaystyle \int_\gamma f(z)dz=\int_a^bf(\gamma(t))\gamma'(t)dt.

This may look mysterious, but I assure you, it isn’t. In fact, there is pretty much no difference between path integrals over {{\mathbb C}}, and path integrals over {{\mathbb R}^2}. They’re certainly defined similarly. Anyway, let’s do an example. If {\gamma} is the unit circle {\gamma(t)=e^{i\cdot t}} for {t\in[0,2\pi]}, then

\displaystyle  \begin{array}{rcl}  \displaystyle\int_\gamma z^ndz &=& \displaystyle\int_0^{2\pi}\gamma(t)^n\gamma'(t)dt\\ &=& \displaystyle\int_0^{2\pi}e^{i\cdot nt}\cdot ie^{i\cdot t}dt\\ &=& i\displaystyle\int_0^{2\pi}e^{i\cdot (n+1)t}dt \end{array}

If {n=-1}, we have {i\displaystyle\int_0^{2\pi}dt=2\pi i}. Otherwise, the integral is

\displaystyle \left.\frac i{i\cdot(n+1)}\cdot e^{i\cdot (n+1)t}\right|_0^{2\pi}=0.
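The computation above is easy to replicate numerically. This sketch approximates {\int_\gamma z^n dz} by a Riemann sum and recovers {2\pi i} for {n=-1} and {0} for every other {n}:

```python
import cmath
import math

# Approximate the integral of z^n over the unit circle gamma(t) = e^(it)
# by a left Riemann sum of the integrand gamma(t)^n * gamma'(t).
def integral_of_power(n, N=1000):
    total = 0
    for k in range(N):
        t = 2 * math.pi * k / N
        z = cmath.exp(1j * t)
        total += z**n * (1j * z) * (2 * math.pi / N)   # gamma'(t) dt
    return total

for n in [-3, -2, -1, 0, 1, 2]:
    print(n, integral_of_power(n))    # ~2*pi*i for n = -1, ~0 otherwise
```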

Also, as one might expect, the “speed” at which you traverse a curve is irrelevant. If you reparameterize the curve, the integral computed is identical.

Theorem 1 If {f} is integrable along a path {\gamma}, and {\beta} is a reparameterization of {\gamma}, then

\displaystyle \displaystyle\int_\gamma f(z)dz=\displaystyle\int_\beta f(z)dz.

Proof: Write {\beta=\gamma\circ\phi}. We’ll have {\beta:[a,b]\rightarrow{\mathbb C}}, {\gamma:[c,d]\rightarrow{\mathbb C}}, and {\phi:[a,b]\rightarrow[c,d]}, with {\phi} continuously differentiable and increasing (this is what it means to be a reparameterization). Then

\displaystyle  \begin{array}{rcl}  \displaystyle\int_\beta f(z)dz &=& \displaystyle\int_a^bf(\beta(t))\beta'(t)dt\\ &=& \displaystyle\int_a^b f(\gamma(\phi(t)))\gamma'(\phi(t))\phi'(t)dt \end{array}

Using the “{u}-substitution” {u=\phi(t)}, so {du=\phi'(t)dt}, we get

\displaystyle  \begin{array}{rcl}  \displaystyle\int_\beta f(z)dz &=& \displaystyle\int_a^b f(\gamma(\phi(t)))\gamma'(\phi(t))\phi'(t)dt\\ &=& \displaystyle\int_c^d f(\gamma(u))\gamma'(u)du\\ &=& \displaystyle\int_\gamma f(z)dz \end{array}

\Box
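Here’s the theorem checked numerically for {f(z)=1/z} on the unit circle, comparing the uniform parameterization against one re-timed by {\phi(t)=t^2/2\pi} (both my own arbitrary choices); the answers agree at {2\pi i}.

```python
import cmath
import math

TWO_PI = 2 * math.pi

# Left Riemann sum for the path integral of f(z) = 1/z along a curve
# given by gamma and its derivative dgamma on [a, b].
def integrate(gamma, dgamma, a, b, N=100000):
    h = (b - a) / N
    return sum((1 / gamma(a + k * h)) * dgamma(a + k * h) * h
               for k in range(N))

# Uniform parameterization of the unit circle: gamma(t) = e^(it).
uniform = integrate(lambda t: cmath.exp(1j * t),
                    lambda t: 1j * cmath.exp(1j * t), 0, TWO_PI)

# Reparameterized by phi(t) = t^2 / (2*pi), so beta = gamma o phi and
# beta'(t) = gamma'(phi(t)) * phi'(t) by the chain rule.
retimed = integrate(lambda t: cmath.exp(1j * t * t / TWO_PI),
                    lambda t: (2j * t / TWO_PI) * cmath.exp(1j * t * t / TWO_PI),
                    0, TWO_PI)

print(uniform, retimed)               # both are ~2*pi*i
```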

There are a few more facts I’m going to take for granted. First, if I integrate along a path {\gamma}, but traverse it backwards, then the value of the integral is negated. Second, if I traverse two paths, first {\beta}, then {\gamma}, the integral is the sum of the integrals over each piece separately.

Analytic functions (part 3)

Last time we showed that analytic functions are continuous. This time we’ll show that {f'} is also analytic, with the same radius of convergence. Of course, it suffices to prove this for power series, analytic functions being a bunch of “patched together” power series.

Theorem 1 Let {f(z)=a_0+a_1(z-z_0)+a_2(z-z_0)^2+\cdots} for {a_n\in{\mathbb C}} be a power series centered at {z_0} with radius of convergence {R}. Then {f'} is also a power series, and has radius of convergence {R}.

Proof: We know that {f} is the pointwise limit of the polynomials

\displaystyle f_n(z)=a_0+a_1(z-z_0)+\cdots+a_n(z-z_0)^n.

By definition, {f'=\frac{d}{dz}f=\frac{d}{dz}\lim_{n\rightarrow\infty}f_n}. It would be nice to show that {f'} is the pointwise limit of {f'_n}. This comes down to showing that I can move the differential operator {\frac{d}{dz}} across the limit, so that

\displaystyle f'=\frac{d}{dz}\lim_{n\rightarrow\infty}f_n=\lim_{n\rightarrow\infty}\frac{d}{dz}f_n=\lim_{n\rightarrow\infty}f'_n.

In general I am not allowed to do this, but if the derivatives {f'_n} converge uniformly (and {f_n} converges at least at one point), then I can. This is a standard theorem from analysis that I won’t prove. If you like, it’s theorem 7.17 in Baby Rudin. The convergence of {f'_n} need not be uniform on the whole disc of convergence, but on any ball with radius smaller than the radius of convergence, the convergence is uniform. This is the same trick that we used yesterday to show that {f_n\rightarrow f} uniformly on smaller balls, giving continuity of {f}. This tells us that {f'} is exactly the derivative you would expect. You can differentiate a power series by differentiating each term in the series:

\displaystyle f'(z)=a_1+2a_2(z-z_0)+3a_3(z-z_0)^2+4a_4(z-z_0)^3+\cdots.

To compute the radius of convergence of {f'}, we’ll instead compute the radius of convergence of {(z-z_0)\cdot f'}. Clearly this converges if and only if {f'} does. The coefficient of {(z-z_0)^n} in {(z-z_0)\cdot f'} is {n\cdot a_n}, so the radius of convergence is given by

\displaystyle \left(\limsup_{n\rightarrow\infty}\sqrt[n]{n\cdot|a_n|}\right)^{-1}=\frac1{\displaystyle\lim_{n\rightarrow\infty}\sqrt[n]n}\cdot\left(\limsup_{n\rightarrow\infty}\sqrt[n]{|a_n|}\right)^{-1}=1\cdot R.

This completes the proof. \Box
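A quick numerical check with the geometric series {1+z+z^2+\cdots=1/(1-z)} (radius of convergence 1): differentiating term by term should give {1/(1-z)^2}, and well inside the radius of convergence the truncated differentiated series agrees with the closed form.

```python
# Term-by-term derivative of the geometric series sum of z^k, which is
# 1/(1 - z); the differentiated series should converge to 1/(1 - z)^2
# inside the radius of convergence R = 1.
z = 0.3                               # well inside the disc of convergence
terms = 60
termwise = sum(k * z**(k - 1) for k in range(1, terms))
print(termwise)                       # ~2.0408...
print(1 / (1 - z)**2)                 # the closed form agrees
```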

Corollary 2 Let {f:U\rightarrow{\mathbb C}} ({U} an open subset of {{\mathbb C}}) be analytic. Then {f\in{\mathcal H}(U)} (“yes”). Moreover, {f} is in fact infinitely differentiable on {U} (“HELL YES!”).

Proof: First, pick any {z\in U}. Locally, {f} is given by a power series centered at some point {z_0} with radius of convergence {R}, so that {|z-z_0|<R}. Then {f'} is locally given by a power series centered at {z_0} with the same radius of convergence, and so {f'} is defined at {z}. Our choice of {z\in U} was arbitrary, so {f'} is defined on all of {U}, and hence {f\in{\mathcal H}(U)}.

Moreover, {f'} is analytic, and so {f'\in{\mathcal H}(U)}. Also, {f''} is analytic, so {f''\in{\mathcal H}(U)}. In general, {f^{(n)}} is analytic, and so {f^{(n)}\in{\mathcal H}(U)}. Thus, {f} is infinitely differentiable.

\Box

We are now two-thirds done with that theorem about the equivalences:

Theorem 3 Let {f:U\rightarrow{\mathbb C}}. The following are equivalent:

  • {f} is differentiable (holomorphic)
  • {f} is infinitely differentiable
  • {f} is locally representable by power series (analytic)

The second obviously implies the first, and today we just showed that the third implies the second. To see that the first implies the third, we’ll need to build a bit more machinery, so integrals are next on the agenda.