# Can we agree this is simpler?

I’m going to start writing about something I am not too familiar with: analysis. More specifically, I want to remember some basic complex analysis, and while doing that, I’ll share it with you. I’m going to assume some basic working knowledge of analysis, but not very much. Pretty much only what I have.

For today, I wanted to talk about continuity. We have an intuitive grasp of what it means to be continuous (essentially it means “no jumps”), but when we try to write this down exactly, we get this monstrosity:

$(\forall \varepsilon >0)(\exists \delta >0)(\forall x)\big(|p-x|<\delta\Rightarrow|f(p)-f(x)|<\varepsilon\big)$

Even once you’ve gotten used to this, you’re sometimes thrown a curveball with these so-called “$\varepsilon/2$ arguments.” What I mean is that, maybe you have two parts whose sum has to be less than $\varepsilon$, so you’d want to ensure each part is less than $\varepsilon/2$. If each part is continuous, then for small enough $\delta$, you can make each part less than whatever number you want: say, less than $\varepsilon/2$.
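For instance (a standard textbook example, not one from this post): to show $f+g$ is continuous at $p$ when $f$ and $g$ both are, the triangle inequality splits the error into two parts, and asking for $\varepsilon/2$ from each part does the job:

```latex
% \varepsilon/2 argument for the continuity of f + g at p.
% Given \varepsilon > 0, use continuity of f and g with target \varepsilon/2
% to get \delta_1, \delta_2, and set \delta = \min(\delta_1, \delta_2). Then
|x - p| < \delta \;\Rightarrow\;
|(f+g)(x) - (f+g)(p)|
  \le |f(x) - f(p)| + |g(x) - g(p)|
  < \tfrac{\varepsilon}{2} + \tfrac{\varepsilon}{2}
  = \varepsilon.
```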

And of course, there are “$\varepsilon/3$ arguments,” and “$\varepsilon/N$ arguments,” and even wackier things. Stating it this way may seem reasonable, but it always annoyed and confused me when a proof started by saying “Pick $\delta$ that bounds some continuous expression by some wacky function of $\varepsilon$.” Then, as you go through the proof, you undo the wacky function of $\varepsilon$, only to be left with $\varepsilon$ at the end. This is certainly not how the author thought of the proof, and it is not how I am going to understand it.

The core idea is that things don’t get “too big.” If you start with $\varepsilon$, and at the end of a proof, you end up with $2\varepsilon$, what’s the big deal? It’s still a function of $\varepsilon$ that goes to zero. I found a quote on this blog that I’ve always really liked:

> Mathematical maturity is when you’re grown up enough to handle a “$2\varepsilon$.” —Michael Sharpe

Now I’m going to actually prove for you that this is okay, so that from now on, I can do it the nice way, instead of the confusing way.

Theorem: Let $g$ be a function such that $g(t)\to0$ as $t\to 0$. Suppose that for every positive $t$ there exists some $\delta>0$ such that whenever $|x-x_0|<\delta$, then $|f(x)-f(x_0)|<g(t)$. Then $f$ is continuous at the point $x_0$.

Proof: Let $\varepsilon>0$. Pick $t$ such that $g(t)<\varepsilon$. We can do this for any $\varepsilon$ because $g$ tends to $0$ as $t$ goes to $0$. Find $\delta$ so that whenever $|x-x_0|<\delta$, then $|f(x)-f(x_0)|<g(t)<\varepsilon$. Done.
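As a quick numerical sanity check (my own toy example, not from the post): take $f(x)=2x$ at $x_0=0$, so $|f(x)-f(x_0)|=2|x-x_0|$, and let $g(t)=2t$. The theorem's recipe says: given $\varepsilon$, pick $t$ with $g(t)<\varepsilon$ (say $t=\varepsilon/4$), then use the $\delta$ that works for that $t$ (here $\delta=t$ itself):

```python
# Toy check of the "ending with 2*epsilon is fine" recipe, with f(x) = 2x,
# x0 = 0, and g(t) = 2t. (This example is mine, not the author's.)

def f(x):
    return 2 * x

def delta_for(eps):
    # Following the proof: pick t with g(t) = 2t < eps, e.g. t = eps / 4;
    # for this f, the delta that works for that t is delta = t.
    return eps / 4

x0 = 0.0
for eps in [1.0, 0.1, 0.001]:
    delta = delta_for(eps)
    # Sample points with |x - x0| < delta and confirm |f(x) - f(x0)| < eps.
    samples = [x0 + delta * k / 10 for k in range(-9, 10)]
    assert all(abs(f(x) - f(x0)) < eps for x in samples)
print("continuity check passed")
```

The point of the sketch: nobody had to anticipate the final $2$ up front; we just undid it at the end by shrinking $t$.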

At times I may use generalizations of this without taking the time to prove them in the blog. I promise the ideas are all the same, and the proofs are just as straightforward and just as short.

### 2 Responses to Can we agree this is simpler?

1. Adam Azzam says:

Speaking of “wackier things”, I’ve always loved the $\epsilon/2^n$ arguments in analysis. It puts $\epsilon/3$ arguments to shame.

For example, if you want an open dense set with measure at most $\epsilon$, then you can enumerate the rationals and take an interval of width $\epsilon/2^n$ around the $n$-th rational. The union is open and contains the rationals, hence is dense, but the measure of the union is at most $\sum_{n=1}^{\infty}\frac{\epsilon}{2^n}=\epsilon$.
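The arithmetic behind that bound is just the geometric series $\sum_{n\ge1}\epsilon/2^n=\epsilon$; a quick exact check (my own snippet, using rational arithmetic to avoid floating-point rounding):

```python
from fractions import Fraction

# Each interval around the n-th rational has width eps/2^n, so the total
# length of the union is at most the geometric series sum_{n>=1} eps/2^n = eps.
eps = Fraction(1, 2)
for N in [5, 20, 60]:
    partial = sum(eps / 2**n for n in range(1, N + 1))
    assert partial < eps            # every partial sum stays strictly below eps
assert eps - partial < eps / 2**59  # and the tail shrinks geometrically to 0
print("total measure is at most", eps)
```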

Glad to see you’ve seen the light and are writing about analysis!

• Adam Azzam says:

Just to be persnickety, I meant open dense subset of $\mathbb{R}$ (with Lebesgue measure).