Applying Yoneda’s Lemma

Let’s write down Yoneda’s lemma one more time. For a locally small category \mathcal C, an object A of \mathcal C, and a functor F from \mathcal C into \textsc{Set},

\mbox{Nat}(\hom(A,-),F)\cong F(A).

One instance of Yoneda’s lemma is to take the second functor F to be \hom(B,-). Then we have a nice symmetric looking statement:

\mbox{Nat}(\hom(A,-),\hom(B,-))\cong \hom(B,A).

This is called the Yoneda embedding. Now let’s do some magic. Let G be a group. We can regard G as a category with one object (the elements of the group are the morphisms in the category). Call that object *. If we take A and B both to be *, then we get

\mbox{Nat}(\hom(*,-),\hom(*,-))\cong \hom(*,*).

The righthand-side is just the morphisms from * to itself, which is to say, the group G itself. The morphisms from * to * form a group; that’s what we said. Okay, technically, it’s the set of all of those morphisms, but the correspondence we’ll get from Yoneda will give it a group structure.

On the lefthand-side, we need to understand \hom(*,-). As we defined it, it acts on morphisms by composition. That is, \hom(*,f) is “composition on the left with f.” So I’m looking for the ways to transform \hom(*,-) into itself that make the relevant squares commute. This is likely difficult to picture, but these are going to correspond to functions on sets. Specifically, they correspond to permutations of the set \hom(*,*): naturality forces each one to be composition with a fixed element of the group (pushing the elements around by composing with it).

If I’ve done my explaining correctly (I most certainly have not) this should be a rough sketch of a proof of Cayley’s theorem in group theory. Indeed, the Yoneda Lemma is a generalization of this theorem. Cayley is easy to prove without Yoneda, but the correspondence is an important one to see. At least for me, it helps me understand Yoneda’s lemma slightly better. I think of Yoneda as Cayley’s big brother.
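Not part of the original post, but here is a concrete sketch of the Cayley correspondence in Python, using the cyclic group \mathbb{Z}/5 (written additively) as G; the names `mult` and `left_mult` are mine.

```python
# Cayley's theorem, concretely, for the cyclic group Z/5:
# each g gives a permutation of the group's underlying set by
# multiplying (here: adding) on the left, and g -> left_mult(g)
# is an injective homomorphism into a symmetric group.

n = 5
elements = list(range(n))

def mult(g, h):
    """The group operation (addition mod 5, written multiplicatively above)."""
    return (g + h) % n

def left_mult(g):
    """The permutation of the underlying set induced by g: x -> g*x."""
    return tuple(mult(g, x) for x in elements)

# Each left multiplication really is a permutation.
for g in elements:
    assert sorted(left_mult(g)) == elements

# The map g -> left_mult(g) is injective: distinct g give distinct permutations.
assert len({left_mult(g) for g in elements}) == n

# And it is a homomorphism: left_mult(g*h) is left_mult(g) after left_mult(h).
for g in elements:
    for h in elements:
        composed = tuple(left_mult(g)[x] for x in left_mult(h))
        assert composed == left_mult(mult(g, h))
```

Swapping in any finite group's multiplication table for `mult` gives the same checks.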


Yoneda’s Lemma (part 3)

Yesterday’s post probably made you woozy. It was long and tedious, and I didn’t even do all of the lemma. I skipped the parts about the isomorphism being natural in F and in A. Eh, deal with it. It all comes down to checking that things commute. If you have the patience, and you go slowly enough, you can make it work, though I’m not sure what sort of insight it gives you.

So let’s summarize. Yoneda states that for locally small categories,

\mbox{Nat}(h^A,F)\cong F(A),

and the proof can be summed up in the following diagram (source: Wikipedia entry on Yoneda’s Lemma)


Wikipedia’s diagram is slightly nicer than mine, in that it tells you what the maps are right on the diagram. Very pretty. Good job, internet. What would we do without you?

Okay, great, so what can we do with this? There’s a nice simple corollary to Yoneda’s lemma that you may already know. We’ll show it next time.

Yoneda’s Lemma (part 2)

Theorem (Yoneda):

Let \mathcal C be a locally small category. Let A be an object in \mathcal C, and let F:\mathcal C\to\textsc{Set}. Then,

\mbox{Nat}(h^A,F)\cong F(A),

and this isomorphism is natural in F and in A. (We won’t prove this part here.)


Given a natural transformation \eta from h^A=\hom(A,-) to F, we need to produce an element in F(A). There aren’t many things we could try. We have a map \eta_A:\hom(A,A)\to F(A). We can use that to produce an element in F(A). Of course, we’d have to plug something in to \eta_A, and there could be many choices there. There is an obvious one though. In fact, it’s the only element of \hom(A,A) we know exists: the identity map. So for a natural transformation \eta, the associated element of F(A) we’ll produce is going to be \eta_A(\mbox{id}_A). I’ll write this function as \Theta, so \Theta(\eta)=\eta_A(\mbox{id}_A). We need to check that \Theta is an isomorphism.

First, let’s check that \Theta has an inverse. One thing we could do is check that if we knew \Theta(\eta), we could recover \eta. So let’s say we have some x\in F(A), and we want to find a natural transformation \eta such that \Theta(\eta)=x. What do we need to specify \eta?

A natural transformation \eta:h^A\to F is defined by all of the maps \eta_B:\hom(A,B)\to F(B) (we have one for each B in \mathcal C). So if I can tell you what each \eta_B is going to be, without making any choices, then I’ve uniquely defined a natural transformation from x\in F(A). Great, so how do I specify \eta_B? Well that’s just a map \eta_B:\hom(A,B)\to F(B), so I need to tell you what it does to each element in the domain (that is, each \phi\in\hom(A,B)).

So I’m going to define the transformation, and then explain why it works. Let B\in\mathcal{C}, and let \phi\in\hom(A,B)=h^A(B). Then \eta_B(\phi) should be an element of F(B). I define it to be:

\eta_B(\phi)=(F\phi)(x).

How do we check if this defines a natural transformation? We need to check the commuting stuff. So take a look at this diagram (for an arbitrary f:B\to C):

Starting with an element \phi\in\hom(A,B), and going down and then across is

(\eta_C\circ h^Af)(\phi)=\eta_C(h^Af(\phi))=\eta_C(f\circ \phi)=F(f\circ \phi)(x)=(Ff)(F\phi)(x)

Going across and then down is

(Ff\circ\eta_B)(\phi)=(Ff)(\eta_B(\phi))=(Ff)(F\phi)(x).

Great, so the diagram commutes. So \eta really is a natural transformation. We just need to check that it’s the inverse of the \Theta map. That is, we need to check that \Theta(\eta)=x. We also need to check the other direction, that if we start with some \eta, compute \Theta(\eta), and then do this construction, we get back the original \eta. I’m lazy, so I’ll only do the first one, though neither is difficult. You simply need to wade through the definitions.

Suppose we construct \eta as above from a given x\in F(A). We hope that \Theta(\eta)=x. Indeed, \Theta(\eta)=\eta_A(\mbox{id}_A), and from the definition of \eta, \eta_A(\mbox{id}_A)=(F\,\mbox{id}_A)(x)=\mbox{id}_{F(A)}(x)=x.
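The whole argument compresses nicely into code. This sketch (my own, not from the post) plays it out in Python with F taken to be the “list” functor, F(B) = lists with entries in B and F(f) = map f; `theta` is \Theta and `theta_inv` is the construction above.

```python
# The Yoneda correspondence for F = the "list" functor:
# F(B) = lists with entries in B, F(f) = map f over the list.

def F(f):
    """The list functor on morphisms: F(f) maps f over a list."""
    return lambda xs: [f(x) for x in xs]

def theta(eta):
    """Theta(eta) = eta_A(id_A): evaluate eta at the identity on A."""
    return eta(lambda a: a)

def theta_inv(x):
    """Given x in F(A), build the natural transformation
    eta with components eta_B(phi) = F(phi)(x)."""
    return lambda phi: F(phi)(x)

# Round trip: start with x in F(A), build eta, recover x via Theta.
x = [1, 2, 3]
eta = theta_inv(x)
assert theta(eta) == x

# Naturality check for a sample square: F(f) . eta_B == eta_C . (f . -).
phi = lambda a: a + 10          # phi in hom(A, B)
f = lambda b: b * 2             # f in hom(B, C)
assert F(f)(eta(phi)) == eta(lambda a: f(phi(a)))
```

Any other functor on sets would do in place of lists; the two definitions don't change shape.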



Yoneda’s Lemma (part 1)

So I think we’re ready, at least for the statement of Yoneda’s lemma. It says that for any locally small category \mathcal C, if A is an object in \mathcal C, and F:\mathcal C\to\textsc{Set} a functor, then

\mbox{Nat}(h^A,F)\cong F(A)

Moreover, the isomorphism is natural in both A and F.

Wow that looks complicated. Let’s parse some of the notation that I haven’t even explained yet. So certainly we know what the righthand-side means. It’s F applied to A. That’s just a set. As for the lefthand-side, \mbox{Nat} and h^A I haven’t explained.

So h^A is a functor we mentioned briefly, but I used a different notation. It’s the representable functor \hom(A,-). I write it as h^A here to avoid overusing parentheses and therefore complicating this business well beyond its current level of complication. As a reminder, \hom(A,-) is a functor from \mathcal C to \textsc{Set} (the same as F). It takes objects B in \mathcal C to the set of morphisms \hom(A,B). Remember we’re in a locally small category, so \hom(A,B) really is a set. It takes morphisms f:B\to C to a map h^A(f):\hom(A,B)\to\hom(A,C) by sending

h^A(f):\phi\mapsto f\circ\phi

We checked all the necessary details in an earlier post to make sure this really was a functor.
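As a sketch (mine, not from the post), here is that functor in Python on morphisms-as-functions, together with a spot check of the two functor laws.

```python
# h^A = hom(A,-) acting on morphisms: h^A(f) is "compose with f on the left".
# Functor laws: h^A(g . f) = h^A(g) . h^A(f), and h^A(id) = id.

def hA(f):
    """h^A on a morphism f: sends phi in hom(A,B) to f . phi in hom(A,C)."""
    return lambda phi: (lambda a: f(phi(a)))

phi = lambda a: a + 1      # a sample phi in hom(A, B)
f = lambda b: b * 2        # f : B -> C
g = lambda c: c - 3        # g : C -> D
gf = lambda b: g(f(b))     # the composite g . f

# h^A(g . f)(phi) and (h^A(g) . h^A(f))(phi) agree pointwise.
for a in range(10):
    assert hA(gf)(phi)(a) == hA(g)(hA(f)(phi))(a)

# h^A(id) acts as the identity on hom(A, B).
ident = lambda b: b
for a in range(10):
    assert hA(ident)(phi)(a) == phi(a)
```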

So the last thing is that \mbox{Nat}. It’s the collection of all natural transformations between the functors h^A and F. So Yoneda’s lemma claims that there is a one-to-one correspondence between the natural transformations from h^A to F and the set F(A).

In particular, it claims that \mbox{Nat}(h^A,F) is a set. This is because \textsc{Set} is a locally small category, and so each natural transformation (defined as a collection of morphisms, each of which is a set) is a set. But there are only so many collections of morphisms, not even all of which are natural transformations. The collection is small enough to be a set. If you don’t care about this set theory business, then disregard the paragraph you probably just read angrily.

It’s worth mentioning now, that Yoneda’s lemma is a generalization of some nice theorems. We can (and will) use it to derive Cayley’s theorem (every group embeds into a symmetric group). We can (and will) use it to derive the important fact that \hom_R(R,M)\cong M in the category of R-modules. I bet in that one you can already start to see the resemblance.

We’ll prove Yoneda’s lemma over the next few posts.

Examples of natural transformations (part 2)

Here’s the most common example of a natural transformation that I know of, and probably the most enlightening. Take a vector space V and let V^* denote its dual. If V is finite dimensional, then V\cong V^*, but not in any “natural” way. Moreover, even if V is infinite dimensional, V embeds in V^*, but again, not in a “natural” way. That is, we have to pick a basis to show how V embeds in V^*. If we pick a different basis, we get a different embedding. We don’t have any method that is inherent to the structure. And it’s not just that we haven’t found one. They don’t exist.

However, V does embed inside V^{**} naturally (i.e., in a way that requires no arbitrary choices). How are we to describe this? We say that there is a natural transformation between the functors \mbox{id}_{\textsc{Vec}}:\textsc{Vec}\to\textsc{Vec} and -^{**}:\textsc{Vec}\to\textsc{Vec}. I’m suppressing the field over which we’re working in the notation. Oh well. It’s not too important.

So here goes. Let \eta_V:V\to V^{**} by \eta_V(v)=\mbox{ev}_v. By \mbox{ev}_v what I mean is the map that takes in a functional \phi from V^* and evaluates it at v. That is, \mbox{ev}_v:\phi\mapsto \phi(v). Since \phi\in V^*, and \mbox{ev}_v is a map from V^* to the underlying field, \mbox{ev}_v is an element of V^{**}. Does this make the naturality square commute for an arbitrary linear map T:V\to W?

If we go across the top and then down, we take v\in V and send it to

(T^{**}\circ\eta_V)(v)=T^{**}(\mbox{ev}_v)=T^{**}(\phi\mapsto \phi(v))

=(\phi\mapsto (T^*\phi)(v))=(\phi\mapsto\phi(Tv))=\mbox{ev}_{Tv}.

If we go down and then across, we take v\in V and send it to

(\eta_W\circ T)(v)=\eta_W(Tv)=\mbox{ev}_{Tv}.

Woohoo! We just proved naturality. In fact, we did two things. First, we showed that we could embed V in V^{**}, and we didn’t have to make any choices about elements to do so. This embedding is canonical. Second, we showed that linear transformations between V and W correspond to linear transformations between V^{**} and W^{**} in a nice way (such that the diagram commutes).
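Here is a numeric spot check of this computation (my own sketch, with V = W = \mathbb{R}^2 and a sample T), representing functionals as plain Python functions.

```python
# Naturality of the double-dual embedding, checked numerically over R^2.
# Functionals and double-dual elements are plain Python functions.

def ev(v):
    """eta_V(v) = ev_v, an element of V**: it eats a functional phi
    and returns phi(v)."""
    return lambda phi: phi(v)

def T(v):
    """A sample linear map T : R^2 -> R^2."""
    return (2 * v[0] + v[1], 3 * v[1])

def T_star(psi):
    """T* : W* -> V*, T*(psi) = psi . T."""
    return lambda v: psi(T(v))

def T_star_star(xi):
    """T** : V** -> W**, T**(xi) = xi . T*."""
    return lambda psi: xi(T_star(psi))

# The square commutes: T**(ev_v) and ev_{Tv} agree on sample functionals.
v = (1, -2)
for psi in [lambda w: w[0], lambda w: w[1], lambda w: 4 * w[0] - w[1]]:
    assert T_star_star(ev(v))(psi) == ev(T(v))(psi)
```

Changing `T` to any other linear map leaves the assertion true, which is the point: no choice about T (or a basis) was needed.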

Examples of natural transformations (part 1)

I think natural transformations in particular need lots of examples, so here they are. Let’s start out with categories \mathcal C and \mathcal D, and a functor F:\mathcal C\to\mathcal D. Then the identity transformation (which I’ll write as \mbox{id}) is a natural transformation:

We need to check that we can define \mbox{id}_X for each X\in Ob(\mathcal C) such that

commutes. Note that by \mbox{id}_X , I don’t mean the identity morphism on X. It has to be a morphism from F(X) to F(X). Let’s take \mbox{id}_X=1_{F(X)}. By 1_{F(X)}, I mean the identity morphism on F(X). Sorry for the notational bullshit. I know it’s annoying. I think this is the best way to do it, even though it’s particularly awful.

Going across the top and then down gives us F(f)\circ\mbox{id}_X=F(f)\circ 1_{F(X)}=F(f). Going down and then across gives \mbox{id}_Y\circ F(f)=1_{F(Y)}\circ F(f)=F(f). The diagram commutes, so the transformation is natural.

In terms of what this represents to us as mathematicians, it says that we can change a functor into itself by not doing anything. Moreover, this change is “natural” in the sense that it commutes in the way you would expect. Okay, maybe you wouldn’t expect it, but a category theorist would.
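A tiny sketch of this check in Python (mine, with F taken to be the list functor on sets):

```python
# The identity natural transformation on F = the list functor:
# each component id_X is the identity morphism 1_{F(X)} on F(X).

def F(f):
    """The list functor on morphisms: map f over a list."""
    return lambda xs: [f(x) for x in xs]

def id_component(fx):
    """1_{F(X)}: the identity morphism on F(X)."""
    return fx

# The square commutes: F(f) . id_X == id_Y . F(f).
f = lambda x: x + 1
xs = [1, 2, 3]
assert F(f)(id_component(xs)) == id_component(F(f)(xs))
```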

Naturality (part 2)

Now that we’ve seen the definition, I’d like to give some motivation for what it means to be a natural transformation. In some sense, it’s a map between functors. Functors are, in some sense, maps between morphisms, so this is one level more abstract, but that doesn’t seem to get the point across. I’m going to take the following example almost directly from Steve Awodey’s book Category Theory. I think this is a great text for learning basic category theory.

In a category \mathcal C with products, and objects A, B, and C, the objects A\times(B\times C) and (A\times B)\times C are technically not the same object, but there’s an obvious isomorphism between them. But this is more than just your standard isomorphism. It’s not like we construct the isomorphism one way if we use a specific object A, and a different way for another object A'. It doesn’t really depend on the objects A, B, or C at all. It depends on the structure of the functor \times. How can we encode that?

We encode that by saying that the maps between objects A and A' can be “puffed up” to maps between (A\times B)\times C and (A'\times B)\times C, and to maps between A\times (B\times C) and A'\times (B\times C). Moreover, these puffed up maps are somehow not very different, in the sense that the way we “transform” from (A\times B)\times C to A\times (B\times C) is “the same” on the domain and codomain ends of the “puffed up” maps. Take some time to parse my attempt at writing a coherent sentence.

Take a look at the diagram again and see if you can understand what I was trying to get across. Here, F is the functor (-\times B)\times C and G is the functor -\times(B\times C). Can you write down what \eta_X should be? Does the diagram commute?
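In case you want to check your answer: here is one way to play the exercise out in \textsc{Set}, in Python, with products as pairs (a sketch of mine, not from the book).

```python
# F = (- x B) x C and G = - x (B x C) as functors on sets (products as pairs);
# eta_X reassociates. Naturality: G(f) . eta_X == eta_Y . F(f).

def F(f):
    """F on morphisms: apply f in the first slot of ((a, b), c)."""
    return lambda p: ((f(p[0][0]), p[0][1]), p[1])

def G(f):
    """G on morphisms: apply f in the first slot of (a, (b, c))."""
    return lambda p: (f(p[0]), p[1])

def eta(p):
    """eta_X : (A x B) x C -> A x (B x C), the reassociation map."""
    ((a, b), c) = p
    return (a, (b, c))

# Check the square for a sample morphism f : A -> A' (here, a map of sets).
f = lambda a: a * 10
p = ((1, 2), 3)
assert G(f)(eta(p)) == eta(F(f)(p))
```

Note that `eta` never looks at what A, B, or C are: it only shuffles the pairing, which is exactly the "doesn't depend on the objects" point above.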