Linear Transformations

As I mentioned before, if I have any algebraic object, some interesting things to think about are

  • The object itself
  • Subobjects of the object
  • Structure preserving maps between objects (called homomorphisms)
  • Quotient objects
If our objects are vector spaces, then we've pretty much exhausted what we can say about them in any generality. The "subobjects" are subspaces, and those too don't seem to give us much to study. So let's consider the structure preserving maps between vector spaces. In the context of vector spaces, they are called linear transformations (somehow this is a more informative name than vector space homomorphisms).

A linear transformation between two vector spaces V and W (over a field k) is a function T:V\to W such that

  • For all u,v\in V, T(u+v)=T(u)+T(v)
  • For all v\in V, and c\in k, T(c\cdot v)=c\cdot T(v).
These two conditions imply that T(0_V)=0_W, where 0_V is the special zero element (or additive identity) in V and 0_W is the special zero element in W.
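To see why, note that the second condition with the scalar c=0 does all the work:

\[
T(0_V) = T(0 \cdot 0_V) = 0 \cdot T(0_V) = 0_W.
\]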
It is common to drop the parentheses and just write Tv when we mean T(v). This may seem silly now, but in a few days you'll understand why.
Let’s see some examples:
  • T:V\to W by Tv=0_W. This is called the zero (or trivial) transformation.
  • T:V\to V by Tv=v. This is the identity transformation. We write it as \mbox{id}_V.
  • More concretely, T:\mathbb R^3\to\mathbb R^2 by T(x,y,z)=(x+2y-z, z-2.5y)
  • T:\mathbb R^3\to \mathbb R^3 by T(x,y,z)=(x,y,0). This is an example of a projection. The picture on the Wikipedia page is a good one.
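As a quick sanity check, here's a small sketch (using NumPy; the names are my own) of the projection example, verifying the two linearity conditions on a couple of sample vectors:

```python
import numpy as np

def T(v):
    """Project R^3 onto the xy-plane: (x, y, z) -> (x, y, 0)."""
    x, y, z = v
    return np.array([x, y, 0.0])

u = np.array([1.0, 2.0, 3.0])
v = np.array([-4.0, 0.5, 2.0])
c = 7.0

# The two conditions from the definition:
print(np.allclose(T(u + v), T(u) + T(v)))  # additivity
print(np.allclose(T(c * u), c * T(u)))     # homogeneity
```

Of course, checking a few vectors isn't a proof, but it's a nice way to convince yourself nothing is off.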
The magic of bases:

Just a reminder, I only want to think about finite dimensional vector spaces. While much of this works for infinite dimensional spaces, treatment of such spaces requires care for which I don’t want to put forth the effort. I will try to always specify, but you should just assume I mean finite dimensional spaces if I forget. I will be sure to mention specifically any time I want to talk about infinite dimensional spaces.

So let V be a finite dimensional vector space over k, and let \{v_1,v_2,\dots, v_n\} be a basis for V. Then any vector v\in V can be written as v=c_1v_1+c_2v_2+\cdots+c_nv_n where each c_i\in k. If T:V\to W is a linear transformation, then

Tv=T\left(\displaystyle\sum_{i=1}^nc_iv_i\right)=\displaystyle\sum_{i=1}^nT(c_iv_i)=\displaystyle\sum_{i=1}^nc_i\cdot Tv_i.

Though this is just a simple application of the rules for a linear transformation, it tells us something interesting. Namely, if I know what T does to an entire basis, then I know what T does to every vector in V! Tomorrow I’ll talk about notation for this. Spoilers can be found here.
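Here's a sketch of that idea in code (again with NumPy; the setup is my own illustration): we record only the values of T on the standard basis of \mathbb R^3, and that alone lets us apply T to any vector. The basis images below are chosen to match the earlier example T(x,y,z)=(x+2y-z,\ z-2.5y).

```python
import numpy as np

# All we store is what T does to the standard basis e1, e2, e3:
T_on_basis = [
    np.array([1.0, 0.0]),   # T(e1)
    np.array([2.0, -2.5]),  # T(e2)
    np.array([-1.0, 1.0]),  # T(e3)
]

def T(v):
    """Apply T to any v by expanding v = sum c_i e_i and using linearity."""
    return sum(c * T_on_basis[i] for i, c in enumerate(v))

# Agrees with T(x, y, z) = (x + 2y - z, z - 2.5y):
print(T(np.array([1.0, 1.0, 1.0])))  # (1 + 2 - 1, 1 - 2.5) = (2, -1.5)
```

Three stored vectors determine T on all of \mathbb R^3; that's the magic of bases.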


