Linear Independence

Suppose we’re talking about \mathbb R^3 as a vector space over the field \mathbb R. If I told you that some vector u=(a,b,c) was in a subspace S, you would know immediately that v=(2a,2b,2c) was too. This is precisely the rule about scalar multiples of vectors being in a vector space. That is, v=2u. What’s so awesome about vector spaces is that this works both ways. Since u=\frac12v, if I told you v\in S, you would immediately know that u\in S as well. This is some sort of “dependence” relationship I want to make more precise.

But as with spans, it gets more complicated when we let more vectors into the mix. Suppose I have a subspace W, but I’m not telling you what it is. Now consider the three vectors (1,0,0), (0,1,0), and (1,1,0). Certainly if I tell you any two of these are in W, you would know that the third one was. But if I told you only one was in W, you couldn’t be sure about the other two. We would say that these three vectors are linearly dependent, but that any two of them are linearly independent.
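To see why knowing two forces the third, note that each of the three is a linear combination of the other two:

(1,1,0)=(1,0,0)+(0,1,0),\quad (1,0,0)=(1,1,0)-(0,1,0),\quad (0,1,0)=(1,1,0)-(1,0,0).

So if any two are in W, closure under addition and scalar multiplication puts the third in W as well.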

So what property do these vectors have that makes them depend on each other? It’s that a linear combination of some of them sums to the last one. When we say it that way, it’s clear that it doesn’t matter which one plays the role of “the last one.” So, if we move “the last one” to the other side of the equation, we get a linear combination of the vectors that sums to zero. This is the property we’re after.
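Concretely, in the example above, moving “the last one” across the equals sign turns (1,1,0)=(1,0,0)+(0,1,0) into

1\cdot(1,0,0)+1\cdot(0,1,0)+(-1)\cdot(1,1,0)=(0,0,0),

a linear combination summing to zero in which not all of the coefficients are zero.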

Definition:

Let’s make this more precise, and more general. Let V be a vector space over a field k. We say that v_1,v_2,\dots,v_n\in V are linearly independent if whenever we can find scalars \lambda_1,\lambda_2,\dots,\lambda_n\in k such that

\lambda_1\cdot v_1+\lambda_2\cdot v_2+\cdots+\lambda_n\cdot v_n=0,

then \lambda_i=0 for each i\in\{1,2,\dots,n\}. Otherwise, we call them linearly dependent.
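If you like checking these things numerically, here’s a minimal sketch, assuming numpy is available. Over \mathbb R, vectors v_1,\dots,v_n are linearly independent exactly when the matrix having them as rows has rank n. (A floating-point sanity check, not a proof.)

import numpy as np

# The three vectors from the example above, stacked as rows.
V = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0]])

# Linearly independent iff the rank equals the number of vectors.
print(np.linalg.matrix_rank(V))      # 2, so all three together are dependent
print(np.linalg.matrix_rank(V[:2]))  # 2, so the first two are independent

# Note the ranks agree: dropping the dependent vector (1,1,0)
# doesn't shrink the span at all.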

We wanted to be lazy about writing down vector spaces. Well, if we have a linearly dependent set, we weren’t lazy enough: we could have dropped a vector that depends on the rest without changing the span.

All of this is leading up to an answer to the question “how should the lazy mathematician write down a vector space?” Spans were nice, but how do we know we haven’t included too much? Linearly independent sets are nice, but how do we know we’ve got everything? See where I’m going with this? Stay tuned for the stunning conclusion in the next post on vector spaces.
