# Linear Independence

September 22, 2011

Suppose we’re talking about $V$ as a vector space over the field $F$. If I told you that some vector $v = 2u$ was in a subspace $S$, you would know immediately that $u$ was too. This is precisely the rule about scalar multiples of vectors being in a vector space. That is, $u = \frac{1}{2}v \in S$. What’s so awesome about vector spaces is that this works both ways. Since $v = 2u$, if I told you $u \in S$, you would immediately know that $v \in S$ as well. This is some sort of “dependence” relationship I want to make more precise.

But as with spans, it gets more complicated when we let more vectors into the mix. Suppose I have a subspace $S$ but I’m not telling you what it is. Now consider the three vectors $u$, $v$, and $u + v$. Certainly if I tell you any two of these are in $S$, you would know that the third one was. But if I told you any one was in $S$, you couldn’t be sure about the other two. We would say that these three vectors are *linearly dependent*, but that any two of them are *linearly independent*.

So what property do these vectors have that makes them depend on each other? It’s that linear combinations of some of them sum up to the last one. When we say it that way, it’s clear that it doesn’t matter what the last one is. So, if we move “the last one” to the other side of the equation, we get a linear combination of the vectors that sums to zero. This is the property we’re after.
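To make this concrete, take a hypothetical dependent triple, say $u$, $v$, and $w = u + v$. Moving “the last one” to the other side turns the dependence into a combination summing to zero:

```latex
w = u + v
\quad\Longrightarrow\quad
1 \cdot u + 1 \cdot v + (-1) \cdot w = 0
```

a linear combination with not-all-zero coefficients that sums to zero, no matter which of the three we treated as “the last one.”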

##### Definition:

Let’s be more precise, and more general, about this. Let $V$ be a vector space over a field $F$. For $v_1, v_2, \dots, v_n \in V$, we say that they are **linearly independent** if whenever we can find scalars $a_1, a_2, \dots, a_n \in F$ such that

$$a_1 v_1 + a_2 v_2 + \cdots + a_n v_n = 0,$$

then $a_i = 0$ for each $i$. Otherwise, we call them **linearly dependent**.
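Not from the post, but a numerical sketch of this definition over $\mathbb{R}^n$: a list of vectors is linearly independent exactly when the matrix they form has rank equal to the number of vectors. The function name `linearly_independent` and the use of NumPy's rank computation are my own choices for illustration.

```python
import numpy as np

def linearly_independent(vectors, tol=1e-10):
    """Return True if the given vectors (as rows) are linearly independent.

    The vectors are independent exactly when no nontrivial combination
    sums to zero, i.e. when the matrix they form has full row rank.
    """
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A, tol=tol) == len(vectors)

# u and v are independent, but u, v, and u + v are dependent:
u, v = [1.0, 0.0], [0.0, 1.0]
print(linearly_independent([u, v]))              # prints True
print(linearly_independent([u, v, [1.0, 1.0]]))  # prints False
```

Note this is a floating-point check with a tolerance, not an exact one over an arbitrary field $F$; over $\mathbb{Q}$ or a finite field you would use exact row reduction instead.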

We wanted to be lazy about writing down vector spaces. Well, if we have a linearly dependent set, we weren’t lazy enough, because we could have dropped a vector that depends on the rest.

All of this is leading up to an answer to the question “how should the lazy mathematician write down a vector space?” Spans were nice, but how do we know if we’ve included too much? Linearly independent sets are nice, but how do we know if we’ve got everything? See where I’m going with this? Stay tuned for the stunning ~~conclusion~~ next post on vector spaces.


