"Eigenvectors and eigenvalues" is one of those topics that a lot of students find particularly unintuitive. Questions like "why are we doing this?" and "what does this actually mean?" are too often left floating in an unanswered sea of computations. And as I've put out the videos of this series, a lot of you have commented about looking forward to visualizing this topic in particular. I suspect that the reason for this is not so much that eigen-things are particularly complicated or poorly explained. In fact, it's comparatively straightforward, and I think most books do a fine job explaining it. The issue is that it only really makes sense if you have a solid visual understanding of the many topics that precede it. Most important here is that you know how to think about matrices as linear transformations, but you also need to be comfortable with things like determinants, linear systems of equations, and change of basis. Confusion about eigen-stuffs usually has more to do with a shaky foundation in one of those topics than it does with eigenvectors and eigenvalues themselves. To start, consider some linear transformation in two dimensions.

Like the one shown here. It moves the basis vector i-hat to the coordinates (3, 0) and j-hat to (1, 2), so it's represented by a matrix whose columns are (3, 0) and (1, 2). Focus in on what it does to one particular vector, and think about the span of that vector: the line passing through its origin and its tip. Most vectors are going to get knocked off their span during the transformation. I mean, it would seem like quite a coincidence if the place where a vector landed also happened to be somewhere on that line. But some special vectors do remain on their own span, meaning the effect that the matrix has on such a vector is just to stretch it or squish it, like a scalar. For this specific example, the basis vector i-hat is one such special vector. The span of i-hat is the x-axis, and from the first column of the matrix we can see that i-hat moves over to 3 times itself, still on that x-axis.

What's more, because of the way linear transformations work, any other vector on the x-axis is also just stretched by a factor of 3, and hence remains on its own span. A slightly sneakier vector that remains on its own span during this transformation is (-1, 1); it ends up getting stretched by a factor of 2. And again, linearity implies that any other vector on the diagonal line spanned by this guy is also just going to get stretched by a factor of 2. And for this transformation, those are all the vectors with the special property of staying on their span: those on the x-axis, stretched by a factor of 3, and those on this diagonal line, stretched by a factor of 2. Any other vector is going to get rotated somewhat during the transformation, knocked off the line that it spans. As you might have guessed by now, these special vectors are called "eigenvectors" of the transformation, and each eigenvector has associated with it a so-called "eigenvalue", which is just the factor by which it gets stretched or squished during the transformation.
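If you want to check these claims numerically, here is a small sketch using numpy (my own addition, not part of the original lesson), with the example matrix whose columns are (3, 0) and (1, 2):

```python
import numpy as np

# The transformation from the example: columns (3, 0) and (1, 2)
A = np.array([[3, 1],
              [0, 2]])

# A vector on the x-axis stays on the x-axis, scaled by 3
v = np.array([2, 0])
print(A @ v)          # [6 0]

# The vector (-1, 1) stays on its own span, scaled by 2
w = np.array([-1, 1])
print(A @ w)          # [-2 2]

# numpy's eigensolver recovers exactly those two scaling factors
eigenvalues, eigenvectors = np.linalg.eig(A)
print(sorted(eigenvalues))
```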

Of course, there's nothing special about stretching versus squishing, or the fact that these eigenvalues happened to be positive. In another example, you could have an eigenvector with eigenvalue -1/2, meaning the vector gets flipped around and squished by a factor of 1/2. But the important part is that it stays on the line that it spans, without getting rotated off of it. For a glimpse of why this might be a useful thing to think about, consider some three-dimensional rotation. If you can find an eigenvector of that rotation, a vector that remains on its own span, what you've found is the axis of rotation. And it's much easier to think about a 3D rotation in terms of some axis of rotation and an angle of rotation, rather than thinking about the full 3-by-3 matrix associated with that transformation. In this case, by the way, the corresponding eigenvalue would have to be 1, since rotations never stretch or squish anything, so the length of the vector stays the same. This pattern shows up a lot in linear algebra. With any linear transformation described by a matrix, you can understand what it's doing by reading off the columns of the matrix as the landing spots of the basis vectors. But often a better way to get at the heart of what the transformation actually does, one less dependent on your particular coordinate system, is to find the eigenvectors and eigenvalues.
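The rotation-axis idea can also be checked numerically. As a sketch (again my own, with a 90-degree rotation about the z-axis chosen as the example), find the eigenvalue closest to 1 and read off its eigenvector:

```python
import numpy as np

# Rotation by 90 degrees about the z-axis
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

vals, vecs = np.linalg.eig(R)

# The eigenvector whose eigenvalue is 1 is the axis of rotation.
# (The solver may return it with either sign.)
axis = vecs[:, np.argmin(np.abs(vals - 1))].real
print(axis)   # +/- (0, 0, 1), the z-axis
```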

I won't cover the full details of methods for computing eigenvectors and eigenvalues here, but I'll try to give an overview of the computational ideas that are most important for a conceptual understanding. Symbolically, here's what the idea of an eigenvector looks like. A is the matrix representing some transformation, v is the eigenvector, and λ is a number, namely the corresponding eigenvalue. What this expression says is that the matrix-vector product, A times v, gives the same result as just scaling the eigenvector v by some value λ. So finding the eigenvectors and eigenvalues of a matrix A comes down to finding the values of v and λ that make this expression true. It's a little awkward to work with at first, because the left-hand side is matrix-vector multiplication, but the right-hand side is scalar-vector multiplication. So let's start by rewriting that right-hand side as a kind of matrix-vector multiplication, using a matrix which has the effect of scaling any vector by a factor of λ.

The columns of such a matrix represent what happens to each basis vector, and each basis vector is simply multiplied by λ, so this matrix has the number λ down the diagonal, with zeros everywhere else. The common way to write this guy is to factor out the λ and write it as λ times I, where I is the identity matrix with 1's down the diagonal. With both sides looking like matrix-vector multiplication, we can subtract off the right-hand side and factor out the v. So what we now have is a new matrix, A minus λ times the identity, and we're looking for a vector v such that this new matrix times v gives the zero vector. Now, this will always be true if v itself is the zero vector, but that's boring.
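Written out symbolically, the rearrangement just described is:

```latex
A\vec{v} = \lambda\vec{v}
\quad\Longrightarrow\quad
A\vec{v} = (\lambda I)\vec{v}
\quad\Longrightarrow\quad
(A - \lambda I)\vec{v} = \vec{0}
```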

What we want is a nonzero eigenvector. And if you watched chapters 5 and 6, you'll know that the only way it's possible for the product of a matrix with a nonzero vector to be zero is if the transformation associated with that matrix squishes space into a lower dimension. And that squishification corresponds to a zero determinant of the matrix. To be concrete, let's say the matrix has columns (2, 1) and (2, 3), and think about subtracting off a variable amount, λ, from each diagonal entry. Now imagine tweaking λ, turning a knob to change its value. As the value of λ changes, the matrix itself changes, and so the determinant of the matrix changes. The goal here is to find a value of λ that makes this determinant zero, meaning the tweaked transformation squishes space into a lower dimension. In this case, the sweet spot comes when λ equals 1. Of course, if we had chosen some other matrix, the eigenvalue might not necessarily be 1; the sweet spot might be hit at some other value of λ.
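The knob-turning picture is easy to reproduce in code. A sketch (my own) of sweeping λ and watching the determinant of A minus λ times the identity, for the matrix with columns (2, 1) and (2, 3):

```python
import numpy as np

# Matrix with columns (2, 1) and (2, 3), as in the example
A = np.array([[2, 2],
              [1, 3]])

def det_shifted(lam):
    """Determinant of A - lam * I."""
    return np.linalg.det(A - lam * np.eye(2))

# Turning the knob: the determinant changes as lambda changes...
for lam in [0.0, 0.5, 1.0, 1.5]:
    print(lam, round(det_shifted(lam), 3))

# ...and hits zero at the sweet spot, lambda = 1
print(abs(det_shifted(1.0)) < 1e-9)   # True
```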

So that's kind of a lot, but let's unravel what this is saying. When λ equals 1, the matrix A minus λ times the identity squishes space onto a line. That means there is a nonzero vector v such that A minus λ times the identity, times v, equals the zero vector. And remember, the reason we care about that is because it means A times v equals λ times v, which you can read as saying that the vector v is an eigenvector of A, staying on its own span during the transformation A.

In this example, the corresponding eigenvalue is 1, so v would actually just stay fixed in place. Pause and ponder if you need to, to make sure that that line of reasoning feels good. This is the kind of thing I mentioned in the introduction: if you don't have a solid grasp of determinants, and why they relate to linear systems of equations having nonzero solutions, an expression like this would feel completely out of the blue. To see this in action, let's revisit the example from the start, with the matrix whose columns are (3, 0) and (1, 2). To find out whether a value λ is an eigenvalue, subtract it from the diagonal of this matrix and compute the determinant. Doing this, we get a certain quadratic polynomial in λ: (3-λ)(2-λ). Since λ can only be an eigenvalue if this determinant happens to be zero, you can conclude that the only possible eigenvalues are λ equals 2 and λ equals 3.
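This determinant computation can be done symbolically, for instance with sympy (my choice of tool, not something from the lesson):

```python
import sympy as sp

lam = sp.symbols('lambda')

# The matrix from the start: columns (3, 0) and (1, 2)
A = sp.Matrix([[3, 1],
               [0, 2]])

# Subtract lambda from the diagonal and take the determinant
p = (A - lam * sp.eye(2)).det()
print(sp.factor(p))      # factors as (3 - lambda)(2 - lambda), up to ordering
print(sp.solve(p, lam))  # the only possible eigenvalues: 2 and 3
```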

To figure out what the eigenvectors with one of these eigenvalues actually look like, say λ equals 2, plug that value of λ into the matrix, and then solve for which vectors this diagonally altered matrix sends to zero. If you compute this the way you would any other linear system, you'll see that the solutions are all the vectors on the diagonal line spanned by (-1, 1). This corresponds to the fact that the unaltered matrix, ((3, 0), (1, 2)), has the effect of stretching all those vectors by a factor of 2. Now, a 2D transformation doesn't have to have eigenvectors. For example, consider a rotation by 90 degrees. This one doesn't have any eigenvectors, since it rotates every vector off of its own span. If you actually try computing the eigenvalues of a rotation like this, notice what happens.
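Continuing the sympy sketch from above, solving that linear system is a null-space computation:

```python
import sympy as sp

A = sp.Matrix([[3, 1],
               [0, 2]])

# Plug lambda = 2 into the diagonal and solve (A - 2I) v = 0
M = A - 2 * sp.eye(2)
basis = M.nullspace()
print(basis)   # a single line of solutions, spanned by (-1, 1)
```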

That matrix has columns (0, 1) and (-1, 0). Subtract λ from the diagonal elements and look for when the determinant is zero. In this case, you get the polynomial λ² + 1, and the only roots of that polynomial are the imaginary numbers i and -i. The fact that there are no real-number solutions indicates that there are no eigenvectors. Another pretty interesting example worth holding in the back of your mind is a shear.
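A numerical eigensolver makes the same point: for the 90-degree rotation it returns purely imaginary eigenvalues. A quick sketch (my own):

```python
import numpy as np

# 90-degree rotation: columns (0, 1) and (-1, 0)
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

vals = np.linalg.eig(R)[0]
print(vals)   # both eigenvalues are purely imaginary: i and -i
```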

This fixes i-hat in place and moves j-hat one over, so the matrix has columns (1, 0) and (1, 1). All the vectors on the x-axis are eigenvectors with eigenvalue 1, since they remain fixed in place. And in fact, these are the only eigenvectors. When you subtract λ from the diagonals and compute the determinant, what you get is (1-λ)², and the only root of this expression is λ equals 1. This lines up with what we see geometrically: all of the eigenvectors have eigenvalue 1. Keep in mind, though, it's also possible to have just one eigenvalue, but with more than just a single line of eigenvectors. A simple example is a matrix that scales everything by 2. The only eigenvalue is 2, but every vector in the plane is an eigenvector with that eigenvalue. Now is another good time to pause and ponder some of this before I move on to the last topic. I want to finish off here with the idea of an eigenbasis, which relies heavily on ideas from the last video. Take a look at what happens if our basis vectors just so happen to be eigenvectors. For example, maybe i-hat is scaled by -1 and j-hat is scaled by 2.
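The shear claims check out symbolically as well. A sketch (again using sympy, my choice) of the repeated root and the single line of eigenvectors:

```python
import sympy as sp

lam = sp.symbols('lambda')

# Shear: i-hat fixed, j-hat moved one over; columns (1, 0) and (1, 1)
S = sp.Matrix([[1, 1],
               [0, 1]])

p = (S - lam * sp.eye(2)).det()
print(sp.factor(p))                  # (lambda - 1)**2, so 1 is the only eigenvalue
print((S - sp.eye(2)).nullspace())   # just one line of eigenvectors: span of (1, 0)
```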

Write their new coordinates as the columns of a matrix, and notice that those scalar multiples, -1 and 2, which are the eigenvalues of i-hat and j-hat, sit on the diagonal of our matrix, and every other entry is 0. Any time a matrix has zeros everywhere other than the diagonal, it's called, reasonably enough, a diagonal matrix. And the way to interpret this is that all the basis vectors are eigenvectors,

with the diagonal entries of the matrix being their eigenvalues. There are a lot of things that make diagonal matrices much nicer to work with. One big one is that it's easy to compute what happens if you multiply such a matrix by itself a whole bunch of times. Since all one of these matrices does is scale each basis vector by some eigenvalue, applying the matrix many times, say 100 times, just corresponds to scaling each basis vector by the 100th power of the corresponding eigenvalue. In contrast, try computing the 100th power of a non-diagonal matrix. Really, try it for a moment; it's a nightmare. Of course, you'll rarely be so lucky as to have your basis vectors also be eigenvectors. But if your transformation has a lot of eigenvectors, like the one from the start of this video, enough that you can choose a set that spans the full space, then you can change your coordinate system so that those eigenvectors are your basis vectors.
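Here's a small sketch (my own) of that diagonal-power shortcut, using the example where i-hat is scaled by -1 and j-hat by 2:

```python
import numpy as np

# Diagonal matrix: i-hat scaled by -1, j-hat scaled by 2
D = np.array([[-1.0, 0.0],
              [ 0.0, 2.0]])

# Raising a diagonal matrix to the 100th power just raises
# each diagonal entry to the 100th power
D100 = np.linalg.matrix_power(D, 100)
print(D100[0, 0] == (-1.0) ** 100)   # True
print(D100[1, 1] == 2.0 ** 100)      # True
```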

I talked about change of basis last video, but I'll go through a super quick reminder here of how to take a transformation currently written in our coordinate system and express it in a different system. Take the coordinates of the vectors that you want to use as your new basis, which in this case means our two eigenvectors, and make those coordinates the columns of a matrix, known as the change-of-basis matrix.

When you sandwich the original transformation, putting the change-of-basis matrix on its right and the inverse of the change-of-basis matrix on its left, the result is a matrix representing that same transformation, but from the perspective of the new basis vectors' coordinate system. The whole point of doing this with eigenvectors is that this new matrix is guaranteed to be diagonal, with the corresponding eigenvalues down that diagonal. This is because it represents working in a coordinate system where what happens to the basis vectors is that they simply get scaled during the transformation. A set of basis vectors which are also eigenvectors is called, again quite reasonably, an "eigenbasis". So if, for example, you needed to compute the 100th power of this matrix, it would be much easier to change to an eigenbasis, compute that power in that system, then convert back to our standard system.
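Putting the sandwich together for the matrix from the start of the video, here is a sketch (my own) of the whole trick, using its eigenvectors (1, 0) and (-1, 1) as the new basis:

```python
import numpy as np

# The transformation from the start of the video
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Change-of-basis matrix: the eigenvectors (1, 0) and (-1, 1) as columns
C = np.array([[1.0, -1.0],
              [0.0,  1.0]])

# The sandwich C^(-1) A C is guaranteed to be diagonal,
# with the eigenvalues 3 and 2 down the diagonal
D = np.linalg.inv(C) @ A @ C
print(D)

# 100th power the easy way: power the diagonal form, then convert back
A100 = C @ np.linalg.matrix_power(D, 100) @ np.linalg.inv(C)
```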

You can't do this with all transformations. A shear, for example, doesn't have enough eigenvectors to span the full space. But if you can find an eigenbasis, it makes matrix operations really lovely. For those of you willing to work through a pretty neat puzzle to see what this looks like in action, and how it can be used to produce some surprising results, I'll leave up a prompt here on the screen. It takes a bit of work, but I think you'll enjoy it. The next and final video of this series is going to be on abstract vector spaces. See you then!