Unless there is something special going on, it's always the largest eigenvalue.
The eigenvectors form a basis, so any vector [B]x[/B] can be written as [B]x[/B] = a*[B]e[sub]1[/sub][/B] + b*[B]e[sub]2[/sub][/B], from which we see that [B]A[/B][sup]n[/sup][B]x[/B] = a*lambda[sub]1[/sub][sup]n[/sup]*[B]e[sub]1[/sub][/B] + b*lambda[sub]2[/sub][sup]n[/sup]*[B]e[sub]2[/sub][/B]. So the larger eigenvalue will always dominate this process unless something special forces its coefficient to zero.

The actual iteration adds a constant vector at every stage: [B]x[/B][sub]n+1[/sub] = [B]Ax[/B][sub]n[/sub] + [B]b[/B]. So even if [B]x[/B][sub]0[/sub] is aligned with the smaller eigenvector, [B]x[/B][sub]1[/sub] will have a component along the larger eigenvector unless [B]b[/B] is also an eigenvector for the smaller eigenvalue. (This problem actually picks up that component in [B]x[/B][sub]1[/sub] because [B]x[/B][sub]0[/sub] is the zero vector.)

You can concoct special scenarios with lambda = -1 so that the multiplication and addition cancel each other, but that's about all that can get in the way of the largest eigenvalue dominating the process.
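Here's a small numerical sketch of the point above: iterating [B]x[/B][sub]n+1[/sub] = [B]Ax[/B][sub]n[/sub] + [B]b[/B] from the zero vector, the iterates line up with the dominant eigenvector. The particular matrix and [B]b[/B] are my own picks for illustration, not from the original problem.

```python
import numpy as np

# Example matrix with a dominant eigenvalue > 1 (an assumption for
# illustration; this is the Fibonacci companion matrix, eigenvalues
# (1 +/- sqrt(5))/2).
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([1.0, 0.0])  # arbitrary constant vector added each step

# Dominant eigenvector of A, for comparison.
eigvals, eigvecs = np.linalg.eig(A)
v = eigvecs[:, np.argmax(np.abs(eigvals))]

# Iterate x_{n+1} = A x_n + b starting from x_0 = 0, as in the post.
x = np.zeros(2)
for _ in range(40):
    x = A @ x + b

# Cosine of the angle between x_n and the dominant eigenvector:
# it ends up essentially 1, i.e. the largest eigenvalue dominates.
cosine = abs(x @ v) / (np.linalg.norm(x) * np.linalg.norm(v))
print(cosine)
```

Even though x[sub]0[/sub] has no component along either eigenvector (it's zero), the very first step x[sub]1[/sub] = b injects one, and the larger eigenvalue amplifies it fastest from then on.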