[Matrix calculation] Lanczos method: Finding sparse matrix eigenvalues

Updated: 2016 JUL
It is known from the QR method that, to compute the eigenvalues of a matrix $A$, in most cases $A$ must first be tridiagonalized (see Xu Xiaofang's textbook for the detailed method; an external link example was given here), i.e.
$$ T = Q^T A Q $$
That is, we look for an orthogonal matrix $Q$ that makes $T$ tridiagonal. However, if $A$ is a large sparse matrix, common methods such as the Householder and Givens transforms cannot make full use of the sparsity of $A$, so we consider computing the elements of $T$ and $Q$ directly, exploiting the sparsity of $A$ to speed up the computation.
I. The basic principle of the Lanczos method
Write the $Q$ in the decomposition above as
$$ Q=[q_1,q_2,\cdots,q_n] $$
where $q_i$ is the $i$-th column vector of $Q$, and write $T$ as
$$ T=\begin{bmatrix} \alpha_1 & \beta_1 & & & 0 \\ \beta_1 & \alpha_2 & \ddots & & \\ & \ddots & \ddots & \ddots & \\ & & \ddots & \alpha_{n-1} & \beta_{n-1} \\ 0 & & & \beta_{n-1} & \alpha_n \end{bmatrix} $$
Comparing the columns on both sides of
$$ AQ = QT $$
gives
$$ Aq_i = \beta_{i-1}q_{i-1} + \alpha_i q_i + \beta_i q_{i+1}, \quad i=1,2,\cdots,n $$
Since $\beta_0, \beta_n, q_0, q_{n+1}$ are not defined, we supplement the definitions $\beta_0 q_0 = \beta_n q_{n+1} = 0$. Multiplying the formula above by $q_i^T$ and $q_{i+1}^T$ on the left and using the orthonormality of the $q_i$ gives
$$ \alpha_i = q_i^T A q_i, \qquad \beta_i = q_{i+1}^T A q_i = \| Aq_i - \beta_{i-1}q_{i-1} - \alpha_i q_i \|_2 $$
It follows that, given any $q_1 \in \mathbb{R}^n$ with $\|q_1\|_2 = 1$, all the $q_i, \alpha_i, \beta_i$ can be obtained by recursion. The iteration format is
$ \alpha_1 = q_1^T A q_1, $ // initial value
$ r_i = Aq_i - \alpha_i q_i - \beta_{i-1} q_{i-1}, $
$ \beta_i = \| r_i \|_2, $ // new values generated from the previous round's $\alpha$, $q$
$ q_{i+1} = r_i/\beta_i \ (\beta_i \neq 0), $
$ \alpha_{i+1} = q_{i+1}^T A q_{i+1}, \quad i=1,2,\cdots,n-1 $
This is the Lanczos iteration, and the $q_i$ are called Lanczos vectors. The matrix $T_j$ produced at step $j$ is called the $j$-th order Lanczos matrix, and its eigenvalues may be good approximations to some of the eigenvalues of $A$. For more information, refer to Krylov subspaces.
Note that if at some step $\beta_j = 0$, the eigenvalues of the $T_j$ obtained at that point are exact eigenvalues of $A$ (an invariant subspace has been found).
II. The specific algorithm
Lanczos algorithm for the eigenvalues of a large sparse matrix $A$ (tridiagonalization):
1. Input $A \in S\mathbb{R}^{n\times n}$ (real symmetric), $q_1 \in \mathbb{R}^n$ with $\|q_1\|_2 = 1$
2. $u_1 := Aq_1,\ j := 1$
3. $\alpha_j := q_j^T u_j,\ r_j := u_j - \alpha_j q_j,\ \beta_j := \| r_j \|_2$
4. If $\beta_j = 0$, stop; otherwise
$ q_{j+1} := r_j/\beta_j,\ u_{j+1} := Aq_{j+1} - \beta_j q_j $,
$ j := j+1 $, and go to step 3.
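As a concrete illustration, here is a minimal NumPy sketch of steps 1-4 (the function name `lanczos`, the dense array `Q` holding the Lanczos vectors, and the step cap `k` are choices of this sketch, not part of the algorithm itself):

```python
import numpy as np

def lanczos(A, q1, k):
    """Steps 1-4 above: at most k steps of Lanczos tridiagonalization.

    A can be anything supporting the product A @ v (dense array,
    scipy.sparse matrix, LinearOperator); q1 must be a unit vector.
    Returns the diagonal alpha and off-diagonal beta of T, plus the
    Lanczos vectors as the columns of Q.
    """
    n = q1.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    Q[:, 0] = q1
    u = A @ q1                                   # step 2: u_1 := A q_1
    for j in range(k):
        alpha[j] = Q[:, j] @ u                   # step 3: alpha_j := q_j^T u_j
        r = u - alpha[j] * Q[:, j]               #         r_j := u_j - alpha_j q_j
        beta[j] = np.linalg.norm(r)              #         beta_j := ||r_j||_2
        if beta[j] == 0.0 or j == k - 1:         # step 4: beta_j = 0 -> stop
            return alpha[:j + 1], beta[:j], Q[:, :j + 1]
        Q[:, j + 1] = r / beta[j]                #         q_{j+1} := r_j / beta_j
        u = A @ Q[:, j + 1] - beta[j] * Q[:, j]  #         u_{j+1} := A q_{j+1} - beta_j q_j
```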
The main workload of this algorithm is concentrated in computing the product $Av$ of the matrix $A$ with a vector $v$. In actual use, a subroutine for computing $Av$ should be designed according to the specific structure of $A$, so that the algorithm does as little work as possible.
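For example, with SciPy the matrix can be kept in a sparse format, or supplied matrix-free through a `LinearOperator`; the 1-D Laplacian below is only a stand-in test matrix chosen for this sketch:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator

n = 100_000

# Sparse (CSR) storage: the product A @ v costs O(nnz) instead of O(n^2).
A_sparse = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Matrix-free alternative: only the rule v -> A v is supplied and the
# matrix itself is never stored.
def laplacian_matvec(v):
    out = 2.0 * v          # diagonal entries
    out[:-1] -= v[1:]      # superdiagonal entries
    out[1:] -= v[:-1]      # subdiagonal entries
    return out

A_free = LinearOperator((n, n), matvec=laplacian_matvec, dtype=np.float64)

# Either form supports the A @ v product used by lanczos() above.
```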
III. The Lanczos phenomenon
If the Lanczos algorithm could be executed without rounding error, the resulting Lanczos vectors $q_1,\cdots,q_j$ would be mutually orthogonal, and the iteration would terminate in at most $n$ steps. In the presence of rounding errors, however, the orthogonality of the computed Lanczos vectors is quickly lost, and they can even become linearly dependent. C. C. Paige pointed out that this loss of orthogonality is tied to the improvement in accuracy of the approximate eigenvalues, as the sketch below illustrates.
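A hedged numerical illustration, reusing the `lanczos` sketch from section II (the random symmetric test matrix and the step counts are arbitrary choices of this example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))
A = A + A.T                            # random symmetric test matrix
q1 = rng.standard_normal(n)
q1 /= np.linalg.norm(q1)

# In exact arithmetic Q_k^T Q_k = I; the departure from the identity
# measures the loss of orthogonality, which typically grows rapidly
# once the first Ritz values converge.
for k in (10, 50, 200):
    _, _, Q = lanczos(A, q1, k)
    err = np.max(np.abs(Q.T @ Q - np.eye(Q.shape[1])))
    print(k, err)
```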
For an eigenvalue $\mu_j$ of the Lanczos matrix $T_j$, as long as it is not too close to the other $\mu_i$, the norm $\|z_j\|_2$ of the corresponding computed eigenvector stays close to 1. The larger the number of iterations $k$ (which may exceed $n$), the more good approximations to eigenvalues of $A$ the spectrum of $T_k$ contains; when $k$ is sufficiently large, the eigenvalues of $T_k$ contain approximations to all the distinct eigenvalues of $A$. This is the Lanczos phenomenon.
Roughly speaking, for an eigenvalue $\lambda$ located at either end of the spectral interval of $A$ and well separated from the rest, $T_k$ already contains a good approximation of $\lambda$ at $k \ll n$; for an eigenvalue $\lambda$ in the interior of the interval that is not well separated from the other eigenvalues, $k \gg n$ may be needed before the eigenvalues of $T_k$ contain a good approximation of $\lambda$.
Therefore, when we only need a few eigenvalues at the two ends of the spectrum of a large sparse symmetric matrix $A$, usually only a few iterations ($k \ll n$) are needed, and the eigenvalues at the two ends of the spectrum of $T_k$ are already very good approximations of the eigenvalues at the two ends of the spectrum of $A$.
An illustrative example is Parlett's data: for $n = 10^4$, taking $k = 300$ already yields good approximations to the 10 eigenvalues at the two ends of the spectrum of $A$ and the corresponding eigenvectors.
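A small numerical check of this behavior, again reusing the `lanczos` sketch (the diagonal test matrix, with two deliberately well-separated eigenvalues planted at the top, is an assumption of this example):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal
from scipy.sparse import diags

rng = np.random.default_rng(1)
n, k = 100_000, 25
d = np.linspace(0.0, 1.0, n)   # a dense cluster of eigenvalues in [0, 1] ...
d[-2], d[-1] = 2.5, 3.0        # ... plus two well-separated ones at the top
A = diags(d, 0, format="csr")  # diagonal matrix: eigenvalues known exactly

q1 = rng.standard_normal(n)
q1 /= np.linalg.norm(q1)
alpha, beta, _ = lanczos(A, q1, k)

# Ritz values = eigenvalues of the k-th order Lanczos matrix T_k
ritz = eigh_tridiagonal(alpha, beta, eigvals_only=True)
print(ritz[ritz > 2.0])        # approximations to 2.5 and 3.0 at k = 25 << n
                               # (duplicate "ghost" copies may appear as k
                               # grows, cf. the orthogonality loss above)
```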
If all the eigenvalues of $A$ are wanted, $k$ generally has to be much larger than $n$. The experience of Cullum and Willoughby is that for the vast majority of matrices, taking $k \leqslant 3n$ is enough to find approximations to almost all of the distinct eigenvalues to machine accuracy.
Clearly, the Lanczos method is useful for the accurate ground state and several low-energy states of the $H\mathbf{c} = E\mathbf{c}$ problem, and can be developed further. In this problem the highest levels are generally not required, so the method for obtaining accurate low-energy levels can be accelerated further; a concrete approach is the Davidson diagonalization commonly used in chemistry.
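For the few lowest eigenpairs, one often calls a library routine in practice rather than hand-rolling the iteration. A hedged example with SciPy's `eigsh` (a wrapper of ARPACK's implicitly restarted Lanczos; the 1-D Laplacian again stands in for $H$, and the shift-invert setting `sigma=0.0` is this sketch's choice for accelerating the clustered lower end of the spectrum):

```python
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 10_000
H = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Five eigenvalues nearest sigma = 0: the "ground state" and the first
# few "low-lying states" of this stand-in H.
vals = eigsh(H, k=5, sigma=0.0, return_eigenvectors=False)
print(sorted(vals))   # exact values: 2 - 2*cos(j*pi/(n+1)), j = 1..5
```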