
1 User's Guide

Python code can be found in the DD directory.

1.1 CG-DEF Experimentations

EXP1_1 runs CG-DEF with the true eigenvectors for W, while EXP1_2 runs CG-DEF with perturbed eigenvectors. EXP3_1 and EXP3_2 run CG-DEF with eigenvectors computed between each solve. Eigenvectors can be computed using the Lanczos Rayleigh-Ritz or the Lanczos harmonic-Ritz method. The EXP1_1, EXP1_2 and EXP3_1 experiments run with 2 right-hand sides (rhs); EXP3_2 can be run with more than 2 rhs.
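As an illustration, the Rayleigh-Ritz step can be sketched in a few lines of numpy. This is a hedged sketch assuming a Lanczos factorization (basis V, tridiagonal T) is already available; the names, and the harmonic-Ritz variant used in the actual scripts, may differ:

import numpy as np

def rayleigh_ritz(V, T):
    # Ritz values are the eigenvalues of the tridiagonal T = V.T @ A @ V;
    # Ritz vectors are obtained by lifting back to the original space.
    ritz_values, S = np.linalg.eigh(T)
    ritz_vectors = V @ S
    return ritz_values, ritz_vectors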

1.1.1 Execution and parameters

n and nk are used to set respectively the rank of A and the number of clustered eigenvalues. Default values are n=500 and nk=1; they can be set at execution time: python3 EXP.py n nk. Other parameters, illustrated in the sketch after the list below, are:

mat
["A0", "A1", "A2", "A3", "A4"] a string list to set by name the linear operator in Ax=b
  • Matrix Constructor is setmatrix4exp(which=Imat, n=n, k=nk, theta=Itheta)
theta
[1e2, 1e4, 1e6] a list used to set the LD (1/theta) or MD (theta) eigenvalue order.
  • Itheta is the iterated value of theta used as an input for the Matrix Constructor
maxrun
integer, default is 3, used to set the number of runs. It must also be set in the .R script.
part
["LD", "MD", "LMD"] a string list used to set which part of the spectrum of A need to be deflated
  • "LD" from the lowest dominant eigenvalue, "MD" from the most dominant one and "LMD" from both "LD" and "MD"
SI
['S0', 'S1', 'S2', 'S3'] a string list used to set deflation parameters
  • 'S0' to run CG (no deflation) and 'S1', 'S2' or 'S3' to run CG-DEF
  • 'S1' to set l and k to the number of iterations computed by the previous solve
  • 'S2' to set l like 'S1' but k in [1, 2, 3, 4, 10, 50,…,n]
  • 'S3' to set explicitly l and k
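For reference, the parameter lists above could be declared as follows. This is a hedged sketch built from the values quoted in this guide; the actual EXP*.py scripts may organize it differently:

import sys

# Rank of A and number of clustered eigenvalues (defaults n=500, nk=1),
# overridable on the command line: python3 EXP.py n nk
n  = int(sys.argv[1]) if len(sys.argv) > 1 else 500
nk = int(sys.argv[2]) if len(sys.argv) > 2 else 1

mat     = ["A0", "A1", "A2", "A3", "A4"] # linear operators in Ax=b, by name
theta   = [1e2, 1e4, 1e6]                # LD (1/theta) or MD (theta) eigenvalue order
maxrun  = 3                              # number of runs (also set in the .R script)
part    = ["LD", "MD", "LMD"]            # part of the spectrum to deflate
SI      = ['S0', 'S1', 'S2', 'S3']       # deflation strategies ('S0' = plain CG)
Ikliste = [1, 2, 3, 4, 10, 50]           # k sweep for 'S2', extended up to n in the text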

In fact, for l two integers l1 and l2 are used, such that if the current solve stopped after n_iter iterations, l1 < l2 ≤ n_iter, and the CG coefficients are stored between iterations l1 and l2. l1, l2 and k are always recomputed between each solve using the apply_strat(strat, W, n_iter) function (in DDMPY.py); a sketch of it is given after the scheme. Most of the experiments run like the following scheme:

import numpy as np

for Imat in mat: # Matrix loop
    for Itheta in theta: # Matrix parameter loop
        A, DA, QTA = setmatrix4exp(which=Imat, n=n, k=nk, theta=Itheta) # Matrix constructor
        for ISI in SI: # Deflation strategy loop
            for Ipart in part: # Deflated spectrum part loop
                for Ik in Ikliste: # Deflation space size loop
                    # Set k, l1, l2, RHR
                    for Irun in range(maxrun): # Run loop
                        xexa = np.random.rand(A.shape[0], nrhs) # exact solutions
                        b    = A @ xexa                         # right-hand sides
                        # Set deflation parameters
                        for i in range(nrhs): # Right-hand-side loop
                            s = ConjGrad() # set solver
                            x, W = s.dot(b[..., i:i+1]) # solve
                            csv_eigen(A, W, DA, QTA, strat, tags=Tags) # Eigenvalues/vectors data
# R script execution using system command lines
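The strategy logic of apply_strat can be pictured as follows. This is only a hedged sketch of the behavior described above: the real function in DDMPY.py is called as apply_strat(strat, W, n_iter), and its internals may differ; the extra keyword arguments here are illustrative:

def apply_strat(strat, W, n_iter, Ik=None, l_user=(0, 0), k_user=0):
    # Returns (l1, l2, k) with l1 < l2 <= n_iter; W holds data from the
    # previous solve (unused in this sketch).
    if strat == 'S0':   # plain CG: no deflation
        return 0, 0, 0
    if strat == 'S1':   # l and k follow the previous solve
        return 0, n_iter, n_iter
    if strat == 'S2':   # l as in 'S1', k taken from the swept list
        return 0, n_iter, Ik
    # 'S3': l and k set explicitly
    l1, l2 = l_user
    return l1, l2, k_user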

2 Introduction

Numerical methods for solving linear systems \(Ax=b\) arising from real engineering problems need to meet criteria such as robustness, precision, memory footprint and scalability. They are constructed from the properties of the linear operator \(A\), such as symmetry, sparsity, or condition number; to achieve these criteria, hybrid solvers mixing both direct and iterative methods, combined with preconditioners, were developed.

Iterative methods based on projection onto Krylov subspaces were developed in several ways in order to reduce kernel computations for large-scale problems. When \(A\) is non-symmetric, GMRES [6] is often a good choice, but it is a memory-bound method. To recover from this, a restarted version was introduced [7], and from there came the idea of reusing information from a previous solve; new methods such as augmented and deflated variants [8] were studied and extended to linear systems with multiple right-hand sides as block methods [9].

Since the Krylov subspace build step is performed by the Arnoldi process, those new methods were extended to the symmetric positive definite case through the Lanczos process. The conjugate gradient method was extended to seed [11], block [12], augmented [14] and deflated [14] versions.

When it comes to solving systems with many (\(\nu\)) right-hand sides \(A x_s = b_s,\; s=1,\dots,\nu\), block and seed methods reduce iteration and memory costs but increase execution time and require that all or part of the right-hand side vectors be available at the same time. The deflated method fits well for solving multiple right-hand sides successively, but involves a reduced eigenvalue problem.

This report focuses on the deflated version of the conjugate gradient method (cg-def). The key idea of cg-def is to split the search space into two complementary subspaces: a Krylov subspace and a deflation subspace. This method fits well when the spectrum of \(A\) contains clustered eigenvalues that perturb the convergence of cg. A good choice for the deflation space is the part of the eigenspace associated with the clustered eigenvalues. The main difficulty of such methods is the computation of a good deflation subspace at the lowest cost. But cg can take advantage of its relation with the Lanczos method, used to build the Krylov subspace, to compute an approximate eigensubspace step by step.
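To make this concrete, here is a minimal numpy sketch of one standard deflation construction, the projector \(P = I - AW(W^{T}AW)^{-1}W^{T}\) together with an initial guess that is exact on the deflation subspace. It illustrates the general principle only; the exact variant implemented in cg-def may differ:

import numpy as np

def deflation_setup(A, W, b):
    # W: n-by-k basis of the deflation subspace (e.g. approximate
    # eigenvectors associated with the clustered eigenvalues).
    AW  = A @ W
    WAW = W.T @ AW # k-by-k reduced operator
    # Initial guess solving the system exactly on span(W)
    x0 = W @ np.linalg.solve(WAW, W.T @ b)
    # Action of the projector P = I - A W (W^T A W)^{-1} W^T on a vector v
    def project(v):
        return v - AW @ np.linalg.solve(WAW, W.T @ v)
    return x0, project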

With cg-def, several versions of the Lanczos process have to be studied [15,16,17] in order to recover from loss of orthogonality and compute a good eigenspace [18]. This report presents the deflation theory. To do so, we recall the definition of linear systems and the associated numerical methods, and the eigenvalue problem is exposed; a second part details cg-def and numerical tests. A third part exposes results when the exact eigensubspace is used as deflation space, to understand the effect of deflation. Afterwards, the same study is performed with a perturbed eigenspace to find the minimal precision required when the deflation subspace is computed. Finally, the convergence behavior of cg-def is studied when the deflation subspace is computed, and the results are compared with other methods such as init-cg and locally optimal cg.

Linear system and eigenvalue problem. In this section, the definitions of the linear system and of the eigenvalue problem are recalled. Direct and iterative methods are quickly explained; afterwards, a focus is made on projection methods.

2.0.1 Linear system

Let \(n\in\N,\, n>1\), and let \((x_1, x_2,\dots,x_n)\in \R^{n}\) denote the unknowns, \((a_1, a_2,\dots,a_n)\in \R^{n}\) the coefficients and \(b\in\R\) the right-hand side. We call linear equation the following problem:

\begin{equation}\label{eq:eql1}
\boxed{\mbox{Find } (x_{1},x_{2},\dots,x_{n})\in\R^{n} \mbox{ such that: } \sum_{i=1}^{n} a_{i}\,x_{i} = b}
\end{equation}

Here \((x_{i})_{i=1,\dots,n}\) are the unknowns of problem (\ref{eq:eql1}). When it comes to \(n\) equations, the right-hand side becomes \((b_{j})_{j=1,\dots,n}\) and the coefficients \((a_{ij})\in\R\) for all \((i,j)\in\{1,\dots,n\}^{2}\), and problem (\ref{eq:eql1}) is rewritten with \(n\) equations and \(n\) unknowns:

\begin{equation}\label{eq:eql2}
\boxed{\mbox{Find } (x_{1},x_{2},\dots,x_{n})\in\R^{n} \mbox{ such that: }
\left\{\begin{array}{rcl}
\sum_{j=1}^{n} a_{1j}\,x_{j} &=& b_{1}\\
&\vdots&\\
\sum_{j=1}^{n} a_{ij}\,x_{j} &=& b_{i}\\
&\vdots&\\
\sum_{j=1}^{n} a_{nj}\,x_{j} &=& b_{n}
\end{array}\right.}
\end{equation}
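For instance, a small \(3\times 3\) instance of this system can be assembled and solved directly with numpy:

import numpy as np

# A 3-equation instance of problem (eq:eql2): find x such that A x = b
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(A, b)    # direct solve
assert np.allclose(A @ x, b) # verify the residual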
