<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div class="">I have a complex system, (A + i B) (x + i y) = (f + i g), that I am trying to solve using real matrices: </div><div class=""><br class=""></div><div class=""> [A -B; B A] [x; y] = [f; g]</div><div class=""><br class=""></div><div class="">So, in the interleaved (K formulation) ordering of the same system, each 2x2 block is made of the real and imaginary components of the corresponding entry in the complex matrix. </div><div class=""><br class=""></div><div class="">I am following the discussion in this paper: </div><div class=""><div style="margin: 0px; line-height: normal; font-family: 'Times New Roman';" class=""><br class=""></div><div style="margin: 0px; line-height: normal; font-family: 'Times New Roman';" class="">Day, D. &amp; Heroux, M.A. 2001. Solving complex-valued linear systems via equivalent real formulations. <i class="">SIAM Journal on Scientific Computing</i> 23: 480-498.</div></div><div class=""><br class=""></div><div class="">
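To make the setup concrete, here is a tiny self-contained sketch (random made-up data, NumPy only) of the interleaved K formulation I am assembling: each complex entry a + ib becomes the 2x2 real block [a -b; b a], and the right-hand side interleaves (f_i, g_i). Solving the real system recovers the real and imaginary parts of the complex solution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Hypothetical small complex system (A + iB)(x + iy) = (f + ig)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# Reference complex solve
z = np.linalg.solve(A + 1j * B, f + 1j * g)

# Interleaved K formulation: K[2i:2i+2, 2j:2j+2] = [a_ij -b_ij; b_ij a_ij]
K = np.zeros((2 * n, 2 * n))
K[0::2, 0::2] = A
K[0::2, 1::2] = -B
K[1::2, 0::2] = B
K[1::2, 1::2] = A
rhs = np.zeros(2 * n)
rhs[0::2], rhs[1::2] = f, g

sol = np.linalg.solve(K, rhs)
x, y = sol[0::2], sol[1::2]  # interleaved real and imaginary parts

# The real solve reproduces the complex solution
assert np.allclose(x, z.real) and np.allclose(y, z.imag)
```

This interleaved ordering is a symmetric permutation of the [A -B; B A] form above, and it is the ordering that gives the matrix a block size of 2 (bs=2) when assembled in PETSc.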
<div class="page" title="Page 5">
<div class="section">
<div class="layoutArea">
<div class="column"><p class="">Following is an excerpt. </p><p class="">**********************************************************************************</p><p class="">The matrix K in the K formulation has a natural 2-by-2 block structure that can
be exploited by using block entry data structures. Using the block entry features of
these packages has the following benefits.
</p>
<ol class="">
<li class=""><p class="">Applying 2-by-2 block Jacobi scaling to K corresponds exactly to applying
point Jacobi scaling to C.
</p>
</li>
<li class=""><p class="">The block sparsity pattern of K exactly matches the point sparsity pattern
of C. Thus any pattern-based preconditioners such as block ILU(l) applied
to K correspond exactly to ILU(l) applied to C. See section 4 for definitions
of block ILU(l) and ILU(l).
</p>
</li>
<li class=""><p class="">Any drop tolerance-based complex preconditioner has a straightforward K
formulation since the absolute value of a complex entry equals the scaled
Frobenius norm of the corresponding block entry in K. </p>
</li>
</ol>
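(Stepping outside the excerpt for a moment: points 1 and 3 above are easy to verify on a tiny example. Below is a sketch with random made-up data, building K from interleaved 2x2 blocks [a -b; b a]; the 2x2-block inverse of such a block is itself the block representation of 1/c, which is why block Jacobi on K matches point Jacobi on C.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Hypothetical small complex matrix C and its interleaved K formulation,
# where each entry c = a + ib becomes the 2x2 real block [a -b; b a].
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = A + 1j * B
K = np.zeros((2 * n, 2 * n))
K[0::2, 0::2], K[0::2, 1::2] = A, -B
K[1::2, 0::2], K[1::2, 1::2] = B, A

# Point 1: 2x2 block Jacobi scaling of K == point Jacobi scaling of C.
Cj = np.diag(1.0 / np.diag(C)) @ C          # point Jacobi on C
Kj = K.copy()
for i in range(n):                           # block Jacobi on K
    blk = K[2*i:2*i+2, 2*i:2*i+2]
    Kj[2*i:2*i+2, :] = np.linalg.inv(blk) @ K[2*i:2*i+2, :]
# The block-Jacobi-scaled K is exactly the K formulation of the
# point-Jacobi-scaled C:
assert np.allclose(Kj[0::2, 0::2], Cj.real)
assert np.allclose(Kj[0::2, 1::2], -Cj.imag)

# Point 3: |c_ij| equals the scaled Frobenius norm of the 2x2 block,
# since ||[a -b; b a]||_F = sqrt(2 (a^2 + b^2)) = sqrt(2) |a + ib|.
i, j = 1, 2
blk = K[2*i:2*i+2, 2*j:2*j+2]
assert np.isclose(np.linalg.norm(blk, 'fro') / np.sqrt(2), abs(C[i, j]))
```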
</div>
</div>
</div>
</div></div><div class="">**********************************************************************************</div><div class=""><br class=""></div><div class="">The paper additionally outlines the challenges posed by the poor spectral properties of the equivalent real system. </div><div class=""><br class=""></div><div class="">So, I am assembling the system with 2x2 blocks, but am not sure how best to pick the solver options in PETSc. </div><div class=""><br class=""></div><div class="">I agree that I am getting confused by the “block” nomenclature. In particular, I am not sure how to reconcile the different notions with points 1 and 2 from the paper (noted above). </div><div class=""><br class=""></div><div class="">Any guidance would be appreciated!</div><div class=""><br class=""></div><div class="">Thanks,</div><div class="">Manav</div><div class=""><br class=""></div><br class=""><div><blockquote type="cite" class=""><div class="">On Nov 15, 2016, at 3:12 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov" class="">bsmith@mcs.anl.gov</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div class=""><br class=""> We can help you if you provide more information about what the blocks represent in your problem. <br class=""><br class=""> Do you have two degrees of freedom at each grid point? What physically are the two degrees of freedom. What equations are you solving?<br class=""><br class=""> I think you may be mixing up the "matrix block size" of 2 with the blocks in "block Jacobi". Though both are called "block" they really don't have anything to do with each other. <br class=""><br class=""> Barry<br class=""><br class=""><blockquote type="cite" class="">On Nov 15, 2016, at 3:03 PM, Manav Bhatia <<a href="mailto:bhatiamanav@gmail.com" class="">bhatiamanav@gmail.com</a>> wrote:<br class=""><br class="">Hi, <br class=""><br class=""> I am setting up a matrix with the following calls. 
The intent is to solve the system with a 2x2 block size.<br class=""><br class=""> What combinations of KSP/PC will effectively translate to solving this block matrix system? <br class=""><br class=""> I saw a discussion about bjacobi in the manual with the following calls (I omitted the prefixes from my actual command): <br class="">-pc_type bjacobi -pc_bjacobi_blocks 2 -sub_ksp_type preonly -sub_pc_type lu -ksp_view<br class=""><br class="">which provides the following output: <br class="">KSP Object:(fluid_complex_) 1 MPI processes<br class=""> type: gmres<br class=""> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br class=""> GMRES: happy breakdown tolerance 1e-30<br class=""> maximum iterations=10000, initial guess is zero<br class=""> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br class=""> left preconditioning<br class=""> using PRECONDITIONED norm type for convergence test<br class="">PC Object:(fluid_complex_) 1 MPI processes<br class=""> type: bjacobi<br class=""> block Jacobi: number of blocks = 2<br class=""> Local solve is same for all blocks, in the following KSP and PC objects:<br class=""> KSP Object: (fluid_complex_sub_) 1 MPI processes<br class=""> type: preonly<br class=""> maximum iterations=10000, initial guess is zero<br class=""> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br class=""> left preconditioning<br class=""> using NONE norm type for convergence test<br class=""> PC Object: (fluid_complex_sub_) 1 MPI processes<br class=""> type: lu<br class=""> LU: out-of-place factorization<br class=""> tolerance for zero pivot 2.22045e-14<br class=""> matrix ordering: nd<br class=""> factor fill ratio given 5., needed 5.70941<br class=""> Factored matrix follows:<br class=""> Mat Object: 1 MPI processes<br class=""> type: seqaij<br class=""> rows=36844, cols=36844<br class=""> package used to perform factorization: petsc<br class=""> total: 
nonzeros=14748816, allocated nonzeros=14748816<br class=""> total number of mallocs used during MatSetValues calls =0<br class=""> using I-node routines: found 9211 nodes, limit used is 5<br class=""> linear system matrix = precond matrix:<br class=""> Mat Object: (fluid_complex_) 1 MPI processes<br class=""> type: seqaij<br class=""> rows=36844, cols=36844<br class=""> total: nonzeros=2583248, allocated nonzeros=2583248<br class=""> total number of mallocs used during MatSetValues calls =0<br class=""> using I-node routines: found 9211 nodes, limit used is 5<br class=""> linear system matrix = precond matrix:<br class=""> Mat Object: (fluid_complex_) 1 MPI processes<br class=""> type: seqaij<br class=""> rows=73688, cols=73688, bs=2<br class=""> total: nonzeros=5224384, allocated nonzeros=5224384<br class=""> total number of mallocs used during MatSetValues calls =0<br class=""> using I-node routines: found 18422 nodes, limit used is 5<br class=""><br class=""><br class="">Likewise, I tried to use a more generic option: <br class="">-mat_set_block_size 2 -ksp_type gmres -pc_type ilu -sub_ksp_type preonly -sub_pc_type lu -ksp_view<br class=""><br class="">with the following output:<br class="">Linear fluid_complex_ solve converged due to CONVERGED_RTOL iterations 38<br class="">KSP Object:(fluid_complex_) 1 MPI processes<br class=""> type: gmres<br class=""> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br class=""> GMRES: happy breakdown tolerance 1e-30<br class=""> maximum iterations=10000, initial guess is zero<br class=""> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br class=""> left preconditioning<br class=""> using PRECONDITIONED norm type for convergence test<br class="">PC Object:(fluid_complex_) 1 MPI processes<br class=""> type: ilu<br class=""> ILU: out-of-place factorization<br class=""> 0 levels of fill<br class=""> tolerance for zero pivot 2.22045e-14<br class=""> matrix 
ordering: natural<br class=""> factor fill ratio given 1., needed 1.<br class=""> Factored matrix follows:<br class=""> Mat Object: 1 MPI processes<br class=""> type: seqaij<br class=""> rows=73688, cols=73688, bs=2<br class=""> package used to perform factorization: petsc<br class=""> total: nonzeros=5224384, allocated nonzeros=5224384<br class=""> total number of mallocs used during MatSetValues calls =0<br class=""> using I-node routines: found 18422 nodes, limit used is 5<br class=""> linear system matrix = precond matrix:<br class=""> Mat Object: (fluid_complex_) 1 MPI processes<br class=""> type: seqaij<br class=""> rows=73688, cols=73688, bs=2<br class=""> total: nonzeros=5224384, allocated nonzeros=5224384<br class=""> total number of mallocs used during MatSetValues calls =0<br class=""> using I-node routines: found 18422 nodes, limit used is 5<br class=""><br class=""> Are other PC types expected to translate to the block matrices? <br class=""><br class=""> I would appreciate any guidance. <br class=""><br class="">Thanks,<br class="">Manav<br class=""><br class=""></blockquote><br class=""></div></div></blockquote></div><br class=""></body></html>