<div dir="ltr">Clear enough. Thank you :-)<div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr">Giang</div></div></div>
<br><div class="gmail_quote">On Tue, Jan 26, 2016 at 3:01 PM, Mark Adams <span dir="ltr"><<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="">On Tue, Jan 26, 2016 at 3:58 AM, Hoang Giang Bui <span dir="ltr"><<a href="mailto:hgbk2008@gmail.com" target="_blank">hgbk2008@gmail.com</a>></span> wrote:<br></span><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">Hi<div><br></div><span class=""><div>I assert this line to the hypre.c to see what block size it set to</div><div><br></div><div><div>/* special case for BoomerAMG */</div><div> if (jac->setup == HYPRE_BoomerAMGSetup) {</div><div> ierr = MatGetBlockSize(pc->pmat,&bs);CHKERRQ(ierr);</div><div> </div><div> // check block size passed to HYPRE</div><div> PetscPrintf(PetscObjectComm((PetscObject)pc),"the block size passed to HYPRE is %d\n",bs);<br></div><div><br></div><div> if (bs > 1) PetscStackCallStandard(HYPRE_BoomerAMGSetNumFunctions,(jac->hsolver,bs));</div><div> }</div></div><div><br></div><div>It shows that the passing block size is 1. So my hypothesis is correct.</div><div><br></div><div>In the manual of MatSetBlockSize (<a href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetBlockSize.html" target="_blank">http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetBlockSize.html</a>), it has to be called before MatSetUp. Hence I guess the matrix passed to HYPRE is created before I set the block size. Given that, I set the block size after the call to PCFieldSplitSetIS</div><div><br></div><div><div> ierr = PCFieldSplitSetIS(pc, "u", IS_u); CHKERRQ(ierr);</div><div> ierr = PCFieldSplitSetIS(pc, "p", IS_p); CHKERRQ(ierr);<br></div><div><br></div><div> /* </div><div> Set block size for sub-matrix,</div><div> */</div><span><div> ierr = PCFieldSplitGetSubKSP(pc, &nsplits, &sub_ksp); CHKERRQ(ierr);</div></span></div><span><div> ksp_U = sub_ksp[0];</div><div> ierr = KSPGetOperators(ksp_U, &A_U, &P_U); CHKERRQ(ierr);</div><div> ierr = MatSetBlockSize(A_U, 3); CHKERRQ(ierr);</div><div> ierr = MatSetBlockSize(P_U, 3); CHKERRQ(ierr);</div><div><br></div></span><div>I guess the sub-matrices is created at PCFieldSplitSetIS. If that's correct then it's not possible to set the block size this way.</div></span></div></blockquote><div><br></div><div>You set the block size in the ISs that you give to FieldSplit. FieldSplit will give it to the matrices.</div><div><div class="h5"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><span><font color="#888888"><div><br></div></font></span><div class="gmail_extra"><span><font color="#888888"><br clear="all"><div><div><div dir="ltr">Giang</div></div></div></font></span><div><div>
<br><div class="gmail_quote">On Mon, Jan 25, 2016 at 7:43 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span><br>
> On Jan 25, 2016, at 11:13 AM, Hoang Giang Bui <hgbk2008@gmail.com> wrote:
>
> OK, let's come back to my problem. I got your point about the interaction between components in one block. In my case, the interaction is strong.
>
> As you said, I tried this:
>
> ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);
> ierr = PCFieldSplitGetSubKSP(pc, &nsplits, &sub_ksp); CHKERRQ(ierr);
> ksp_U = sub_ksp[0];
> ierr = KSPGetOperators(ksp_U, &A_U, &P_U); CHKERRQ(ierr);
> ierr = MatSetBlockSize(A_U, 3); CHKERRQ(ierr);
> ierr = MatSetBlockSize(P_U, 3); CHKERRQ(ierr);
> ierr = PetscFree(sub_ksp); CHKERRQ(ierr);
>
> But it doesn't seem to work. The output from -ksp_view shows that the matrix passed to Hypre still has bs=1.

   Hmm, this is strange. MatSetBlockSize() should have either set the block size to 3 or generated an error. Can you run in the debugger on one process, put a breakpoint in MatSetBlockSize(), and see what block size it is setting? Then in PCSetUp_hypre() you can see what it is passing to hypre as the block size and maybe figure out how it becomes 1.

Barry

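A quick sanity check in that direction, without the debugger: read the block size straight back after setting it. This is only a sketch, reusing the A_U and P_U handles from the snippet quoted above and adding standard MatGetBlockSize/PetscPrintf calls.

  PetscInt bs_A, bs_P;
  ierr = MatSetBlockSize(A_U, 3); CHKERRQ(ierr);
  ierr = MatSetBlockSize(P_U, 3); CHKERRQ(ierr);
  /* if these do not print 3, the calls above silently had no effect,
     which narrows down where bs reverts to 1 */
  ierr = MatGetBlockSize(A_U, &bs_A); CHKERRQ(ierr);
  ierr = MatGetBlockSize(P_U, &bs_P); CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "A_U bs=%D  P_U bs=%D\n", bs_A, bs_P); CHKERRQ(ierr);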
>
> KSP Object: (fieldsplit_u_) 8 MPI processes
>   type: preonly
>   maximum iterations=10000, initial guess is zero
>   tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>   left preconditioning
>   using NONE norm type for convergence test
> PC Object: (fieldsplit_u_) 8 MPI processes
>   type: hypre
>     HYPRE BoomerAMG preconditioning
>     HYPRE BoomerAMG: Cycle type V
>     HYPRE BoomerAMG: Maximum number of levels 25
>     HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
>     HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
>     HYPRE BoomerAMG: Threshold for strong coupling 0.25
>     HYPRE BoomerAMG: Interpolation truncation factor 0
>     HYPRE BoomerAMG: Interpolation: max elements per row 0
>     HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
>     HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
>     HYPRE BoomerAMG: Maximum row sums 0.9
>     HYPRE BoomerAMG: Sweeps down 1
>     HYPRE BoomerAMG: Sweeps up 1
>     HYPRE BoomerAMG: Sweeps on coarse 1
>     HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi
>     HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi
>     HYPRE BoomerAMG: Relax on coarse Gaussian-elimination
>     HYPRE BoomerAMG: Relax weight (all) 1
>     HYPRE BoomerAMG: Outer relax weight (all) 1
>     HYPRE BoomerAMG: Using CF-relaxation
>     HYPRE BoomerAMG: Measure type local
>     HYPRE BoomerAMG: Coarsen type PMIS
>     HYPRE BoomerAMG: Interpolation type classical
>   linear system matrix = precond matrix:
>   Mat Object: (fieldsplit_u_) 8 MPI processes
>     type: mpiaij
>     rows=792333, cols=792333
>     total: nonzeros=1.39004e+08, allocated nonzeros=1.39004e+08
>     total number of mallocs used during MatSetValues calls =0
>       using I-node (on process 0) routines: found 30057 nodes, limit used is 5
>
> In another test, I can see block size bs=3 in the Mat Object section.
>
> Regardless of the setup cost of Hypre AMG, I saw that it gives quite remarkable performance, provided that the material parameters do not vary strongly and the geometry is regular enough.
>
>
> Giang
>
> On Fri, Jan 22, 2016 at 2:57 PM, Matthew Knepley <knepley@gmail.com> wrote:
> On Fri, Jan 22, 2016 at 7:27 AM, Hoang Giang Bui <hgbk2008@gmail.com> wrote:
> Do you mean the option pc_fieldsplit_block_size? In this thread:
>
> http://petsc-users.mcs.anl.narkive.com/qSHIOFhh/fieldsplit-error
>
> No. "Block size" is confusing in PETSc since it is used to do several things. Here block size
> is being used to split the matrix. You do not need this since you are prescribing your splits. The
> matrix block size is used in two ways:
>
> 1) To indicate that matrix values come in logically dense blocks
>
> 2) To change the storage to match this logical arrangement
>
> After everything works, we can just indicate to the extracted submatrix that it has a
> certain block size. However, for the Laplacian I expect it not to matter.
>
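To make those two uses concrete, here is a small sketch with hypothetical sizes and names (not code from this thread): the same logically blocked values can be inserted block-wise into an AIJ matrix that merely records bs = 3 for use (1), or into a BAIJ matrix whose storage is also organized in 3x3 blocks for use (2).

  Mat         A;
  PetscInt    n = 30;               /* e.g. 10 nodes x 3 dofs, hypothetical */
  PetscInt    brow = 0, bcol = 0;   /* block (node) indices */
  PetscScalar vals[9] = {0};        /* one dense 3x3 nodal block */

  ierr = MatCreate(PETSC_COMM_WORLD, &A); CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n); CHKERRQ(ierr);
  ierr = MatSetType(A, MATAIJ); CHKERRQ(ierr);   /* or MATBAIJ to change the storage too */
  ierr = MatSetBlockSize(A, 3); CHKERRQ(ierr);   /* declare the logical 3x3 blocking */
  ierr = MatSetUp(A); CHKERRQ(ierr);

  /* values can now be inserted one 3x3 block at a time */
  ierr = MatSetValuesBlocked(A, 1, &brow, 1, &bcol, vals, INSERT_VALUES); CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);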
> It assumes you have a constant number of fields at each grid point, am I right? However, my field layout is not constant, e.g.
> [u1_x u1_y u1_z p_1 u2_x u2_y u2_z u3_x u3_y u3_z p_3 u4_x u4_y u4_z]
>
> Correspondingly, the fieldsplit is
> [u1_x u1_y u1_z u2_x u2_y u2_z u3_x u3_y u3_z u4_x u4_y u4_z]
> [p_1 p_3]
>
> Then what is the option to set block size 3 for split 0?
>
> Sorry, I searched several forum threads but could not figure out the options you mentioned.
>
>
>
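A sketch of how such a non-interlaced split can be described explicitly: idx_u, idx_p, n_u, and n_p are hypothetical arrays and counts of the locally owned displacement and pressure dofs, and the assumption is that the three displacement dofs of each node stay consecutive in the ordering shown above, so a block size of 3 on split 0 is meaningful.

  const PetscInt *idx_u, *idx_p;   /* filled from the mesh numbering (hypothetical) */
  PetscInt        n_u, n_p;
  IS              IS_u, IS_p;

  ierr = ISCreateGeneral(PETSC_COMM_WORLD, n_u, idx_u, PETSC_COPY_VALUES, &IS_u); CHKERRQ(ierr);
  ierr = ISCreateGeneral(PETSC_COMM_WORLD, n_p, idx_p, PETSC_COPY_VALUES, &IS_p); CHKERRQ(ierr);
  ierr = ISSetBlockSize(IS_u, 3); CHKERRQ(ierr);  /* split 0: 3 dofs per node */
  ierr = PCFieldSplitSetIS(pc, "u", IS_u); CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "p", IS_p); CHKERRQ(ierr);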
> You can still do that. It can be done with options once the decomposition is working. It's true that these solvers
> work better with the block size set. However, if it's the P2 Laplacian it does not really matter since it's uncoupled.
>
> Yes, I agree it's uncoupled with the other field, but the crucial factor determining the quality of the block preconditioner is the approximate inversion of each individual block. I would simply try block Jacobi first, because it's quite simple. Nevertheless, fieldsplit implements other nice things, like the Schur complement, etc.
>
> I think concepts are getting confused here. I was talking about the interaction of components in one block (the P2 block). You
> are talking about interaction between blocks.
>
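For reference, a sketch of switching the same two-field split from a block Jacobi arrangement to a Schur-complement factorization using the standard PCFIELDSPLIT calls; whether this pays off for the problem in this thread is not tested here.

  ierr = PCSetType(pc, PCFIELDSPLIT); CHKERRQ(ierr);
  /* block Jacobi over the splits corresponds to PC_COMPOSITE_ADDITIVE;
     the Schur-complement variant is selected like this: */
  ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR); CHKERRQ(ierr);
  ierr = PCFieldSplitSetSchurFactType(pc, PC_FIELDSPLIT_SCHUR_FACT_FULL); CHKERRQ(ierr);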
> Thanks,
>
> Matt
>
> Giang
>
>
>
> On Fri, Jan 22, 2016 at 11:15 AM, Matthew Knepley <knepley@gmail.com> wrote:
> On Fri, Jan 22, 2016 at 3:40 AM, Hoang Giang Bui <hgbk2008@gmail.com> wrote:
> Hi Matt
> I would rather like to set the block size for the P2 block too. Why?
>
> Because in one of my tests (for a problem involving only [u_x u_y u_z]), GMRES + Hypre AMG converges in 50 steps with block size 3, whereas it increases to 140 steps if the block size is 1 (see attached files).
>
> You can still do that. It can be done with options once the decomposition is working. It's true that these solvers
> work better with the block size set. However, if it's the P2 Laplacian it does not really matter since it's uncoupled.
>
> This gives me the impression that AMG will give a better inversion of the "P2" block if I can set its block size to 3. Of course it's still a hypothesis, but worth trying.
>
> Another question: in one of the PETSc presentations, you said that Hypre AMG does not scale well because of the setup cost, which has to be amortized over the iterations. How is that quantified? And what is the memory overhead?
>
> I said the Hypre setup cost is not scalable, but it can be amortized over the iterations. You can quantify this
> just by looking at the PCSetUp time as you increase the number of processes. I don't think they have a good
> model for the memory usage, and if they do, I do not know what it is. However, generally Hypre takes more
> memory than agglomeration MG methods like ML or GAMG.
>
> Thanks,
>
> Matt
>
>
> Giang
>
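A sketch of that measurement, assuming ksp, b, and x are the already configured outer solver and vectors; in practice reading the PCSetUp row of -log_summary output gives the same information without extra code.

  PetscLogDouble t0, t1;
  ierr = PetscTime(&t0); CHKERRQ(ierr);
  ierr = KSPSetUp(ksp); CHKERRQ(ierr);   /* triggers PCSetUp, i.e. the BoomerAMG setup */
  ierr = PetscTime(&t1); CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "setup time: %g s\n", (double)(t1 - t0)); CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x); CHKERRQ(ierr);  /* the iterations that amortize the setup */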
> On Mon, Jan 18, 2016 at 5:25 PM, Jed Brown <jed@jedbrown.org> wrote:
> Hoang Giang Bui <hgbk2008@gmail.com> writes:
>
> > Why is P2/P2 not for co-located discretization?
>
> Matt typed "P2/P2" when he meant "P2/P1".
>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
>
>
>
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
>