On Thu, Dec 1, 2011 at 8:44 AM, Parker, Andrew (UK Filton) <Andrew.Parker2@baesystems.com> wrote:
I have tried to use the following; this was before I had both your
emails (Barry, Matthew) ;-)

1) First I set this up:

PetscInt size = 7;
MatCreate(PETSC_COMM_SELF,&LHS);
MatSetSizes(LHS,size,size,size,size);
MatSetType(LHS,MATSEQDENSE); //**************NOTE THIS LINE
MatAssemblyBegin(LHS,MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(LHS,MAT_FINAL_ASSEMBLY);

VecCreate(PETSC_COMM_SELF,&RHS);
VecSetSizes(RHS,PETSC_DECIDE,size);
VecSetFromOptions(RHS);
VecAssemblyBegin(RHS);
VecAssemblyEnd(RHS);

VecCreate(PETSC_COMM_SELF,&Updates);
VecSetSizes(Updates,PETSC_DECIDE,size);
VecSetFromOptions(Updates);
VecAssemblyBegin(Updates);
VecAssemblyEnd(Updates);

KSPCreate(PETSC_COMM_SELF,&Solver);
KSPSetOperators(Solver,LHS,LHS,SAME_NONZERO_PATTERN);
KSPSetType(Solver,KSPPREONLY); //**************NOTE THIS LINE
KSPGetPC(Solver,&prec);
PCSetType(prec,PCLU); //**************NOTE THIS LINE
KSPSetFromOptions(Solver);
KSPSetUp(Solver);


For each cell I then do:

2) I copy out the appropriate data from global structures into LHS and
RHS

3) KSPSolve(Solver, RHS, Updates);

4) Stick the local Updates back into my global data structure
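
For concreteness, here is a minimal sketch of what 2)-4) might look like for
one cell, using the LHS/RHS/Updates/Solver objects from 1) (cellLHS, cellRHS
and the copy-back step are just placeholders for my own data structures):

  PetscInt    idx[7] = {0,1,2,3,4,5,6};
  PetscScalar cellLHS[49], cellRHS[7];   /* to be filled from the global structures */
  PetscScalar *u;

  /* 2) copy the cell's 7x7 block and right-hand side into LHS/RHS */
  MatSetValues(LHS,7,idx,7,idx,cellLHS,INSERT_VALUES);
  MatAssemblyBegin(LHS,MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(LHS,MAT_FINAL_ASSEMBLY);
  VecSetValues(RHS,7,idx,cellRHS,INSERT_VALUES);
  VecAssemblyBegin(RHS);
  VecAssemblyEnd(RHS);

  /* the operator values have changed, so tell the KSP before solving */
  KSPSetOperators(Solver,LHS,LHS,SAME_NONZERO_PATTERN);

  /* 3) solve the small dense system */
  KSPSolve(Solver,RHS,Updates);

  /* 4) copy the local update back into the global data structure */
  VecGetArray(Updates,&u);
  /* ... write u[0..6] back for this cell ... */
  VecRestoreArray(Updates,&u);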

Could I have your thoughts on whether the above is OK, in terms of
efficiency and being the correct thing to do? Would I be correct in
assuming that Barry's method and mine are identical, except that I've got
some copies going on to set up these small 7x7 matrices and length-7
vectors, whereas Barry's method does this inline by doing a preonly, LU
direct solve on each 7x7 block?

With Barry's method you can do this from the command line, so you can
easily compare with other solvers without changing your code.
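
For example, if your matrix is BAIJ with block size 7 and your code calls
KSPSetFromOptions(), something like

  -ksp_type preonly -pc_type pbjacobi

gives Barry's point-block solve, and you can swap in another solver for
comparison with, say,

  -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu

without touching the source.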

   Matt

Many thanks,
Andy

-----Original Message-----
From: petsc-users-bounces@mcs.anl.gov
[mailto:petsc-users-bounces@mcs.anl.gov] On Behalf Of Barry Smith
Sent: 01 December 2011 14:09
To: PETSc users list
Subject: Re: [petsc-users] MATBDIAG in > 3.1

On Dec 1, 2011, at 7:05 AM, Matthew Knepley wrote:

> You can get this effect using PCBJACOBI, setting the number of local
> blocks, and using -sub_pc_type lu -sub_ksp_type preonly.

Do not do this. It will give bad performance since it explicitly
pulls out each block with MatGetSubMatrices() and solves each small
matrix as a separate PETSc matrix.

Instead use the BAIJ matrix format with a block size of 7 and then
use PCPBJACOBI (note the P in the name for point block: each point
block is that little 7 by 7 block, which it solves efficiently using a
tuned kernel) with -ksp_type preonly. This should be very fast and does
not do any unnecessary scatters or communication.
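
For reference, a rough sketch of that setup (sequential case; the sizes,
preallocation, and the nblocks/n names are only illustrative -- in parallel
the same idea applies with MATMPIBAIJ):

  Mat      A;
  KSP      ksp;
  PC       pc;
  PetscInt nblocks = 100;      /* number of cells, illustrative */
  PetscInt n = 7*nblocks;

  MatCreate(PETSC_COMM_SELF,&A);
  MatSetSizes(A,n,n,n,n);
  MatSetType(A,MATSEQBAIJ);
  MatSeqBAIJSetPreallocation(A,7,1,PETSC_NULL);  /* block size 7, one block per block row */
  /* MatSetValuesBlocked(...) for each 7x7 diagonal block, then assemble */

  KSPCreate(PETSC_COMM_SELF,&ksp);
  KSPSetOperators(ksp,A,A,SAME_NONZERO_PATTERN);
  KSPSetType(ksp,KSPPREONLY);
  KSPGetPC(ksp,&pc);
  PCSetType(pc,PCPBJACOBI);
  KSPSetFromOptions(ksp);   /* equivalently: -ksp_type preonly -pc_type pbjacobi */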

   Barry

>
> Matt
>
> On Thu, Dec 1, 2011 at 4:28 AM, Parker, Andrew (UK Filton)
> <Andrew.Parker2@baesystems.com> wrote:
> Hi,
>
> I've been using PETSc for a conventional fully parallel implicit
> solver, currently in MPIBAIJ format. However, I'd like to understand
> the performance versus memory and speed trade-offs of just having a
> block-diagonal system. In this case I do not scatter to the off-diagonal
> blocks. Each block is 7x7 in size and fully dense. In my view this
> should be a point-implicit solution of the system. I am looking for the
> best way to solve this in PETSc: clearly, because of the lack of coupling
> between neighbouring cells when keeping only the block diagonal, it's
> actually a 7x7 dense linear solve, but to test my theory at present I'd
> prefer to just not scatter to the off-diagonals, and use:
>
> MATBDIAG with my current KSP solver and take it from there. BDIAG
> seems to have been removed as of 3.0:
>
> http://www.mcs.anl.gov/petsc/documentation/changes/300.html
>
> Can anybody help me with this? Your thoughts on the above, and what I
> should use to solve this reduced system, would be appreciated.
>
> Cheers,
>
> Andy
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener