[petsc-users] KSP: domain decomposition and distribution

mary sweat mary.sweat78 at yahoo.it
Sat Jan 25 18:12:45 CST 2014


To summarize: I have a parabolic differential equation; through a finite difference scheme I obtain a linear system whose coefficient matrix is a Laplace matrix, i.e. sparse, very large, and structured.
I then solve this system with GMRES + Jacobi.
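For reference, a rough sketch of the solve I mean (petsc-3.4 style, assuming the matrix A and the vectors x and b are already created and assembled; error checking omitted):

    KSP ksp;
    PC  pc;
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN); /* later PETSc versions drop the last argument */
    KSPSetType(ksp, KSPGMRES);   /* Krylov method: GMRES */
    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCJACOBI);     /* pointwise Jacobi preconditioner */
    KSPSetFromOptions(ksp);      /* allow overriding everything from the command line */
    KSPSolve(ksp, b, x);
    KSPDestroy(&ksp);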
I do not care about the number of processes or the size of the matrix portions assigned to each process; of course the matrix is partitioned into blocks assigned to the processes.
The problem is that I need to know, at least theoretically, how the matrix is split between the processes.
Essentially, how is the domain divided between the processes, and when do they communicate to synchronize/share/exchange partial results?
I also need to know how all of this happens on a GPU.

thank you

On Wednesday 8 January 2014 at 13:46, Dave May <dave.mayhem23 at gmail.com> wrote:
 
You asked how the problem was split between processes. In your case, this is defined by the matrix.

 The default solver in PETSc is GMRES preconditioned with block Jacobi and ILU(0) applied on each block. The "block" I refer to is the piece of the matrix locally owned by each processor (which is thus defined by the matrix layout/partitioning).
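 A rough sketch of selecting that default combination explicitly (assuming ksp is a KSP whose operator has already been set):

    PC pc;
    KSPSetType(ksp, KSPGMRES);   /* Krylov method: GMRES */
    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCBJACOBI);    /* one block per process */
    KSPSetFromOptions(ksp);      /* sub-solver on each block defaults to ILU(0) */

 or, equivalently, at runtime:

    -ksp_type gmres -pc_type bjacobi -sub_ksp_type preonly -sub_pc_type ilu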

On Wednesday, 8 January 2014, mary sweat  wrote:

I do not explicitly set the sizes, because I use PETSC_DECIDE; instead I specify the number of processes. What I really care about is how KSPSolve solves the system in parallel with multiple processes.
>
>
>
>On Wednesday 8 January 2014 at 12:34, Dave May <dave.mayhem23 at gmail.com> wrote:
> 
>Please check out the manual page for MatSetSizes()
>  http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatSetSizes.html
>
>Basically you have two choices:
>
>1/ Define the global size of the matrix and use PETSC_DECIDE for the local sizes.
>In this case, PETSc will define the local row size in a manner such that there are approximately the same number of rows on each process.
>
>2/ Define the local sizes yourself and use PETSC_DETERMINE for the global size. 
>Then you have full control over the parallel layout.
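>
>As a rough sketch (assuming A was created with MatCreate on PETSC_COMM_WORLD, and m, n / M, N are the local / global row and column sizes):
>
>    /* Choice 1: give the global sizes, let PETSc choose the local row distribution */
>    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, M, N);
>
>    /* Choice 2: give the local sizes on each process, let PETSc compute the global sizes */
>    MatSetSizes(A, m, n, PETSC_DETERMINE, PETSC_DETERMINE);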
>
>The following functions described by these pages
>
>  http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetSize.html
>  http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetLocalSize.html
>  http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetOwnershipRanges.html
>  http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetOwnershipRangesColumn.html
>
>might also be useful for you in double checking what the matrix decomposition looks like
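>
>For example, a small fragment (assuming A is already assembled) that prints what each process owns; MatGetOwnershipRange() is the single-process variant of the ranges functions above:
>
>    PetscInt    M, N, m, n, rstart, rend;
>    PetscMPIInt rank;
>    MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
>    MatGetSize(A, &M, &N);                   /* global dimensions */
>    MatGetLocalSize(A, &m, &n);              /* dimensions of the locally owned piece */
>    MatGetOwnershipRange(A, &rstart, &rend); /* first and one-past-last locally owned row */
>    printf("[rank %d] owns rows %d..%d (%d of %d)\n",
>           (int)rank, (int)rstart, (int)(rend - 1), (int)m, (int)M);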
>
>Cheers,
>
>  Dave
>
>On 8 January 2014 12:26, mary sweat <mary.sweat78 at yahoo.it> wrote:
>
>My target is the following: I have a huge linear system with a sparse, huge matrix, not arising from a PDE. How is the system split between processes? Does the suggested book contain the answer?
>>Thanks again
>>
>>
>>
>>On Tuesday 7 January 2014 at 17:34, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>> 
>>mary sweat <mary.sweat78 at yahoo.it> writes:
>>
>>
>>> Hi all, I need to know how KSP separates and distributes the domain
>>> between processes, and how the processes share and communicate
>>> intermediate results. Is there any good documentation about this?
>>
>>The communication is in Mat and Vec functions.  You can see it
>>summarized in -log_summary.  For the underlying theory, see Barry's
>>book.
>>
>>http://www.mcs.anl.gov/~bsmith/ddbook.html
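>>
>>For instance (with myapp standing in for your executable):
>>
>>    mpiexec -n 4 ./myapp -ksp_type gmres -pc_type bjacobi -log_summary
>>
>>which prints, for each logged event (MatMult, VecDot, ...), the time spent and the number and lengths of the messages exchanged.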
>>