[petsc-users] Parallelize the Schur complement

Jed Brown jed at jedbrown.org
Sat Mar 21 09:47:09 CDT 2015


"Sun, Hui" <hus003 at ucsd.edu> writes:

> Hi Jed, thank you for your answers. However, I still don't think I understand the answer. Maybe I should ask in a clearer way.
>
> My A00 is of size n^3 times n^3, while my A11 is of size m times m. A00 has a DM, but A11 does not.
>
> Let's suppose I have 16 cores, and I use all 16 cores to create the DM for A00. My A01 and A10' are parallelized over 16 cores in their rows, but they are sequential in their columns, and A11 is also sequential.

Why would you make A11 sequential?

> I want to make A11 parallelized. So maybe I can try the following: I
> use 8 cores to create the DM for A00, and when I compute
> A10*A00^(-1)*A01*v, I need the same 8 cores for the rows of A01 and
> A10' so that the matrix multiplications can be carried out. But I
> also want to parallelize A11, so maybe I use 2 cores for the rows of
> A11 and 2 cores for its columns, and hence I should also have 2 cores
> for the columns of A01 and A10'. Then for matrix A00 I use 8 cores,
> for A10 and A01 I use 8 times 2, which is 16 cores, and for A11 I use
> 4 cores. However, this doesn't seem right: since there are 16 cores
> for A01, all 16 of them need access to part of A00 because of the
> operation A00^(-1)*A01, yet I only use 8 cores for the DM. That means
> I would need two copies of A00, which doesn't seem reasonable.

You're over-thinking this.  Distribute both over all cores (PETSc
matrices use row distributions unless you transpose, so just work with
the row distributions).  Set the row distribution of A01 equal to that
of A00 and the column distribution of A01 equal to the row
distribution of A11.
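
As a minimal sketch of that layout choice (assuming A00 and A11 are
already assembled on PETSC_COMM_WORLD; the helper name and creation
options below are illustrative, not something from this thread), the
off-diagonal blocks can be created like this:

#include <petscmat.h>

/* Give A01 the row layout of A00 and the column layout of A11, and
 * A10 the transposed layout, so MatMult composes with vectors
 * distributed like the diagonal blocks.                             */
PetscErrorCode CreateCouplingBlocks(Mat A00, Mat A11, Mat *A01, Mat *A10)
{
  PetscErrorCode ierr;
  PetscInt       n00, n11;  /* locally owned row counts of A00 and A11 */

  PetscFunctionBeginUser;
  ierr = MatGetLocalSize(A00, &n00, NULL);CHKERRQ(ierr);
  ierr = MatGetLocalSize(A11, &n11, NULL);CHKERRQ(ierr);

  /* A01: local rows follow A00, local columns follow A11 */
  ierr = MatCreate(PETSC_COMM_WORLD, A01);CHKERRQ(ierr);
  ierr = MatSetSizes(*A01, n00, n11, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetFromOptions(*A01);CHKERRQ(ierr);
  ierr = MatSetUp(*A01);CHKERRQ(ierr);

  /* A10: local rows follow A11, local columns follow A00 */
  ierr = MatCreate(PETSC_COMM_WORLD, A10);CHKERRQ(ierr);
  ierr = MatSetSizes(*A10, n11, n00, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetFromOptions(*A10);CHKERRQ(ierr);
  ierr = MatSetUp(*A10);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With the layouts matched this way, the product A10*A00^(-1)*A01*v acts
on vectors distributed like A00 and A11, and you can hand the blocks to
MatCreateSchurComplement() to get the action of A11 - A10*A00^(-1)*A01
without ever forming it explicitly.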

