[petsc-users] Assembling Restriction and Prolongation Operators for Parallel Calculations

Matthew Knepley knepley at gmail.com
Mon Apr 6 16:12:44 CDT 2015


On Mon, Apr 6, 2015 at 3:59 PM, Eric Mittelstaedt <emittelstaedt at uidaho.edu>
wrote:

>  Hi Matt
>
> On 4/6/2015 6:49 AM, Matthew Knepley wrote:
>
>  On Mon, Apr 6, 2015 at 12:16 AM, Eric Mittelstaedt <
> emittelstaedt at uidaho.edu> wrote:
>
>> Hello all,
>>    We are working on implementing restriction and prolongation operators
>> for algebraic multigrid in our finite difference (staggered grid) code that
>> solves the incompressible Stokes problem.  So far, the restriction and
>> prolongation operators we have assembled in serial can successfully solve
>> on a simple 2D grid with multiple-level v-cycles.  However, we are running
>> into problems when we attempt to progress to parallel solves.  One of the
>> primary errors we run into is an incompatibility in matrix types for
>> MatMatMatMult and similar calls:
>>
>> [0]PETSC ERROR: MatMatMatMult() line 9173 in /opt/petsc/src/mat/interface/matrix.c
>> MatMatMatMult requires A, seqaij, to be compatible with B, mpiaij, C, seqaij
>> [1]PETSC ERROR: MatMatMatMult() line 9173 in /opt/petsc/src/mat/interface/matrix.c
>> MatMatMatMult requires A, seqaij, to be compatible with B, mpiaij, C, seqaij
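>>
>> For reference, this comes from forming the Galerkin coarse operator as a
>> triple product, roughly like the sketch below (variable names illustrative,
>> not copied from our code):
>>
>>   Mat Ac;  /* coarse-level operator */
>>   MatMatMatMult(R, A, P, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &Ac);
>>
>> where R and P are our SEQAIJ operators and A is the MPIAIJ system matrix.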
>>
>>
>> The root cause of this is likely the matrix types we chose for
>> restriction and prolongation. To ease calculation of our restriction (R)
>> and prolongation (P) matrices, we have created them as sparse, sequential
>> matrices (SEQAIJ) that are identical on all procs.  Will this matrix type
>> actually work for R and P operators in parallel?
>>
>
>  You need to create them as parallel matrices, MPIAIJ for example, that
> have the same layout as the system matrix.
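>
> A minimal sketch of that (Af and the preallocation numbers are illustrative
> assumptions, not taken from your code): a restriction R maps fine vectors to
> coarse vectors, so its local row count must match the coarse level's local
> dof count and its local column count the fine operator's local rows.
>
>   Mat      R;
>   PetscInt mf, nf, mc;
>   MatGetLocalSize(Af, &mf, &nf);  /* layout of the fine-level system matrix */
>   /* mc = number of coarse dofs owned by this rank, e.g. from the coarse grid */
>   MatCreateAIJ(PETSC_COMM_WORLD,
>                mc, mf,                         /* local rows = coarse, local cols = fine */
>                PETSC_DETERMINE, PETSC_DETERMINE,
>                9, NULL, 3, NULL,               /* rough preallocation guesses */
>                &R);
>   /* ... MatSetValues() with global indices, then assemble ... */
>   MatAssemblyBegin(R, MAT_FINAL_ASSEMBLY);
>   MatAssemblyEnd(R, MAT_FINAL_ASSEMBLY);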
>
>
> Okay, this is what I thought was the problem.  It is really a matter of
> making sure we have the correct information on each processor.
>
>
>
>> It seems that setting this matrix type correctly is the heart of our
>> problem.  Originally, we chose SEQAIJ because we want to let PETSc deal
>> with partitioning of the grid on multiple processors.  If we do this,
>> however, we don't know a priori whether PETSc will place a processor
>> boundary on an even or odd nodal point.  This makes determining the
>> processor bounds on the subsequent coarse levels difficult.  One method for
>> resolving this may be to handle the partitioning ourselves, but it would be
>> nice to avoid that if possible.  Is there a better way to set up these
>> operators?
>>
>
>  You use the same partitioning as the system matrices on the two levels.
> Does this make sense?
>
>    It does, but I am unsure how to get the partitioning of the coarser
> grids since we don't deal with them directly.  Is it safe to assume that
> the partitioning on the coarser grid will always overlap with the same part
> of the domain as the fine level? In other words, the current processor will
> receive the coarse grid that is formed by the portion of the restriction
> matrix that it owns, right?
>

You can get the coarse grid by calling DMCoarsen(), just as the AMG will
do, and then ask the coverage questions directly of the DMDA. I don't
remember exactly what we guarantee with regard to coverage of the fine
grid by the coarse. Maybe Jed does, since he just rewrote this for the
HPGMG benchmark.
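
A minimal sketch of that flow, assuming the fine level lives on a DMDA
called da (variable names are illustrative):

  DM       dac;
  Mat      P;
  PetscInt xs, ys, xm, ym;

  /* Coarsen on the same communicator, as PCMG does internally */
  DMCoarsen(da, MPI_COMM_NULL, &dac);

  /* Let PETSc build the interpolation between the two DMDAs; its row and
     column layouts then automatically match the two levels */
  DMCreateInterpolation(dac, da, &P, NULL);

  /* Ask the coarse DMDA which part of the domain this rank owns */
  DMDAGetCorners(dac, &xs, &ys, NULL, &xm, &ym, NULL);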

  Thanks,

     Matt


> Thanks!
> Eric
>
>
>     Thanks,
>
>       Matt
>
>
>
>> Thank you,
>> Eric and Arthur
>>
>>
>
>
>
>
> --
>
> *************************************
> Eric Mittelstaedt, Ph.D.
> Assistant Professor
> Dept. of Geological Sciences
> University of Idaho
> email: emittelstaedt at uidaho.edu
> phone: (208) 885 2045
> http://scholar.google.com/citations?user=DvRzEqkAAAAJ
> *************************************
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener