[petsc-users] Assembling Restriction and Prolongation Operators for Parallel Calculations

Matthew Knepley knepley at gmail.com
Mon Apr 6 08:49:05 CDT 2015


On Mon, Apr 6, 2015 at 12:16 AM, Eric Mittelstaedt <emittelstaedt at uidaho.edu
> wrote:

> Hello all,
>    We are working on implementing restriction and prolongation operators
> for algebraic multigrid in our finite difference (staggered grid) code that
> solves the incompressible Stokes problem.  So far, the restriction and
> prolongation operators we have assembled in serial can successfully solve
> on a simple 2D grid with multiple-level v-cycles.  However, we are running
> into problems when we attempt to progress to parallel solves.  One of the
> primary errors we run into is an incompatibility in matrix types for
> MatMatMatMult and similar calls:
>
> [0]PETSC ERROR: [1]PETSC ERROR: MatMatMatMult() line 9173 in
> /opt/petsc/src/mat/interface/matrix.c MatMatMatMult requires A, seqaij,
> to be compatible with B, mpiaij, C, seqaij
> MatMatMatMult() line 9173 in /opt/petsc/src/mat/interface/matrix.c
> MatMatMatMult requires A, seqaij, to be compatible with B, mpiaij, C, seqaij
>
>
> The root cause of this is likely the matrix types we chose for restriction
> and prolongation. To ease calculation of our restriction (R) and
> prolongation (P) matrices, we have created them as sparse, sequential
> matrices (SEQAIJ) that are identical on all procs.  Will this matrix type
> actually work for R and P operators in parallel?
>

You need to create them as parallel matrices, MPIAIJ for example, that have
the same layout as the system matrix.
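A hedged sketch of what this looks like (untested fragment; `A` is your assembled fine-level system matrix, `Ncoarse` a placeholder for your global number of coarse points, and the `MatSetValues` loop stands in for your own stencil code):

```c
#include <petscmat.h>

/* Sketch: build P with the same row layout as the fine-level system
 * matrix A, let PETSc decide the coarse (column) layout, then form the
 * Galerkin coarse operator in one call. */
Mat      P, Acoarse;
PetscInt rstart, rend, mlocal;

MatGetOwnershipRange(A, &rstart, &rend);     /* A's row partitioning    */
mlocal = rend - rstart;

MatCreate(PETSC_COMM_WORLD, &P);
MatSetSizes(P, mlocal, PETSC_DECIDE,         /* local rows match A      */
            PETSC_DETERMINE, Ncoarse);       /* columns: coarse points  */
MatSetType(P, MATAIJ);                       /* seqaij or mpiaij, as
                                                the communicator dictates */
MatSetUp(P);

/* ... MatSetValues(P, ...) for the locally owned rows [rstart, rend) ... */
MatAssemblyBegin(P, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(P, MAT_FINAL_ASSEMBLY);

/* Galerkin triple product P^T A P: with R = P^T this single call
 * replaces MatMatMatMult and sidesteps the seqaij/mpiaij mismatch. */
MatPtAP(A, P, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &Acoarse);
```

Because every rank only sets entries in its own rows of P, the layouts of A, P, and Acoarse stay mutually compatible by construction.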


> It seems that setting this matrix type correctly is the heart of our
> problem.  Originally, we chose SEQAIJ because we want to let PETSc deal
> with partitioning of the grid on multiple processors.  If we do this,
> however, we don't know a priori whether PETSc will place a processor
> boundary on an even or odd nodal point.  This makes determining the
> processor bounds on the subsequent coarse levels difficult.  One method for
> resolving this may be to handle the partitioning ourselves, but it would be
> nice to avoid this if possible. Is there a better way to set up these
> operators?
>

You use the same partitioning as the system matrices on the two levels.
Does this make sense?

  Thanks,

     Matt



> Thank you,
> Eric and Arthur
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener