[petsc-users] Mesh partitioning and MPI calls

Dominik Szczerba dominik at itis.ethz.ch
Thu Aug 25 16:14:25 CDT 2011


Your expectation seems correct.

How do you organize your points/dofs? It is very important for communication.
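
For example, in recent PETSc versions an AO (application ordering) object
can translate your application node numbers into PETSc's global numbering
once you have decided which rank owns which node. A minimal sketch, only
to illustrate the idea; the function and array names are placeholders,
not taken from your code:

#include <petscao.h>

/* app_nodes:   the nlocal application (mesh-file) node numbers owned here
   petsc_nodes: the contiguous PETSc global numbers assigned to them
   conn:        local element connectivity, rewritten in place            */
PetscErrorCode renumber_connectivity(MPI_Comm comm, PetscInt nlocal,
                                     const PetscInt app_nodes[],
                                     const PetscInt petsc_nodes[],
                                     PetscInt nconn, PetscInt conn[])
{
  AO             ao;
  PetscErrorCode ierr;

  ierr = AOCreateBasic(comm, nlocal, app_nodes, petsc_nodes, &ao);CHKERRQ(ierr);
  ierr = AOApplicationToPetsc(ao, nconn, conn);CHKERRQ(ierr);
  ierr = AODestroy(&ao);CHKERRQ(ierr);  /* AODestroy(ao) in older releases */
  return 0;
}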

Can you inspect which MPI messages are actually being counted?
Communication during matrix assembly may well be cheaper in the second
case, but somewhere else in your code you may still be assuming the
original cell ordering, which would add to the total communication cost.
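
One way to find out, as a rough sketch: wrap the assembly and the solve
in separate logging stages, so that -log_summary reports the MPI message
counts per stage instead of one lumped total. The stage names and the
commented-out calls below are only placeholders for your own code:

#include <petscksp.h>

int main(int argc, char **argv)
{
  PetscLogStage  assembly, solve;
  PetscErrorCode ierr;

  PetscInitialize(&argc, &argv, NULL, NULL);

  ierr = PetscLogStageRegister("Assembly", &assembly);CHKERRQ(ierr);
  ierr = PetscLogStageRegister("Solve", &solve);CHKERRQ(ierr);

  ierr = PetscLogStagePush(assembly);CHKERRQ(ierr);
  /* ... MatSetValues()/VecSetValues() loops, MatAssemblyBegin/End ... */
  ierr = PetscLogStagePop();CHKERRQ(ierr);

  ierr = PetscLogStagePush(solve);CHKERRQ(ierr);
  /* ... KSPSolve() ... */
  ierr = PetscLogStagePop();CHKERRQ(ierr);

  PetscFinalize();
  return 0;
}

That should show whether the extra messages come from the assembly, the
Krylov solve, or something in between.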

Dominik

On Thu, Aug 25, 2011 at 11:15 PM, Tabrez Ali <stali at geology.wisc.edu> wrote:
> Hello
>
> I have an unstructured FE mesh which I am partitioning using Metis.
>
> In the first case I only use the element partitioning info and discard the
> nodal partitioning info, i.e., the original ordering is the same as PETSc's
> global ordering. In the second case I do use the nodal partitioning info and
> the nodes are distributed accordingly.
>
> I would expect that in the 2nd scenario the total number of MPI messages (at
> the end of the solve) would be lower than in the 1st. However, I see that the
> opposite is true. See the plot at http://stali.freeshell.org/mpi.png
>
> The number on the y-axis is the last column of the "MPI messages:" field
> from the -log_summary output.
>
> Any ideas as to why this is happening? Does relying on the total number of
> MPI messages as a performance measure even make sense? Please excuse my
> ignorance on the subject.
>
> Alternatively, what is a good way to measure how good the Metis partitioning
> is?
>
> Thanks in advance
>
> Tabrez
>
>
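
On the last question: one check independent of PETSc is to count, on each
rank, how many nodes its local elements reference that it does not own;
few and evenly distributed ghost nodes usually means a good partition
(the edge cut that Metis itself reports is a similar proxy). A minimal
sketch, assuming 4-node elements and that the local connectivity and the
node-ownership array are available on every rank; all names below are
placeholders:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* conn:  4*nelem global node numbers of the local (tet) elements
   npart: owning rank of every global node (e.g. the Metis node partition) */
void count_ghost_nodes(MPI_Comm comm, int rank, int nelem, const int *conn,
                       int nnodes_global, const int *npart)
{
  char *seen = calloc((size_t)nnodes_global, 1);
  int   i, nghost = 0, nghost_sum, nghost_max;

  for (i = 0; i < 4*nelem; i++) {
    int n = conn[i];
    if (npart[n] != rank && !seen[n]) { seen[n] = 1; nghost++; }
  }
  MPI_Allreduce(&nghost, &nghost_sum, 1, MPI_INT, MPI_SUM, comm);
  MPI_Allreduce(&nghost, &nghost_max, 1, MPI_INT, MPI_MAX, comm);
  if (!rank) printf("ghost nodes: total %d, max on one rank %d\n",
                    nghost_sum, nghost_max);
  free(seen);
}

Comparing that count for your two cases (using the node ownership each
case actually ends up with) should tell you more than the single lumped
MPI-message total does.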

