[petsc-users] Mesh partitioning and MPI calls

Tabrez Ali stali at geology.wisc.edu
Thu Aug 25 16:57:08 CDT 2011


I see the same behavior even if I stop the code right after my stiffness 
matrix is assembled.

The only MPI communication before that point is a broadcast of the
epart/npart integer arrays (an MPI_Bcast from proc 0, after the
partitioning routine is called).
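
A minimal sketch of that step, assuming epart/npart are plain int arrays
of length ne (elements) and nn (nodes) that Metis fills on rank 0:

    /* rank 0 runs Metis; everyone else receives the partition arrays */
    if (rank == 0) {
      /* ... call Metis to fill epart[0..ne-1] and npart[0..nn-1] ... */
    }
    MPI_Bcast(epart, ne, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Bcast(npart, nn, MPI_INT, 0, MPI_COMM_WORLD);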

Tabrez

On 08/25/2011 04:14 PM, Dominik Szczerba wrote:
> Your expectation seems correct.
>
> How do you organize your points/dofs? It is very important for communication.
>
> Can you inspect which MPI messages are being counted? Communication
> during matrix assembly may well be better, but somewhere else in your
> code you may still be assuming the original cell ordering, which would
> add to the total communication cost.
>
> Dominik
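
One way to see which phase those messages come from is to wrap the
assembly and the solve in separate PETSc logging stages, so -log_summary
reports the MPI message counts per stage rather than one lump sum. A
sketch (the stage names and the wrapped calls are illustrative):

    PetscLogStage assembly, solve;
    PetscLogStageRegister("Assembly", &assembly);
    PetscLogStageRegister("Solve", &solve);

    PetscLogStagePush(assembly);
    /* ... MatSetValues(), MatAssemblyBegin/End() ... */
    PetscLogStagePop();

    PetscLogStagePush(solve);
    /* ... KSPSolve() ... */
    PetscLogStagePop();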
>
> On Thu, Aug 25, 2011 at 11:15 PM, Tabrez Ali <stali at geology.wisc.edu> wrote:
>> Hello
>>
>> I have an unstructured FE mesh which I am partitioning using Metis.
>>
>> In the first case I use only the element partitioning info and discard
>> the nodal partitioning info, i.e., the original node ordering is the
>> same as PETSc's global ordering. In the second case I also use the
>> nodal partitioning info and the nodes are redistributed accordingly.
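
One way to express that second-case renumbering (not necessarily how it
is done here) is with a PETSc AO that maps the original application node
numbers owned by each rank onto a contiguous PETSc range. A sketch, where
nlocal and app_nodes[] are assumed to come from the npart data:

    AO       ao;
    PetscInt rstart, i, *petsc_nodes;

    /* PETSc numbers each rank's nlocal nodes contiguously */
    MPI_Scan(&nlocal, &rstart, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);
    rstart -= nlocal;

    PetscMalloc(nlocal*sizeof(PetscInt), &petsc_nodes);
    for (i = 0; i < nlocal; i++) petsc_nodes[i] = rstart + i;

    AOCreateBasic(PETSC_COMM_WORLD, nlocal, app_nodes, petsc_nodes, &ao);
    /* AOApplicationToPetsc() can then translate the element connectivity
       before MatSetValues() is called */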
>>
>> I would expect the total number of MPI messages (at the end of the
>> solve) to be lower in the second scenario than in the first. However, I
>> see that the opposite is true. See the plot at
>> http://stali.freeshell.org/mpi.png
>>
>> The number on the y axis is the last column of the "MPI messages:" field
>> from the -log_summary output.
>>
>> Any ideas as to why this is happening? Does relying on the total number
>> of MPI messages as a performance measure even make sense? Please excuse
>> my ignorance on the subject.
>>
>> Alternatively, what is a good way to measure how good the Metis
>> partitioning is?
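
Not from the thread, but one rough measure is the number of interface
nodes, i.e. nodes touched by elements assigned to more than one
partition, since those are the ones that generate halo exchange during
assembly and MatMult (Metis also reports the edge cut directly through
its edgecut/objval output argument). A sketch, where conn, nen, ne, nn
and epart are assumed names for the connectivity and partition data:

    #include <stdlib.h>

    /* count nodes shared by elements from more than one partition;
       conn is the ne*nen element-to-node connectivity, epart is the
       Metis element partition array */
    static int count_interface_nodes(int ne, int nn, int nen,
                                     const int *conn, const int *epart)
    {
      int *owner = malloc(nn * sizeof(int));
      int  count = 0;
      for (int i = 0; i < nn; i++) owner[i] = -1;
      for (int e = 0; e < ne; e++) {
        for (int j = 0; j < nen; j++) {
          int v = conn[e*nen + j];
          if (owner[v] == -1) owner[v] = epart[e];
          else if (owner[v] != epart[e] && owner[v] != -2) {
            owner[v] = -2;   /* node shared by >1 partition */
            count++;
          }
        }
      }
      free(owner);
      return count;
    }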
>>
>> Thanks in advance
>>
>> Tabrez
>>
>>


