[petsc-dev] do all MIS algorithms require a symmetric structure?

Mark Adams mfadams at lbl.gov
Sun Sep 18 17:53:17 CDT 2022


On Sun, Sep 18, 2022 at 5:11 PM Barry Smith <bsmith at petsc.dev> wrote:

>
>    Does the MIS you use, use the numerical values in the matrix? Looking
> at the code it looks like it does not, while HEM does seem to use them.
>

Yes, the numerical values are used in the filtering.
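
For illustration, the filtering step might look like the sketch below. This is not the PETSc code: `filter_graph` is a hypothetical name, and the exact scaling is an assumption (off-diagonal entries made positive and scaled by the diagonal, here as |a_ij| / sqrt(|a_ii| |a_jj|)):

```python
import math

def filter_graph(rows, threshold):
    """Drop weak off-diagonal entries from a dict-of-dicts matrix.

    Values are made positive and scaled by the diagonal
    (|a_ij| / sqrt(|a_ii| * |a_jj|) -- an assumed scaling); entries
    below `threshold` are removed, and the diagonal is always kept.
    """
    diag = {i: abs(r[i]) for i, r in rows.items()}
    return {i: {j: v for j, v in r.items()
                if i == j
                or abs(v) / math.sqrt(diag[i] * diag[j]) >= threshold}
            for i, r in rows.items()}
```

With a threshold of 0.25, for example, an entry of 0.1 between rows with diagonals of 4.0 scales to 0.025 and is dropped, while an entry of 2.0 scales to 0.5 and is kept.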


>
>    But MatCreateGraph() seems to always get the numerical values, makes
> them positive, and then scales them by the diagonal, even when no filtering
> will be done. Can this be optimized out when there is no filtering and no
> HEM? (For scalar matrices it looks like even the MatDuplicate() is not
> needed, since it just processes a copy of the original matrix.)
>

Yes, it used to do that. There was code like
  if (Gmat1 != Gmat2) Destroy(Gmat1);
etc. It looks like that was lost in the refactoring.
Maybe at gamg.c:634 check for bs == 1 and no filtering, and set Gmat
= Aarr[level];
then check for this when Gmat is destroyed later.
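
The ownership pattern being suggested can be sketched as follows; this is a minimal illustration, not the PETSc code, and the names `create_graph`, `destroy_graph`, and the callbacks are hypothetical:

```python
def create_graph(A, bs, do_filter, build):
    """Return (graph, owned): reuse A itself when no copy is needed.

    `build` is a callback that makes a filtered/symmetrized copy.
    `owned` tells the caller whether the graph must be destroyed
    separately from A.
    """
    if bs == 1 and not do_filter:
        return A, False          # piggyback on A; no duplicate made
    return build(A), True        # a real copy; caller must free it

def destroy_graph(graph, owned, destroy):
    # mirrors the old `if (Gmat1 != Gmat2) Destroy(Gmat1)` guard
    if owned:
        destroy(graph)
```

The point is only that the duplicate (and its later destruction) is skipped entirely in the bs == 1, no-filtering case.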


>
>    I think the current parallel symmetrization could be optimized,
> especially when one does not need numerical values. It probably can be done
> faster than MatTranspose() followed by MatMAXPY().
>

OK. It seems to me that the values just piggyback on the column indices,
but maybe it could be done faster.


>
>    The MIS also seems to slow down with multiple MPI ranks.
>

I have never profiled this code carefully. It just does a nearest-neighbor
exchange a few times, once for each iteration of the MIS.
That code and data structure are very old.
It could well be improved.
I have noticed that for the aggressive coarsening, with the square
graph-like thing, 80% of the time is in the PtAP that I use to create an
intermediate level matrix.


>
>    These combine to make the total time to solution much slower in
> parallel than sequential for GAMG, while all the rest of GAMG gets good
> speedup.
>

You could compare with hypre to get an idea of the relative speed and
scaling.
The code uses a Fortran-style linked list to collect the selected vertices
and the vertices that it "deleted".
It then sends the new state to all the processors that see the selected and
deleted vertices.
This code is 25 years old.
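
The selected/"deleted" bookkeeping can be sketched serially as below; the parallel code would exchange these states with neighbor ranks once per iteration. This is an illustrative sketch, not the PETSc implementation, and it also shows the one-way-edge behavior discussed earlier: with an asymmetric adjacency, both endpoints of an edge can end up selected.

```python
def greedy_mis(adj):
    """Greedy serial MIS with SELECTED/DELETED vertex states.

    Visiting vertices in order: an untouched vertex becomes
    SELECTED and marks its neighbors DELETED. The parallel
    version would exchange these states with neighbor ranks
    once per iteration instead.
    """
    SELECTED, DELETED = 1, 2
    state = {}
    for v in sorted(adj):
        if v not in state:
            state[v] = SELECTED
            for u in adj[v]:
                state.setdefault(u, DELETED)
    return {v for v, s in state.items() if s == SELECTED}
```

On a symmetric graph such as {0: {1}, 1: {0}, 2: set()} this selects {0, 2}; with a one-way edge, {0: set(), 1: {0}}, both vertices get selected unless the root of the edge happens to be visited first, which is why the parallel code wants a structurally symmetric graph.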


>    Is there a parallel coarse grid point selector that would be faster and
> not have the parallel bottlenecks that could be coded up relatively easily?
>

I got a best student paper award at Copper '98:
https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.37.3040&rank=8
This paper discusses the parallel algorithm.
The algorithm is fine, but I could see that the implementation could be bad.
It's not that much code.

Mark


>
>  Barry
>
>
> On Sep 18, 2022, at 4:21 PM, Mark Adams <mfadams at lbl.gov> wrote:
>
>
>
> On Sun, Sep 18, 2022 at 4:02 PM Barry Smith <bsmith at petsc.dev> wrote:
>
>>
>>   Mark,
>>
>>    Do all MIS algorithms in PETSc require a symmetric graph structure?
>> And parallel ones can hang if not structurally symmetric?
>>
>
> Yes.
>
>
>>
>>    When used sequentially I guess it never hangs but it may not produce a
>> "correct" MIS if the matrix structure is not symmetric?
>
>
> It is fine in serial, but it is not necessarily an MIS of the
> symmetrized graph.
> If there is a one-way edge between two vertices and the order of the
> greedy MIS process picks the root of the edge, it is an MIS of the
> symmetrized graph; otherwise both vertices could get selected.
>
>
>> But like the MIS is fine for GAMG in this circumstance?
>>
>
> It will be fine for GAMG. The MIS is just a heuristic.
>
>
>>
>>   Barry
>>
>>
>