[petsc-dev] Will DMPlexDistribute only ever distribute Cells?

Jacob Faibussowitsch jacob.fai at gmail.com
Thu Apr 16 21:21:52 CDT 2020


Yes, I saw that. I am assuming you are referring to DMPlexCreateNumbering_Plex?

Since you are already building the IS, this can be done in a more streamlined way. DMPlexCreateNumbering_Plex has to loop over all the local points again, then build a layout, and finally allgather. My implementation instead just gathers to root (a reduce rather than an allgather) and uses the cached min and max values already computed during IS construction.
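
Concretely, something like the following (an illustrative sketch, not the final code; globalIS stands for the stratum numbering IS built earlier, and ISGetMinMax simply returns the cached bounds):

  #include <petscis.h>

  /* Sketch: count the global points of a stratum on rank 0 only,
     using the cached IS bounds instead of a layout + allgather. */
  PetscErrorCode CountStratumOnRoot(IS globalIS, MPI_Comm comm, PetscInt *N)
  {
    PetscInt       min, max, localCount;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = ISGetMinMax(globalIS, &min, &max);CHKERRQ(ierr); /* cached, no extra loop over points */
    localCount = max + 1;        /* highest locally owned global ID, plus one (IDs are 0-based) */
    /* reduce to root instead of allgather; only rank 0 ends up with the count */
    ierr = MPI_Reduce(&localCount, N, 1, MPIU_INT, MPI_MAX, 0, comm);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }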

Best regards,

Jacob Faibussowitsch
(Jacob Fai - booss - oh - vitch)
Cell: (312) 694-3391

> On Apr 16, 2020, at 8:56 PM, Matthew Knepley <knepley at gmail.com> wrote:
> 
> On Thu, Apr 16, 2020 at 9:51 PM Jacob Faibussowitsch <jacob.fai at gmail.com> wrote:
>> This algorithm does not work for any stratum
> Shouldn’t it? The ISs generated are local-to-global conversions. Here is my current approach.
> 
> Each local IS will have local indices [0, 4), but the value at each index will be the global ID of the vertex, e.g., for 2 quads in a line (assuming, as usual, that the lowest rank owns shared points):
> 
> 0——1——2
> |  P0  |  P1  |
> 3——4——5
> 
> VERTICES:
> IS Object: 2 MPI processes
>   type: general
> [0] Number of indices in set 4
> [0] 0 0 
> [0] 1 1
> [0] 2 3
> [0] 3 4
> [1] Number of indices in set 4
> [1] 0 -1
> [1] 1 2
> [1] 2 -4
> [1] 3 5
> 
> So, for the total number of vertices, the algorithm is:
> 
> 1. Find the max local IS value
> 2. Add 1 to it
> 3. Gather the results to proc 0
> 4. Take the max of the gathered values on proc 0 (see the sketch below)
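> 
> In code, the four steps are just (an illustrative sketch; vertexIS and comm stand for the stratum's numbering IS and the mesh communicator):
> 
>   const PetscInt *idx;
>   PetscInt        i, n, localMax = PETSC_MIN_INT, total;
>   PetscErrorCode  ierr;
> 
>   ierr = ISGetLocalSize(vertexIS, &n);CHKERRQ(ierr);
>   ierr = ISGetIndices(vertexIS, &idx);CHKERRQ(ierr);
>   for (i = 0; i < n; ++i) localMax = PetscMax(localMax, idx[i]); /* step 1: ghost entries are negative, so they never win */
>   ierr = ISRestoreIndices(vertexIS, &idx);CHKERRQ(ierr);
>   localMax += 1;                                                 /* step 2: global IDs are 0-based */
>   /* steps 3+4: a max-reduce to proc 0 does the gather and the max in one call */
>   ierr = MPI_Reduce(&localMax, &total, 1, MPIU_INT, MPI_MAX, 0, comm);CHKERRQ(ierr);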
> 
> The code that calculates the L2G _already_ computed the global size; it just did not put it anywhere. If you are willing
> to compute the L2G, you have already done what you want.
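> 
> Something like this (a sketch; nloc stands for the number of locally owned points, which the L2G code already has in hand):
> 
>   PetscLayout layout;
>   PetscInt    N; /* the global size you are after */
> 
>   ierr = PetscLayoutCreate(comm, &layout);CHKERRQ(ierr);
>   ierr = PetscLayoutSetLocalSize(layout, nloc);CHKERRQ(ierr);
>   ierr = PetscLayoutSetUp(layout);CHKERRQ(ierr);       /* computes the global size from the local ones */
>   ierr = PetscLayoutGetSize(layout, &N);CHKERRQ(ierr);
>   ierr = PetscLayoutDestroy(&layout);CHKERRQ(ierr);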
> 
>   Matt
>  
> 
>> On Apr 16, 2020, at 8:37 PM, Matthew Knepley <knepley at gmail.com> wrote:
>> 
>> On Thu, Apr 16, 2020 at 9:28 PM Jacob Faibussowitsch <jacob.fai at gmail.com> wrote:
>>> What do you want to do? 
>> Count the global number of points per depth by allreducing the maximum positive value from each of the ISs listed below. This works as intended for everything but cells, since the global number of points = max(IS) + 1. For cells this breaks, since one rank reports 3 as its max, the next reports 6, etc.
>> 
>> I do not understand this at all. This algorithm does not work for any stratum. For example, suppose that we have a straight line of quad cells,
>> one per process. The vertices would be the local points [1, 5) on every process, but there would be 2*(P + 1) global vertices.
>> 
>>   Thanks,
>> 
>>      Matt
>>  
>>>  The system should be flexible enough to distribute whatever you want
>> What is the best way to check that a non-standard distribute has been done? 
>> 
>> 
>>> On Apr 16, 2020, at 8:17 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>> 
>>> On Thu, Apr 16, 2020 at 9:04 PM Jacob Faibussowitsch <jacob.fai at gmail.com> wrote:
>>> Hello All,
>>> 
>>> TL;DR: Is it possible now, or is it a planned feature, for plex to distribute over anything but points with height = 0? 
>>> 
>>> If I understand this correctly, when plex currently partitions a graph, points with height 0 are owned by exactly one process, while all other points can be co-owned by multiple procs. For example, for a 2D plex with 8 vertices, 12 edges, and 6 cells over 2 procs, the local-to-global ISs for all points on each process are shown below (negative values indicate ownership by another proc). The final IS, corresponding to cells, will always have positive values, since each proc is the sole owner of its cells.
>>> 
>>> What do you want to do? The system should be flexible enough to distribute whatever you want, but the current guarantee
>>> is that the cone of any point is always available. So if you decide to distribute something else, like faces, then the result ends up
>>> looking just like an overlapping mesh with some custom overlap. Moreover, the dual mesh only really makes sense for cells;
>>> for faces/edges you would need a hypergraph partitioner.
>>> 
>>>   Thanks,
>>> 
>>>      Matt
>>>  
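>>> (For reference, the vertex and cell listings below can be reproduced along these lines — a sketch, with dm standing for the distributed plex; the per-depth edge numbering comes from the internal DMPlexCreateNumbering_Plex, which has no public accessor:)
>>> 
>>>   IS verts, cells;
>>> 
>>>   ierr = DMPlexGetVertexNumbering(dm, &verts);CHKERRQ(ierr); /* borrowed reference, do not destroy */
>>>   ierr = ISView(verts, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
>>>   ierr = DMPlexGetCellNumbering(dm, &cells);CHKERRQ(ierr);
>>>   ierr = ISView(cells, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
>>> 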
>>> VERTICES
>>> IS Object: 2 MPI processes
>>>   type: general
>>> [0] Number of indices in set 7
>>> [0] 0 -2
>>> [0] 1 0
>>> [0] 2 -3
>>> [0] 3 -4
>>> [0] 4 -5
>>> [0] 5 -6
>>> [0] 6 -8
>>> [1] Number of indices in set 7
>>> [1] 0 1
>>> [1] 1 2
>>> [1] 2 3
>>> [1] 3 4
>>> [1] 4 5
>>> [1] 5 6
>>> [1] 6 7
>>> 
>>> EDGES
>>> IS Object: 2 MPI processes
>>>   type: general
>>> [0] Number of indices in set 9
>>> [0] 0 0
>>> [0] 1 1
>>> [0] 2 -4
>>> [0] 3 -5
>>> [0] 4 -6
>>> [0] 5 2
>>> [0] 6 -7
>>> [0] 7 -9
>>> [0] 8 -11
>>> [1] Number of indices in set 9
>>> [1] 0 3
>>> [1] 1 4
>>> [1] 2 5
>>> [1] 3 6
>>> [1] 4 7
>>> [1] 5 8
>>> [1] 6 9
>>> [1] 7 10
>>> [1] 8 11
>>> 
>>> CELLS
>>> IS Object: 2 MPI processes
>>>   type: general
>>> [0] Number of indices in set 3
>>> [0] 0 0
>>> [0] 1 1
>>> [0] 2 2
>>> [1] Number of indices in set 3
>>> [1] 0 3
>>> [1] 1 4
>>> [1] 2 5 
>>> 
>>> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/