[petsc-users] Get vertex index of each cell in DMPlex after distribution
Danyang Su
danyang.su at gmail.com
Fri Apr 27 10:29:22 CDT 2018
On 2018-04-27 04:11 AM, Matthew Knepley wrote:
> On Fri, Apr 27, 2018 at 2:09 AM, Danyang Su <danyang.su at gmail.com> wrote:
>
> Hi Matt,
>
> Sorry if this is a stupid question.
>
> In the previous version of the code for unstructured grids, I create
> labels to mark the original node/cell indices from the VTK file and
> then distribute the mesh, so that each subdomain has a copy of its
> original node and cell indices as well as the PETSc numbering. Now I
> am trying to avoid using a large number of keys in DMSetLabelValue,
> since this costs a lot of time for large problems.
>
> I can get the coordinates of the subdomain after distribution by using
> DMGetCoordinatesLocal and DMGetCoordinateDM.
>
> How can I get the vertex indices of each cell after distribution?
> Would you please give me a hint or point me to functions I can use?
>
> You can permute the vectors back to the natural ordering using
>
> http://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/DMPlexNaturalToGlobalBegin.html
>
> which says you have to call DMPlexSetUseNaturalSF() before
> distributing the mesh. It is tested in
>
> src/dm/impls/plex/examples/tests/ex15.c
>
> so you can see how it's intended to work. It is very new and has not
> been tested by many people.
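>
> A minimal sketch of this call sequence, in the Fortran style of the
> code below (dmDist, natVec, and globVec are illustrative names, and
> the flag-setting call is named DMSetUseNatural() in some releases, so
> check the exact name for your PETSc version):
>
>     PetscSF        migrationSF
>     DM             dmDist
>     Vec            natVec, globVec
>     PetscErrorCode ierr
>
>     ! flag the DM before distribution so the natural-to-global
>     ! mapping (a star forest) is built during DMPlexDistribute
>     call DMSetUseNatural(dmda_flow%da, PETSC_TRUE, ierr)
>     CHKERRQ(ierr)
>     call DMPlexDistribute(dmda_flow%da, 0, migrationSF, dmDist, ierr)
>     CHKERRQ(ierr)
>     ! scatter a vector stored in the file's natural ordering into
>     ! the distributed global ordering
>     call DMPlexNaturalToGlobalBegin(dmDist, natVec, globVec, ierr)
>     CHKERRQ(ierr)
>     call DMPlexNaturalToGlobalEnd(dmDist, natVec, globVec, ierr)
>     CHKERRQ(ierr)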
>
> I can see how you might want this for small tests. Why would you want
> it for production models?
Hi Matt,
This is indeed what I need. Some years-old cases import initial
conditions from external files, which are in the natural ordering of
the original mesh. I just want to keep the code compatible with the old
input files.
Thanks,
Danyang
>
> Thanks,
>
> Matt
>
> Thanks,
>
> Danyang
>
>
> On 18-04-25 02:12 PM, Danyang Su wrote:
>> On 2018-04-25 09:47 AM, Matthew Knepley wrote:
>>> On Wed, Apr 25, 2018 at 12:40 PM, Danyang Su <danyang.su at gmail.com> wrote:
>>>
>>> Hi Matthew,
>>>
>>> In the worst case, every node/cell may have a different label.
>>>
>>> Do not use Label for this. It's not an appropriate thing. If
>>> every cell is different, just use the cell number.
>>> Labels are for mapping a relatively small number of keys (like
>>> material IDs) to sets of points (cells, vertices, etc.).
>>> It's not a great data structure for a permutation.
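>>>
>>> A minimal sketch of that intended use, assuming a hypothetical
>>> 'material' label and a matID array with only a few distinct values:
>>>
>>>     DMLabel  label
>>>     IS       is
>>>     PetscInt c, cStart, cEnd
>>>
>>>     call DMCreateLabel(dm, 'material', ierr)
>>>     CHKERRQ(ierr)
>>>     call DMGetLabel(dm, 'material', label, ierr)
>>>     CHKERRQ(ierr)
>>>     ! cells are the height-0 points of the Plex
>>>     call DMPlexGetHeightStratum(dm, 0, cStart, cEnd, ierr)
>>>     CHKERRQ(ierr)
>>>     ! few distinct keys, many points per key
>>>     do c = cStart, cEnd-1
>>>        call DMLabelSetValue(label, c, matID(c), ierr)
>>>        CHKERRQ(ierr)
>>>     end do
>>>     ! retrieve all cells with material ID 1 as an index set
>>>     call DMGetStratumIS(dm, 'material', 1, is, ierr)
>>>     CHKERRQ(ierr)
>>>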
>> Yes. If there is a small number of keys, it runs very fast, even
>> for more than one million DMSetLabelValue calls. The performance
>> just deteriorates as the number of keys increases.
>>
>> I cannot avoid DMSetLabelValue, as the node/cell indices of the
>> original mesh are needed for the previous input files, which use
>> some of the global node/cell indices to set values. But if I can
>> get the natural order of nodes/cells from DMPlex, I can discard
>> the use of DMSetLabelValue. Is there any function that can do
>> this job?
>>
>> Thanks,
>>
>> Danyang
>>>
>>> However, I still do not believe these numbers. The old code does
>>> a string comparison every time. I will set up a test.
>>>
>>> Matt
>>>
>>> Below is one of the worst scenarios, with 102299 nodes and
>>> 102299 different labels, used for testing. I found the time cost
>>> increases during the loop: the first 9300 loops take the least
>>> time (<0.5 s) while the last 9300 loops take much more (>7.7 s),
>>> as shown below. If I use a larger mesh with >1 million nodes,
>>> this part runs very, very slowly. PETSc is configured with
>>> optimization on.
>>>
>>> Configure options --with-cc=gcc --with-cxx=g++
>>> --with-fc=gfortran --download-mpich --download-scalapack
>>> --download-parmetis --download-metis --download-ptscotch
>>> --download-fblaslapack --download-hypre
>>> --download-superlu_dist --download-hdf5=yes
>>> --download-ctetgen --with-debugging=0 COPTFLAGS="-O3
>>> -march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native
>>> -mtune=native" FOPTFLAGS="-O3 -march=native -mtune=native"
>>>
>>>  istart    iend  progress     CPU_Time          time cost -   time cost -
>>>                                                 old (sec)     new (sec)
>>>       0    9299  0            1524670045.51166
>>>    9300   18599  0.100010753  1524670045.99605  0.4843890667  0.497246027
>>>   18600   27899  0.200010747  1524670047.32635  1.330302      1.3820912838
>>>   27900   37199  0.300010741  1524670049.3066   1.9802515507  2.2439446449
>>>   37200   46499  0.400010765  1524670052.1594   2.852804184   3.0739262104
>>>   46500   55799  0.500010729  1524670055.90961  3.7502081394  3.9270553589
>>>   55800   65099  0.600010753  1524670060.47654  4.5669286251  4.7571902275
>>>   65100   74399  0.700010777  1524670066.0941   5.6175630093  5.7428796291
>>>   74400   83699  0.800010741  1524670072.53886  6.44475317    6.5761549473
>>>   83700   92998  0.900010765  1524670079.99072  7.4518604279  7.4606924057
>>>   92999  102298  1            1524670087.71066  7.7199423313  8.2424075603
>>>
>>>
>>>
>>> old code
>>>
>>>     do ipoint = 0, istart-1
>>>        !c output time cost, use 1 processor to test
>>>        if (b_enable_output .and. rank == 0) then
>>>           if (mod(ipoint,iprogress) == 0 .or. ipoint == istart-1) then
>>>              !write(*,'(f3.1,1x)',advance="no") (ipoint+1.0)/istart
>>>              write(*,*) ipoint, (ipoint+1.0)/istart, "time", MPI_Wtime()
>>>           end if
>>>        end if
>>>
>>>        call DMSetLabelValue(dmda_flow%da, "cid_lg2g", ipoint, &
>>>                             ipoint+1, ierr)
>>>        CHKERRQ(ierr)
>>>     end do
>>>
>>>
>>> new code
>>>
>>> call DMCreateLabel(dmda_flow%da,'cid_lg2g',ierr)
>>> CHKERRQ(ierr)
>>>
>>> call DMGetLabel(dmda_flow%da,'cid_lg2g',label, ierr)
>>> CHKERRQ(ierr)
>>>
>>>     do ipoint = 0, istart-1
>>>        !c output time cost, use 1 processor to test
>>>        if (b_enable_output .and. rank == 0) then
>>>           if (mod(ipoint,iprogress) == 0 .or. ipoint == istart-1) then
>>>              !write(*,'(f3.1,1x)',advance="no") (ipoint+1.0)/istart
>>>              write(*,*) ipoint, (ipoint+1.0)/istart, "time", MPI_Wtime()
>>>           end if
>>>        end if
>>>
>>>        call DMLabelSetValue(label, ipoint, ipoint+1, ierr)
>>>        CHKERRQ(ierr)
>>>     end do
>>>
>>> Thanks,
>>>
>>> Danyang
>>>
>>> On 2018-04-25 03:16 AM, Matthew Knepley wrote:
>>>> On Tue, Apr 24, 2018 at 11:57 PM, Danyang Su <danyang.su at gmail.com> wrote:
>>>>
>>>> Hi All,
>>>>
>>>> I use DMPlex in my unstructured-grid code and recently found
>>>> that DMSetLabelValue takes a lot of time for large problems,
>>>> e.g., number of cells > 1 million. In my code, I use
>>>>
>>>>
>>>> I read your code wrong. For a large loop, you should not use the
>>>> convenience function. In your sequence
>>>>
>>>> DMPlexCreateFromCellList()
>>>>
>>>> Loop over all cells/nodes {
>>>>     DMSetLabelValue
>>>> }
>>>>
>>>> DMPlexDistribute
>>>>
>>>> you should call DMGetLabel(dm, name, &label) once, right after
>>>> DMPlexCreateFromCellList(), and replace DMSetLabelValue inside
>>>> the loop by DMLabelSetValue(label, point, val).
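>>>>
>>>> A Fortran sketch of that corrected sequence (array and size names
>>>> are illustrative, and the Fortran binding of
>>>> DMPlexCreateFromCellList should be checked against your PETSc
>>>> version):
>>>>
>>>>     ! build the serial mesh from cell connectivity and coordinates
>>>>     call DMPlexCreateFromCellList(PETSC_COMM_WORLD, ndim, ncells, &
>>>>                                   nverts, ncorners, PETSC_TRUE, &
>>>>                                   cells, ndim, coords, dm, ierr)
>>>>     CHKERRQ(ierr)
>>>>     call DMCreateLabel(dm, 'cid_lg2g', ierr)
>>>>     CHKERRQ(ierr)
>>>>     ! fetch the label handle once, outside the loop
>>>>     call DMGetLabel(dm, 'cid_lg2g', label, ierr)
>>>>     CHKERRQ(ierr)
>>>>     do ipoint = 0, ncells-1
>>>>        call DMLabelSetValue(label, ipoint, ipoint+1, ierr)
>>>>        CHKERRQ(ierr)
>>>>     end do
>>>>     ! labels migrate with the mesh during distribution
>>>>     call DMPlexDistribute(dm, 0, migrationSF, dmDist, ierr)
>>>>     CHKERRQ(ierr)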
>>>>
>>>> The code works fine except that DMSetLabelValue takes a lot of
>>>> time for large problems. I use DMSetLabelValue to set the
>>>> material ID for all the nodes or cells so that each subdomain
>>>> has a copy of the material IDs. Are there any other functions
>>>> that could be used more efficiently, e.g., to set labels by
>>>> array instead of one by one?
>>>>
>>>>
>>>> That should take much less time.
>>>>
>>>> Thanks,
>>>>
>>>> Matt
>>>>
>>>> Thanks,
>>>>
>>>> Danyang
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> What most experimenters take for granted before they begin
>>>> their experiments is infinitely more interesting than any
>>>> results to which their experiments lead.
>>>> -- Norbert Wiener
>>>>
>>>> https://www.cse.buffalo.edu/~knepley/
>>>
>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to
>>> which their experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>
>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/