[petsc-users] Question on DMPlexCreateFromCellList and DMPlexCreateFromFile

Danyang Su danyang.su at gmail.com
Fri Feb 16 12:18:48 CST 2018


On 18-02-16 10:13 AM, Matthew Knepley wrote:
> On Fri, Feb 16, 2018 at 11:36 AM, Danyang Su <danyang.su at gmail.com> wrote:
>
>     On 18-02-15 05:57 PM, Matthew Knepley wrote:
>
>>     On Thu, Feb 15, 2018 at 7:40 PM, Danyang Su <danyang.su at gmail.com> wrote:
>>
>>         Hi Matt,
>>
>>         I have a question on DMPlexCreateFromCellList and
>>         DMPlexCreateFromFile. When using DMPlexCreateFromFile with a
>>         Gmsh file as input, it works fine and each processor gets its
>>         own part. However, when using DMPlexCreateFromCellList, all
>>         the processors end up with the same global mesh. To my
>>         understanding, I should pass the global mesh as input, right?
>>
>>
>>     No. Each process should get part of the mesh in
>>     CreateFromCellList(), but the most common thing to do is to
>>     feed the whole mesh in on proc 0, and nothing in on the other procs.
>     Thanks for the explanation. It works now.
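For reference, a minimal sketch of the approach Matt describes, reusing the
variable names from the test code quoted below: rank 0 passes the whole mesh
to DMPlexCreateFromCellList while every other rank passes zero cells and
vertices, and DMPlexDistribute then gives each process its own part. The rank
variable and the rank-0-only allocation of dmplex_cells/dmplex_verts are
assumed to be set up elsewhere.

      ! sketch only: global mesh on rank 0, empty mesh on all other ranks
      if (rank /= 0) then
        num_cells = 0   ! non-root ranks contribute no cells
        num_nodes = 0   ! and no vertices
      end if
      call DMPlexCreateFromCellList(Petsc_Comm_World,ndim,num_cells,    &
             num_nodes,num_nodes_per_cell,Petsc_True,dmplex_cells,      &
             ndim,dmplex_verts,dmda_flow%da,ierr)
      CHKERRQ(ierr)
      ! distribute so each rank keeps only its own part of the mesh
      call DMPlexDistribute(dmda_flow%da,0,PETSC_NULL_OBJECT,           &
             distributedMesh,ierr)
      CHKERRQ(ierr)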
>
>
> Great. Also feel free to suggest improvements, examples, or better 
> documentation.
Thanks. I will likely be bothering you a lot in the near future while 
porting the code from the structured grid version to the unstructured 
grid version. Thanks in advance.
Danyang
>
>   Thanks,
>
>      Matt
>
>     Danyang
>>
>>       Thanks,
>>
>>         Matt
>>
>>         Otherwise, I should use DMPlexCreateFromCellListParallel
>>         instead if the input is the local mesh.
>>
>>         Below is the test code I use; the results from method 1 are
>>         wrong and those from method 2 are correct. Would you please
>>         help check whether I did anything wrong with the
>>         DMPlexCreateFromCellList input?
>>
>>         !test with 4 processors, global num_cells = 8268, global
>>         num_nodes = 4250
>>
>>         !correct results
>>
>>          check rank            2  istart         2034 iend         3116
>>          check rank            3  istart         2148 iend         3293
>>          check rank            1  istart         2044 iend         3133
>>          check rank            0  istart         2042 iend         3131
>>
>>         !wrong results
>>
>>           check rank            0  istart 8268  iend        12518
>>           check rank            1  istart 8268  iend        12518
>>           check rank            2  istart 8268  iend        12518
>>           check rank            3  istart 8268  iend        12518
>>
>>
>>               !c *************    test part *********************
>>               !c method 1: create DMPlex from cell list, same
>>               !c duplicated global meshes over all processors
>>               !c the input parameters num_cells, num_nodes,
>>               !c dmplex_cells, dmplex_verts are all global parameters
>>               !c (global mesh data)
>>               call DMPlexCreateFromCellList(Petsc_Comm_World,ndim,    &
>>                      num_cells,num_nodes,num_nodes_per_cell,          &
>>                      Petsc_True,dmplex_cells,ndim,                    &
>>                      dmplex_verts,dmda_flow%da,ierr)
>>               CHKERRQ(ierr)
>>
>>
>>               !c method 2: create DMPlex from Gmsh file, for test
>>               !c purpose, this works fine, each processor gets its own part
>>               call DMPlexCreateFromFile(Petsc_Comm_World,             &
>>                      prefix(:l_prfx)//'.msh',0,                       &
>>                      dmda_flow%da,ierr)
>>               CHKERRQ(ierr)
>>
>>               !c *************end of test part*********************
>>
>>
>>               distributedMesh = PETSC_NULL_OBJECT
>>
>>               !c distribute mesh over processes
>>               call DMPlexDistribute(dmda_flow%da,0,PETSC_NULL_OBJECT, &
>>                      distributedMesh,ierr)
>>               CHKERRQ(ierr)
>>
>>               !c destroy original global mesh after distribution
>>               if (distributedMesh /= PETSC_NULL_OBJECT) then
>>                 call DMDestroy(dmda_flow%da,ierr)
>>                 CHKERRQ(ierr)
>>                 !c set the global mesh as distributed mesh
>>                 dmda_flow%da = distributedMesh
>>               end if
>>
>>               !c get coordinates
>>               call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr)
>>               CHKERRQ(ierr)
>>
>>               call DMGetCoordinateDM(dmda_flow%da,cda,ierr)
>>               CHKERRQ(ierr)
>>
>>               call DMGetDefaultSection(cda,cs,ierr)
>>               CHKERRQ(ierr)
>>
>>               call PetscSectionGetChart(cs,istart,iend,ierr)
>>               CHKERRQ(ierr)
>>
>>         #ifdef DEBUG
>>                 if (info_debug > 0) then
>>                   write(*,*) "check rank ",rank," istart ",istart,    &
>>                              " iend ",iend
>>                 end if
>>         #endif
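As a cross-check (a sketch, not part of the original test), the local cell
range can also be queried directly with DMPlexGetHeightStratum at height 0
after distribution; on a correctly partitioned mesh each rank reports a
different range, much smaller than the global 8268 cells. The integers
cstart and cend are assumed to be declared as PetscInt.

      ! sketch only: verify the partition by querying the local cells
      call DMPlexGetHeightStratum(dmda_flow%da,0,cstart,cend,ierr)
      CHKERRQ(ierr)
      write(*,*) "rank ",rank," owns cells ",cstart," to ",cend-1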
>>
>>
>>         Thanks and regards,
>>
>>         Danyang
>>
>>
>>
>>
>
>
>
>
> -- 
> What most experimenters take for granted before they begin their 
> experiments is infinitely more interesting than any results to which 
> their experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/


