[petsc-users] DMPlex Distribution

Mohammad Hassan mhbaghaei at mail.sjtu.edu.cn
Tue Sep 17 12:05:35 CDT 2019


Sorry if I confused you. 

In fact, I want to adapt the grid using a block-based AMR technique. By that I mean all points inside a block share the same stencil. For the AMR and its parallelization to work well, I need to know the location of points both for my working vectors and for the vectors obtained from DMPlex. That is why I think I need to specify how the AMR blocks are distributed across processors.
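
For reference, a minimal sketch of how one might inspect this on an already-distributed DM (the helper name ReportOwnership is only illustrative, and it assumes a PetscSection has already been set on the DM so a global vector can be created):

  #include <petscdmplex.h>

  /* Illustrative only: report what this rank holds, both in the Plex point
     numbering and in a global vector created from the DM. */
  static PetscErrorCode ReportOwnership(DM dm)
  {
    PetscSF        pointSF;
    Vec            gv;
    PetscInt       pStart, pEnd, nGhosts, rStart, rEnd;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = DMPlexGetChart(dm, &pStart, &pEnd);CHKERRQ(ierr);
    ierr = DMGetPointSF(dm, &pointSF);CHKERRQ(ierr);
    /* leaves of the point SF are local points owned by another rank */
    ierr = PetscSFGetGraph(pointSF, NULL, &nGhosts, NULL, NULL);CHKERRQ(ierr);
    ierr = DMCreateGlobalVector(dm, &gv);CHKERRQ(ierr);
    ierr = VecGetOwnershipRange(gv, &rStart, &rEnd);CHKERRQ(ierr);
    ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD, "points [%D,%D), %D ghosts, vec rows [%D,%D)\n",
                                   pStart, pEnd, nGhosts, rStart, rEnd);CHKERRQ(ierr);
    ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);CHKERRQ(ierr);
    ierr = VecDestroy(&gv);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }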

 

Thanks

Amir

 

From: Mark Adams [mailto:mfadams at lbl.gov] 
Sent: Wednesday, September 18, 2019 12:43 AM
To: Mohammad Hassan <mhbaghaei at mail.sjtu.edu.cn>
Cc: Matthew Knepley <knepley at gmail.com>; PETSc users list <petsc-users at mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution


On Tue, Sep 17, 2019 at 12:07 PM Mohammad Hassan via petsc-users <petsc-users at mcs.anl.gov> wrote:

Thanks for the suggestion. I am going to use block-based AMR. I think I need to know exactly how the blocks of the mesh are distributed across the different processors to implement the AMR.

And as a general question, can we set the block size of a vector on each rank?

 

I don't understand exactly what you mean by AMR in this context, and I'm not sure what you mean by block size. The block size is the number of dof per vertex (e.g., 3), and it is constant for a vector.
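
For instance, a minimal sketch (the helper name MakeBlockedVec and the sizes are illustrative): the block size is set once for the whole Vec, while the local length can differ across ranks as long as it is a multiple of the block size.

  #include <petscvec.h>

  /* Illustrative only: create a vector with bs dof per point */
  static PetscErrorCode MakeBlockedVec(PetscInt nLocalPoints, PetscInt bs, Vec *v)
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = VecCreate(PETSC_COMM_WORLD, v);CHKERRQ(ierr);
    /* local length may differ per rank, but must be a multiple of bs */
    ierr = VecSetSizes(*v, nLocalPoints*bs, PETSC_DETERMINE);CHKERRQ(ierr);
    ierr = VecSetBlockSize(*v, bs);CHKERRQ(ierr);   /* one block size for the whole Vec */
    ierr = VecSetFromOptions(*v);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }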

 

Thanks

Amir

 

From: Matthew Knepley [mailto:knepley at gmail.com]
Sent: Tuesday, September 17, 2019 11:04 PM
To: Mohammad Hassan <mhbaghaei at mail.sjtu.edu.cn>
Cc: PETSc <petsc-users at mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users <petsc-users at mcs.anl.gov> wrote:

Hi

I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set the distribution across processors manually? That is, how can I control which part of the DM each rank holds locally?

 

You could make a Shell partitioner and tell it the entire partition:

 

  https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html

 

However, I would be surprised if you could do this. It is likely that you just want to mess with the weights in ParMetis.
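
A rough sketch of how that might be wired up (the names DistributeWithGivenPartition, sizes, and points are only illustrative, and the partition data is supplied on the rank that currently holds the serial mesh):

  #include <petscdmplex.h>

  /* Illustrative only: distribute dm according to a user-given cell partition.
     sizes[r]  = number of cells assigned to rank r (length = communicator size)
     points[]  = cell numbers, grouped by rank in the same order as sizes[]     */
  static PetscErrorCode DistributeWithGivenPartition(DM *dm, const PetscInt sizes[], const PetscInt points[])
  {
    PetscPartitioner part;
    PetscMPIInt      nranks;
    DM               dmDist = NULL;
    PetscErrorCode   ierr;

    PetscFunctionBeginUser;
    ierr = MPI_Comm_size(PetscObjectComm((PetscObject) *dm), &nranks);CHKERRQ(ierr);
    ierr = DMPlexGetPartitioner(*dm, &part);CHKERRQ(ierr);
    ierr = PetscPartitionerSetType(part, PETSCPARTITIONERSHELL);CHKERRQ(ierr);
    ierr = PetscPartitionerShellSetPartition(part, nranks, sizes, points);CHKERRQ(ierr);
    ierr = DMPlexDistribute(*dm, 0, NULL, &dmDist);CHKERRQ(ierr);
    if (dmDist) { ierr = DMDestroy(dm);CHKERRQ(ierr); *dm = dmDist; }
    PetscFunctionReturn(0);
  }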

 

  Thanks,

 

    Matt

 

Thanks

Amir


-- 

What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/
