[petsc-users] DMPlex Distribution

Mohammad Hassan mhbaghaei at mail.sjtu.edu.cn
Wed Sep 18 08:35:36 CDT 2019


If DMPlex does not support this, I may need to use PARAMESH or CHOMBO. Is there any way we can construct a non-conformal layout for a DM in PETSc?

 

From: Mark Adams [mailto:mfadams at lbl.gov] 
Sent: Wednesday, September 18, 2019 9:23 PM
To: Mohammad Hassan <mhbaghaei at mail.sjtu.edu.cn>
Cc: Matthew Knepley <knepley at gmail.com>; PETSc users list <petsc-users at mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

I'm puzzled. It sounds like you are doing non-conforming AMR (structured block AMR), but Plex does not support that.

 

On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users <petsc-users at mcs.anl.gov> wrote:

Mark is right. The AMR functionality is not related to its parallelization. The vector size (global or local) does not conflict with the AMR functions.

Thanks

 

Amir

 

From: Matthew Knepley [mailto:knepley at gmail.com] 
Sent: Wednesday, September 18, 2019 12:59 AM
To: Mohammad Hassan <mhbaghaei at mail.sjtu.edu.cn>
Cc: PETSc <petsc-maint at mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <mhbaghaei at mail.sjtu.edu.cn> wrote:

Thanks for the suggestion. I am going to use block-based AMR. I think I need to know exactly how the mesh blocks are distributed across the different processors to implement the AMR.
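For example, each rank could inspect its local share of an already-distributed Plex with something like the following sketch (the routine name is only illustrative):

#include <petscdmplex.h>

/* Illustrative sketch: report this rank's share of an already-distributed Plex. */
PetscErrorCode ReportLocalShare(DM dm)
{
  MPI_Comm       comm;
  PetscInt       pStart, pEnd, cStart, cEnd;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscObjectGetComm((PetscObject) dm, &comm);CHKERRQ(ierr);
  ierr = MPI_Comm_rank(comm, &rank);CHKERRQ(ierr);
  ierr = DMPlexGetChart(dm, &pStart, &pEnd);CHKERRQ(ierr);            /* all points stored on this rank */
  ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd);CHKERRQ(ierr); /* local cells */
  ierr = PetscSynchronizedPrintf(comm, "[%d] chart [%D, %D), cells [%D, %D)\n",
                                 rank, pStart, pEnd, cStart, cEnd);CHKERRQ(ierr);
  ierr = PetscSynchronizedFlush(comm, PETSC_STDOUT);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}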

 

Hi Amir,

 

How are you using Plex if the block-AMR is coming from somewhere else? This will help me tell you what would be best.

 

And as a general question, can we set the block size of a vector on each rank?

 

I think, as Mark says, that you are using "block size" in a different way than PETSc does.
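In PETSc, the block size is the number of components per point and is uniform across the whole vector; the per-rank share is controlled by the local size passed to VecSetSizes(). A minimal sketch (the wrapper name and arguments are only illustrative):

#include <petscvec.h>

/* Illustrative sketch: the per-rank share is the local size; the block size is the
   number of components per point and must be the same on every rank.              */
PetscErrorCode CreateBlockedVec(MPI_Comm comm, PetscInt nLocalPoints, PetscInt ncomp, Vec *v)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecCreate(comm, v);CHKERRQ(ierr);
  /* each rank passes its own local length; the global length is summed automatically */
  ierr = VecSetSizes(*v, nLocalPoints * ncomp, PETSC_DETERMINE);CHKERRQ(ierr);
  /* ncomp components per point, uniform across ranks */
  ierr = VecSetBlockSize(*v, ncomp);CHKERRQ(ierr);
  ierr = VecSetFromOptions(*v);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}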

 

  Thanks,

 

    Matt

 

Thanks

Amir

 

From: Matthew Knepley [mailto:knepley at gmail.com] 
Sent: Tuesday, September 17, 2019 11:04 PM
To: Mohammad Hassan <mhbaghaei at mail.sjtu.edu.cn>
Cc: PETSc <petsc-users at mcs.anl.gov>
Subject: Re: [petsc-users] DMPlex Distribution

 

On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users <petsc-users at mcs.anl.gov> wrote:

Hi

I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set the distribution across processors manually? I mean, how can I set the share of the DM on each rank (local)?

 

You could make a Shell partitioner and tell it the entire partition:

 

  https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html
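A minimal sketch of that approach, following the manual page above (the wrapper name is illustrative, and sizes[]/points[] are assumed to describe the desired owner of every point handed to the partitioner):

#include <petscdmplex.h>

/* Illustrative wrapper: distribute dm according to a user-supplied partition.
   sizes[r] = number of points assigned to rank r
   points[] = the partitioned points (typically cells), concatenated rank by rank */
PetscErrorCode DistributeWithShell(DM dm, PetscInt nranks, const PetscInt sizes[], const PetscInt points[], DM *dmDist)
{
  PetscPartitioner part;
  PetscErrorCode   ierr;

  PetscFunctionBeginUser;
  ierr = DMPlexGetPartitioner(dm, &part);CHKERRQ(ierr);
  ierr = PetscPartitionerSetType(part, PETSCPARTITIONERSHELL);CHKERRQ(ierr);
  ierr = PetscPartitionerShellSetPartition(part, nranks, sizes, points);CHKERRQ(ierr);
  /* zero levels of overlap; pass an SF pointer instead of NULL to migrate data later */
  ierr = DMPlexDistribute(dm, 0, NULL, dmDist);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The DM returned by DMPlexDistribute() would then be used in place of the original.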

 

However, I would be surprised if you could do this. It is likely that you just want to mess with the weights in ParMetis.

 

  Thanks,

 

    Matt

 

Thanks

Amir




 

-- 

What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

 

https://www.cse.buffalo.edu/~knepley/




 



