[petsc-users] DMPlex Distribution
Matthew Knepley
knepley at gmail.com
Wed Sep 18 08:50:26 CDT 2019
On Wed, Sep 18, 2019 at 9:35 AM Mohammad Hassan via petsc-users <
petsc-users at mcs.anl.gov> wrote:
> If DMPlex does not support it, I may need to use PARAMESH or CHOMBO. Is there
> any way we can construct a non-conformal layout for a DM in PETSc?
>
Let's see. Plex does support geometrically non-conforming meshes; this is
how we support p4est. However, if you want that, I think you can just use
DMForest. So you just want structured AMR?
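
Roughly, a p4est-backed forest DM looks like this (untested sketch; the
"brick" topology string and the refinement level are just placeholders):

  #include <petscdmforest.h>

  DM             dm;
  PetscErrorCode ierr;

  ierr = DMCreate(PETSC_COMM_WORLD, &dm);CHKERRQ(ierr);
  ierr = DMSetType(dm, DMP4EST);CHKERRQ(ierr);              /* quadtree forest in 2D; DMP8EST in 3D */
  ierr = DMForestSetTopology(dm, "brick");CHKERRQ(ierr);    /* placeholder base topology */
  ierr = DMForestSetInitialRefinement(dm, 2);CHKERRQ(ierr); /* placeholder refinement level */
  ierr = DMSetUp(dm);CHKERRQ(ierr);
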
Thanks,
Matt
>
>
> *From:* Mark Adams [mailto:mfadams at lbl.gov]
> *Sent:* Wednesday, September 18, 2019 9:23 PM
> *To:* Mohammad Hassan <mhbaghaei at mail.sjtu.edu.cn>
> *Cc:* Matthew Knepley <knepley at gmail.com>; PETSc users list <
> petsc-users at mcs.anl.gov>
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> I'm puzzled. It sounds like you are doing non-conforming AMR (structured
> block AMR), but Plex does not support that.
>
>
>
> On Tue, Sep 17, 2019 at 11:41 PM Mohammad Hassan via petsc-users <
> petsc-users at mcs.anl.gov> wrote:
>
> Mark is right. The AMR functionality is separate from its parallelization.
> The vector size (global or local) does not conflict with the AMR functions.
>
> Thanks
>
>
>
> Amir
>
>
>
> *From:* Matthew Knepley [mailto:knepley at gmail.com]
> *Sent:* Wednesday, September 18, 2019 12:59 AM
> *To:* Mohammad Hassan <mhbaghaei at mail.sjtu.edu.cn>
> *Cc:* PETSc <petsc-maint at mcs.anl.gov>
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Tue, Sep 17, 2019 at 12:03 PM Mohammad Hassan <
> mhbaghaei at mail.sjtu.edu.cn> wrote:
>
> Thanks for the suggestion. I am going to use block-based AMR. I think I need
> to know exactly how the mesh blocks are distributed across processors in
> order to implement the AMR.
>
>
>
> Hi Amir,
>
>
>
> How are you using Plex if the block-AMR is coming from somewhere else?
> This will help me tell you what would be best.
>
>
>
> And as a general question, can we set the block size of a vector on each rank?
>
>
>
> I think, as Mark says, that you are using "blocksize" in a different way
> than PETSc does.
>
>
>
> Thanks,
>
>
>
> Matt
>
>
>
> Thanks
>
> Amir
>
>
>
> *From:* Matthew Knepley [mailto:knepley at gmail.com]
> *Sent:* Tuesday, September 17, 2019 11:04 PM
> *To:* Mohammad Hassan <mhbaghaei at mail.sjtu.edu.cn>
> *Cc:* PETSc <petsc-users at mcs.anl.gov>
> *Subject:* Re: [petsc-users] DMPlex Distribution
>
>
>
> On Tue, Sep 17, 2019 at 9:27 AM Mohammad Hassan via petsc-users <
> petsc-users at mcs.anl.gov> wrote:
>
> Hi
>
> I am using DMPlexCreateFromDAG() to construct my DM. Is it possible to set
> the distribution across processors manually? I mean, how can I set the
> share of the DM on each rank (local)?
>
>
>
> You could make a Shell partitioner and tell it the entire partition:
>
>
>
>
> https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/PetscPartitionerShellSetPartition.html
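>
> A rough sketch (untested; it assumes dm is the serial Plex you built with
> DMPlexCreateFromDAG(), and the two-rank cell lists below are made up):
>
>   PetscPartitioner part;
>   DM               dmDist = NULL;
>   PetscInt         sizes[2]  = {2, 2};        /* number of cells given to each of the 2 ranks */
>   PetscInt         points[4] = {0, 1, 2, 3};  /* the cells, listed rank by rank */
>
>   ierr = DMPlexGetPartitioner(dm, &part);CHKERRQ(ierr);
>   ierr = PetscPartitionerSetType(part, PETSCPARTITIONERSHELL);CHKERRQ(ierr);
>   ierr = PetscPartitionerShellSetPartition(part, 2, sizes, points);CHKERRQ(ierr);
>   ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);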
>
>
>
> However, I would be surprised if you could do this. It is likely that you
> just want to mess with the weights in ParMetis.
>
>
>
> Thanks,
>
>
>
> Matt
>
>
>
> Thanks
>
> Amir
>
>
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/