[petsc-users] Load distributed matrices from directory

Matthew Knepley knepley at gmail.com
Mon Oct 2 04:34:21 CDT 2017


On Mon, Oct 2, 2017 at 4:12 AM, Matthieu Vitse <vitse at lmt.ens-cachan.fr>
wrote:

>
> On Sept 29, 2017, at 17:43, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
>  Or is your matrix generator code sequential, so that it cannot generate the
> full matrix, and you want to generate chunks at a time, save them to disk, and
> then load them? It would be better to refactor your code to generate the whole
> thing in parallel (since you can already generate the parts, the refactoring
> shouldn't be terribly difficult).
>
>
> Thanks for your answer.
>
> The matrix is already generated in parallel, but we want to keep control
> of the decomposition, which conflicts with directly using PCASM.
>

Please explain this statement with an example. When using MatLoad(), you
are in control of the partitions, although not of the row order.
Also, I am confused by your use of the word "distributed". We use it to
mean an object, such as a Mat, that exists on several processes in a
coordinated way.

  Thanks,

    Matt


> That’s why we would really like to work only with the distributed
> matrices. Are there any issues that would prevent me from doing that?
> Moreover, ASM is only a first step: we would then like to use those matrices
> for multi-preconditioning our problem and to take MPCs into account (as a
> consequence, we really need to know the decomposition).
>
> Thanks,
>
>> Matt
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
