[petsc-users] Beginner questions : MatCreateMPIAIJWithSeqAIJ, MatCreateMPIAIJWithSplitArrays
Matthew Knepley
knepley at gmail.com
Fri Jul 27 18:00:55 CDT 2018
On Fri, Jul 27, 2018 at 4:25 AM Mark Olesen <Mark.Olesen at esi-group.com>
wrote:
> Hi Barry,
>
> Thanks for the information. One of the major takeaways from talking to
> various people at the workshop in June was about the imperative for
> proper pre-allocation sizing.
> As I started looking into how the matrices are being assembled, it
> quickly became apparent that I need to go off and determine the number of
> non-zeroes everywhere. After that I would give this information to petsc
> as good preallocation values and use the matrix set routines to set/copy
> the values. The hurdle, from there, to just sending off the appropriate
> CSR chunks for the on-diagonal and off-diagonal parts doesn't seem that
> much higher. Unfortunately it appears that I'll need to approach it more
> indirectly.
>
While it's true that proper preallocation is essential, I want to point out
that computing the nonzero pattern usually has a negligible cost. The
easiest route is to run through your assembly routine twice: the first time
you only count the nonzeros per row (leaving out the value computation) and
preallocate, and the second time you insert the values.
Also, Lisandro has now given us really nice hash table support. I use it to
calculate the nonzero pattern in several places in PETSc.
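As a rough sketch of the two-pass idea (a 1-D Laplacian stands in for
whatever your discretization actually does, and nlocal/rstart/rend describe
the rows this process owns; this is only an illustration, not code from
PETSc):

#include <petscmat.h>

/* Two-pass assembly sketch: the first pass runs the assembly loop only to
   count nonzeros per row and preallocate, the second pass repeats the loop
   and inserts the values with MatSetValues().  A 1-D Laplacian stands in
   for the real discretization; [rstart,rend) is the caller's owned rows. */
PetscErrorCode AssembleTwoPass(MPI_Comm comm,PetscInt nlocal,PetscInt rstart,
                               PetscInt rend,PetscInt N,Mat *mat)
{
  Mat            A;
  PetscInt       row,k,ncols,cols[3],*d_nnz,*o_nnz;
  PetscScalar    vals[3];
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreate(comm,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,nlocal,nlocal,N,N);CHKERRQ(ierr);
  ierr = MatSetType(A,MATAIJ);CHKERRQ(ierr);

  /* Pass 1: count entries per row, split by diagonal/off-diagonal block */
  ierr = PetscCalloc2(nlocal,&d_nnz,nlocal,&o_nnz);CHKERRQ(ierr);
  for (row = rstart; row < rend; ++row) {
    ncols = 0;
    if (row > 0)   cols[ncols++] = row - 1;
    cols[ncols++] = row;
    if (row < N-1) cols[ncols++] = row + 1;
    for (k = 0; k < ncols; ++k) {
      if (cols[k] >= rstart && cols[k] < rend) d_nnz[row-rstart]++;
      else                                     o_nnz[row-rstart]++;
    }
  }
  ierr = MatSeqAIJSetPreallocation(A,0,d_nnz);CHKERRQ(ierr);         /* used on 1 process  */
  ierr = MatMPIAIJSetPreallocation(A,0,d_nnz,0,o_nnz);CHKERRQ(ierr); /* used on >1 process */
  ierr = PetscFree2(d_nnz,o_nnz);CHKERRQ(ierr);

  /* Pass 2: the same loop, now computing values and inserting them */
  for (row = rstart; row < rend; ++row) {
    ncols = 0;
    if (row > 0)   {cols[ncols] = row - 1; vals[ncols++] = -1.0;}
    cols[ncols] = row; vals[ncols++] = 2.0;
    if (row < N-1) {cols[ncols] = row + 1; vals[ncols++] = -1.0;}
    ierr = MatSetValues(A,1,&row,ncols,cols,vals,INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  *mat = A;
  PetscFunctionReturn(0);
}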
Thanks,
Matt
> Thanks,
> /mark
>
>
>
> On 07/24/18 18:02, Smith, Barry F. wrote:
> >
> > Mark,
> >
> > I think you are over-optimizing your matrix assembly, leading to
> complicated, fragile code. It is better just to create the matrix and use
> MatSetValues() to set values into it, rather than working directly with
> the various sparse matrix data structures. If you do wish to work directly
> with the sparse matrix data structures then you are largely on your own and
> need to figure out for yourself how to use them. Plus, you will only get a
> small time-wise benefit from going the much more complicated route.
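As a sketch of that recommended path: if the application already holds a
local CSR chunk (here ia/ja/a with global column indices, rows starting at
global row rstart), pushing it through MatSetValues() is just a short loop.
This assumes the Mat has already been created and preallocated; the
SetFromLocalCSR name is illustrative only.

#include <petscmat.h>

/* Sketch: push an application-owned CSR chunk into an existing,
   preallocated Mat row by row, letting PETSc handle the on/off-diagonal
   split and the communication of any off-process entries. */
PetscErrorCode SetFromLocalCSR(Mat A,PetscInt rstart,PetscInt mlocal,
                               const PetscInt ia[],const PetscInt ja[],
                               const PetscScalar a[])
{
  PetscInt       i,row;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  for (i = 0; i < mlocal; ++i) {
    row  = rstart + i;   /* global row number */
    ierr = MatSetValues(A,1,&row,ia[i+1]-ia[i],ja+ia[i],a+ia[i],INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}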
> >
> > Barry
> >
> >
> >> On Jul 24, 2018, at 6:52 AM, Mark Olesen <Mark.Olesen at esi-group.com>
> wrote:
> >>
> >> I'm still at the beginning phase of looking at PETSc and accordingly
> have some beginner questions. My apologies if these are FAQs, but I didn't
> find much to address these specific questions.
> >>
> >> My simulation matrices are sparse, and will generally (but not always)
> be generated in parallel. There is currently no conventional internal
> storage format (something like a COO variant), but let's just assume that I
> have CSR format for the moment.
> >>
> >> I would like the usual combination of convenience and high efficiency,
> but efficiency (speed, memory) is the main criterion.
> >>
> >> For serial, MatCreateSeqAIJWithArrays() looks like the thing to be
> using. It would provide a very thin wrapper around my CSR matrix without
> much additional allocation. The only extra allocation appears to be a
> precomputed entry count per row (ilen) instead of computing it on the fly.
> If my matrix is actually to be considered symmetric, then use
> MatCreateSeqSBAIJWithArrays() instead.
> >> This all seems pretty clear.
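A minimal sketch of that serial wrapping (the ia/ja/a arrays are assumed to
be 0-based and must outlive the Mat, since PETSc neither copies nor frees
them; the WrapSerialCSR name is just for illustration):

#include <petscmat.h>

/* Wrap an existing serial CSR matrix (0-based indices) in a SeqAIJ Mat
   without copying.  The caller keeps ownership of ia/ja/a and must keep
   them alive for the lifetime of the Mat. */
PetscErrorCode WrapSerialCSR(PetscInt n,PetscInt ia[],PetscInt ja[],
                             PetscScalar a[],Mat *A)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF,n,n,ia,ja,a,A);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(*A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); /* harmless if already assembled */
  ierr = MatAssemblyEnd(*A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}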
> >>
> >>
> >> For parallel, MatCreateMPIAIJWithSplitArrays() appears to be the
> equivalent for efficiency, but I also read the note discouraging its use,
> which I fully appreciate. It also leads neatly into my question. I
> obviously will have fairly ready access to my on-processor portions of the
> matrix, but collecting the information for the off-processor portions is
> required. What would a normal or recommended approach look like?
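For reference, the off-processor bookkeeping described here (splitting a
local CSR that carries global column indices into a diagonal block, an
off-diagonal block with compacted column indices, and a garray[] that maps
those back to global columns) amounts to something like the sketch below.
[cstart,cend) is assumed to be this rank's owned column range; error
handling and the final matrix creation are left out, and the function name
is just for illustration.

#include <petscmat.h>

/* Split a local CSR (global column indices) into a diagonal block with
   local column indices, an off-diagonal block whose columns are renumbered
   0..ngcol-1, and garray[] mapping those compact indices back to global
   columns. */
PetscErrorCode SplitLocalCSR(PetscInt m,PetscInt cstart,PetscInt cend,
                             const PetscInt ia[],const PetscInt ja[],const PetscScalar a[],
                             PetscInt **dia,PetscInt **dja,PetscScalar **da,
                             PetscInt **oia,PetscInt **oja,PetscScalar **oa,
                             PetscInt *ngcol,PetscInt **garray)
{
  PetscInt       i,k,nd = 0,no = 0,nnz = ia[m];
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* First pass: count diagonal-block and off-diagonal entries */
  for (k = 0; k < nnz; ++k) {
    if (ja[k] >= cstart && ja[k] < cend) nd++; else no++;
  }
  ierr = PetscMalloc3(m+1,dia,nd,dja,nd,da);CHKERRQ(ierr);
  ierr = PetscMalloc3(m+1,oia,no,oja,no,oa);CHKERRQ(ierr);
  ierr = PetscMalloc1(no,garray);CHKERRQ(ierr);

  /* Second pass: distribute entries; off-diagonal keeps global columns for now */
  nd = no = 0;
  (*dia)[0] = (*oia)[0] = 0;
  for (i = 0; i < m; ++i) {
    for (k = ia[i]; k < ia[i+1]; ++k) {
      if (ja[k] >= cstart && ja[k] < cend) {
        (*dja)[nd] = ja[k] - cstart;  /* diagonal block uses local columns (sketch convention) */
        (*da)[nd++] = a[k];
      } else {
        (*oja)[no] = ja[k];           /* global column, compacted below */
        (*oa)[no++] = a[k];
      }
    }
    (*dia)[i+1] = nd;
    (*oia)[i+1] = no;
  }

  /* garray = sorted unique off-process global columns; then compact the
     off-diagonal column indices to point into garray */
  ierr = PetscMemcpy(*garray,*oja,no*sizeof(PetscInt));CHKERRQ(ierr);
  *ngcol = no;
  ierr = PetscSortRemoveDupsInt(ngcol,*garray);CHKERRQ(ierr);
  for (k = 0; k < no; ++k) {
    PetscInt loc;
    ierr = PetscFindInt((*oja)[k],*ngcol,*garray,&loc);CHKERRQ(ierr);
    (*oja)[k] = loc;
  }
  PetscFunctionReturn(0);
}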
> >>
> >> For example,
> >> ====
> >> Mat A = MatCreateSeqAIJWithArrays() to wrap the local CSR.
> >>
> >> Mat B = MatCreateSeqAIJ(). Do some preallocation for num non-zeroes,
> use MatSetValues() to fill it in. Need an extra garray[] as a linear lookup
> for the global column numbers of B.
> >>
> >> Or as an alternative, calculate the off-diagonal as a CSR by hand and
> use Mat B = MatCreateSeqAIJWithArrays() to wrap it.
> >>
> >> Finally,
> >> Use MatCreateMPIAIJWithSeqAIJ() to produce the full matrix.
> >>
> >> Assuming that I used MatCreateSeqAIJWithArrays() to create both the A
> and B matrices, then they both hold a shallow copy of my own storage.
> >> In MatCreateMPIAIJWithSeqAIJ(), I can't really tell what happens to the
> A matrix. For the B matrix, it appears that its column entries are changed
> to be those of the global columns and its data values are handed off to
> another MatCreateSeqAIJ() as the off-diagonal part. The original B matrix
> is tagged up to avoid any deletion, and the shallow-copied part is tagged
> to be deleted. If I follow this properly, it implies that if I were
> managing the storage of the original B matrix myself, I would now have a
> double deletion?
> >>
> >> I would have expected something like this instead (around line 3431 of
> mpiaij.c in master):
> >>
> >> /* Retain original memory management */
> >> bnew->singlemalloc = b->singlemalloc;
> >> bnew->free_a = b->free_a;
> >> bnew->free_ij = b->free_ij;
> >>
> >> /* B arrays are shared by Bnew */
> >> b->singlemalloc = PETSC_FALSE;
> >> b->free_a = PETSC_FALSE;
> >> b->free_ij = PETSC_FALSE;
> >> ierr = MatDestroy(&B);CHKERRQ(ierr);
> >>
> >>
> >> Have I gone off in completely the wrong direction here?
> >> Is there a better method of approaching this?
> >>
> >> Cheers,
> >> /mark
> >
>
> --
> Dr Mark OLESEN
> Principal Engineer, ESI-OpenCFD
> ESI GmbH | Einsteinring 24 | 85609 Munich | GERMANY
> Mob. +49 171 9710 149
> www.openfoam.com | www.esi-group.com | mark.olesen at esi-group.com
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/