[petsc-users] How to efficiently fill in, in parallel, a PETSc matrix from a COO sparse matrix?

Diego Magela Lemos diegomagela at usp.br
Tue Jun 20 13:02:46 CDT 2023


So... what do I need to do, please?
Why am I getting wrong results when solving the linear system if the matrix
is filled in with MatSetPreallocationCOO and MatSetValuesCOO?
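
For what it's worth, below is a minimal sketch of the pitfall being discussed: a hypothetical 4x4 diagonal matrix, hard-coded to exactly 2 ranks, with all names and values invented for illustration. If every rank passed all four triplets, the repeated (i,j) pairs would be summed across processes and the result would be diag(2,4,6,8) instead of diag(1,2,3,4); splitting the COO arrays so each rank submits a disjoint half avoids the doubling.

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A;
  PetscMPIInt rank;
  PetscInt    coo_i[] = {0, 1, 2, 3};
  PetscInt    coo_j[] = {0, 1, 2, 3};
  PetscScalar coo_v[] = {1.0, 2.0, 3.0, 4.0};

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 4, 4));
  PetscCall(MatSetFromOptions(A));

  /* Each rank submits a disjoint half of the global COO data (assumes
     exactly 2 ranks). Submitting the full arrays on both ranks would
     double every entry, because repeated (i,j) pairs are summed,
     including across processes. */
  PetscCount start = (rank == 0) ? 0 : 2;
  PetscCall(MatSetPreallocationCOO(A, 2, &coo_i[start], &coo_j[start]));
  PetscCall(MatSetValuesCOO(A, &coo_v[start], INSERT_VALUES));

  PetscCall(MatView(A, PETSC_VIEWER_STDOUT_WORLD));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}

Note that even with INSERT_VALUES, repeated coordinates in the COO specification are combined by summation; ADD_VALUES would additionally add to values already stored in the matrix.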

On Tue, Jun 20, 2023 at 14:56, Jed Brown <jed at jedbrown.org> wrote:

> Matthew Knepley <knepley at gmail.com> writes:
>
> >> The matrix entries are multiplied by 2, that is, the number of processes
> >> used to execute the code.
> >>
> >
> > No. This was mostly intended for GPUs, where there is 1 process. If you
> > want to use multiple MPI processes, then each process can only introduce
> > some disjoint subset of the values. This is also how MatSetValues()
> > works, but it might not be as obvious.
>
> They need not be disjoint, just sum to the expected values. This interface
> is very convenient for FE and FV methods. MatSetValues with ADD_VALUES has
> similar semantics without the intermediate storage, but it forces you to
> submit one element matrix at a time. It is the classic parallelism-granularity
> versus memory-use tradeoff, with MatSetValuesCOO being a clear win on GPUs
> and more nuanced for CPUs.
>
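
To illustrate the overlapping-contribution case Jed describes above, here is a sketch of a made-up 1D, two-element assembly on exactly 2 ranks, where both ranks contribute to the shared entry (1,1) and the contributions are summed, much like MatSetValues with ADD_VALUES:

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A;
  PetscMPIInt rank;
  PetscInt    coo_i[4], coo_j[4], k = 0;
  PetscScalar coo_v[4];

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 3, 3));
  PetscCall(MatSetFromOptions(A));

  /* Rank 0 assembles the element on rows/cols {0,1}, rank 1 the element
     on rows/cols {1,2}; the shared entry (1,1) receives 0.5 from each
     rank and ends up as 1.0 (assumes exactly 2 ranks). */
  PetscInt off = (rank == 0) ? 0 : 1;
  for (PetscInt r = 0; r < 2; r++)
    for (PetscInt c = 0; c < 2; c++) {
      coo_i[k] = off + r;
      coo_j[k] = off + c;
      coo_v[k] = (r == c) ? 0.5 : -0.5; /* toy element stiffness matrix */
      k++;
    }

  PetscCall(MatSetPreallocationCOO(A, 4, coo_i, coo_j));
  PetscCall(MatSetValuesCOO(A, coo_v, INSERT_VALUES));
  PetscCall(MatView(A, PETSC_VIEWER_STDOUT_WORLD));

  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}

Each rank stores its whole element matrix contiguously in the COO arrays, which is the granularity-versus-memory point: the intermediate arrays cost memory, but all contributions can be submitted in a single call.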