[petsc-users] MatZeroRows costly while applying 1st-kind Boundary Conditions
Barry Smith
bsmith at petsc.dev
Fri Nov 29 21:57:12 CST 2024
You need to call MatZeroRows() once, passing all the rows you want zeroed, instead of once for each row.
If you are running in parallel, each MPI process should call MatZeroRows() once, passing in its own list of rows to be zeroed. Each process can pass in different rows than the other processes.
BTW: you do not need to call MatAssemblyBegin/End() after MatZeroRows().
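A minimal sketch of that, reusing BCNodes and BCNodes_Length from your code below (the copy into a PetscInt buffer is only needed if BCNodes is not already a PetscInt array; if it is, you can pass it directly):
=========
PetscInt *rows;

PetscCall(PetscMalloc1(BCNodes_Length, &rows));
for (PetscInt key = 0; key < BCNodes_Length; key++) rows[key] = BCNodes[key];

/* One collective call zeroes every listed row and puts 1.0 on each diagonal.
   Passing a solution vector x and the right-hand side b as the last two
   arguments (instead of NULL, NULL) additionally sets b[row] = 1.0 * x[row]
   for each zeroed row, so the RHS is fixed without a separate
   VecSetValues() loop. */
PetscCall(MatZeroRows(A, BCNodes_Length, rows, 1.0, NULL, NULL));
PetscCall(PetscFree(rows));
/* No MatAssemblyBegin/End() is needed after this. */
=========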
Barry
> On Nov 29, 2024, at 9:56 PM, Qiyue Lu <qiyuelu1 at gmail.com> wrote:
>
> Hello,
> In the MPI context, after assembling the distributed matrix A (MATMPIAIJ) and the right-hand side b, I am trying to apply first-kind (Dirichlet) boundary conditions, using MatZeroRows() on A and VecSetValues() on b.
> The pseudo-code is:
> =========
> PetscInt pos;
> for (int key = 0; key < BCNodes_Length; key++) {
>   // Retrieve the global row index of this boundary node
>   pos = BCNodes[key];
>   // Zero every entry in that row and set the diagonal entry to 1.0
>   MatZeroRows(A, 1, &pos, 1.0, NULL, NULL);
> }
> MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
> MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
> =========
>
> For BCNodes_Length = 10^4, the for loop takes 8 seconds.
> For BCNodes_Length = 15*10^4, the for loop takes 3000 seconds.
> I am running on two compute nodes, each with 12 cores.
>
> My questions are:
> 1) Is this timing plausible? Is MatZeroRows() really so costly?
> 2) Any suggestions for applying first-kind boundary conditions with better performance?
>
> Thanks,
> Qiyue Lu