<div dir="ltr"><span style="font-size:12.8px">Thanks,</span><div style="font-size:12.8px"><br></div><div style="font-size:12.8px">As I already discussed with you, the matrix comes from an SPH (smoothed particle hydrodynamics) discretization, which is not fixed on a grid and changes over time.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Dec 12, 2016 at 1:10 AM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
> On Dec 11, 2016, at 6:04 PM, Massoud Rezavand <<a href="mailto:msdrezavand@gmail.com">msdrezavand@gmail.com</a>> wrote:<br>
><br>
> Thank you very much,<br>
><br>
> So, if I use PetscSplitOwnership() and then MatGetOwnershipRange() to prepare for preallocation, should MatSetSizes(A, local_size, local_size, N, N) be called with the local_size computed by PetscSplitOwnership()?<br>
<br>
</span> There is some confusion between the two responses: you cannot use MatGetOwnershipRange() for preallocation.<br>
<br>
Without preallocation:<br>
<span class=""><br>
> > PetscInt local_size = PETSC_DECIDE;<br>
> ><br>
> > MatSetSizes(A, local_size, local_size, N, N);<br>
<br>
</span> MatGetOwnershipRange(...)<br>
<br>
With preallocation:<br>
<span class="">> ><br>
> ><br>
> > 2)<br>
> ><br>
> > PetscInt local_size = PETSC_DECIDE;<br>
> ><br>
> > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N);<br>
> ><br>
> > MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);<br>
> > begin_row = end_row - local_size;<br>
<br>
</span> MatMPIAIJSetPreallocation(...);<br>
<br>
<br>
But note that normally, if the matrix comes from a discretization on a grid, you would not use either approach above. The parallel layout of the grid would determine the local sizes, and you would not obtain them with PetscSplitOwnership() or local_size = PETSC_DECIDE;<br>
<br>
Where is your matrix coming from?<br>
<span class="HOEnZb"><font color="#888888"><br>
Barry<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
<br>
> ><br>
> ><br>
<br>
<br>
><br>
> Thanks,<br>
> Massoud<br>
><br>
><br>
> On Mon, Dec 12, 2016 at 12:35 AM, Jed Brown <<a href="mailto:jed@jedbrown.org">jed@jedbrown.org</a>> wrote:<br>
> Massoud Rezavand <<a href="mailto:msdrezavand@gmail.com">msdrezavand@gmail.com</a>> writes:<br>
><br>
> > Dear PETSc team,<br>
> ><br>
> > What is the difference between the following two methods to get the local<br>
> > dimensions of a square matrix A? If they do the same, which one is<br>
> > recommended? Should I use MPI_Scan after both?<br>
><br>
> I would typically use 1 because it's fewer calls and automatically uses<br>
> the correct communicator. You can use MatGetOwnershipRange() instead of<br>
> manually using MPI_Scan.<br>
><br>
> > 1)<br>
> ><br>
> > PetscInt local_size = PETSC_DECIDE;<br>
> ><br>
> > MatSetSizes(A, local_size, local_size, N, N);<br>
> ><br>
> ><br>
> > 2)<br>
> ><br>
> > PetscInt local_size = PETSC_DECIDE;<br>
> ><br>
> > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N);<br>
> ><br>
> > MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);<br>
> > begin_row = end_row - local_size;<br>
> ><br>
> ><br>
> > Thanks in advance,<br>
> > Massoud<br>
><br>
<br>
</div></div></blockquote></div><br></div>