[petsc-users] Incorrect local row ranges allocated to the processes i.e. rstart and rend are not what I expected
Dave May
dave.mayhem23 at gmail.com
Fri Nov 23 14:24:12 CST 2018
On Fri, 23 Nov 2018 at 19:39, Klaus Burkart via petsc-users <petsc-users at mcs.anl.gov> wrote:
> PetscInitialize(0,0,PETSC_NULL,PETSC_NULL);
>
> MPI_Comm_size(PETSC_COMM_WORLD,&size);
> MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
>
> MatCreate(PETSC_COMM_WORLD,&A);
> //MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N);
> MatSetType(A,MATMPIAIJ);
> PetscInt local_size = PETSC_DECIDE;
> PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N);
> MPI_Scan(&local_size, &rend, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);
> rstart = rend - local_size;
> PetscInt d_nnz[local_size], o_nnz[local_size];
> /*
>
> compute d_nnz and o_nnz here
>
> MatMPIAIJSetPreallocation(A,0,d_nnz,0,o_nnz);
> */
>
> //***
>
> PetscSynchronizedPrintf(PETSC_COMM_WORLD,"local_size = %d, on
> process %d\n", local_size, rank);
> PetscSynchronizedPrintf(PETSC_COMM_WORLD,"rstart = %d, on process
> %d\n", rstart, rank);
> PetscSynchronizedPrintf(PETSC_COMM_WORLD,"rend = %d, on process
> %d\n", rend, rank);
>
>
Please read the manual page:
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscSynchronizedPrintf.html
It explicitly states
"REQUIRES a call to PetscSynchronizedFlush() by all the processes after the
completion of the calls to PetscSynchronizedPrintf() for the information
from all the processors to be printed."
Thanks,
Dave
> PetscFinalize();
>
> This gives me:
>
> local_size = 25, on process 0
> rstart = 0, on process 0
> rend = 25, on process 0
>
> but there are 4 processes.
>
> On Friday, 23 November 2018, 19:51:26 CET, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
>
>
>
> The correct answer is computed, but you are printing it out incorrectly.
>
> With PetscPrintf(PETSC_COMM_WORLD), only the FIRST process ever prints
> anything, so your loop has the first process print the same values
> repeatedly.
>
> Drop the loop over size; you can use PetscSynchronizedPrintf() to have
> each process print its own values.
>
> Barry
>
>
> > On Nov 23, 2018, at 6:44 AM, Klaus Burkart via petsc-users <petsc-users at mcs.anl.gov> wrote:
> >
> > Hello,
> >
> > I am trying to compute the local row ranges allocated to the processes, i.e. rstart and rend of each process, needed as a prerequisite for MatMPIAIJSetPreallocation using d_nnz and o_nnz.
> >
> > I tried the following:
> >
> > ...
> >
> > PetscInitialize(0,0,PETSC_NULL,PETSC_NULL);
> >
> > MPI_Comm_size(PETSC_COMM_WORLD,&size);
> > MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
> >
> > MatCreate(PETSC_COMM_WORLD,&A);
> > MatSetType(A,MATMPIAIJ);
> > PetscInt local_size = PETSC_DECIDE;
> > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N);
> > MPI_Scan(&local_size, &rend, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);
> > rstart = rend - local_size;
> > PetscInt d_nnz[local_size], o_nnz[local_size];
> > /*
> >
> > compute d_nnz and o_nnz here
> >
> > MatMPIAIJSetPreallocation(A,0,d_nnz,0,o_nnz);
> > */
> >
> > for (rank = 0; rank < size; rank++) {
> >     PetscPrintf(PETSC_COMM_WORLD,"local_size = %d, on process %d\n", local_size, rank);
> >     PetscPrintf(PETSC_COMM_WORLD,"rstart = %d, on process %d\n", rstart, rank);
> >     PetscPrintf(PETSC_COMM_WORLD,"rend = %d, on process %d\n", rend, rank);
> > }
> >
> > PetscFinalize();
> >
> > The local size is 25 rows on each process, but rstart and rend are 0 and 25 on all processes. I expected 0 and 25, 25 and 50, 50 and 75, and 75 and 100 (N = 100).
> >
> > I can't spot the error. Any idea what the problem is?
> >
> > Klaus
>