[petsc-users] Incorrect local row ranges allocated to the processes i.e. rstart and rend are not what I expected

Smith, Barry F. bsmith at mcs.anl.gov
Fri Nov 23 12:51:23 CST 2018


   The correct values are computed, but you are printing them out incorrectly.

   With PetscPrintf(PETSC_COMM_WORLD), only the FIRST process ever prints anything, so you are having the first process print out its own values repeatedly.

    Remove the loop over size from the code. You can use PetscSynchronizedPrintf() to have each process print its own values.
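
    For example, something like this in place of the loop (an untested sketch, using the variables already in your code) has each rank print its own values:

        /* each rank prints its own local_size, rstart, rend; output is collected in rank order */
        PetscSynchronizedPrintf(PETSC_COMM_WORLD,"local_size = %d, rstart = %d, rend = %d, on process %d\n",(int)local_size,(int)rstart,(int)rend,rank);
        /* required after PetscSynchronizedPrintf() to actually emit the queued output */
        PetscSynchronizedFlush(PETSC_COMM_WORLD,PETSC_STDOUT);

    The PetscSynchronizedFlush() call is what actually writes the collected output.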

    Barry


> On Nov 23, 2018, at 6:44 AM, Klaus Burkart via petsc-users <petsc-users at mcs.anl.gov> wrote:
> 
> Hello,
> 
> I am trying to compute the local row ranges allocated to the processes, i.e. rstart and rend of each process, which are needed as a prerequisite for MatMPIAIJSetPreallocation using d_nnz and o_nnz.
> 
> I tried the following:
> 
> ...
> 
>     PetscInitialize(0,0,PETSC_NULL,PETSC_NULL);
> 
>     MPI_Comm_size(PETSC_COMM_WORLD,&size);
>     MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
> 
>     MatCreate(PETSC_COMM_WORLD,&A);
>     MatSetType(A,MATMPIAIJ);
>     PetscInt local_size = PETSC_DECIDE;
>     PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N);
>     MPI_Scan(&local_size, &rend, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);
>     rstart = rend - local_size;
>     PetscInt d_nnz[local_size], o_nnz[local_size];
> /*
> 
> compute d_nnz and o_nnz here
> 
>     MatMPIAIJSetPreallocation(A,0,d_nnz,0,o_nnz);
> */
> 
>     for (rank = 0; rank < size; rank++) {
>     PetscPrintf(PETSC_COMM_WORLD,"local_size   =  %d, on process %d\n", local_size, rank);
>     PetscPrintf(PETSC_COMM_WORLD,"rstart =  %d, on process %d\n", rstart, rank);
>     PetscPrintf(PETSC_COMM_WORLD,"rend =  %d, on process %d\n", rend, rank);
>     }
> 
>     PetscFinalize();
> 
> The local size is 25 rows on each process, but rstart and rend are 0 and 25 on all processes. With N = 100, I expected 0 and 25, 25 and 50, 50 and 75, and 75 and 100.
> 
> I can't spot the error. Any idea what the problem is?
> 
> Klaus


