<div dir="ltr"><div dir="ltr"><div dir="ltr"><br><br><div class="gmail_quote"><div dir="ltr">On Fri, 23 Nov 2018 at 19:39, Klaus Burkart via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div class="gmail-m_-6593686182115005703ydp70b972a2yahoo-style-wrap" style="font-family:"Helvetica Neue",Helvetica,Arial,sans-serif;font-size:10px"><div></div>
<div><span> PetscInitialize(0,0,PETSC_NULL,PETSC_NULL);<br><br> MPI_Comm_size(PETSC_COMM_WORLD,&size);<br> MPI_Comm_rank(PETSC_COMM_WORLD,&rank);<br><br> MatCreate(PETSC_COMM_WORLD,&A);<br> //MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N);<br> MatSetType(A,MATMPIAIJ);<br> PetscInt local_size = PETSC_DECIDE;<br> PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N);<br> MPI_Scan(&local_size, &rend, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);<br> rstart = rend - local_size;<br> PetscInt d_nnz[local_size], o_nnz[local_size];<br>/*<br><br>compute d_nnz and o_nnz here<br><br> MatMPIAIJSetPreallocation(A,0,d_nnz,0,o_nnz);<br>*/<br><br>//***<br><br> PetscSynchronizedPrintf(PETSC_COMM_WORLD,"local_size = %d, on process %d\n", local_size, rank);<br> PetscSynchronizedPrintf(PETSC_COMM_WORLD,"rstart = %d, on process %d\n", rstart, rank);<br> PetscSynchronizedPrintf(PETSC_COMM_WORLD,"rend = %d, on process %d\n", rend, rank);<br><br></span></div></div></div></blockquote><div><br></div><div>Please read the manual page:</div><div><br></div><div><a href="https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscSynchronizedPrintf.html">https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscSynchronizedPrintf.html</a><br></div><div><br>It explicitly states </div><div>"REQUIRES a call to PetscSynchronizedFlush() by all the processes after the completion of the calls to PetscSynchronizedPrintf() for the information from all the processors to be printed." </div><div><br></div><div>Thanks,</div><div> Dave</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div class="gmail-m_-6593686182115005703ydp70b972a2yahoo-style-wrap" style="font-family:"Helvetica Neue",Helvetica,Arial,sans-serif;font-size:10px"><div><span></span><div><span> PetscFinalize();<br></span></div><div><br></div><div><div>Gives me :</div><div><br></div><div><span>local_size = 25, on process 0<br>rstart = 0, on process 0<br>rend = 25, on process 0<br><br></span>but there are 4 processes.<br></div><span></span></div></div><div><br></div>
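
[Editor's note: below is a minimal, self-contained sketch (not from the thread) of the print-then-flush pattern the linked manual page describes. It reuses the variable names and N = 100 from the quoted code and assumes a PETSc version around 3.10, as used in this thread, where PetscSynchronizedFlush() takes the output stream as its second argument; error checking is omitted for brevity.]

  #include <petscsys.h>

  int main(int argc, char **argv)
  {
    PetscMPIInt rank;
    PetscInt    N = 100, local_size = PETSC_DECIDE, rstart, rend;

    PetscInitialize(&argc, &argv, NULL, NULL);
    MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

    /* split the N rows across the ranks and recover this rank's range [rstart, rend) */
    PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N);
    MPI_Scan(&local_size, &rend, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);
    rstart = rend - local_size;

    /* each rank queues its own line here; nothing is printed yet */
    PetscSynchronizedPrintf(PETSC_COMM_WORLD, "rank %d: local_size = %D, rows [%D, %D)\n",
                            rank, local_size, rstart, rend);

    /* the collective flush is what actually emits the queued output, in rank order */
    PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);

    PetscFinalize();
    return 0;
  }

Run with, say, mpiexec -n 4, each of the 4 ranks should then report its own 25-row slice of the 100 rows instead of only rank 0's values appearing.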
</div><div id="gmail-m_-6593686182115005703yahoo_quoted_3674333506" class="gmail-m_-6593686182115005703yahoo_quoted">
<div style="font-family:"Helvetica Neue",Helvetica,Arial,sans-serif;font-size:13px;color:rgb(38,40,42)">
<div>

On Friday, 23 November 2018 at 19:51:26 CET, Smith, Barry F. <bsmith@mcs.anl.gov> wrote:
<div><div dir="ltr"><br clear="none"> The correct answer is computed but you are printing out the answer all wrong. <br clear="none"><br clear="none"> For PetscPrintf(PETSC_COMM_WORLD) only the FIRST process ever prints anything so you are having the first process print out the same values repeatedly.<br clear="none"><br clear="none"> Don't have the loop over size in the code. You can use PetscSynchronizedPrintf() to have each process print its own values.<br clear="none"><br clear="none"> Barry<br clear="none"><br clear="none"><div class="gmail-m_-6593686182115005703yqt1956348565" id="gmail-m_-6593686182115005703yqtfd20364"><br clear="none">> On Nov 23, 2018, at 6:44 AM, Klaus Burkart via petsc-users <<a shape="rect" href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br clear="none">> <br clear="none">> Hello,<br clear="none">> <br clear="none">> I am trying to compute the local row ranges allocated to the processes i.e. rstart and rend of each process, needed as a prerequisite for MatMPIAIJSetPreallocation using d_nnz and o_nnz.<br clear="none">> <br clear="none">> I tried the following:<br clear="none">> <br clear="none">> ...<br clear="none">> <br clear="none">> PetscInitialize(0,0,PETSC_NULL,PETSC_NULL);<br clear="none">> <br clear="none">> MPI_Comm_size(PETSC_COMM_WORLD,&size);<br clear="none">> MPI_Comm_rank(PETSC_COMM_WORLD,&rank);<br clear="none">> <br clear="none">> MatCreate(PETSC_COMM_WORLD,&A);<br clear="none">> MatSetType(A,MATMPIAIJ);<br clear="none">> PetscInt local_size = PETSC_DECIDE;<br clear="none">> PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N);<br clear="none">> MPI_Scan(&local_size, &rend, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);<br clear="none">> rstart = rend - local_size;<br clear="none">> PetscInt d_nnz[local_size], o_nnz[local_size];<br clear="none">> /*<br clear="none">> <br clear="none">> compute d_nnz and o_nnz here<br clear="none">> <br clear="none">> MatMPIAIJSetPreallocation(A,0,d_nnz,0,o_nnz);<br clear="none">> */<br clear="none">> <br clear="none">> for (rank = 0; rank < size; rank++) {<br clear="none">> PetscPrintf(PETSC_COMM_WORLD,"local_size = %d, on process %d\n", local_size, rank);<br clear="none">> PetscPrintf(PETSC_COMM_WORLD,"rstart = %d, on process %d\n", rstart, rank);<br clear="none">> PetscPrintf(PETSC_COMM_WORLD,"rend = %d, on process %d\n", rend, rank);<br clear="none">> }<br clear="none">> <br clear="none">> PetscFinalize();<br clear="none">> <br clear="none">> The local size is 25 rows on each process but rstart and rend are 0 and 25 on all processes, I expected 0 and 25, 25 and 50, 50 and 75 and 75 and 101. N = 100<br clear="none">> <br clear="none">> I can't spot the error. Any ideas, what's the problem?<br clear="none">> <br clear="none">> Klaus<br clear="none"></div></div></div>