<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Hi Matthew,</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Many thanks for the tip about the synchronized print; I wasn't aware of that routine.</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
It is great how many useful utility routines PETSc has - it's a big timesaver!</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Thanks,</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Dan<br>
</div>
<div>
<div id="appendonsend"></div>
<div style="font-family:Calibri,Helvetica,sans-serif; font-size:12pt; color:rgb(0,0,0)">
<br>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="divRplyFwdMsg" dir="ltr"><font style="font-size:11pt" face="Calibri, sans-serif" color="#000000"><b>From:</b> Matthew Knepley <knepley@gmail.com><br>
<b>Sent:</b> Thursday, May 20, 2021 10:31 AM<br>
<b>To:</b> dazza simplythebest <sayosale@hotmail.com><br>
<b>Cc:</b> Jose E. Roman <jroman@dsic.upv.es>; PETSc users list <petsc-users@mcs.anl.gov><br>
<b>Subject:</b> Re: [petsc-users] Code hangs when calling PetscIntView (MPI, fortran)</font>
<div> </div>
</div>
<div>
<div dir="ltr">
<div dir="ltr">On Thu, May 20, 2021 at 5:32 AM dazza simplythebest <<a href="mailto:sayosale@hotmail.com">sayosale@hotmail.com</a>> wrote:<br>
</div>
<div class="x_gmail_quote">
<blockquote class="x_gmail_quote" style="margin:0px 0px 0px 0.8ex; border-left:1px solid rgb(204,204,204); padding-left:1ex">
<div dir="ltr">
<div style="font-family:Calibri,Helvetica,sans-serif; font-size:12pt; color:rgb(0,0,0)">
Dear Jose,</div>
<div style="font-family:Calibri,Helvetica,sans-serif; font-size:12pt; color:rgb(0,0,0)">
Many thanks for the prompt explanation - that would definitely explain what is going on.</div>
<div style="font-family:Calibri,Helvetica,sans-serif; font-size:12pt; color:rgb(0,0,0)">
I will adjust my code accordingly.</div>
</div>
</blockquote>
<div><br>
</div>
<div>If you want to print different things from each process in parallel, I suggest</div>
<div><br>
</div>
<div> <a href="https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscSynchronizedPrintf.html">https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscSynchronizedPrintf.html</a></div>
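<div><br>
</div>
<div>A minimal sketch of how the loop in your reproducer might use it (it reuses your variables; only the buffer name "msg" is added for illustration): each process formats and queues its own lines with PetscSynchronizedPrintf(), which is not collective, and afterwards every process calls the collective PetscSynchronizedFlush().</div>
<div><br>
</div>
<div>  character(len=256) :: msg<br>
  do jm = 1, N_rows<br>
    CALL whose_row_is_it(JM, N_rows, NL3, PROC_ROW)<br>
    if (rank3 == PROC_ROW) then<br>
      ! not collective: only the owning process queues output for this row<br>
      write(msg,'(A,I0,A,I0)') 'rank ', rank3, ' owns row ', jm<br>
      call PetscSynchronizedPrintf(PETSC_COMM_WORLD, trim(msg)//new_line('a'), ierr_pets)<br>
      CHKERRA(ierr_pets)<br>
    endif<br>
  enddo<br>
  ! collective: every process must reach this call<br>
  call PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT, ierr_pets)<br>
  CHKERRA(ierr_pets)<br>
</div>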
<div><br>
</div>
<div> Thanks,</div>
<div><br>
</div>
<div> Matt</div>
<div> </div>
<blockquote class="x_gmail_quote" style="margin:0px 0px 0px 0.8ex; border-left:1px solid rgb(204,204,204); padding-left:1ex">
<div dir="ltr">
<div style="font-family:Calibri,Helvetica,sans-serif; font-size:12pt; color:rgb(0,0,0)">
Thanks again,</div>
<div style="font-family:Calibri,Helvetica,sans-serif; font-size:12pt; color:rgb(0,0,0)">
Dan.<br>
</div>
<div>
<div id="x_gmail-m_-632613171368988235appendonsend"></div>
<div style="font-family:Calibri,Helvetica,sans-serif; font-size:12pt; color:rgb(0,0,0)">
<br>
</div>
<hr style="display:inline-block; width:98%">
<div id="x_gmail-m_-632613171368988235divRplyFwdMsg" dir="ltr"><font style="font-size:11pt" face="Calibri, sans-serif" color="#000000"><b>From:</b> Jose E. Roman <<a href="mailto:jroman@dsic.upv.es" target="_blank">jroman@dsic.upv.es</a>><br>
<b>Sent:</b> Thursday, May 20, 2021 9:06 AM<br>
<b>To:</b> dazza simplythebest <<a href="mailto:sayosale@hotmail.com" target="_blank">sayosale@hotmail.com</a>><br>
<b>Cc:</b> PETSc users list <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>
<b>Subject:</b> Re: [petsc-users] Code hangs when calling PetscIntView (MPI, fortran)</font>
<div> </div>
</div>
<div><font size="2"><span style="font-size:11pt">
<div>If you look at the manpage <a href="https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscIntView.html" target="_blank">
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscIntView.html</a> you will see that PetscIntView() is collective. This means that all MPI processes must call this function, so it must not be called inside an IF (rank == ...) branch that only some processes enter.<br>
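<br>
For example, one way to keep the output inside the IF block is to view the array through a viewer whose communicator contains only the calling process - a minimal sketch based on the reproducer below (output from different processes may then interleave):<br>
<br>
      ! PETSC_VIEWER_STDOUT_SELF lives on PETSC_COMM_SELF, so this call is<br>
      ! collective only over the calling process and is safe inside the IF<br>
      call PetscIntView(NO_A_ENTRIES, JALOC, PETSC_VIEWER_STDOUT_SELF, ierr_pets)<br>
      CHKERRA(ierr_pets)<br>
<br>
Alternatively, keep PETSC_VIEWER_STDOUT_WORLD and restructure the loop so that every process reaches the PetscIntView() call.<br>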
<br>
Jose<br>
<br>
> On 20 May 2021, at 10:25, dazza simplythebest <<a href="mailto:sayosale@hotmail.com" target="_blank">sayosale@hotmail.com</a>> wrote:<br>
> <br>
> Dear All,<br>
> As part of preparing a code to call the SLEPc eigenvalue-solving library,<br>
> I am constructing a matrix in sparse CSR format row-by-row. Just for debugging <br>
> purposes I write out the column values for a given row, which are stored in a <br>
> PetscInt allocatable vector, using PetscIntView.<br>
> <br>
> Everything works fine when the number of MPI processes exactly divide the<br>
> number of rows of the matrix, and so each process owns the same number of rows.<br>
> However, when the number of MPI processes does not exactly divide the<br>
> number of rows of the matrix, and so each process owns a different number of rows,<br>
> the code hangs when it reaches the line that calls PetscIntView.<br>
> To be precise, the code hangs on the final row owned by a process other than root.<br>
> If I however comment out the call to PetscIntView the code completes without error,<br>
> and produces the correct eigenvalues (hence we are not missing a row / miswriting a row).<br>
> Note also that a simple direct write of this same array using a plain Fortran WRITE statement<br>
> completes without problem.<br>
> <br>
> I have attached below a small code that reproduces the problem.<br>
> For this code we have nominally assigned 200 rows to our matrix. The code runs without<br>
> problem using 1,2,4,5,8 or 10 MPI processes, all of which precisely divide 200,<br>
> but will hang for 3 MPI processes for example.<br>
> For the case of 3 MPI processes the subroutine WHOSE_ROW_IS_IT allocates the rows<br>
> to each process as:<br>
> process no. / first row / last row / no. of rows<br>
> 0 / 1 / 66 / 66<br>
> 1 / 67 / 133 / 67<br>
> 2 / 134 / 200 / 67<br>
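> (These figures follow from the integer arithmetic in WHOSE_ROW_IS_IT below: P = 200/3 = 66 and REM = 200 - 3*66 = 2, so the<br>
> first NO_PROCESSES - REM = 1 process gets P = 66 rows and the remaining REM = 2 processes each get P+1 = 67 rows.)<br>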
> <br>
> The code will hang when process 1 calls PetscIntView for its last row, row 133 for example.<br>
> <br>
> One piece of additional information that may be relevant is that the code does run to completion<br>
> without hanging if I comment out the final SLEPc/MPI finalisation command<br>
> CALL SlepcFinalize(ierr_pets)<br>
> (Of course I then get 'bad termination' errors, but otherwise the run is successful.)<br>
> <br>
> I would appreciate it if anyone has any ideas on what is going wrong!<br>
> Many thanks,<br>
> Dan.<br>
> <br>
> <br>
> code:<br>
> <br>
> MODULE ALL_STAB_ROUTINES<br>
> IMPLICIT NONE<br>
> CONTAINS<br>
> <br>
> SUBROUTINE WHOSE_ROW_IS_IT(ROW_NO, TOTAL_NO_ROWS, NO_PROCESSES, &<br>
> & OWNER)<br>
> ! THIS ROUTINE ALLOCATES ROWS EVENLY BETWEEN mpi PROCESSES<br>
> #include <slepc/finclude/slepceps.h><br>
> use slepceps<br>
> IMPLICIT NONE<br>
> PetscInt, INTENT(IN) :: ROW_NO, TOTAL_NO_ROWS, NO_PROCESSES<br>
> PetscInt, INTENT(OUT) :: OWNER<br>
> PetscInt :: P, REM<br>
> <br>
> P = TOTAL_NO_ROWS / NO_PROCESSES ! NOTE INTEGER DIVISION<br>
> REM = TOTAL_NO_ROWS - P*NO_PROCESSES<br>
> IF (ROW_NO < (NO_PROCESSES - REM)*P + 1 ) THEN<br>
> OWNER = (ROW_NO - 1)/P ! NOTE INTEGER DIVISION<br>
> ELSE<br>
> OWNER = ( ROW_NO + NO_PROCESSES - REM -1 )/(P+1) ! NOTE INTEGER DIVISION<br>
> ENDIF <br>
> END SUBROUTINE WHOSE_ROW_IS_IT<br>
> END MODULE ALL_STAB_ROUTINES<br>
> <br>
> <br>
> PROGRAM trialer<br>
> USE MPI<br>
> #include <slepc/finclude/slepceps.h><br>
> use slepceps<br>
> USE ALL_STAB_ROUTINES<br>
> IMPLICIT NONE<br>
> PetscMPIInt rank3, total_mpi_size<br>
> PetscInt nl3, code, PROC_ROW, ISTATUS, jm, N_rows,NO_A_ENTRIES<br>
> PetscInt, ALLOCATABLE, DIMENSION(:) :: JALOC<br>
> PetscInt, PARAMETER :: ZERO = 0 , ONE = 1, TWO = 2, THREE = 3 <br>
> PetscErrorCode ierr_pets<br>
> <br>
> ! Initialise SLEPc/MPI<br>
> call SlepcInitialize(PETSC_NULL_CHARACTER,ierr_pets) ! note that this initialises MPI<br>
> call MPI_COMM_SIZE(MPI_COMM_WORLD, total_mpi_size, ierr_pets) !! find total no of MPI processes<br>
> nL3= total_mpi_size<br>
> call MPI_COMM_RANK(MPI_COMM_WORLD,rank3,ierr_pets) !! find my overall rank -> rank3<br>
> write(*,*)'Welcome: PROCESS NO , TOTAL NO. OF PROCESSES = ',rank3, nl3<br>
> <br>
> N_rows = 200 ! NUMBER OF ROWS OF A NOTIONAL MATRIX<br>
> NO_A_ENTRIES = 12 ! NUMBER OF ENTRIES FOR JALOC<br>
> <br>
> ! LOOP OVER ROWS <br>
> do jm = 1, N_rows<br>
> <br>
> CALL whose_row_is_it(JM, N_rows , NL3, PROC_ROW) ! FIND OUT WHICH PROCESS OWNS ROW<br>
> if (rank3 == PROC_ROW) then ! IF mpi PROCESS OWNS THIS ROW THEN ..<br>
> ! ALLOCATE jaloc ARRAY AND INITIALISE<br>
> <br>
> allocate(jaloc(NO_A_ENTRIES), STAT=ISTATUS )<br>
> jaloc = three<br>
> <br>
> <br>
> WRITE(*,*)'JALOC',JALOC ! THIS SIMPLE PRINT ALWAYS WORKS<br>
> write(*,*)'calling PetscIntView: PROCESS NO. ROW NO.',rank3, jm<br>
> ! THIS CALL TO PetscIntView CAUSES CODE TO HANG WHEN E.G. total_mpi_size=3, JM=133<br>
> call PetscIntView(NO_A_ENTRIES,JALOC(1:NO_A_ENTRIES), &<br>
> & PETSC_VIEWER_STDOUT_WORLD, ierr_pets)<br>
> CHKERRA(ierr_pets)<br>
> deallocate(jaloc)<br>
> endif<br>
> enddo<br>
> <br>
> CALL SlepcFinalize(ierr_pets)<br>
> end program trialer<br>
<br>
</div>
</span></font></div>
</div>
</div>
</blockquote>
</div>
<br clear="all">
<div><br>
</div>
-- <br>
<div dir="ltr" class="x_gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener</div>
<div><br>
</div>
<div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</body>
</html>