<div dir="ltr"><div>Hi Devs, hope you are having a great weekend,</div><div><br></div><div>I could finally parallelize my linear solver and implement it into the rest of the code in a way that only the linear system is solved in parallel, great news for my team, but there is a catch and is that i don't see any speedup in the linear system, i don't know if its the MPI in the cluster we are using, but im not sure on how to debug it,</div><div><br></div><div>On the other hand and because of this issue i was trying to do -log_summary or -log_view and i noticed the program in this context hangs when is time of producing the log, if i debug this for 2 cores, process 0 exits normally but process 1 hangs in the vectorscatterbegin() with scatter_reverse way back in the code, and even after destroying all associated objects and calling petscfinalize(), so im really clueless on why is this, as it only happens for -log_* or -ksp_view options.</div><div><br></div><div>my -ksp_view shows this:</div><div><br></div><div> <span style="font-family:menlo;font-size:11px">KSP Object: 2 MPI processes</span></div>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> type: gcr</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> GCR: restart = 30 </span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> GCR: restarts performed = 20 </span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> maximum iterations=10000, initial guess is zero</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> tolerances: relative=1e-14, absolute=1e-50, divergence=10000.</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> right preconditioning</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> using UNPRECONDITIONED norm type for convergence test</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures">PC Object: 2 MPI processes</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> type: bjacobi</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> block Jacobi: number of blocks = 2</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> Local solve is same for all blocks, in the following KSP and PC objects:</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> KSP Object: (sub_) 1 MPI processes</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> type: preonly</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> maximum iterations=10000, initial guess is zero</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> tolerances: relative=1e-05, absolute=1e-50, divergence=10000.</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> left preconditioning</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> using NONE norm type for convergence test</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> PC Object: (sub_) 1 MPI processes</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> type: ilu</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> ILU: out-of-place factorization</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> 0 levels of fill</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> tolerance for zero pivot 2.22045e-14</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> matrix ordering: natural</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> factor fill ratio given 1., needed 1.</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> Factored matrix follows:</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> Mat Object: 1 MPI processes</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> type: seqaij</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> rows=100000, cols=100000</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> package used to perform factorization: petsc</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> total: nonzeros=1675180, allocated nonzeros=1675180</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> total number of mallocs used during MatSetValues calls =0</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> not using I-node routines</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> linear system matrix = precond matrix:</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> Mat Object: 1 MPI processes</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> type: seqaij</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> rows=100000, cols=100000</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> total: nonzeros=1675180, allocated nonzeros=1675180</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> total number of mallocs used during MatSetValues calls =0</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> not using I-node routines</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> linear system matrix = precond matrix:</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> Mat Object: 2 MPI processes</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> type: mpiaij</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> rows=200000, cols=200000</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> total: nonzeros=3373340, allocated nonzeros=3373340</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> total number of mallocs used during MatSetValues calls =0</span></p>
<p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> not using I-node (on process 0) routines</span></p><p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"><br></span></p><p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"><br></span></p><p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures">And i configured my PC object as:</span></p><p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"><br></span></p><p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> </span><span style="font-variant-ligatures:no-common-ligatures;color:rgb(52,187,199)">call</span><span style="font-variant-ligatures:no-common-ligatures"> PCSetType(mg,PCHYPRE,ierr)</span></p><p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> </span><span style="font-variant-ligatures:no-common-ligatures;color:rgb(52,187,199)">call</span><span style="font-variant-ligatures:no-common-ligatures"> PCHYPRESetType(mg,</span><span style="font-variant-ligatures:no-common-ligatures;color:rgb(195,55,32)">'boomeramg'</span><span style="font-variant-ligatures:no-common-ligatures">,ierr)</span></p><p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo;min-height:13px"><span style="font-variant-ligatures:no-common-ligatures"></span><br></p><p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> </span><span style="font-variant-ligatures:no-common-ligatures;color:rgb(52,187,199)">call</span><span style="font-variant-ligatures:no-common-ligatures"> PetscOptionsSetValue(PETSC_NULL_OBJECT,</span><span style="font-variant-ligatures:no-common-ligatures;color:rgb(195,55,32)">'pc_hypre_boomeramg_nodal_coarsen'</span><span style="font-variant-ligatures:no-common-ligatures">,</span><span style="font-variant-ligatures:no-common-ligatures;color:rgb(195,55,32)">'1'</span><span style="font-variant-ligatures:no-common-ligatures">,ierr)</span></p><p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo"><span style="font-variant-ligatures:no-common-ligatures"> </span><span style="font-variant-ligatures:no-common-ligatures;color:rgb(52,187,199)">call</span><span style="font-variant-ligatures:no-common-ligatures"> PetscOptionsSetValue(PETSC_NULL_OBJECT,</span><span style="font-variant-ligatures:no-common-ligatures;color:rgb(195,55,32)">'pc_hypre_boomeramg_vec_interp_variant'</span><span style="font-variant-ligatures:no-common-ligatures">,</span><span style="font-variant-ligatures:no-common-ligatures;color:rgb(195,55,32)">'1'</span><span style="font-variant-ligatures:no-common-ligatures">,ierr)</span></p><p style="margin:0px;font-size:11px;line-height:normal;font-family:menlo">

What are your thoughts?

Thanks,

Manuel


On Fri, Jan 6, 2017 at 1:58 PM, Manuel Valera <mvalera@mail.sdsu.edu> wrote:
Awesome, that did it, thanks once again.

On Fri, Jan 6, 2017 at 1:53 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
   Take the scatter out of the if () since everyone does it and get rid of the VecView().

   Does this work? If not, where is it hanging?

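A minimal sketch of the structure being suggested here, reusing the variable names from the code quoted below (this is an assumed layout, not tested code):

      if (rankl == 0) then
         ! only root owns the data; filling and assembling the
         ! sequential vector bp0 is purely local work
         call VecSetValues(bp0,nbdp,ind,Rhs,INSERT_VALUES,ierr)
         call VecAssemblyBegin(bp0,ierr)
         call VecAssemblyEnd(bp0,ierr)
      endif

      ! the scatter is collective on the communicator of bp2,
      ! so every rank must call it, outside any rank test
      call VecScatterBegin(ctr,bp0,bp2,INSERT_VALUES,SCATTER_REVERSE,ierr)
      call VecScatterEnd(ctr,bp0,bp2,INSERT_VALUES,SCATTER_REVERSE,ierr)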
> On Jan 6, 2017, at 3:29 PM, Manuel Valera <mvalera@mail.sdsu.edu> wrote:
>
> Thanks Dave,
>
> I think it is interesting that it never gave an error about this; after adding the VecAssembly calls it still shows the same behavior, without complaining. I did:
>
> if(rankl==0)then
>
> call VecSetValues(bp0,nbdp,ind,Rhs,INSERT_VALUES,ierr)
> call VecAssemblyBegin(bp0,ierr) ; call VecAssemblyEnd(bp0,ierr);
> CHKERRQ(ierr)
>
> endif
<span class="gmail-m_9152750964337008787im gmail-m_9152750964337008787HOEnZb">><br>
><br>
> call VecScatterBegin(ctr,bp0,bp2,IN<wbr>SERT_VALUES,SCATTER_REVERSE,ie<wbr>rr)<br>
> call VecScatterEnd(ctr,bp0,bp2,INSE<wbr>RT_VALUES,SCATTER_REVERSE,ierr<wbr>)<br>
> print*,"done! "<br>
> CHKERRQ(ierr)<br>
><br>
><br>
</span><div class="gmail-m_9152750964337008787HOEnZb"><div class="gmail-m_9152750964337008787h5">> CHKERRQ(ierr)<br>
><br>
><br>
> Thanks.<br>
><br>
> On Fri, Jan 6, 2017 at 12:44 PM, Dave May <dave.mayhem23@gmail.com> wrote:
>
>
> On 6 January 2017 at 20:24, Manuel Valera <mvalera@mail.sdsu.edu> wrote:
> Great help Barry, I had totally overlooked that option (it is explicit in the VecScatterBegin call help page but not in VecScatterCreateToZero, as I read later).
>
> So I used that and it works partially: it scatters the values assigned in root but not the rest, and if I call VecScatterBegin from outside root it hangs. The code currently looks like this:
>
> call VecScatterCreateToZero(bp2,ctr,bp0,ierr); CHKERRQ(ierr)
>
> call PetscObjectSetName(bp0, 'bp0:',ierr)
>
> if(rankl==0)then
>
> call VecSetValues(bp0,nbdp,ind,Rhs,INSERT_VALUES,ierr)
>
> call VecView(bp0,PETSC_VIEWER_STDOUT_WORLD,ierr)
>
>
> You need to call
>
> VecAssemblyBegin(bp0);
> VecAssemblyEnd(bp0);
> after your last call to VecSetValues() before you can do any operations with bp0.
>
> With your current code, the call to VecView should produce an error if you used the error checking macro CHKERRQ(ierr) (as should VecScatter{Begin,End}).
>
> Thanks,
> Dave
>
>
> call VecScatterBegin(ctr,bp0,bp2,INSERT_VALUES,SCATTER_REVERSE,ierr)
> call VecScatterEnd(ctr,bp0,bp2,INSERT_VALUES,SCATTER_REVERSE,ierr)
> print*,"done! "
> CHKERRQ(ierr)
>
> endif
>
> ! call VecScatterBegin(ctr,bp0,bp2,INSERT_VALUES,SCATTER_REVERSE,ierr)
> ! call VecScatterEnd(ctr,bp0,bp2,INSERT_VALUES,SCATTER_REVERSE,ierr)
>
> call VecView(bp2,PETSC_VIEWER_STDOUT_WORLD,ierr)
>
> call PetscBarrier(PETSC_NULL_OBJECT,ierr)
>
> call exit()
>
>
>
> And the output is: (with bp the right answer)
>
> Vec Object:bp: 2 MPI processes
>   type: mpi
> Process [0]
> 1.
> 2.
> Process [1]
> 4.
> 3.
> Vec Object:bp2: 2 MPI processes (before scatter)
>   type: mpi
> Process [0]
> 0.
> 0.
> Process [1]
> 0.
> 0.
> Vec Object:bp0: 1 MPI processes
>   type: seq
> 1.
> 2.
> 4.
> 3.
> done!
> Vec Object:bp2: 2 MPI processes (after scatter)
>   type: mpi
> Process [0]
> 1.
> 2.
> Process [1]
> 0.
> 0.
>
>
>
> Thanks immensely for your help,
>
> Manuel
>
>
> On Thu, Jan 5, 2017 at 4:39 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
>
> > On Jan 5, 2017, at 6:21 PM, Manuel Valera <mvalera@mail.sdsu.edu> wrote:
> >
> > Hello Devs, it's me again,
> >
> > I'm trying to distribute a vector to all the processes. The vector would originally be in root as a sequential vector and I would like to scatter it; what would be the best call to do this?
> >
> > I already know how to gather a distributed vector to root with VecScatterCreateToZero; this would be the inverse operation.
> >
> Use the same VecScatter object but with SCATTER_REVERSE; note you need to reverse the two vector arguments as well.
>
>
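A minimal sketch of the forward/reverse pairing being described, assuming ctr was created with VecScatterCreateToZero(bp2,ctr,bp0,ierr) as in the code below; both pairs are collective, so every rank calls them:

      ! forward: gather the distributed bp2 into the sequential bp0 on rank 0
      call VecScatterBegin(ctr,bp2,bp0,INSERT_VALUES,SCATTER_FORWARD,ierr)
      call VecScatterEnd(ctr,bp2,bp0,INSERT_VALUES,SCATTER_FORWARD,ierr)

      ! reverse: same scatter object, vector arguments swapped, pushing the
      ! values held in bp0 on rank 0 back out to the distributed bp2
      call VecScatterBegin(ctr,bp0,bp2,INSERT_VALUES,SCATTER_REVERSE,ierr)
      call VecScatterEnd(ctr,bp0,bp2,INSERT_VALUES,SCATTER_REVERSE,ierr)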
> > I'm currently trying with VecScatterCreate(), and as of now I am doing the following:
> >
> >
> > if(rank==0)then
> >
> >
> > call VecCreate(PETSC_COMM_SELF,bp0,ierr); CHKERRQ(ierr) !if i use WORLD
> > !freezes in SetSizes
> > call VecSetSizes(bp0,PETSC_DECIDE,nbdp,ierr); CHKERRQ(ierr)
> > call VecSetType(bp0,VECSEQ,ierr)
> > call VecSetFromOptions(bp0,ierr); CHKERRQ(ierr)
> >
> >
> > call VecSetValues(bp0,nbdp,ind,Rhs,INSERT_VALUES,ierr)
> >
> > !call VecSet(bp0,5.0D0,ierr); CHKERRQ(ierr)
> >
> >
> > call VecView(bp0,PETSC_VIEWER_STDOUT_WORLD,ierr)
> >
> > call VecAssemblyBegin(bp0,ierr) ; call VecAssemblyEnd(bp0,ierr) !rhs
> >
> > do i=0,nbdp-1,1
> >    ind(i+1) = i
> > enddo
> >
> > call ISCreateGeneral(PETSC_COMM_SELF,nbdp,ind,PETSC_COPY_VALUES,locis,ierr)
> >
> > !call VecScatterCreate(bp0,PETSC_NULL_OBJECT,bp2,is,ctr,ierr) !if i use SELF
> > !freezes here.
> >
> > call VecScatterCreate(bp0,locis,bp2,PETSC_NULL_OBJECT,ctr,ierr)
> >
> > endif
> >
> > bp2 being the receptor MPI vector to scatter to
> >
> > But it freezes in VecScatterCreate when trying to use more than one processor. What would be a better approach?
> >
> >
> > Thanks once again,
> >
> > Manuel
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > On Wed, Jan 4, 2017 at 3:30 PM, Manuel Valera <mvalera@mail.sdsu.edu> wrote:
> > Thanks, I had no idea how to debug and read those logs; that solved this issue at least (I was sending a message from root to everyone else, but trying to catch from everyone else including root).
> >
> > Until next time, many thanks,
> >
> > Manuel
> >
> > On Wed, Jan 4, 2017 at 3:23 PM, Matthew Knepley <knepley@gmail.com> wrote:
> > On Wed, Jan 4, 2017 at 5:21 PM, Manuel Valera <mvalera@mail.sdsu.edu> wrote:
> > I did a PetscBarrier just before calling the VecCreate routine, and I'm pretty sure I'm calling it from every processor; the code looks like this:
> >
> > From the gdb trace.
> >
> > Proc 0: Is in some MPI routine you call yourself, line 113
> >
> > Proc 1: Is in VecCreate(), line 130
> >
> > You need to fix your communication code.
> >
> > Matt
> >
> > call PetscBarrier(PETSC_NULL_OBJECT,ierr)
> >
> > print*,'entering POInit from',rank
> > !call exit()
> >
> > call PetscObjsInit()
> >
> >
> > And the output gives:
> >
> > entering POInit from 0
> > entering POInit from 1
> > entering POInit from 2
> > entering POInit from 3
> >
> >
> > Still hangs in the same way,
> >
> > Thanks,
> >
> > Manuel
> >
> >
> >
> > On Wed, Jan 4, 2017 at 2:55 PM, Manuel Valera <mvalera@mail.sdsu.edu> wrote:
> > Thanks for the answers!
> >
> > Here's the screenshot of what I got from bt in gdb (great hint on how to debug in PETSc, I didn't know that).
> >
> > I don't really know what to look at here,
> >
> > Thanks,
> >
> > Manuel
> >
> > On Wed, Jan 4, 2017 at 2:39 PM, Dave May <dave.mayhem23@gmail.com> wrote:
> > Are you certain ALL ranks in PETSC_COMM_WORLD call these function(s)? These functions cannot be inside if statements like
> > if (rank == 0){
> >   VecCreateMPI(...)
> > }
> >
> >
> > On Wed, 4 Jan 2017 at 23:34, Manuel Valera <mvalera@mail.sdsu.edu> wrote:
> > Thanks Dave for the quick answer, I appreciate it,
> >
> > I just tried that and it didn't make a difference. Any other suggestions?
> >
> > Thanks,
> > Manuel
> >
> > On Wed, Jan 4, 2017 at 2:29 PM, Dave May <dave.mayhem23@gmail.com> wrote:
> > You need to swap the order of your function calls.
> > Call VecSetSizes() before VecSetType().
> >
> > Thanks,
> > Dave
> >
> >
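A minimal sketch of the ordering suggested above, reusing the names from the code quoted further down (an assumed rearrangement, not tested):

      ! create, then set sizes, then set the type
      call VecCreate(PETSC_COMM_WORLD,xp,ierr); CHKERRQ(ierr)
      call VecSetSizes(xp,PETSC_DECIDE,nbdp,ierr); CHKERRQ(ierr)
      call VecSetType(xp,VECMPI,ierr)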
> > On Wed, 4 Jan 2017 at 23:21, Manuel Valera <mvalera@mail.sdsu.edu> wrote:
> > Hello all, happy new year,
> >
> > I'm working on parallelizing my code. It worked and provided some results when I simply launched it on more than one processor, but it created artifacts, since I don't actually want one image of the whole program on each processor conflicting with the others.
> >
> > Since the pressure solver is the main part I need in parallel, I'm choosing MPI to run everything on the root processor until it is time to solve for pressure; at this point I'm trying to create a distributed vector using either
> >
> > call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,nbdp,xp,ierr)
> > or
> > call VecCreate(PETSC_COMM_WORLD,xp,ierr); CHKERRQ(ierr)
> > call VecSetType(xp,VECMPI,ierr)
> > call VecSetSizes(xp,PETSC_DECIDE,nbdp,ierr); CHKERRQ(ierr)
> >
> >
> > In both cases the program hangs at this point, something that never happened in the naive way I described before. I've made sure the global size, nbdp, is the same on every processor. What can be wrong?
> >
> > Thanks for your kind help,
> >
> > Manuel.
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> > -- Norbert Wiener
> >
> >
>
>
>
>