[petsc-users] VecSetSizes hangs in MPI

Manuel Valera mvalera at mail.sdsu.edu
Wed Jan 4 17:21:48 CST 2017


I did a PetscBarrier just before calling the vector-creation routine, and I'm
pretty sure I'm calling it from every processor; the code looks like this:

call PetscBarrier(PETSC_NULL_OBJECT,ierr)

print*,'entering POInit from',rank
!call exit()

call PetscObjsInit()
And the output gives:

 entering POInit from           0
 entering POInit from           1
 entering POInit from           2
 entering POInit from           3


It still hangs in the same way.

Thanks,

Manuel



On Wed, Jan 4, 2017 at 2:55 PM, Manuel Valera <mvalera at mail.sdsu.edu> wrote:

> Thanks for the answers!
>
> Here is the screenshot of what I got from bt in gdb (great hint on how to
> debug in PETSc, I didn't know that).
>
> I don't really know what to look at here,
>
> Thanks,
>
> Manuel
>
> On Wed, Jan 4, 2017 at 2:39 PM, Dave May <dave.mayhem23 at gmail.com> wrote:
>
>> Are you certain ALL ranks in PETSC_COMM_WORLD call these function(s)?
>> These functions cannot be inside if statements like
>> if (rank == 0){
>>   VecCreateMPI(...)
>> }
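
A minimal sketch of the pattern described above, reusing the xp, nbdp, and
rank names from the code in this thread; the guarded block and the assembly
calls are illustrative additions, not taken from the original messages:

     ! Collective creation: every rank of PETSC_COMM_WORLD makes this call,
     ! outside any rank-guarded branch.
     call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,nbdp,xp,ierr); CHKERRQ(ierr)

     ! Rank-dependent work can still be guarded, as long as the collective
     ! calls themselves are not.
     if (rank == 0) then
        ! ... compute and set vector entries on the root processor ...
     end if

     ! Assembly is also collective, so it is called on every rank as well.
     call VecAssemblyBegin(xp,ierr); CHKERRQ(ierr)
     call VecAssemblyEnd(xp,ierr); CHKERRQ(ierr)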
>>
>>
>> On Wed, 4 Jan 2017 at 23:34, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
>>
>>> Thanks Dave for the quick answer, I appreciate it.
>>>
>>> I just tried that and it didn't make a difference. Any other suggestions?
>>>
>>> Thanks,
>>> Manuel
>>>
>>> On Wed, Jan 4, 2017 at 2:29 PM, Dave May <dave.mayhem23 at gmail.com>
>>> wrote:
>>>
>>> You need to swap the order of your function calls.
>>> Call VecSetSizes() before VecSetType()
>>>
>>> Thanks,
>>>   Dave
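
For reference, a minimal sketch of that reordering, using the xp and nbdp
names from the code quoted further down (the CHKERRQ on VecSetType is an
added assumption, kept only for consistency with the surrounding calls):

     call VecCreate(PETSC_COMM_WORLD,xp,ierr); CHKERRQ(ierr)
     call VecSetSizes(xp,PETSC_DECIDE,nbdp,ierr); CHKERRQ(ierr)
     call VecSetType(xp,VECMPI,ierr); CHKERRQ(ierr)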
>>>
>>>
>>> On Wed, 4 Jan 2017 at 23:21, Manuel Valera <mvalera at mail.sdsu.edu>
>>> wrote:
>>>
>>> Hello all, happy new year,
>>>
>>> I'm working on parallelizing my code. It worked and produced some results
>>> when I simply ran it on more than one processor, but it created artifacts
>>> because I didn't intend to have one image of the whole program on each
>>> processor, and the copies conflicted with each other.
>>>
>>> Since the pressure solver is the main part I need in parallel, I'm choosing
>>> MPI and running everything on the root processor until it is time to solve
>>> for pressure. At that point I'm trying to create a distributed vector using
>>> either
>>>
>>>      call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,nbdp,xp,ierr)
>>> or
>>>
>>>      call VecCreate(PETSC_COMM_WORLD,xp,ierr); CHKERRQ(ierr)
>>>
>>>      call VecSetType(xp,VECMPI,ierr)
>>>
>>>      call VecSetSizes(xp,PETSC_DECIDE,nbdp,ierr); CHKERRQ(ierr)
>>>
>>>
>>>
>>> In both cases the program hangs at this point, something that never
>>> happened with the naive approach I described before. I've made sure the
>>> global size, nbdp, is the same on every processor. What can be wrong?
>>>
>>>
>>> Thanks for your kind help,
>>>
>>>
>>> Manuel.
>>>