[petsc-users] VecSetSizes hangs in MPI
Matthew Knepley
knepley at gmail.com
Wed Jan 4 16:37:06 CST 2017
On Wed, Jan 4, 2017 at 4:21 PM, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
> Hello all, happy new year,
>
> I'm working on parallelizing my code. It worked and produced some results
> when I ran it with more than one processor, but it created artifacts because I
> didn't actually want a copy of the whole program on each processor; the
> copies conflicted with each other.
>
> Since the pressure solver is the main part I need in parallel, I'm choosing
> to run everything on the root processor until it's time to solve for
> pressure. At that point I'm trying to create a distributed vector using
> either
>
> call VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,nbdp,xp,ierr)
> or
>
> call VecCreate(PETSC_COMM_WORLD,xp,ierr); CHKERRQ(ierr)
>
> call VecSetType(xp,VECMPI,ierr)
>
> call VecSetSizes(xp,PETSC_DECIDE,nbdp,ierr); CHKERRQ(ierr)
>
>
>
> In both cases the program hangs at this point, something that never happened
> with the naive approach I described before. I've made sure the global size,
> nbdp, is the same on every processor. What can be wrong?
>
It sounds like not every process is calling this function. That will cause
a hang, since it is collective.
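A minimal sketch of the pattern that avoids the hang (hypothetical variable names, assuming PETSc has already been initialized on PETSC_COMM_WORLD): the collective vector-creation calls must be reached by every rank, even when only the root rank performed the preceding serial work.

```fortran
      ! Serial work can be guarded by rank, but the collective calls
      ! below must NOT be. Wrapping them in "if (rank == 0)" would
      ! leave the other ranks outside the collective and hang.
      if (rank == 0) then
         ! ... serial setup done only on the root processor ...
      end if

      ! Every rank executes these collective calls with the same
      ! global size nbdp:
      call VecCreate(PETSC_COMM_WORLD, xp, ierr); CHKERRQ(ierr)
      call VecSetType(xp, VECMPI, ierr); CHKERRQ(ierr)
      call VecSetSizes(xp, PETSC_DECIDE, nbdp, ierr); CHKERRQ(ierr)
```

If the serial phase computes nbdp only on the root, it must be broadcast (e.g. with MPI_Bcast) to all ranks before the collective calls, so every rank passes the same global size.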
Matt
> Thanks for your kind help,
>
>
> Manuel.
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener