[petsc-users] VecAssembly gives segmentation fault with MPI
Jed Brown
jed at jedbrown.org
Wed Apr 19 06:20:05 CDT 2017
Francesco Migliorini <francescomigliorini93 at gmail.com> writes:
> Hello!
>
> I have an MPI code in which a linear system is created and solved with
> PETSc. It works in a sequential run, but when I use multiple cores,
> VecAssemblyBegin/End gives a segmentation fault. Here's a sample of my code:
>
> call PetscInitialize(PETSC_NULL_CHARACTER,perr)
>
> ind(1) = 3*nnod_loc*max_time_deg
> call VecCreate(PETSC_COMM_WORLD,feP,perr)
> call VecSetSizes(feP,PETSC_DECIDE,ind,perr)
You set the global size here (does "nnod_loc" mean local? and is it the
same on every process?), but then every process sets values for all of
these entries below, even though each process only owns a slice of the
vector.
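
If "nnod_loc" really is the per-process node count, a minimal untested
sketch of the usual pattern ("nloc" is just a name I'm using here) is to
pass the local length and let PETSc compute the global length:

  ! local length owned by this process; PETSC_DETERMINE sums over all ranks
  nloc = 3*nnod_loc*max_time_deg
  call VecSetSizes(feP,nloc,PETSC_DETERMINE,perr)

That way the global length is the sum of the local lengths rather than
whatever one process happens to compute.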
> call VecSetFromOptions(feP,perr)
>
> do in = 1,nnod_loc
> do jt = 1,mm
What is mm?
> ind(1) = 3*((in -1)*max_time_deg + (jt-1))
> fval(1) = fe(3*((in -1)*max_time_deg + (jt-1)) +1)
> call VecSetValues(feP,1,ind,fval(1),INSERT_VALUES,perr)
> ind(1) = 3*((in -1)*max_time_deg + (jt-1)) +1
> fval(1) = fe(3*((in -1)*max_time_deg + (jt-1)) +2)
> call VecSetValues(feP,1,ind,fval(1),INSERT_VALUES,perr)
> ind(1) = 3*((in -1)*max_time_deg + (jt-1)) +2
> fval(1) = fe(3*((in -1)*max_time_deg + (jt-1)) +3)
> call VecSetValues(feP,1,ind,fval(1),INSERT_VALUES,perr)
> enddo
> enddo
> call VecAssemblyBegin(feP,perr)
> call VecAssemblyEnd(feP,perr)
>
> The vector has about 640,000 elements, but I am running on a
> high-performance computer, so there shouldn't be memory issues. Does
> anyone know where the problem is and how I can fix it?
>
> Thank you,
> Francesco Migliorini
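
As a rough, untested sketch of what I mean (assuming each process owns the
3*nnod_loc*max_time_deg entries for its own nodes, the local size was set
as above, and fe(:) is laid out as in your snippet), you can get the offset
of this process's block from VecGetOwnershipRange and add it to the local
index before calling VecSetValues:

  PetscInt istart,iend,ic
  call VecGetOwnershipRange(feP,istart,iend,perr)
  do in = 1,nnod_loc
    do jt = 1,mm
      do ic = 1,3
        ! global index = ownership offset + position within this process's block
        ind(1) = istart + 3*((in-1)*max_time_deg + (jt-1)) + (ic-1)
        fval(1) = fe(3*((in-1)*max_time_deg + (jt-1)) + ic)
        call VecSetValues(feP,1,ind,fval,INSERT_VALUES,perr)
      enddo
    enddo
  enddo
  call VecAssemblyBegin(feP,perr)
  call VecAssemblyEnd(feP,perr)

If instead you already have a true global numbering for each node, you can
keep the global size in VecSetSizes and pass those global indices directly;
the key point is that the indices and the sizes have to describe the same
global vector.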