[petsc-users] VecAssembly gives segmentation fault with MPI

Karl Rupp rupp at iue.tuwien.ac.at
Wed Apr 19 06:25:06 CDT 2017


Hi Francesco,

please consider the following:

  a) run your code through valgrind to locate the segmentation fault. 
Maybe there is already a memory access problem in the sequential version 
(an example valgrind invocation is sketched below this list).

  b) send any error messages as well as the stack trace (the debugger 
options sketched below are one way to obtain it).

  c) what is your intent with "do in = nnod_loc"? Isn't nnod_loc the 
number of local elements? (A sketch of the usual loop over locally owned 
entries follows below.)
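
For a), one way to run valgrind under MPI is to put it between the MPI 
launcher and the executable. A minimal sketch, assuming mpiexec as the 
launcher; ./your_app and its options are placeholders for your binary:

   mpiexec -n 2 valgrind --tool=memcheck --leak-check=yes \
           --num-callers=20 --log-file=valgrind.%p.log \
           ./your_app <your usual options>

The memcheck log of the failing rank usually points at the first invalid 
access, which tends to be more informative than the segmentation fault 
itself.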
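
For b), if PETSc was configured with debugging, the runtime options below 
are one way to obtain a trace by attaching a debugger to each rank 
(./your_app is again a placeholder):

   mpiexec -n 2 ./your_app -start_in_debugger
   mpiexec -n 2 ./your_app -on_error_attach_debugger

Checking the error code perr after every PETSc call also helps, so that 
errors are reported where they first occur.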
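
For c), in case the intent is to loop over all nodes owned by the calling 
rank, the usual pattern is to give VecSetSizes the local size and to set 
only the locally owned entries on each rank. The following is a minimal 
sketch of that pattern, assuming the module-based Fortran interface of 
recent PETSc releases; nloc, the loop body and the value 1.0 are 
placeholders, not your actual data:

      program vec_sketch
#include <petsc/finclude/petscvec.h>
      use petscvec
      implicit none

      Vec            feP
      PetscErrorCode perr
      PetscInt       nloc, i, istart, iend, one
      PetscInt       ix(1)
      PetscScalar    val(1)

      call PetscInitialize(PETSC_NULL_CHARACTER,perr)

      nloc = 100               ! placeholder: entries owned by this rank
      one  = 1

      call VecCreate(PETSC_COMM_WORLD,feP,perr)
      ! pass the local size and let PETSc determine the global size
      call VecSetSizes(feP,nloc,PETSC_DETERMINE,perr)
      call VecSetFromOptions(feP,perr)

      ! each rank sets only the entries it owns, using global indices
      call VecGetOwnershipRange(feP,istart,iend,perr)
      do i = istart, iend-1
         ix(1)  = i
         val(1) = 1.0          ! placeholder value
         call VecSetValues(feP,one,ix,val,INSERT_VALUES,perr)
      enddo

      call VecAssemblyBegin(feP,perr)
      call VecAssemblyEnd(feP,perr)

      call VecDestroy(feP,perr)
      call PetscFinalize(perr)
      end program vec_sketch

If a reduced example along these lines runs cleanly on several ranks, the 
problem is probably in how the indices and values of your actual loop are 
computed.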

Best regards,
Karli



On 04/19/2017 12:26 PM, Francesco Migliorini wrote:
> Hello!
>
> I have an MPI code in which a linear system is created and solved with
> PETSc. It works in a sequential run, but when I use multiple cores,
> VecAssemblyBegin/End gives a segmentation fault. Here's a sample of my code:
>
> call PetscInitialize(PETSC_NULL_CHARACTER,perr)
>
>       ind(1) = 3*nnod_loc*max_time_deg
>       call VecCreate(PETSC_COMM_WORLD,feP,perr)
>       call VecSetSizes(feP,PETSC_DECIDE,ind,perr)
>       call VecSetFromOptions(feP,perr)
>
>       do in = nnod_loc
>          do jt = 1,mm
>             ind(1)  = 3*((in-1)*max_time_deg + (jt-1))
>             fval(1) = fe(3*((in-1)*max_time_deg + (jt-1)) + 1)
>             call VecSetValues(feP,1,ind,fval(1),INSERT_VALUES,perr)
>             ind(1)  = 3*((in-1)*max_time_deg + (jt-1)) + 1
>             fval(1) = fe(3*((in-1)*max_time_deg + (jt-1)) + 2)
>             call VecSetValues(feP,1,ind,fval(1),INSERT_VALUES,perr)
>             ind(1)  = 3*((in-1)*max_time_deg + (jt-1)) + 2
>             fval(1) = fe(3*((in-1)*max_time_deg + (jt-1)) + 3)
>             call VecSetValues(feP,1,ind,fval(1),INSERT_VALUES,perr)
>          enddo
>       enddo
>       enddo
>       call VecAssemblyBegin(feP,perr)
>       call VecAssemblyEnd(feP,perr)
>
> The vector has roughly 640,000 entries, but I am running on a
> high-performance computer, so there should not be memory issues. Does
> anyone know where the problem is and how I can fix it?
>
> Thank you,
> Francesco Migliorini

