[petsc-users] VecAssembly gives segmentation fault with MPI
Jed Brown
jed at jedbrown.org
Wed Apr 19 08:07:44 CDT 2017
Please always use "reply-all" so that your messages go to the list.
This is standard mailing list etiquette. It is important to preserve
threading for people who find this discussion later and so that we do
not waste our time re-answering the same questions that have already
been answered in private side-conversations. You'll likely get an
answer faster that way too.
Francesco Migliorini <francescomigliorini93 at gmail.com> writes:
> Hi, thank you for your answer!
>
> Yes, xxx_loc means local, but it refers to the MPI processes, so each
> process has different xxx_loc values.
Always use Debug mode PETSc so we can check when you pass inconsistent
information to Vec. The global size needs to be the same on every
process.
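For example, here is a minimal sketch of the two consistent ways to size the
vector (nloc_own and nglob are placeholder names, not taken from your code):
either give the number of entries this rank owns and let PETSc determine the
global size, or give the same global size on every rank and let PETSc decide
the layout.

  ! give the size owned by this rank; PETSc determines the global size
  call VecSetSizes(feP,nloc_own,PETSC_DETERMINE,perr)

  ! or give only the global size, which must be identical on every rank
  call VecSetSizes(feP,PETSC_DECIDE,nglob,perr)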
> Indeed, the program already has multiple processes running when it
> reaches the PETSc initialization. I therefore thought PETSc was applied
> to each process separately, so that the global dimensions of the system
> were the local ones of the MPI processes. Maybe it does not work that
> way... However, mm is a parameter that is the same for all the processes
> (in particular it is 3), and the processes do not have exactly the same
> number of nodes.
>
> 2017-04-19 13:20 GMT+02:00 Jed Brown <jed at jedbrown.org>:
>
>> Francesco Migliorini <francescomigliorini93 at gmail.com> writes:
>>
>> > Hello!
>> >
>> > I have an MPI code in which a linear system is created and solved with
>> > PETSc. It works in a sequential run, but when I use multiple cores,
>> > VecAssemblyBegin/End gives a segmentation fault. Here is a sample of my code:
>> >
>> > call PetscInitialize(PETSC_NULL_CHARACTER,perr)
>> >
>> > ind(1) = 3*nnod_loc*max_time_deg
>> > call VecCreate(PETSC_COMM_WORLD,feP,perr)
>> > call VecSetSizes(feP,PETSC_DECIDE,ind,perr)
>>
>> You set the global size here (does "nnod_loc" mean local? and is it the
>> same size on every process?), but then set values for all of these
>> below.
>>
>> > call VecSetFromOptions(feP,perr)
>> >
>> > do in = 1,nnod_loc
>> > do jt = 1,mm
>>
>> What is mm?
>>
>> > ind(1) = 3*((in -1)*max_time_deg + (jt-1))
>> > fval(1) = fe(3*((in -1)*max_time_deg + (jt-1)) +1)
>> > call VecSetValues(feP,1,ind,fval(1),INSERT_VALUES,perr)
>> > ind(1) = 3*((in -1)*max_time_deg + (jt-1)) +1
>> > fval(1) = fe(3*((in -1)*max_time_deg + (jt-1)) +2)
>> > call VecSetValues(feP,1,ind,fval(1),INSERT_VALUES,perr)
>> > ind(1) = 3*((in -1)*max_time_deg + (jt-1)) +2
>> > fval(1) = fe(3*((in -1)*max_time_deg + (jt-1)) +3)
>> > call VecSetValues(feP,1,ind,fval(1),INSERT_VALUES,perr)
>> > enddo
>> > enddo
>> > call VecAssemblyBegin(feP,perr)
>> > call VecAssemblyEnd(feP,perr)
>> >
>> > The vector has roughly 640,000 elements, but I am running on a
>> > high-performance computer, so there should not be memory issues. Does
>> > anyone know where the problem is and how I can fix it?
>> >
>> > Thank you,
>> > Francesco Migliorini
>>
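In case it helps, below is a rough, untested sketch of what the assembly could
look like if each rank is meant to own the 3*nnod_loc*max_time_deg entries for
its own nodes (declarations are omitted as in your snippet, and the index
arithmetic is only an assumption about your layout). Each rank gives its local
size to VecSetSizes, then uses VecGetOwnershipRange to shift its local indices
into the block of global indices it owns, so no process writes entries outside
its range.

  call VecCreate(PETSC_COMM_WORLD,feP,perr)
  ! local size: entries owned by this rank; PETSc sums these for the global size
  call VecSetSizes(feP,3*nnod_loc*max_time_deg,PETSC_DETERMINE,perr)
  call VecSetFromOptions(feP,perr)

  ! [rstart, rend) is the block of global indices owned by this rank
  call VecGetOwnershipRange(feP,rstart,rend,perr)

  do in = 1,nnod_loc
    do jt = 1,mm
      do k = 1,3
        ! shift the local index into this rank's owned global range
        ind(1)  = rstart + 3*((in-1)*max_time_deg + (jt-1)) + (k-1)
        fval(1) = fe(3*((in-1)*max_time_deg + (jt-1)) + k)
        call VecSetValues(feP,1,ind,fval,INSERT_VALUES,perr)
      enddo
    enddo
  enddo

  call VecAssemblyBegin(feP,perr)
  call VecAssemblyEnd(feP,perr)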