[petsc-users] [petsc-maint] petsc ksp solver hangs
Mark Adams
mfadams at lbl.gov
Sun Sep 29 08:02:10 CDT 2019
On Sun, Sep 29, 2019 at 1:30 AM Michael Wick via petsc-maint <
petsc-maint at mcs.anl.gov> wrote:
> Thank you all for the reply.
>
> I am trying to get the backtrace. However, the code hangs totally
> randomly, and it hangs only when I run large simulations (e.g. 72 CPUs for
> this one). I am trying very hard to get the error message.
>
> So far, I can pinpoint that the issue is related to hypre and a static
> build of the PETSc library. Switching to a dynamic build works fine so far.
> Also, using a naked gmres works. Has anyone seen similar issues before?
>
I've never heard of a problem like this. You might try deleting your
architecture directory (essentially a make clean) and reconfiguring.
If dynamic builds work, is there any reason not to just do that and move on?
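
A minimal sketch of that, assuming PETSC_DIR and PETSC_ARCH are set and that
you rerun configure with the same options you used for the original static
build:

  cd $PETSC_DIR
  rm -rf $PETSC_ARCH        # removes all compiled objects for that build
  ./configure <same options as before>
  make all
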
>
> On Sat, Sep 28, 2019 at 6:28 AM Stefano Zampini <stefano.zampini at gmail.com>
> wrote:
>
>> In my experience, a hanging execution may result from SETERRQ being
>> called with the wrong communicator. Anyway, it would be useful to get the
>> output of -log_trace.
>>
>> Also, does it hang when -pc_type none is specified?
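>>
>> A rough sketch of such a run (the executable name is a placeholder; the 72
>> ranks come from the earlier message):
>>
>>   mpiexec -n 72 ./your_app <your usual options> -log_trace -pc_type none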
>>
>> On Sat, Sep 28, 2019 at 16:22, Zhang, Junchao via petsc-users <
>> petsc-users at mcs.anl.gov> wrote:
>>
>>> Does it hang with 2 or 4 processes? Which PETSc version do you use
>>> (using the latest is easier for us to debug)? Did you configure PETSc with
>>> --with-debugging=yes COPTFLAGS="-O0 -g" CXXOPTFLAGS="-O0 -g"?
>>> After attaching gdb to one process, you can use bt to see its stack
>>> trace.
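>>>
>>> A minimal sketch of that (the PID is hypothetical; pick one of the hung
>>> MPI processes on the node it is running on):
>>>
>>>   gdb -p 12345
>>>   (gdb) bt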
>>>
>>> --Junchao Zhang
>>>
>>>
>>> On Sat, Sep 28, 2019 at 5:33 AM Michael Wick <
>>> michael.wick.1980 at gmail.com> wrote:
>>>
>>>> I attached a debugger to my run. The code just hangs without throwing
>>>> an error message, interestingly. I use 72 processors. I turned on the KSP
>>>> monitor, and I can see it hangs either at the beginning or the end of a
>>>> KSP iteration. I also used valgrind to debug my code on my local machine,
>>>> and it does not detect any issue. I use fgmres + fieldsplit, which is
>>>> really a standard option.
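>>>>
>>>> For reference, a sketch of the options that setup corresponds to on the
>>>> command line (assuming they are not hard-coded in the source):
>>>>
>>>>   -ksp_type fgmres -pc_type fieldsplit -ksp_monitor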
>>>>
>>>> Do you have any suggestions?
>>>>
>>>> On Fri, Sep 27, 2019 at 8:17 PM Zhang, Junchao <jczhang at mcs.anl.gov>
>>>> wrote:
>>>>
>>>>> How many MPI ranks did you use? If it is run on your desktop, you can
>>>>> just attach a debugger to an MPI process to see what is going on.
>>>>>
>>>>> --Junchao Zhang
>>>>>
>>>>>
>>>>> On Fri, Sep 27, 2019 at 4:24 PM Michael Wick via petsc-maint <
>>>>> petsc-maint at mcs.anl.gov> wrote:
>>>>>
>>>>>> Hi PETSc:
>>>>>>
>>>>>> I have been experiencing code stagnation at certain KSP iterations.
>>>>>> This happens rather randomly, meaning the code may stop in the middle
>>>>>> of a KSP solve and hang there.
>>>>>>
>>>>>> I have used valgrind and it detects nothing. I just wonder if you have
>>>>>> any suggestions.
>>>>>>
>>>>>> Thanks!!!
>>>>>> M
>>>>>>
>>>>>