[petsc-users] MG Preconditioning

Gaetan Kenway kenway at utias.utoronto.ca
Sat Sep 1 12:04:57 CDT 2012


I believe I have partially tracked down my problem. The issue arises at line 644
in mg.c with a call to KSPGetVecs(), which is used to get the coarse grid
vectors for the RHS. Up until this point the operators for
mglevels[i]->smoothd have not actually been set, so KSPGetVecs() cannot
obtain the vectors.

I am using
  call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr)
  call EChk(PETScIerr,__FILE__,__LINE__)

with the intention of having the coarse matrices generated automatically.
However, from looking over the code, it appears that when PCMGSetGalerkin() is
used this way the coarse grid operators never get set. From what I can tell,
these matrices can only be created automatically if (pc->dm) is non-NULL,
which I think means the preconditioner must have a DM attached.
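
For reference, this is my (possibly wrong) understanding of how the automatic
Galerkin path is meant to be driven, i.e. with a DM attached so that PCSetUp_MG
can build the interpolations and coarse operators itself. This is only a
sketch, not my actual setup, since my matrices do not come from a DM:

  ! Sketch only: 'dm' would be a DM object (e.g. a DMDA), which I don't have.
  call PCSetType(pc, PCMG, PETScIerr)
  call PCSetDM(pc, dm, PETScIerr)
  call PCMGSetGalerkin(pc, PETSC_TRUE, PETScIerr)
  call EChk(PETScIerr,__FILE__,__LINE__)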

I've set my own restriction operator, but mg->galerkin has a value of -1 in
mg.c and none of the KSPSetOperators() calls are executed.
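
This is roughly how I set the restriction (a sketch; 'R' is my own assembled
restriction matrix and 'ilevel' the multigrid level index):

  ! Restriction from level ilevel down to level ilevel-1.
  call PCMGSetRestriction(pc, ilevel, R, PETScIerr)
  call EChk(PETScIerr,__FILE__,__LINE__)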

I've also tried to generate the matrices myself using MatRARt() or MatPtAP(),
but I receive error code 56 ("no support for requested operation"). Is this
operation only implemented for matrices derived from DM objects?
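
Roughly what I attempted (a sketch; 'Afine' and 'R' are my fine-grid operator
and restriction, and the fill value is just a guess):

  ! Form P as the transpose of my restriction, then compute P^T A P.
  call MatTranspose(R, MAT_INITIAL_MATRIX, P, PETScIerr)
  call MatPtAP(Afine, P, MAT_INITIAL_MATRIX, 2.0d0, Acoarse, PETScIerr)
  call EChk(PETScIerr,__FILE__,__LINE__)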

Any other suggestions for creating the coarse grid operators manually?
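
If nothing else works, the fallback I'm considering is to set the operators on
each level's smoother directly, roughly like this (a sketch only; 'Acoarse' is
a coarse matrix I would assemble myself):

  call PCMGGetSmoother(pc, ilevel, kspLevel, PETScIerr)
  call KSPSetOperators(kspLevel, Acoarse, Acoarse, &
                       SAME_NONZERO_PATTERN, PETScIerr)
  call EChk(PETScIerr,__FILE__,__LINE__)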

Thank you,

Gaetan


On Sat, Sep 1, 2012 at 12:29 AM, Jed Brown <jedbrown at mcs.anl.gov> wrote:

> On Fri, Aug 31, 2012 at 11:22 PM, Gaetan Kenway <kenway at utias.utoronto.ca> wrote:
>
>> Hi Again
>>
>> I also tried petsc-3.2 version and I still get the same backtrace.
>>
>> If it's not possible to figure out where the communicator segfault is
>> coming from, it's not a huge deal: I've just set the option using
>> PetscOptionsSetValue() and then used PCSetFromOptions() to pull it back out.
>> That seems to work fine.
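>>
>> Roughly what that workaround looks like (a sketch; the option name below is
>> only an example, not necessarily the exact one I set):
>>
>>   call PetscOptionsSetValue('-pc_mg_galerkin', 'true', PETScIerr)
>>   call PCSetFromOptions(pc, PETScIerr)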
>>
>> Even avoiding the above problem with the PetscOptionsSetValue() workaround,
>> I'm still receiving error code 73 when I run the multigrid solver. I've
>> included the backtrace output below, but it's not a lot of help since the
>> code exited cleanly through my error checking procedure:
>>
>>  =================================================================
>> PETSc or MPI Error. Error Code 73. Detected on Proc  0
>> Error at line:   122 in file: solveADjointTransposePETSc.F90
>>  =================================================================
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> 0xb68c98f0 in PMPI_Abort () from /usr/local/lib/libmpi.so.0
>> (gdb) bt
>> #0  0xb68c98f0 in PMPI_Abort () from /usr/local/lib/libmpi.so.0
>> #1  0xb519f190 in pmpi_abort__ () from /usr/local/lib/libmpi_f77.so.0
>> #2  0xb44c9e4c in echk (ierr=@0x49, file=..., line=@0x7a,
>> .tmp.FILE.len_V$eb=30) at terminate.f90:154
>> #3  0xb44bd68f in solveadjointtransposepetsc () at
>> solveADjointTransposePETSc.F90:122
>> #4  0xb44138a9 in f2py_rout_sumb_solveadjointtransposepetsc () from
>> /tmp/tmpKYF_DT/sumb.so
>> #5  0xb440fd35 in fortran_call () from /tmp/tmpKYF_DT/sumb.so
>> #6  0x0805fd6a in PyObject_Call ()
>> #7  0x080dd5b0 in PyEval_EvalFrameEx ()
>> #8  0x080dfbb2 in PyEval_EvalCodeEx ()
>> #9  0x08168f1f in ?? ()
>> #10 0x0805fd6a in PyObject_Call ()
>> #11 0x080dcbeb in PyEval_EvalFrameEx ()
>> #12 0x080dfbb2 in PyEval_EvalCodeEx ()
>> #13 0x080de145 in PyEval_EvalFrameEx ()
>> #14 0x080dfbb2 in PyEval_EvalCodeEx ()
>> #15 0x080dfca7 in PyEval_EvalCode ()
>> #16 0x080fd956 in PyRun_FileExFlags ()
>> #17 0x080fdbb2 in PyRun_SimpleFileExFlags ()
>> #18 0x0805b6d3 in Py_Main ()
>> #19 0x0805a8ab in main ()
>>
>
> This stack doesn't involve PETSc at all.
>
>
>>
>> Valgrind was clean right up until the end, where I get the normal error
>> message:
>>
>> [0]PETSC ERROR:
>> ------------------------------------------------------------------------
>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
>> probably memory access out of range
>> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
>> [0]PETSC ERROR: or see
>> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>> [0]PETSC ERROR: or try
>> http://valgrind.org on GNU/linux and Apple Mac OS X to find memory
>> corruption errors
>> [0]PETSC ERROR: likely location of problem given in stack below
>> [0]PETSC ERROR: ---------------------  Stack Frames
>> ------------------------------------
>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not
>> available,
>> [0]PETSC ERROR:       INSTEAD the line number of the start of the function
>> [0]PETSC ERROR:       is given.
>> [0]PETSC ERROR: [0] MatGetVecs line 8142
>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/mat/interface/matrix.c
>> [0]PETSC ERROR: [0] KSPGetVecs line 774
>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/iterativ.c
>> [0]PETSC ERROR: [0] PCSetUp_MG line 508
>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/impls/mg/mg.c
>> [0]PETSC ERROR: [0] PCSetUp line 810
>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/pc/interface/precon.c
>> [0]PETSC ERROR: [0] KSPSetUp line 182
>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/itfunc.c
>> [0]PETSC ERROR: [0] KSPSolve line 351
>> /nfs/mica/home/kenway/Downloads/petsc-3.3/src/ksp/ksp/interface/itfunc.c
>>
>> I will try it on a different system tomorrow to see if I have any more
>> luck.
>>
>> Thanks,
>>
>> Gaetan
>>
>>
>>
>> On Fri, Aug 31, 2012 at 11:08 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>>
>>> On Fri, Aug 31, 2012 at 10:06 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>>
>>>> True, but the backtrace also shows that comm = 0x0 on the call to
>>>> KSPCreate(), which leads me to believe that your petsc4py has not
>>>> initialized PETSc, and therefore has not initialized PETSC_COMM_WORLD.
>>>>
>>>
>>> Good catch; maybe PetscFunctionBegin() should check that PETSc has been
>>> initialized.
>>>
>>
>>
>

