[petsc-users] pctype hmpi

George Pau gpau at lbl.gov
Fri Aug 31 23:04:44 CDT 2012


Hi Barry,

It works now.  Thank you so much for the help.

Thanks,
George


On Fri, Aug 31, 2012 at 7:57 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:

>
>   Ok, another bug. Put the attached file in src/mat/impls/aij/mpi and run
> make in that directory, then relink the program and run again.
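>
>   For concreteness, a rough sketch of those steps (illustrative only; it
> assumes PETSC_DIR and PETSC_ARCH are already set for this petsc-3.3-p3 build):
>
>     cp mpiaij.c $PETSC_DIR/src/mat/impls/aij/mpi/
>     cd $PETSC_DIR/src/mat/impls/aij/mpi
>     make
>     # then relink xt2_eos4 against the updated PETSc library and rerun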
>
>    Barry
>
> [see attached file: mpiaij.c]
>
> On Aug 31, 2012, at 5:26 PM, George Pau <gpau at lbl.gov> wrote:
>
> > hi Barry,
> >
> > The hmpi option is read in properly now.  The error is now different
> when I use -hmpi_spawn_size 3 with mpiexec -n 1. My printout suggests this
> is now happening in the KSPSolve.
> >
> > George
> >
> >
> > [0] petscinitialize_(): (Fortran):PETSc successfully started: procs 1
> > [0] PetscGetHostName(): Rejecting domainname, likely is NIS
> gilbert.(none)
> > [0] petscinitialize_(): Running on machine: gilbert
> > [0] petscinitialize_(): (Fortran):PETSc successfully started: procs 2
> > [0] PetscGetHostName(): Rejecting domainname, likely is NIS
> gilbert.(none)
> > [0] petscinitialize_(): Running on machine: gilbert
> > [1] petscinitialize_(): (Fortran):PETSc successfully started: procs 2
> > [1] PetscGetHostName(): Rejecting domainname, likely is NIS
> gilbert.(none)
> > [1] petscinitialize_(): Running on machine: gilbert
> > [0] PetscHMPISpawn(): PETSc HMPI successfully spawned: number of nodes =
> 1 node size = 3
> >
> > [0] PetscCommDuplicate(): Duplicating a communicator 1140850688
> -2080374781 max tags = 2147483647
> > [0] MatSetUp(): Warning not preallocating matrix storage
> > [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 360 X 360; storage space: 3978
> unneeded,3222 used
> > [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is
> 360
> > [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 9
> > [0] Mat_CheckInode(): Found 120 nodes of 360. Limit used: 5. Using Inode
> routines
> > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688
> -2080374781
> >  start ksp
> > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688
> -2080374781
> > [0] PCSetUp(): Setting up new PC
> > [0] PetscCommDuplicate(): Duplicating a communicator -2080374782
> -2080374780 max tags = 2147483647
> > [0] PetscCommDuplicate(): Duplicating a communicator -1006632960
> -1006632959 max tags = 2147483647
> > [1] PetscCommDuplicate(): Duplicating a communicator -2080374779
> -2080374778 max tags = 2147483647
> > [0] PetscCommDuplicate(): Using internal PETSc communicator -2080374782
> -2080374780
> > [0] PetscCommDuplicate(): Using internal PETSc communicator -1006632960
> -1006632959
> > [1] PetscCommDuplicate(): Using internal PETSc communicator -2080374779
> -2080374778
> > [0] PetscCommDuplicate(): Duplicating a communicator 1140850689
> -2080374779 max tags = 2147483647
> > [0] PetscCommDuplicate(): Duplicating a communicator 1140850689
> -1006632958 max tags = 2147483647
> > [1] PetscCommDuplicate(): Duplicating a communicator 1140850689
> -2080374777 max tags = 2147483647
> > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689
> -2080374779
> > [1] PetscCommDuplicate(): Using internal PETSc communicator 1140850689
> -2080374777
> > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689
> -1006632958
> > [0] VecScatterCreate(): Special case: processor zero gets entire
> parallel vector, rest get none
> > [0] Petsc_DelComm(): Removing reference to PETSc communicator imbedded
> in a user MPI_Comm m -2080374779
> > [0] Petsc_DelComm(): User MPI_Comm m 1140850689 is being freed, removing
> reference from inner PETSc comm to this outer comm
> > [0] PetscCommDestroy(): Deleting PETSc MPI_Comm -2080374779
> > [0] Petsc_DelCounter(): Deleting counter data in an MPI_Comm -2080374779
> > [0] PetscCommDuplicate(): Using internal PETSc communicator -2080374782
> -2080374780
> > [0] PetscCommDuplicate(): Using internal PETSc communicator -1006632960
> -1006632959
> > [1] PetscCommDuplicate(): Using internal PETSc communicator -2080374779
> -2080374778
> > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689
> -1006632958
> > [1] PetscCommDuplicate(): Using internal PETSc communicator 1140850689
> -2080374777
> >
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: [1]PETSC ERROR:
> ------------------------------------------------------------------------
> > [1]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
> probably memory access out of range
> > [1]PETSC ERROR: Try option -start_in_debugger or
> -on_error_attach_debugger
> > [1]PETSC ERROR: or see
> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> > [1]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X
> to find memory corruption errors
> > Caught signal number 11 SEGV: Segmentation Violation, probably memory
> access out of range
> > [0]PETSC ERROR: Try option -start_in_debugger or
> -on_error_attach_debugger
> > [0]PETSC ERROR: or see
> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
> > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X
> to find memory corruption errors
> > [1]PETSC ERROR: likely location of problem given in stack below
> > [1]PETSC ERROR: ---------------------  Stack Frames
> ------------------------------------
> > [0]PETSC ERROR: likely location of problem given in stack below
> > [0]PETSC ERROR: ---------------------  Stack Frames
> ------------------------------------
> > [1]PETSC ERROR: Note: The EXACT line numbers in the stack are not
> available,
> > [1]PETSC ERROR:       INSTEAD the line number of the start of the
> function
> > [1]PETSC ERROR:       is given.
> > [1]PETSC ERROR: [1] MatDistribute_MPIAIJ line 192
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/mat/impls/aij/mpi/mpiaij.c
> > [1]PETSC ERROR: [1] PCSetUp_HMPI_MP line 90
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/pc/impls/openmp/hpc.c
> > [1]PETSC ERROR: [1] PetscHMPIHandle line 253
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/sys/objects/mpinit.c
> > [1]PETSC ERROR: [1] PetscHMPISpawn line 71
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/sys/objects/mpinit.c
> > [1]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> > [1]PETSC ERROR: Signal received!
> > [1]PETSC ERROR:
> ------------------------------------------------------------------------
> > [1]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29
> 11:26:24 CDT 2012
> > [1]PETSC ERROR: See docs/changes/index.html for recent updates.
> > [1]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> > [1]PETSC ERROR: See docs/index.html for manual pages.
> > [1]PETSC ERROR:
> ------------------------------------------------------------------------
> > [1]PETSC ERROR: ../../esd-tough2/xt2_eos4 on a arch-linu named gilbert
> by gpau Fri Aug 31 15:20:27 2012
> > [1]PETSC ERROR: [0]PETSC ERROR: Note: The EXACT line numbers in the
> stack are not available,
> > [0]PETSC ERROR:       INSTEAD the line number of the start of the
> function
> > [0]PETSC ERROR:       is given.
> > Libraries linked from
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/lib
> > [1]PETSC ERROR: Configure run at Fri Aug 31 15:16:04 2012
> > [1]PETSC ERROR: Configure options --with-debugging=1
> --with-mpi-dir=/usr/lib/mpich2 --download-hypre=1
> --prefix=/home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib
> > [1]PETSC ERROR:
> ------------------------------------------------------------------------
> > [1]PETSC ERROR: User provided function() line 0 in unknown directory
> unknown file
> > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 1
> > [0]PETSC ERROR: [0] MatDistribute_MPIAIJ line 192
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/mat/impls/aij/mpi/mpiaij.c
> > [0]PETSC ERROR: [0] PCSetUp_HMPI_MP line 90
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/pc/impls/openmp/hpc.c
> > [0]PETSC ERROR: [0] PetscHMPIHandle line 253
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/sys/objects/mpinit.c
> > [0]PETSC ERROR: [0] PetscHMPISpawn line 71
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/sys/objects/mpinit.c
> > [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> > [0]PETSC ERROR: Signal received!
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29
> 11:26:24 CDT 2012
> > [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> > [0]PETSC ERROR: See docs/index.html for manual pages.
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: ../../esd-tough2/xt2_eos4 on a arch-linu named gilbert
> by gpau Fri Aug 31 15:20:27 2012
> > [0]PETSC ERROR: Libraries linked from
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/lib
> > [0]PETSC ERROR: Configure run at Fri Aug 31 15:16:04 2012
> > [0]PETSC ERROR: Configure options --with-debugging=1
> --with-mpi-dir=/usr/lib/mpich2 --download-hypre=1
> --prefix=/home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib
> > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > [0]PETSC ERROR: User provided function() line 0 in unknown directory
> unknown file
> > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
> > Fatal error in MPI_Allreduce: Other MPI error, error stack:
> > MPI_Allreduce(855)........: MPI_Allreduce(sbuf=0x7fff6315ead0,
> rbuf=0x7fff6315eae0, count=2, MPI_INT, MPI_MAX, comm=0x84000004) failed
> > MPIR_Allreduce_impl(712)..:
> > MPIR_Allreduce_intra(534).:
> > dequeue_and_set_error(596): Communication error with rank 0
> >
> >
> > On Fri, Aug 31, 2012 at 2:09 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > [see attached file: zstart.c]
> >
> > On Aug 31, 2012, at 4:07 PM, George Pau <gpau at lbl.gov> wrote:
> >
> > > Hi Barry,
> > >
> > > You forgot the file ...
> > >
> > > George
> > >
> > > On Fri, Aug 31, 2012 at 2:04 PM, Barry Smith <bsmith at mcs.anl.gov>
> wrote:
> > >
> > >    Yikes. It is totally my fault.  The handling of these merge and
> spawn options is done only for PetscInitialize() for C, not for Fortran,
> hence the arguments just got ignored.
> > >
> > >     Please find attached a file zstart.c.  Put it in the directory
> src/sys/ftn-custom and run make in that directory (with appropriate
> PETSC_DIR and PETSC_ARCH set).
> > >
> > >     Then link and run the example again.
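> > >
> > >     As a rough sketch of those steps (illustrative only; <petsc-src>
> > > stands for the petsc-3.3-p3 source tree and <arch> for your build's
> > > PETSC_ARCH):
> > >
> > >        export PETSC_DIR=<petsc-src>
> > >        export PETSC_ARCH=<arch>
> > >        cp zstart.c $PETSC_DIR/src/sys/ftn-custom/
> > >        cd $PETSC_DIR/src/sys/ftn-custom
> > >        make
> > >        # then relink the application and run the example again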
> > >
> > >
> > >    Barry
> > >
> > >
> > > On Aug 31, 2012, at 3:30 PM, George Pau <gpau at lbl.gov> wrote:
> > >
> > > > Sorry, it was a cut-and-paste error.  I tried running the code with
> all the options on the command line:
> > > >
> > > > mpiexec.mpich2 -n 1 xt2_eos4 -hmpi_spawn_size 3 -pc_type hmpi
> -ksp_type preonly -hmpi_ksp_type cg -hmpi_pc_type hypre -hmpi_pc_hypre
> boomeramg
> > > >
> > > > mpiexec.mpich2 -n 2 xt2_eos4 -hmpi_merge_size 2 -pc_type hmpi
> -ksp_type preonly -hmpi_ksp_type cg -hmpi_pc_type hypre -hmpi_pc_hypre
> boomeramg
> > > >
> > > > but I get the exact same outputs.
> > > >
> > > > George
> > > >
> > > >
> > > >
> > > > On Fri, Aug 31, 2012 at 1:18 PM, Barry Smith <bsmith at mcs.anl.gov>
> wrote:
> > > >
> > > > On Aug 31, 2012, at 3:09 PM, George Pau <gpau at lbl.gov> wrote:
> > > >
> > > > > Hi Barry,
> > > > >
> > > > > For the hmpi_spawn_size,  the options in my .petscrc are
> > > > > -info
> > > > > -pc_view
> > > > > pc_type hmpi
> > > >
> > > >   How come there is no - in front of this one?
> > > >
> > > > > -ksp_type preonly
> > > > > -ksp_view
> > > > > -hmpi_pc_monitor
> > > > > -hmpi_ksp_monitor
> > > > > -hmpi_ksp_type cg
> > > > > -hmpi_pc_type hypre
> > > > > -hmpi_pc_hypre_type boomeramg
> > > > > -hmpi_spawn_size 3
> > > > >
> > > > > mpiexec.mpich2 -n 1 myprogram
> > > > >
> > > > > [0] petscinitialize_(): (Fortran):PETSc successfully started:
> procs 1
> > > > > [0] PetscGetHostName(): Rejecting domainname, likely is NIS
> gilbert.(none)
> > > > > [0] petscinitialize_(): Running on machine: gilbert
> > > > >
> > > > > [0] PetscCommDuplicate(): Duplicating a communicator 1140850688
> -2080374784 max tags = 2147483647
> > > > > [0] MatSetUp(): Warning not preallocating matrix storage
> > > > > [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 360 X 360; storage
> space: 3978 unneeded,3222 used
> > > > > [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during
> MatSetValues() is 360
> > > > > [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 9
> > > > > [0] Mat_CheckInode(): Found 120 nodes of 360. Limit used: 5. Using
> Inode routines
> > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator
> 1140850688 -2080374784
> > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator
> 1140850688 -2080374784
> > > > >
> > > > > Fatal error in PMPI_Bcast: Invalid communicator, error stack:
> > > > > PMPI_Bcast(1478): MPI_Bcast(buf=0x7fff30dacecc, count=1, MPI_INT,
> root=0, comm=0x0) failed
> > > > > PMPI_Bcast(1418): Invalid communicator
> > > > >
> > > > > I inserted some print statements between the ksp calls and found
> that the error occurs in
> > > > >
> > > > > call KSPSetFromOptions(ksp, pierr)
> > > > >
> > > > > 2. If I change hmpi_spawn_size 3 to hmpi_merge_size 2 and launch
> my job by
> > > >
> > > >    How come there is no - in front of hmpi_merge_size 2?
> > > >
> > > >
> > > >    Can you try putting all the arguments on the command line
> instead of in a file? It shouldn't matter, but it seems like some of the
> arguments are being ignored.
> > > >
> > > >    Barry
> > > >
> > > >
> > > > >
> > > > > mpiexec.mpich2 -n 2 myprogram
> > > > >
> > > > > [0] petscinitialize_(): (Fortran):PETSc successfully started:
> procs 2
> > > > > [0] PetscGetHostName(): Rejecting domainname, likely is NIS
> gilbert.(none)
> > > > > [0] petscinitialize_(): Running on machine: gilbert
> > > > > [1] petscinitialize_(): (Fortran):PETSc successfully started:
> procs 2
> > > > > [1] PetscGetHostName(): Rejecting domainname, likely is NIS
> gilbert.(none)
> > > > > [1] petscinitialize_(): Running on machine: gilbert
> > > > >
> > > > > [0] PetscCommDuplicate(): Duplicating a communicator 1140850688
> -2080374780 max tags = 2147483647
> > > > > [0] MatSetUp(): Warning not preallocating matrix storage
> > > > > [1] PetscCommDuplicate(): Duplicating a communicator 1140850688
> -2080374782 max tags = 2147483647
> > > > > [0] PetscCommDuplicate(): Duplicating a communicator 1140850689
> -2080374777 max tags = 2147483647
> > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator
> 1140850689 -2080374777
> > > > > [1] PetscCommDuplicate(): Duplicating a communicator 1140850689
> -2080374780 max tags = 2147483647
> > > > > [1] PetscCommDuplicate(): Using internal PETSc communicator
> 1140850689 -2080374780
> > > > > [0] MatStashScatterBegin_Private(): No of messages: 1
> > > > > [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 12896
> > > > > [0] MatAssemblyBegin_MPIAIJ(): Stash has 1611 entries, uses 0
> mallocs.
> > > > > [1] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
> > > > > [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 180 X 180; storage
> space: 1998 unneeded,1602 used
> > > > > [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during
> MatSetValues() is 180
> > > > > [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 9
> > > > > [0] Mat_CheckInode(): Found 60 nodes of 180. Limit used: 5. Using
> Inode routines
> > > > > [1] MatAssemblyEnd_SeqAIJ(): Matrix size: 180 X 180; storage
> space: 1998 unneeded,1602 used
> > > > > [1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during
> MatSetValues() is 180
> > > > > [1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 9
> > > > > [1] Mat_CheckInode(): Found 60 nodes of 180. Limit used: 5. Using
> Inode routines
> > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator
> 1140850689 -2080374777
> > > > > [1] PetscCommDuplicate(): Using internal PETSc communicator
> 1140850689 -2080374780
> > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator
> 1140850689 -2080374777
> > > > > [1] PetscCommDuplicate(): Using internal PETSc communicator
> 1140850689 -2080374780
> > > > > [0] VecScatterCreateCommon_PtoS(): Using blocksize 1 scatter
> > > > > [0] VecScatterCreate(): General case: MPI to Seq
> > > > > [1] MatAssemblyEnd_SeqAIJ(): Matrix size: 180 X 3; storage space:
> 396 unneeded,9 used
> > > > > [1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during
> MatSetValues() is 3
> > > > > [1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 3
> > > > > [1] PetscCommDuplicate(): Using internal PETSc communicator
> 1140850688 -2080374782
> > > > > [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 180 X 3; storage space:
> 396 unneeded,9 used
> > > > > [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during
> MatSetValues() is 3
> > > > > [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 3
> > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator
> 1140850688 -2080374780
> > > > > [0] VecAssemblyBegin_MPI(): Stash has 180 entries, uses 1 mallocs.
> > > > > [0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0
> mallocs.
> > > > > [1] PetscCommDuplicate(): Using internal PETSc communicator
> 1140850688 -2080374782
> > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator
> 1140850688 -2080374780
> > > > > [0] PCSetUp(): Setting up new PC
> > > > > [0] PetscCommDuplicate(): Using internal PETSc communicator
> 1140850688 -2080374780
> > > > >
> > > > > [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> > > > > [0]PETSC ERROR: Nonconforming object sizes!
> > > > > [0]PETSC ERROR: HMPI preconditioner only works for sequential
> solves!
> > > > > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > > > > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29
> 11:26:24 CDT 2012
> > > > > [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> > > > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> > > > > [0]PETSC ERROR: See docs/index.html for manual pages.
> > > > > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > > > > [0]PETSC ERROR: ../../esd-tough2/xt2_eos4 on a arch-linu named
> gilbert by gpau Fri Aug 31 13:00:31 2012
> > > > > [0]PETSC ERROR: Libraries linked from
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/lib
> > > > > [0]PETSC ERROR: Configure run at Thu Aug 30 15:27:17 2012
> > > > > [0]PETSC ERROR: Configure options --with-debugging=0
> --with-mpi-dir=/usr/lib/mpich2 --download-hypre=1
> --prefix=/home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib
> > > > > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > > > > [0]PETSC ERROR: PCCreate_HMPI() line 283 in
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/pc/impls/openmp/hpc.c
> > > > > [0]PETSC ERROR: PCSetType() line 83 in
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/pc/interface/pcset.c
> > > > > [0]PETSC ERROR: PCSetFromOptions() line 188 in
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/pc/interface/pcset.c
> > > > > [0]PETSC ERROR: KSPSetFromOptions() line 287 in
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/ksp/interface/itcl.c
> > > > > [0]PETSC ERROR: --------------------- Error Message
> ------------------------------------
> > > > > [0]PETSC ERROR: No support for this operation for this object type!
> > > > > [0]PETSC ERROR: PC does not have apply!
> > > > > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > > > > [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 3, Wed Aug 29
> 11:26:24 CDT 2012
> > > > > [0]PETSC ERROR: See docs/changes/index.html for recent updates.
> > > > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
> > > > > [0]PETSC ERROR: See docs/index.html for manual pages.
> > > > > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > > > > [0]PETSC ERROR: ../../esd-tough2/xt2_eos4 on a arch-linu named
> gilbert by gpau Fri Aug 31 13:00:31 2012
> > > > > [0]PETSC ERROR: Libraries linked from
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/lib
> > > > > [0]PETSC ERROR: Configure run at Thu Aug 30 15:27:17 2012
> > > > > [0]PETSC ERROR: Configure options --with-debugging=0
> --with-mpi-dir=/usr/lib/mpich2 --download-hypre=1
> --prefix=/home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib
> > > > > [0]PETSC ERROR:
> ------------------------------------------------------------------------
> > > > > [0]PETSC ERROR: PCApply() line 382 in
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/pc/interface/precon.c
> > > > > [0]PETSC ERROR: KSPInitialResidual() line 64 in
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/ksp/interface/itres.c
> > > > > [0]PETSC ERROR: KSPSolve_GMRES() line 230 in
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/ksp/impls/gmres/gmres.c
> > > > > [0]PETSC ERROR: KSPSolve() line 446 in
> /home/gpau/tough_codes/esd-tough2/build/Linux-x86_64-Debug-MPI-EOS4/toughlib/tpls/petsc/petsc-3.3-p3-source/src/ksp/ksp/interface/itfunc.c
> > > > >
> > > > > I note that the error appears to occur at the same point.
> > > > >
> > > > > George
> > > > >
> > > > >
> > > > > On Fri, Aug 31, 2012 at 11:31 AM, Barry Smith <bsmith at mcs.anl.gov>
> wrote:
> > > > >
> > > > > On Aug 31, 2012, at 1:27 PM, George Pau <gpau at lbl.gov> wrote:
> > > > >
> > > > > > Hi Barry,
> > > > > >
> > > > > > 1. It is the exact same error related to MPI_ERR_COMM and
> MPI_Bcast.
> > > > >
> > > > >    That should not happen. Please run and send all the output,
> including the exact command line used.
> > > > >
> > > > >
> > > > > >  I am currently using the MPICH2 distribution provided by Ubuntu,
> but if the MPICH version that PETSc downloads with --download-mpich works, I
> can use that.
> > > > > > 2. If I use hmpi_merge_size, I will need to launch mpiexec with
> more than 1 CPU.  But PETSc will complain that the pc_type hmpi can only
> be used in serial.
> > > > >
> > > > >    That should not happen. Run with 2 MPI processes and
> -hmpi_merge_size 2  and send the complete error message.
> > > > >
> > > > >
> > > > >   Barry
> > > > >
> > > > > >
> > > > > > George
> > > > > >
> > > > > >
> > > > > > On Aug 31, 2012, at 11:17 AM, Barry Smith wrote:
> > > > > >
> > > > > >>
> > > > > >> On Aug 30, 2012, at 10:02 PM, George Pau <gpau at lbl.gov> wrote:
> > > > > >>
> > > > > >>> Hi Barry,
> > > > > >>>
> > > > > >>> I tried with the addition of
> > > > > >>>
> > > > > >>> -hmpi_spawn_size 3
> > > > > >>>
> > > > > >>> but I am still getting the same error though.
> > > > > >>
> > > > > >>   The EXACT same error? Or some other error?
> > > > > >>
> > > > > >>    What happens if you run with the -hmpi_merge_size <size>
> option instead?
> > > > > >>
> > > > > >>  Barry
> > > > > >>
> > > > > >> 1) I am getting a crash with the spawn version that I suspect is
> due to spawn-related bugs in the MPICH version I am using.
> > > > > >>
> > > > > >> 2) I am getting errors with the merge version due to Apple's
> ASLR, which they make hard to turn off.
> > > > > >>
> > > > > >>
> > > > > >>> I am using mpich2.  Any other options to try?
> > > > > >>>
> > > > > >>> George
> > > > > >>>
> > > > > >>>
> > > > > >>> On Aug 30, 2012, at 7:28 PM, Barry Smith wrote:
> > > > > >>>
> > > > > >>>>
> > > > > >>>> On Aug 30, 2012, at 7:24 PM, George Pau <gpau at lbl.gov> wrote:
> > > > > >>>>
> > > > > >>>>> Hi,
> > > > > >>>>>
> > > > > >>>>> I have some issues using -pc_type hmpi.  I used the same
> settings found at
> > > > > >>>>>
> > > > > >>>>>
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCHMPI.html
> > > > > >>>>>
> > > > > >>>>> i.e.
> > > > > >>>>> -pc_type hmpi
> > > > > >>>>> -ksp_type preonly
> > > > > >>>>> -hmpi_ksp_type cg
> > > > > >>>>> -hmpi_pc_type hypre
> > > > > >>>>> -hmpi_pc_hypre_type boomeramg
> > > > > >>>>>
> > > > > >>>>> My command  is
> > > > > >>>>>
> > > > > >>>>> mpiexec -n 1 myprogram
> > > > > >>>>
> > > > > >>>> Sorry the documentation doesn't make this clearer. You need
> to start PETSc with special options to get the "worker" processes
> initialized. The manual page for PCHMPI says:
> > > > > >>>>
> > > > > >>>> See PetscHMPIMerge() and PetscHMPISpawn() for two ways to
> start up MPI for use with this preconditioner
> > > > > >>>>
> > > > > >>>> This will tell you what option to start PETSc up with.
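> > > > > >>>>
> > > > > >>>> Roughly (illustrative only, with example sizes; the solver options
> > > > > >>>> are the same ones listed above), the two startup modes look like:
> > > > > >>>>
> > > > > >>>>   # spawn: launch one MPI process and let PETSc spawn the worker processes
> > > > > >>>>   mpiexec -n 1 myprogram -hmpi_spawn_size 3 -pc_type hmpi -ksp_type preonly \
> > > > > >>>>     -hmpi_ksp_type cg -hmpi_pc_type hypre -hmpi_pc_hypre_type boomeramg
> > > > > >>>>
> > > > > >>>>   # merge: launch several MPI processes and merge them into the worker pool
> > > > > >>>>   mpiexec -n 2 myprogram -hmpi_merge_size 2 -pc_type hmpi -ksp_type preonly \
> > > > > >>>>     -hmpi_ksp_type cg -hmpi_pc_type hypre -hmpi_pc_hypre_type boomeramg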
> > > > > >>>>
> > > > > >>>> I will fix the PC so that it prints a far more useful error
> message.
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>
> > > > > >>>> Barry
> > > > > >>>>
> > > > > >>>>
> > > > > >>>>>
> > > > > >>>>> But, I get
> > > > > >>>>>
> > > > > >>>>> [gilbert:4041] *** An error occurred in MPI_Bcast
> > > > > >>>>> [gilbert:4041] *** on communicator MPI_COMM_WORLD
> > > > > >>>>> [gilbert:4041] *** MPI_ERR_COMM: invalid communicator
> > > > > >>>>> [gilbert:4041] *** MPI_ERRORS_ARE_FATAL (your MPI job will
> now abort)
> > > > > >>>>>
> > > > > >>>>> with openmpi.  I get similar error with mpich2
> > > > > >>>>>
> > > > > >>>>> Fatal error in PMPI_Bcast: Invalid communicator, error stack:
> > > > > >>>>> PMPI_Bcast(1478): MPI_Bcast(buf=0x7fffb683479c, count=1,
> MPI_INT, root=0, comm=0x0) failed
> > > > > >>>>> PMPI_Bcast(1418): Invalid communicator
> > > > > >>>>>
> > > > > >>>>> I couldn't figure out what is wrong.  My PETSc is version
> 3.3-p3 and the configuration is --with-debugging=0
> --with-mpi-dir=/usr/lib/openmpi --download-hypre=1, and I am on an Ubuntu
> machine.
> > > > > >>>>>
> > > > > >>>>> Note that with the default pc_type and ksp_type, everything
> is fine.  It was also tested with multiple processors.  I am wondering whether
> there are some options that I am not specifying correctly?
> > > > > >>>>>
> > > > > >>>>> --
> > > > > >>>>> George Pau
> > > > > >>>>> Earth Sciences Division
> > > > > >>>>> Lawrence Berkeley National Laboratory
> > > > > >>>>> One Cyclotron, MS 74-120
> > > > > >>>>> Berkeley, CA 94720
> > > > > >>>>>
> > > > > >>>>> (510) 486-7196
> > > > > >>>>> gpau at lbl.gov
> > > > > >>>>> http://esd.lbl.gov/about/staff/georgepau/
> > > > > >>>>>
> > > > > >>>>
> > > > > >>>
> > > > > >>
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > George Pau
> > > > > Earth Sciences Division
> > > > > Lawrence Berkeley National Laboratory
> > > > > One Cyclotron, MS 74-120
> > > > > Berkeley, CA 94720
> > > > >
> > > > > (510) 486-7196
> > > > > gpau at lbl.gov
> > > > > http://esd.lbl.gov/about/staff/georgepau/
> > > > >
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > George Pau
> > > > Earth Sciences Division
> > > > Lawrence Berkeley National Laboratory
> > > > One Cyclotron, MS 74-120
> > > > Berkeley, CA 94720
> > > >
> > > > (510) 486-7196
> > > > gpau at lbl.gov
> > > > http://esd.lbl.gov/about/staff/georgepau/
> > > >
> > >
> > >
> > >
> > >
> > > --
> > > George Pau
> > > Earth Sciences Division
> > > Lawrence Berkeley National Laboratory
> > > One Cyclotron, MS 74-120
> > > Berkeley, CA 94720
> > >
> > > (510) 486-7196
> > > gpau at lbl.gov
> > > http://esd.lbl.gov/about/staff/georgepau/
> > >
> >
> >
> >
> > --
> > George Pau
> > Earth Sciences Division
> > Lawrence Berkeley National Laboratory
> > One Cyclotron, MS 74-120
> > Berkeley, CA 94720
> >
> > (510) 486-7196
> > gpau at lbl.gov
> > http://esd.lbl.gov/about/staff/georgepau/
> >
>



-- 
George Pau
Earth Sciences Division
Lawrence Berkeley National Laboratory
One Cyclotron, MS 74-120
Berkeley, CA 94720

(510) 486-7196
gpau at lbl.gov
http://esd.lbl.gov/about/staff/georgepau/