From Eric.Chamberland at giref.ulaval.ca Thu Dec 1 08:48:13 2016 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Thu, 1 Dec 2016 09:48:13 -0500 Subject: [petsc-users] How to zero entries in a vec, including ghosts? Message-ID: <406c0280-3174-9105-6d25-49dbfac60d32@giref.ulaval.ca> Hi, I try to find how to zeros all vec entries, including ghosts, without doing any communications... Since VecSet does not modify ghost values, we can do VecGhostUpdateBegin(v,INSERT_VALUES,SCATTER_FORWARD); VecGhostUpdateEnd(v,INSERT_VALUES,SCATTER_FORWARD); But that is somewhat "heavy" just to put zeros in a vec on all processes... Shouldn't VecZeroEntries be the function that should do the work correctly? Thanks, Eric From knepley at gmail.com Thu Dec 1 08:59:17 2016 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 1 Dec 2016 08:59:17 -0600 Subject: [petsc-users] How to zero entries in a vec, including ghosts? In-Reply-To: <406c0280-3174-9105-6d25-49dbfac60d32@giref.ulaval.ca> References: <406c0280-3174-9105-6d25-49dbfac60d32@giref.ulaval.ca> Message-ID: On Thu, Dec 1, 2016 at 8:48 AM, Eric Chamberland < Eric.Chamberland at giref.ulaval.ca> wrote: > Hi, > > I try to find how to zeros all vec entries, including ghosts, without > doing any communications... > > Since VecSet does not modify ghost values, we can do > > VecGhostUpdateBegin(v,INSERT_VALUES,SCATTER_FORWARD); > VecGhostUpdateEnd(v,INSERT_VALUES,SCATTER_FORWARD); > > But that is somewhat "heavy" just to put zeros in a vec on all processes... > > Shouldn't VecZeroEntries be the function that should do the work correctly? > How about VecGhostGetLocalForm(x,&xlocal); VecZeroEntries(xlocal); VecGhostRestoreLocalForm(x,&xlocal); Thanks, Matt > Thanks, > > Eric > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From niko.karin at gmail.com Fri Dec 2 10:36:08 2016 From: niko.karin at gmail.com (Karin&NiKo) Date: Fri, 2 Dec 2016 17:36:08 +0100 Subject: [petsc-users] Set saddle-point structure in parallel Message-ID: Dear all, Thanks to Matt's help, I have been able to set up a fieldsplit preconditioner for a Stokes-like problem. But it was in sequential! Now I am facing new issues when trying to set up the saddle-point structure in parallel. Well, I have a matrix with 38 DOF. In the global numbering, the pressure DOF are numbered : 2,5,8,11,14,17 and the velocity DOF are the others. The matrix is distributed on 2 procs, the rows 0 to 18 on proc0, the rows from 19 to 38 on procs1. 
I have set the following IS in order to pass them to the PCFieldSplit : call ISCreateGeneral(PETSC_COMM_SELF, nbddl0, vec_ddl0, PETSC_COPY_VALUES, is0, ierr) call ISCreateGeneral(PETSC_COMM_SELF, nbddl1, vec_ddl1, PETSC_COPY_VALUES, is1, ierr) This is what they contain : is0 on proc0 : ------------------- IS Object: 1 MPI processes type: general Number of indices in set 19 0 19 1 20 2 21 3 22 4 23 5 24 6 25 7 26 8 27 9 28 10 29 11 30 12 31 13 32 14 33 15 34 16 35 17 36 18 37 is1 on proc0 : ------------------- IS Object: 1 MPI processes type: general Number of indices in set 0 is0 on proc1 : ------------------- IS Object: 1 MPI processes type: general Number of indices in set 13 0 0 1 1 2 3 3 4 4 6 5 7 6 9 7 10 8 12 9 13 10 15 11 16 12 18 is1 on proc1 : ------------------- IS Object: 1 MPI processes type: general Number of indices in set 6 0 2 1 5 2 8 3 11 4 14 5 17 Then I pass them to the FieldSplit : call PCFieldSplitSetIS(pc,'0',is0, ierr) call PCFieldSplitSetIS(pc,'1',is1, ierr) But when the PC is set up, PETSc complains about : [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [1]PETSC ERROR: Nonconforming object sizes [1]PETSC ERROR: Local column sizes 32 do not add up to total number of columns 19 [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [1]PETSC ERROR: Petsc Release Version 3.7.2, Jun, 05, 2016 [1]PETSC ERROR: \C0\E3o on a arch-linux2-c-debug named dsp0780450 by B07947 Fri Dec 2 17:07:54 2016 [1]PETSC ERROR: Configure options --prefix=/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/Install --with-mpi=yes --with-x=yes --download-ml=/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/ml-6.2-p3.tar.gz --with-mumps-lib="-L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Mumps-502_consortium_aster1/MPI/lib -lzmumps -ldmumps -lmumps_common -lpord -L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Scotch_aster-604_aster6/MPI/lib -lesmumps -lptscotch -lptscotcherr -lptscotcherrexit -lscotch -lscotcherr -lscotcherrexit -L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Parmetis_aster-403_aster/lib -lparmetis -L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Metis_aster-510_aster1/lib -lmetis -L/usr/lib -lscalapack-openmpi -L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi -lblacsF77init-openmpi -L/usr/lib/x86_64-linux-gnu -lgomp " --with-mumps-include=/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Mumps-502_consortium_aster1/MPI/include --with-scalapack-lib="-L/usr/lib -lscalapack-openmpi" --with-blacs-lib="-L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi -lblacsF77init-openmpi" --with-blas-lib="-L/usr/lib -lopenblas -lcblas" --with-lapack-lib="-L/usr/lib -llapack" [1]PETSC ERROR: #1 MatGetSubMatrix_MPIAIJ_Private() line 3181 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/mat/impls/aij/mpi/mpiaij.c [1]PETSC ERROR: #2 MatGetSubMatrix_MPIAIJ() line 3100 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/mat/impls/aij/mpi/mpiaij.c [1]PETSC ERROR: #3 MatGetSubMatrix() line 7825 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/mat/interface/matrix.c [1]PETSC ERROR: #4 PCSetUp_FieldSplit() line 560 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c [1]PETSC ERROR: #5 PCSetUp() line 968 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ksp/pc/interface/precon.c [1]PETSC ERROR: #6 KSPSetUp() line 390 in 
/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ksp/ksp/interface/itfunc.c I am doing something wrong but I cannot see how I should specify the layout of my fields. Thanks in advance, Nicolas [image: Images int?gr?es 1] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 34631 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Matrix38.ascii Type: application/octet-stream Size: 11580 bytes Desc: not available URL: From bsmith at mcs.anl.gov Fri Dec 2 11:34:05 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 2 Dec 2016 11:34:05 -0600 Subject: [petsc-users] Set saddle-point structure in parallel In-Reply-To: References: Message-ID: <9FA2A4A2-8196-4B35-84EF-67ABCCC7FD21@mcs.anl.gov> Each process needs to provide the IS that contain only local entries for that process. It looks like you might be doing the opposite. > On Dec 2, 2016, at 10:36 AM, Karin&NiKo wrote: > > Dear all, > > Thanks to Matt's help, I have been able to set up a fieldsplit preconditioner for a Stokes-like problem. But it was in sequential! Now I am facing new issues when trying to set up the saddle-point structure in parallel. > > Well, I have a matrix with 38 DOF. In the global numbering, the pressure DOF are numbered : 2,5,8,11,14,17 and the velocity DOF are the others. The matrix is distributed on 2 procs, the rows 0 to 18 on proc0, the rows from 19 to 38 on procs1. > I have set the following IS in order to pass them to the PCFieldSplit : > call ISCreateGeneral(PETSC_COMM_SELF, nbddl0, vec_ddl0, PETSC_COPY_VALUES, is0, ierr) > call ISCreateGeneral(PETSC_COMM_SELF, nbddl1, vec_ddl1, PETSC_COPY_VALUES, is1, ierr) > > This is what they contain : > > is0 on proc0 : > ------------------- > IS Object: 1 MPI processes > type: general > Number of indices in set 19 > 0 19 > 1 20 > 2 21 > 3 22 > 4 23 > 5 24 > 6 25 > 7 26 > 8 27 > 9 28 > 10 29 > 11 30 > 12 31 > 13 32 > 14 33 > 15 34 > 16 35 > 17 36 > 18 37 > > is1 on proc0 : > ------------------- > IS Object: 1 MPI processes > type: general > Number of indices in set 0 > > is0 on proc1 : > ------------------- > IS Object: 1 MPI processes > type: general > Number of indices in set 13 > 0 0 > 1 1 > 2 3 > 3 4 > 4 6 > 5 7 > 6 9 > 7 10 > 8 12 > 9 13 > 10 15 > 11 16 > 12 18 > > is1 on proc1 : > ------------------- > IS Object: 1 MPI processes > type: general > Number of indices in set 6 > 0 2 > 1 5 > 2 8 > 3 11 > 4 14 > 5 17 > > Then I pass them to the FieldSplit : > call PCFieldSplitSetIS(pc,'0',is0, ierr) > call PCFieldSplitSetIS(pc,'1',is1, ierr) > > > But when the PC is set up, PETSc complains about : > > [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [1]PETSC ERROR: Nonconforming object sizes > [1]PETSC ERROR: Local column sizes 32 do not add up to total number of columns 19 > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [1]PETSC ERROR: Petsc Release Version 3.7.2, Jun, 05, 2016 > [1]PETSC ERROR: \C0\E3o on a arch-linux2-c-debug named dsp0780450 by B07947 Fri Dec 2 17:07:54 2016 > [1]PETSC ERROR: Configure options --prefix=/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/Install --with-mpi=yes --with-x=yes --download-ml=/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/ml-6.2-p3.tar.gz --with-mumps-lib="-L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Mumps-502_consortium_aster1/MPI/lib -lzmumps -ldmumps -lmumps_common -lpord -L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Scotch_aster-604_aster6/MPI/lib -lesmumps -lptscotch -lptscotcherr -lptscotcherrexit -lscotch -lscotcherr -lscotcherrexit -L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Parmetis_aster-403_aster/lib -lparmetis -L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Metis_aster-510_aster1/lib -lmetis -L/usr/lib -lscalapack-openmpi -L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi -lblacsF77init-openmpi -L/usr/lib/x86_64-linux-gnu -lgomp " --with-mumps-include=/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Mumps-502_consortium_aster1/MPI/include --with-scalapack-lib="-L/usr/lib -lscalapack-openmpi" --with-blacs-lib="-L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi -lblacsF77init-openmpi" --with-blas-lib="-L/usr/lib -lopenblas -lcblas" --with-lapack-lib="-L/usr/lib -llapack" > [1]PETSC ERROR: #1 MatGetSubMatrix_MPIAIJ_Private() line 3181 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/mat/impls/aij/mpi/mpiaij.c > [1]PETSC ERROR: #2 MatGetSubMatrix_MPIAIJ() line 3100 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/mat/impls/aij/mpi/mpiaij.c > [1]PETSC ERROR: #3 MatGetSubMatrix() line 7825 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/mat/interface/matrix.c > [1]PETSC ERROR: #4 PCSetUp_FieldSplit() line 560 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c > [1]PETSC ERROR: #5 PCSetUp() line 968 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ksp/pc/interface/precon.c > [1]PETSC ERROR: #6 KSPSetUp() line 390 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ksp/ksp/interface/itfunc.c > > > I am doing something wrong but I cannot see how I should specify the layout of my fields. > > Thanks in advance, > Nicolas > > > > > > > > From niko.karin at gmail.com Fri Dec 2 12:13:11 2016 From: niko.karin at gmail.com (Karin&NiKo) Date: Fri, 2 Dec 2016 19:13:11 +0100 Subject: [petsc-users] Set saddle-point structure in parallel In-Reply-To: <9FA2A4A2-8196-4B35-84EF-67ABCCC7FD21@mcs.anl.gov> References: <9FA2A4A2-8196-4B35-84EF-67ABCCC7FD21@mcs.anl.gov> Message-ID: Thank you Barry. If I understand well, each process needs to provide the IS of the global number of each local row of the considered field. Right? This is what I tried to code. I am gonna check my implementation. Nicolas 2016-12-02 18:34 GMT+01:00 Barry Smith : > > Each process needs to provide the IS that contain only local entries > for that process. > > It looks like you might be doing the opposite. > > > > On Dec 2, 2016, at 10:36 AM, Karin&NiKo wrote: > > > > Dear all, > > > > Thanks to Matt's help, I have been able to set up a fieldsplit > preconditioner for a Stokes-like problem. But it was in sequential! Now I > am facing new issues when trying to set up the saddle-point structure in > parallel. > > > > Well, I have a matrix with 38 DOF. 
In the global numbering, the pressure > DOF are numbered : 2,5,8,11,14,17 and the velocity DOF are the others. The > matrix is distributed on 2 procs, the rows 0 to 18 on proc0, the rows from > 19 to 38 on procs1. > > I have set the following IS in order to pass them to the PCFieldSplit : > > call ISCreateGeneral(PETSC_COMM_SELF, nbddl0, vec_ddl0, > PETSC_COPY_VALUES, is0, ierr) > > call ISCreateGeneral(PETSC_COMM_SELF, nbddl1, vec_ddl1, > PETSC_COPY_VALUES, is1, ierr) > > > > This is what they contain : > > > > is0 on proc0 : > > ------------------- > > IS Object: 1 MPI processes > > type: general > > Number of indices in set 19 > > 0 19 > > 1 20 > > 2 21 > > 3 22 > > 4 23 > > 5 24 > > 6 25 > > 7 26 > > 8 27 > > 9 28 > > 10 29 > > 11 30 > > 12 31 > > 13 32 > > 14 33 > > 15 34 > > 16 35 > > 17 36 > > 18 37 > > > > is1 on proc0 : > > ------------------- > > IS Object: 1 MPI processes > > type: general > > Number of indices in set 0 > > > > is0 on proc1 : > > ------------------- > > IS Object: 1 MPI processes > > type: general > > Number of indices in set 13 > > 0 0 > > 1 1 > > 2 3 > > 3 4 > > 4 6 > > 5 7 > > 6 9 > > 7 10 > > 8 12 > > 9 13 > > 10 15 > > 11 16 > > 12 18 > > > > is1 on proc1 : > > ------------------- > > IS Object: 1 MPI processes > > type: general > > Number of indices in set 6 > > 0 2 > > 1 5 > > 2 8 > > 3 11 > > 4 14 > > 5 17 > > > > Then I pass them to the FieldSplit : > > call PCFieldSplitSetIS(pc,'0',is0, ierr) > > call PCFieldSplitSetIS(pc,'1',is1, ierr) > > > > > > But when the PC is set up, PETSc complains about : > > > > [1]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > > [1]PETSC ERROR: Nonconforming object sizes > > [1]PETSC ERROR: Local column sizes 32 do not add up to total number of > columns 19 > > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> > [1]PETSC ERROR: Petsc Release Version 3.7.2, Jun, 05, 2016 > > [1]PETSC ERROR: > > > \C0\E3o on a > arch-linux2-c-debug named dsp0780450 by B07947 Fri Dec 2 17:07:54 2016 > > [1]PETSC ERROR: Configure options --prefix=/home/B07947/dev/ > codeaster-prerequisites/petsc-3.7.2/Install --with-mpi=yes --with-x=yes > --download-ml=/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/ml-6.2-p3.tar.gz > --with-mumps-lib="-L/home/B07947/dev/codeaster-prerequisites/v13/ > prerequisites/Mumps-502_consortium_aster1/MPI/lib -lzmumps -ldmumps > -lmumps_common -lpord -L/home/B07947/dev/codeaster-prerequisites/v13/ > prerequisites/Scotch_aster-604_aster6/MPI/lib -lesmumps -lptscotch > -lptscotcherr -lptscotcherrexit -lscotch -lscotcherr -lscotcherrexit > -L/home/B07947/dev/codeaster-prerequisites/v13/ > prerequisites/Parmetis_aster-403_aster/lib -lparmetis > -L/home/B07947/dev/codeaster-prerequisites/v13/ > prerequisites/Metis_aster-510_aster1/lib -lmetis -L/usr/lib > -lscalapack-openmpi -L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi > -lblacsF77init-openmpi -L/usr/lib/x86_64-linux-gnu -lgomp " > --with-mumps-include=/home/B07947/dev/codeaster-prerequisites/v13/ > prerequisites/Mumps-502_consortium_aster1/MPI/include > --with-scalapack-lib="-L/usr/lib -lscalapack-openmpi" > --with-blacs-lib="-L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi > -lblacsF77init-openmpi" --with-blas-lib="-L/usr/lib -lopenblas -lcblas" > --with-lapack-lib="-L/usr/lib -llapack" > > [1]PETSC ERROR: #1 MatGetSubMatrix_MPIAIJ_Private() line 3181 in > /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ > mat/impls/aij/mpi/mpiaij.c > > [1]PETSC ERROR: #2 MatGetSubMatrix_MPIAIJ() line 3100 in > /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ > mat/impls/aij/mpi/mpiaij.c > > [1]PETSC ERROR: #3 MatGetSubMatrix() line 7825 in > /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ > mat/interface/matrix.c > > [1]PETSC ERROR: #4 PCSetUp_FieldSplit() line 560 in > /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ > ksp/pc/impls/fieldsplit/fieldsplit.c > > [1]PETSC ERROR: #5 PCSetUp() line 968 in /home/B07947/dev/codeaster- > prerequisites/petsc-3.7.2/src/ksp/pc/interface/precon.c > > [1]PETSC ERROR: #6 KSPSetUp() line 390 in /home/B07947/dev/codeaster- > prerequisites/petsc-3.7.2/src/ksp/ksp/interface/itfunc.c > > > > > > I am doing something wrong but I cannot see how I should specify the > layout of my fields. > > > > Thanks in advance, > > Nicolas > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Dec 2 12:39:08 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 2 Dec 2016 12:39:08 -0600 Subject: [petsc-users] Set saddle-point structure in parallel In-Reply-To: References: <9FA2A4A2-8196-4B35-84EF-67ABCCC7FD21@mcs.anl.gov> Message-ID: <30013BDA-08BB-4BE6-A6E3-F3F66066F846@mcs.anl.gov> > On Dec 2, 2016, at 12:13 PM, Karin&NiKo wrote: > > Thank you Barry. > If I understand well, each process needs to provide the IS of the global number of each local row of the considered field. Right? Yes > This is what I tried to code. I am gonna check my implementation. > > Nicolas > > 2016-12-02 18:34 GMT+01:00 Barry Smith : > > Each process needs to provide the IS that contain only local entries for that process. > > It looks like you might be doing the opposite. 
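A minimal C sketch of that advice, using this thread's layout (38 global rows split across two ranks, pressure rows 2, 5, 8, 11, 14, 17): each rank walks only its own ownership range and sorts every owned global row into either the velocity or the pressure index set, so the two ISs on a rank together cover exactly that rank's rows and nothing else. The ROW_IS_PRESSURE test below just encodes this example's numbering and is an assumption, not anything from PETSc.

    #include <petscksp.h>

    /* Pressure rows in this example: 2,5,8,11,14,17 (every third row below 18). */
    #define ROW_IS_PRESSURE(r) ((r) < 18 && (r) % 3 == 2)

    PetscErrorCode SetSaddlePointSplits(Mat A, PC pc)
    {
      PetscInt       rstart, rend, row, nv = 0, np = 0;
      PetscInt      *vrows, *prows;
      IS             isV, isP;
      PetscErrorCode ierr;

      ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
      ierr = PetscMalloc2(rend-rstart, &vrows, rend-rstart, &prows);CHKERRQ(ierr);
      for (row = rstart; row < rend; ++row) {            /* only rows owned by this rank */
        if (ROW_IS_PRESSURE(row)) prows[np++] = row;     /* global index, locally owned  */
        else                      vrows[nv++] = row;
      }
      ierr = ISCreateGeneral(PetscObjectComm((PetscObject)A), nv, vrows, PETSC_COPY_VALUES, &isV);CHKERRQ(ierr);
      ierr = ISCreateGeneral(PetscObjectComm((PetscObject)A), np, prows, PETSC_COPY_VALUES, &isP);CHKERRQ(ierr);
      ierr = PCFieldSplitSetIS(pc, "0", isV);CHKERRQ(ierr);
      ierr = PCFieldSplitSetIS(pc, "1", isP);CHKERRQ(ierr);
      ierr = ISDestroy(&isV);CHKERRQ(ierr);
      ierr = ISDestroy(&isP);CHKERRQ(ierr);
      ierr = PetscFree2(vrows, prows);CHKERRQ(ierr);
      return 0;
    }

Built this way, each rank's two index sets list only rows from that rank's own range (0..18 on proc0, 19..37 on proc1 in this example), rather than the other rank's rows.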
> > > > On Dec 2, 2016, at 10:36 AM, Karin&NiKo wrote: > > > > Dear all, > > > > Thanks to Matt's help, I have been able to set up a fieldsplit preconditioner for a Stokes-like problem. But it was in sequential! Now I am facing new issues when trying to set up the saddle-point structure in parallel. > > > > Well, I have a matrix with 38 DOF. In the global numbering, the pressure DOF are numbered : 2,5,8,11,14,17 and the velocity DOF are the others. The matrix is distributed on 2 procs, the rows 0 to 18 on proc0, the rows from 19 to 38 on procs1. > > I have set the following IS in order to pass them to the PCFieldSplit : > > call ISCreateGeneral(PETSC_COMM_SELF, nbddl0, vec_ddl0, PETSC_COPY_VALUES, is0, ierr) > > call ISCreateGeneral(PETSC_COMM_SELF, nbddl1, vec_ddl1, PETSC_COPY_VALUES, is1, ierr) > > > > This is what they contain : > > > > is0 on proc0 : > > ------------------- > > IS Object: 1 MPI processes > > type: general > > Number of indices in set 19 > > 0 19 > > 1 20 > > 2 21 > > 3 22 > > 4 23 > > 5 24 > > 6 25 > > 7 26 > > 8 27 > > 9 28 > > 10 29 > > 11 30 > > 12 31 > > 13 32 > > 14 33 > > 15 34 > > 16 35 > > 17 36 > > 18 37 > > > > is1 on proc0 : > > ------------------- > > IS Object: 1 MPI processes > > type: general > > Number of indices in set 0 > > > > is0 on proc1 : > > ------------------- > > IS Object: 1 MPI processes > > type: general > > Number of indices in set 13 > > 0 0 > > 1 1 > > 2 3 > > 3 4 > > 4 6 > > 5 7 > > 6 9 > > 7 10 > > 8 12 > > 9 13 > > 10 15 > > 11 16 > > 12 18 > > > > is1 on proc1 : > > ------------------- > > IS Object: 1 MPI processes > > type: general > > Number of indices in set 6 > > 0 2 > > 1 5 > > 2 8 > > 3 11 > > 4 14 > > 5 17 > > > > Then I pass them to the FieldSplit : > > call PCFieldSplitSetIS(pc,'0',is0, ierr) > > call PCFieldSplitSetIS(pc,'1',is1, ierr) > > > > > > But when the PC is set up, PETSc complains about : > > > > [1]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > > [1]PETSC ERROR: Nonconforming object sizes > > [1]PETSC ERROR: Local column sizes 32 do not add up to total number of columns 19 > > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> > [1]PETSC ERROR: Petsc Release Version 3.7.2, Jun, 05, 2016 > > [1]PETSC ERROR: \C0\E3o on a arch-linux2-c-debug named dsp0780450 by B07947 Fri Dec 2 17:07:54 2016 > > [1]PETSC ERROR: Configure options --prefix=/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/Install --with-mpi=yes --with-x=yes --download-ml=/home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/ml-6.2-p3.tar.gz --with-mumps-lib="-L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Mumps-502_consortium_aster1/MPI/lib -lzmumps -ldmumps -lmumps_common -lpord -L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Scotch_aster-604_aster6/MPI/lib -lesmumps -lptscotch -lptscotcherr -lptscotcherrexit -lscotch -lscotcherr -lscotcherrexit -L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Parmetis_aster-403_aster/lib -lparmetis -L/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Metis_aster-510_aster1/lib -lmetis -L/usr/lib -lscalapack-openmpi -L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi -lblacsF77init-openmpi -L/usr/lib/x86_64-linux-gnu -lgomp " --with-mumps-include=/home/B07947/dev/codeaster-prerequisites/v13/prerequisites/Mumps-502_consortium_aster1/MPI/include --with-scalapack-lib="-L/usr/lib -lscalapack-openmpi" --with-blacs-lib="-L/usr/lib -lblacs-openmpi -lblacsCinit-openmpi -lblacsF77init-openmpi" --with-blas-lib="-L/usr/lib -lopenblas -lcblas" --with-lapack-lib="-L/usr/lib -llapack" > > [1]PETSC ERROR: #1 MatGetSubMatrix_MPIAIJ_Private() line 3181 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/mat/impls/aij/mpi/mpiaij.c > > [1]PETSC ERROR: #2 MatGetSubMatrix_MPIAIJ() line 3100 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/mat/impls/aij/mpi/mpiaij.c > > [1]PETSC ERROR: #3 MatGetSubMatrix() line 7825 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/mat/interface/matrix.c > > [1]PETSC ERROR: #4 PCSetUp_FieldSplit() line 560 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ksp/pc/impls/fieldsplit/fieldsplit.c > > [1]PETSC ERROR: #5 PCSetUp() line 968 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ksp/pc/interface/precon.c > > [1]PETSC ERROR: #6 KSPSetUp() line 390 in /home/B07947/dev/codeaster-prerequisites/petsc-3.7.2/src/ksp/ksp/interface/itfunc.c > > > > > > I am doing something wrong but I cannot see how I should specify the layout of my fields. > > > > Thanks in advance, > > Nicolas > > > > > > > > > > > > > > > > > > From msdrezavand at gmail.com Sat Dec 3 11:33:20 2016 From: msdrezavand at gmail.com (Massoud Rezavand) Date: Sat, 3 Dec 2016 18:33:20 +0100 Subject: [petsc-users] Values of PETSC_DECIDE Message-ID: Dear PETSc team, Supposing to have a dynamic Mat and Vec, if I let PETSc to decide the number of local rows and local columns, i.e MatSetSizes(A, PETSC_DECIDE, PETSC_DECISDE, M, N) How can I get these numbers from PETSc? Thanks, Massoud -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Dec 3 11:37:06 2016 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 3 Dec 2016 11:37:06 -0600 Subject: [petsc-users] Values of PETSC_DECIDE In-Reply-To: References: Message-ID: On Sat, Dec 3, 2016 at 11:33 AM, Massoud Rezavand wrote: > Dear PETSc team, > > Supposing to have a dynamic Mat and Vec, if I let PETSc to decide the > number of local rows and local columns, i.e > > MatSetSizes(A, PETSC_DECIDE, PETSC_DECISDE, M, N) > > How can I get these numbers from PETSc? 
> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecGetLocalSize.html http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetLocalSize.html#MatGetLocalSize Matt > Thanks, > Massoud > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From msdrezavand at gmail.com Sat Dec 3 12:36:36 2016 From: msdrezavand at gmail.com (Massoud Rezavand) Date: Sat, 3 Dec 2016 19:36:36 +0100 Subject: [petsc-users] Values of PETSC_DECIDE In-Reply-To: References: Message-ID: Thanks Matt, the local "m" and "n" are calculated based on the number of processors, right? how about for the case where m and n are not equal for all processors (like the example presented in MatMPIAIJSetPreallocation)? how can I find the local m and n for each processor? I need the m and n for each processor to calculate the entries for d_nnz and o_nnz and then preallocate the matrix. Best Massoud On Sat, Dec 3, 2016 at 6:37 PM, Matthew Knepley wrote: > On Sat, Dec 3, 2016 at 11:33 AM, Massoud Rezavand > wrote: > >> Dear PETSc team, >> >> Supposing to have a dynamic Mat and Vec, if I let PETSc to decide the >> number of local rows and local columns, i.e >> >> MatSetSizes(A, PETSC_DECIDE, PETSC_DECISDE, M, N) >> >> How can I get these numbers from PETSc? >> > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/ > VecGetLocalSize.html > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/ > MatGetLocalSize.html#MatGetLocalSize > > Matt > > >> Thanks, >> Massoud >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Dec 3 12:46:01 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 3 Dec 2016 12:46:01 -0600 Subject: [petsc-users] Values of PETSC_DECIDE In-Reply-To: References: Message-ID: <5BB9F559-BA4A-48B1-A270-40B8DFDA6F78@mcs.anl.gov> > On Dec 3, 2016, at 12:36 PM, Massoud Rezavand wrote: > > Thanks Matt, > > the local "m" and "n" are calculated based on the number of processors, right? > > how about for the case where m and n are not equal for all processors (like the example presented in MatMPIAIJSetPreallocation)? how can I find the local m and n for each processor? > > I need the m and n for each processor to calculate the entries for d_nnz and o_nnz and then preallocate the matrix. You can't do this. By the time the local sizes are determined it is to late to set d_nnz and o_nnz You should call PetscSplitOwnership() once for rows and once for columns to get the local sizes from your global sizes and then use MatMPIAIJSetPreallocation() in the normal way. Barry > > Best > Massoud > > > > On Sat, Dec 3, 2016 at 6:37 PM, Matthew Knepley wrote: > On Sat, Dec 3, 2016 at 11:33 AM, Massoud Rezavand wrote: > Dear PETSc team, > > Supposing to have a dynamic Mat and Vec, if I let PETSc to decide the number of local rows and local columns, i.e > > MatSetSizes(A, PETSC_DECIDE, PETSC_DECISDE, M, N) > > How can I get these numbers from PETSc? 
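A short sketch of how the pieces named in this thread fit together when only the global sizes M and N are known up front: PetscSplitOwnership() pins down the local sizes first (so d_nnz/o_nnz can be sized for the owned rows), the matrix is then preallocated, and afterwards MatGetLocalSize()/MatGetOwnershipRange() report that same split. The constant per-row counts below are placeholders for whatever the application actually estimates.

    #include <petscmat.h>

    PetscErrorCode CreatePreallocatedMat(MPI_Comm comm, PetscInt M, PetscInt N, Mat *A)
    {
      PetscInt       m = PETSC_DECIDE, n = PETSC_DECIDE, i, mloc, nloc;
      PetscInt      *d_nnz, *o_nnz;
      PetscErrorCode ierr;

      ierr = PetscSplitOwnership(comm, &m, &M);CHKERRQ(ierr);     /* local row count    */
      ierr = PetscSplitOwnership(comm, &n, &N);CHKERRQ(ierr);     /* local column count */
      ierr = PetscMalloc2(m, &d_nnz, m, &o_nnz);CHKERRQ(ierr);
      for (i = 0; i < m; ++i) { d_nnz[i] = 5; o_nnz[i] = 2; }     /* placeholder estimates per owned row */

      ierr = MatCreate(comm, A);CHKERRQ(ierr);
      ierr = MatSetSizes(*A, m, n, M, N);CHKERRQ(ierr);
      ierr = MatSetType(*A, MATMPIAIJ);CHKERRQ(ierr);
      ierr = MatMPIAIJSetPreallocation(*A, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);
      ierr = PetscFree2(d_nnz, o_nnz);CHKERRQ(ierr);

      ierr = MatGetLocalSize(*A, &mloc, &nloc);CHKERRQ(ierr);     /* returns the same m, n chosen above */
      return 0;
    }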
> > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecGetLocalSize.html > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetLocalSize.html#MatGetLocalSize > > Matt > > Thanks, > Massoud > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > From leejearl at 126.com Sun Dec 4 01:58:12 2016 From: leejearl at 126.com (leejearl) Date: Sun, 4 Dec 2016 15:58:12 +0800 Subject: [petsc-users] Some routines are very expensive, such as DMPlexGetSupport and DMPlexPointLocalRef. Message-ID: <0e2ff2a9-5e25-894b-d3bb-3c3798eff856@126.com> Hi, all PETSc developer: Thank you for your great works. I have deploy my fvm code based on the PETSc. It works well, and the results are beautiful. But I found a problem that some of the functions, such as DMPlexGetSupport and DMPlexPointLocalRef, are very expensive. It costs a lot of times if such routines are involved. Is there any method one can use to reduce the time costs and improve the efficiency of the executable applications? Thanks leejearl -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Dec 4 07:34:49 2016 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 4 Dec 2016 07:34:49 -0600 Subject: [petsc-users] Some routines are very expensive, such as DMPlexGetSupport and DMPlexPointLocalRef. In-Reply-To: <0e2ff2a9-5e25-894b-d3bb-3c3798eff856@126.com> References: <0e2ff2a9-5e25-894b-d3bb-3c3798eff856@126.com> Message-ID: On Sun, Dec 4, 2016 at 1:58 AM, leejearl wrote: > Hi, all PETSc developer: > Thank you for your great works. I have deploy my fvm code based on the > PETSc. > It works well, and the results are beautiful. But I found a problem that > some of the > functions, such as DMPlexGetSupport and DMPlexPointLocalRef, are very > expensive. > > I can believe that some parts are expensive, but I think it is probably something other than GetSupport() and PointLocalRef(). Lets look at the code. First support is just two pointer lookups https://bitbucket.org/petsc/petsc/src/8191f1e31285033beeebf70760bc9786361aefca/src/dm/impls/plex/plex.c?at=master&fileviewer=file-view-default#plex.c-1502 and for Point LocalRef() its one lookup and arithmetic https://bitbucket.org/petsc/petsc/src/8191f1e31285033beeebf70760bc9786361aefca/src/dm/impls/plex/plexpoint.c?at=master&fileviewer=file-view-default#plexpoint.c-105 I have benchmark code that runs these, and they should definitely take < 1e-7s, and maybe 10-100 times less. You can look at Plex test ex9 to see some of it. What is taking a lot of time? Thanks, Matt It costs a lot of times if such routines are involved. Is there any method > one can use to reduce > the time costs and improve the efficiency of the executable applications? > Thanks > leejearl > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Dec 4 11:57:52 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 4 Dec 2016 11:57:52 -0600 Subject: [petsc-users] For Fortran users of PETSc development version Message-ID: For Fortran users of the PETSc development (git master branch) version I have updated and simplified the Fortran usage of PETSc in the past few weeks. 
I will put the branch barry/fortran-update into the master branch on Monday. The usage changes are A) for each Fortran function (and main) use the following subroutine mysubroutine(.....) #include use petscxxx implicit none For example if you are using SNES in your code you would have #include use petscsnes implicit none B) Instead of PETSC_NULL_OBJECT you must pass PETSC_NULL_XXX (for example PETSC_NULL_VEC) using the specific object type XXX that the function call is expecting. C) Objects can be declared either as XXX a or type(iXXX) a, for example Mat a or type(iMat) a. (Note that previously for those who used types it was type(Mat) but that can no longer be used. Notes: 1) There are no longer any .h90 files that may be included 2) Like C the include files are now nested so you no longer need to include for example #include #include #include #include #include you can just include #include 3) there is now type checking of most function calls. This will help eliminate bugs due to incorrect calling sequences. Note that Fortran distinguishes between a argument that is a scalar (zero dimensional array), a one dimensional array and a two dimensional array (etc). So you may get compile warnings because you are passing in an array when PETSc expects a scalar or vis-versa. If you get these simply fix your declaration of the variable to match what is expected. In some routines like MatSetValues() and friends you can pass either scalars, one dimensional arrays or two dimensional arrays, if you get errors here please send mail to petsc-maint at mcs.anl.gov and include enough of your code so we can see the dimensions of all your variables so we can fix the problems. 4) You can continue to use either fixed (.F extension) or free format (.F90 extension) for your source 5) All the examples in PETSc have been updated so consult them for clarifications. Please report any problems to petsc-maint at mcs.anl.gov Thanks Barry From bsmith at mcs.anl.gov Sun Dec 4 13:13:40 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 4 Dec 2016 13:13:40 -0600 Subject: [petsc-users] For Fortran users of PETSc development version In-Reply-To: References: Message-ID: <4726FF04-1D07-455F-88B4-FF2488DCAE07@mcs.anl.gov> Jed noticed a small mistake in my description. It is type(tXXX) not type(iXXX) if you chose to declare your variables that way. Note that declaring them via type(tXXX) or XXX is identical (XXX is just a macro for type(tXXX)). Barry > On Dec 4, 2016, at 11:57 AM, Barry Smith wrote: > > > For Fortran users of the PETSc development (git master branch) version > > > I have updated and simplified the Fortran usage of PETSc in the past few weeks. I will put the branch barry/fortran-update into the master branch on Monday. The usage changes are > > A) for each Fortran function (and main) use the following > > subroutine mysubroutine(.....) > #include > use petscxxx > implicit none > > For example if you are using SNES in your code you would have > > #include > use petscsnes > implicit none > > B) Instead of PETSC_NULL_OBJECT you must pass PETSC_NULL_XXX (for example PETSC_NULL_VEC) using the specific object type XXX that the function call is expecting. > > C) Objects can be declared either as XXX a or type(iXXX) a, for example Mat a or type(iMat) a. (Note that previously for those who used types it was type(Mat) but that can no longer be used. 
> > Notes: > > 1) There are no longer any .h90 files that may be included > > 2) Like C the include files are now nested so you no longer need to include for example > > #include > #include > #include > #include > #include > > you can just include > > #include > > 3) there is now type checking of most function calls. This will help eliminate bugs due to incorrect calling sequences. Note that Fortran distinguishes between a argument that is a scalar (zero dimensional array), a one dimensional array and a two dimensional array (etc). So you may get compile warnings because you are passing in an array when PETSc expects a scalar or vis-versa. If you get these simply fix your declaration of the variable to match what is expected. In some routines like MatSetValues() and friends you can pass either scalars, one dimensional arrays or two dimensional arrays, if you get errors here please send mail to petsc-maint at mcs.anl.gov and include enough of your code so we can see the dimensions of all your variables so we can fix the problems. > > 4) You can continue to use either fixed (.F extension) or free format (.F90 extension) for your source > > 5) All the examples in PETSc have been updated so consult them for clarifications. > > > Please report any problems to petsc-maint at mcs.anl.gov > > Thanks > > Barry > > From leejearl at 126.com Mon Dec 5 04:34:29 2016 From: leejearl at 126.com (=?GBK?B?wO68vg==?=) Date: Mon, 5 Dec 2016 18:34:29 +0800 (CST) Subject: [petsc-users] Some routines are very expensive, such as DMPlexGetSupport and DMPlexPointLocalRef. In-Reply-To: References: <0e2ff2a9-5e25-894b-d3bb-3c3798eff856@126.com> Message-ID: <75f69297.bc66.158ce8c9573.Coremail.leejearl@126.com> Hi Matt: Thank you for your kind reply. I am aware of this problem from my test case. I simulate the lid driven cavity by the code, and the grid is a 100x100 2D domain. I use the routine DMPlexReconstructGradientsFVM to compute the gradients and limiters. The limiter which I used in the code is the PETSCLIMITERMINMOD. I have march 1000 steps, and the time costs are more higher than I expected. Then, I have loop the function DMPlexReconstructGradientsFVM for 1000 times, and it costs nearly 170 seconds. I have browse the code of the routine DMPlexReconstructGradientsFVM. The arithmetic is very clean, so I think It was because of the lots of function calls? such as VecGetArray, DMPlexGetSupport and DMPlexPointLocalRef. I make a further test and recode the DMPlexReconstructGradientsFVM and named it as DMPlexReconstructGradientsFVM_1 by myself. When I loop the DMPlexReconstructGradientsFVM_1 for 1000 times, the time costs were reduced as 30 seconds. The modification in my own code is that I calls the function outside the loops, and then pass the data into the function DMPlexReconstructGradientsFVM_1. The program flow is like as follow VecGetArray DMPlexGetSupport DMPlexPointLocalRef ... for(i=0; i<1000;++i) { DMPlexReconstructGradientsFVM_1(data, ....) /* Here the data represent the data I extract from the DMPlex using the function VecGetArray and etc. */ } The code using DMPlexReconstructGradientsFVM look like for(i=0; i<1000;++i) { function DMPlexReconstructGradientsFVM { VecGetArray DMPlexGetSupport DMPlexPointLocalRef ... } } Compared with DMPlexReconstructGradientsFVM_1, DMPlexReconstructGradientsFVM has too many function calls. It makes the time costs very expensive. So, I write to you for helps that whether I can use some compiler options to reduce the time coses. Thanks. 
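The restructuring described in the message above, written out as a small C sketch. DMPlexReconstructGradientsFVM_1 stands for the hand-modified routine mentioned there; its argument list is assumed here purely for illustration. The sketch hoists only the Vec accessor calls, but the same idea applies to the precomputed DMPlexGetSupport/DMPlexPointLocalRef lookups the message describes: the data is extracted once, outside the time loop, and reused on every step instead of being re-fetched per call.

    #include <petscdmplex.h>

    /* The user's modified gradient routine (assumed signature): it receives
       data already extracted by the caller instead of fetching it itself. */
    extern PetscErrorCode DMPlexReconstructGradientsFVM_1(DM dm, const PetscScalar x[], PetscScalar grad[]);

    PetscErrorCode MarchWithHoistedAccessors(DM dm, Vec locX, Vec locGrad, PetscInt nsteps)
    {
      const PetscScalar *x;
      PetscScalar       *g;
      PetscInt           step;
      PetscErrorCode     ierr;

      ierr = VecGetArrayRead(locX, &x);CHKERRQ(ierr);   /* fetched once, not once per step */
      ierr = VecGetArray(locGrad, &g);CHKERRQ(ierr);
      for (step = 0; step < nsteps; ++step) {
        ierr = DMPlexReconstructGradientsFVM_1(dm, x, g);CHKERRQ(ierr);
      }
      ierr = VecRestoreArray(locGrad, &g);CHKERRQ(ierr);
      ierr = VecRestoreArrayRead(locX, &x);CHKERRQ(ierr);
      return 0;
    }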
leejearl At 2016-12-04 21:34:49, "Matthew Knepley" wrote: On Sun, Dec 4, 2016 at 1:58 AM, leejearl wrote: Hi, all PETSc developer: Thank you for your great works. I have deploy my fvm code based on the PETSc. It works well, and the results are beautiful. But I found a problem that some of the functions, such as DMPlexGetSupport and DMPlexPointLocalRef, are very expensive. I can believe that some parts are expensive, but I think it is probably something other than GetSupport() and PointLocalRef(). Lets look at the code. First support is just two pointer lookups https://bitbucket.org/petsc/petsc/src/8191f1e31285033beeebf70760bc9786361aefca/src/dm/impls/plex/plex.c?at=master&fileviewer=file-view-default#plex.c-1502 and for Point LocalRef() its one lookup and arithmetic https://bitbucket.org/petsc/petsc/src/8191f1e31285033beeebf70760bc9786361aefca/src/dm/impls/plex/plexpoint.c?at=master&fileviewer=file-view-default#plexpoint.c-105 I have benchmark code that runs these, and they should definitely take < 1e-7s, and maybe 10-100 times less. You can look at Plex test ex9 to see some of it. What is taking a lot of time? Thanks, Matt It costs a lot of times if such routines are involved. Is there any method one can use to reduce the time costs and improve the efficiency of the executable applications? Thanks leejearl -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From msdrezavand at gmail.com Mon Dec 5 10:49:31 2016 From: msdrezavand at gmail.com (Massoud Rezavand) Date: Mon, 5 Dec 2016 17:49:31 +0100 Subject: [petsc-users] MatSetValues in runtime Message-ID: Dear Petsc team, In order to create a parallel matrix and solve by KSP, is it possible to directly use MatSetValues() in runtime when each matrix entry is just created without MatMPIAIJSetPreallocation()? I mean, when you only know the global size of Mat, and the number of nonzeros per row is not constant neither for all rows nor during time, is it possible to set the singular entries into Mat one by one after creating each one? Thanks Massoud -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Mon Dec 5 10:57:17 2016 From: dave.mayhem23 at gmail.com (Dave May) Date: Mon, 5 Dec 2016 16:57:17 +0000 Subject: [petsc-users] MatSetValues in runtime In-Reply-To: References: Message-ID: On 5 December 2016 at 16:49, Massoud Rezavand wrote: > Dear Petsc team, > > In order to create a parallel matrix and solve by KSP, is it possible to > directly use MatSetValues() in runtime when each matrix entry is just > created without MatMPIAIJSetPreallocation()? > Yes, but performance will be terrible without specify any preallocation info. See this note http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatMPIAIJSetPreallocation.html For a code example of for how to do the assembly without preallocation (not recommended), you can refer to this (its the same pattern for MPIAIJ) http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex1.c.html > > I mean, when you only know the global size of Mat, and the number of > nonzeros per row is not constant neither for all rows nor during time, is > it possible to set the singular entries into Mat one by one after creating > each one? 
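For completeness, a sketch of the not-recommended pattern that the linked ex1.c illustrates, extended to a parallel matrix: no preallocation information is given, MatSetUp() supplies only a default, and any insertion that needs extra space triggers an allocation on the fly, which is where the poor performance comes from. The diagonal fill-in is just a placeholder for the application's real entries.

    #include <petscmat.h>

    /* Assembly without preallocation (the ex1.c-style pattern), kept only for
       reference -- expect it to be slow for any nontrivial sparsity. */
    PetscErrorCode AssembleWithoutPreallocation(MPI_Comm comm, PetscInt M, Mat *A)
    {
      PetscInt       rstart, rend, i;
      PetscScalar    v = 1.0;
      PetscErrorCode ierr;

      ierr = MatCreate(comm, A);CHKERRQ(ierr);
      ierr = MatSetSizes(*A, PETSC_DECIDE, PETSC_DECIDE, M, M);CHKERRQ(ierr);
      ierr = MatSetFromOptions(*A);CHKERRQ(ierr);
      ierr = MatSetUp(*A);CHKERRQ(ierr);                    /* default preallocation only */
      ierr = MatGetOwnershipRange(*A, &rstart, &rend);CHKERRQ(ierr);
      for (i = rstart; i < rend; ++i) {                     /* placeholder: diagonal entries */
        ierr = MatSetValues(*A, 1, &i, 1, &i, &v, INSERT_VALUES);CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(*A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(*A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      return 0;
    }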
> Why don't you just destroy the matrix and create a new one every time the non-zero structure changes? That's what I recommended last time. Thanks, Dave > Thanks > Massoud > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knram06 at gmail.com Mon Dec 5 14:47:19 2016 From: knram06 at gmail.com (K. N. Ramachandran) Date: Mon, 5 Dec 2016 15:47:19 -0500 Subject: [petsc-users] Hash Function for PETSc Sparse Matrices Message-ID: Hello PETSc-Users, I am working on an application where we capture the A matrix in a linear system Ax=b, which is solved using Petsc. Let us also say that the matrix A can change after a few iterations. We want to capture only the changed matrices and simply avoid the duplicate ones. I was considering using (or defining) a Set-like data structure that stores only the Mat objects which have changed entries. So a hash function that can operate on a sparse matrix would be pretty useful here. This seems like a common enough use case and I was wondering if anyone can give their inputs on defining a hash function that can operate on sparse matrices. One such link I had found online is: http://stackoverflow.com/questions/10638373/suitable-hash-function-for-matrix-sparsity-pattern Any thoughts or comments would be great here. Thanking You, Ramachandran -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Dec 5 20:14:34 2016 From: jed at jedbrown.org (Jed Brown) Date: Mon, 05 Dec 2016 19:14:34 -0700 Subject: [petsc-users] Hash Function for PETSc Sparse Matrices In-Reply-To: References: Message-ID: <874m2hvm5x.fsf@jedbrown.org> "K. N. Ramachandran" writes: > Hello PETSc-Users, > > I am working on an application where we capture the A matrix in a linear > system Ax=b, which is solved using Petsc. Let us also say that the matrix A > can change after a few iterations. We want to capture only the changed > matrices and simply avoid the duplicate ones. > > I was considering using (or defining) a Set-like data structure that stores > only the Mat objects which have changed entries. So a hash function that > can operate on a sparse matrix would be pretty useful here. You want to use the hash function to identify duplicate sparsity structures? Or what? > This seems like a common enough use case and I was wondering if anyone can > give their inputs on defining a hash function that can operate on sparse > matrices. One such link I had found online is: > http://stackoverflow.com/questions/10638373/suitable-hash-function-for-matrix-sparsity-pattern > > Any thoughts or comments would be great here. > > > Thanking You, > Ramachandran -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 800 bytes Desc: not available URL: From bsmith at mcs.anl.gov Mon Dec 5 20:26:22 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 5 Dec 2016 20:26:22 -0600 Subject: [petsc-users] Hash Function for PETSc Sparse Matrices In-Reply-To: References: Message-ID: > On Dec 5, 2016, at 2:47 PM, K. N. Ramachandran wrote: > > Hello PETSc-Users, > > I am working on an application where we capture the A matrix in a linear system Ax=b, which is solved using Petsc. Let us also say that the matrix A can change after a few iterations. We want to capture only the changed matrices and simply avoid the duplicate ones. 
> > I was considering using (or defining) a Set-like data structure that stores only the Mat objects which have changed entries. So a hash function that can operate on a sparse matrix would be pretty useful here. > > This seems like a common enough use case and I was wondering if anyone can give their inputs on defining a hash function that can operate on sparse matrices. One such link I had found online is: > http://stackoverflow.com/questions/10638373/suitable-hash-function-for-matrix-sparsity-pattern > > Any thoughts or comments would be great here. PETSc automatically tracks changes in the numerical values and sparsity pattern of its matrices. So, for example, if you call MatSetValues() and MatAssemblyBegin/End() then the KSP associated with the Mat "knows" the matrix has changed. Similarly if the sparsity pattern changes then preconditioners that depend on the sparsity pattern, such as ILU, take the change into account when they rebuild the preconditioner. You can inquire a PETSc matrix for its current state or nonzero pattern state and compare with saved previous state to see if it has changed us PetscObjectGetState() for numerical changes and MatGetNonzeroState() for nonzero structure changes. I don't know if this is related to your inquiry or not. Barry > > > Thanking You, > Ramachandran From friedmud at gmail.com Mon Dec 5 22:56:23 2016 From: friedmud at gmail.com (Derek Gaston) Date: Tue, 06 Dec 2016 04:56:23 +0000 Subject: [petsc-users] PETSc + Julia + MVAPICH2 = Segfault Message-ID: Please excuse the slightly off-topic post: but I'm pulling my hair out here and I'm hoping someone else has seen this before. I'm calling PETSc from Julia and it's working great on my Mac with MPICH but I'm seeing a segfault on my Linux cluster using MVAPICH2. I get the same segfault both with the "official" PETSc.jl and my own smaller wrapper MiniPETSc.jl: https://github.com/friedmud/MiniPETSc.jl Here is the stack trace I'm seeing: signal (11): Segmentation fault while loading /home/gastdr/projects/falcon/julia_mpi.jl, in expression starting on line 5 _int_malloc at /home/gastdr/projects/falcon/root/lib/libmpi.so.12 (unknown line) calloc at /home/gastdr/projects/falcon/root/lib/libmpi.so.12 (unknown line) PetscOptionsCreate at /home/gastdr/projects/falcon/petsc-3.7.3/src/sys/objects/options.c:2578 PetscInitialize at /home/gastdr/projects/falcon/petsc-3.7.3/src/sys/objects/pinit.c:761 PetscInitializeNoPointers at /home/gastdr/projects/falcon/petsc-3.7.3/src/sys/objects/pinit.c:111 __init__ at /home/gastdr/.julia/v0.5/MiniPETSc/src/MiniPETSc.jl:14 The script I'm running is simply just run.jl : using MiniPETSc It feels like libmpi is not quite loaded correctly yet. It does get loaded by MPI.jl here: https://github.com/JuliaParallel/MPI.jl/blob/master/src/MPI.jl#L29 and I've verified that that code is running before PETSc is being initialized. It looks ok to me... and I've tried a few variations on that dlopen() call and nothing makes it better. BTW: MPI.jl is working fine on its own. I can write pure MPI Julia apps and run them in parallel on the cluster. Just need to get this initialization of PETSc straightened out. Thanks for any help! Derek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Mon Dec 5 23:04:05 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 5 Dec 2016 23:04:05 -0600 Subject: [petsc-users] PETSc + Julia + MVAPICH2 = Segfault In-Reply-To: References: Message-ID: On Mon, Dec 5, 2016 at 10:56 PM, Derek Gaston wrote: > Please excuse the slightly off-topic post: but I'm pulling my hair out > here and I'm hoping someone else has seen this before. > > I'm calling PETSc from Julia and it's working great on my Mac with MPICH > but I'm seeing a segfault on my Linux cluster using MVAPICH2. I get the > same segfault both with the "official" PETSc.jl and my own smaller wrapper > MiniPETSc.jl: https://github.com/friedmud/MiniPETSc.jl > > Here is the stack trace I'm seeing: > > signal (11): Segmentation fault > while loading /home/gastdr/projects/falcon/julia_mpi.jl, in expression > starting on line 5 > _int_malloc at /home/gastdr/projects/falcon/root/lib/libmpi.so.12 > (unknown line) > calloc at /home/gastdr/projects/falcon/root/lib/libmpi.so.12 (unknown > line) > Why is MPI taking over calloc()? Matt > PetscOptionsCreate at /home/gastdr/projects/falcon/ > petsc-3.7.3/src/sys/objects/options.c:2578 > PetscInitialize at /home/gastdr/projects/falcon/ > petsc-3.7.3/src/sys/objects/pinit.c:761 > PetscInitializeNoPointers at /home/gastdr/projects/falcon/ > petsc-3.7.3/src/sys/objects/pinit.c:111 > __init__ at /home/gastdr/.julia/v0.5/MiniPETSc/src/MiniPETSc.jl:14 > > The script I'm running is simply just run.jl : > > using MiniPETSc > > It feels like libmpi is not quite loaded correctly yet. It does get > loaded by MPI.jl here: https://github.com/JuliaParallel/MPI.jl/blob/ > master/src/MPI.jl#L29 and I've verified that that code is running before > PETSc is being initialized. > > It looks ok to me... and I've tried a few variations on that dlopen() call > and nothing makes it better. > > BTW: MPI.jl is working fine on its own. I can write pure MPI Julia apps > and run them in parallel on the cluster. Just need to get this > initialization of PETSc straightened out. > > Thanks for any help! > > Derek > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From friedmud at gmail.com Tue Dec 6 00:43:30 2016 From: friedmud at gmail.com (Derek Gaston) Date: Tue, 06 Dec 2016 06:43:30 +0000 Subject: [petsc-users] PETSc + Julia + MVAPICH2 = Segfault In-Reply-To: References: Message-ID: Quick update for the list: Matt and I were emailing back and forth a bit and I at least have a workaround for now. It turns out that MVAPICH2 does include their own implementations of malloc/calloc . Matt believes (and I agree) that they should be private to the library though. It looks like something about the way Julia is loading the library is exposing those to PETSc. For now: I've worked around the issue by hardcore hacking the MVAPICH source to remove the definitions of malloc/calloc... and it DOES fix the "problem"... but, is definitely not the right answer. I'm going to talk to the Julia guys here at MIT tomorrow and see if I can get to the bottom of why those symbols are getting exposed when libmpi is getting loaded. Thanks Matt, for the help! Derek On Mon, Dec 5, 2016 at 11:56 PM Derek Gaston wrote: > Please excuse the slightly off-topic post: but I'm pulling my hair out > here and I'm hoping someone else has seen this before. 
> > I'm calling PETSc from Julia and it's working great on my Mac with MPICH > but I'm seeing a segfault on my Linux cluster using MVAPICH2. I get the > same segfault both with the "official" PETSc.jl and my own smaller wrapper > MiniPETSc.jl: https://github.com/friedmud/MiniPETSc.jl > > Here is the stack trace I'm seeing: > > signal (11): Segmentation fault > while loading /home/gastdr/projects/falcon/julia_mpi.jl, in expression > starting on line 5 > _int_malloc at /home/gastdr/projects/falcon/root/lib/libmpi.so.12 (unknown > line) > calloc at /home/gastdr/projects/falcon/root/lib/libmpi.so.12 (unknown line) > PetscOptionsCreate at > /home/gastdr/projects/falcon/petsc-3.7.3/src/sys/objects/options.c:2578 > PetscInitialize at > /home/gastdr/projects/falcon/petsc-3.7.3/src/sys/objects/pinit.c:761 > PetscInitializeNoPointers at > /home/gastdr/projects/falcon/petsc-3.7.3/src/sys/objects/pinit.c:111 > __init__ at /home/gastdr/.julia/v0.5/MiniPETSc/src/MiniPETSc.jl:14 > > The script I'm running is simply just run.jl : > > using MiniPETSc > > It feels like libmpi is not quite loaded correctly yet. It does get > loaded by MPI.jl here: > https://github.com/JuliaParallel/MPI.jl/blob/master/src/MPI.jl#L29 and > I've verified that that code is running before PETSc is being initialized. > > It looks ok to me... and I've tried a few variations on that dlopen() call > and nothing makes it better. > > BTW: MPI.jl is working fine on its own. I can write pure MPI Julia apps > and run them in parallel on the cluster. Just need to get this > initialization of PETSc straightened out. > > Thanks for any help! > > Derek > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knram06 at gmail.com Tue Dec 6 09:31:17 2016 From: knram06 at gmail.com (K. N. Ramachandran) Date: Tue, 6 Dec 2016 10:31:17 -0500 Subject: [petsc-users] Hash Function for PETSc Sparse Matrices In-Reply-To: References: Message-ID: Hello Jed, Barry, Thanks for the inputs. Yes, I am trying to spot duplicate sparse Matrices basically and avoid storing if possible. MatGetNonzeroState() seems useful here and I'll take a closer look. I couldn't find the PetscObjectGetState() though from the available API ( http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html). Was there another closely related function I should take a look at? Thanks, Ram On Mon, Dec 5, 2016 at 9:26 PM, Barry Smith wrote: > > > On Dec 5, 2016, at 2:47 PM, K. N. Ramachandran > wrote: > > > > Hello PETSc-Users, > > > > I am working on an application where we capture the A matrix in a linear > system Ax=b, which is solved using Petsc. Let us also say that the matrix A > can change after a few iterations. We want to capture only the changed > matrices and simply avoid the duplicate ones. > > > > I was considering using (or defining) a Set-like data structure that > stores only the Mat objects which have changed entries. So a hash function > that can operate on a sparse matrix would be pretty useful here. > > > > This seems like a common enough use case and I was wondering if anyone > can give their inputs on defining a hash function that can operate on > sparse matrices. One such link I had found online is: > > http://stackoverflow.com/questions/10638373/suitable- > hash-function-for-matrix-sparsity-pattern > > > > Any thoughts or comments would be great here. > > PETSc automatically tracks changes in the numerical values and sparsity > pattern of its matrices. 
So, for example, if you call MatSetValues() and > MatAssemblyBegin/End() then the KSP associated with the Mat "knows" the > matrix has changed. Similarly if the sparsity pattern changes then > preconditioners that depend on the sparsity pattern, such as ILU, take the > change into account when they rebuild the preconditioner. You can inquire a > PETSc matrix for its current state or nonzero pattern state and compare > with saved previous state to see if it has changed us PetscObjectGetState() > for numerical changes and MatGetNonzeroState() for nonzero structure > changes. > > I don't know if this is related to your inquiry or not. > > Barry > > > > > > > Thanking You, > > Ramachandran > > -- K.N.Ramachandran Ph: 814-441-4279 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 6 09:39:08 2016 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 6 Dec 2016 09:39:08 -0600 Subject: [petsc-users] Hash Function for PETSc Sparse Matrices In-Reply-To: References: Message-ID: On Tue, Dec 6, 2016 at 9:31 AM, K. N. Ramachandran wrote: > Hello Jed, Barry, > > Thanks for the inputs. Yes, I am trying to spot duplicate sparse Matrices > basically and avoid storing if possible. MatGetNonzeroState() seems useful > here and I'll take a closer look. > > I couldn't find the PetscObjectGetState() though from the available API ( > http://www.mcs.anl.gov/petsc/petsc-current/docs/ > manualpages/singleindex.html). Was there another closely related function > I should take a look at? > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscObjectStateGet.html Matt > Thanks, > Ram > > > > On Mon, Dec 5, 2016 at 9:26 PM, Barry Smith wrote: > >> >> > On Dec 5, 2016, at 2:47 PM, K. N. Ramachandran >> wrote: >> > >> > Hello PETSc-Users, >> > >> > I am working on an application where we capture the A matrix in a >> linear system Ax=b, which is solved using Petsc. Let us also say that the >> matrix A can change after a few iterations. We want to capture only the >> changed matrices and simply avoid the duplicate ones. >> > >> > I was considering using (or defining) a Set-like data structure that >> stores only the Mat objects which have changed entries. So a hash function >> that can operate on a sparse matrix would be pretty useful here. >> > >> > This seems like a common enough use case and I was wondering if anyone >> can give their inputs on defining a hash function that can operate on >> sparse matrices. One such link I had found online is: >> > http://stackoverflow.com/questions/10638373/suitable-hash- >> function-for-matrix-sparsity-pattern >> > >> > Any thoughts or comments would be great here. >> >> PETSc automatically tracks changes in the numerical values and sparsity >> pattern of its matrices. So, for example, if you call MatSetValues() and >> MatAssemblyBegin/End() then the KSP associated with the Mat "knows" the >> matrix has changed. Similarly if the sparsity pattern changes then >> preconditioners that depend on the sparsity pattern, such as ILU, take the >> change into account when they rebuild the preconditioner. You can inquire a >> PETSc matrix for its current state or nonzero pattern state and compare >> with saved previous state to see if it has changed us PetscObjectGetState() >> for numerical changes and MatGetNonzeroState() for nonzero structure >> changes. >> >> I don't know if this is related to your inquiry or not. 
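A small sketch of the state-based check being pointed to here, assuming the application keeps the last-seen counters in its own bookkeeping: the object state bumps on any value change, while the nonzero state changes only when the sparsity pattern does, so comparing against cached copies is enough to decide whether a matrix needs to be captured again.

    #include <petscmat.h>

    /* Counters the application would cache per matrix (names are illustrative). */
    typedef struct {
      PetscObjectState value_state;
      PetscObjectState nonzero_state;
    } MatSnapshot;

    /* Sets *changed to PETSC_TRUE (and refreshes the snapshot) if A changed
       since the snapshot was last taken. */
    PetscErrorCode MatChangedSinceSnapshot(Mat A, MatSnapshot *snap, PetscBool *changed)
    {
      PetscObjectState vstate, nzstate;
      PetscErrorCode   ierr;

      ierr = PetscObjectStateGet((PetscObject)A, &vstate);CHKERRQ(ierr);  /* any value change     */
      ierr = MatGetNonzeroState(A, &nzstate);CHKERRQ(ierr);               /* pattern changes only */
      *changed = (vstate != snap->value_state || nzstate != snap->nonzero_state) ? PETSC_TRUE : PETSC_FALSE;
      if (*changed) { snap->value_state = vstate; snap->nonzero_state = nzstate; }
      return 0;
    }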
>> >> Barry >> >> > >> > >> > Thanking You, >> > Ramachandran >> >> > > > -- > K.N.Ramachandran > Ph: 814-441-4279 <(814)%20441-4279> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Dec 6 09:42:55 2016 From: jed at jedbrown.org (Jed Brown) Date: Tue, 06 Dec 2016 08:42:55 -0700 Subject: [petsc-users] Hash Function for PETSc Sparse Matrices In-Reply-To: References: Message-ID: <87vauxt668.fsf@jedbrown.org> "K. N. Ramachandran" writes: > Hello Jed, Barry, > > Thanks for the inputs. Yes, I am trying to spot duplicate sparse Matrices > basically and avoid storing if possible. MatGetNonzeroState() seems useful > here and I'll take a closer look. > > I couldn't find the PetscObjectGetState() though from the available API ( > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html). > Was there another closely related function I should take a look at? http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscObjectStateGet.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 800 bytes Desc: not available URL: From knram06 at gmail.com Tue Dec 6 09:46:28 2016 From: knram06 at gmail.com (K. N. Ramachandran) Date: Tue, 6 Dec 2016 10:46:28 -0500 Subject: [petsc-users] Hash Function for PETSc Sparse Matrices In-Reply-To: <87vauxt668.fsf@jedbrown.org> References: <87vauxt668.fsf@jedbrown.org> Message-ID: Ah ok. Thanks. :) On Tue, Dec 6, 2016 at 10:42 AM, Jed Brown wrote: > "K. N. Ramachandran" writes: > > > Hello Jed, Barry, > > > > Thanks for the inputs. Yes, I am trying to spot duplicate sparse Matrices > > basically and avoid storing if possible. MatGetNonzeroState() seems > useful > > here and I'll take a closer look. > > > > I couldn't find the PetscObjectGetState() though from the available API ( > > http://www.mcs.anl.gov/petsc/petsc-current/docs/ > manualpages/singleindex.html). > > Was there another closely related function I should take a look at? > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/ > PetscObjectStateGet.html > -- K.N.Ramachandran Ph: 814-441-4279 -------------- next part -------------- An HTML attachment was scrubbed... URL: From leidy-catherine.ramirez-villalba at ec-nantes.fr Tue Dec 6 11:25:23 2016 From: leidy-catherine.ramirez-villalba at ec-nantes.fr (Leidy Catherine Ramirez Villalba) Date: Tue, 6 Dec 2016 18:25:23 +0100 (CET) Subject: [petsc-users] Way to remove zero entries from an assembled matrix Message-ID: <2067533533.816969.1481045123386.JavaMail.zimbra@ec-nantes.fr> Hello PETSc team: I'm doing the parallelization of the assembling of a system, previously assembled in a serial way (manual), but solved using PETSc in parallel. Therefore I have the old assembled matrix to compare with the one assembled with PETSc. While doing the assembling of the matrix, I avoid the zero entries using the option 'MAT_IGNORE_ZERO_ENTRIES' in MatSetOption, however the final matrix has zero values (or almost) due to the addition of non zero elements. 
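For reference, the option referred to above is set before assembly; a minimal C sketch (the matrix name A is illustrative):

ierr = MatSetOption(A,MAT_IGNORE_ZERO_ENTRIES,PETSC_TRUE);CHKERRQ(ierr); /* values passed as exactly 0.0 do not create entries */

Note that this only filters values that are exactly zero at the moment MatSetValues() is called; several nonzero contributions accumulated with ADD_VALUES into the same location still allocate the entry even if their sum ends up (nearly) zero, which is what produces the entries shown in the example below.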
Below the example of added values: 1664 i 1663 j -165509.423650377 1664 i 1663 j 165509.423650377 1664 i 1663 j -165509.423650377 1664 i 1663 j 165509.423650377 Due to this difference between the two matrices I'm not able to get the solution for the matrix assembled with petsc. Therefore I wonder if there is a way to remove zero entries from an assembled matrix, or which solver to use so that zeros will not be a problem. Thanks in advance, Catherine -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Dec 6 11:45:52 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 6 Dec 2016 11:45:52 -0600 Subject: [petsc-users] Way to remove zero entries from an assembled matrix In-Reply-To: <2067533533.816969.1481045123386.JavaMail.zimbra@ec-nantes.fr> References: <2067533533.816969.1481045123386.JavaMail.zimbra@ec-nantes.fr> Message-ID: > On Dec 6, 2016, at 11:25 AM, Leidy Catherine Ramirez Villalba wrote: > > Hello PETSc team: > > I'm doing the parallelization of the assembling of a system, previously assembled in a serial way (manual), but solved using PETSc in parallel. > Therefore I have the old assembled matrix to compare with the one assembled with PETSc. > > While doing the assembling of the matrix, I avoid the zero entries using the option 'MAT_IGNORE_ZERO_ENTRIES' in MatSetOption, however the final matrix has zero values (or almost) due to the addition of non zero elements. > Below the example of added values: > > 1664 i 1663 j -165509.423650377 > 1664 i 1663 j 165509.423650377 > 1664 i 1663 j -165509.423650377 > 1664 i 1663 j 165509.423650377 > > Due to this difference between the two matrices I'm not able to get the solution for the matrix assembled with petsc. Catherine, The "extra" zeros should not prevent getting the solution to the system. The computed solution may be a little different but if you use an iterative method with smaller and smaller tolerances the solutions should converge to the same value. With direct solvers the solutions should be very similar. Could you please provide more details about not getting the solution for the parallel system? Send output of -ksp_monitor_true_residual for example Barry > > Therefore I wonder if there is a way to remove zero entries from an assembled matrix, or which solver to use so that zeros will not be a problem. > > Thanks in advance, > Catherine > > > > From hengjiew at uci.edu Tue Dec 6 16:22:58 2016 From: hengjiew at uci.edu (frank) Date: Tue, 6 Dec 2016 14:22:58 -0800 Subject: [petsc-users] Question about Set-up of Full MG and its Output Message-ID: Dear all, I am trying to use full MG to solve a 2D Poisson equation. I want to set full MG as the solver and SOR as the smoother. Is the following setup the proper way to do it? -ksp_type richardson -pc_type mg -pc_mg_type full -mg_levels_ksp_type richardson -mg_levels_pc_type sor The ksp_view shows the levels from the coarsest mesh to finest mesh in a linear order. I was expecting sth like: coarsest -> level1 -> coarsest -> level1 -> level2 -> level1 -> coarsest -> ... Is there a way to show exactly how the full MG proceeds? Also in the above example, I want to know what interpolation or prolongation method is used from level1 to level2. Can I get that info by adding some options? (not using PCMGGetInterpolation) I attached the ksp_view info and my petsc options file. Thank you. Frank -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ksp.log Type: text/x-log Size: 7467 bytes Desc: not available URL: -------------- next part -------------- -ksp_type richardson -ksp_norm_type unpreconditioned -ksp_rtol 1e-7 -options_left -ksp_initial_guess_nonzero yes -ksp_converged_reason -ksp_view -pc_type mg -pc_mg_type full -pc_mg_galerkin -pc_mg_levels 6 -mg_levels_ksp_type richardson -mg_levels_pc_type sor -mg_levels_ksp_max_it 1 -mg_coarse_ksp_type preonly -mg_coarse_pc_type lu -mg_coarse_pc_factor_mat_solver_package superlu_dist From jed at jedbrown.org Tue Dec 6 16:31:48 2016 From: jed at jedbrown.org (Jed Brown) Date: Tue, 06 Dec 2016 15:31:48 -0700 Subject: [petsc-users] Question about Set-up of Full MG and its Output In-Reply-To: References: Message-ID: <87pol4u1t7.fsf@jedbrown.org> frank writes: > Dear all, > > I am trying to use full MG to solve a 2D Poisson equation. > > I want to set full MG as the solver and SOR as the smoother. Is the > following setup the proper way to do it? > -ksp_type richardson > -pc_type mg > -pc_mg_type full > -mg_levels_ksp_type richardson > -mg_levels_pc_type sor > > The ksp_view shows the levels from the coarsest mesh to finest mesh in a > linear order. It is showing the solver configuration, not a trace of the cycle. > I was expecting sth like: coarsest -> level1 -> coarsest -> level1 -> > level2 -> level1 -> coarsest -> ... > Is there a way to show exactly how the full MG proceeds? You could get a trace like this from -mg_coarse_ksp_converged_reason -mg_levels_ksp_converged_reason If you want to deliminate the iterations, you could add -ksp_monitor. > Also in the above example, I want to know what interpolation or > prolongation method is used from level1 to level2. > Can I get that info by adding some options? (not using PCMGGetInterpolation) > > I attached the ksp_view info and my petsc options file. > Thank you. > > Frank > Linear solve converged due to CONVERGED_RTOL iterations 3 > KSP Object: 1 MPI processes > type: richardson > Richardson: damping factor=1. > maximum iterations=10000 > tolerances: relative=1e-07, absolute=1e-50, divergence=10000. > left preconditioning > using nonzero initial guess > using UNPRECONDITIONED norm type for convergence test > PC Object: 1 MPI processes > type: mg > MG: type is FULL, levels=6 cycles=v > Using Galerkin computed coarse grid matrices > Coarse grid solver -- level ------------------------------- > KSP Object: (mg_coarse_) 1 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using NONE norm type for convergence test > PC Object: (mg_coarse_) 1 MPI processes > type: lu > out-of-place factorization > tolerance for zero pivot 2.22045e-14 > using diagonal shift on blocks to prevent zero pivot [INBLOCKS] > matrix ordering: nd > factor fill ratio given 0., needed 0. 
> Factored matrix follows: > Mat Object: 1 MPI processes > type: superlu_dist > rows=64, cols=64 > package used to perform factorization: superlu_dist > total: nonzeros=0, allocated nonzeros=0 > total number of mallocs used during MatSetValues calls =0 > SuperLU_DIST run parameters: > Process grid nprow 1 x npcol 1 > Equilibrate matrix TRUE > Matrix input mode 0 > Replace tiny pivots FALSE > Use iterative refinement FALSE > Processors in row 1 col partition 1 > Row permutation LargeDiag > Column permutation METIS_AT_PLUS_A > Parallel symbolic factorization FALSE > Repeated factorization SamePattern > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=64, cols=64 > total: nonzeros=576, allocated nonzeros=576 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > Down solver (pre-smoother) on level 1 ------------------------------- > KSP Object: (mg_levels_1_) 1 MPI processes > type: richardson > Richardson: damping factor=1. > maximum iterations=1 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_1_) 1 MPI processes > type: sor > SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=256, cols=256 > total: nonzeros=2304, allocated nonzeros=2304 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > Up solver (post-smoother) same as down solver (pre-smoother) > Down solver (pre-smoother) on level 2 ------------------------------- > KSP Object: (mg_levels_2_) 1 MPI processes > type: richardson > Richardson: damping factor=1. > maximum iterations=1 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_2_) 1 MPI processes > type: sor > SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=1024, cols=1024 > total: nonzeros=9216, allocated nonzeros=9216 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > Up solver (post-smoother) same as down solver (pre-smoother) > Down solver (pre-smoother) on level 3 ------------------------------- > KSP Object: (mg_levels_3_) 1 MPI processes > type: richardson > Richardson: damping factor=1. > maximum iterations=1 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_3_) 1 MPI processes > type: sor > SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=4096, cols=4096 > total: nonzeros=36864, allocated nonzeros=36864 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > Up solver (post-smoother) same as down solver (pre-smoother) > Down solver (pre-smoother) on level 4 ------------------------------- > KSP Object: (mg_levels_4_) 1 MPI processes > type: richardson > Richardson: damping factor=1. > maximum iterations=1 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
> left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_4_) 1 MPI processes > type: sor > SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=16384, cols=16384 > total: nonzeros=147456, allocated nonzeros=147456 > total number of mallocs used during MatSetValues calls =0 > not using I-node routines > Up solver (post-smoother) same as down solver (pre-smoother) > Down solver (pre-smoother) on level 5 ------------------------------- > KSP Object: (mg_levels_5_) 1 MPI processes > type: richardson > Richardson: damping factor=1. > maximum iterations=1 > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using nonzero initial guess > using NONE norm type for convergence test > PC Object: (mg_levels_5_) 1 MPI processes > type: sor > SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=65536, cols=65536 > total: nonzeros=327680, allocated nonzeros=327680 > total number of mallocs used during MatSetValues calls =0 > has attached null space > not using I-node routines > Up solver (post-smoother) same as down solver (pre-smoother) > linear system matrix = precond matrix: > Mat Object: 1 MPI processes > type: seqaij > rows=65536, cols=65536 > total: nonzeros=327680, allocated nonzeros=327680 > total number of mallocs used during MatSetValues calls =0 > has attached null space > not using I-node routines > #PETSc Option Table entries: > -ksp_converged_reason > -ksp_initial_guess_nonzero yes > -ksp_norm_type unpreconditioned > -ksp_rtol 1e-7 > -ksp_type richardson > -ksp_view > -mg_coarse_ksp_type preonly > -mg_coarse_pc_factor_mat_solver_package superlu_dist > -mg_coarse_pc_type lu > -mg_levels_ksp_max_it 1 > -mg_levels_ksp_type richardson > -mg_levels_pc_type sor > -N 256 > -options_left > -pc_mg_galerkin > -pc_mg_levels 6 > -pc_mg_type full > -pc_type mg > -px 1 > -py 1 > #End of PETSc Option Table entries > There are no unused options. > -ksp_type richardson > -ksp_norm_type unpreconditioned > -ksp_rtol 1e-7 > -options_left > -ksp_initial_guess_nonzero yes > -ksp_converged_reason > -ksp_view > -pc_type mg > -pc_mg_type full > -pc_mg_galerkin > -pc_mg_levels 6 > -mg_levels_ksp_type richardson > -mg_levels_pc_type sor > -mg_levels_ksp_max_it 1 > -mg_coarse_ksp_type preonly > -mg_coarse_pc_type lu > -mg_coarse_pc_factor_mat_solver_package superlu_dist -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 800 bytes Desc: not available URL: From leidy-catherine.ramirez-villalba at ec-nantes.fr Wed Dec 7 02:49:09 2016 From: leidy-catherine.ramirez-villalba at ec-nantes.fr (Leidy Catherine Ramirez Villalba) Date: Wed, 7 Dec 2016 09:49:09 +0100 (CET) Subject: [petsc-users] Way to remove zero entries from an assembled matrix In-Reply-To: References: <2067533533.816969.1481045123386.JavaMail.zimbra@ec-nantes.fr> Message-ID: <784345704.957201.1481100549953.JavaMail.zimbra@ec-nantes.fr> Hi Barry, Thanks for your reply! I must say that I still do not master still the different solvers and options, so my problem might be due to a wrong formulation for the solver, or the final state of the matrix is not good, even if I verify that is assembled and the exit looks quite ok. 
Following the whole thing: After assembling, manually and using petsc, I compare the matrices using a ASCII file (PETSC_VIEWER_ASCII_MATLAB option), and there i see no difference but the zero in some rows 1664 560 -1.6995706110114395e+06 | 1664 560 -1.6995706110114395e+06 1664 1660 -1.7462298274040222e-10 | 1664 1660 -1.7462298274040222e-10 1664 1661 6.6919512730949128e+05 | 1664 1661 6.6919512730949128e+05 1664 1663 0.0000000000000000e+00 | 1664 1664 5.4708211855417928e+06 1664 1664 5.4707211855417928e+06 | ------------------------------------------------------------------------------------------------------- 1664 4600 -1.7462298274040222e-10 | 1664 4600 -1.7462298274040222e-10 then, I solve both systems using MUMPS solve, with same rhs vector and finally I compare the output vector Ur. Below how I solve the system: ! ---------------------------------------SOLVING 1: manual matrix: mpiA ---------------------------------------------------------------- call KSPCreate(PETSC_COMM_WORLD,kspSolver,petscIerr) call KSPSetOperators(kspSolver,mpiA,mpiA,petscIerr) call KSPSetType(kspSolver, KSPPREONLY, petscIerr) call KSPSetFromOptions(kspSolver,petscIerr) call KSPGetPC(kspSolver,precond,petscIerr) call PCSetType(precond,PCLU,petscIerr) call PCFactorSetMatSolverPackage(precond,MATSOLVERMUMPS,petscIerr) call PCFactorSetUpMatSolverPackage(precond,petscIerr) call PCSetFromOptions(precond,petscIerr) relativeTol=1.e-15 absoluteTol=1.e-15 divergenceTol=1.e4 maxIter=200 call KSPSetTolerances(kspSolver,relativeTol,absoluteTol,divergenceTol,maxIter,petscIerr) call KSPSolve(kspSolver,mpiVecRhs,mpiVecLocSolution,petscIerr) call KSPGetSolution(kspSolver,mpiVecLocSolution,petscIerr) call VecScatterCreateToAll(mpiVecLocSolution,vecScatterEnv,mpiVecGlobSolution,petscIerr) call VecScatterBegin(vecScatterEnv,mpiVecLocSolution,mpiVecGlobSolution,INSERT_VALUES,SCATTER_FORWARD,petscIerr) call VecScatterEnd(vecScatterEnv,mpiVecLocSolution,mpiVecGlobSolution,INSERT_VALUES,SCATTER_FORWARD,petscIerr) call VecGetArrayF90(mpiVecGlobSolution,pVecSolution,petscIerr) Ur(1:ddl) = pVecSolution(:) call VecRestoreArrayF90(mpiVecLocSolution,pVecSolution,petscIerr) ! ------------------------------------------------------------------------------------------------------- For the assembled matrix I have tried the same sequence, but: 1. changing the system and preconditioner specified matrix for the solver: call KSPSetOperators(kspSolver,mpiMassMatrix,mpiMassMatrix,petscIerr) There, I'm getting infinite for the solution for the petsc assembled matrix and the proper solution for the manual assembled one. %Vec Object:mpiVecLocSolution 1 MPI processes | %Vec Object:mpiVecLocSolution 1 MPI processes % type: seq | % type: seq mpiVecLocSolution = [ | mpiVecLocSolution = [ inf | -2.2389524508816294e-20 inf | -1.5265035169220699e-20 inf | 0.0000000000000000e+00 2. I thought it was due to the extra zeros, then I tried no passing the same matrix for the preconditioner argument. call KSPSetOperators(kspSolver,mpiMassMatrix,PETSC_NULL_OBJECT,petscIerr) There I have the errors listed below and a solution vector full of zeros for the petsc assembled matrix. 
[0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Object is in wrong state [0]PETSC ERROR: You can only call this routine after the matrix object has been provided to the solver, for example with KSPSetOperators() or SNESSetJacobian() [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.7.4, unknown [0]PETSC ERROR: ./aravanti on a arch-linux2-c-debug named localhost.localdomain by catherine Wed Dec 7 09:36:39 2016 [0]PETSC ERROR: Configure options --download-mumps --with-mpi-dir=/home/catherine/local/intel2016/ --download-scalapack --download-parmetis --download-metis --download-fblaslapack [0]PETSC ERROR: #1 PCFactorSetUpMatSolverPackage_Factor() line 15 in /home/catherine/petsc/src/ksp/pc/impls/factor/factimpl.c [0]PETSC ERROR: #2 PCFactorSetUpMatSolverPackage() line 26 in /home/catherine/petsc/src/ksp/pc/impls/factor/factor.c solving the system [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Object is in wrong state [0]PETSC ERROR: Not for unassembled matrix [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.7.4, unknown [0]PETSC ERROR: ./aravanti on a arch-linux2-c-debug named localhost.localdomain by catherine Wed Dec 7 09:36:39 2016 [0]PETSC ERROR: Configure options --download-mumps --with-mpi-dir=/home/catherine/local/intel2016/ --download-scalapack --download-parmetis --download-metis --download-fblaslapack [0]PETSC ERROR: #3 MatGetOrdering() line 189 in /home/catherine/petsc/src/mat/order/sorder.c [0]PETSC ERROR: #4 PCSetUp_LU() line 125 in /home/catherine/petsc/src/ksp/pc/impls/factor/lu/lu.c [0]PETSC ERROR: #5 PCSetUp() line 968 in /home/catherine/petsc/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #6 KSPSetUp() line 390 in /home/catherine/petsc/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #7 KSPSolve() line 599 in /home/catherine/petsc/src/ksp/ksp/interface/itfunc.c 3. Or simply the default option (direct solver LU if i'm not mistaken) call KSPCreate(PETSC_COMM_WORLD,kspSolver,petscIerr) call KSPSetOperators(kspSolver,mpiMassMatrix,mpiMassMatrix,petscIerr) call KSPSolve(kspSolver,mpiVecRhs,mpiVecLocSolution,petscIerr) and then again I have an error and an exit of zeros, and a closer answer for the matrix assembled manually. [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Object is in wrong state [0]PETSC ERROR: Matrix is missing diagonal entry 5955 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.7.4, unknown [0]PETSC ERROR: ./aravanti on a arch-linux2-c-debug named localhost.localdomain by catherine Wed Dec 7 09:26:29 2016 [0]PETSC ERROR: Configure options --download-mumps --with-mpi-dir=/home/catherine/local/intel2016/ --download-scalapack --download-parmetis --download-metis --download-fblaslapack [0]PETSC ERROR: #1 MatILUFactorSymbolic_SeqAIJ() line 1733 in /home/catherine/petsc/src/mat/impls/aij/seq/aijfact.c [0]PETSC ERROR: #2 MatILUFactorSymbolic() line 6579 in /home/catherine/petsc/src/mat/interface/matrix.c [0]PETSC ERROR: #3 PCSetUp_ILU() line 213 in /home/catherine/petsc/src/ksp/pc/impls/factor/ilu/ilu.c [0]PETSC ERROR: #4 PCSetUp() line 968 in /home/catherine/petsc/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #5 KSPSetUp() line 390 in /home/catherine/petsc/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #6 KSPSolve() line 599 in /home/catherine/petsc/src/ksp/ksp/interface/itfunc.c I hope information is clear enough. Thanks again, Catherine ----- Mail original ----- De: "Barry Smith" ?: "Leidy Catherine Ramirez Villalba" Cc: petsc-users at mcs.anl.gov Envoy?: Mardi 6 D?cembre 2016 18:45:52 Objet: Re: [petsc-users] Way to remove zero entries from an assembled matrix > On Dec 6, 2016, at 11:25 AM, Leidy Catherine Ramirez Villalba wrote: > > Hello PETSc team: > > I'm doing the parallelization of the assembling of a system, previously assembled in a serial way (manual), but solved using PETSc in parallel. > Therefore I have the old assembled matrix to compare with the one assembled with PETSc. > > While doing the assembling of the matrix, I avoid the zero entries using the option 'MAT_IGNORE_ZERO_ENTRIES' in MatSetOption, however the final matrix has zero values (or almost) due to the addition of non zero elements. > Below the example of added values: > > 1664 i 1663 j -165509.423650377 > 1664 i 1663 j 165509.423650377 > 1664 i 1663 j -165509.423650377 > 1664 i 1663 j 165509.423650377 > > Due to this difference between the two matrices I'm not able to get the solution for the matrix assembled with petsc. Catherine, The "extra" zeros should not prevent getting the solution to the system. The computed solution may be a little different but if you use an iterative method with smaller and smaller tolerances the solutions should converge to the same value. With direct solvers the solutions should be very similar. Could you please provide more details about not getting the solution for the parallel system? Send output of -ksp_monitor_true_residual for example Barry > > Therefore I wonder if there is a way to remove zero entries from an assembled matrix, or which solver to use so that zeros will not be a problem. > > Thanks in advance, > Catherine > > > > From niko.karin at gmail.com Wed Dec 7 07:06:19 2016 From: niko.karin at gmail.com (Karin&NiKo) Date: Wed, 7 Dec 2016 14:06:19 +0100 Subject: [petsc-users] FieldSplit, multigrid and blocksize Message-ID: Dear PETSc gurus, I am using FieldSplit to solve a poro-mechanics problem. Thus, I am dealing with 3 displacement DOF and 1 pressure DOF. In order to precondition the 00 block (aka the displacement block), I am using a multigrid method (ml or gamg). Nevertheless, I have the feeling that the multigrids performance is much lower than in the case where they are used on pure displacement problems (say elasticity). Indeed, I do not know how to set the block size of the 00 block when using FieldSplit! Could you please give me some hint on that? 
(the phrase "The fieldsplit preconditioner cannot currently be used with the BAIJ or SBAIJ data formats if the blocksize is larger than 1." is not clear enough for me...). Thanks in advance, Nicolas -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Dec 7 07:08:05 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 7 Dec 2016 07:08:05 -0600 Subject: [petsc-users] Way to remove zero entries from an assembled matrix In-Reply-To: <784345704.957201.1481100549953.JavaMail.zimbra@ec-nantes.fr> References: <2067533533.816969.1481045123386.JavaMail.zimbra@ec-nantes.fr> <784345704.957201.1481100549953.JavaMail.zimbra@ec-nantes.fr> Message-ID: <58D954B5-A9DA-4674-B865-6AE7181E965E@mcs.anl.gov> I don't think it is the zero entries. Please run both the first two version below with the option -ksp_view_mat binary -ksp_view_rhs binary and in each case email to petsc-maint at mcs.anl.gov the resulting file called binaryoutput Barry > On Dec 7, 2016, at 2:49 AM, Leidy Catherine Ramirez Villalba wrote: > > Hi Barry, > > Thanks for your reply! > > I must say that I still do not master still the different solvers and options, so my problem might be due to a wrong formulation for the solver, or the final state of the matrix is not good, even if I verify that is assembled and the exit looks quite ok. Following the whole thing: > > > After assembling, manually and using petsc, I compare the matrices using a ASCII file (PETSC_VIEWER_ASCII_MATLAB option), and there i see no difference but the zero in some rows > > 1664 560 -1.6995706110114395e+06 | 1664 560 -1.6995706110114395e+06 > 1664 1660 -1.7462298274040222e-10 | 1664 1660 -1.7462298274040222e-10 > 1664 1661 6.6919512730949128e+05 | 1664 1661 6.6919512730949128e+05 > 1664 1663 0.0000000000000000e+00 | 1664 1664 5.4708211855417928e+06 > 1664 1664 5.4707211855417928e+06 | ------------------------------------------------------------------------------------------------------- > 1664 4600 -1.7462298274040222e-10 | 1664 4600 -1.7462298274040222e-10 > > > then, I solve both systems using MUMPS solve, with same rhs vector and finally I compare the output vector Ur. > > Below how I solve the system: > > ! 
---------------------------------------SOLVING 1: manual matrix: mpiA ---------------------------------------------------------------- > call KSPCreate(PETSC_COMM_WORLD,kspSolver,petscIerr) > call KSPSetOperators(kspSolver,mpiA,mpiA,petscIerr) > > call KSPSetType(kspSolver, KSPPREONLY, petscIerr) > call KSPSetFromOptions(kspSolver,petscIerr) > > call KSPGetPC(kspSolver,precond,petscIerr) > call PCSetType(precond,PCLU,petscIerr) > call PCFactorSetMatSolverPackage(precond,MATSOLVERMUMPS,petscIerr) > call PCFactorSetUpMatSolverPackage(precond,petscIerr) > > call PCSetFromOptions(precond,petscIerr) > > relativeTol=1.e-15 > absoluteTol=1.e-15 > divergenceTol=1.e4 > maxIter=200 > call KSPSetTolerances(kspSolver,relativeTol,absoluteTol,divergenceTol,maxIter,petscIerr) > > call KSPSolve(kspSolver,mpiVecRhs,mpiVecLocSolution,petscIerr) > > call KSPGetSolution(kspSolver,mpiVecLocSolution,petscIerr) > > call VecScatterCreateToAll(mpiVecLocSolution,vecScatterEnv,mpiVecGlobSolution,petscIerr) > call VecScatterBegin(vecScatterEnv,mpiVecLocSolution,mpiVecGlobSolution,INSERT_VALUES,SCATTER_FORWARD,petscIerr) > call VecScatterEnd(vecScatterEnv,mpiVecLocSolution,mpiVecGlobSolution,INSERT_VALUES,SCATTER_FORWARD,petscIerr) > > call VecGetArrayF90(mpiVecGlobSolution,pVecSolution,petscIerr) > Ur(1:ddl) = pVecSolution(:) > call VecRestoreArrayF90(mpiVecLocSolution,pVecSolution,petscIerr) > ! ------------------------------------------------------------------------------------------------------- > > > For the assembled matrix I have tried the same sequence, but: > > 1. changing the system and preconditioner specified matrix for the solver: > > call KSPSetOperators(kspSolver,mpiMassMatrix,mpiMassMatrix,petscIerr) > > There, I'm getting infinite for the solution for the petsc assembled matrix and the proper solution for the manual assembled one. > > %Vec Object:mpiVecLocSolution 1 MPI processes | %Vec Object:mpiVecLocSolution 1 MPI processes > % type: seq | % type: seq > mpiVecLocSolution = [ | mpiVecLocSolution = [ > inf | -2.2389524508816294e-20 > inf | -1.5265035169220699e-20 > inf | 0.0000000000000000e+00 > > > > 2. I thought it was due to the extra zeros, then I tried no passing the same matrix for the preconditioner argument. > > call KSPSetOperators(kspSolver,mpiMassMatrix,PETSC_NULL_OBJECT,petscIerr) > > There I have the errors listed below and a solution vector full of zeros for the petsc assembled matrix. > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Object is in wrong state > [0]PETSC ERROR: You can only call this routine after the matrix object has been provided to the solver, for example with KSPSetOperators() or SNESSetJacobian() > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.7.4, unknown > [0]PETSC ERROR: ./aravanti on a arch-linux2-c-debug named localhost.localdomain by catherine Wed Dec 7 09:36:39 2016 > [0]PETSC ERROR: Configure options --download-mumps --with-mpi-dir=/home/catherine/local/intel2016/ --download-scalapack --download-parmetis --download-metis --download-fblaslapack > [0]PETSC ERROR: #1 PCFactorSetUpMatSolverPackage_Factor() line 15 in /home/catherine/petsc/src/ksp/pc/impls/factor/factimpl.c > [0]PETSC ERROR: #2 PCFactorSetUpMatSolverPackage() line 26 in /home/catherine/petsc/src/ksp/pc/impls/factor/factor.c > solving the system > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Object is in wrong state > [0]PETSC ERROR: Not for unassembled matrix > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.4, unknown > [0]PETSC ERROR: ./aravanti on a arch-linux2-c-debug named localhost.localdomain by catherine Wed Dec 7 09:36:39 2016 > [0]PETSC ERROR: Configure options --download-mumps --with-mpi-dir=/home/catherine/local/intel2016/ --download-scalapack --download-parmetis --download-metis --download-fblaslapack > [0]PETSC ERROR: #3 MatGetOrdering() line 189 in /home/catherine/petsc/src/mat/order/sorder.c > [0]PETSC ERROR: #4 PCSetUp_LU() line 125 in /home/catherine/petsc/src/ksp/pc/impls/factor/lu/lu.c > [0]PETSC ERROR: #5 PCSetUp() line 968 in /home/catherine/petsc/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #6 KSPSetUp() line 390 in /home/catherine/petsc/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #7 KSPSolve() line 599 in /home/catherine/petsc/src/ksp/ksp/interface/itfunc.c > > > > 3. Or simply the default option (direct solver LU if i'm not mistaken) > call KSPCreate(PETSC_COMM_WORLD,kspSolver,petscIerr) > call KSPSetOperators(kspSolver,mpiMassMatrix,mpiMassMatrix,petscIerr) > call KSPSolve(kspSolver,mpiVecRhs,mpiVecLocSolution,petscIerr) > > and then again I have an error and an exit of zeros, and a closer answer for the matrix assembled manually. > > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Object is in wrong state > [0]PETSC ERROR: Matrix is missing diagonal entry 5955 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.7.4, unknown > [0]PETSC ERROR: ./aravanti on a arch-linux2-c-debug named localhost.localdomain by catherine Wed Dec 7 09:26:29 2016 > [0]PETSC ERROR: Configure options --download-mumps --with-mpi-dir=/home/catherine/local/intel2016/ --download-scalapack --download-parmetis --download-metis --download-fblaslapack > [0]PETSC ERROR: #1 MatILUFactorSymbolic_SeqAIJ() line 1733 in /home/catherine/petsc/src/mat/impls/aij/seq/aijfact.c > [0]PETSC ERROR: #2 MatILUFactorSymbolic() line 6579 in /home/catherine/petsc/src/mat/interface/matrix.c > [0]PETSC ERROR: #3 PCSetUp_ILU() line 213 in /home/catherine/petsc/src/ksp/pc/impls/factor/ilu/ilu.c > [0]PETSC ERROR: #4 PCSetUp() line 968 in /home/catherine/petsc/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #5 KSPSetUp() line 390 in /home/catherine/petsc/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #6 KSPSolve() line 599 in /home/catherine/petsc/src/ksp/ksp/interface/itfunc.c > > > > I hope information is clear enough. 
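As an alternative to the command-line options suggested above (-ksp_view_mat binary -ksp_view_rhs binary), the same binary file can be written directly from code; a minimal C sketch with illustrative names (A and b stand for the matrix and right-hand side passed to the solver):

#include <petscksp.h>

static PetscErrorCode DumpSystem(Mat A,Vec b)
{
  PetscViewer    viewer;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"binaryoutput",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
  ierr = MatView(A,viewer);CHKERRQ(ierr);  /* matrix given to KSPSetOperators() */
  ierr = VecView(b,viewer);CHKERRQ(ierr);  /* right-hand side given to KSPSolve() */
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}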
> > Thanks again, > Catherine > > > > ----- Mail original ----- > De: "Barry Smith" > ?: "Leidy Catherine Ramirez Villalba" > Cc: petsc-users at mcs.anl.gov > Envoy?: Mardi 6 D?cembre 2016 18:45:52 > Objet: Re: [petsc-users] Way to remove zero entries from an assembled matrix > >> On Dec 6, 2016, at 11:25 AM, Leidy Catherine Ramirez Villalba wrote: >> >> Hello PETSc team: >> >> I'm doing the parallelization of the assembling of a system, previously assembled in a serial way (manual), but solved using PETSc in parallel. >> Therefore I have the old assembled matrix to compare with the one assembled with PETSc. >> >> While doing the assembling of the matrix, I avoid the zero entries using the option 'MAT_IGNORE_ZERO_ENTRIES' in MatSetOption, however the final matrix has zero values (or almost) due to the addition of non zero elements. >> Below the example of added values: >> >> 1664 i 1663 j -165509.423650377 >> 1664 i 1663 j 165509.423650377 >> 1664 i 1663 j -165509.423650377 >> 1664 i 1663 j 165509.423650377 >> >> Due to this difference between the two matrices I'm not able to get the solution for the matrix assembled with petsc. > > Catherine, > > The "extra" zeros should not prevent getting the solution to the system. The computed solution may be a little different but if you use an iterative method with smaller and smaller tolerances the solutions should converge to the same value. With direct solvers the solutions should be very similar. > > Could you please provide more details about not getting the solution for the parallel system? Send output of -ksp_monitor_true_residual for example > > Barry > > >> >> Therefore I wonder if there is a way to remove zero entries from an assembled matrix, or which solver to use so that zeros will not be a problem. >> >> Thanks in advance, >> Catherine >> >> >> >> From bsmith at mcs.anl.gov Wed Dec 7 07:22:57 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 7 Dec 2016 07:22:57 -0600 Subject: [petsc-users] FieldSplit, multigrid and blocksize In-Reply-To: References: Message-ID: > On Dec 7, 2016, at 7:06 AM, Karin&NiKo wrote: > > Dear PETSc gurus, > > I am using FieldSplit to solve a poro-mechanics problem. Thus, I am dealing with 3 displacement DOF and 1 pressure DOF. > In order to precondition the 00 block (aka the displacement block), I am using a multigrid method (ml or gamg). Nevertheless, I have the feeling that the multigrids performance is much lower than in the case where they are used on pure displacement problems (say elasticity). Indeed, I do not know how to set the block size of the 00 block when using FieldSplit! > Could you please give me some hint on that? In your case you can use a block size of 4. The first field is defined by "components" 0, 1, and 2 and the second field (the pressure) is defined by component 3. Use PCFieldSplitSetFields() to set the fields and set the matrix block size to 4 (use AIJ matrix). If the displacement block corresponds to a true displacement problem then one should expect similar convergence of the multigrid. BUT note that usually with PCFIELDSPLIT one just does a single V-cycle of multigrid (KSP type of preonly) on the 00 block in each iteration. Run with -ksp_view to see what the solve is actually doing. > (the phrase "The fieldsplit preconditioner cannot currently be used with the BAIJ or SBAIJ data formats if the blocksize is larger than 1." is not clear enough for me...). 
To use fieldsplit you should use AIJ matrix, not BAIJ or SBAIJ (don't worry about impacting performance the fieldsplit pulls apart the blocks anyways so there would be no advantage to BAIJ or SBAIJ). > > Thanks in advance, > Nicolas From niko.karin at gmail.com Wed Dec 7 07:43:59 2016 From: niko.karin at gmail.com (Karin&NiKo) Date: Wed, 7 Dec 2016 14:43:59 +0100 Subject: [petsc-users] FieldSplit, multigrid and blocksize In-Reply-To: References: Message-ID: Thanks Barry. I must emphasize that my unknowns are not numbered in a regular way : I am using a P2-P1 finite element and the middle nodes do not carry a pressure DOF. So the global numbering is somewhat like : ----------------------------------------------------------------------------------------------------------- u1x, u1y, u1z, p, u2x, u2y, u2z, p2, u3x, u3y, u3z, u4x, u4y, u4z, p4, ..... node 1 DOF | node 2 DOF | node 3 DOF | node 4 DOF | ----------------------------------------------------------------------------------------------------------- So my global matrix does not have a block-size of 4. Nevertheless the A00 matrix has a block size of 3! Is there a way to specify that only on the A00 sub-matrix? Nicolas 2016-12-07 14:22 GMT+01:00 Barry Smith : > > > On Dec 7, 2016, at 7:06 AM, Karin&NiKo wrote: > > > > Dear PETSc gurus, > > > > I am using FieldSplit to solve a poro-mechanics problem. Thus, I am > dealing with 3 displacement DOF and 1 pressure DOF. > > In order to precondition the 00 block (aka the displacement block), I am > using a multigrid method (ml or gamg). Nevertheless, I have the feeling > that the multigrids performance is much lower than in the case where they > are used on pure displacement problems (say elasticity). Indeed, I do not > know how to set the block size of the 00 block when using FieldSplit! > > Could you please give me some hint on that? > > In your case you can use a block size of 4. The first field is defined > by "components" 0, 1, and 2 and the second field (the pressure) is defined > by component 3. Use PCFieldSplitSetFields() to set the fields and set the > matrix block size to 4 (use AIJ matrix). > > If the displacement block corresponds to a true displacement problem > then one should expect similar convergence of the multigrid. BUT note that > usually with PCFIELDSPLIT one just does a single V-cycle of multigrid (KSP > type of preonly) on the 00 block in each iteration. Run with -ksp_view to > see what the solve is actually doing. > > > (the phrase "The fieldsplit preconditioner cannot currently be used with > the BAIJ or SBAIJ data formats if the blocksize is larger than 1." is not > clear enough for me...). > > To use fieldsplit you should use AIJ matrix, not BAIJ or SBAIJ (don't > worry about impacting performance the fieldsplit pulls apart the blocks > anyways so there would be no advantage to BAIJ or SBAIJ). > > > > Thanks in advance, > > Nicolas > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lawrence.mitchell at imperial.ac.uk Wed Dec 7 07:45:38 2016 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Wed, 7 Dec 2016 13:45:38 +0000 Subject: [petsc-users] FieldSplit, multigrid and blocksize In-Reply-To: References: Message-ID: <7ef31e41-2b29-6acb-4dc8-8ead93dde2c4@imperial.ac.uk> On 07/12/16 13:43, Karin&NiKo wrote: > Thanks Barry. > I must emphasize that my unknowns are not numbered in a regular way : > I am using a P2-P1 finite element and the middle nodes do not carry a > pressure DOF. 
So the global numbering is somewhat like : > ----------------------------------------------------------------------------------------------------------- > u1x, u1y, u1z, p, u2x, u2y, u2z, p2, u3x, u3y, u3z, u4x, u4y, u4z, p4, > ..... > node 1 DOF | node 2 DOF | node 3 DOF | node 4 DOF | > ----------------------------------------------------------------------------------------------------------- > > So my global matrix does not have a block-size of 4. Nevertheless the > A00 matrix has a block size of 3! > Is there a way to specify that only on the A00 sub-matrix? I presume you are defining the splits by providing ISes. You need to set the block size on the IS that defines the A00 block appropriately, then the submatrix will have it. Lawrence -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From mfadams at lbl.gov Wed Dec 7 08:08:31 2016 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Dec 2016 17:08:31 +0300 Subject: [petsc-users] FieldSplit, multigrid and blocksize In-Reply-To: <7ef31e41-2b29-6acb-4dc8-8ead93dde2c4@imperial.ac.uk> References: <7ef31e41-2b29-6acb-4dc8-8ead93dde2c4@imperial.ac.uk> Message-ID: Note, for best performance with ML and GAMG you want to give it the near kernel for the 00 block. These are the 6 "rigid body modes" or zero energy modes. PETSc provides some tools to do that (eg, MatNullSpaceCreateRigidBody). On Wed, Dec 7, 2016 at 4:45 PM, Lawrence Mitchell < lawrence.mitchell at imperial.ac.uk> wrote: > > > On 07/12/16 13:43, Karin&NiKo wrote: > > Thanks Barry. > > I must emphasize that my unknowns are not numbered in a regular way : > > I am using a P2-P1 finite element and the middle nodes do not carry a > > pressure DOF. So the global numbering is somewhat like : > > ------------------------------------------------------------ > ----------------------------------------------- > > u1x, u1y, u1z, p, u2x, u2y, u2z, p2, u3x, u3y, u3z, u4x, u4y, u4z, p4, > > ..... > > node 1 DOF | node 2 DOF | node 3 DOF | node 4 DOF | > > ------------------------------------------------------------ > ----------------------------------------------- > > > > So my global matrix does not have a block-size of 4. Nevertheless the > > A00 matrix has a block size of 3! > > Is there a way to specify that only on the A00 sub-matrix? > > I presume you are defining the splits by providing ISes. You need to > set the block size on the IS that defines the A00 block appropriately, > then the submatrix will have it. > > Lawrence > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From niko.karin at gmail.com Wed Dec 7 08:46:37 2016 From: niko.karin at gmail.com (Karin&NiKo) Date: Wed, 7 Dec 2016 15:46:37 +0100 Subject: [petsc-users] FieldSplit, multigrid and blocksize In-Reply-To: References: <7ef31e41-2b29-6acb-4dc8-8ead93dde2c4@imperial.ac.uk> Message-ID: Thank you all. These are the answers I was looking for! Best regards, Nicolas 2016-12-07 15:08 GMT+01:00 Mark Adams : > Note, for best performance with ML and GAMG you want to give it the near > kernel for the 00 block. These are the 6 "rigid body modes" or zero energy > modes. PETSc provides some tools to do that (eg, > MatNullSpaceCreateRigidBody). > > On Wed, Dec 7, 2016 at 4:45 PM, Lawrence Mitchell < > lawrence.mitchell at imperial.ac.uk> wrote: > >> >> >> On 07/12/16 13:43, Karin&NiKo wrote: >> > Thanks Barry. 
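Putting Lawrence's and Mark's suggestions together, a hedged C sketch of what this could look like (isU/isP, the block size of 3, the coordinate vector coords and the displacement block A00 are assumptions drawn from this discussion, not code from the thread):

/* isU lists the displacement dofs, isP the pressure dofs */
ierr = ISSetBlockSize(isU,3);CHKERRQ(ierr);              /* so the extracted A00 submatrix inherits bs = 3 */
ierr = PCFieldSplitSetIS(pc,"0",isU);CHKERRQ(ierr);
ierr = PCFieldSplitSetIS(pc,"1",isP);CHKERRQ(ierr);

/* near kernel (rigid body modes) for the displacement block, used by ML/GAMG */
MatNullSpace nearnull;
ierr = MatNullSpaceCreateRigidBody(coords,&nearnull);CHKERRQ(ierr);  /* coords: Vec of nodal coordinates with bs = 3 */
ierr = MatSetNearNullSpace(A00,nearnull);CHKERRQ(ierr);              /* A00: the operator of the displacement block */
ierr = MatNullSpaceDestroy(&nearnull);CHKERRQ(ierr);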
>> > I must emphasize that my unknowns are not numbered in a regular way : >> > I am using a P2-P1 finite element and the middle nodes do not carry a >> > pressure DOF. So the global numbering is somewhat like : >> > ------------------------------------------------------------ >> ----------------------------------------------- >> > u1x, u1y, u1z, p, u2x, u2y, u2z, p2, u3x, u3y, u3z, u4x, u4y, u4z, p4, >> > ..... >> > node 1 DOF | node 2 DOF | node 3 DOF | node 4 DOF | >> > ------------------------------------------------------------ >> ----------------------------------------------- >> > >> > So my global matrix does not have a block-size of 4. Nevertheless the >> > A00 matrix has a block size of 3! >> > Is there a way to specify that only on the A00 sub-matrix? >> >> I presume you are defining the splits by providing ISes. You need to >> set the block size on the IS that defines the A00 block appropriately, >> then the submatrix will have it. >> >> Lawrence >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.Chamberland at giref.ulaval.ca Wed Dec 7 09:55:30 2016 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Wed, 7 Dec 2016 10:55:30 -0500 Subject: [petsc-users] FieldSplit, multigrid and blocksize In-Reply-To: References: Message-ID: Hi Nicolas, for us the solution has been to "manually" create a MatNest with, ie, block A00 containing only u-u coupling and block A11 containing p-p coupling. Thus, we are able to assign block size of 3 for A00 and block size of 1 for A11. The other thing we did, is to be able to number the unknowns in a u then p order (on each process). Thus, the ISs are "continuous strides" per process. It allowed us to be able to do the assembly efficiently *directly* into a MatNest (which is not a Petsc native feature) then, we didn't touch the complex part of the assembly in our code, but just the small function where we call MatSetValues to split the elementary indices accordingly to A00 then A01, A10 and A11 (I mean we keep the same elementary matrix, but just translate at most 4 times the indices to sub-matrix ranges, and negate the ones not to be assembled into the sub-matrix). Have a nice day! Eric On 07/12/16 08:43 AM, Karin&NiKo wrote: > Thanks Barry. > I must emphasize that my unknowns are not numbered in a regular way : I > am using a P2-P1 finite element and the middle nodes do not carry a > pressure DOF. So the global numbering is somewhat like : > ----------------------------------------------------------------------------------------------------------- > u1x, u1y, u1z, p, u2x, u2y, u2z, p2, u3x, u3y, u3z, u4x, u4y, u4z, p4, ..... > node 1 DOF | node 2 DOF | node 3 DOF | node 4 DOF | > ----------------------------------------------------------------------------------------------------------- > > So my global matrix does not have a block-size of 4. Nevertheless the > A00 matrix has a block size of 3! > Is there a way to specify that only on the A00 sub-matrix? > > Nicolas > > > > 2016-12-07 14:22 GMT+01:00 Barry Smith >: > > > > On Dec 7, 2016, at 7:06 AM, Karin&NiKo > wrote: > > > > Dear PETSc gurus, > > > > I am using FieldSplit to solve a poro-mechanics problem. Thus, I am dealing with 3 displacement DOF and 1 pressure DOF. > > In order to precondition the 00 block (aka the displacement block), I am using a multigrid method (ml or gamg). 
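A hedged C sketch of the MatNest layout Eric describes (all names are illustrative; the four blocks and the two per-process index sets are assumed to have been built beforehand):

/* A00 (u-u), A01 (u-p), A10 (p-u), A11 (p-p) are assembled separately;
   A00 is created with block size 3 and A11 with block size 1. */
Mat subs[4]   = {A00,A01,A10,A11};
IS  fields[2] = {isU,isP};   /* per-process contiguous u range, then p range */
Mat Anest;
ierr = MatCreateNest(PETSC_COMM_WORLD,2,fields,2,fields,subs,&Anest);CHKERRQ(ierr);
/* the same ISes can then be reused with PCFieldSplitSetIS() for splits "0" and "1" */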
Nevertheless, I have the feeling that the multigrids performance is much lower than in the case where they are used on pure displacement problems (say elasticity). Indeed, I do not know how to set the block size of the 00 block when using FieldSplit! > > Could you please give me some hint on that? > > In your case you can use a block size of 4. The first field is > defined by "components" 0, 1, and 2 and the second field (the > pressure) is defined by component 3. Use PCFieldSplitSetFields() to > set the fields and set the matrix block size to 4 (use AIJ matrix). > > If the displacement block corresponds to a true displacement > problem then one should expect similar convergence of the multigrid. > BUT note that usually with PCFIELDSPLIT one just does a single > V-cycle of multigrid (KSP type of preonly) on the 00 block in each > iteration. Run with -ksp_view to see what the solve is actually doing. > > > (the phrase "The fieldsplit preconditioner cannot currently be used with the BAIJ or SBAIJ data formats if the blocksize is larger than 1." is not clear enough for me...). > > To use fieldsplit you should use AIJ matrix, not BAIJ or SBAIJ > (don't worry about impacting performance the fieldsplit pulls apart > the blocks anyways so there would be no advantage to BAIJ or SBAIJ). > > > > Thanks in advance, > > Nicolas > > From hengjiew at uci.edu Wed Dec 7 17:48:56 2016 From: hengjiew at uci.edu (frank) Date: Wed, 7 Dec 2016 15:48:56 -0800 Subject: [petsc-users] Question about Set-up of Full MG and its Output In-Reply-To: <87pol4u1t7.fsf@jedbrown.org> References: <87pol4u1t7.fsf@jedbrown.org> Message-ID: <04b6ad59-9d74-389d-006b-52bff937433e@uci.edu> Hello, Thank you. Now I am able to see the trace of MG. I still have a question about the interpolation. I wan to get the matrix of the default interpolation method and print it on terminal. The code is as follow: ( KSP is already set by petsc options) ----------------------------------------------------------------------------------------- 132 CALL KSPGetPC( ksp, pc, ierr ) 133 CALL MATCreate( PETSC_COMM_WORLD, interpMat, ierr ) 134 CALL MATSetType( interpMat, MATSEQAIJ, ierr ) 135 CALL MATSetSizes( interpMat, i5, i5, i5, i5, ierr ) 136 CALL MATSetUp( interpMat, ierr ) 137 CALL PCMGGetInterpolation( pc, i1, interpMat, ierr ) 138 CALL MatAssemblyBegin( interpMat, MAT_FINAL_ASSEMBLY, ierr ) 139 CALL MatAssemblyEnd( interpMat, MAT_FINAL_ASSEMBLY, ierr ) 140 CALL MatView( interpMat, PETSC_VIEWER_STDOUT_SELF, ierr ) ----------------------------------------------------------------------------------------- The error massage is: ------------------------------------------------------------------------------------------------------- [0]PETSC ERROR: Object is in wrong state [0]PETSC ERROR: Must call PCMGSetInterpolation() or PCMGSetRestriction() ------------------------------------------------------------------------------------------------------- Do I have to set the interpolation first? How can I just print the default interpolation matrix? I attached the option file. Thank you. Frank On 12/06/2016 02:31 PM, Jed Brown wrote: > frank writes: > >> Dear all, >> >> I am trying to use full MG to solve a 2D Poisson equation. >> >> I want to set full MG as the solver and SOR as the smoother. Is the >> following setup the proper way to do it? >> -ksp_type richardson >> -pc_type mg >> -pc_mg_type full >> -mg_levels_ksp_type richardson >> -mg_levels_pc_type sor >> >> The ksp_view shows the levels from the coarsest mesh to finest mesh in a >> linear order. 
> It is showing the solver configuration, not a trace of the cycle. > >> I was expecting sth like: coarsest -> level1 -> coarsest -> level1 -> >> level2 -> level1 -> coarsest -> ... >> Is there a way to show exactly how the full MG proceeds? > You could get a trace like this from > > -mg_coarse_ksp_converged_reason -mg_levels_ksp_converged_reason > > If you want to deliminate the iterations, you could add -ksp_monitor. > >> Also in the above example, I want to know what interpolation or >> prolongation method is used from level1 to level2. >> Can I get that info by adding some options? (not using PCMGGetInterpolation) >> >> I attached the ksp_view info and my petsc options file. >> Thank you. >> >> Frank >> Linear solve converged due to CONVERGED_RTOL iterations 3 >> KSP Object: 1 MPI processes >> type: richardson >> Richardson: damping factor=1. >> maximum iterations=10000 >> tolerances: relative=1e-07, absolute=1e-50, divergence=10000. >> left preconditioning >> using nonzero initial guess >> using UNPRECONDITIONED norm type for convergence test >> PC Object: 1 MPI processes >> type: mg >> MG: type is FULL, levels=6 cycles=v >> Using Galerkin computed coarse grid matrices >> Coarse grid solver -- level ------------------------------- >> KSP Object: (mg_coarse_) 1 MPI processes >> type: preonly >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >> left preconditioning >> using NONE norm type for convergence test >> PC Object: (mg_coarse_) 1 MPI processes >> type: lu >> out-of-place factorization >> tolerance for zero pivot 2.22045e-14 >> using diagonal shift on blocks to prevent zero pivot [INBLOCKS] >> matrix ordering: nd >> factor fill ratio given 0., needed 0. >> Factored matrix follows: >> Mat Object: 1 MPI processes >> type: superlu_dist >> rows=64, cols=64 >> package used to perform factorization: superlu_dist >> total: nonzeros=0, allocated nonzeros=0 >> total number of mallocs used during MatSetValues calls =0 >> SuperLU_DIST run parameters: >> Process grid nprow 1 x npcol 1 >> Equilibrate matrix TRUE >> Matrix input mode 0 >> Replace tiny pivots FALSE >> Use iterative refinement FALSE >> Processors in row 1 col partition 1 >> Row permutation LargeDiag >> Column permutation METIS_AT_PLUS_A >> Parallel symbolic factorization FALSE >> Repeated factorization SamePattern >> linear system matrix = precond matrix: >> Mat Object: 1 MPI processes >> type: seqaij >> rows=64, cols=64 >> total: nonzeros=576, allocated nonzeros=576 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> Down solver (pre-smoother) on level 1 ------------------------------- >> KSP Object: (mg_levels_1_) 1 MPI processes >> type: richardson >> Richardson: damping factor=1. >> maximum iterations=1 >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >> left preconditioning >> using nonzero initial guess >> using NONE norm type for convergence test >> PC Object: (mg_levels_1_) 1 MPI processes >> type: sor >> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. 
>> linear system matrix = precond matrix: >> Mat Object: 1 MPI processes >> type: seqaij >> rows=256, cols=256 >> total: nonzeros=2304, allocated nonzeros=2304 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> Up solver (post-smoother) same as down solver (pre-smoother) >> Down solver (pre-smoother) on level 2 ------------------------------- >> KSP Object: (mg_levels_2_) 1 MPI processes >> type: richardson >> Richardson: damping factor=1. >> maximum iterations=1 >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >> left preconditioning >> using nonzero initial guess >> using NONE norm type for convergence test >> PC Object: (mg_levels_2_) 1 MPI processes >> type: sor >> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. >> linear system matrix = precond matrix: >> Mat Object: 1 MPI processes >> type: seqaij >> rows=1024, cols=1024 >> total: nonzeros=9216, allocated nonzeros=9216 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> Up solver (post-smoother) same as down solver (pre-smoother) >> Down solver (pre-smoother) on level 3 ------------------------------- >> KSP Object: (mg_levels_3_) 1 MPI processes >> type: richardson >> Richardson: damping factor=1. >> maximum iterations=1 >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >> left preconditioning >> using nonzero initial guess >> using NONE norm type for convergence test >> PC Object: (mg_levels_3_) 1 MPI processes >> type: sor >> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. >> linear system matrix = precond matrix: >> Mat Object: 1 MPI processes >> type: seqaij >> rows=4096, cols=4096 >> total: nonzeros=36864, allocated nonzeros=36864 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> Up solver (post-smoother) same as down solver (pre-smoother) >> Down solver (pre-smoother) on level 4 ------------------------------- >> KSP Object: (mg_levels_4_) 1 MPI processes >> type: richardson >> Richardson: damping factor=1. >> maximum iterations=1 >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >> left preconditioning >> using nonzero initial guess >> using NONE norm type for convergence test >> PC Object: (mg_levels_4_) 1 MPI processes >> type: sor >> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. >> linear system matrix = precond matrix: >> Mat Object: 1 MPI processes >> type: seqaij >> rows=16384, cols=16384 >> total: nonzeros=147456, allocated nonzeros=147456 >> total number of mallocs used during MatSetValues calls =0 >> not using I-node routines >> Up solver (post-smoother) same as down solver (pre-smoother) >> Down solver (pre-smoother) on level 5 ------------------------------- >> KSP Object: (mg_levels_5_) 1 MPI processes >> type: richardson >> Richardson: damping factor=1. >> maximum iterations=1 >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >> left preconditioning >> using nonzero initial guess >> using NONE norm type for convergence test >> PC Object: (mg_levels_5_) 1 MPI processes >> type: sor >> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. 
>> linear system matrix = precond matrix: >> Mat Object: 1 MPI processes >> type: seqaij >> rows=65536, cols=65536 >> total: nonzeros=327680, allocated nonzeros=327680 >> total number of mallocs used during MatSetValues calls =0 >> has attached null space >> not using I-node routines >> Up solver (post-smoother) same as down solver (pre-smoother) >> linear system matrix = precond matrix: >> Mat Object: 1 MPI processes >> type: seqaij >> rows=65536, cols=65536 >> total: nonzeros=327680, allocated nonzeros=327680 >> total number of mallocs used during MatSetValues calls =0 >> has attached null space >> not using I-node routines >> #PETSc Option Table entries: >> -ksp_converged_reason >> -ksp_initial_guess_nonzero yes >> -ksp_norm_type unpreconditioned >> -ksp_rtol 1e-7 >> -ksp_type richardson >> -ksp_view >> -mg_coarse_ksp_type preonly >> -mg_coarse_pc_factor_mat_solver_package superlu_dist >> -mg_coarse_pc_type lu >> -mg_levels_ksp_max_it 1 >> -mg_levels_ksp_type richardson >> -mg_levels_pc_type sor >> -N 256 >> -options_left >> -pc_mg_galerkin >> -pc_mg_levels 6 >> -pc_mg_type full >> -pc_type mg >> -px 1 >> -py 1 >> #End of PETSc Option Table entries >> There are no unused options. >> -ksp_type richardson >> -ksp_norm_type unpreconditioned >> -ksp_rtol 1e-7 >> -options_left >> -ksp_initial_guess_nonzero yes >> -ksp_converged_reason >> -ksp_view >> -pc_type mg >> -pc_mg_type full >> -pc_mg_galerkin >> -pc_mg_levels 6 >> -mg_levels_ksp_type richardson >> -mg_levels_pc_type sor >> -mg_levels_ksp_max_it 1 >> -mg_coarse_ksp_type preonly >> -mg_coarse_pc_type lu >> -mg_coarse_pc_factor_mat_solver_package superlu_dist -------------- next part -------------- -ksp_type richardson -ksp_norm_type unpreconditioned -ksp_rtol 1e-7 -options_left -ksp_initial_guess_nonzero yes -ksp_converged_reason -ksp_view -pc_type mg -pc_mg_galerkin -pc_mg_levels 6 -mg_levels_ksp_type richardson -mg_levels_pc_type sor -mg_levels_ksp_max_it 1 -mg_levels_ksp_converged_reason -mg_coarse_ksp_type preonly -mg_coarse_ksp_converged_reason -mg_coarse_pc_type lu -mg_coarse_pc_factor_mat_solver_package superlu_dist From bsmith at mcs.anl.gov Wed Dec 7 18:00:42 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 7 Dec 2016 18:00:42 -0600 Subject: [petsc-users] Question about Set-up of Full MG and its Output In-Reply-To: <04b6ad59-9d74-389d-006b-52bff937433e@uci.edu> References: <87pol4u1t7.fsf@jedbrown.org> <04b6ad59-9d74-389d-006b-52bff937433e@uci.edu> Message-ID: Frank, There is no "default" interpolation for PCMG. It is always defined depending on how you setup the solver. If you use KSP with a DM then it uses calls to DM to generate the interpolation (for example with DMDA it uses either piecewise bi/trilinear interpolation or piecewise constant). With GAMG it uses one defined by the algebraic multigrid algorithm. I think it is returning nothing because the KSP in your case has not been fully set up yet. Try calling AFTER the KSPSolve() because by then the PCMG infrastructure is fully set up. See below also > On Dec 7, 2016, at 5:48 PM, frank wrote: > > Hello, > > Thank you. Now I am able to see the trace of MG. > I still have a question about the interpolation. I wan to get the matrix of the default interpolation method and print it on terminal. 
> The code is as follow: ( KSP is already set by petsc options) > ----------------------------------------------------------------------------------------- > 132 CALL KSPGetPC( ksp, pc, ierr ) Remove the next 4 lines > 133 CALL MATCreate( PETSC_COMM_WORLD, interpMat, ierr ) > 134 CALL MATSetType( interpMat, MATSEQAIJ, ierr ) > 135 CALL MATSetSizes( interpMat, i5, i5, i5, i5, ierr ) > 136 CALL MATSetUp( interpMat, ierr ) > 137 CALL PCMGGetInterpolation( pc, i1, interpMat, ierr ) Remove the next 2 lines > 138 CALL MatAssemblyBegin( interpMat, MAT_FINAL_ASSEMBLY, ierr ) > 139 CALL MatAssemblyEnd( interpMat, MAT_FINAL_ASSEMBLY, ierr ) > 140 CALL MatView( interpMat, PETSC_VIEWER_STDOUT_SELF, ierr ) > ----------------------------------------------------------------------------------------- > > The error massage is: > ------------------------------------------------------------------------------------------------------- > [0]PETSC ERROR: Object is in wrong state > [0]PETSC ERROR: Must call PCMGSetInterpolation() or PCMGSetRestriction() > ------------------------------------------------------------------------------------------------------- > > Do I have to set the interpolation first? How can I just print the default interpolation matrix? > I attached the option file. > > Thank you. > Frank > > > > On 12/06/2016 02:31 PM, Jed Brown wrote: >> frank writes: >> >>> Dear all, >>> >>> I am trying to use full MG to solve a 2D Poisson equation. >>> >>> I want to set full MG as the solver and SOR as the smoother. Is the >>> following setup the proper way to do it? >>> -ksp_type richardson >>> -pc_type mg >>> -pc_mg_type full >>> -mg_levels_ksp_type richardson >>> -mg_levels_pc_type sor >>> >>> The ksp_view shows the levels from the coarsest mesh to finest mesh in a >>> linear order. >> It is showing the solver configuration, not a trace of the cycle. >> >>> I was expecting sth like: coarsest -> level1 -> coarsest -> level1 -> >>> level2 -> level1 -> coarsest -> ... >>> Is there a way to show exactly how the full MG proceeds? >> You could get a trace like this from >> >> -mg_coarse_ksp_converged_reason -mg_levels_ksp_converged_reason >> >> If you want to deliminate the iterations, you could add -ksp_monitor. >> >>> Also in the above example, I want to know what interpolation or >>> prolongation method is used from level1 to level2. >>> Can I get that info by adding some options? (not using PCMGGetInterpolation) >>> >>> I attached the ksp_view info and my petsc options file. >>> Thank you. >>> >>> Frank >>> Linear solve converged due to CONVERGED_RTOL iterations 3 >>> KSP Object: 1 MPI processes >>> type: richardson >>> Richardson: damping factor=1. >>> maximum iterations=10000 >>> tolerances: relative=1e-07, absolute=1e-50, divergence=10000. >>> left preconditioning >>> using nonzero initial guess >>> using UNPRECONDITIONED norm type for convergence test >>> PC Object: 1 MPI processes >>> type: mg >>> MG: type is FULL, levels=6 cycles=v >>> Using Galerkin computed coarse grid matrices >>> Coarse grid solver -- level ------------------------------- >>> KSP Object: (mg_coarse_) 1 MPI processes >>> type: preonly >>> maximum iterations=10000, initial guess is zero >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
>>> left preconditioning >>> using NONE norm type for convergence test >>> PC Object: (mg_coarse_) 1 MPI processes >>> type: lu >>> out-of-place factorization >>> tolerance for zero pivot 2.22045e-14 >>> using diagonal shift on blocks to prevent zero pivot [INBLOCKS] >>> matrix ordering: nd >>> factor fill ratio given 0., needed 0. >>> Factored matrix follows: >>> Mat Object: 1 MPI processes >>> type: superlu_dist >>> rows=64, cols=64 >>> package used to perform factorization: superlu_dist >>> total: nonzeros=0, allocated nonzeros=0 >>> total number of mallocs used during MatSetValues calls =0 >>> SuperLU_DIST run parameters: >>> Process grid nprow 1 x npcol 1 >>> Equilibrate matrix TRUE >>> Matrix input mode 0 >>> Replace tiny pivots FALSE >>> Use iterative refinement FALSE >>> Processors in row 1 col partition 1 >>> Row permutation LargeDiag >>> Column permutation METIS_AT_PLUS_A >>> Parallel symbolic factorization FALSE >>> Repeated factorization SamePattern >>> linear system matrix = precond matrix: >>> Mat Object: 1 MPI processes >>> type: seqaij >>> rows=64, cols=64 >>> total: nonzeros=576, allocated nonzeros=576 >>> total number of mallocs used during MatSetValues calls =0 >>> not using I-node routines >>> Down solver (pre-smoother) on level 1 ------------------------------- >>> KSP Object: (mg_levels_1_) 1 MPI processes >>> type: richardson >>> Richardson: damping factor=1. >>> maximum iterations=1 >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>> left preconditioning >>> using nonzero initial guess >>> using NONE norm type for convergence test >>> PC Object: (mg_levels_1_) 1 MPI processes >>> type: sor >>> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. >>> linear system matrix = precond matrix: >>> Mat Object: 1 MPI processes >>> type: seqaij >>> rows=256, cols=256 >>> total: nonzeros=2304, allocated nonzeros=2304 >>> total number of mallocs used during MatSetValues calls =0 >>> not using I-node routines >>> Up solver (post-smoother) same as down solver (pre-smoother) >>> Down solver (pre-smoother) on level 2 ------------------------------- >>> KSP Object: (mg_levels_2_) 1 MPI processes >>> type: richardson >>> Richardson: damping factor=1. >>> maximum iterations=1 >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>> left preconditioning >>> using nonzero initial guess >>> using NONE norm type for convergence test >>> PC Object: (mg_levels_2_) 1 MPI processes >>> type: sor >>> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. >>> linear system matrix = precond matrix: >>> Mat Object: 1 MPI processes >>> type: seqaij >>> rows=1024, cols=1024 >>> total: nonzeros=9216, allocated nonzeros=9216 >>> total number of mallocs used during MatSetValues calls =0 >>> not using I-node routines >>> Up solver (post-smoother) same as down solver (pre-smoother) >>> Down solver (pre-smoother) on level 3 ------------------------------- >>> KSP Object: (mg_levels_3_) 1 MPI processes >>> type: richardson >>> Richardson: damping factor=1. >>> maximum iterations=1 >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>> left preconditioning >>> using nonzero initial guess >>> using NONE norm type for convergence test >>> PC Object: (mg_levels_3_) 1 MPI processes >>> type: sor >>> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. 
>>> linear system matrix = precond matrix: >>> Mat Object: 1 MPI processes >>> type: seqaij >>> rows=4096, cols=4096 >>> total: nonzeros=36864, allocated nonzeros=36864 >>> total number of mallocs used during MatSetValues calls =0 >>> not using I-node routines >>> Up solver (post-smoother) same as down solver (pre-smoother) >>> Down solver (pre-smoother) on level 4 ------------------------------- >>> KSP Object: (mg_levels_4_) 1 MPI processes >>> type: richardson >>> Richardson: damping factor=1. >>> maximum iterations=1 >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>> left preconditioning >>> using nonzero initial guess >>> using NONE norm type for convergence test >>> PC Object: (mg_levels_4_) 1 MPI processes >>> type: sor >>> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. >>> linear system matrix = precond matrix: >>> Mat Object: 1 MPI processes >>> type: seqaij >>> rows=16384, cols=16384 >>> total: nonzeros=147456, allocated nonzeros=147456 >>> total number of mallocs used during MatSetValues calls =0 >>> not using I-node routines >>> Up solver (post-smoother) same as down solver (pre-smoother) >>> Down solver (pre-smoother) on level 5 ------------------------------- >>> KSP Object: (mg_levels_5_) 1 MPI processes >>> type: richardson >>> Richardson: damping factor=1. >>> maximum iterations=1 >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>> left preconditioning >>> using nonzero initial guess >>> using NONE norm type for convergence test >>> PC Object: (mg_levels_5_) 1 MPI processes >>> type: sor >>> SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1. >>> linear system matrix = precond matrix: >>> Mat Object: 1 MPI processes >>> type: seqaij >>> rows=65536, cols=65536 >>> total: nonzeros=327680, allocated nonzeros=327680 >>> total number of mallocs used during MatSetValues calls =0 >>> has attached null space >>> not using I-node routines >>> Up solver (post-smoother) same as down solver (pre-smoother) >>> linear system matrix = precond matrix: >>> Mat Object: 1 MPI processes >>> type: seqaij >>> rows=65536, cols=65536 >>> total: nonzeros=327680, allocated nonzeros=327680 >>> total number of mallocs used during MatSetValues calls =0 >>> has attached null space >>> not using I-node routines >>> #PETSc Option Table entries: >>> -ksp_converged_reason >>> -ksp_initial_guess_nonzero yes >>> -ksp_norm_type unpreconditioned >>> -ksp_rtol 1e-7 >>> -ksp_type richardson >>> -ksp_view >>> -mg_coarse_ksp_type preonly >>> -mg_coarse_pc_factor_mat_solver_package superlu_dist >>> -mg_coarse_pc_type lu >>> -mg_levels_ksp_max_it 1 >>> -mg_levels_ksp_type richardson >>> -mg_levels_pc_type sor >>> -N 256 >>> -options_left >>> -pc_mg_galerkin >>> -pc_mg_levels 6 >>> -pc_mg_type full >>> -pc_type mg >>> -px 1 >>> -py 1 >>> #End of PETSc Option Table entries >>> There are no unused options. 
>>> -ksp_type richardson >>> -ksp_norm_type unpreconditioned >>> -ksp_rtol 1e-7 >>> -options_left >>> -ksp_initial_guess_nonzero yes >>> -ksp_converged_reason >>> -ksp_view >>> -pc_type mg >>> -pc_mg_type full >>> -pc_mg_galerkin >>> -pc_mg_levels 6 >>> -mg_levels_ksp_type richardson >>> -mg_levels_pc_type sor >>> -mg_levels_ksp_max_it 1 >>> -mg_coarse_ksp_type preonly >>> -mg_coarse_pc_type lu >>> -mg_coarse_pc_factor_mat_solver_package superlu_dist > > From khaipham at utexas.edu Thu Dec 8 14:52:24 2016 From: khaipham at utexas.edu (Khai Pham) Date: Thu, 8 Dec 2016 15:52:24 -0500 Subject: [petsc-users] Running the same problem multiple time, but the matrix is not the same Message-ID: Hello, I have problem with matrix assembly for linear solver KSP. I run the the same problem with 4 processors multiple time. Using flag -mat_view ::ascii_matlab to view the matrix. The matrix outputs are not the same during each run. Please see the attached file for the comparison between two runs. I checked all the input for each processor as it calls to MatSetValues ( indices of the matrix and values ) and it's consistent all the time. I also checked the allocation information (with flag -info) and it looks fine. Could you give me an advice how to deal with this issue? Thanks ! Best, Khai -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot from 2016-12-08 15-40-29.png Type: image/png Size: 231572 bytes Desc: not available URL: From hzhang at mcs.anl.gov Thu Dec 8 15:01:08 2016 From: hzhang at mcs.anl.gov (Hong) Date: Thu, 8 Dec 2016 15:01:08 -0600 Subject: [petsc-users] Running the same problem multiple time, but the matrix is not the same In-Reply-To: References: Message-ID: Khai : Your solution components have values ranging from 1.e+9 to 1.e-8, and the values only differ in the order of 1.e-8, which are within computational error tolerance. I would consider all solutions same within the approximation tolerance. Hong Hello, > > I have problem with matrix assembly for linear solver KSP. I run the the > same problem with 4 processors multiple time. Using flag > -mat_view ::ascii_matlab to view the matrix. The matrix outputs are not the > same during each run. Please see the attached file for the comparison > between two runs. I checked all the input for each processor as it calls to > MatSetValues ( indices of the matrix and values ) and it's consistent all > the time. I also checked the allocation information (with flag -info) and > it looks fine. Could you give me an advice how to deal with this issue? > Thanks ! > > Best, > Khai > -------------- next part -------------- An HTML attachment was scrubbed... URL: From khaipham at utexas.edu Thu Dec 8 17:10:25 2016 From: khaipham at utexas.edu (Khai Pham) Date: Thu, 8 Dec 2016 18:10:25 -0500 Subject: [petsc-users] Running the same problem multiple time, but the matrix is not the same In-Reply-To: References: Message-ID: Hi Hong, I more concern about the difference between A1 and A2 is in the order of O(1.e-8) as I run the same code twice. Should it be in the order of machine epsilon? Khai On Thu, Dec 8, 2016 at 4:54 PM, Hong wrote: > Khai : >> >> Thanks for your response. The output is not the solution. They are the >> component of the matrix in matlab format. I would expect the difference in >> the order of machine precesion. The difference in solutions in two runs is >> in the order of 1e-5. 
>> > > If your matrices A1 and A2 have difference of O(1.e-8), then the computed > solution may differ by > Condition_number(A) * machine_epsion. Do you know cond(A)? > > Please alway send your request to petsc-maint. > > Hong > >> >> >> On Dec 8, 2016 4:01 PM, "Hong" wrote: >> >> Khai : >> Your solution components have values ranging from 1.e+9 to 1.e-8, and the >> values only differ in the order of 1.e-8, which are within computational >> error tolerance. >> I would consider all solutions same within the approximation tolerance. >> >> Hong >> >> Hello, >>> >>> I have problem with matrix assembly for linear solver KSP. I run the the >>> same problem with 4 processors multiple time. Using flag >>> -mat_view ::ascii_matlab to view the matrix. The matrix outputs are not the >>> same during each run. Please see the attached file for the comparison >>> between two runs. I checked all the input for each processor as it calls to >>> MatSetValues ( indices of the matrix and values ) and it's consistent all >>> the time. I also checked the allocation information (with flag -info) and >>> it looks fine. Could you give me an advice how to deal with this issue? >>> Thanks ! >>> >>> Best, >>> Khai >>> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Dec 8 17:25:30 2016 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 8 Dec 2016 17:25:30 -0600 Subject: [petsc-users] Running the same problem multiple time, but the matrix is not the same In-Reply-To: References: Message-ID: On Thu, Dec 8, 2016 at 5:10 PM, Khai Pham wrote: > Hi Hong, > > I more concern about the difference between A1 and A2 is in the order of > O(1.e-8) as I run the same code twice. Should it be in the order of machine > epsilon? > If you have 17 orders of magnitude difference between elements, then its easy to cancellation when doing subtraction and have differences on that order due to permuted operations. Matt > Khai > > On Thu, Dec 8, 2016 at 4:54 PM, Hong wrote: > >> Khai : >>> >>> Thanks for your response. The output is not the solution. They are the >>> component of the matrix in matlab format. I would expect the difference in >>> the order of machine precesion. The difference in solutions in two runs is >>> in the order of 1e-5. >>> >> >> If your matrices A1 and A2 have difference of O(1.e-8), then the computed >> solution may differ by >> Condition_number(A) * machine_epsion. Do you know cond(A)? >> >> Please alway send your request to petsc-maint. >> >> Hong >> >>> >>> >>> On Dec 8, 2016 4:01 PM, "Hong" wrote: >>> >>> Khai : >>> Your solution components have values ranging from 1.e+9 to 1.e-8, and >>> the values only differ in the order of 1.e-8, which are within >>> computational error tolerance. >>> I would consider all solutions same within the approximation tolerance. >>> >>> Hong >>> >>> Hello, >>>> >>>> I have problem with matrix assembly for linear solver KSP. I run the >>>> the same problem with 4 processors multiple time. Using flag >>>> -mat_view ::ascii_matlab to view the matrix. The matrix outputs are not the >>>> same during each run. Please see the attached file for the comparison >>>> between two runs. I checked all the input for each processor as it calls to >>>> MatSetValues ( indices of the matrix and values ) and it's consistent all >>>> the time. I also checked the allocation information (with flag -info) and >>>> it looks fine. Could you give me an advice how to deal with this issue? >>>> Thanks ! 
>>>> >>>> Best, >>>> Khai >>>> >>> >>> >>> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From khaipham at utexas.edu Fri Dec 9 04:51:05 2016 From: khaipham at utexas.edu (Khai Pham) Date: Fri, 9 Dec 2016 05:51:05 -0500 Subject: [petsc-users] Running the same problem multiple time, but the matrix is not the same In-Reply-To: References: Message-ID: Thanks, Matthew! I will look at numerical stability more carefully. Khai On Thu, Dec 8, 2016 at 6:25 PM, Matthew Knepley wrote: > On Thu, Dec 8, 2016 at 5:10 PM, Khai Pham wrote: > >> Hi Hong, >> >> I more concern about the difference between A1 and A2 is in the order of >> O(1.e-8) as I run the same code twice. Should it be in the order of machine >> epsilon? >> > > If you have 17 orders of magnitude difference between elements, then its > easy to cancellation when doing subtraction and have > differences on that order due to permuted operations. > > Matt > > >> Khai >> >> On Thu, Dec 8, 2016 at 4:54 PM, Hong wrote: >> >>> Khai : >>>> >>>> Thanks for your response. The output is not the solution. They are the >>>> component of the matrix in matlab format. I would expect the difference in >>>> the order of machine precesion. The difference in solutions in two runs is >>>> in the order of 1e-5. >>>> >>> >>> If your matrices A1 and A2 have difference of O(1.e-8), then the >>> computed solution may differ by >>> Condition_number(A) * machine_epsion. Do you know cond(A)? >>> >>> Please alway send your request to petsc-maint. >>> >>> Hong >>> >>>> >>>> >>>> On Dec 8, 2016 4:01 PM, "Hong" wrote: >>>> >>>> Khai : >>>> Your solution components have values ranging from 1.e+9 to 1.e-8, and >>>> the values only differ in the order of 1.e-8, which are within >>>> computational error tolerance. >>>> I would consider all solutions same within the approximation tolerance. >>>> >>>> Hong >>>> >>>> Hello, >>>>> >>>>> I have problem with matrix assembly for linear solver KSP. I run the >>>>> the same problem with 4 processors multiple time. Using flag >>>>> -mat_view ::ascii_matlab to view the matrix. The matrix outputs are not the >>>>> same during each run. Please see the attached file for the comparison >>>>> between two runs. I checked all the input for each processor as it calls to >>>>> MatSetValues ( indices of the matrix and values ) and it's consistent all >>>>> the time. I also checked the allocation information (with flag -info) and >>>>> it looks fine. Could you give me an advice how to deal with this issue? >>>>> Thanks ! >>>>> >>>>> Best, >>>>> Khai >>>>> >>>> >>>> >>>> >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From friedmud at gmail.com Fri Dec 9 12:45:15 2016 From: friedmud at gmail.com (Derek Gaston) Date: Fri, 09 Dec 2016 18:45:15 +0000 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES Message-ID: Is there a way to tell SNES to simultaneously compute both the residual and the Jacobian in one callback? My code can compute both simultaneously and it will be more efficient (think FE where you can reuse the shape-functions, variables, material properties, etc. 
for both residual and Jacobian computation). In addition, I also have automatic differentiation as an option which _definitely_ computes both efficiently (and actually computes residuals, by themselves, much slower). I was thinking that I may just save off the Jacobian whenever the initial residual computation is asked for by SNES... and then just return that Jacobian when SNES asks for it. This may be a bit dicey though as SNES can ask for residual computations at many different points during the solve. Thanks for any help! Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Dec 9 12:48:59 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 9 Dec 2016 12:48:59 -0600 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: Message-ID: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> Sorry the title in the FAQ is a bit tongue-in-check. http://www.mcs.anl.gov/petsc/documentation/faq.html#functionjacobian > On Dec 9, 2016, at 12:45 PM, Derek Gaston wrote: > > Is there a way to tell SNES to simultaneously compute both the residual and the Jacobian in one callback? > > My code can compute both simultaneously and it will be more efficient (think FE where you can reuse the shape-functions, variables, material properties, etc. for both residual and Jacobian computation). In addition, I also have automatic differentiation as an option which _definitely_ computes both efficiently (and actually computes residuals, by themselves, much slower). > > I was thinking that I may just save off the Jacobian whenever the initial residual computation is asked for by SNES... and then just return that Jacobian when SNES asks for it. This may be a bit dicey though as SNES can ask for residual computations at many different points during the solve. > > Thanks for any help! > > Derek From fdkong.jd at gmail.com Fri Dec 9 13:42:09 2016 From: fdkong.jd at gmail.com (Fande Kong) Date: Fri, 9 Dec 2016 12:42:09 -0700 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: Message-ID: On Fri, Dec 9, 2016 at 11:45 AM, Derek Gaston wrote: > Is there a way to tell SNES to simultaneously compute both the residual > and the Jacobian in one callback? > > My code can compute both simultaneously and it will be more efficient > (think FE where you can reuse the shape-functions, variables, material > properties, etc. for both residual and Jacobian computation). In addition, > I also have automatic differentiation as an option which _definitely_ > computes both efficiently (and actually computes residuals, by themselves, > much slower). > > I was thinking that I may just save off the Jacobian whenever the initial > residual computation is asked for by > It is a reasonable way. But there is no a straight way to determine when you should evaluate Jacobian especially when you use a matrix-free matter. > SNES... and then just return that Jacobian when SNES asks for it. This > may be a bit dicey though as SNES can ask for residual computations at many > different points during the solve. > > Thanks for any help! > > Derek > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From friedmud at gmail.com Fri Dec 9 13:50:02 2016 From: friedmud at gmail.com (Derek Gaston) Date: Fri, 09 Dec 2016 19:50:02 +0000 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> Message-ID: Oh man! Sorry Barry! I swear I looked around before I sent the email. I should have checked the FAQ a little more closely! I can understand the reasoning in the FAQ... but I still wonder if it might not be useful to provide all three options (Function, Jacobian, FunctionJacobian). In my case I could fill in each one to do the right thing. That way PETSc could call the "FunctionJacobian" one when it knew it needed both (by default that could just farm out to the individual calls). But you guys have definitely thought a lot more about this than I have. So, do you still recommend what's suggested in the FAQ? Save off the Jacobian computation during the residual computation and then use that when SNES asks for a Jacobian? In the case of automatic differentiation this could make a pretty huge difference in time... Derek On Fri, Dec 9, 2016 at 1:49 PM Barry Smith wrote: > > Sorry the title in the FAQ is a bit tongue-in-check. > > http://www.mcs.anl.gov/petsc/documentation/faq.html#functionjacobian > > > > On Dec 9, 2016, at 12:45 PM, Derek Gaston wrote: > > > > Is there a way to tell SNES to simultaneously compute both the residual > and the Jacobian in one callback? > > > > My code can compute both simultaneously and it will be more efficient > (think FE where you can reuse the shape-functions, variables, material > properties, etc. for both residual and Jacobian computation). In addition, > I also have automatic differentiation as an option which _definitely_ > computes both efficiently (and actually computes residuals, by themselves, > much slower). > > > > I was thinking that I may just save off the Jacobian whenever the > initial residual computation is asked for by SNES... and then just return > that Jacobian when SNES asks for it. This may be a bit dicey though as > SNES can ask for residual computations at many different points during the > solve. > > > > Thanks for any help! > > > > Derek > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Dec 9 14:10:04 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 9 Dec 2016 14:10:04 -0600 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> Message-ID: <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> > On Dec 9, 2016, at 1:50 PM, Derek Gaston wrote: > > Oh man! Sorry Barry! I swear I looked around before I sent the email. I should have checked the FAQ a little more closely! > > I can understand the reasoning in the FAQ... but I still wonder if it might not be useful to provide all three options (Function, Jacobian, FunctionJacobian). In my case I could fill in each one to do the right thing. That way PETSc could call the "FunctionJacobian" one when it knew it needed both Derek, The code literally never knows if it will need a Jacobian following the function evaluation, yes at the first function evaluation it will need the Jacobian unless the function norm is sufficiently small but after that it is only a question of probabilities (which it can't know) whether it will need the Jacobian. 
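For concreteness, a minimal C sketch of the save-the-Jacobian-during-the-residual pattern being discussed in this thread could look like the following. The AppCtx type and the FormFunctionAndJacobian/FormJacobian names are invented for illustration only, and the sketch assumes the Jacobian values can be written straight into the same matrix that SNES already uses:

    #include <petscsnes.h>

    /* Hypothetical user context: the residual callback fills the Jacobian
       as a side effect, so the Jacobian callback has nothing left to do. */
    typedef struct {
      Mat J;   /* the same matrix that is passed to SNESSetJacobian() */
    } AppCtx;

    static PetscErrorCode FormFunctionAndJacobian(SNES snes, Vec X, Vec F, void *ctx)
    {
      AppCtx         *user = (AppCtx*)ctx;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      /* ... one element loop that accumulates into both F and user->J ... */
      ierr = MatAssemblyBegin(user->J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(user->J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

    static PetscErrorCode FormJacobian(SNES snes, Vec X, Mat J, Mat P, void *ctx)
    {
      /* J already holds the values assembled during the most recent residual
         evaluation; note that X here may not be the point of that evaluation,
         which is the "dicey" case raised earlier in the thread. */
      PetscFunctionReturn(0);
    }

    /* usage sketch:
         SNESSetFunction(snes, r, FormFunctionAndJacobian, &user);
         SNESSetJacobian(snes, user.J, user.J, FormJacobian, &user);
    */
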
> (by default that could just farm out to the individual calls). But you guys have definitely thought a lot more about this than I have. > > So, do you still recommend what's suggested in the FAQ? Save off the Jacobian computation during the residual computation and then use that when SNES asks for a Jacobian? Yes, try it. I think you can get away with simply putting the new Jacobian matrix values into the same Jacobian matrix that is regularly used so there is no need to "stash the values" somewhere else and copy them over later. I'd be interested in hearing how the performance works out, compute always or compute only when requested. Barry > In the case of automatic differentiation this could make a pretty huge difference in time... > > Derek > > On Fri, Dec 9, 2016 at 1:49 PM Barry Smith wrote: > > Sorry the title in the FAQ is a bit tongue-in-check. > > http://www.mcs.anl.gov/petsc/documentation/faq.html#functionjacobian > > > > On Dec 9, 2016, at 12:45 PM, Derek Gaston wrote: > > > > Is there a way to tell SNES to simultaneously compute both the residual and the Jacobian in one callback? > > > > My code can compute both simultaneously and it will be more efficient (think FE where you can reuse the shape-functions, variables, material properties, etc. for both residual and Jacobian computation). In addition, I also have automatic differentiation as an option which _definitely_ computes both efficiently (and actually computes residuals, by themselves, much slower). > > > > I was thinking that I may just save off the Jacobian whenever the initial residual computation is asked for by SNES... and then just return that Jacobian when SNES asks for it. This may be a bit dicey though as SNES can ask for residual computations at many different points during the solve. > > > > Thanks for any help! > > > > Derek > From knepley at gmail.com Fri Dec 9 14:11:22 2016 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 9 Dec 2016 14:11:22 -0600 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> Message-ID: On Fri, Dec 9, 2016 at 2:10 PM, Barry Smith wrote: > > > On Dec 9, 2016, at 1:50 PM, Derek Gaston wrote: > > > > Oh man! Sorry Barry! I swear I looked around before I sent the email. > I should have checked the FAQ a little more closely! > > > > I can understand the reasoning in the FAQ... but I still wonder if it > might not be useful to provide all three options (Function, Jacobian, > FunctionJacobian). In my case I could fill in each one to do the right > thing. That way PETSc could call the "FunctionJacobian" one when it knew > it needed both > > Derek, > > The code literally never knows if it will need a Jacobian following > the function evaluation, yes at the first function evaluation it will need > the Jacobian unless the function norm is sufficiently small but after that > it is only a question of probabilities (which it can't know) whether it > will need the Jacobian. > > > (by default that could just farm out to the individual calls). But you > guys have definitely thought a lot more about this than I have. > > > > So, do you still recommend what's suggested in the FAQ? Save off the > Jacobian computation during the residual computation and then use that when > SNES asks for a Jacobian? > > Yes, try it. 
I think you can get away with simply putting the new > Jacobian matrix values into the same Jacobian matrix that is regularly used > so there is no need to "stash the values" somewhere else and copy them over > later. > > I'd be interested in hearing how the performance works out, compute > always or compute only when requested. Can anyone write down a simple model for a concrete algorithm where this is more efficient? I would like to see the high level reasoning. Thanks, Matt > > Barry > > > In the case of automatic differentiation this could make a pretty huge > difference in time... > > > > Derek > > > > On Fri, Dec 9, 2016 at 1:49 PM Barry Smith wrote: > > > > Sorry the title in the FAQ is a bit tongue-in-check. > > > > http://www.mcs.anl.gov/petsc/documentation/faq.html#functionjacobian > > > > > > > On Dec 9, 2016, at 12:45 PM, Derek Gaston wrote: > > > > > > Is there a way to tell SNES to simultaneously compute both the > residual and the Jacobian in one callback? > > > > > > My code can compute both simultaneously and it will be more efficient > (think FE where you can reuse the shape-functions, variables, material > properties, etc. for both residual and Jacobian computation). In addition, > I also have automatic differentiation as an option which _definitely_ > computes both efficiently (and actually computes residuals, by themselves, > much slower). > > > > > > I was thinking that I may just save off the Jacobian whenever the > initial residual computation is asked for by SNES... and then just return > that Jacobian when SNES asks for it. This may be a bit dicey though as > SNES can ask for residual computations at many different points during the > solve. > > > > > > Thanks for any help! > > > > > > Derek > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From giuntoli1991 at gmail.com Sat Dec 10 16:52:47 2016 From: giuntoli1991 at gmail.com (Guido Giuntoli) Date: Sat, 10 Dec 2016 23:52:47 +0100 Subject: [petsc-users] One sequencial vector seen from other process Message-ID: Hi, I am solving a problem that needs that process 0 (for example) modifies a small vector (let say 1000 components). Then all the process should be able to get all the component of this vector. Which is the best way to do it ? I don't want a distributed vector because only one process will be the responsible of computing the components ! Thanks, Guido. -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sat Dec 10 16:58:51 2016 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 10 Dec 2016 16:58:51 -0600 Subject: [petsc-users] One sequencial vector seen from other process In-Reply-To: References: Message-ID: On Sat, Dec 10, 2016 at 4:52 PM, Guido Giuntoli wrote: > Hi, > > I am solving a problem that needs that process 0 (for example) modifies a > small vector (let say 1000 components). Then all the process should be able > to get all the component of this vector. Which is the best way to do it ? > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Vec/VecScatterCreateToZero.html Thanks, Matt > I don't want a distributed vector because only one process will be the > responsible of computing the components ! > > Thanks, Guido. 
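As a rough illustration of the routine linked above (the variable names here are invented), one usual pattern gathers every entry onto process 0, lets that process modify them, and then scatters the modified values back in reverse; a later reply in this thread also suggests VecScatterCreateToAll() or a plain MPI_Bcast() when every process needs the full set of entries:

    Vec            v, vzero;   /* v: distributed vector; vzero: all entries on rank 0 */
    VecScatter     scatter;
    PetscErrorCode ierr;

    ierr = VecScatterCreateToZero(v, &scatter, &vzero);CHKERRQ(ierr);
    /* gather every entry of v onto process 0 */
    ierr = VecScatterBegin(scatter, v, vzero, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
    ierr = VecScatterEnd(scatter, v, vzero, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
    /* ... process 0 modifies vzero ... */
    /* push the modified values back out to the parallel vector */
    ierr = VecScatterBegin(scatter, vzero, v, INSERT_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);
    ierr = VecScatterEnd(scatter, vzero, v, INSERT_VALUES, SCATTER_REVERSE);CHKERRQ(ierr);
    ierr = VecScatterDestroy(&scatter);CHKERRQ(ierr);
    ierr = VecDestroy(&vzero);CHKERRQ(ierr);
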
> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Dec 10 17:05:24 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 10 Dec 2016 17:05:24 -0600 Subject: [petsc-users] One sequencial vector seen from other process In-Reply-To: References: Message-ID: Does this thing need to be a PETSc vector (that is will you perform VecXXX operations on it) or can it simply be an array. If it can be an array then I would use MPI_Bcast() http://www.mpich.org/static/docs/v3.2/www3/MPI_Bcast.html. Or you can use VecScatterCreateToAll(), note that the input Vec is kind of funny because it must be defined as an VECMPI but has zero local entries on all processes except the 0th process. Barry > On Dec 10, 2016, at 4:52 PM, Guido Giuntoli wrote: > > Hi, > > I am solving a problem that needs that process 0 (for example) modifies a small vector (let say 1000 components). Then all the process should be able to get all the component of this vector. Which is the best way to do it ? > > I don't want a distributed vector because only one process will be the responsible of computing the components ! > > Thanks, Guido. From friedmud at gmail.com Sun Dec 11 14:43:55 2016 From: friedmud at gmail.com (Derek Gaston) Date: Sun, 11 Dec 2016 20:43:55 +0000 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> Message-ID: Thanks Barry - I'll try it and get back to you. Matt: There are lots of cases where this could be a large savings. Here are a few examples: 1. If you have automatic differentiation. With my newest code, just computing a residual computes the Jacobian as a side effect. If you throw away that Jacobian that's obviously a waste. If you compute one residual without computing the Jacobian (which isn't always possible, depending on how you have your automatic differentiation setup) then you still have to compute _another_ residual to compute the Jacobian... so you're directly doing double the residual computations that are necessary. 2. Anytime you have extremely expensive work to do "per element" that would need to be done for both the residual and Jacobian. A few examples: - Extremely complex, heavy shape function evaluation (think super high order with first and second derivatives needing to be computed) - Extremely heavy material property computations that need to happen at each quadrature point. Think: multiscale. Maybe you have an expensive lower-length-scale solve to do at every quadrature point (yes, we've actually done this). - MANY coupled variables (we've run thousands). Each of those variables needs to have value, gradient and (possibly) second derivatives computed at every quadrature point. These values are exactly the same for the residual and Jacobian. These cases could be so extreme that these heavy "element" calculations actually dominate your residual/jacobian assembly time. That would mean that by computing the residual and Jacobian simultaneously you could directly cut your assembly time in _half_. That could be significant for many applications. In my current application that essentially cuts the whole runtime of the application in half (runtime is very much assembly dominated). 
Derek On Fri, Dec 9, 2016 at 3:11 PM Matthew Knepley wrote: > On Fri, Dec 9, 2016 at 2:10 PM, Barry Smith wrote: > > > > On Dec 9, 2016, at 1:50 PM, Derek Gaston wrote: > > > > Oh man! Sorry Barry! I swear I looked around before I sent the email. > I should have checked the FAQ a little more closely! > > > > I can understand the reasoning in the FAQ... but I still wonder if it > might not be useful to provide all three options (Function, Jacobian, > FunctionJacobian). In my case I could fill in each one to do the right > thing. That way PETSc could call the "FunctionJacobian" one when it knew > it needed both > > Derek, > > The code literally never knows if it will need a Jacobian following > the function evaluation, yes at the first function evaluation it will need > the Jacobian unless the function norm is sufficiently small but after that > it is only a question of probabilities (which it can't know) whether it > will need the Jacobian. > > > (by default that could just farm out to the individual calls). But you > guys have definitely thought a lot more about this than I have. > > > > So, do you still recommend what's suggested in the FAQ? Save off the > Jacobian computation during the residual computation and then use that when > SNES asks for a Jacobian? > > Yes, try it. I think you can get away with simply putting the new > Jacobian matrix values into the same Jacobian matrix that is regularly used > so there is no need to "stash the values" somewhere else and copy them over > later. > > I'd be interested in hearing how the performance works out, compute > always or compute only when requested. > > > Can anyone write down a simple model for a concrete algorithm where this > is more efficient? I would like to see the high level reasoning. > > Thanks, > > Matt > > > > Barry > > > In the case of automatic differentiation this could make a pretty huge > difference in time... > > > > Derek > > > > On Fri, Dec 9, 2016 at 1:49 PM Barry Smith wrote: > > > > Sorry the title in the FAQ is a bit tongue-in-check. > > > > http://www.mcs.anl.gov/petsc/documentation/faq.html#functionjacobian > > > > > > > On Dec 9, 2016, at 12:45 PM, Derek Gaston wrote: > > > > > > Is there a way to tell SNES to simultaneously compute both the > residual and the Jacobian in one callback? > > > > > > My code can compute both simultaneously and it will be more efficient > (think FE where you can reuse the shape-functions, variables, material > properties, etc. for both residual and Jacobian computation). In addition, > I also have automatic differentiation as an option which _definitely_ > computes both efficiently (and actually computes residuals, by themselves, > much slower). > > > > > > I was thinking that I may just save off the Jacobian whenever the > initial residual computation is asked for by SNES... and then just return > that Jacobian when SNES asks for it. This may be a bit dicey though as > SNES can ask for residual computations at many different points during the > solve. > > > > > > Thanks for any help! > > > > > > Derek > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Sun Dec 11 15:02:08 2016 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 11 Dec 2016 15:02:08 -0600 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> Message-ID: On Sun, Dec 11, 2016 at 2:43 PM, Derek Gaston wrote: > Thanks Barry - I'll try it and get back to you. > > Matt: There are lots of cases where this could be a large savings. Here > are a few examples: > > 1. If you have automatic differentiation. With my newest code, just > computing a residual computes the Jacobian as a side effect. If you throw > away that Jacobian that's obviously a waste. If you compute one residual > without computing the Jacobian (which isn't always possible, depending on > how you have your automatic differentiation setup) then you still have to > compute _another_ residual to compute the Jacobian... so you're directly > doing double the residual computations that are necessary. > I consider this bad code management more than an analytical case for the technique, but I can see the point. > 2. Anytime you have extremely expensive work to do "per element" that > would need to be done for both the residual and Jacobian. A few examples: > - Extremely complex, heavy shape function evaluation (think super high > order with first and second derivatives needing to be computed) > I honestly do not understand this one. Maybe I do not understand high order since I never use it. If I want to compute an integral, I have the basis functions tabulated. I understand that for high order, you use a tensor product evaluation, but you still tabulate in 1D. What is being recomputed here? > - Extremely heavy material property computations that need to happen at > each quadrature point. Think: multiscale. Maybe you have an expensive > lower-length-scale solve to do at every quadrature point (yes, we've > actually done this). > Yes. I have to think about this one more. > - MANY coupled variables (we've run thousands). Each of those variables > needs to have value, gradient and (possibly) second derivatives computed at > every quadrature point. These values are exactly the same for the residual > and Jacobian. > Ah, so you are saying that the process of field evaluation at the quadrature points is expensive because you have so many fields. It feels very similar to the material case, but I cannot articulate why. I guess my gut says that really expensive material properties, much more expensive than my top level model, should be modeled by something simpler at that level. Same feeling for using thousands of fields. However, science proceeds by brute force, not clever should have beens. I can see in these cases that a combined evaluation would save a lot of time. However, our Newton does not really know whether it needs a Jacobian or residual at the same time. Its hard to make it work in my head. For example, 1) I compute a Jacobian with every residual. This sucks because line search and lots of other things use residuals. 2) I compute a residual with every Jacobian. This sound like it could work because I compute both for the Newton system, but here I am reusing the residual I computed to check the convergence criterion. Can you see a nice way to express Newton for this? Matt > These cases could be so extreme that these heavy "element" calculations > actually dominate your residual/jacobian assembly time. 
That would mean > that by computing the residual and Jacobian simultaneously you could > directly cut your assembly time in _half_. That could be significant for > many applications. In my current application that essentially cuts the > whole runtime of the application in half (runtime is very much assembly > dominated). > > Derek > > On Fri, Dec 9, 2016 at 3:11 PM Matthew Knepley wrote: > >> On Fri, Dec 9, 2016 at 2:10 PM, Barry Smith wrote: >> >> >> > On Dec 9, 2016, at 1:50 PM, Derek Gaston wrote: >> > >> > Oh man! Sorry Barry! I swear I looked around before I sent the >> email. I should have checked the FAQ a little more closely! >> > >> > I can understand the reasoning in the FAQ... but I still wonder if it >> might not be useful to provide all three options (Function, Jacobian, >> FunctionJacobian). In my case I could fill in each one to do the right >> thing. That way PETSc could call the "FunctionJacobian" one when it knew >> it needed both >> >> Derek, >> >> The code literally never knows if it will need a Jacobian following >> the function evaluation, yes at the first function evaluation it will need >> the Jacobian unless the function norm is sufficiently small but after that >> it is only a question of probabilities (which it can't know) whether it >> will need the Jacobian. >> >> > (by default that could just farm out to the individual calls). But you >> guys have definitely thought a lot more about this than I have. >> > >> > So, do you still recommend what's suggested in the FAQ? Save off the >> Jacobian computation during the residual computation and then use that when >> SNES asks for a Jacobian? >> >> Yes, try it. I think you can get away with simply putting the new >> Jacobian matrix values into the same Jacobian matrix that is regularly used >> so there is no need to "stash the values" somewhere else and copy them over >> later. >> >> I'd be interested in hearing how the performance works out, compute >> always or compute only when requested. >> >> >> Can anyone write down a simple model for a concrete algorithm where this >> is more efficient? I would like to see the high level reasoning. >> >> Thanks, >> >> Matt >> >> >> >> Barry >> >> > In the case of automatic differentiation this could make a pretty huge >> difference in time... >> > >> > Derek >> > >> > On Fri, Dec 9, 2016 at 1:49 PM Barry Smith wrote: >> > >> > Sorry the title in the FAQ is a bit tongue-in-check. >> > >> > http://www.mcs.anl.gov/petsc/documentation/faq.html#functionjacobian >> > >> > >> > > On Dec 9, 2016, at 12:45 PM, Derek Gaston wrote: >> > > >> > > Is there a way to tell SNES to simultaneously compute both the >> residual and the Jacobian in one callback? >> > > >> > > My code can compute both simultaneously and it will be more efficient >> (think FE where you can reuse the shape-functions, variables, material >> properties, etc. for both residual and Jacobian computation). In addition, >> I also have automatic differentiation as an option which _definitely_ >> computes both efficiently (and actually computes residuals, by themselves, >> much slower). >> > > >> > > I was thinking that I may just save off the Jacobian whenever the >> initial residual computation is asked for by SNES... and then just return >> that Jacobian when SNES asks for it. This may be a bit dicey though as >> SNES can ask for residual computations at many different points during the >> solve. >> > > >> > > Thanks for any help! 
>> > > >> > > Derek >> > >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From msdrezavand at gmail.com Sun Dec 11 17:19:40 2016 From: msdrezavand at gmail.com (Massoud Rezavand) Date: Mon, 12 Dec 2016 00:19:40 +0100 Subject: [petsc-users] local dimensions Message-ID: Dear PETSc team, What is the difference between the following two methods to get the local dimensions of a square matrix A? If they do the same, which one is recommended? Should I use MPI_Scan after both? 1) PetscInt local_size = PETSC_DECIDE; MatSetSizes(A, local_size, local_size, N, N); 2) PetscInt local_size = PETSC_DECIDE; PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N); MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD); begin_row = end_row - local_size; Thanks in advance, Massoud -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Dec 11 17:30:52 2016 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 11 Dec 2016 17:30:52 -0600 Subject: [petsc-users] local dimensions In-Reply-To: References: Message-ID: On Sun, Dec 11, 2016 at 5:19 PM, Massoud Rezavand wrote: > Dear PETSc team, > > What is the difference between the following two methods to get the local > dimensions of a square matrix A? If they do the same, which one is > recommended? Should I use MPI_Scan after both? > > 1) > > PetscInt local_size = PETSC_DECIDE; > > MatSetSizes(A, local_size, local_size, N, N); > > > 2) > > PetscInt local_size = PETSC_DECIDE; > > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N); > > MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD); > begin_row = end_row - local_size; > They do the same thing. You only need to second if you want that information before matrix setup happens, such as to preallocate. Matt > Thanks in advance, > Massoud > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sun Dec 11 17:35:14 2016 From: jed at jedbrown.org (Jed Brown) Date: Sun, 11 Dec 2016 16:35:14 -0700 Subject: [petsc-users] local dimensions In-Reply-To: References: Message-ID: <87oa0iniod.fsf@jedbrown.org> Massoud Rezavand writes: > Dear PETSc team, > > What is the difference between the following two methods to get the local > dimensions of a square matrix A? If they do the same, which one is > recommended? Should I use MPI_Scan after both? I would typically use 1 because it's fewer calls and automatically uses the correct communicator. You can use MatGetOwnershipRange() instead of manually using MPI_Scan. 
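A short sketch of option 1 combined with MatGetOwnershipRange(), assuming N already holds the global size as in the question:

    Mat            A;
    PetscInt       rstart, rend, local_size;
    PetscErrorCode ierr;

    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);CHKERRQ(ierr);
    ierr = MatSetFromOptions(A);CHKERRQ(ierr);
    ierr = MatSetUp(A);CHKERRQ(ierr);
    /* the parallel layout is now fixed; no manual MPI_Scan is needed */
    ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
    local_size = rend - rstart;   /* rows rstart..rend-1 are owned locally */
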
> 1) > > PetscInt local_size = PETSC_DECIDE; > > MatSetSizes(A, local_size, local_size, N, N); > > > 2) > > PetscInt local_size = PETSC_DECIDE; > > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N); > > MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD); > begin_row = end_row - local_size; > > > Thanks in advance, > Massoud -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 800 bytes Desc: not available URL: From msdrezavand at gmail.com Sun Dec 11 18:04:48 2016 From: msdrezavand at gmail.com (Massoud Rezavand) Date: Mon, 12 Dec 2016 01:04:48 +0100 Subject: [petsc-users] local dimensions In-Reply-To: <87oa0iniod.fsf@jedbrown.org> References: <87oa0iniod.fsf@jedbrown.org> Message-ID: Thank you very much, So, if I am using PetscSplitOwnership() and then MatGetOwnershipRange() to be prepared for preallocation, then MatSetSizes(A, local_size, local_size, N, N) should be called with the calculated local_size from PetscSplitOwnership() ? Thanks, Massoud On Mon, Dec 12, 2016 at 12:35 AM, Jed Brown wrote: > Massoud Rezavand writes: > > > Dear PETSc team, > > > > What is the difference between the following two methods to get the local > > dimensions of a square matrix A? If they do the same, which one is > > recommended? Should I use MPI_Scan after both? > > I would typically use 1 because it's fewer calls and automatically uses > the correct communicator. You can use MatGetOwnershipRange() instead of > manually using MPI_Scan. > > > 1) > > > > PetscInt local_size = PETSC_DECIDE; > > > > MatSetSizes(A, local_size, local_size, N, N); > > > > > > 2) > > > > PetscInt local_size = PETSC_DECIDE; > > > > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N); > > > > MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD); > > begin_row = end_row - local_size; > > > > > > Thanks in advance, > > Massoud > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Dec 11 18:10:56 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 11 Dec 2016 18:10:56 -0600 Subject: [petsc-users] local dimensions In-Reply-To: References: <87oa0iniod.fsf@jedbrown.org> Message-ID: <1790CC29-C7BC-4329-A63F-C970E33B14C7@mcs.anl.gov> > On Dec 11, 2016, at 6:04 PM, Massoud Rezavand wrote: > > Thank you very much, > > So, if I am using PetscSplitOwnership() and then MatGetOwnershipRange() to be prepared for preallocation, then MatSetSizes(A, local_size, local_size, N, N) should be called with the calculated local_size from PetscSplitOwnership() ? Confusion from the two responses. You cannot use MatGetOwnershipRange() for preallocation. Without preallocation: > > PetscInt local_size = PETSC_DECIDE; > > > > MatSetSizes(A, local_size, local_size, N, N); MatGetOwnershipRanges(...) With preallocation: > > > > > > 2) > > > > PetscInt local_size = PETSC_DECIDE; > > > > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N); > > > > MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD); > > begin_row = end_row - local_size; MatMPIAIJSetPreallocation(.....). But note that normally if the matrix comes from a discretization on a grid you would not use either approach above. The parallel layout of the grid would determine the local sizes and you won't not obtain them with PetscSplitOwnership() or local_size = PETSC_DECIDE; Where is your matrix coming from? 
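A compact sketch of the "with preallocation" path outlined above; the per-row estimate nnz_estimate and the global size used here are invented placeholders, and a real code would supply its own counts:

    Mat            A;
    PetscInt       local_size = PETSC_DECIDE, begin_row, end_row;
    PetscInt       N = 1000;            /* global size; arbitrary value for the sketch */
    PetscInt       nnz_estimate = 30;   /* made-up upper bound on nonzeros per row */
    PetscErrorCode ierr;

    ierr = PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N);CHKERRQ(ierr);
    ierr = MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);CHKERRQ(ierr);
    begin_row = end_row - local_size;

    ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
    ierr = MatSetSizes(A, local_size, local_size, N, N);CHKERRQ(ierr);
    ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
    ierr = MatMPIAIJSetPreallocation(A, nnz_estimate, NULL, nnz_estimate, NULL);CHKERRQ(ierr);
    /* rows begin_row .. end_row-1 are owned locally and can now be filled with MatSetValues() */
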
Barry > > > > > > Thanks, > Massoud > > > On Mon, Dec 12, 2016 at 12:35 AM, Jed Brown wrote: > Massoud Rezavand writes: > > > Dear PETSc team, > > > > What is the difference between the following two methods to get the local > > dimensions of a square matrix A? If they do the same, which one is > > recommended? Should I use MPI_Scan after both? > > I would typically use 1 because it's fewer calls and automatically uses > the correct communicator. You can use MatGetOwnershipRange() instead of > manually using MPI_Scan. > > > 1) > > > > PetscInt local_size = PETSC_DECIDE; > > > > MatSetSizes(A, local_size, local_size, N, N); > > > > > > 2) > > > > PetscInt local_size = PETSC_DECIDE; > > > > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N); > > > > MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD); > > begin_row = end_row - local_size; > > > > > > Thanks in advance, > > Massoud > From msdrezavand at gmail.com Sun Dec 11 18:21:32 2016 From: msdrezavand at gmail.com (Massoud Rezavand) Date: Mon, 12 Dec 2016 01:21:32 +0100 Subject: [petsc-users] local dimensions In-Reply-To: <1790CC29-C7BC-4329-A63F-C970E33B14C7@mcs.anl.gov> References: <87oa0iniod.fsf@jedbrown.org> <1790CC29-C7BC-4329-A63F-C970E33B14C7@mcs.anl.gov> Message-ID: Thanks, as I already discussed with you, the matrix is coming from SPH discretization, which is not fixed on a grid and is changing over time. On Mon, Dec 12, 2016 at 1:10 AM, Barry Smith wrote: > > > On Dec 11, 2016, at 6:04 PM, Massoud Rezavand > wrote: > > > > Thank you very much, > > > > So, if I am using PetscSplitOwnership() and then MatGetOwnershipRange() > to be prepared for preallocation, then MatSetSizes(A, local_size, > local_size, N, N) should be called with the calculated local_size from > PetscSplitOwnership() ? > > Confusion from the two responses. You cannot use MatGetOwnershipRange() > for preallocation. > > Without preallocation: > > > > PetscInt local_size = PETSC_DECIDE; > > > > > > MatSetSizes(A, local_size, local_size, N, N); > > MatGetOwnershipRanges(...) > > With preallocation: > > > > > > > > > 2) > > > > > > PetscInt local_size = PETSC_DECIDE; > > > > > > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N); > > > > > > MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, > PETSC_COMM_WORLD); > > > begin_row = end_row - local_size; > > MatMPIAIJSetPreallocation(.....). > > > But note that normally if the matrix comes from a discretization on a grid > you would not use either approach above. The parallel layout of the grid > would determine the local sizes and you won't not obtain them with > PetscSplitOwnership() or local_size = PETSC_DECIDE; > > Where is your matrix coming from? > > Barry > > > > > > > > > > > > > > > Thanks, > > Massoud > > > > > > On Mon, Dec 12, 2016 at 12:35 AM, Jed Brown wrote: > > Massoud Rezavand writes: > > > > > Dear PETSc team, > > > > > > What is the difference between the following two methods to get the > local > > > dimensions of a square matrix A? If they do the same, which one is > > > recommended? Should I use MPI_Scan after both? > > > > I would typically use 1 because it's fewer calls and automatically uses > > the correct communicator. You can use MatGetOwnershipRange() instead of > > manually using MPI_Scan. 
> > > > > 1) > > > > > > PetscInt local_size = PETSC_DECIDE; > > > > > > MatSetSizes(A, local_size, local_size, N, N); > > > > > > > > > 2) > > > > > > PetscInt local_size = PETSC_DECIDE; > > > > > > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N); > > > > > > MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, > PETSC_COMM_WORLD); > > > begin_row = end_row - local_size; > > > > > > > > > Thanks in advance, > > > Massoud > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msdrezavand at gmail.com Sun Dec 11 18:26:41 2016 From: msdrezavand at gmail.com (Massoud Rezavand) Date: Mon, 12 Dec 2016 01:26:41 +0100 Subject: [petsc-users] local dimensions In-Reply-To: References: <87oa0iniod.fsf@jedbrown.org> <1790CC29-C7BC-4329-A63F-C970E33B14C7@mcs.anl.gov> Message-ID: Sorry, I forgot to say that my computational domain is decomposed with a parallel library using MPI, and the particles are defined with a distributed vector. So, the entries of the matrix are basically from that distributed vector. Thanks, Massoud On Mon, Dec 12, 2016 at 1:21 AM, Massoud Rezavand wrote: > Thanks, > > as I already discussed with you, the matrix is coming from SPH > discretization, which is not fixed on a grid and is changing over time. > > On Mon, Dec 12, 2016 at 1:10 AM, Barry Smith wrote: > >> >> > On Dec 11, 2016, at 6:04 PM, Massoud Rezavand >> wrote: >> > >> > Thank you very much, >> > >> > So, if I am using PetscSplitOwnership() and then MatGetOwnershipRange() >> to be prepared for preallocation, then MatSetSizes(A, local_size, >> local_size, N, N) should be called with the calculated local_size from >> PetscSplitOwnership() ? >> >> Confusion from the two responses. You cannot use >> MatGetOwnershipRange() for preallocation. >> >> Without preallocation: >> >> > > PetscInt local_size = PETSC_DECIDE; >> > > >> > > MatSetSizes(A, local_size, local_size, N, N); >> >> MatGetOwnershipRanges(...) >> >> With preallocation: >> > > >> > > >> > > 2) >> > > >> > > PetscInt local_size = PETSC_DECIDE; >> > > >> > > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N); >> > > >> > > MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, >> PETSC_COMM_WORLD); >> > > begin_row = end_row - local_size; >> >> MatMPIAIJSetPreallocation(.....). >> >> >> But note that normally if the matrix comes from a discretization on a >> grid you would not use either approach above. The parallel layout of the >> grid would determine the local sizes and you won't not obtain them with >> PetscSplitOwnership() or local_size = PETSC_DECIDE; >> >> Where is your matrix coming from? >> >> Barry >> >> >> >> > > >> > > >> >> >> > >> > Thanks, >> > Massoud >> > >> > >> > On Mon, Dec 12, 2016 at 12:35 AM, Jed Brown wrote: >> > Massoud Rezavand writes: >> > >> > > Dear PETSc team, >> > > >> > > What is the difference between the following two methods to get the >> local >> > > dimensions of a square matrix A? If they do the same, which one is >> > > recommended? Should I use MPI_Scan after both? >> > >> > I would typically use 1 because it's fewer calls and automatically uses >> > the correct communicator. You can use MatGetOwnershipRange() instead of >> > manually using MPI_Scan. 
>> > >> > > 1) >> > > >> > > PetscInt local_size = PETSC_DECIDE; >> > > >> > > MatSetSizes(A, local_size, local_size, N, N); >> > > >> > > >> > > 2) >> > > >> > > PetscInt local_size = PETSC_DECIDE; >> > > >> > > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N); >> > > >> > > MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, >> PETSC_COMM_WORLD); >> > > begin_row = end_row - local_size; >> > > >> > > >> > > Thanks in advance, >> > > Massoud >> > >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Dec 11 19:05:21 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 11 Dec 2016 19:05:21 -0600 Subject: [petsc-users] local dimensions In-Reply-To: References: <87oa0iniod.fsf@jedbrown.org> <1790CC29-C7BC-4329-A63F-C970E33B14C7@mcs.anl.gov> Message-ID: <2661EB5F-BD8E-4C6D-B804-D3469A65F2BB@mcs.anl.gov> > On Dec 11, 2016, at 6:26 PM, Massoud Rezavand wrote: > > Sorry, I forgot to say that my computational domain is decomposed with a parallel library using MPI, and the particles are defined with a distributed vector. So, the entries of the matrix are basically from that distributed vector. Then likely you should use the same distribution for matrix rows as you do for the vector. Thus you can call VecGetLocalSize() and pass that local size in the preallocation for the matrix, you can also call VecGetOwnershipRange() to get the range of local rows in order to compute the preallocation for the matrix. Barry > > Thanks, > Massoud > > On Mon, Dec 12, 2016 at 1:21 AM, Massoud Rezavand wrote: > Thanks, > > as I already discussed with you, the matrix is coming from SPH discretization, which is not fixed on a grid and is changing over time. > > On Mon, Dec 12, 2016 at 1:10 AM, Barry Smith wrote: > > > On Dec 11, 2016, at 6:04 PM, Massoud Rezavand wrote: > > > > Thank you very much, > > > > So, if I am using PetscSplitOwnership() and then MatGetOwnershipRange() to be prepared for preallocation, then MatSetSizes(A, local_size, local_size, N, N) should be called with the calculated local_size from PetscSplitOwnership() ? > > Confusion from the two responses. You cannot use MatGetOwnershipRange() for preallocation. > > Without preallocation: > > > > PetscInt local_size = PETSC_DECIDE; > > > > > > MatSetSizes(A, local_size, local_size, N, N); > > MatGetOwnershipRanges(...) > > With preallocation: > > > > > > > > > 2) > > > > > > PetscInt local_size = PETSC_DECIDE; > > > > > > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N); > > > > > > MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD); > > > begin_row = end_row - local_size; > > MatMPIAIJSetPreallocation(.....). > > > But note that normally if the matrix comes from a discretization on a grid you would not use either approach above. The parallel layout of the grid would determine the local sizes and you won't not obtain them with PetscSplitOwnership() or local_size = PETSC_DECIDE; > > Where is your matrix coming from? > > Barry > > > > > > > > > > > > > > > Thanks, > > Massoud > > > > > > On Mon, Dec 12, 2016 at 12:35 AM, Jed Brown wrote: > > Massoud Rezavand writes: > > > > > Dear PETSc team, > > > > > > What is the difference between the following two methods to get the local > > > dimensions of a square matrix A? If they do the same, which one is > > > recommended? Should I use MPI_Scan after both? > > > > I would typically use 1 because it's fewer calls and automatically uses > > the correct communicator. 
You can use MatGetOwnershipRange() instead of > > manually using MPI_Scan. > > > > > 1) > > > > > > PetscInt local_size = PETSC_DECIDE; > > > > > > MatSetSizes(A, local_size, local_size, N, N); > > > > > > > > > 2) > > > > > > PetscInt local_size = PETSC_DECIDE; > > > > > > PetscSplitOwnership(PETSC_COMM_WORLD, &local_size, &N); > > > > > > MPI_Scan(&local_size, &end_row, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD); > > > begin_row = end_row - local_size; > > > > > > > > > Thanks in advance, > > > Massoud > > > > > From friedmud at gmail.com Sun Dec 11 22:58:49 2016 From: friedmud at gmail.com (Derek Gaston) Date: Mon, 12 Dec 2016 04:58:49 +0000 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> Message-ID: A quick note: I'm not hugely invested in this idea... I'm just talking it out since I started it. The issues might outweigh potential gains... On Sun, Dec 11, 2016 at 4:02 PM Matthew Knepley wrote: > I consider this bad code management more than an analytical case for the > technique, but I can see the point. > Can you expand on that? Do you believe automatic differentiation in general to be "bad code management"? > - Extremely complex, heavy shape function evaluation (think super high > order with first and second derivatives needing to be computed) > > I honestly do not understand this one. Maybe I do not understand high > order since I never use it. If I want to compute an integral, I have > the basis functions tabulated. I understand that for high order, you use a > tensor product evaluation, but you still tabulate in 1D. What is > being recomputed here? > In unstructured mesh you still have to compute the reference->physical map for each element and map all of the gradients/second derivatives to physical space. This can be quite expensive if you have a lot of shape functions and a lot of quadrature points. Sometimes we even have to do this step twice: once for the un-deformed mesh and once for the deformed mesh... on every element. > > > - Extremely heavy material property computations > > > Yes. I have to think about this one more. > > > - MANY coupled variables (we've run thousands). > > > Ah, so you are saying that the process of field evaluation at the > quadrature points is expensive because you have so many fields. > It feels very similar to the material case, but I cannot articulate why. > It is similar: it's all about how much information you have to recompute at each quadrature point. I was simply giving different scenarios for why you could end up with heavy calculations at each quadrature point that feed into both the Residual and Jacobian calculations. > I guess my gut says that really expensive material properties, > much more expensive than my top level model, should be modeled by > something simpler at that level. Same feeling for using > thousands of fields. > Even if you can find something simpler it's good to be able to solve the expensive one to verify your simpler model. Sometimes the microstructure behavior is complicated enough that it's quite difficult to wrap up in a simpler model or (like you said) it won't be clear if a simpler model is possible without doing the more expensive model first. We really do have models that require thousands (sometimes tens of thousands) of coupled PDEs. Reusing the field evaluations for both the residual and Jacobian could be a large win. > 1) I compute a Jacobian with every residual. 
This sucks because line > search and lots of other things use residuals. > > 2) I compute a residual with every Jacobian. This sound like it could > work because I compute both for the Newton system, but here I > am reusing the residual I computed to check the convergence > criterion. > > Can you see a nice way to express Newton for this? > You can see my (admittedly stupidly simple) Newton code that works this way here: https://github.com/friedmud/MOOSE.jl/blob/master/src/solvers/JuliaDenseNonlinearImplicitSolver.jl#L42 Check the assembly code here to see how both are computed simultaneously: https://github.com/friedmud/MOOSE.jl/blob/master/src/solvers/Assembly.jl#L59 Lack of line search makes it pretty simple. However, even with this simple code I end up wasting one extra Jacobian evaluation once the convergence criteria has been reached. Whether or not that is advantageous depends on the relative tradeoffs of reusable element computations vs Jacobian calculation and how many nonlinear iterations you do (if you're only doing one nonlinear iteration every timestep then you're wasting 50% of your total Jacobian calculation time). For a full featured solver you would definitely also want to have the ability to compute a residual, by itself, when you want... for things like line search. You guys have definitely thought a lot more about this than I have... I'm just spitballing here... but it does seem like having an optional interface for computing a combined residual/Jacobian could save some codes a significant amount of time. This isn't a strong feeling of mine though. I think that for now the way I'll do it is simply to "waste" a residual calculation when I need a Jacobian :-) Derek > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cpraveen at gmail.com Sun Dec 11 23:23:39 2016 From: cpraveen at gmail.com (Praveen C) Date: Mon, 12 Dec 2016 10:53:39 +0530 Subject: [petsc-users] snes options for rough solution Message-ID: Dear all I am solving a nonlinear parabolic problem with snes. The newton update is rather non-smooth and I have convergence problems when using default options. Attached figure shows how solution changes in two time steps. Are there any special algorithms/options in snes that I can use for such problem ? Thanks praveen -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test1.pdf Type: application/pdf Size: 14654 bytes Desc: not available URL: From jed at jedbrown.org Sun Dec 11 23:36:13 2016 From: jed at jedbrown.org (Jed Brown) Date: Sun, 11 Dec 2016 22:36:13 -0700 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> Message-ID: <87eg1dogj6.fsf@jedbrown.org> r<#secure method=pgpmime mode=sign> Derek Gaston writes: > A quick note: I'm not hugely invested in this idea... I'm just talking it > out since I started it. The issues might outweigh potential gains... > > On Sun, Dec 11, 2016 at 4:02 PM Matthew Knepley wrote: > >> I consider this bad code management more than an analytical case for the >> technique, but I can see the point. >> > > Can you expand on that? Do you believe automatic differentiation in > general to be "bad code management"? AD that prevents calling the non-AD function is bad AD. 
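For reference, a rough sketch (not from the original messages) of one way the reuse discussed in this thread can be arranged with the standard, separate SNES callbacks: the expensive per-quadrature-point data is stashed in a user context during residual evaluation and picked up again when the Jacobian is assembled. The AppCtx layout and its field names are illustrative only, not an existing interface:

  #include <petscsnes.h>

  typedef struct {
    PetscInt   nqp;    /* number of cached quadrature points (illustrative) */
    PetscReal *coeff;  /* e.g. material coefficients filled in FormFunction */
  } AppCtx;

  PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
  {
    AppCtx *user = (AppCtx*)ctx;
    /* here: evaluate fields at the quadrature points, assemble the residual
       into f, and store anything expensive (material data, mapped gradients)
       in user->coeff so the Jacobian routine does not recompute it */
    return 0;
  }

  PetscErrorCode FormJacobian(SNES snes, Vec x, Mat J, Mat P, void *ctx)
  {
    AppCtx *user = (AppCtx*)ctx;
    /* here: assemble P from the cached user->coeff; with the default Newton
       line search the residual has normally just been evaluated at this same
       x, so the cache is current */
    MatAssemblyBegin(P, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(P, MAT_FINAL_ASSEMBLY);
    if (J != P) {
      MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);
    }
    return 0;
  }

  /* registration; snes, r, J and user are created elsewhere */
  SNESSetFunction(snes, r, FormFunction, &user);
  SNESSetJacobian(snes, J, J, FormJacobian, &user);

This keeps the residual callable on its own for line searches and convergence checks, while avoiding the duplicated heavy work inside the Jacobian.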
>> - Extremely complex, heavy shape function evaluation (think super high >> order with first and second derivatives needing to be computed) >> >> I honestly do not understand this one. Maybe I do not understand high >> order since I never use it. If I want to compute an integral, I have >> the basis functions tabulated. I understand that for high order, you use a >> tensor product evaluation, but you still tabulate in 1D. What is >> being recomputed here? >> > > In unstructured mesh you still have to compute the reference->physical map > for each element Yes. > and map all of the gradients/second derivatives to physical space. No, you can apply it at quadrature points and it is fewer flops and allows vectorization of the reference gradients over elements. Libmesh is just written to do a matrix-matrix product so that physical gradient matrices can be handed to users. That is convenient, but not optimal. Gradients of physical coordinates and inversion of the 3x3 coordinate Jacobians at each quadrature point are a real cost, though there are a lot of scenarios in which it is less expensive to store them than to recompute them. >> Ah, so you are saying that the process of field evaluation at the >> quadrature points is expensive because you have so many fields. >> It feels very similar to the material case, but I cannot articulate why. > > It is similar: it's all about how much information you have to recompute at > each quadrature point. I was simply giving different scenarios for why you > could end up with heavy calculations at each quadrature point that feed > into both the Residual and Jacobian calculations. Are all the fields in unique function spaces that need different transforms or different quadratures? If not, it seems like the presence of many fields would already amortize the geometric overhead of visiting an element. When the fields are coupled through an expensive material model, I realize that solving that model can be the dominant cost. In the limit of expensive materials, it is possible that the Jacobian and residual have essentially the same cost. I.e., the Jacobian can be updated for free when you compute the residual. Obviously you can just do that and save 2x on residual/Jacobian evaluation. Alternatively, you could cache the effective material coefficient (and its gradient) at each quadrature point during residual evaluation, thus avoiding a re-solve when building the Jacobian. I would recommend that unless you know that line searches are rare. It is far more common that the Jacobian is _much_ more expensive than the residual, in which case the mere possibility of a line search (or of converging) would justify deferring the Jacobian. I think it's much better to make residuals and Jacobians fast independently, then perhaps make the residual do some cheap caching, and worry about second-guessing Newton only as a last resort. That said, I have no doubt that we could demonstrate some benefit to using heuristics and a relative cost model to sometimes compute residuals and Jacobians together. It just isn't that interesting and I think the gains are likely small and will generate lots of bikeshedding about the heuristic. From knepley at gmail.com Mon Dec 12 00:20:59 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 12 Dec 2016 00:20:59 -0600 Subject: [petsc-users] snes options for rough solution In-Reply-To: References: Message-ID: On Sun, Dec 11, 2016 at 11:23 PM, Praveen C wrote: > Dear all > > I am solving a nonlinear parabolic problem with snes. 
The newton update is > rather non-smooth and I have convergence problems when using default > options. > > Attached figure shows how solution changes in two time steps. > It is not clear what you mean here. Newton does not solve timestepping problems. Maybe you are using it with an implicit timestepper, but its still not clear what you mean by non-smooth updates. Did you try with TS? Matt > Are there any special algorithms/options in snes that I can use for such > problem ? > > Thanks > praveen > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From cpraveen at gmail.com Mon Dec 12 00:41:58 2016 From: cpraveen at gmail.com (Praveen C) Date: Mon, 12 Dec 2016 12:11:58 +0530 Subject: [petsc-users] snes options for rough solution In-Reply-To: References: Message-ID: Sorry for being incomplete. I use backward euler and snes. The figure I sent shows solution changes by a large amount in each time step. The major change is at extrema. The change in one time step u^{n+1} - u^n which must come from snes is not a smooth function of x. If I use dt = dx, then snes does not converge even after 50 iterations, here is output (Fenics code) 0 SNES Function norm 1.977638959494e+00 1 SNES Function norm 1.924169835496e+00 2 SNES Function norm 1.922201608879e+00 3 SNES Function norm 1.920237421814e+00 4 SNES Function norm 1.918277062381e+00 5 SNES Function norm 1.916320289472e+00 6 SNES Function norm 1.914366865403e+00 7 SNES Function norm 1.912416585240e+00 8 SNES Function norm 1.910469283868e+00 9 SNES Function norm 1.899960770375e+00 10 SNES Function norm 1.879131065459e+00 11 SNES Function norm 1.857531063656e+00 12 SNES Function norm 1.836809521483e+00 13 SNES Function norm 1.816709863124e+00 14 SNES Function norm 1.797014998190e+00 15 SNES Function norm 1.777737697197e+00 16 SNES Function norm 1.758825541543e+00 17 SNES Function norm 1.740232061718e+00 18 SNES Function norm 1.721929885464e+00 19 SNES Function norm 1.703895519687e+00 20 SNES Function norm 1.686113465512e+00 21 SNES Function norm 1.668566528915e+00 22 SNES Function norm 1.651247832992e+00 23 SNES Function norm 1.634150402758e+00 24 SNES Function norm 1.617265971731e+00 25 SNES Function norm 1.600589248992e+00 26 SNES Function norm 1.584114929900e+00 27 SNES Function norm 1.567836662164e+00 28 SNES Function norm 1.551748332761e+00 29 SNES Function norm 1.535845822400e+00 30 SNES Function norm 1.520125060009e+00 31 SNES Function norm 1.504582049738e+00 32 SNES Function norm 1.489213340181e+00 33 SNES Function norm 1.474015969067e+00 34 SNES Function norm 1.458987020278e+00 35 SNES Function norm 1.444123487154e+00 36 SNES Function norm 1.429422455304e+00 37 SNES Function norm 1.414881258532e+00 38 SNES Function norm 1.400497521029e+00 39 SNES Function norm 1.386269021843e+00 40 SNES Function norm 1.372193573931e+00 41 SNES Function norm 1.358269024537e+00 42 SNES Function norm 1.344493290602e+00 43 SNES Function norm 1.330864378208e+00 44 SNES Function norm 1.317380313100e+00 45 SNES Function norm 1.304039060314e+00 46 SNES Function norm 1.290838655798e+00 47 SNES Function norm 1.277777248327e+00 48 SNES Function norm 1.264853054978e+00 49 SNES Function norm 1.252064339134e+00 50 SNES Function norm 1.239409403964e+00 Function norm is decreasing but very slowly. 
The initial guess for newton, which is just solution from old time, is too far from the solution. With dt = dx^2, it works fine, but this time step is too small. Best praveen On Mon, Dec 12, 2016 at 11:50 AM, Matthew Knepley wrote: > On Sun, Dec 11, 2016 at 11:23 PM, Praveen C wrote: > >> Dear all >> >> I am solving a nonlinear parabolic problem with snes. The newton update >> is rather non-smooth and I have convergence problems when using default >> options. >> >> Attached figure shows how solution changes in two time steps. >> > > It is not clear what you mean here. Newton does not solve timestepping > problems. Maybe you are using it > with an implicit timestepper, but its still not clear what you mean by > non-smooth updates. Did you try with > TS? > > Matt > > >> Are there any special algorithms/options in snes that I can use for such >> problem ? >> >> Thanks >> praveen >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 12 00:54:01 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 12 Dec 2016 00:54:01 -0600 Subject: [petsc-users] snes options for rough solution In-Reply-To: References: Message-ID: On Mon, Dec 12, 2016 at 12:41 AM, Praveen C wrote: > Sorry for being incomplete. I use backward euler and snes. The figure I > sent shows solution changes by a large amount in each time step. The major > change is at extrema. The change in one time step > > u^{n+1} - u^n > > which must come from snes is not a smooth function of x. > > If I use dt = dx, then snes does not converge even after 50 iterations, > here is output (Fenics code) > > 0 SNES Function norm 1.977638959494e+00 > > 1 SNES Function norm 1.924169835496e+00 > > 2 SNES Function norm 1.922201608879e+00 > > 3 SNES Function norm 1.920237421814e+00 > > 4 SNES Function norm 1.918277062381e+00 > > 5 SNES Function norm 1.916320289472e+00 > > 6 SNES Function norm 1.914366865403e+00 > > 7 SNES Function norm 1.912416585240e+00 > > 8 SNES Function norm 1.910469283868e+00 > > 9 SNES Function norm 1.899960770375e+00 > > 10 SNES Function norm 1.879131065459e+00 > > 11 SNES Function norm 1.857531063656e+00 > > 12 SNES Function norm 1.836809521483e+00 > > 13 SNES Function norm 1.816709863124e+00 > > 14 SNES Function norm 1.797014998190e+00 > > 15 SNES Function norm 1.777737697197e+00 > > 16 SNES Function norm 1.758825541543e+00 > > 17 SNES Function norm 1.740232061718e+00 > > 18 SNES Function norm 1.721929885464e+00 > > 19 SNES Function norm 1.703895519687e+00 > > 20 SNES Function norm 1.686113465512e+00 > > 21 SNES Function norm 1.668566528915e+00 > > 22 SNES Function norm 1.651247832992e+00 > > 23 SNES Function norm 1.634150402758e+00 > > 24 SNES Function norm 1.617265971731e+00 > > 25 SNES Function norm 1.600589248992e+00 > > 26 SNES Function norm 1.584114929900e+00 > > 27 SNES Function norm 1.567836662164e+00 > > 28 SNES Function norm 1.551748332761e+00 > > 29 SNES Function norm 1.535845822400e+00 > > 30 SNES Function norm 1.520125060009e+00 > > 31 SNES Function norm 1.504582049738e+00 > > 32 SNES Function norm 1.489213340181e+00 > > 33 SNES Function norm 1.474015969067e+00 > > 34 SNES Function norm 1.458987020278e+00 > > 35 SNES Function norm 1.444123487154e+00 > > 36 SNES Function norm 1.429422455304e+00 > > 37 SNES Function norm 1.414881258532e+00 > > 38 SNES 
Function norm 1.400497521029e+00 > > 39 SNES Function norm 1.386269021843e+00 > > 40 SNES Function norm 1.372193573931e+00 > > 41 SNES Function norm 1.358269024537e+00 > > 42 SNES Function norm 1.344493290602e+00 > > 43 SNES Function norm 1.330864378208e+00 > > 44 SNES Function norm 1.317380313100e+00 > > 45 SNES Function norm 1.304039060314e+00 > > 46 SNES Function norm 1.290838655798e+00 > > 47 SNES Function norm 1.277777248327e+00 > > 48 SNES Function norm 1.264853054978e+00 > > 49 SNES Function norm 1.252064339134e+00 > > 50 SNES Function norm 1.239409403964e+00 > > > Function norm is decreasing but very slowly. The initial guess for newton, > which is just solution from old time, is too far from the solution. > > With dt = dx^2, it works fine, but this time step is too small. > 1) Are you using TS? I am guessing the answer is no. 2) This looks like you have a bug in the Jacobian. A larger timestep just gives you the elliptic operator. Try a) Using -pc_type lu, which removes the linear solve as a variable b) Using -snes_fd on a small problem, which gives the correct Jacobian Matt > Best > praveen > > On Mon, Dec 12, 2016 at 11:50 AM, Matthew Knepley > wrote: > >> On Sun, Dec 11, 2016 at 11:23 PM, Praveen C wrote: >> >>> Dear all >>> >>> I am solving a nonlinear parabolic problem with snes. The newton update >>> is rather non-smooth and I have convergence problems when using default >>> options. >>> >>> Attached figure shows how solution changes in two time steps. >>> >> >> It is not clear what you mean here. Newton does not solve timestepping >> problems. Maybe you are using it >> with an implicit timestepper, but its still not clear what you mean by >> non-smooth updates. Did you try with >> TS? >> >> Matt >> >> >>> Are there any special algorithms/options in snes that I can use for such >>> problem ? >>> >>> Thanks >>> praveen >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From cpraveen at gmail.com Mon Dec 12 00:56:27 2016 From: cpraveen at gmail.com (Praveen C) Date: Mon, 12 Dec 2016 12:26:27 +0530 Subject: [petsc-users] snes options for rough solution In-Reply-To: References: Message-ID: Increasing number of snes iterations, I get convergence. So it is a problem of initial guess being too far from the solution of the nonlinear equation. Solution can be seen here https://github.com/cpraveen/fenics/blob/master/1d/cosmic_ray/cosmic_ray.ipynb Green curve is solution after two time steps. It took about 100 snes iterations in first time step and about 50 in second time step. I use exact Jacobian and direct LU solve. Thanks praveen -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 12 01:04:42 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 12 Dec 2016 01:04:42 -0600 Subject: [petsc-users] snes options for rough solution In-Reply-To: References: Message-ID: On Mon, Dec 12, 2016 at 12:56 AM, Praveen C wrote: > Increasing number of snes iterations, I get convergence. > > So it is a problem of initial guess being too far from the solution of the > nonlinear equation. 
> > Solution can be seen here > > https://github.com/cpraveen/fenics/blob/master/1d/cosmic_ > ray/cosmic_ray.ipynb > > Green curve is solution after two time steps. > > It took about 100 snes iterations in first time step and about 50 in > second time step. > > I use exact Jacobian and direct LU solve. > I do not believe its the correct Jacobian. Did you test it as I asked? Also run with -snes_monitor -ksp_monitor_true_residual -snes_view -snes_converged_reason and then -snes_fd and send all the output Matt > Thanks > praveen > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 12 01:11:10 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 12 Dec 2016 01:11:10 -0600 Subject: [petsc-users] snes options for rough solution In-Reply-To: References: Message-ID: On Mon, Dec 12, 2016 at 1:04 AM, Matthew Knepley wrote: > On Mon, Dec 12, 2016 at 12:56 AM, Praveen C wrote: > >> Increasing number of snes iterations, I get convergence. >> >> So it is a problem of initial guess being too far from the solution of >> the nonlinear equation. >> >> Solution can be seen here >> >> https://github.com/cpraveen/fenics/blob/master/1d/cosmic_ray >> /cosmic_ray.ipynb >> > Also, how is this a parabolic equation? It looks like u/|u'| to me, which does not look parabolic at all. Matt > Green curve is solution after two time steps. >> >> It took about 100 snes iterations in first time step and about 50 in >> second time step. >> >> I use exact Jacobian and direct LU solve. >> > > I do not believe its the correct Jacobian. Did you test it as I asked? > Also run with > > -snes_monitor -ksp_monitor_true_residual -snes_view > -snes_converged_reason > > and then > > -snes_fd > > and send all the output > > Matt > > >> Thanks >> praveen >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.buttari at enseeiht.fr Mon Dec 12 02:00:56 2016 From: alfredo.buttari at enseeiht.fr (Alfredo Buttari) Date: Mon, 12 Dec 2016 09:00:56 +0100 Subject: [petsc-users] [mumps-dev] MUMPS and PARMETIS: Crashes In-Reply-To: References: <3A041F37-6368-4060-81A5-59D0130584C9@mcs.anl.gov> Message-ID: Dear all, sorry for the late reply. The petsc installation went supersmooth and I could easily reproduce the issue. I dumped the matrix generated by petsc and read it back with a standalone mumps tester in order to confirm the bug. This bug has been already reported by another user, was fixed a few months ago and the fix was included in the 5.0.2 release. Could you please check if everything works well with mumps 5.0.2? Kind regards, te MUMPS team On Thu, Oct 20, 2016 at 4:44 PM, Hong wrote: > Alfredo: > It would be much easier to install petsc with mumps, parmetis, and > debugging this case. 
Here is what you can do on a linux machine > (see http://www.mcs.anl.gov/petsc/documentation/installation.html): > > 1) get petsc-release: > git clone -b maint https://bitbucket.org/petsc/petsc petsc > > cd petsc > git pull > export PETSC_DIR=$PWD > export PETSC_ARCH=<> > > 2) configure petsc with additional options > '--download-metis --download-parmetis --download-mumps --download-scalapack > --download-ptscotch' > see http://www.mcs.anl.gov/petsc/documentation/installation.html > > 3) build petsc and test > make > make test > > 4) test ex53.c: > cd $PETSC_DIR/src/ksp/ksp/examples/tutorials > make ex53 > mpiexec -n 4 ./ex53 -n 10000 -ksp_view -mat_mumps_icntl_28 2 > -mat_mumps_icntl_29 2 > > 5) debugging ex53.c: > mpiexec -n 4 ./ex53 -n 10000 -ksp_view -mat_mumps_icntl_28 2 > -mat_mumps_icntl_29 2 -start_in_debugger > > Give it a try. Contact us if you cannot reproduce this case. > > Hong > >> Dear all, >> this may well be due to a bug in the parallel analysis. Do you think you >> can reproduce the problem in a standalone MUMPS program (i.e., without going >> through PETSc) ? that would save a lot of time to track the bug since we do >> not have a PETSc install at hand. Otherwise we'll give it a shot at >> installing petsc and reproducing the problem on our side. >> >> Kind regards, >> the MUMPS team >> >> >> >> On Wed, Oct 19, 2016 at 8:32 PM, Barry Smith wrote: >>> >>> >>> Tim, >>> >>> You can/should also run with valgrind to determine exactly the first >>> point with memory corruption issues. >>> >>> Barry >>> >>> > On Oct 19, 2016, at 11:08 AM, Hong wrote: >>> > >>> > Tim: >>> > With '-mat_mumps_icntl_28 1', i.e., sequential analysis, I can run ex56 >>> > with np=3 or larger np successfully. >>> > >>> > With '-mat_mumps_icntl_28 2', i.e., parallel analysis, I can run up to >>> > np=3. >>> > >>> > For np=4: >>> > mpiexec -n 4 ./ex53 -n 10000 -ksp_view -mat_mumps_icntl_28 2 >>> > -mat_mumps_icntl_29 2 -start_in_debugger >>> > >>> > code crashes inside mumps: >>> > Program received signal SIGSEGV, Segmentation fault. >>> > 0x00007f33d75857cb in >>> > dmumps_parallel_analysis::dmumps_build_scotch_graph ( >>> > id=..., first=..., last=..., ipe=..., >>> > pe=, >>> > work=...) >>> > at dana_aux_par.F:1450 >>> > 1450 MAPTAB(J) = I >>> > (gdb) bt >>> > #0 0x00007f33d75857cb in >>> > dmumps_parallel_analysis::dmumps_build_scotch_graph ( >>> > id=..., first=..., last=..., ipe=..., >>> > pe=, >>> > work=...) >>> > at dana_aux_par.F:1450 >>> > #1 0x00007f33d759207c in dmumps_parallel_analysis::dmumps_parmetis_ord >>> > ( >>> > id=..., ord=..., work=...) at dana_aux_par.F:400 >>> > #2 0x00007f33d7592d14 in dmumps_parallel_analysis::dmumps_do_par_ord >>> > (id=..., >>> > ord=..., work=...) at dana_aux_par.F:351 >>> > #3 0x00007f33d7593aa9 in dmumps_parallel_analysis::dmumps_ana_f_par >>> > (id=..., >>> > work1=..., work2=..., nfsiz=..., >>> > fils=, >>> > frere=>> > 0x0>) >>> > at dana_aux_par.F:98 >>> > #4 0x00007f33d74c622a in dmumps_ana_driver (id=...) at >>> > dana_driver.F:563 >>> > #5 0x00007f33d747706b in dmumps (id=...) 
at dmumps_driver.F:1108 >>> > #6 0x00007f33d74721b5 in dmumps_f77 (job=1, sym=0, par=1, >>> > comm_f77=-2080374779, n=10000, icntl=..., cntl=..., keep=..., >>> > dkeep=..., >>> > keep8=..., nz=0, irn=..., irnhere=0, jcn=..., jcnhere=0, a=..., >>> > ahere=0, >>> > nz_loc=7500, irn_loc=..., irn_lochere=1, jcn_loc=..., >>> > jcn_lochere=1, >>> > a_loc=..., a_lochere=1, nelt=0, eltptr=..., eltptrhere=0, >>> > eltvar=..., >>> > eltvarhere=0, a_elt=..., a_elthere=0, perm_in=..., perm_inhere=0, >>> > rhs=..., >>> > rhshere=0, redrhs=..., redrhshere=0, info=..., rinfo=..., >>> > infog=..., >>> > rinfog=..., deficiency=0, lwk_user=0, size_schur=0, >>> > listvar_schur=..., >>> > ---Type to continue, or q to quit--- >>> > ar_schurhere=0, schur=..., schurhere=0, wk_user=..., wk_userhere=0, >>> > colsca=..., >>> > colscahere=0, rowsca=..., rowscahere=0, instance_number=1, nrhs=1, >>> > lrhs=0, lredrhs=0, >>> > rhs_sparse=..., rhs_sparsehere=0, sol_loc=..., sol_lochere=0, >>> > irhs_sparse=..., >>> > irhs_sparsehere=0, irhs_ptr=..., irhs_ptrhere=0, isol_loc=..., >>> > isol_lochere=0, >>> > nz_rhs=0, lsol_loc=0, schur_mloc=0, schur_nloc=0, schur_lld=0, >>> > mblock=0, nblock=0, >>> > nprow=0, npcol=0, ooc_tmpdir=..., ooc_prefix=..., >>> > write_problem=..., tmpdirlen=20, >>> > prefixlen=20, write_problemlen=20) at dmumps_f77.F:260 >>> > #7 0x00007f33d74709b1 in dmumps_c (mumps_par=0x16126f0) at >>> > mumps_c.c:415 >>> > #8 0x00007f33d68408ca in MatLUFactorSymbolic_AIJMUMPS (F=0x1610280, >>> > A=0x14bafc0, >>> > r=0x160cc30, c=0x1609ed0, info=0x15c6708) >>> > at /scratch/hzhang/petsc/src/mat/impls/aij/mpi/mumps/mumps.c:1487 >>> > >>> > -mat_mumps_icntl_29 = 0 or 1 give same error. >>> > I'm cc'ing this email to mumps developer, who may help to resolve this >>> > matter. >>> > >>> > Hong >>> > >>> > >>> > Hi all, >>> > >>> > I have some problems with PETSc using MUMPS and PARMETIS. >>> > In some cases it works fine, but in some others it doesn't, so I am >>> > trying to understand what is happening. >>> > >>> > I just picked the following example: >>> > >>> > http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex53.c.html >>> > >>> > Now, when I start it with less than 4 processes it works as expected: >>> > mpirun -n 3 ./ex53 -n 10000 -ksp_view -mat_mumps_icntl_28 1 >>> > -mat_mumps_icntl_29 2 >>> > >>> > But with 4 or more processes, it crashes, but only when I am using >>> > Parmetis: >>> > mpirun -n 4 ./ex53 -n 10000 -ksp_view -mat_mumps_icntl_28 1 >>> > -mat_mumps_icntl_29 2 >>> > >>> > Metis worked in every case I tried without any problems. >>> > >>> > I wonder if I am doing something wrong or if this is a general problem >>> > or even a bug? Is Parmetis supposed to work with that example with 4 >>> > processes? >>> > >>> > Thanks a lot and kind regards. >>> > >>> > Volker >>> > >>> > >>> > Here is the error log of process 0: >>> > >>> > Entering DMUMPS 5.0.1 driver with JOB, N = 1 10000 >>> > ================================================= >>> > MUMPS compiled with option -Dmetis >>> > MUMPS compiled with option -Dparmetis >>> > ================================================= >>> > L U Solver for unsymmetric matrices >>> > Type of parallelism: Working host >>> > >>> > ****** ANALYSIS STEP ******** >>> > >>> > ** Max-trans not allowed because matrix is distributed >>> > Using ParMETIS for parallel ordering. 
>>> > [0]PETSC ERROR: >>> > >>> > ------------------------------------------------------------------------ >>> > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>> > probably memory access out of range >>> > [0]PETSC ERROR: Try option -start_in_debugger or >>> > -on_error_attach_debugger >>> > [0]PETSC ERROR: or see >>> > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >>> > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac >>> > OS X to find memory corruption errors >>> > [0]PETSC ERROR: likely location of problem given in stack below >>> > [0]PETSC ERROR: --------------------- Stack Frames >>> > ------------------------------------ >>> > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >>> > available, >>> > [0]PETSC ERROR: INSTEAD the line number of the start of the >>> > function >>> > [0]PETSC ERROR: is given. >>> > [0]PETSC ERROR: [0] MatLUFactorSymbolic_AIJMUMPS line 1395 >>> > >>> > /fsgarwinhpc/133/petsc/sources/petsc-3.7.4a/src/mat/impls/aij/mpi/mumps/mumps.c >>> > [0]PETSC ERROR: [0] MatLUFactorSymbolic line 2927 >>> > /fsgarwinhpc/133/petsc/sources/petsc-3.7.4a/src/mat/interface/matrix.c >>> > [0]PETSC ERROR: [0] PCSetUp_LU line 101 >>> > >>> > /fsgarwinhpc/133/petsc/sources/petsc-3.7.4a/src/ksp/pc/impls/factor/lu/lu.c >>> > [0]PETSC ERROR: [0] PCSetUp line 930 >>> > >>> > /fsgarwinhpc/133/petsc/sources/petsc-3.7.4a/src/ksp/pc/interface/precon.c >>> > [0]PETSC ERROR: [0] KSPSetUp line 305 >>> > >>> > /fsgarwinhpc/133/petsc/sources/petsc-3.7.4a/src/ksp/ksp/interface/itfunc.c >>> > [0]PETSC ERROR: [0] KSPSolve line 563 >>> > >>> > /fsgarwinhpc/133/petsc/sources/petsc-3.7.4a/src/ksp/ksp/interface/itfunc.c >>> > [0]PETSC ERROR: --------------------- Error Message >>> > -------------------------------------------------------------- >>> > [0]PETSC ERROR: Signal received >>> > [0]PETSC ERROR: See >>> > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble >>> > shooting. >>> > [0]PETSC ERROR: Petsc Release Version 3.7.4, Oct, 02, 2016 >>> > [0]PETSC ERROR: ./ex53 on a linux-manni-mumps named manni by 133 Wed >>> > Oct 19 16:39:49 2016 >>> > [0]PETSC ERROR: Configure options --with-cc=mpiicc --with-cxx=mpiicpc >>> > --with-fc=mpiifort --with-shared-libraries=1 >>> > --with-valgrind-dir=~/usr/valgrind/ >>> > >>> > --with-mpi-dir=/home/software/intel/Intel-2016.4/compilers_and_libraries_2016.4.258/linux/mpi >>> > --download-scalapack --download-mumps --download-metis >>> > --download-metis-shared=0 --download-parmetis >>> > --download-parmetis-shared=0 >>> > [0]PETSC ERROR: #1 User provided function() line 0 in unknown file >>> > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 >>> > >>> >> >> >> >> -- >> ----------------------------------------- >> Alfredo Buttari, PhD >> CNRS-IRIT >> 2 rue Camichel, 31071 Toulouse, France >> http://buttari.perso.enseeiht.fr > > -- ----------------------------------------- Alfredo Buttari, PhD CNRS-IRIT 2 rue Camichel, 31071 Toulouse, France http://buttari.perso.enseeiht.fr From cpraveen at gmail.com Mon Dec 12 02:29:29 2016 From: cpraveen at gmail.com (Praveen C) Date: Mon, 12 Dec 2016 13:59:29 +0530 Subject: [petsc-users] snes options for rough solution In-Reply-To: References: Message-ID: Hello Matt I have attached the detailed output. Fenics automatically computes Jacobian, so I think Jacobian should be correct. I am not able to run the Fenics code without giving the Jacobian. I am currently writing a C code where I can test this. 
This equation is bit weird. Its like this u_t = ( K u_x)_x K = u / sqrt(u_x^2 + eps^2) If u > 0, then this is a nonlinear parabolic eqn. Problem is that eps = h (mesh size), so at extrema, it is like u_t = (u/eps)*u_xx and (1/eps) is approximating a delta function. Best praveen On Mon, Dec 12, 2016 at 12:41 PM, Matthew Knepley wrote: > On Mon, Dec 12, 2016 at 1:04 AM, Matthew Knepley > wrote: > >> On Mon, Dec 12, 2016 at 12:56 AM, Praveen C wrote: >> >>> Increasing number of snes iterations, I get convergence. >>> >>> So it is a problem of initial guess being too far from the solution of >>> the nonlinear equation. >>> >>> Solution can be seen here >>> >>> https://github.com/cpraveen/fenics/blob/master/1d/cosmic_ray >>> /cosmic_ray.ipynb >>> >> > Also, how is this a parabolic equation? It looks like u/|u'| to me, which > does not look parabolic at all. > > Matt > > >> Green curve is solution after two time steps. >>> >>> It took about 100 snes iterations in first time step and about 50 in >>> second time step. >>> >>> I use exact Jacobian and direct LU solve. >>> >> >> I do not believe its the correct Jacobian. Did you test it as I asked? >> Also run with >> >> -snes_monitor -ksp_monitor_true_residual -snes_view >> -snes_converged_reason >> >> and then >> >> -snes_fd >> >> and send all the output >> >> Matt >> >> >>> Thanks >>> praveen >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- h = 0.01 dt = 0.01 Solving nonlinear variational problem. 
0 SNES Function norm 1.977638959494e+00 1 SNES Function norm 1.924169835496e+00 2 SNES Function norm 1.922201608879e+00 3 SNES Function norm 1.920237421814e+00 4 SNES Function norm 1.918277062381e+00 5 SNES Function norm 1.916320289472e+00 6 SNES Function norm 1.914366865403e+00 7 SNES Function norm 1.912416585240e+00 8 SNES Function norm 1.910469283868e+00 9 SNES Function norm 1.899960770375e+00 10 SNES Function norm 1.879131065459e+00 11 SNES Function norm 1.857531063656e+00 12 SNES Function norm 1.836809521483e+00 13 SNES Function norm 1.816709863124e+00 14 SNES Function norm 1.797014998190e+00 15 SNES Function norm 1.777737697197e+00 16 SNES Function norm 1.758825541543e+00 17 SNES Function norm 1.740232061718e+00 18 SNES Function norm 1.721929885464e+00 19 SNES Function norm 1.703895519687e+00 20 SNES Function norm 1.686113465512e+00 21 SNES Function norm 1.668566528915e+00 22 SNES Function norm 1.651247832992e+00 23 SNES Function norm 1.634150402758e+00 24 SNES Function norm 1.617265971731e+00 25 SNES Function norm 1.600589248992e+00 26 SNES Function norm 1.584114929900e+00 27 SNES Function norm 1.567836662164e+00 28 SNES Function norm 1.551748332761e+00 29 SNES Function norm 1.535845822400e+00 30 SNES Function norm 1.520125060009e+00 31 SNES Function norm 1.504582049738e+00 32 SNES Function norm 1.489213340181e+00 33 SNES Function norm 1.474015969067e+00 34 SNES Function norm 1.458987020278e+00 35 SNES Function norm 1.444123487154e+00 36 SNES Function norm 1.429422455304e+00 37 SNES Function norm 1.414881258532e+00 38 SNES Function norm 1.400497521029e+00 39 SNES Function norm 1.386269021843e+00 40 SNES Function norm 1.372193573931e+00 41 SNES Function norm 1.358269024537e+00 42 SNES Function norm 1.344493290602e+00 43 SNES Function norm 1.330864378208e+00 44 SNES Function norm 1.317380313100e+00 45 SNES Function norm 1.304039060314e+00 46 SNES Function norm 1.290838655798e+00 47 SNES Function norm 1.277777248327e+00 48 SNES Function norm 1.264853054978e+00 49 SNES Function norm 1.252064339134e+00 50 SNES Function norm 1.239409403964e+00 51 SNES Function norm 1.226886591052e+00 52 SNES Function norm 1.214494281861e+00 53 SNES Function norm 1.202230901070e+00 54 SNES Function norm 1.190094918961e+00 55 SNES Function norm 1.178084849542e+00 56 SNES Function norm 1.166199243366e+00 57 SNES Function norm 1.154436670909e+00 58 SNES Function norm 1.142795705612e+00 59 SNES Function norm 1.131274935077e+00 60 SNES Function norm 1.119872975088e+00 61 SNES Function norm 1.006658785619e+00 62 SNES Function norm 9.965354091038e-01 63 SNES Function norm 9.865153605132e-01 64 SNES Function norm 9.765975240363e-01 65 SNES Function norm 9.667808048914e-01 66 SNES Function norm 9.570641211503e-01 67 SNES Function norm 9.474463955125e-01 68 SNES Function norm 9.508858486555e-01 69 SNES Function norm 8.515799668765e-01 70 SNES Function norm 7.646471697850e-01 71 SNES Function norm 6.867917438028e-01 72 SNES Function norm 6.176405904289e-01 73 SNES Function norm 5.556706584294e-01 74 SNES Function norm 4.999626714271e-01 75 SNES Function norm 4.463679712027e-01 76 SNES Function norm 4.017228526152e-01 77 SNES Function norm 3.616061488103e-01 78 SNES Function norm 3.255374775336e-01 79 SNES Function norm 2.930940243173e-01 80 SNES Function norm 2.639003327720e-01 81 SNES Function norm 2.376217412647e-01 82 SNES Function norm 2.139741463439e-01 83 SNES Function norm 1.926849809517e-01 84 SNES Function norm 1.735153700354e-01 85 SNES Function norm 1.562525692758e-01 86 SNES Function norm 
1.407058781515e-01 87 SNES Function norm 1.267040909513e-01 88 SNES Function norm 1.140933639284e-01 89 SNES Function norm 1.027353600019e-01 90 SNES Function norm 9.250561741668e-02 91 SNES Function norm 8.329210826886e-02 92 SNES Function norm 7.499396232175e-02 93 SNES Function norm 6.752033924276e-02 94 SNES Function norm 6.078943744255e-02 95 SNES Function norm 5.472762212026e-02 96 SNES Function norm 4.926863241002e-02 97 SNES Function norm 4.435282122697e-02 98 SNES Function norm 3.992643547354e-02 99 SNES Function norm 2.442751870615e-02 100 SNES Function norm 9.583107967453e-03 101 SNES Function norm 3.353407360472e-03 102 SNES Function norm 7.792652342405e-04 103 SNES Function norm 6.592466838500e-05 104 SNES Function norm 5.566315624542e-07 105 SNES Function norm 4.129206377245e-11 Nonlinear solve converged due to CONVERGED_FNORM_ABS iterations 105 SNES Object: 1 MPI processes type: vinewtonssls maximum iterations=200, maximum function evaluations=2000 tolerances: relative=1e-09, absolute=1e-10, solution=1e-16 total number of linear solver iterations=105 total number of function evaluations=106 norm schedule ALWAYS SNESLineSearch Object: 1 MPI processes type: bt interpolation: cubic alpha=1.000000e-04 maxstep=1.000000e+08, minlambda=1.000000e-12 tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 maximum iterations=40 KSP Object: 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object: 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 0., needed 0. Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=101, cols=101 package used to perform factorization: mumps total: nonzeros=499, allocated nonzeros=499 total number of mallocs used during MatSetValues calls =0 MUMPS run parameters: SYM (matrix type): 0 PAR (host participation): 1 ICNTL(1) (output for error): 6 ICNTL(2) (output of diagnostic msg): 0 ICNTL(3) (output for global info): 0 ICNTL(4) (level of printing): 0 ICNTL(5) (input mat struct): 0 ICNTL(6) (matrix prescaling): 7 ICNTL(7) (sequentia matrix ordering):7 ICNTL(8) (scalling strategy): 77 ICNTL(10) (max num of refinements): 0 ICNTL(11) (error analysis): 0 ICNTL(12) (efficiency control): 1 ICNTL(13) (efficiency control): 0 ICNTL(14) (percentage of estimated workspace increase): 20 ICNTL(18) (input mat struct): 0 ICNTL(19) (Shur complement info): 0 ICNTL(20) (rhs sparse pattern): 0 ICNTL(21) (solution struct): 0 ICNTL(22) (in-core/out-of-core facility): 0 ICNTL(23) (max size of memory can be allocated locally):0 ICNTL(24) (detection of null pivot rows): 0 ICNTL(25) (computation of a null space basis): 0 ICNTL(26) (Schur options for rhs or solution): 0 ICNTL(27) (experimental parameter): -24 ICNTL(28) (use parallel or sequential ordering): 1 ICNTL(29) (parallel ordering): 0 ICNTL(30) (user-specified set of entries in inv(A)): 0 ICNTL(31) (factors is discarded in the solve phase): 0 ICNTL(33) (compute determinant): 0 CNTL(1) (relative pivoting threshold): 0.01 CNTL(2) (stopping criterion of refinement): 1.49012e-08 CNTL(3) (absolute pivoting threshold): 0. CNTL(4) (value of static pivoting): -1. CNTL(5) (fixation for null pivots): 0. RINFO(1) (local estimated flops for the elimination after analysis): [0] 993. RINFO(2) (local estimated flops for the assembly after factorization): [0] 392. 
RINFO(3) (local estimated flops for the elimination after factorization): [0] 993. INFO(15) (estimated size of (in MB) MUMPS internal data for running numerical factorization): [0] 1 INFO(16) (size of (in MB) MUMPS internal data used during numerical factorization): [0] 1 INFO(23) (num of pivots eliminated on this processor after factorization): [0] 101 RINFOG(1) (global estimated flops for the elimination after analysis): 993. RINFOG(2) (global estimated flops for the assembly after factorization): 392. RINFOG(3) (global estimated flops for the elimination after factorization): 993. (RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): (0.,0.)*(2^0) INFOG(3) (estimated real workspace for factors on all processors after analysis): 499 INFOG(4) (estimated integer workspace for factors on all processors after analysis): 2079 INFOG(5) (estimated maximum front size in the complete tree): 3 INFOG(6) (number of nodes in the complete tree): 99 INFOG(7) (ordering option effectively use after analysis): 2 INFOG(8) (structural symmetry in percent of the permuted matrix after analysis): 100 INFOG(9) (total real/complex workspace to store the matrix factors after factorization): 499 INFOG(10) (total integer space store the matrix factors after factorization): 2079 INFOG(11) (order of largest frontal matrix after factorization): 3 INFOG(12) (number of off-diagonal pivots): 0 INFOG(13) (number of delayed pivots after factorization): 0 INFOG(14) (number of memory compress after factorization): 0 INFOG(15) (number of steps of iterative refinement after solution): 0 INFOG(16) (estimated size (in MB) of all MUMPS internal data for factorization after analysis: value on the most memory consuming processor): 1 INFOG(17) (estimated size of all MUMPS internal data for factorization after analysis: sum over all processors): 1 INFOG(18) (size of all MUMPS internal data allocated during factorization: value on the most memory consuming processor): 1 INFOG(19) (size of all MUMPS internal data allocated during factorization: sum over all processors): 1 INFOG(20) (estimated number of entries in the factors): 499 INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 1 INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 1 INFOG(23) (after analysis: value of ICNTL(6) effectively used): 0 INFOG(24) (after analysis: value of ICNTL(12) effectively used): 1 INFOG(25) (after factorization: number of pivots modified by static pivoting): 0 INFOG(28) (after factorization: number of null pivots encountered): 0 INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 499 INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 0, 0 INFOG(32) (after analysis: type of analysis done): 1 INFOG(33) (value used for ICNTL(8)): 7 INFOG(34) (exponent of the determinant if determinant is requested): 0 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=101, cols=101 total: nonzeros=303, allocated nonzeros=303 total number of mallocs used during MatSetValues calls =0 not using I-node routines PETSc SNES solver converged in 105 iterations with convergence reason CONVERGED_FNORM_ABS. Solving nonlinear variational problem. 
0 SNES Function norm 6.623206717145e-01 1 SNES Function norm 6.556821069554e-01 2 SNES Function norm 6.491099981285e-01 3 SNES Function norm 6.426037008333e-01 4 SNES Function norm 6.361626956675e-01 5 SNES Function norm 6.297864680981e-01 6 SNES Function norm 5.749259243409e-01 7 SNES Function norm 5.691573594996e-01 8 SNES Function norm 5.634472907047e-01 9 SNES Function norm 5.577950372875e-01 10 SNES Function norm 5.521998463416e-01 11 SNES Function norm 5.466609217750e-01 12 SNES Function norm 5.411774340146e-01 13 SNES Function norm 5.357485298680e-01 14 SNES Function norm 5.303733708588e-01 15 SNES Function norm 5.250512178084e-01 16 SNES Function norm 5.197815360359e-01 17 SNES Function norm 5.145640347623e-01 18 SNES Function norm 5.093985766790e-01 19 SNES Function norm 4.646423240600e-01 20 SNES Function norm 4.576766105809e-01 21 SNES Function norm 4.174336257378e-01 22 SNES Function norm 3.742302716857e-01 23 SNES Function norm 3.357283827242e-01 24 SNES Function norm 3.014171088900e-01 25 SNES Function norm 2.704703694106e-01 26 SNES Function norm 2.433222943686e-01 27 SNES Function norm 2.186726322928e-01 28 SNES Function norm 1.965068773466e-01 29 SNES Function norm 1.766108019269e-01 30 SNES Function norm 1.587661144089e-01 31 SNES Function norm 1.427482658793e-01 32 SNES Function norm 1.283590510004e-01 33 SNES Function norm 1.154270364093e-01 34 SNES Function norm 1.038012241207e-01 35 SNES Function norm 9.334326660164e-02 36 SNES Function norm 8.393997747033e-02 37 SNES Function norm 7.549452654457e-02 38 SNES Function norm 6.790730578509e-02 39 SNES Function norm 6.108800733894e-02 40 SNES Function norm 5.495707008678e-02 41 SNES Function norm 4.944392679486e-02 42 SNES Function norm 4.448565604949e-02 43 SNES Function norm 2.815467755193e-02 44 SNES Function norm 1.194728783916e-02 45 SNES Function norm 4.292205565379e-03 46 SNES Function norm 1.102470315477e-03 47 SNES Function norm 1.185868296672e-04 48 SNES Function norm 1.687572026972e-06 49 SNES Function norm 3.521970663626e-10 Nonlinear solve converged due to CONVERGED_FNORM_RELATIVE iterations 49 SNES Object: 1 MPI processes type: vinewtonssls maximum iterations=200, maximum function evaluations=2000 tolerances: relative=1e-09, absolute=1e-10, solution=1e-16 total number of linear solver iterations=49 total number of function evaluations=50 norm schedule ALWAYS SNESLineSearch Object: 1 MPI processes type: bt interpolation: cubic alpha=1.000000e-04 maxstep=1.000000e+08, minlambda=1.000000e-12 tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08 maximum iterations=40 KSP Object: 1 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object: 1 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: nd factor fill ratio given 0., needed 0. 
Factored matrix follows: Mat Object: 1 MPI processes type: seqaij rows=101, cols=101 package used to perform factorization: mumps total: nonzeros=499, allocated nonzeros=499 total number of mallocs used during MatSetValues calls =0 MUMPS run parameters: SYM (matrix type): 0 PAR (host participation): 1 ICNTL(1) (output for error): 6 ICNTL(2) (output of diagnostic msg): 0 ICNTL(3) (output for global info): 0 ICNTL(4) (level of printing): 0 ICNTL(5) (input mat struct): 0 ICNTL(6) (matrix prescaling): 7 ICNTL(7) (sequentia matrix ordering):7 ICNTL(8) (scalling strategy): 77 ICNTL(10) (max num of refinements): 0 ICNTL(11) (error analysis): 0 ICNTL(12) (efficiency control): 1 ICNTL(13) (efficiency control): 0 ICNTL(14) (percentage of estimated workspace increase): 20 ICNTL(18) (input mat struct): 0 ICNTL(19) (Shur complement info): 0 ICNTL(20) (rhs sparse pattern): 0 ICNTL(21) (solution struct): 0 ICNTL(22) (in-core/out-of-core facility): 0 ICNTL(23) (max size of memory can be allocated locally):0 ICNTL(24) (detection of null pivot rows): 0 ICNTL(25) (computation of a null space basis): 0 ICNTL(26) (Schur options for rhs or solution): 0 ICNTL(27) (experimental parameter): -24 ICNTL(28) (use parallel or sequential ordering): 1 ICNTL(29) (parallel ordering): 0 ICNTL(30) (user-specified set of entries in inv(A)): 0 ICNTL(31) (factors is discarded in the solve phase): 0 ICNTL(33) (compute determinant): 0 CNTL(1) (relative pivoting threshold): 0.01 CNTL(2) (stopping criterion of refinement): 1.49012e-08 CNTL(3) (absolute pivoting threshold): 0. CNTL(4) (value of static pivoting): -1. CNTL(5) (fixation for null pivots): 0. RINFO(1) (local estimated flops for the elimination after analysis): [0] 993. RINFO(2) (local estimated flops for the assembly after factorization): [0] 392. RINFO(3) (local estimated flops for the elimination after factorization): [0] 993. INFO(15) (estimated size of (in MB) MUMPS internal data for running numerical factorization): [0] 1 INFO(16) (size of (in MB) MUMPS internal data used during numerical factorization): [0] 1 INFO(23) (num of pivots eliminated on this processor after factorization): [0] 101 RINFOG(1) (global estimated flops for the elimination after analysis): 993. RINFOG(2) (global estimated flops for the assembly after factorization): 392. RINFOG(3) (global estimated flops for the elimination after factorization): 993. 
(RINFOG(12) RINFOG(13))*2^INFOG(34) (determinant): (0.,0.)*(2^0) INFOG(3) (estimated real workspace for factors on all processors after analysis): 499 INFOG(4) (estimated integer workspace for factors on all processors after analysis): 2079 INFOG(5) (estimated maximum front size in the complete tree): 3 INFOG(6) (number of nodes in the complete tree): 99 INFOG(7) (ordering option effectively use after analysis): 2 INFOG(8) (structural symmetry in percent of the permuted matrix after analysis): 100 INFOG(9) (total real/complex workspace to store the matrix factors after factorization): 499 INFOG(10) (total integer space store the matrix factors after factorization): 2079 INFOG(11) (order of largest frontal matrix after factorization): 3 INFOG(12) (number of off-diagonal pivots): 0 INFOG(13) (number of delayed pivots after factorization): 0 INFOG(14) (number of memory compress after factorization): 0 INFOG(15) (number of steps of iterative refinement after solution): 0 INFOG(16) (estimated size (in MB) of all MUMPS internal data for factorization after analysis: value on the most memory consuming processor): 1 INFOG(17) (estimated size of all MUMPS internal data for factorization after analysis: sum over all processors): 1 INFOG(18) (size of all MUMPS internal data allocated during factorization: value on the most memory consuming processor): 1 INFOG(19) (size of all MUMPS internal data allocated during factorization: sum over all processors): 1 INFOG(20) (estimated number of entries in the factors): 499 INFOG(21) (size in MB of memory effectively used during factorization - value on the most memory consuming processor): 1 INFOG(22) (size in MB of memory effectively used during factorization - sum over all processors): 1 INFOG(23) (after analysis: value of ICNTL(6) effectively used): 0 INFOG(24) (after analysis: value of ICNTL(12) effectively used): 1 INFOG(25) (after factorization: number of pivots modified by static pivoting): 0 INFOG(28) (after factorization: number of null pivots encountered): 0 INFOG(29) (after factorization: effective number of entries in the factors (sum over all processors)): 499 INFOG(30, 31) (after solution: size in Mbytes of memory used during solution phase): 0, 0 INFOG(32) (after analysis: type of analysis done): 1 INFOG(33) (value used for ICNTL(8)): 7 INFOG(34) (exponent of the determinant if determinant is requested): 0 linear system matrix = precond matrix: Mat Object: 1 MPI processes type: seqaij rows=101, cols=101 total: nonzeros=303, allocated nonzeros=303 total number of mallocs used during MatSetValues calls =0 not using I-node routines PETSc SNES solver converged in 49 iterations with convergence reason CONVERGED_FNORM_RELATIVE. it, t = 2 0.02 From patrick.sanan at gmail.com Mon Dec 12 03:30:52 2016 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Mon, 12 Dec 2016 10:30:52 +0100 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> Message-ID: On Mon, Dec 12, 2016 at 5:58 AM, Derek Gaston wrote: > A quick note: I'm not hugely invested in this idea... I'm just talking it > out since I started it. The issues might outweigh potential gains... > > On Sun, Dec 11, 2016 at 4:02 PM Matthew Knepley wrote: >> >> I consider this bad code management more than an analytical case for the >> technique, but I can see the point. > > > Can you expand on that? 
Do you believe automatic differentiation in general > to be "bad code management"? > >> >> - Extremely complex, heavy shape function evaluation (think super high >> order with first and second derivatives needing to be computed) >> >> I honestly do not understand this one. Maybe I do not understand high >> order since I never use it. If I want to compute an integral, I have >> the basis functions tabulated. I understand that for high order, you use a >> tensor product evaluation, but you still tabulate in 1D. What is >> being recomputed here? > > > In unstructured mesh you still have to compute the reference->physical map > for each element and map all of the gradients/second derivatives to physical > space. This can be quite expensive if you have a lot of shape functions and > a lot of quadrature points. > > Sometimes we even have to do this step twice: once for the un-deformed mesh > and once for the deformed mesh... on every element. > >> >> >>> >>> - Extremely heavy material property computations >> >> >> Yes. I have to think about this one more. >> >>> >>> - MANY coupled variables (we've run thousands). >> >> >> Ah, so you are saying that the process of field evaluation at the >> quadrature points is expensive because you have so many fields. >> It feels very similar to the material case, but I cannot articulate why. > > > It is similar: it's all about how much information you have to recompute at > each quadrature point. I was simply giving different scenarios for why you > could end up with heavy calculations at each quadrature point that feed into > both the Residual and Jacobian calculations. > >> >> I guess my gut says that really expensive material properties, >> much more expensive than my top level model, should be modeled by >> something simpler at that level. Same feeling for using >> thousands of fields. > > > Even if you can find something simpler it's good to be able to solve the > expensive one to verify your simpler model. Sometimes the microstructure > behavior is complicated enough that it's quite difficult to wrap up in a > simpler model or (like you said) it won't be clear if a simpler model is > possible without doing the more expensive model first. > > We really do have models that require thousands (sometimes tens of > thousands) of coupled PDEs. Reusing the field evaluations for both the > residual and Jacobian could be a large win. > >> >> 1) I compute a Jacobian with every residual. This sucks because line >> search and lots of other things use residuals. >> >> >> 2) I compute a residual with every Jacobian. This sound like it could >> work because I compute both for the Newton system, but here I >> am reusing the residual I computed to check the convergence >> criterion. >> >> Can you see a nice way to express Newton for this? > > > You can see my (admittedly stupidly simple) Newton code that works this way > here: > https://github.com/friedmud/MOOSE.jl/blob/master/src/solvers/JuliaDenseNonlinearImplicitSolver.jl#L42 > > Check the assembly code here to see how both are computed simultaneously: > https://github.com/friedmud/MOOSE.jl/blob/master/src/solvers/Assembly.jl#L59 > > Lack of line search makes it pretty simple. However, even with this simple > code I end up wasting one extra Jacobian evaluation once the convergence > criteria has been reached. 
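The combined assembly described here can be mimicked with the existing SNES callbacks: let the residual routine assemble the Jacobian as a side effect and have the Jacobian routine simply hand back the cached matrix. The sketch below is only an illustration, not MOOSE code and not a PETSc-provided interface; the 1-DOF problem x^2 = 2 stands in for the element loop, the names FormFunctionAndJacobian, ReuseCachedJacobian and AppCtx are invented for the example, and the calls are petsc-3.7-style C.

#include <petscsnes.h>

typedef struct {
  PetscBool jac_is_fresh;  /* did the last residual evaluation also assemble the Jacobian? */
} AppCtx;

/* Residual callback that assembles the Jacobian as a side effect, so the
   (notionally expensive) per-element / per-quadrature-point work is done once. */
static PetscErrorCode FormFunctionAndJacobian(SNES snes, Vec x, Vec f, void *ctx)
{
  AppCtx            *user = (AppCtx*)ctx;
  Mat               J;
  const PetscScalar *xx;
  PetscScalar       ff, jj;
  PetscInt          row = 0;
  PetscErrorCode    ierr;

  PetscFunctionBeginUser;
  ierr = SNESGetJacobian(snes, &J, NULL, NULL, NULL);CHKERRQ(ierr);
  ierr = VecGetArrayRead(x, &xx);CHKERRQ(ierr);
  ff   = xx[0]*xx[0] - 2.0;   /* toy residual of x^2 = 2, standing in for the element loop */
  jj   = 2.0*xx[0];           /* derivative computed from the same intermediate data */
  ierr = VecRestoreArrayRead(x, &xx);CHKERRQ(ierr);
  ierr = VecSetValues(f, 1, &row, &ff, INSERT_VALUES);CHKERRQ(ierr);
  ierr = VecAssemblyBegin(f);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(f);CHKERRQ(ierr);
  ierr = MatSetValues(J, 1, &row, 1, &row, &jj, INSERT_VALUES);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  user->jac_is_fresh = PETSC_TRUE;
  PetscFunctionReturn(0);
}

/* Jacobian callback that only hands back what the last residual call cached.
   This assumes SNES asks for the Jacobian at the same point as the most recent
   residual evaluation, which holds for plain Newton but is not guaranteed in
   general (exactly the caveat raised in this thread). */
static PetscErrorCode ReuseCachedJacobian(SNES snes, Vec x, Mat J, Mat P, void *ctx)
{
  AppCtx *user = (AppCtx*)ctx;

  PetscFunctionBeginUser;
  if (!user->jac_is_fresh) SETERRQ(PETSC_COMM_SELF, PETSC_ERR_PLIB, "No cached Jacobian available");
  user->jac_is_fresh = PETSC_FALSE;  /* stale once SNES moves to the next iterate */
  PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
  SNES           snes;
  Vec            x, f;
  Mat            J;
  AppCtx         user;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  user.jac_is_fresh = PETSC_FALSE;
  ierr = VecCreateSeq(PETSC_COMM_SELF, 1, &x);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &f);CHKERRQ(ierr);
  ierr = MatCreateSeqAIJ(PETSC_COMM_SELF, 1, 1, 1, NULL, &J);CHKERRQ(ierr);
  ierr = VecSet(x, 1.0);CHKERRQ(ierr);

  ierr = SNESCreate(PETSC_COMM_SELF, &snes);CHKERRQ(ierr);
  ierr = SNESSetFunction(snes, f, FormFunctionAndJacobian, &user);CHKERRQ(ierr);
  ierr = SNESSetJacobian(snes, J, J, ReuseCachedJacobian, &user);CHKERRQ(ierr);
  ierr = SNESSetFromOptions(snes);CHKERRQ(ierr);
  ierr = SNESSolve(snes, NULL, x);CHKERRQ(ierr);

  ierr = SNESDestroy(&snes);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&f);CHKERRQ(ierr);
  ierr = MatDestroy(&J);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

The tradeoff is the one discussed in this thread: every residual evaluation, including line-search trial points, now pays for a Jacobian assembly, and the pattern silently assumes the Jacobian is requested at the point of the most recent residual evaluation.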
Whether or not that is advantageous depends on > the relative tradeoffs of reusable element computations vs Jacobian > calculation and how many nonlinear iterations you do (if you're only doing > one nonlinear iteration every timestep then you're wasting 50% of your total > Jacobian calculation time). > > For a full featured solver you would definitely also want to have the > ability to compute a residual, by itself, when you want... for things like > line search. > > You guys have definitely thought a lot more about this than I have... I'm > just spitballing here... but it does seem like having an optional interface > for computing a combined residual/Jacobian could save some codes a > significant amount of time. Maybe a good way to proceed is to have the "both at once computation" explicitly available in PETSc's Newton or other solvers that "know" that they can gain efficiency by computing both at once. That is, the user is always required to register a callback to form the residual, and may optionally register a function to compute (only) the Jacobian and/or another function to compute the combined Jacobian/Residual. Newton can look for the combined function at the times when it would help, and if nothing was registered would fall back to the current technique of calling separate residual and Jacobian routines. This would preserve the benefits of the current approach, allow optimizations of the kind mentioned, and hopefully remain somewhat-maintainable; in particular, this doesn't require the user to be promised anything about the order in which Jacobians and residuals are computed. > This isn't a strong feeling of mine though. I think that for now the way > I'll do it is simply to "waste" a residual calculation when I need a > Jacobian :-) > > Derek From hzhang at mcs.anl.gov Mon Dec 12 09:00:59 2016 From: hzhang at mcs.anl.gov (Hong) Date: Mon, 12 Dec 2016 09:00:59 -0600 Subject: [petsc-users] [mumps-dev] MUMPS and PARMETIS: Crashes In-Reply-To: References: <3A041F37-6368-4060-81A5-59D0130584C9@mcs.anl.gov> Message-ID: Alfredo: Sure, I got the tarball of mumps-5.0.2, and will test it and update petsc-mumps interface. I'll let you know if problem remains. Hong Dear all, > sorry for the late reply. The petsc installation went supersmooth and > I could easily reproduce the issue. I dumped the matrix generated by > petsc and read it back with a standalone mumps tester in order to > confirm the bug. This bug has been already reported by another user, > was fixed a few months ago and the fix was included in the 5.0.2 > release. Could you please check if everything works well with mumps > 5.0.2? > > Kind regards, > te MUMPS team > > > > > On Thu, Oct 20, 2016 at 4:44 PM, Hong wrote: > > Alfredo: > > It would be much easier to install petsc with mumps, parmetis, and > > debugging this case. 
Here is what you can do on a linux machine > > (see http://www.mcs.anl.gov/petsc/documentation/installation.html): > > > > 1) get petsc-release: > > git clone -b maint https://bitbucket.org/petsc/petsc petsc > > > > cd petsc > > git pull > > export PETSC_DIR=$PWD > > export PETSC_ARCH=<> > > > > 2) configure petsc with additional options > > '--download-metis --download-parmetis --download-mumps > --download-scalapack > > --download-ptscotch' > > see http://www.mcs.anl.gov/petsc/documentation/installation.html > > > > 3) build petsc and test > > make > > make test > > > > 4) test ex53.c: > > cd $PETSC_DIR/src/ksp/ksp/examples/tutorials > > make ex53 > > mpiexec -n 4 ./ex53 -n 10000 -ksp_view -mat_mumps_icntl_28 2 > > -mat_mumps_icntl_29 2 > > > > 5) debugging ex53.c: > > mpiexec -n 4 ./ex53 -n 10000 -ksp_view -mat_mumps_icntl_28 2 > > -mat_mumps_icntl_29 2 -start_in_debugger > > > > Give it a try. Contact us if you cannot reproduce this case. > > > > Hong > > > >> Dear all, > >> this may well be due to a bug in the parallel analysis. Do you think you > >> can reproduce the problem in a standalone MUMPS program (i.e., without > going > >> through PETSc) ? that would save a lot of time to track the bug since > we do > >> not have a PETSc install at hand. Otherwise we'll give it a shot at > >> installing petsc and reproducing the problem on our side. > >> > >> Kind regards, > >> the MUMPS team > >> > >> > >> > >> On Wed, Oct 19, 2016 at 8:32 PM, Barry Smith > wrote: > >>> > >>> > >>> Tim, > >>> > >>> You can/should also run with valgrind to determine exactly the > first > >>> point with memory corruption issues. > >>> > >>> Barry > >>> > >>> > On Oct 19, 2016, at 11:08 AM, Hong wrote: > >>> > > >>> > Tim: > >>> > With '-mat_mumps_icntl_28 1', i.e., sequential analysis, I can run > ex56 > >>> > with np=3 or larger np successfully. > >>> > > >>> > With '-mat_mumps_icntl_28 2', i.e., parallel analysis, I can run up > to > >>> > np=3. > >>> > > >>> > For np=4: > >>> > mpiexec -n 4 ./ex53 -n 10000 -ksp_view -mat_mumps_icntl_28 2 > >>> > -mat_mumps_icntl_29 2 -start_in_debugger > >>> > > >>> > code crashes inside mumps: > >>> > Program received signal SIGSEGV, Segmentation fault. > >>> > 0x00007f33d75857cb in > >>> > dmumps_parallel_analysis::dmumps_build_scotch_graph ( > >>> > id=..., first=..., last=..., ipe=..., > >>> > pe=, > >>> > work=...) > >>> > at dana_aux_par.F:1450 > >>> > 1450 MAPTAB(J) = I > >>> > (gdb) bt > >>> > #0 0x00007f33d75857cb in > >>> > dmumps_parallel_analysis::dmumps_build_scotch_graph ( > >>> > id=..., first=..., last=..., ipe=..., > >>> > pe=, > >>> > work=...) > >>> > at dana_aux_par.F:1450 > >>> > #1 0x00007f33d759207c in dmumps_parallel_analysis::dmum > ps_parmetis_ord > >>> > ( > >>> > id=..., ord=..., work=...) at dana_aux_par.F:400 > >>> > #2 0x00007f33d7592d14 in dmumps_parallel_analysis::dmum > ps_do_par_ord > >>> > (id=..., > >>> > ord=..., work=...) at dana_aux_par.F:351 > >>> > #3 0x00007f33d7593aa9 in dmumps_parallel_analysis::dmumps_ana_f_par > >>> > (id=..., > >>> > work1=..., work2=..., nfsiz=..., > >>> > fils= 0x0>, > >>> > frere= >>> > 0x0>) > >>> > at dana_aux_par.F:98 > >>> > #4 0x00007f33d74c622a in dmumps_ana_driver (id=...) at > >>> > dana_driver.F:563 > >>> > #5 0x00007f33d747706b in dmumps (id=...) 
at dmumps_driver.F:1108 > >>> > #6 0x00007f33d74721b5 in dmumps_f77 (job=1, sym=0, par=1, > >>> > comm_f77=-2080374779, n=10000, icntl=..., cntl=..., keep=..., > >>> > dkeep=..., > >>> > keep8=..., nz=0, irn=..., irnhere=0, jcn=..., jcnhere=0, a=..., > >>> > ahere=0, > >>> > nz_loc=7500, irn_loc=..., irn_lochere=1, jcn_loc=..., > >>> > jcn_lochere=1, > >>> > a_loc=..., a_lochere=1, nelt=0, eltptr=..., eltptrhere=0, > >>> > eltvar=..., > >>> > eltvarhere=0, a_elt=..., a_elthere=0, perm_in=..., perm_inhere=0, > >>> > rhs=..., > >>> > rhshere=0, redrhs=..., redrhshere=0, info=..., rinfo=..., > >>> > infog=..., > >>> > rinfog=..., deficiency=0, lwk_user=0, size_schur=0, > >>> > listvar_schur=..., > >>> > ---Type to continue, or q to quit--- > >>> > ar_schurhere=0, schur=..., schurhere=0, wk_user=..., > wk_userhere=0, > >>> > colsca=..., > >>> > colscahere=0, rowsca=..., rowscahere=0, instance_number=1, > nrhs=1, > >>> > lrhs=0, lredrhs=0, > >>> > rhs_sparse=..., rhs_sparsehere=0, sol_loc=..., sol_lochere=0, > >>> > irhs_sparse=..., > >>> > irhs_sparsehere=0, irhs_ptr=..., irhs_ptrhere=0, isol_loc=..., > >>> > isol_lochere=0, > >>> > nz_rhs=0, lsol_loc=0, schur_mloc=0, schur_nloc=0, schur_lld=0, > >>> > mblock=0, nblock=0, > >>> > nprow=0, npcol=0, ooc_tmpdir=..., ooc_prefix=..., > >>> > write_problem=..., tmpdirlen=20, > >>> > prefixlen=20, write_problemlen=20) at dmumps_f77.F:260 > >>> > #7 0x00007f33d74709b1 in dmumps_c (mumps_par=0x16126f0) at > >>> > mumps_c.c:415 > >>> > #8 0x00007f33d68408ca in MatLUFactorSymbolic_AIJMUMPS (F=0x1610280, > >>> > A=0x14bafc0, > >>> > r=0x160cc30, c=0x1609ed0, info=0x15c6708) > >>> > at /scratch/hzhang/petsc/src/mat/impls/aij/mpi/mumps/mumps.c:14 > 87 > >>> > > >>> > -mat_mumps_icntl_29 = 0 or 1 give same error. > >>> > I'm cc'ing this email to mumps developer, who may help to resolve > this > >>> > matter. > >>> > > >>> > Hong > >>> > > >>> > > >>> > Hi all, > >>> > > >>> > I have some problems with PETSc using MUMPS and PARMETIS. > >>> > In some cases it works fine, but in some others it doesn't, so I am > >>> > trying to understand what is happening. > >>> > > >>> > I just picked the following example: > >>> > > >>> > http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examp > les/tutorials/ex53.c.html > >>> > > >>> > Now, when I start it with less than 4 processes it works as expected: > >>> > mpirun -n 3 ./ex53 -n 10000 -ksp_view -mat_mumps_icntl_28 1 > >>> > -mat_mumps_icntl_29 2 > >>> > > >>> > But with 4 or more processes, it crashes, but only when I am using > >>> > Parmetis: > >>> > mpirun -n 4 ./ex53 -n 10000 -ksp_view -mat_mumps_icntl_28 1 > >>> > -mat_mumps_icntl_29 2 > >>> > > >>> > Metis worked in every case I tried without any problems. > >>> > > >>> > I wonder if I am doing something wrong or if this is a general > problem > >>> > or even a bug? Is Parmetis supposed to work with that example with 4 > >>> > processes? > >>> > > >>> > Thanks a lot and kind regards. 
> >>> > > >>> > Volker > >>> > > >>> > > >>> > Here is the error log of process 0: > >>> > > >>> > Entering DMUMPS 5.0.1 driver with JOB, N = 1 10000 > >>> > ================================================= > >>> > MUMPS compiled with option -Dmetis > >>> > MUMPS compiled with option -Dparmetis > >>> > ================================================= > >>> > L U Solver for unsymmetric matrices > >>> > Type of parallelism: Working host > >>> > > >>> > ****** ANALYSIS STEP ******** > >>> > > >>> > ** Max-trans not allowed because matrix is distributed > >>> > Using ParMETIS for parallel ordering. > >>> > [0]PETSC ERROR: > >>> > > >>> > ------------------------------------------------------------ > ------------ > >>> > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > >>> > probably memory access out of range > >>> > [0]PETSC ERROR: Try option -start_in_debugger or > >>> > -on_error_attach_debugger > >>> > [0]PETSC ERROR: or see > >>> > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > >>> > [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple > Mac > >>> > OS X to find memory corruption errors > >>> > [0]PETSC ERROR: likely location of problem given in stack below > >>> > [0]PETSC ERROR: --------------------- Stack Frames > >>> > ------------------------------------ > >>> > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > >>> > available, > >>> > [0]PETSC ERROR: INSTEAD the line number of the start of the > >>> > function > >>> > [0]PETSC ERROR: is given. > >>> > [0]PETSC ERROR: [0] MatLUFactorSymbolic_AIJMUMPS line 1395 > >>> > > >>> > /fsgarwinhpc/133/petsc/sources/petsc-3.7.4a/src/mat/impls/ > aij/mpi/mumps/mumps.c > >>> > [0]PETSC ERROR: [0] MatLUFactorSymbolic line 2927 > >>> > /fsgarwinhpc/133/petsc/sources/petsc-3.7.4a/src/mat/interfac > e/matrix.c > >>> > [0]PETSC ERROR: [0] PCSetUp_LU line 101 > >>> > > >>> > /fsgarwinhpc/133/petsc/sources/petsc-3.7.4a/src/ksp/pc/ > impls/factor/lu/lu.c > >>> > [0]PETSC ERROR: [0] PCSetUp line 930 > >>> > > >>> > /fsgarwinhpc/133/petsc/sources/petsc-3.7.4a/src/ksp/pc/ > interface/precon.c > >>> > [0]PETSC ERROR: [0] KSPSetUp line 305 > >>> > > >>> > /fsgarwinhpc/133/petsc/sources/petsc-3.7.4a/src/ksp/ksp/ > interface/itfunc.c > >>> > [0]PETSC ERROR: [0] KSPSolve line 563 > >>> > > >>> > /fsgarwinhpc/133/petsc/sources/petsc-3.7.4a/src/ksp/ksp/ > interface/itfunc.c > >>> > [0]PETSC ERROR: --------------------- Error Message > >>> > -------------------------------------------------------------- > >>> > [0]PETSC ERROR: Signal received > >>> > [0]PETSC ERROR: See > >>> > http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble > >>> > shooting. 
> >>> > [0]PETSC ERROR: Petsc Release Version 3.7.4, Oct, 02, 2016 > >>> > [0]PETSC ERROR: ./ex53 on a linux-manni-mumps named manni by 133 Wed > >>> > Oct 19 16:39:49 2016 > >>> > [0]PETSC ERROR: Configure options --with-cc=mpiicc --with-cxx=mpiicpc > >>> > --with-fc=mpiifort --with-shared-libraries=1 > >>> > --with-valgrind-dir=~/usr/valgrind/ > >>> > > >>> > --with-mpi-dir=/home/software/intel/Intel-2016.4/compilers_a > nd_libraries_2016.4.258/linux/mpi > >>> > --download-scalapack --download-mumps --download-metis > >>> > --download-metis-shared=0 --download-parmetis > >>> > --download-parmetis-shared=0 > >>> > [0]PETSC ERROR: #1 User provided function() line 0 in unknown file > >>> > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > >>> > > >>> > >> > >> > >> > >> -- > >> ----------------------------------------- > >> Alfredo Buttari, PhD > >> CNRS-IRIT > >> 2 rue Camichel, 31071 Toulouse, France > >> http://buttari.perso.enseeiht.fr > > > > > > > > -- > ----------------------------------------- > Alfredo Buttari, PhD > CNRS-IRIT > 2 rue Camichel, 31071 Toulouse, France > http://buttari.perso.enseeiht.fr > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Mon Dec 12 09:14:30 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 12 Dec 2016 09:14:30 -0600 Subject: [petsc-users] [mumps-dev] MUMPS and PARMETIS: Crashes In-Reply-To: References: <3A041F37-6368-4060-81A5-59D0130584C9@mcs.anl.gov> Message-ID: Hong, petsc master is updated to download/install mumps-5.0.2 Satish On Mon, 12 Dec 2016, Hong wrote: > Alfredo: > Sure, I got the tarball of mumps-5.0.2, and will test it and update > petsc-mumps interface. I'll let you know if problem remains. > > Hong > > Dear all, > > sorry for the late reply. The petsc installation went supersmooth and > > I could easily reproduce the issue. I dumped the matrix generated by > > petsc and read it back with a standalone mumps tester in order to > > confirm the bug. This bug has been already reported by another user, > > was fixed a few months ago and the fix was included in the 5.0.2 > > release. Could you please check if everything works well with mumps > > 5.0.2? > > > > Kind regards, > > te MUMPS team > > > > > > > > > > On Thu, Oct 20, 2016 at 4:44 PM, Hong wrote: > > > Alfredo: > > > It would be much easier to install petsc with mumps, parmetis, and > > > debugging this case. Here is what you can do on a linux machine > > > (see http://www.mcs.anl.gov/petsc/documentation/installation.html): > > > > > > 1) get petsc-release: > > > git clone -b maint https://bitbucket.org/petsc/petsc petsc > > > > > > cd petsc > > > git pull > > > export PETSC_DIR=$PWD > > > export PETSC_ARCH=<> > > > > > > 2) configure petsc with additional options > > > '--download-metis --download-parmetis --download-mumps > > --download-scalapack > > > --download-ptscotch' > > > see http://www.mcs.anl.gov/petsc/documentation/installation.html > > > > > > 3) build petsc and test > > > make > > > make test > > > > > > 4) test ex53.c: > > > cd $PETSC_DIR/src/ksp/ksp/examples/tutorials > > > make ex53 > > > mpiexec -n 4 ./ex53 -n 10000 -ksp_view -mat_mumps_icntl_28 2 > > > -mat_mumps_icntl_29 2 > > > > > > 5) debugging ex53.c: > > > mpiexec -n 4 ./ex53 -n 10000 -ksp_view -mat_mumps_icntl_28 2 > > > -mat_mumps_icntl_29 2 -start_in_debugger > > > > > > Give it a try. Contact us if you cannot reproduce this case. 
From hzhang at mcs.anl.gov Mon Dec 12 09:27:01 2016
From: hzhang at mcs.anl.gov (Hong)
Date: Mon, 12 Dec 2016 09:27:01 -0600
Subject: [petsc-users] [mumps-dev] MUMPS and PARMETIS: Crashes
In-Reply-To: 
References: <3A041F37-6368-4060-81A5-59D0130584C9@mcs.anl.gov> 
Message-ID: 

Alfredo,
mpiexec -n 4 ./ex53 -n 10000 -ksp_view -mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2
works well now. We'll upgrade petsc interface to this mumps-5.0.2.
Hong

On Mon, Dec 12, 2016 at 9:00 AM, Hong wrote:
> Alfredo:
> Sure, I got the tarball of mumps-5.0.2, and will test it and update
> petsc-mumps interface. I'll let you know if problem remains.
>
> Hong
From alfredo.buttari at enseeiht.fr Mon Dec 12 09:28:57 2016
From: alfredo.buttari at enseeiht.fr (Alfredo Buttari)
Date: Mon, 12 Dec 2016 16:28:57 +0100
Subject: [petsc-users] [mumps-dev] MUMPS and PARMETIS: Crashes
In-Reply-To: 
References: <3A041F37-6368-4060-81A5-59D0130584C9@mcs.anl.gov> 
Message-ID: 

Hong,
great news!

Regards,
the MUMPS team

On Mon, Dec 12, 2016 at 4:27 PM, Hong wrote:
> works well now.

-- 
-----------------------------------------
Alfredo Buttari, PhD
CNRS-IRIT
2 rue Camichel, 31071 Toulouse, France
http://buttari.perso.enseeiht.fr

From hzhang at mcs.anl.gov Mon Dec 12 09:50:37 2016
From: hzhang at mcs.anl.gov (Zhang, Hong)
Date: Mon, 12 Dec 2016 15:50:37 +0000
Subject: [petsc-users] [mumps-dev] MUMPS and PARMETIS: Crashes
In-Reply-To: 
References: <3A041F37-6368-4060-81A5-59D0130584C9@mcs.anl.gov> ,
Message-ID: <3D9EEEDDE5F38D4886C1845F99C697F7DFFF7C@DITKA.anl.gov>

I tested master branch, it works fine.
Hong
________________________________________
From: Satish Balay [balay at mcs.anl.gov]
Sent: Monday, December 12, 2016 9:14 AM
To: Zhang, Hong
Cc: Alfredo Buttari; PETSc; mumps-dev
Subject: Re: [petsc-users] [mumps-dev] MUMPS and PARMETIS: Crashes

Hong,
petsc master is updated to download/install mumps-5.0.2
Satish

On Mon, 12 Dec 2016, Hong wrote:
> Alfredo:
> Sure, I got the tarball of mumps-5.0.2, and will test it and update
> petsc-mumps interface. I'll let you know if problem remains.
>
> Hong
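For reference, the MUMPS controls exercised throughout this thread (-mat_mumps_icntl_28 2 -mat_mumps_icntl_29 2, i.e. parallel analysis with a ParMETIS ordering) can also be set from source code through PETSc's MUMPS interface instead of the command line. The following is only a sketch: the 1-D Laplacian is a stand-in for a real operator, the calls use the petsc-3.7 names (PCFactorSetMatSolverPackage and friends), and it follows the same pattern as PETSc's MUMPS tutorial examples.

#include <petscksp.h>

int main(int argc, char **argv)
{
  KSP            ksp;
  PC             pc;
  Mat            A, F;
  Vec            x, b;
  PetscInt       i, n = 1000, rstart, rend, ncols, col[3];
  PetscScalar    v[3];
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  /* A 1-D Laplacian, just to have something to factor */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(A, 3, NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(A, 3, NULL, 2, NULL);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    ncols = 0;
    if (i > 0)   { col[ncols] = i-1; v[ncols++] = -1.0; }
    col[ncols] = i; v[ncols++] = 2.0;
    if (i < n-1) { col[ncols] = i+1; v[ncols++] = -1.0; }
    ierr = MatSetValues(A, 1, &i, ncols, col, v, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatCreateVecs(A, &x, &b);CHKERRQ(ierr);
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverPackage(pc, MATSOLVERMUMPS);CHKERRQ(ierr);  /* petsc-3.7 name */
  ierr = PCFactorSetUpMatSolverPackage(pc);CHKERRQ(ierr);                /* creates the factor matrix F */
  ierr = PCFactorGetMatrix(pc, &F);CHKERRQ(ierr);
  ierr = MatMumpsSetIcntl(F, 28, 2);CHKERRQ(ierr);  /* ICNTL(28)=2: parallel analysis */
  ierr = MatMumpsSetIcntl(F, 29, 2);CHKERRQ(ierr);  /* ICNTL(29)=2: ParMETIS for the ordering */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Running it with mpiexec -n 4 or more exercises the parallel-analysis path that crashed before MUMPS 5.0.2.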
From friedmud at gmail.com Mon Dec 12 10:36:09 2016
From: friedmud at gmail.com (Derek Gaston)
Date: Mon, 12 Dec 2016 16:36:09 +0000
Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES
In-Reply-To: <87eg1dogj6.fsf@jedbrown.org>
References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> <87eg1dogj6.fsf@jedbrown.org>
Message-ID: 

On Mon, Dec 12, 2016 at 12:36 AM Jed Brown wrote: > > Can you expand on that? Do you believe automatic differentiation in > > general to be "bad code management"? > > AD that prevents calling the non-AD function is bad AD. > That's not exactly the problem. Even if you can call an AD and a non-AD residual... you still have to compute two residuals to compute a residual and a Jacobian separately when using AD. It's not the end of the world... but it was something that prompted me to ask the question. > Are all the fields in unique function spaces that need different > transforms or different quadratures? If not, it seems like the presence > of many fields would already amortize the geometric overhead of visiting > an element. > These were two separate examples. Expensive shape functions, by themselves, could warrant computing the residual and Jacobian simultaneously. Also: many variables, by themselves, could do the same. > Alternatively, you could cache the effective material coefficient (and its > gradient) at each quadrature point during residual evaluation, thus > avoiding a re-solve when building the Jacobian. I agree with this. We have some support for it in MOOSE now... and more plans for better support in the future. It's a classic time/space tradeoff. > I would recommend that unless you know that line searches are rare. > BTW: Many (most?) of our most complex applications all _disable_ line search. Over the years we've found line search to be more of a hindrance than a help. We typically prefer using some sort of "physics based" damped Newton. > It is far more common that the Jacobian is _much_ more expensive than > the residual, in which case the mere possibility of a line search (or of > converging) would justify deferring the Jacobian.
I think it's much > better to make residuals and Jacobians fast independently, then perhaps > make the residual do some cheap caching, and worry about > second-guessing Newton only as a last resort. I think I agree. These are definitely "fringe" cases... for most applications Jacobians are _way_ more expensive. > That said, I have no doubt that we could > demonstrate some benefit to using heuristics and a relative cost model > to sometimes compute residuals and Jacobians together. It just isn't > that interesting and I think the gains are likely small and will > generate lots of bikeshedding about the heuristic. > I agree here too. It could be done... but I think you've convinced me that it's not worth the trouble :-) Thanks for the discussion everyone! Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Dec 12 10:43:48 2016 From: jed at jedbrown.org (Jed Brown) Date: Mon, 12 Dec 2016 08:43:48 -0800 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> <87eg1dogj6.fsf@jedbrown.org> Message-ID: <00F0C633-CC8A-49D5-B0A4-4B7C963BC81E@jedbrown.org> I was responding to your statement about an AD system being unable to compute a residual without also computing a Jacobian. Beyond that, it's just about how much intermediate work could be reused. I still don't understand your many field case. The geometric overhead is amortized and the Jacobian contribution typically grows quadratically in the number of fields, so I would expect it to increase the Jacobian:residual cost ratio, not reduce it. On December 12, 2016 8:36:09 AM PST, Derek Gaston wrote: >On Mon, Dec 12, 2016 at 12:36 AM Jed Brown wrote: > >> > Can you expand on that? Do you believe automatic differentiation >in >> > general to be "bad code management"? >> >> AD that prevents calling the non-AD function is bad AD. >> > >That's not exactly the problem. Even if you can call an AD and a >non-AD >residual... you still have to compute two residuals to compute a >residual >and a Jacobian separately when using AD. > >It's not the end of the world... but it was something that prompted me >to >ask the question. > > >> Are all the fields in unique function spaces that need different >> transforms or different quadratures? If not, it seems like the >presence >> of many fields would already amortize the geometric overhead of >visiting >> an element. >> > >These were two separate examples. Expensive shape functions, by >themselves, could warrant computing the residual and Jacobian >simultaneously. Also: many variables, by themselves, could do the >same. > > >> Alternatively, you could cache the effective material coefficient >(and its >> gradient) at each quadrature point during residual evaluation, thus >> avoiding a re-solve when building the Jacobian. > > >I agree with this. We have some support for it in MOOSE now... and >more >plans for better support in the future. It's a classic time/space >tradeoff. > > >> I would recommend that unless you know that line searches are rare. >> > >BTW: Many (most?) of our most complex applications all _disable_ line >search. Over the years we've found line search to be more of a >hindrance >than a help. We typically prefer using some sort of "physics based" >damped >Newton. 
> > >> It is far more common that the Jacobian is _much_ more expensive than >> the residual, in which case the mere possibility of a line search (or >of >> converging) would justify deferring the Jacobian. I think it's much >> better to make residuals and Jacobians fast independently, then >perhaps >> make the residual do some cheap caching, and worry about >> second-guessing Newton only as a last resort. > > >I think I agree. These are definitely "fringe" cases... for most >applications Jacobians are _way_ more expensive. > > >> That said, I have no doubt that we could >> demonstrate some benefit to using heuristics and a relative cost >model >> to sometimes compute residuals and Jacobians together. It just isn't >> that interesting and I think the gains are likely small and will >> generate lots of bikeshedding about the heuristic. >> > >I agree here too. It could be done... but I think you've convinced me >that >it's not worth the trouble :-) > >Thanks for the discussion everyone! > >Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Mon Dec 12 11:33:05 2016 From: jed at jedbrown.org (Jed Brown) Date: Mon, 12 Dec 2016 10:33:05 -0700 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> Message-ID: <878trlnjce.fsf@jedbrown.org> Patrick Sanan writes: > Maybe a good way to proceed is to have the "both at once computation" > explicitly available in PETSc's Newton or other solvers that "know" > that they can gain efficiency by computing both at once. That is, the > user is always required to register a callback to form the residual, > and may optionally register a function to compute (only) the Jacobian > and/or another function to compute the combined Jacobian/Residual. > Newton can look for the combined function at the times when it would > help, and if nothing was registered would fall back to the current > technique of calling separate residual and Jacobian routines. This > would preserve the benefits of the current approach, allow > optimizations of the kind mentioned, and hopefully remain > somewhat-maintainable; in particular, this doesn't require the user to > be promised anything about the order in which Jacobians and residuals > are computed. How would the solver decide? It requires a model for the relative costs of {residual, Jacobian, residual+Jacobian}. Where would the SNES learn that? I'm not saying the above is wrong, but it adds complexity and wouldn't be useful without the cost model. An alternative would be for the user's residual to be able to query the SNES's estimated probability that a Jacobian will be needed. So if the SNES has observed quadratic convergence, it might be quite confident (unless it expects this residual to meet the convergence criteria). The advantage of this scheme is that it applies to partial representations of the Jacobian (like storing the linearization at quadrature points, versus assembling the sparse matrix). -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 800 bytes Desc: not available URL: From bsmith at mcs.anl.gov Mon Dec 12 11:38:05 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 12 Dec 2016 11:38:05 -0600 Subject: [petsc-users] snes options for rough solution In-Reply-To: References: Message-ID: <94A733DE-B106-4D48-9D88-4A60990DBAF4@mcs.anl.gov> Very cool problem. I think you should use TS to solve it. TS has higher order solvers with adaptive time-stepping, likely at the very beginning it will end up with a very small time step but then quickly increase the time-step. Frankly it is goofy to use backward Euler with fixed time step on this problem; you'll find that TS is no harder to use than SNES, you just need to use TSSetRHSFunction() and TSSetRHSJacobian() and select an implicit solver. Barry > On Dec 12, 2016, at 2:29 AM, Praveen C wrote: > > Hello Matt > > I have attached the detailed output. > > Fenics automatically computes Jacobian, so I think Jacobian should be correct. I am not able to run the Fenics code without giving the Jacobian. I am currently writing a C code where I can test this. > > This equation is bit weird. Its like this > > u_t = ( K u_x)_x > > K = u / sqrt(u_x^2 + eps^2) > > If u > 0, then this is a nonlinear parabolic eqn. Problem is that eps = h (mesh size), so at extrema, it is like > > u_t = (u/eps)*u_xx > > and (1/eps) is approximating a delta function. > > Best > praveen > > On Mon, Dec 12, 2016 at 12:41 PM, Matthew Knepley wrote: > On Mon, Dec 12, 2016 at 1:04 AM, Matthew Knepley wrote: > On Mon, Dec 12, 2016 at 12:56 AM, Praveen C wrote: > Increasing number of snes iterations, I get convergence. > > So it is a problem of initial guess being too far from the solution of the nonlinear equation. > > Solution can be seen here > > https://github.com/cpraveen/fenics/blob/master/1d/cosmic_ray/cosmic_ray.ipynb > > Also, how is this a parabolic equation? It looks like u/|u'| to me, which does not look parabolic at all. > > Matt > > Green curve is solution after two time steps. > > It took about 100 snes iterations in first time step and about 50 in second time step. > > I use exact Jacobian and direct LU solve. > > I do not believe its the correct Jacobian. Did you test it as I asked? Also run with > > -snes_monitor -ksp_monitor_true_residual -snes_view -snes_converged_reason > > and then > > -snes_fd > > and send all the output > > Matt > > Thanks > praveen > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > From friedmud at gmail.com Mon Dec 12 11:40:16 2016 From: friedmud at gmail.com (Derek Gaston) Date: Mon, 12 Dec 2016 17:40:16 +0000 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: <00F0C633-CC8A-49D5-B0A4-4B7C963BC81E@jedbrown.org> References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> <87eg1dogj6.fsf@jedbrown.org> <00F0C633-CC8A-49D5-B0A4-4B7C963BC81E@jedbrown.org> Message-ID: On Mon, Dec 12, 2016 at 11:43 AM Jed Brown wrote: > I still don't understand your many field case. 
The geometric overhead is > amortized and the Jacobian contribution typically grows quadratically in > the number of fields, so I would expect it to increase the > Jacobian:residual cost ratio, not reduce it. > It depends on how things are coupled... but I do agree that with many fields the Jacobian tends to get even more expensive to produce. However, the point is just about reusing an expensive computation: the field values on every element. Regardless of how expensive the Jacobian/Residual are to compute... cutting the time in half for computing the values of many fields could be worthwhile. But I suppose your point is that if you are blindly computing Jacobians with residuals... then you're going to waste a few Jacobians (at least one every timestep for instance). If that Jacobian is incredibly expensive to produce then it's wiped out any gains made in not recomputing variable values. I can see that. How about this: what if SNES only called this optional R+J interface when it knew _for sure_ it needed both? For instance... for the first linear solve. But anyway - the gains seem pretty small and not worth it. I'm pretty convinced this idea is foolhardy ;-) Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Dec 12 11:45:58 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 12 Dec 2016 11:45:58 -0600 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> Message-ID: > On Dec 11, 2016, at 10:58 PM, Derek Gaston wrote: > > A quick note: I'm not hugely invested in this idea... I'm just talking it out since I started it. The issues might outweigh potential gains... > > On Sun, Dec 11, 2016 at 4:02 PM Matthew Knepley wrote: > I consider this bad code management more than an analytical case for the technique, but I can see the point. > > Can you expand on that? Do you believe automatic differentiation in general to be "bad code management"? > > - Extremely complex, heavy shape function evaluation (think super high order with first and second derivatives needing to be computed) > > I honestly do not understand this one. Maybe I do not understand high order since I never use it. If I want to compute an integral, I have > the basis functions tabulated. I understand that for high order, you use a tensor product evaluation, but you still tabulate in 1D. What is > being recomputed here? > > In unstructured mesh you still have to compute the reference->physical map for each element and map all of the gradients/second derivatives to physical space. This can be quite expensive if you have a lot of shape functions and a lot of quadrature points. > > Sometimes we even have to do this step twice: once for the un-deformed mesh and once for the deformed mesh... on every element. > > > - Extremely heavy material property computations > > Yes. I have to think about this one more. > > - MANY coupled variables (we've run thousands). > > Ah, so you are saying that the process of field evaluation at the quadrature points is expensive because you have so many fields. > It feels very similar to the material case, but I cannot articulate why. > > It is similar: it's all about how much information you have to recompute at each quadrature point. 
I was simply giving different scenarios for why you could end up with heavy calculations at each quadrature point that feed into both the Residual and Jacobian calculations. > > I guess my gut says that really expensive material properties, > much more expensive than my top level model, should be modeled by something simpler at that level. Same feeling for using > thousands of fields. > > Even if you can find something simpler it's good to be able to solve the expensive one to verify your simpler model. Sometimes the microstructure behavior is complicated enough that it's quite difficult to wrap up in a simpler model or (like you said) it won't be clear if a simpler model is possible without doing the more expensive model first. > > We really do have models that require thousands (sometimes tens of thousands) of coupled PDEs. Reusing the field evaluations for both the residual and Jacobian could be a large win. > > 1) I compute a Jacobian with every residual. This sucks because line search and lots of other things use residuals. > > 2) I compute a residual with every Jacobian. This sound like it could work because I compute both for the Newton system, but here I > am reusing the residual I computed to check the convergence criterion. > > Can you see a nice way to express Newton for this? > > You can see my (admittedly stupidly simple) Newton code that works this way here: https://github.com/friedmud/MOOSE.jl/blob/master/src/solvers/JuliaDenseNonlinearImplicitSolver.jl#L42 > > Check the assembly code here to see how both are computed simultaneously: https://github.com/friedmud/MOOSE.jl/blob/master/src/solvers/Assembly.jl#L59 > > Lack of line search makes it pretty simple. However, even with this simple code I end up wasting one extra Jacobian evaluation once the convergence criteria has been reached. Whether or not that is advantageous depends on the relative tradeoffs of reusable element computations vs Jacobian calculation and how many nonlinear iterations you do (if you're only doing one nonlinear iteration every timestep then you're wasting 50% of your total Jacobian calculation time). > > For a full featured solver you would definitely also want to have the ability to compute a residual, by itself, when you want... for things like line search. > > You guys have definitely thought a lot more about this than I have... I'm just spitballing here... but it does seem like having an optional interface for computing a combined residual/Jacobian could save some codes a significant amount of time. > > This isn't a strong feeling of mine though. I think that for now the way I'll do it is simply to "waste" a residual calculation when I need a Jacobian :-) Derek, I don't understand this long conversation. You can compute the Jacobian with the Function evaluation. There is no need for a special interface, just compute it! Then provide a dummy FormJacobian that does nothing. Please let us know if this does not work. (Yes you will need to keep the Jacobian matrix inside the FormFunction context so you have access to it but that is no big deal.) 
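A minimal sketch of that pattern, with the Jacobian matrix kept in a user-defined context (the struct and function names here are only illustrative, not from any particular code):

#include <petscsnes.h>

/* Illustrative context: FormFunction assembles both the residual and the
   Jacobian, so the Jacobian matrix must be reachable from inside it. */
typedef struct {
  Mat J;
} AppCtx;

PetscErrorCode FormFunction(SNES snes,Vec x,Vec f,void *ctx)
{
  AppCtx         *user = (AppCtx*)ctx;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* ... one element loop: accumulate residual entries into f and, from the
     same per-element data, insert Jacobian entries into user->J ... */
  ierr = MatAssemblyBegin(user->J,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(user->J,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* Dummy Jacobian callback: all the work was already done in FormFunction. */
PetscErrorCode FormJacobianNoOp(SNES snes,Vec x,Mat Amat,Mat Pmat,void *ctx)
{
  PetscFunctionBeginUser;
  PetscFunctionReturn(0);
}

/* In main(), after creating snes, the residual vector r, and user.J,
   register both callbacks with the same Mat that lives in the context: */
SNESSetFunction(snes,r,FormFunction,&user);
SNESSetJacobian(snes,user.J,user.J,FormJacobianNoOp,&user);

Keep in mind that with this layout every residual evaluation (for instance inside a line search) also pays for a Jacobian assembly, which is exactly the trade-off discussed above.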
Barry > > Derek From patrick.sanan at gmail.com Mon Dec 12 12:10:21 2016 From: patrick.sanan at gmail.com (Patrick Sanan) Date: Mon, 12 Dec 2016 19:10:21 +0100 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: <878trlnjce.fsf@jedbrown.org> References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> <878trlnjce.fsf@jedbrown.org> Message-ID: On Mon, Dec 12, 2016 at 6:33 PM, Jed Brown wrote: > Patrick Sanan writes: >> Maybe a good way to proceed is to have the "both at once computation" >> explicitly available in PETSc's Newton or other solvers that "know" >> that they can gain efficiency by computing both at once. That is, the >> user is always required to register a callback to form the residual, >> and may optionally register a function to compute (only) the Jacobian >> and/or another function to compute the combined Jacobian/Residual. >> Newton can look for the combined function at the times when it would >> help, and if nothing was registered would fall back to the current >> technique of calling separate residual and Jacobian routines. This >> would preserve the benefits of the current approach, allow >> optimizations of the kind mentioned, and hopefully remain >> somewhat-maintainable; in particular, this doesn't require the user to >> be promised anything about the order in which Jacobians and residuals >> are computed. > > How would the solver decide? It requires a model for the relative costs > of {residual, Jacobian, residual+Jacobian}. Where would the SNES learn > that? > > I'm not saying the above is wrong, but it adds complexity and wouldn't > be useful without the cost model. An alternative would be for the > user's residual to be able to query the SNES's estimated probability > that a Jacobian will be needed. So if the SNES has observed quadratic > convergence, it might be quite confident (unless it expects this > residual to meet the convergence criteria). The advantage of this > scheme is that it applies to partial representations of the Jacobian > (like storing the linearization at quadrature points, versus assembling > the sparse matrix). I did have an error in my thinking, which is that I forgot about the fact that you typically have the residual already at the point where the Jacobian is computed, so would would have to be using a method (like the cartoon in my mind when I wrote the above) that didn't involve a linesearch (as Derek also described) or as you suggest, somehow had a guess about when the linesearch would terminate. From friedmud at gmail.com Mon Dec 12 12:11:28 2016 From: friedmud at gmail.com (Derek Gaston) Date: Mon, 12 Dec 2016 18:11:28 +0000 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> Message-ID: I agree Barry - we got a bit sidetracked here. We were debating the relative merits of doing that (computing a Jacobian with every residual) and musing a bit about the possibility of whether or not SNES could be enhanced to make this more efficient. I think I've been convinced that it's not worth pursuing the SNES enhancement... the upside is limited and the complexity is not worth it. Derek On Mon, Dec 12, 2016 at 12:46 PM Barry Smith wrote: > > > On Dec 11, 2016, at 10:58 PM, Derek Gaston wrote: > > > > A quick note: I'm not hugely invested in this idea... I'm just talking > it out since I started it. 
The issues might outweigh potential gains... > > > > On Sun, Dec 11, 2016 at 4:02 PM Matthew Knepley > wrote: > > I consider this bad code management more than an analytical case for the > technique, but I can see the point. > > > > Can you expand on that? Do you believe automatic differentiation in > general to be "bad code management"? > > > > - Extremely complex, heavy shape function evaluation (think super > high order with first and second derivatives needing to be computed) > > > > I honestly do not understand this one. Maybe I do not understand high > order since I never use it. If I want to compute an integral, I have > > the basis functions tabulated. I understand that for high order, you use > a tensor product evaluation, but you still tabulate in 1D. What is > > being recomputed here? > > > > In unstructured mesh you still have to compute the reference->physical > map for each element and map all of the gradients/second derivatives to > physical space. This can be quite expensive if you have a lot of shape > functions and a lot of quadrature points. > > > > Sometimes we even have to do this step twice: once for the un-deformed > mesh and once for the deformed mesh... on every element. > > > > > > - Extremely heavy material property computations > > > > Yes. I have to think about this one more. > > > > - MANY coupled variables (we've run thousands). > > > > Ah, so you are saying that the process of field evaluation at the > quadrature points is expensive because you have so many fields. > > It feels very similar to the material case, but I cannot articulate why. > > > > It is similar: it's all about how much information you have to recompute > at each quadrature point. I was simply giving different scenarios for why > you could end up with heavy calculations at each quadrature point that feed > into both the Residual and Jacobian calculations. > > > > I guess my gut says that really expensive material properties, > > much more expensive than my top level model, should be modeled by > something simpler at that level. Same feeling for using > > thousands of fields. > > > > Even if you can find something simpler it's good to be able to solve the > expensive one to verify your simpler model. Sometimes the microstructure > behavior is complicated enough that it's quite difficult to wrap up in a > simpler model or (like you said) it won't be clear if a simpler model is > possible without doing the more expensive model first. > > > > We really do have models that require thousands (sometimes tens of > thousands) of coupled PDEs. Reusing the field evaluations for both the > residual and Jacobian could be a large win. > > > > 1) I compute a Jacobian with every residual. This sucks because line > search and lots of other things use residuals. > > > > 2) I compute a residual with every Jacobian. This sound like it could > work because I compute both for the Newton system, but here I > > am reusing the residual I computed to check the convergence > criterion. > > > > Can you see a nice way to express Newton for this? > > > > You can see my (admittedly stupidly simple) Newton code that works this > way here: > https://github.com/friedmud/MOOSE.jl/blob/master/src/solvers/JuliaDenseNonlinearImplicitSolver.jl#L42 > > > > Check the assembly code here to see how both are computed > simultaneously: > https://github.com/friedmud/MOOSE.jl/blob/master/src/solvers/Assembly.jl#L59 > > > > Lack of line search makes it pretty simple. 
However, even with this > simple code I end up wasting one extra Jacobian evaluation once the > convergence criteria has been reached. Whether or not that is advantageous > depends on the relative tradeoffs of reusable element computations vs > Jacobian calculation and how many nonlinear iterations you do (if you're > only doing one nonlinear iteration every timestep then you're wasting 50% > of your total Jacobian calculation time). > > > > For a full featured solver you would definitely also want to have the > ability to compute a residual, by itself, when you want... for things like > line search. > > > > You guys have definitely thought a lot more about this than I have... > I'm just spitballing here... but it does seem like having an optional > interface for computing a combined residual/Jacobian could save some codes > a significant amount of time. > > > > This isn't a strong feeling of mine though. I think that for now the > way I'll do it is simply to "waste" a residual calculation when I need a > Jacobian :-) > > Derek, > > I don't understand this long conversation. You can compute the > Jacobian with the Function evaluation. There is no need for a special > interface, just compute it! Then provide a dummy FormJacobian that does > nothing. Please let us know if this does not work. (Yes you will need to > keep the Jacobian matrix inside the FormFunction context so you have access > to it but that is no big deal.) > > Barry > > > > > Derek > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Dec 12 12:15:14 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 12 Dec 2016 12:15:14 -0600 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> Message-ID: <920E232F-EB54-476A-8864-D75386CB0DCB@mcs.anl.gov> Derek, Still it would be interesting to see for your application if always computing them together ended up being faster than doing "extra" work for the Jacobian. This should be trivial to try. Barry > On Dec 12, 2016, at 12:11 PM, Derek Gaston wrote: > > I agree Barry - we got a bit sidetracked here. > > We were debating the relative merits of doing that (computing a Jacobian with every residual) and musing a bit about the possibility of whether or not SNES could be enhanced to make this more efficient. > > I think I've been convinced that it's not worth pursuing the SNES enhancement... the upside is limited and the complexity is not worth it. > > Derek > > On Mon, Dec 12, 2016 at 12:46 PM Barry Smith wrote: > > > On Dec 11, 2016, at 10:58 PM, Derek Gaston wrote: > > > > A quick note: I'm not hugely invested in this idea... I'm just talking it out since I started it. The issues might outweigh potential gains... > > > > On Sun, Dec 11, 2016 at 4:02 PM Matthew Knepley wrote: > > I consider this bad code management more than an analytical case for the technique, but I can see the point. > > > > Can you expand on that? Do you believe automatic differentiation in general to be "bad code management"? > > > > - Extremely complex, heavy shape function evaluation (think super high order with first and second derivatives needing to be computed) > > > > I honestly do not understand this one. Maybe I do not understand high order since I never use it. If I want to compute an integral, I have > > the basis functions tabulated. 
I understand that for high order, you use a tensor product evaluation, but you still tabulate in 1D. What is > > being recomputed here? > > > > In unstructured mesh you still have to compute the reference->physical map for each element and map all of the gradients/second derivatives to physical space. This can be quite expensive if you have a lot of shape functions and a lot of quadrature points. > > > > Sometimes we even have to do this step twice: once for the un-deformed mesh and once for the deformed mesh... on every element. > > > > > > - Extremely heavy material property computations > > > > Yes. I have to think about this one more. > > > > - MANY coupled variables (we've run thousands). > > > > Ah, so you are saying that the process of field evaluation at the quadrature points is expensive because you have so many fields. > > It feels very similar to the material case, but I cannot articulate why. > > > > It is similar: it's all about how much information you have to recompute at each quadrature point. I was simply giving different scenarios for why you could end up with heavy calculations at each quadrature point that feed into both the Residual and Jacobian calculations. > > > > I guess my gut says that really expensive material properties, > > much more expensive than my top level model, should be modeled by something simpler at that level. Same feeling for using > > thousands of fields. > > > > Even if you can find something simpler it's good to be able to solve the expensive one to verify your simpler model. Sometimes the microstructure behavior is complicated enough that it's quite difficult to wrap up in a simpler model or (like you said) it won't be clear if a simpler model is possible without doing the more expensive model first. > > > > We really do have models that require thousands (sometimes tens of thousands) of coupled PDEs. Reusing the field evaluations for both the residual and Jacobian could be a large win. > > > > 1) I compute a Jacobian with every residual. This sucks because line search and lots of other things use residuals. > > > > 2) I compute a residual with every Jacobian. This sound like it could work because I compute both for the Newton system, but here I > > am reusing the residual I computed to check the convergence criterion. > > > > Can you see a nice way to express Newton for this? > > > > You can see my (admittedly stupidly simple) Newton code that works this way here: https://github.com/friedmud/MOOSE.jl/blob/master/src/solvers/JuliaDenseNonlinearImplicitSolver.jl#L42 > > > > Check the assembly code here to see how both are computed simultaneously: https://github.com/friedmud/MOOSE.jl/blob/master/src/solvers/Assembly.jl#L59 > > > > Lack of line search makes it pretty simple. However, even with this simple code I end up wasting one extra Jacobian evaluation once the convergence criteria has been reached. Whether or not that is advantageous depends on the relative tradeoffs of reusable element computations vs Jacobian calculation and how many nonlinear iterations you do (if you're only doing one nonlinear iteration every timestep then you're wasting 50% of your total Jacobian calculation time). > > > > For a full featured solver you would definitely also want to have the ability to compute a residual, by itself, when you want... for things like line search. > > > > You guys have definitely thought a lot more about this than I have... I'm just spitballing here... 
but it does seem like having an optional interface for computing a combined residual/Jacobian could save some codes a significant amount of time. > > > > This isn't a strong feeling of mine though. I think that for now the way I'll do it is simply to "waste" a residual calculation when I need a Jacobian :-) > > Derek, > > I don't understand this long conversation. You can compute the Jacobian with the Function evaluation. There is no need for a special interface, just compute it! Then provide a dummy FormJacobian that does nothing. Please let us know if this does not work. (Yes you will need to keep the Jacobian matrix inside the FormFunction context so you have access to it but that is no big deal.) > > Barry > > > > > Derek > From friedmud at gmail.com Mon Dec 12 12:19:30 2016 From: friedmud at gmail.com (Derek Gaston) Date: Mon, 12 Dec 2016 18:19:30 +0000 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: <920E232F-EB54-476A-8864-D75386CB0DCB@mcs.anl.gov> References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> <920E232F-EB54-476A-8864-D75386CB0DCB@mcs.anl.gov> Message-ID: I will try it and report back. Derek On Mon, Dec 12, 2016 at 1:15 PM Barry Smith wrote: > > Derek, > > Still it would be interesting to see for your application if always > computing them together ended up being faster than doing "extra" work for > the Jacobian. This should be trivial to try. > > Barry > > > On Dec 12, 2016, at 12:11 PM, Derek Gaston wrote: > > > > I agree Barry - we got a bit sidetracked here. > > > > We were debating the relative merits of doing that (computing a Jacobian > with every residual) and musing a bit about the possibility of whether or > not SNES could be enhanced to make this more efficient. > > > > I think I've been convinced that it's not worth pursuing the SNES > enhancement... the upside is limited and the complexity is not worth it. > > > > Derek > > > > On Mon, Dec 12, 2016 at 12:46 PM Barry Smith wrote: > > > > > On Dec 11, 2016, at 10:58 PM, Derek Gaston wrote: > > > > > > A quick note: I'm not hugely invested in this idea... I'm just talking > it out since I started it. The issues might outweigh potential gains... > > > > > > On Sun, Dec 11, 2016 at 4:02 PM Matthew Knepley > wrote: > > > I consider this bad code management more than an analytical case for > the technique, but I can see the point. > > > > > > Can you expand on that? Do you believe automatic differentiation in > general to be "bad code management"? > > > > > > - Extremely complex, heavy shape function evaluation (think super > high order with first and second derivatives needing to be computed) > > > > > > I honestly do not understand this one. Maybe I do not understand high > order since I never use it. If I want to compute an integral, I have > > > the basis functions tabulated. I understand that for high order, you > use a tensor product evaluation, but you still tabulate in 1D. What is > > > being recomputed here? > > > > > > In unstructured mesh you still have to compute the reference->physical > map for each element and map all of the gradients/second derivatives to > physical space. This can be quite expensive if you have a lot of shape > functions and a lot of quadrature points. > > > > > > Sometimes we even have to do this step twice: once for the un-deformed > mesh and once for the deformed mesh... on every element. > > > > > > > > > - Extremely heavy material property computations > > > > > > Yes. 
I have to think about this one more. > > > > > > - MANY coupled variables (we've run thousands). > > > > > > Ah, so you are saying that the process of field evaluation at the > quadrature points is expensive because you have so many fields. > > > It feels very similar to the material case, but I cannot articulate > why. > > > > > > It is similar: it's all about how much information you have to > recompute at each quadrature point. I was simply giving different > scenarios for why you could end up with heavy calculations at each > quadrature point that feed into both the Residual and Jacobian calculations. > > > > > > I guess my gut says that really expensive material properties, > > > much more expensive than my top level model, should be modeled by > something simpler at that level. Same feeling for using > > > thousands of fields. > > > > > > Even if you can find something simpler it's good to be able to solve > the expensive one to verify your simpler model. Sometimes the > microstructure behavior is complicated enough that it's quite difficult to > wrap up in a simpler model or (like you said) it won't be clear if a > simpler model is possible without doing the more expensive model first. > > > > > > We really do have models that require thousands (sometimes tens of > thousands) of coupled PDEs. Reusing the field evaluations for both the > residual and Jacobian could be a large win. > > > > > > 1) I compute a Jacobian with every residual. This sucks because line > search and lots of other things use residuals. > > > > > > 2) I compute a residual with every Jacobian. This sound like it > could work because I compute both for the Newton system, but here I > > > am reusing the residual I computed to check the convergence > criterion. > > > > > > Can you see a nice way to express Newton for this? > > > > > > You can see my (admittedly stupidly simple) Newton code that works > this way here: > https://github.com/friedmud/MOOSE.jl/blob/master/src/solvers/JuliaDenseNonlinearImplicitSolver.jl#L42 > > > > > > Check the assembly code here to see how both are computed > simultaneously: > https://github.com/friedmud/MOOSE.jl/blob/master/src/solvers/Assembly.jl#L59 > > > > > > Lack of line search makes it pretty simple. However, even with this > simple code I end up wasting one extra Jacobian evaluation once the > convergence criteria has been reached. Whether or not that is advantageous > depends on the relative tradeoffs of reusable element computations vs > Jacobian calculation and how many nonlinear iterations you do (if you're > only doing one nonlinear iteration every timestep then you're wasting 50% > of your total Jacobian calculation time). > > > > > > For a full featured solver you would definitely also want to have the > ability to compute a residual, by itself, when you want... for things like > line search. > > > > > > You guys have definitely thought a lot more about this than I have... > I'm just spitballing here... but it does seem like having an optional > interface for computing a combined residual/Jacobian could save some codes > a significant amount of time. > > > > > > This isn't a strong feeling of mine though. I think that for now the > way I'll do it is simply to "waste" a residual calculation when I need a > Jacobian :-) > > > > Derek, > > > > I don't understand this long conversation. You can compute the > Jacobian with the Function evaluation. There is no need for a special > interface, just compute it! Then provide a dummy FormJacobian that does > nothing. 
Please let us know if this does not work. (Yes you will need to > keep the Jacobian matrix inside the FormFunction context so you have access > to it but that is no big deal.) > > > > Barry > > > > > > > > Derek > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 12 14:27:58 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 12 Dec 2016 14:27:58 -0600 Subject: [petsc-users] Simultaneously compute Residual+Jacobian in SNES In-Reply-To: References: <3AFF71E5-AE4A-4508-A376-E546BCF2FDBB@mcs.anl.gov> <3B0ACDF9-2856-402E-9002-3EADD605A5ED@mcs.anl.gov> <87eg1dogj6.fsf@jedbrown.org> Message-ID: On Mon, Dec 12, 2016 at 10:36 AM, Derek Gaston wrote: > On Mon, Dec 12, 2016 at 12:36 AM Jed Brown wrote: > >> > Can you expand on that? Do you believe automatic differentiation in >> > general to be "bad code management"? >> >> AD that prevents calling the non-AD function is bad AD. >> > > That's not exactly the problem. Even if you can call an AD and a non-AD > residual... you still have to compute two residuals to compute a residual > and a Jacobian separately when using AD. > > It's not the end of the world... but it was something that prompted me to > ask the question. > > >> Are all the fields in unique function spaces that need different >> transforms or different quadratures? If not, it seems like the presence >> of many fields would already amortize the geometric overhead of visiting >> an element. >> > > These were two separate examples. Expensive shape functions, by > themselves, could warrant computing the residual and Jacobian > simultaneously. Also: many variables, by themselves, could do the same. > > >> Alternatively, you could cache the effective material coefficient (and >> its gradient) at each quadrature point during residual evaluation, thus >> avoiding a re-solve when building the Jacobian. > > > I agree with this. We have some support for it in MOOSE now... and more > plans for better support in the future. It's a classic time/space tradeoff. > > >> I would recommend that unless you know that line searches are rare. >> > > BTW: Many (most?) of our most complex applications all _disable_ line > search. Over the years we've found line search to be more of a hindrance > than a help. We typically prefer using some sort of "physics based" damped > Newton. > We should move this discussion to a separate thread. I think this is the wrong choice. I would make the analogy that you have a Stokes problem, and after finding that ILU fails you go back to GS and thousands of iterates which eventually succeeds. The line search (or globalization) needs to respect problem structure. I think we have the potential to handle this in PETSc. Matt > It is far more common that the Jacobian is _much_ more expensive than >> the residual, in which case the mere possibility of a line search (or of >> converging) would justify deferring the Jacobian. I think it's much >> better to make residuals and Jacobians fast independently, then perhaps >> make the residual do some cheap caching, and worry about >> second-guessing Newton only as a last resort. > > > I think I agree. These are definitely "fringe" cases... for most > applications Jacobians are _way_ more expensive. > > >> That said, I have no doubt that we could >> demonstrate some benefit to using heuristics and a relative cost model >> to sometimes compute residuals and Jacobians together. 
It just isn't >> that interesting and I think the gains are likely small and will >> generate lots of bikeshedding about the heuristic. >> > > I agree here too. It could be done... but I think you've convinced me > that it's not worth the trouble :-) > > Thanks for the discussion everyone! > > Derek > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Mon Dec 12 15:20:44 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 12 Dec 2016 15:20:44 -0600 Subject: [petsc-users] snes options for rough solution In-Reply-To: References: Message-ID: On Mon, Dec 12, 2016 at 2:29 AM, Praveen C wrote: > Hello Matt > > I have attached the detailed output. > > Fenics automatically computes Jacobian, so I think Jacobian should be > correct. I am not able to run the Fenics code without giving the Jacobian. > I am currently writing a C code where I can test this. > > This equation is bit weird. Its like this > > u_t = ( K u_x)_x > > K = u / sqrt(u_x^2 + eps^2) > I do not understand how to show parabolicity in this case. However, I have a more fundamental misunderstanding. In your code, I see R0 = idt*(u-uold)*v*dx + K0*ux*vx*dx for the residual, which looks like R0 = u_t + K u_x to me. Where is the extra derivative you show above? Thanks, Matt > If u > 0, then this is a nonlinear parabolic eqn. Problem is that eps = h > (mesh size), so at extrema, it is like > > u_t = (u/eps)*u_xx > > and (1/eps) is approximating a delta function. > > Best > praveen > > On Mon, Dec 12, 2016 at 12:41 PM, Matthew Knepley > wrote: > >> On Mon, Dec 12, 2016 at 1:04 AM, Matthew Knepley >> wrote: >> >>> On Mon, Dec 12, 2016 at 12:56 AM, Praveen C wrote: >>> >>>> Increasing number of snes iterations, I get convergence. >>>> >>>> So it is a problem of initial guess being too far from the solution of >>>> the nonlinear equation. >>>> >>>> Solution can be seen here >>>> >>>> https://github.com/cpraveen/fenics/blob/master/1d/cosmic_ray >>>> /cosmic_ray.ipynb >>>> >>> >> Also, how is this a parabolic equation? It looks like u/|u'| to me, which >> does not look parabolic at all. >> >> Matt >> >> >>> Green curve is solution after two time steps. >>>> >>>> It took about 100 snes iterations in first time step and about 50 in >>>> second time step. >>>> >>>> I use exact Jacobian and direct LU solve. >>>> >>> >>> I do not believe its the correct Jacobian. Did you test it as I asked? >>> Also run with >>> >>> -snes_monitor -ksp_monitor_true_residual -snes_view >>> -snes_converged_reason >>> >>> and then >>> >>> -snes_fd >>> >>> and send all the output >>> >>> Matt >>> >>> >>>> Thanks >>>> praveen >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cpraveen at gmail.com Tue Dec 13 00:07:56 2016 From: cpraveen at gmail.com (Praveen C) Date: Tue, 13 Dec 2016 11:37:56 +0530 Subject: [petsc-users] snes options for rough solution In-Reply-To: <94A733DE-B106-4D48-9D88-4A60990DBAF4@mcs.anl.gov> References: <94A733DE-B106-4D48-9D88-4A60990DBAF4@mcs.anl.gov> Message-ID: You are right, this problem needs adaptive time stepping. Can you recommend some papers/books on this, wrt schemes implemented in Petsc. I could write my problem in Petsc and solve with backward euler and snes (I was using fenics before). I will try TS next. Matt, I tried -snes_fd which takes more iterations than with exact Jacobian, and gives same answer. So my exact Jacobian should be ok. I have been learning petsc since a few months, and it is great that I can already solve my problems with it. Its been a lot of fun coding with petsc. Best praveen On Mon, Dec 12, 2016 at 11:08 PM, Barry Smith wrote: > > Very cool problem. > > I think you should use TS to solve it. TS has higher order solvers with > adaptive time-stepping, likely at the very beginning it will end up with a > very small time step but then quickly increase the time-step. Frankly it is > goofy to use backward Euler with fixed time step on this problem; you'll > find that TS is no harder to use than SNES, you just need to use > TSSetRHSFunction() and TSSetRHSJacobian() and select an implicit solver. > > Barry > > > On Dec 12, 2016, at 2:29 AM, Praveen C wrote: > > > > Hello Matt > > > > I have attached the detailed output. > > > > Fenics automatically computes Jacobian, so I think Jacobian should be > correct. I am not able to run the Fenics code without giving the Jacobian. > I am currently writing a C code where I can test this. > > > > This equation is bit weird. Its like this > > > > u_t = ( K u_x)_x > > > > K = u / sqrt(u_x^2 + eps^2) > > > > If u > 0, then this is a nonlinear parabolic eqn. Problem is that eps = > h (mesh size), so at extrema, it is like > > > > u_t = (u/eps)*u_xx > > > > and (1/eps) is approximating a delta function. > > > > Best > > praveen > > > > On Mon, Dec 12, 2016 at 12:41 PM, Matthew Knepley > wrote: > > On Mon, Dec 12, 2016 at 1:04 AM, Matthew Knepley > wrote: > > On Mon, Dec 12, 2016 at 12:56 AM, Praveen C wrote: > > Increasing number of snes iterations, I get convergence. > > > > So it is a problem of initial guess being too far from the solution of > the nonlinear equation. > > > > Solution can be seen here > > > > https://github.com/cpraveen/fenics/blob/master/1d/cosmic_ > ray/cosmic_ray.ipynb > > > > Also, how is this a parabolic equation? It looks like u/|u'| to me, > which does not look parabolic at all. > > > > Matt > > > > Green curve is solution after two time steps. > > > > It took about 100 snes iterations in first time step and about 50 in > second time step. > > > > I use exact Jacobian and direct LU solve. > > > > I do not believe its the correct Jacobian. Did you test it as I asked? > Also run with > > > > -snes_monitor -ksp_monitor_true_residual -snes_view > -snes_converged_reason > > > > and then > > > > -snes_fd > > > > and send all the output > > > > Matt > > > > Thanks > > praveen > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> > -- Norbert Wiener > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cpraveen at gmail.com Tue Dec 13 00:12:06 2016 From: cpraveen at gmail.com (Praveen C) Date: Tue, 13 Dec 2016 11:42:06 +0530 Subject: [petsc-users] snes options for rough solution In-Reply-To: References: Message-ID: On Tue, Dec 13, 2016 at 2:50 AM, Matthew Knepley wrote: > On Mon, Dec 12, 2016 at 2:29 AM, Praveen C wrote: > >> Hello Matt >> >> I have attached the detailed output. >> >> Fenics automatically computes Jacobian, so I think Jacobian should be >> correct. I am not able to run the Fenics code without giving the Jacobian. >> I am currently writing a C code where I can test this. >> >> This equation is bit weird. Its like this >> >> u_t = ( K u_x)_x >> >> K = u / sqrt(u_x^2 + eps^2) >> > > I do not understand how to show parabolicity in this case. However, I have > a more fundamental misunderstanding. In your code, I see > > R0 = idt*(u-uold)*v*dx + K0*ux*vx*dx > > for the residual, which looks like > > R0 = u_t + K u_x > > to me. Where is the extra derivative you show above? > > Hello Matt vx is derivative of test function. So the last term is a second order differential operator. The PDE in full looks like this u_t = u_x^2/sqrt(u_x^2 + eps^2) + eps^2 * u * u_xx / (u_x^2 + eps^2)^1.5 With u > 0, the last term is parabolic. But numerically, I solve it in divergence form. Best praveen -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 13 00:28:39 2016 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 13 Dec 2016 00:28:39 -0600 Subject: [petsc-users] snes options for rough solution In-Reply-To: References: Message-ID: On Tue, Dec 13, 2016 at 12:12 AM, Praveen C wrote: > On Tue, Dec 13, 2016 at 2:50 AM, Matthew Knepley > wrote: > >> On Mon, Dec 12, 2016 at 2:29 AM, Praveen C wrote: >> >>> Hello Matt >>> >>> I have attached the detailed output. >>> >>> Fenics automatically computes Jacobian, so I think Jacobian should be >>> correct. I am not able to run the Fenics code without giving the Jacobian. >>> I am currently writing a C code where I can test this. >>> >>> This equation is bit weird. Its like this >>> >>> u_t = ( K u_x)_x >>> >>> K = u / sqrt(u_x^2 + eps^2) >>> >> >> I do not understand how to show parabolicity in this case. However, I >> have a more fundamental misunderstanding. In your code, I see >> >> R0 = idt*(u-uold)*v*dx + K0*ux*vx*dx >> >> for the residual, which looks like >> >> R0 = u_t + K u_x >> >> to me. Where is the extra derivative you show above? >> >> > Hello Matt > > vx is derivative of test function. So the last term is a second order > differential operator. The PDE in full looks like this > > u_t = u_x^2/sqrt(u_x^2 + eps^2) + eps^2 * u * u_xx / (u_x^2 + eps^2)^1.5 > > With u > 0, the last term is parabolic. But numerically, I solve it in > divergence form. > Alright, the Jacobian looks good and you just have a hard nonlinear problem. Okay, so this problem is singularly perturbed, in that as eps->0, its loses parabolicity (I think) because the u_xx term vanishes? The reason I am so mystified by this is that I did not think that derivatives were supposed to grow with parabolic evolution, but clearly that is happening here. 
You can certainly use adaptive time stepping with TS, but you could also try using grid sequencing, since for a big enough H it appears you converge fast. Thanks, Matt Best > praveen > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From mateusz.lacki at gmail.com Tue Dec 13 02:14:56 2016 From: mateusz.lacki at gmail.com (=?utf-8?B?TWF0ZXVzeiDFgcSFY2tp?=) Date: Tue, 13 Dec 2016 09:14:56 +0100 Subject: [petsc-users] Cannot mix release and development versions of SLEPc and PETSc Message-ID: <60186EAA-BF3F-48F4-8458-C2CD890AEC81@gmail.com> Hi I want to install SLEPc but the configure command gives me the above error message: "Cannot mix release and development versions of SLEPc and PETSc?. What I understand from lecture of config/configure.py file in slepc the key is to compare ?RELEASE? paramaters. In my petsc and slepc installations I see: cat slepc-3.7.3/include/slepcversion.h | grep RELEASE | head -n 1 #define SLEPC_VERSION_RELEASE 1 and: cat petsc-3.7.4/include/petscversion.h | grep RELEASE | head -n 1 #define PETSC_VERSION_RELEASE 1 So the release seems to be the same and I do not understand why the following if condition gets triggered. if petsc.release != slepc.release: log.Exit('ERROR: Cannot mix release and development versions of SLEPc and PETSc?) Best, Mateusz ??cki -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3707 bytes Desc: not available URL: From jroman at dsic.upv.es Tue Dec 13 02:25:24 2016 From: jroman at dsic.upv.es (Jose E. Roman) Date: Tue, 13 Dec 2016 09:25:24 +0100 Subject: [petsc-users] Cannot mix release and development versions of SLEPc and PETSc In-Reply-To: <60186EAA-BF3F-48F4-8458-C2CD890AEC81@gmail.com> References: <60186EAA-BF3F-48F4-8458-C2CD890AEC81@gmail.com> Message-ID: <867B430D-B1B3-4260-9E49-03823E1CEEC6@dsic.upv.es> > El 13 dic 2016, a las 9:14, Mateusz ??cki escribi?: > > Hi > I want to install SLEPc but the configure command gives me the above error message: > "Cannot mix release and development versions of SLEPc and PETSc?. > > What I understand from lecture of config/configure.py file in slepc the key is to compare ?RELEASE? paramaters. > In my petsc and slepc installations I see: > > cat slepc-3.7.3/include/slepcversion.h | grep RELEASE | head -n 1 > #define SLEPC_VERSION_RELEASE 1 > > and: > cat petsc-3.7.4/include/petscversion.h | grep RELEASE | head -n 1 > #define PETSC_VERSION_RELEASE 1 > > So the release seems to be the same and I do not understand why the following if condition gets triggered. > > if petsc.release != slepc.release: > log.Exit('ERROR: Cannot mix release and development versions of SLEPc and PETSc?) > > > Best, > Mateusz ??cki Maybe your PETSC_DIR or SLEPC_DIR points to a different directory? Did SLEPc's configure generate file $PETSC_ARCH/lib/slepc/conf/configure.log? Send its contents. Jose From niko.karin at gmail.com Tue Dec 13 10:50:29 2016 From: niko.karin at gmail.com (Karin&NiKo) Date: Tue, 13 Dec 2016 17:50:29 +0100 Subject: [petsc-users] FieldSplit and Biot's poroelasticity Message-ID: Dear Petsc-gurus, I am solving Biot's poroelasticity problem : [image: Images int?gr?es 1] I am using a mixed P2-P1 finite element discretization. I am using the fieldsplit framework to solve the linear systems. 
Here are the options I am using : -pc_type fieldsplit -pc_field_split_type schur -fieldsplit_0_pc_type gamg -fieldsplit_0_pc_gamg_threshold -1.0 -fieldsplit_0_ksp_type gmres -fieldsplit_0_ksp_monitor -fieldsplit_1_pc_type sor -fieldsplit_1_ksp_type gmres -pc_fieldsplit_schur_factorization_type upper By increasing the mesh size, I get increasing numbers of outer iterations. According to your own experience, among all the features of fieldsplit, was is the "best" set of preconditioners for this rather classical problem in order to get an extensible solver (I would like to solve this problem on some tens millions of unknowns of some hundreds of procs)? Thanks, Nicolas -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9086 bytes Desc: not available URL: From lawrence.mitchell at imperial.ac.uk Tue Dec 13 12:02:02 2016 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Tue, 13 Dec 2016 18:02:02 +0000 Subject: [petsc-users] FieldSplit and Biot's poroelasticity In-Reply-To: References: Message-ID: <30dd04bb-1596-7eb7-7f5f-aae819535d04@imperial.ac.uk> On 13/12/16 16:50, Karin&NiKo wrote: > Dear Petsc-gurus, > > I am solving Biot's poroelasticity problem : > Images int?gr?es 1 > > I am using a mixed P2-P1 finite element discretization. > > I am using the fieldsplit framework to solve the linear systems. Here > are the options I am using : > -pc_type fieldsplit > -pc_field_split_type schur > -fieldsplit_0_pc_type gamg > -fieldsplit_0_pc_gamg_threshold -1.0 > -fieldsplit_0_ksp_type gmres > -fieldsplit_0_ksp_monitor > -fieldsplit_1_pc_type sor > -fieldsplit_1_ksp_type gmres > -pc_fieldsplit_schur_factorization_type upper > > > By increasing the mesh size, I get increasing numbers of outer > iterations. > > According to your own experience, among all the features of > fieldsplit, was is the "best" set of preconditioners for this rather > classical problem in order to get an extensible solver (I would like > to solve this problem on some tens millions of unknowns of some > hundreds of procs)? Here's a recent preprint that develops a three-field formulation of the problem that gets reasonably mesh and parameter-independent iteration counts using block-diagonal preconditioning. https://arxiv.org/abs/1507.03199 (No need for schur complements) If you can create the relevant blocks it should be implementable with -pc_fieldsplit_type additive Lawrence -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From niko.karin at gmail.com Tue Dec 13 12:34:29 2016 From: niko.karin at gmail.com (Karin&NiKo) Date: Tue, 13 Dec 2016 19:34:29 +0100 Subject: [petsc-users] FieldSplit and Biot's poroelasticity In-Reply-To: <30dd04bb-1596-7eb7-7f5f-aae819535d04@imperial.ac.uk> References: <30dd04bb-1596-7eb7-7f5f-aae819535d04@imperial.ac.uk> Message-ID: Thank you very much for this preprint, Lawrence. I have also planned to use the pressure mass matrix for the A11 block. Unfortunately, at this time, I have no time for implementing things. What I would like to do is to get the best out of the built-in methods of fieldsplit/PETSc. Any hint is welcome! 
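(If a pressure mass matrix does become available later, supplying it to the Schur fieldsplit is only a couple of calls. The following is a minimal C sketch, not taken from any code in this thread; ksp, the index sets is_u/is_p and an assembled pressure mass matrix Mp are assumed to exist already, and all names are placeholders:

#include <petscksp.h>

/* Sketch: Schur-complement fieldsplit in which a user-assembled pressure
   mass matrix Mp is used to build the preconditioner for the Schur
   complement. ksp, is_u (velocity dofs), is_p (pressure dofs) and Mp
   are assumed to be created elsewhere. */
PetscErrorCode SetupSchurFieldSplit(KSP ksp, IS is_u, IS is_p, Mat Mp)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "0", is_u);CHKERRQ(ierr);  /* velocity block */
  ierr = PCFieldSplitSetIS(pc, "1", is_p);CHKERRQ(ierr);  /* pressure block */
  ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
  ierr = PCFieldSplitSetSchurFactType(pc, PC_FIELDSPLIT_SCHUR_FACT_UPPER);CHKERRQ(ierr);
  ierr = PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_USER, Mp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With PC_FIELDSPLIT_SCHUR_PRE_USER the -fieldsplit_1_ solver keeps the true Schur complement as its operator but builds its preconditioner from Mp rather than from the A11 block.)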
Nicolas 2016-12-13 19:02 GMT+01:00 Lawrence Mitchell < lawrence.mitchell at imperial.ac.uk>: > > > On 13/12/16 16:50, Karin&NiKo wrote: > > Dear Petsc-gurus, > > > > I am solving Biot's poroelasticity problem : > > Images int?gr?es 1 > > > > I am using a mixed P2-P1 finite element discretization. > > > > I am using the fieldsplit framework to solve the linear systems. Here > > are the options I am using : > > -pc_type fieldsplit > > -pc_field_split_type schur > > -fieldsplit_0_pc_type gamg > > -fieldsplit_0_pc_gamg_threshold -1.0 > > -fieldsplit_0_ksp_type gmres > > -fieldsplit_0_ksp_monitor > > -fieldsplit_1_pc_type sor > > -fieldsplit_1_ksp_type gmres > > -pc_fieldsplit_schur_factorization_type upper > > > > > > By increasing the mesh size, I get increasing numbers of outer > > iterations. > > > > According to your own experience, among all the features of > > fieldsplit, was is the "best" set of preconditioners for this rather > > classical problem in order to get an extensible solver (I would like > > to solve this problem on some tens millions of unknowns of some > > hundreds of procs)? > > Here's a recent preprint that develops a three-field formulation of > the problem that gets reasonably mesh and parameter-independent > iteration counts using block-diagonal preconditioning. > > https://arxiv.org/abs/1507.03199 > > (No need for schur complements) > > If you can create the relevant blocks it should be implementable with > -pc_fieldsplit_type additive > > > Lawrence > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Dec 13 12:41:05 2016 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 13 Dec 2016 12:41:05 -0600 Subject: [petsc-users] FieldSplit and Biot's poroelasticity In-Reply-To: References: Message-ID: On Tue, Dec 13, 2016 at 10:50 AM, Karin&NiKo wrote: > Dear Petsc-gurus, > > I am solving Biot's poroelasticity problem : > [image: Images int?gr?es 1] > > I am using a mixed P2-P1 finite element discretization. > > I am using the fieldsplit framework to solve the linear systems. Here are > the options I am using : > -pc_type fieldsplit > -pc_field_split_type schur > -fieldsplit_0_pc_type gamg > -fieldsplit_0_pc_gamg_threshold -1.0 > -fieldsplit_0_ksp_type gmres > -fieldsplit_0_ksp_monitor > -fieldsplit_1_pc_type sor > -fieldsplit_1_ksp_type gmres > -pc_fieldsplit_schur_factorization_type upper > > > By increasing the mesh size, I get increasing numbers of outer iterations. > > According to your own experience, among all the features of fieldsplit, > was is the "best" set of preconditioners for this rather classical problem > in order to get an extensible solver (I would like to solve this problem > on some tens millions of unknowns of some hundreds of procs)? > Lawrence is right that you should construct the right preconditioner matrix for the Schur complement, and its probably just something like I + \Delta with the correct multipliers. Without the mass matrix, it will likely be quite bad. It should not take much time to code that up since you already have the mass matrix from your c_0 p term. Matt > Thanks, > Nicolas > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 9086 bytes Desc: not available URL: From niko.karin at gmail.com Wed Dec 14 02:17:00 2016 From: niko.karin at gmail.com (Karin&NiKo) Date: Wed, 14 Dec 2016 09:17:00 +0100 Subject: [petsc-users] FieldSplit and Biot's poroelasticity In-Reply-To: References: Message-ID: Lawrence, Matt, I really do share your point. Nevertheless there are sometimes good reasons to do things "not the best way they should be done", at least in a first time (here PETSc is used within a huge fortran-based general purpose finite element solver and build and extract the pressure mass matrix is not a straightforward task). In the present case, I am looking for "the less worst approach" out of the fieldsplit built-in preconditioners. And I consider this is not an uninteresting question. Best regards, Nicolas 2016-12-13 19:41 GMT+01:00 Matthew Knepley : > On Tue, Dec 13, 2016 at 10:50 AM, Karin&NiKo wrote: > >> Dear Petsc-gurus, >> >> I am solving Biot's poroelasticity problem : >> [image: Images int?gr?es 1] >> >> I am using a mixed P2-P1 finite element discretization. >> >> I am using the fieldsplit framework to solve the linear systems. Here are >> the options I am using : >> -pc_type fieldsplit >> -pc_field_split_type schur >> -fieldsplit_0_pc_type gamg >> -fieldsplit_0_pc_gamg_threshold -1.0 >> -fieldsplit_0_ksp_type gmres >> -fieldsplit_0_ksp_monitor >> -fieldsplit_1_pc_type sor >> -fieldsplit_1_ksp_type gmres >> -pc_fieldsplit_schur_factorization_type upper >> >> >> By increasing the mesh size, I get increasing numbers of outer >> iterations. >> >> According to your own experience, among all the features of fieldsplit, >> was is the "best" set of preconditioners for this rather classical problem >> in order to get an extensible solver (I would like to solve this problem >> on some tens millions of unknowns of some hundreds of procs)? >> > > Lawrence is right that you should construct the right preconditioner > matrix for the Schur complement, and its probably just something like I + > \Delta with > the correct multipliers. Without the mass matrix, it will likely be quite > bad. It should not take much time to code that up since you already have > the mass > matrix from your c_0 p term. > > Matt > > >> Thanks, >> Nicolas >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9086 bytes Desc: not available URL: From knepley at gmail.com Wed Dec 14 08:24:20 2016 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Dec 2016 08:24:20 -0600 Subject: [petsc-users] FieldSplit and Biot's poroelasticity In-Reply-To: References: Message-ID: On Wed, Dec 14, 2016 at 2:17 AM, Karin&NiKo wrote: > Lawrence, Matt, > > I really do share your point. > Nevertheless there are sometimes good reasons to do things "not the best > way they should be done", at least in a first time (here PETSc is used > within a huge fortran-based general purpose finite element solver and build > and extract the pressure mass matrix is not a straightforward task). > In the present case, I am looking for "the less worst approach" out of the > fieldsplit built-in preconditioners. > And I consider this is not an uninteresting question. 
> Depending on how diagonally dominant things are, 'selfp' could be an acceptable replacement for using the mass matrix: http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFieldSplitSetSchurPre.html#PCFieldSplitSetSchurPre Matt > Best regards, > Nicolas > > 2016-12-13 19:41 GMT+01:00 Matthew Knepley : > >> On Tue, Dec 13, 2016 at 10:50 AM, Karin&NiKo >> wrote: >> >>> Dear Petsc-gurus, >>> >>> I am solving Biot's poroelasticity problem : >>> [image: Images int?gr?es 1] >>> >>> I am using a mixed P2-P1 finite element discretization. >>> >>> I am using the fieldsplit framework to solve the linear systems. Here >>> are the options I am using : >>> -pc_type fieldsplit >>> -pc_field_split_type schur >>> -fieldsplit_0_pc_type gamg >>> -fieldsplit_0_pc_gamg_threshold -1.0 >>> -fieldsplit_0_ksp_type gmres >>> -fieldsplit_0_ksp_monitor >>> -fieldsplit_1_pc_type sor >>> -fieldsplit_1_ksp_type gmres >>> -pc_fieldsplit_schur_factorization_type upper >>> >>> >>> By increasing the mesh size, I get increasing numbers of outer >>> iterations. >>> >>> According to your own experience, among all the features of fieldsplit, >>> was is the "best" set of preconditioners for this rather classical problem >>> in order to get an extensible solver (I would like to solve this problem >>> on some tens millions of unknowns of some hundreds of procs)? >>> >> >> Lawrence is right that you should construct the right preconditioner >> matrix for the Schur complement, and its probably just something like I + >> \Delta with >> the correct multipliers. Without the mass matrix, it will likely be quite >> bad. It should not take much time to code that up since you already have >> the mass >> matrix from your c_0 p term. >> >> Matt >> >> >>> Thanks, >>> Nicolas >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9086 bytes Desc: not available URL: From fangbowa at buffalo.edu Wed Dec 14 17:19:39 2016 From: fangbowa at buffalo.edu (Fangbo Wang) Date: Wed, 14 Dec 2016 18:19:39 -0500 Subject: [petsc-users] How to run PETSc on two computers? Message-ID: HI, I know how to install, compile and link Petsc on one computer and it works very well. However, I have no idea to run Petsc on two computers. Also, I can not find information on PETSc website regarding this issue. Could anyone tell me how to do this? Thank you very much! Fangbo -- Fangbo Wang, PhD student Stochastic Geomechanics Research Group Department of Civil, Structural and Environmental Engineering University at Buffalo Email: *fangbowa at buffalo.edu * -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Dec 14 17:48:54 2016 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Dec 2016 17:48:54 -0600 Subject: [petsc-users] How to run PETSc on two computers? In-Reply-To: References: Message-ID: On Wed, Dec 14, 2016 at 5:19 PM, Fangbo Wang wrote: > HI, > > I know how to install, compile and link Petsc on one computer and it works > very well. 
However, I have no idea to run Petsc on two computers. Also, I > can not find information on PETSc website regarding this issue. > > Could anyone tell me how to do this? Thank you very much! > PETSc is an MPI program, so you run it in parallel the same way you run any MPI program. There are many MPI tutorials on the web, and the way to do it depends on exactly how things are setup on your system, so I can't just tell you. Normally, the system administrator of a system installs MPI and you have an 'mpiexec' executable for running in parallel. Is this missing? Thanks, Matt P.S. Prof. Baumann in MechE is an expert > Fangbo > > -- > Fangbo Wang, PhD student > Stochastic Geomechanics Research Group > Department of Civil, Structural and Environmental Engineering > University at Buffalo > Email: *fangbowa at buffalo.edu * > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From niko.karin at gmail.com Thu Dec 15 03:03:56 2016 From: niko.karin at gmail.com (Karin&NiKo) Date: Thu, 15 Dec 2016 10:03:56 +0100 Subject: [petsc-users] FieldSplit and Biot's poroelasticity In-Reply-To: References: Message-ID: Thank you very much Matt. I have given selfp a try and I am even more convienced that the pressure mass matrix must be implemented! Regards, Nicolas 2016-12-14 15:24 GMT+01:00 Matthew Knepley : > On Wed, Dec 14, 2016 at 2:17 AM, Karin&NiKo wrote: > >> Lawrence, Matt, >> >> I really do share your point. >> Nevertheless there are sometimes good reasons to do things "not the best >> way they should be done", at least in a first time (here PETSc is used >> within a huge fortran-based general purpose finite element solver and build >> and extract the pressure mass matrix is not a straightforward task). >> In the present case, I am looking for "the less worst approach" out of >> the fieldsplit built-in preconditioners. >> And I consider this is not an uninteresting question. >> > > Depending on how diagonally dominant things are, 'selfp' could be an > acceptable replacement for using the mass matrix: > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/ > PCFieldSplitSetSchurPre.html#PCFieldSplitSetSchurPre > > Matt > > >> Best regards, >> Nicolas >> >> 2016-12-13 19:41 GMT+01:00 Matthew Knepley : >> >>> On Tue, Dec 13, 2016 at 10:50 AM, Karin&NiKo >>> wrote: >>> >>>> Dear Petsc-gurus, >>>> >>>> I am solving Biot's poroelasticity problem : >>>> [image: Images int?gr?es 1] >>>> >>>> I am using a mixed P2-P1 finite element discretization. >>>> >>>> I am using the fieldsplit framework to solve the linear systems. Here >>>> are the options I am using : >>>> -pc_type fieldsplit >>>> -pc_field_split_type schur >>>> -fieldsplit_0_pc_type gamg >>>> -fieldsplit_0_pc_gamg_threshold -1.0 >>>> -fieldsplit_0_ksp_type gmres >>>> -fieldsplit_0_ksp_monitor >>>> -fieldsplit_1_pc_type sor >>>> -fieldsplit_1_ksp_type gmres >>>> -pc_fieldsplit_schur_factorization_type upper >>>> >>>> >>>> By increasing the mesh size, I get increasing numbers of outer >>>> iterations. >>>> >>>> According to your own experience, among all the features of fieldsplit, >>>> was is the "best" set of preconditioners for this rather classical problem >>>> in order to get an extensible solver (I would like to solve this problem >>>> on some tens millions of unknowns of some hundreds of procs)? 
>>>> >>> >>> Lawrence is right that you should construct the right preconditioner >>> matrix for the Schur complement, and its probably just something like I + >>> \Delta with >>> the correct multipliers. Without the mass matrix, it will likely be >>> quite bad. It should not take much time to code that up since you already >>> have the mass >>> matrix from your c_0 p term. >>> >>> Matt >>> >>> >>>> Thanks, >>>> Nicolas >>>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which their >>> experiments lead. >>> -- Norbert Wiener >>> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9086 bytes Desc: not available URL: From aurelien.ponte at ifremer.fr Fri Dec 16 08:45:10 2016 From: aurelien.ponte at ifremer.fr (Aurelien Ponte) Date: Fri, 16 Dec 2016 15:45:10 +0100 Subject: [petsc-users] strange out memory issues or bus errors when increasing pb size Message-ID: Hi, I am inverting a 3D elliptical operator with petsc4py (3.4, petsc is 3.4.5) installed via conda: https://anaconda.org/sed-pro-inria/petsc4py I get systematic crashes (out of memory or bus error) when I reach a certain grid size (512 x 256 x 100) even though I maintain the same number of grid points per processor. I used up to 256 procs and got similar crashes. Browsing the internet indicates that using 64-bit-indices may be cure for such pb. It will take however a significant amount of effort for me to install petsc4py and petsc with this option. I do not even know how to check whether my current versions of petsc4py and petsc was installed with it. Would you have any tips or recommendation about how I could address the issue ? Thanks, Aurelien -- Aur?lien Ponte Tel: (+33) 2 98 22 40 73 Fax: (+33) 2 98 22 44 96 UMR 6523, IFREMER ZI de la Pointe du Diable CS 10070 29280 Plouzan? From bsmith at mcs.anl.gov Fri Dec 16 08:56:40 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 16 Dec 2016 08:56:40 -0600 Subject: [petsc-users] strange out memory issues or bus errors when increasing pb size In-Reply-To: References: Message-ID: > On Dec 16, 2016, at 8:45 AM, Aurelien Ponte wrote: > > Hi, > > I am inverting a 3D elliptical operator with petsc4py (3.4, petsc is 3.4.5) > installed via conda: https://anaconda.org/sed-pro-inria/petsc4py > > I get systematic crashes (out of memory or bus error) when I reach a certain > > grid size (512 x 256 x 100) even though I maintain the same number of grid points per processor. > I used up to 256 procs and got similar crashes. > > Browsing the internet indicates that using 64-bit-indices may be cure for > such pb. Yes, this is the problem. > It will take however a significant amount of effort for me to install > petsc4py and petsc with this option. Hopefully another user knows an easy way to install petsc4py to use 64 bit indices > I do not even know how to check whether > my current versions of petsc4py and petsc was installed with it. It was not. > > Would you have any tips or recommendation about how I could address the issue ? 
> > Thanks, > > Aurelien > > > -- > Aur?lien Ponte > Tel: (+33) 2 98 22 40 73 > Fax: (+33) 2 98 22 44 96 > UMR 6523, IFREMER > ZI de la Pointe du Diable > CS 10070 > 29280 Plouzan? > From aurelien.ponte at ifremer.fr Fri Dec 16 11:59:59 2016 From: aurelien.ponte at ifremer.fr (Aurelien Ponte) Date: Fri, 16 Dec 2016 18:59:59 +0100 Subject: [petsc-users] strange out memory issues or bus errors when increasing pb size In-Reply-To: References: Message-ID: <1ce66937-ece9-07b0-8528-eab6a567e539@ifremer.fr> Thanks Barry for the prompt reply. I guess I'll have to recompile codes then. I am actually running the code on a cluster and you'll find below a list of the modules installed. If anyone has any tips on what are my best options in order to compile petsc4py and petsc (and numpy, netcdf4-python actually) with this configuration, I'll be most grateful. thanks aurelien service7>414% module avail -------------------------- /appli/modulefiles --------------------------- Automake/1.14_gcc-4.8.0 BANDELA/1011 Bash/4.3_gcc-5.3.0 EcmwfTools/emos__000392-intel-12.1.5 EcmwfTools/grib_api__1.11.0-intel-12.1.5 Go/1.1.1 Go/1.2.1 Go/1.3 Go/1.4.0 Go/1.4.1 Go/1.5.1 GrADS/grads-2.0.2 JAGS/3.3.0-intel-11.1.073 JAGS/4.0.0-intel-15.0.090-seq JAGS/4.0.0-intel-15.0.090-thread Latex/20150521__gcc-4.9.2 MPlayer/1.2__gcc-4.9.2 Migrate-N/3.3.2__gcc-4.7.1_mpt-2.06 Octave/4.0.2_gcc-5.3.0 OpenBUGS/3.2.1-intel-11.1.073 Perl/5.22.1_gcc-5.3.0 R/2.11.1-gnu R/2.11.1-intel R/2.14.2-gnu-4.3 R/2.14.2-intel-11.1.073 R/2.15.0-gnu-4.3 R/2.15.0-intel-11.1.073 R/2.15.3-gnu-4.8.0 R/2.15.3-intel-12.1.5 R/3.0.1-intel-12.1.5 R/3.0.2-intel-14.0.0 R/3.0.3-intel-12.1.5 R/3.0.3-intel-14.0.0 R/3.1.2-intel-15.0.090 R/3.2.2-intel-12.1.5 R/3.2.3-intel-12.1.5 R/3.2.4-intel-12.1.5 R/patched-2.14.1-25.1-gnu Saturne/4.0.3_2015.3.187_5.0.3.048 SpecFEM2D/20141231-intel-15.0.90 TVD/8.2.0-1 TVD/8.4.1-5 TVD/8.6.0-2 TVD/recent anaconda/3 anaconda/uv2 cdo/1.5.6.1_gcc-4.7.0 cmake/2.8.8 code_aster/10.6.0-3 cuda/2.10 cuda/2.3 cuda/4.2.9 cuda/5.5.25 ddt/2.6 ddt/4.1.1 ddt/4.2.2 exiv2/0.24-gcc-4.8.2 ffmpeg/ffmpeg-2.2.1 gcc/4.2.2 gcc/4.7.0 gcc/4.7.1 gcc/4.8.0 gcc/4.8.2 gcc/4.8.4 gcc/4.9.2 gcc/5.3.0 gerris/1.3.2 gperf/3.0.4 gsl/1.14-intel-11.1.073 gsl/1.15-gcc-4.6.3 gsl/1.15-intel-12.1.5 hdf5/1.8.8-intel-11.1.073 hdf5/hdf5-1.8.12_intel-14.0.0 hdf5/hdf5-1.8.12_intel-14.0.0_mpi-4.0.0.028 hdf5/intel-10.1.008 hmpp/2.0.0 hmpp/2.1.0sp1 hmpp/2.2.0 idv/3.1 intel-comp/11.1.073 intel-comp/12.1.5 intel-comp/14.0.0 intel-comp/2015.0.090 intel-comp/2015.3.187 intel-mpi/4.0.0.028 intel-mpi/5.0.3.048 java/1.5.0 java/1.6.0 java/1.7.0 java/1.8.0 matlab/2006b matlab/2007b matlab/2009b matlab/2011b matlab/2013b matlab/2013c mkl/10.3 mpinside/3.5.3 mpt/0test mpt/1.21 mpt/1.23 mpt/1.24 mpt/1.25 mpt/2.01 mpt/2.04 mpt/2.06 mpt/2.08 ncarg-4.2.2/gnu ncarg-4.2.2/intel-10.0.025 ncarg-4.2.2/intel-10.0.026 ncarg-4.2.2/intel-10.1.008 ncarg-4.2.2/pgi-7.1-1 ncltest/5.2.1 ncltest/6.0.0 nco/4.2.1_gcc-4.7.0 nco/4.3.4-intel12.1.5 nco/4.4.2_gcc-4.8.0 ncview/2.1.2_gcc-4.7.0 ncview/2.1.2_intel netCDF/4.0-intel-11.1.073 netCDF/4.1.3-intel-11.1.073 netCDF/4.2.1-gcc-4.7.0 netCDF/4.2.1-intel-11.1.073 netCDF/4.2.1-intel-11.1.073_mpi-4.0.0.028 netCDF/4.2.1-intel-11.1.073_mpt-2.06 netCDF/4.2.1-intel-12.1.5 netCDF/4.2.1-intel-12.1.5_mpt-2.06 netCDF/4.2.1.1-intel-12.1.5_mpt-2.06 netCDF/4.2.1.1-intel-12.1.5_mpt-2.06_p netCDF/4.2.1.1-intel-12.1.5_mpt-2.06_pp netCDF/4.2.1.1-intel-12.1.5_mpt-2.06_ppp netCDF/impi5 netCDF/impi5-debug netCDF/impi5_4.2 netcdf-gcc/3.6.2 netcdf-gcc/3.6.3 netcdf-intel/3.6.3-11.1.073 
netcdf-intel/3.6.3-11.1.073-fpic netcdf-pgi/3.6.2-7.0-7 netcdf-pgi/3.6.2-7.1-1 old/R/2.1.1 old/R/2.8.1 old/cmkl/10.0.011 old/cmkl/10.0.3.020 old/cmkl/10.0.5.025 old/cmkl/10.1.3.027 old/cmkl/10.2.2.025 old/cmkl/9.1.021 old/cmkl/phase2 old/cmkl/recent old/ddt/2.1 old/ddt/2.2 old/ddt/2.3 old/ddt/2.4.1 old/ddt/2.5.1 old/ddt/recent old/gcc/4.2.2 old/intel/10.0.025 old/intel/10.0.026 old/intel/10.1.008 old/intel/10.1.015 old/intel/10.1.018 old/intel/newest old/intel/recent old/intel-cc/10.0.025 old/intel-cc/10.0.026 old/intel-cc/10.1.008 old/intel-cc/10.1.015 old/intel-cc/10.1.018 old/intel-cc/11.0.081 old/intel-cc/11.0.083 old/intel-cc/11.1.038 old/intel-cc/11.1.073 old/intel-cc/9.1.045 old/intel-cc-10/10.0.025 old/intel-comp/11.0.081 old/intel-comp/11.0.083 old/intel-comp/11.1.038 old/intel-comp/11.1.046 old/intel-comp/11.1.059 old/intel-fc/10.0.025 old/intel-fc/10.0.026 old/intel-fc/10.1.008 old/intel-fc/10.1.015 old/intel-fc/10.1.018 old/intel-fc/11.0.081 old/intel-fc/11.0.083 old/intel-fc/11.1.038 old/intel-fc/11.1.073 old/intel-fc/9.1.045 old/intel-fc-10/10.0.025 old/intel-mpi/3.0.043 old/intel-mpi/3.1 old/intel-mpi/3.2.0.011 old/intel-mpi/3.2.1.009 old/intel-mpi/3.2.2.006 old/mvapich2/intel old/netcdf-intel/3.6.2-10.0.025 old/netcdf-intel/3.6.2-10.0.026 old/netcdf-intel/3.6.2-10.1.008 old/netcdf-intel/3.6.3-11.1.038 old/netcdf-intel-10/3.6.2 old/openmpi/intel pfmt/1.3 pgi/16.3 pgi/7.0-7 pgi/7.1-1 pgi/7.1-2 pgi/7.2 pgi/8.0-4 pgi/pgi/16.3 pgi/pgi/8.0-6 pgi/pgi32/8.0-6 pgi/pgi64/8.0-6 pnetCDF/pnetcdf-1.3.1__intel-12.1.5_mpi-4.0.0.028 proj/4.8.0-intel-12.1.5 python/2.7.10_gnu-4.9.2 python/2.7.3_gnu-4.7.0 python/2.7.3_gnu-4.7.1 python/2.7.5_gnu-4.8.0 scilab/scilab-5.4.1 szip/2.1-intel-11.1.073 udunits/1.12.11-intel-11.1.073 udunits/2.1.19-intel-11.1.073 udunits/2.1.24-intel-12.1.5 unigifsicle/1.39-719.16 uv-cdat/1.0.1next valgrind/valgrind-3.11.0__gcc-4.9.2 valgrind/valgrind-3.11.0__gcc-4.9.2__intel-mpi.5.0.3.048 valgrind/valgrind-3.8.1__gcc.4.8.0 vtune/2013 wgrib2/netcdf3/intel-11.1.073 xios/1.0 zlib/1.2.6-intel-11.1.073 Le 16/12/2016 ? 15:56, Barry Smith a ?crit : >> On Dec 16, 2016, at 8:45 AM, Aurelien Ponte wrote: >> >> Hi, >> >> I am inverting a 3D elliptical operator with petsc4py (3.4, petsc is 3.4.5) >> installed via conda: https://anaconda.org/sed-pro-inria/petsc4py >> >> I get systematic crashes (out of memory or bus error) when I reach a certain >> >> grid size (512 x 256 x 100) even though I maintain the same number of grid points per processor. >> I used up to 256 procs and got similar crashes. >> >> Browsing the internet indicates that using 64-bit-indices may be cure for >> such pb. > Yes, this is the problem. > >> It will take however a significant amount of effort for me to install >> petsc4py and petsc with this option. > Hopefully another user knows an easy way to install petsc4py to use 64 bit indices > >> I do not even know how to check whether >> my current versions of petsc4py and petsc was installed with it. > It was not. >> Would you have any tips or recommendation about how I could address the issue ? >> >> Thanks, >> >> Aurelien >> >> >> -- >> Aur?lien Ponte >> Tel: (+33) 2 98 22 40 73 >> Fax: (+33) 2 98 22 44 96 >> UMR 6523, IFREMER >> ZI de la Pointe du Diable >> CS 10070 >> 29280 Plouzan? >> -- Aur?lien Ponte Tel: (+33) 2 98 22 40 73 Fax: (+33) 2 98 22 44 96 UMR 6523, IFREMER ZI de la Pointe du Diable CS 10070 29280 Plouzan? 
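One quick way to confirm whether a given PETSc build (and therefore the petsc4py built on top of it) was configured with --with-64-bit-indices is to test the PETSC_USE_64BIT_INDICES flag that configure writes into petscconf.h. A minimal C check, independent of the particular installation discussed above:

#include <petscsys.h>

/* Report whether the PETSc build this is compiled against uses
   64-bit PetscInt (i.e. was configured with --with-64-bit-indices). */
int main(int argc, char **argv)
{
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
#if defined(PETSC_USE_64BIT_INDICES)
  ierr = PetscPrintf(PETSC_COMM_WORLD, "PetscInt is 64-bit (%d bytes)\n", (int)sizeof(PetscInt));CHKERRQ(ierr);
#else
  ierr = PetscPrintf(PETSC_COMM_WORLD, "PetscInt is 32-bit (%d bytes)\n", (int)sizeof(PetscInt));CHKERRQ(ierr);
#endif
  ierr = PetscFinalize();
  return ierr;
}

If this prints 32-bit, the build matches the situation described above and would need to be reconfigured with --with-64-bit-indices.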
From msdrezavand at gmail.com Fri Dec 16 20:17:23 2016 From: msdrezavand at gmail.com (Massoud Rezavand) Date: Sat, 17 Dec 2016 03:17:23 +0100 Subject: [petsc-users] structure of A Message-ID: Dear PETSc team, Sorry if my question is more related to math. Using PETSc, how important is the structure of the matrix A for performance? I mean mainly the diagonal and off-diagonal parts. For example, solving with a matrix which is dense in diagonal part and sparse in off-diagonal part is faster than with a matrix in which the non-zeros are distributed randomly? Thanks in advance. Massoud -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Dec 16 20:22:14 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 16 Dec 2016 20:22:14 -0600 Subject: [petsc-users] structure of A In-Reply-To: References: Message-ID: <50F32139-4047-433C-80CF-0485F10B3060@mcs.anl.gov> > On Dec 16, 2016, at 8:17 PM, Massoud Rezavand wrote: > > Dear PETSc team, > > Sorry if my question is more related to math. > Using PETSc, how important is the structure of the matrix A for performance? I mean mainly the diagonal and off-diagonal parts. > > For example, solving with a matrix which is dense in diagonal part and sparse in off-diagonal part is faster than with a matrix in which the non-zeros are distributed randomly? Yes, loosely speaking this is true. More technically there is a term "diagonally dominate", or block diagonally dominate, for those matrices generally iterative methods perform better. Barry > > Thanks in advance. > > Massoud From bsmith at mcs.anl.gov Fri Dec 16 20:33:34 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 16 Dec 2016 20:33:34 -0600 Subject: [petsc-users] structure of A In-Reply-To: References: <50F32139-4047-433C-80CF-0485F10B3060@mcs.anl.gov> Message-ID: <9F9756AA-A2AA-4324-80D9-0FC8AAE4544B@mcs.anl.gov> Direct solvers are less sensitive to whether the matrix is diagonal dominate but in the extreme, since matrices that are not diagonally dominate are generally more ill-conditioned, direct solvers in that extreme region will produce less accurate answers. Direct solvers have the additional problem they do not, and likely cannot, scale to really large problems, 10's of millions to billions of unknowns while iterative solves can (assuming the matrix is suitable for direct solvers) can solve problems with billions of unknowns. Barry > On Dec 16, 2016, at 8:26 PM, Massoud Rezavand wrote: > > Thanks you very much. > > As far as I know, PETSc provides direct solvers, as well. How about direct solvers and the performance for a diagonally dominant matrix and a random matrix? > > Massoud > > On Sat, Dec 17, 2016 at 3:22 AM, Barry Smith wrote: > > > On Dec 16, 2016, at 8:17 PM, Massoud Rezavand wrote: > > > > Dear PETSc team, > > > > Sorry if my question is more related to math. > > Using PETSc, how important is the structure of the matrix A for performance? I mean mainly the diagonal and off-diagonal parts. > > > > For example, solving with a matrix which is dense in diagonal part and sparse in off-diagonal part is faster than with a matrix in which the non-zeros are distributed randomly? > > Yes, loosely speaking this is true. More technically there is a term "diagonally dominate", or block diagonally dominate, for those matrices generally iterative methods perform better. > > Barry > > > > > > Thanks in advance. 
> > > > Massoud > > From aurelien.ponte at ifremer.fr Sat Dec 17 09:00:44 2016 From: aurelien.ponte at ifremer.fr (Aurelien Ponte) Date: Sat, 17 Dec 2016 16:00:44 +0100 Subject: [petsc-users] petsc4py --with-64-bit-indices Message-ID: Hi all, I am trying to install petsc4py and petsc with the --with-64-bit-indices option. I followed the pip install described on the petsc4py bitbucket with some slight modifications: module load python/2.7.10_gnu-4.9.2 wget https://bootstrap.pypa.io/get-pip.py python get-pip.py --user setenv MPICC mpiicc pip install --user --upgrade mpi4py pip install --user --upgrade numpy pip install --user petsc petsc4py --install-option="--with-64-bit-indices" but I do get the error copied below. Any ideas on what I could do? Should I try to use a different method of install? thanks aurelien service7>479% pip install --user petsc petsc4py --install-option="--with-64-bit-indices" /home1/caparmor/aponte/.local/lib/python2.7/site-packages/pip/commands/install.py:194: UserWarning: Disabling all use of wheels due to the use of --build-options / --global-options / --install-options. cmdoptions.check_install_build_global(options) Collecting petsc Downloading petsc-3.7.2.1.tar.gz (8.7MB) 100% |################################| 8.7MB 116kB/s Collecting petsc4py Downloading petsc4py-3.7.0.tar.gz (1.7MB) 100% |################################| 1.7MB 415kB/s Requirement already satisfied: numpy in /home1/caparmor/aponte/.local/lib/python2.7/site-packages (from petsc4py) Skipping bdist_wheel for petsc, due to binaries being disabled for it. Skipping bdist_wheel for petsc4py, due to binaries being disabled for it. Installing collected packages: petsc, petsc4py Running setup.py install for petsc ... error Complete output from command /appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=: usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] or: -c --help [cmd1 cmd2 ...] or: -c --help-commands or: -c cmd --help error: option --with-64-bit-indices not recognized ---------------------------------------- Command "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=" failed with error code 1 in /tmp/pip-build-3C49gO/petsc/ -- Aur?lien Ponte Tel: (+33) 2 98 22 40 73 Fax: (+33) 2 98 22 44 96 UMR 6523, IFREMER ZI de la Pointe du Diable CS 10070 29280 Plouzan? From bsmith at mcs.anl.gov Sat Dec 17 13:19:41 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 17 Dec 2016 13:19:41 -0600 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: References: Message-ID: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> Looks like --install-option= are options for pip not the underlying package. Lisandro, how does one do what seems to be a simple request? 
> On Dec 17, 2016, at 9:00 AM, Aurelien Ponte wrote: > > Hi all, > > I am trying to install petsc4py and petsc with the --with-64-bit-indices option. > > I followed the pip install described on the petsc4py bitbucket with some slight modifications: > > module load python/2.7.10_gnu-4.9.2 > wget https://bootstrap.pypa.io/get-pip.py > python get-pip.py --user > setenv MPICC mpiicc > pip install --user --upgrade mpi4py > pip install --user --upgrade numpy > pip install --user petsc petsc4py --install-option="--with-64-bit-indices" > > but I do get the error copied below. > > Any ideas on what I could do? > > Should I try to use a different method of install? > > thanks > > aurelien > > > > > service7>479% pip install --user petsc petsc4py --install-option="--with-64-bit-indices" > /home1/caparmor/aponte/.local/lib/python2.7/site-packages/pip/commands/install.py:194: UserWarning: Disabling all use of wheels due to the use of --build-options / --global-options / --install-options. > cmdoptions.check_install_build_global(options) > Collecting petsc > Downloading petsc-3.7.2.1.tar.gz (8.7MB) > 100% |################################| 8.7MB 116kB/s > Collecting petsc4py > Downloading petsc4py-3.7.0.tar.gz (1.7MB) > 100% |################################| 1.7MB 415kB/s > Requirement already satisfied: numpy in /home1/caparmor/aponte/.local/lib/python2.7/site-packages (from petsc4py) > Skipping bdist_wheel for petsc, due to binaries being disabled for it. > Skipping bdist_wheel for petsc4py, due to binaries being disabled for it. > Installing collected packages: petsc, petsc4py > Running setup.py install for petsc ... error > Complete output from command /appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=: > usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] > or: -c --help [cmd1 cmd2 ...] > or: -c --help-commands > or: -c cmd --help > > error: option --with-64-bit-indices not recognized > > ---------------------------------------- > Command "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=" failed with error code 1 in /tmp/pip-build-3C49gO/petsc/ > > > -- > Aur?lien Ponte > Tel: (+33) 2 98 22 40 73 > Fax: (+33) 2 98 22 44 96 > UMR 6523, IFREMER > ZI de la Pointe du Diable > CS 10070 > 29280 Plouzan? 
> From aurelien.ponte at ifremer.fr Sat Dec 17 14:18:34 2016 From: aurelien.ponte at ifremer.fr (Aurelien Ponte) Date: Sat, 17 Dec 2016 21:18:34 +0100 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> Message-ID: <94503b7b-2a40-ec17-a512-8490d0a6ca38@ifremer.fr> Ok, while waiting for an answer for the pip approach, I am trying another one: module load python/2.7.10_gnu-4.9.2 setenv MPICC mpiicc setenv PETSC_DIR /home1/caparmor/aponte/petsc/petsc-3.7.4 setenv PETSC_ARCH linux-gnu-intel wget http://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.7.4.tar.gz wget https://bitbucket.org/petsc/petsc4py/downloads/petsc4py-3.7.0.tar.gz pip install --user --upgrade cython (not sure cython is required if not in dev mode) cd /home1/caparmor/aponte/petsc/petsc-3.7.4 ./configure PETSC_ARCH=linux-gnu-intel --with-cc=mpiicc --with-fc=mpiifort --with-blas-lapack-dir=/appli/intel/Compiler/11.1/073/mkl --with-64-bit-indices --download-petsc4py=/home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz But the latter fails with the following message (which I do not understand as petsc4py-3.7.0.tar.gz is indeed in the right place): ================================================================================ TEST configureLibrary from config.packages.petsc4py(/home1/caparmor/aponte/petsc/petsc-3.7.4/config/BuildSystem/config/packages/petsc4py.py:82) TESTING: configureLibrary from config.packages.petsc4py(config/BuildSystem/config/packages/petsc4py.py:82) Looking for PETSC4PY at git.petsc4py, hg.petsc4py or a directory starting with petsc-petsc4py Could not locate an existing copy of PETSC4PY: [] Downloading petsc4py =============================================================================== Trying to download file:///home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz for PETSC4PY =============================================================================== Downloading file:///home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz to /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages/_d_petsc4py-3.7.0.tar.gz Extracting /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages/_d_petsc4py-3.7.0.tar.gz Executing: cd /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages; chmod -R a+r petsc4py-3.7.0;find petsc4py-3.7.0 -type d -name "*" -exec chmod a+rx {} \; Looking for PETSC4PY at git.petsc4py, hg.petsc4py or a directory starting with petsc-petsc4py Could not locate an existing copy of PETSC4PY: ['petsc4py-3.7.0'] ERROR: Failed to download PETSC4PY **** Configure header /tmp/petsc-ViI4NW/confdefs.h **** any ideas for this one? thanks aurelien Le 17/12/2016 ? 20:19, Barry Smith a ?crit : > Looks like --install-option= are options for pip not the underlying package. > > Lisandro, how does one do what seems to be a simple request? > > >> On Dec 17, 2016, at 9:00 AM, Aurelien Ponte wrote: >> >> Hi all, >> >> I am trying to install petsc4py and petsc with the --with-64-bit-indices option. >> >> I followed the pip install described on the petsc4py bitbucket with some slight modifications: >> >> module load python/2.7.10_gnu-4.9.2 >> wget https://bootstrap.pypa.io/get-pip.py >> python get-pip.py --user >> setenv MPICC mpiicc >> pip install --user --upgrade mpi4py >> pip install --user --upgrade numpy >> pip install --user petsc petsc4py --install-option="--with-64-bit-indices" >> >> but I do get the error copied below. 
>> >> Any ideas on what I could do? >> >> Should I try to use a different method of install? >> >> thanks >> >> aurelien >> >> >> >> >> service7>479% pip install --user petsc petsc4py --install-option="--with-64-bit-indices" >> /home1/caparmor/aponte/.local/lib/python2.7/site-packages/pip/commands/install.py:194: UserWarning: Disabling all use of wheels due to the use of --build-options / --global-options / --install-options. >> cmdoptions.check_install_build_global(options) >> Collecting petsc >> Downloading petsc-3.7.2.1.tar.gz (8.7MB) >> 100% |################################| 8.7MB 116kB/s >> Collecting petsc4py >> Downloading petsc4py-3.7.0.tar.gz (1.7MB) >> 100% |################################| 1.7MB 415kB/s >> Requirement already satisfied: numpy in /home1/caparmor/aponte/.local/lib/python2.7/site-packages (from petsc4py) >> Skipping bdist_wheel for petsc, due to binaries being disabled for it. >> Skipping bdist_wheel for petsc4py, due to binaries being disabled for it. >> Installing collected packages: petsc, petsc4py >> Running setup.py install for petsc ... error >> Complete output from command /appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=: >> usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] >> or: -c --help [cmd1 cmd2 ...] >> or: -c --help-commands >> or: -c cmd --help >> >> error: option --with-64-bit-indices not recognized >> >> ---------------------------------------- >> Command "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=" failed with error code 1 in /tmp/pip-build-3C49gO/petsc/ >> >> >> -- >> Aur?lien Ponte >> Tel: (+33) 2 98 22 40 73 >> Fax: (+33) 2 98 22 44 96 >> UMR 6523, IFREMER >> ZI de la Pointe du Diable >> CS 10070 >> 29280 Plouzan? >> -- Aur?lien Ponte Tel: (+33) 2 98 22 40 73 Fax: (+33) 2 98 22 44 96 UMR 6523, IFREMER ZI de la Pointe du Diable CS 10070 29280 Plouzan? From bsmith at mcs.anl.gov Sat Dec 17 14:24:59 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 17 Dec 2016 14:24:59 -0600 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: <94503b7b-2a40-ec17-a512-8490d0a6ca38@ifremer.fr> References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> <94503b7b-2a40-ec17-a512-8490d0a6ca38@ifremer.fr> Message-ID: <4D9F88A0-9187-4ABD-94F6-0030D5FA295B@mcs.anl.gov> Please do ls /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages and send the results. It looks like we may have a bug in checking for the correct file. Sorry about this, it is not suppose to be this difficult. 
Barry > On Dec 17, 2016, at 2:18 PM, Aurelien Ponte wrote: > > Ok, while waiting for an answer for the pip approach, I am trying another one: > > module load python/2.7.10_gnu-4.9.2 > setenv MPICC mpiicc > setenv PETSC_DIR /home1/caparmor/aponte/petsc/petsc-3.7.4 > setenv PETSC_ARCH linux-gnu-intel > > wget http://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.7.4.tar.gz > wget https://bitbucket.org/petsc/petsc4py/downloads/petsc4py-3.7.0.tar.gz > pip install --user --upgrade cython (not sure cython is required if not in dev mode) > cd /home1/caparmor/aponte/petsc/petsc-3.7.4 > ./configure PETSC_ARCH=linux-gnu-intel --with-cc=mpiicc --with-fc=mpiifort --with-blas-lapack-dir=/appli/intel/Compiler/11.1/073/mkl --with-64-bit-indices --download-petsc4py=/home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz > > But the latter fails with the following message (which I do not understand as petsc4py-3.7.0.tar.gz is indeed in the right place): > > ================================================================================ > TEST configureLibrary from config.packages.petsc4py(/home1/caparmor/aponte/petsc/petsc-3.7.4/config/BuildSystem/config/packages/petsc4py.py:82) > TESTING: configureLibrary from config.packages.petsc4py(config/BuildSystem/config/packages/petsc4py.py:82) > Looking for PETSC4PY at git.petsc4py, hg.petsc4py or a directory starting with petsc-petsc4py > Could not locate an existing copy of PETSC4PY: > [] > Downloading petsc4py > =============================================================================== > Trying to download file:///home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz for PETSC4PY > =============================================================================== > > Downloading file:///home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz to /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages/_d_petsc4py-3.7.0.tar.gz > Extracting /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages/_d_petsc4py-3.7.0.tar.gz > Executing: cd /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages; chmod -R a+r petsc4py-3.7.0;find petsc4py-3.7.0 -type d -name "*" -exec chmod a+rx {} \; > Looking for PETSC4PY at git.petsc4py, hg.petsc4py or a directory starting with petsc-petsc4py > Could not locate an existing copy of PETSC4PY: > ['petsc4py-3.7.0'] > ERROR: Failed to download PETSC4PY > **** Configure header /tmp/petsc-ViI4NW/confdefs.h **** > > > any ideas for this one? > thanks > aurelien > > > Le 17/12/2016 ? 20:19, Barry Smith a ?crit : >> Looks like --install-option= are options for pip not the underlying package. >> >> Lisandro, how does one do what seems to be a simple request? >> >> >>> On Dec 17, 2016, at 9:00 AM, Aurelien Ponte wrote: >>> >>> Hi all, >>> >>> I am trying to install petsc4py and petsc with the --with-64-bit-indices option. >>> >>> I followed the pip install described on the petsc4py bitbucket with some slight modifications: >>> >>> module load python/2.7.10_gnu-4.9.2 >>> wget https://bootstrap.pypa.io/get-pip.py >>> python get-pip.py --user >>> setenv MPICC mpiicc >>> pip install --user --upgrade mpi4py >>> pip install --user --upgrade numpy >>> pip install --user petsc petsc4py --install-option="--with-64-bit-indices" >>> >>> but I do get the error copied below. >>> >>> Any ideas on what I could do? >>> >>> Should I try to use a different method of install? 
>>> >>> thanks >>> >>> aurelien >>> >>> >>> >>> >>> service7>479% pip install --user petsc petsc4py --install-option="--with-64-bit-indices" >>> /home1/caparmor/aponte/.local/lib/python2.7/site-packages/pip/commands/install.py:194: UserWarning: Disabling all use of wheels due to the use of --build-options / --global-options / --install-options. >>> cmdoptions.check_install_build_global(options) >>> Collecting petsc >>> Downloading petsc-3.7.2.1.tar.gz (8.7MB) >>> 100% |################################| 8.7MB 116kB/s >>> Collecting petsc4py >>> Downloading petsc4py-3.7.0.tar.gz (1.7MB) >>> 100% |################################| 1.7MB 415kB/s >>> Requirement already satisfied: numpy in /home1/caparmor/aponte/.local/lib/python2.7/site-packages (from petsc4py) >>> Skipping bdist_wheel for petsc, due to binaries being disabled for it. >>> Skipping bdist_wheel for petsc4py, due to binaries being disabled for it. >>> Installing collected packages: petsc, petsc4py >>> Running setup.py install for petsc ... error >>> Complete output from command /appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=: >>> usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] >>> or: -c --help [cmd1 cmd2 ...] >>> or: -c --help-commands >>> or: -c cmd --help >>> >>> error: option --with-64-bit-indices not recognized >>> >>> ---------------------------------------- >>> Command "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=" failed with error code 1 in /tmp/pip-build-3C49gO/petsc/ >>> >>> >>> -- >>> Aur?lien Ponte >>> Tel: (+33) 2 98 22 40 73 >>> Fax: (+33) 2 98 22 44 96 >>> UMR 6523, IFREMER >>> ZI de la Pointe du Diable >>> CS 10070 >>> 29280 Plouzan? >>> > > > -- > Aur?lien Ponte > Tel: (+33) 2 98 22 40 73 > Fax: (+33) 2 98 22 44 96 > UMR 6523, IFREMER > ZI de la Pointe du Diable > CS 10070 > 29280 Plouzan? > From lawrence.mitchell at imperial.ac.uk Sat Dec 17 15:36:06 2016 From: lawrence.mitchell at imperial.ac.uk (Lawrence Mitchell) Date: Sat, 17 Dec 2016 21:36:06 +0000 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> Message-ID: <321A525C-FE98-483A-8B8F-7FCB2C343135@imperial.ac.uk> > On 17 Dec 2016, at 19:19, Barry Smith wrote: > > Looks like --install-option= are options for pip not the underlying package. > > Lisandro, how does one do what seems to be a simple request? 
Set PETSC_CONFIGURE_OPTIONS to any additional flags you want to pass to configure during pip install From aurelien.ponte at ifremer.fr Sat Dec 17 15:41:50 2016 From: aurelien.ponte at ifremer.fr (Aurelien Ponte) Date: Sat, 17 Dec 2016 22:41:50 +0100 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: <4D9F88A0-9187-4ABD-94F6-0030D5FA295B@mcs.anl.gov> References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> <94503b7b-2a40-ec17-a512-8490d0a6ca38@ifremer.fr> <4D9F88A0-9187-4ABD-94F6-0030D5FA295B@mcs.anl.gov> Message-ID: <6269e3b2-92b4-01b0-3856-18fc3bb7cf43@ifremer.fr> no worries: service7>501% ls /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages petsc4py-3.7.0 service7>502% thanks aurelien Le 17/12/2016 ? 21:24, Barry Smith a ?crit : > Please do > > ls /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages > > and send the results. It looks like we may have a bug in checking for the correct file. > > Sorry about this, it is not suppose to be this difficult. > > Barry > > >> On Dec 17, 2016, at 2:18 PM, Aurelien Ponte wrote: >> >> Ok, while waiting for an answer for the pip approach, I am trying another one: >> >> module load python/2.7.10_gnu-4.9.2 >> setenv MPICC mpiicc >> setenv PETSC_DIR /home1/caparmor/aponte/petsc/petsc-3.7.4 >> setenv PETSC_ARCH linux-gnu-intel >> >> wget http://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.7.4.tar.gz >> wget https://bitbucket.org/petsc/petsc4py/downloads/petsc4py-3.7.0.tar.gz >> pip install --user --upgrade cython (not sure cython is required if not in dev mode) >> cd /home1/caparmor/aponte/petsc/petsc-3.7.4 >> ./configure PETSC_ARCH=linux-gnu-intel --with-cc=mpiicc --with-fc=mpiifort --with-blas-lapack-dir=/appli/intel/Compiler/11.1/073/mkl --with-64-bit-indices --download-petsc4py=/home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz >> >> But the latter fails with the following message (which I do not understand as petsc4py-3.7.0.tar.gz is indeed in the right place): >> >> ================================================================================ >> TEST configureLibrary from config.packages.petsc4py(/home1/caparmor/aponte/petsc/petsc-3.7.4/config/BuildSystem/config/packages/petsc4py.py:82) >> TESTING: configureLibrary from config.packages.petsc4py(config/BuildSystem/config/packages/petsc4py.py:82) >> Looking for PETSC4PY at git.petsc4py, hg.petsc4py or a directory starting with petsc-petsc4py >> Could not locate an existing copy of PETSC4PY: >> [] >> Downloading petsc4py >> =============================================================================== >> Trying to download file:///home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz for PETSC4PY >> =============================================================================== >> >> Downloading file:///home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz to /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages/_d_petsc4py-3.7.0.tar.gz >> Extracting /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages/_d_petsc4py-3.7.0.tar.gz >> Executing: cd /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages; chmod -R a+r petsc4py-3.7.0;find petsc4py-3.7.0 -type d -name "*" -exec chmod a+rx {} \; >> Looking for PETSC4PY at git.petsc4py, hg.petsc4py or a directory starting with petsc-petsc4py >> Could not locate an existing copy of PETSC4PY: >> ['petsc4py-3.7.0'] >> ERROR: Failed to download PETSC4PY >> **** Configure header /tmp/petsc-ViI4NW/confdefs.h **** >> >> >> any ideas for this 
one? >> thanks >> aurelien >> >> >> Le 17/12/2016 ? 20:19, Barry Smith a ?crit : >>> Looks like --install-option= are options for pip not the underlying package. >>> >>> Lisandro, how does one do what seems to be a simple request? >>> >>> >>>> On Dec 17, 2016, at 9:00 AM, Aurelien Ponte wrote: >>>> >>>> Hi all, >>>> >>>> I am trying to install petsc4py and petsc with the --with-64-bit-indices option. >>>> >>>> I followed the pip install described on the petsc4py bitbucket with some slight modifications: >>>> >>>> module load python/2.7.10_gnu-4.9.2 >>>> wget https://bootstrap.pypa.io/get-pip.py >>>> python get-pip.py --user >>>> setenv MPICC mpiicc >>>> pip install --user --upgrade mpi4py >>>> pip install --user --upgrade numpy >>>> pip install --user petsc petsc4py --install-option="--with-64-bit-indices" >>>> >>>> but I do get the error copied below. >>>> >>>> Any ideas on what I could do? >>>> >>>> Should I try to use a different method of install? >>>> >>>> thanks >>>> >>>> aurelien >>>> >>>> >>>> >>>> >>>> service7>479% pip install --user petsc petsc4py --install-option="--with-64-bit-indices" >>>> /home1/caparmor/aponte/.local/lib/python2.7/site-packages/pip/commands/install.py:194: UserWarning: Disabling all use of wheels due to the use of --build-options / --global-options / --install-options. >>>> cmdoptions.check_install_build_global(options) >>>> Collecting petsc >>>> Downloading petsc-3.7.2.1.tar.gz (8.7MB) >>>> 100% |################################| 8.7MB 116kB/s >>>> Collecting petsc4py >>>> Downloading petsc4py-3.7.0.tar.gz (1.7MB) >>>> 100% |################################| 1.7MB 415kB/s >>>> Requirement already satisfied: numpy in /home1/caparmor/aponte/.local/lib/python2.7/site-packages (from petsc4py) >>>> Skipping bdist_wheel for petsc, due to binaries being disabled for it. >>>> Skipping bdist_wheel for petsc4py, due to binaries being disabled for it. >>>> Installing collected packages: petsc, petsc4py >>>> Running setup.py install for petsc ... error >>>> Complete output from command /appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=: >>>> usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] >>>> or: -c --help [cmd1 cmd2 ...] >>>> or: -c --help-commands >>>> or: -c cmd --help >>>> >>>> error: option --with-64-bit-indices not recognized >>>> >>>> ---------------------------------------- >>>> Command "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=" failed with error code 1 in /tmp/pip-build-3C49gO/petsc/ >>>> >>>> >>>> -- >>>> Aur?lien Ponte >>>> Tel: (+33) 2 98 22 40 73 >>>> Fax: (+33) 2 98 22 44 96 >>>> UMR 6523, IFREMER >>>> ZI de la Pointe du Diable >>>> CS 10070 >>>> 29280 Plouzan? 
>>>> >> >> -- >> Aur?lien Ponte >> Tel: (+33) 2 98 22 40 73 >> Fax: (+33) 2 98 22 44 96 >> UMR 6523, IFREMER >> ZI de la Pointe du Diable >> CS 10070 >> 29280 Plouzan? >> -- Aur?lien Ponte Tel: (+33) 2 98 22 40 73 Fax: (+33) 2 98 22 44 96 UMR 6523, IFREMER ZI de la Pointe du Diable CS 10070 29280 Plouzan? From bsmith at mcs.anl.gov Sat Dec 17 15:41:51 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 17 Dec 2016 15:41:51 -0600 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: <321A525C-FE98-483A-8B8F-7FCB2C343135@imperial.ac.uk> References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> <321A525C-FE98-483A-8B8F-7FCB2C343135@imperial.ac.uk> Message-ID: > On Dec 17, 2016, at 3:36 PM, Lawrence Mitchell wrote: > > > >> On 17 Dec 2016, at 19:19, Barry Smith wrote: >> >> Looks like --install-option= are options for pip not the underlying package. >> >> Lisandro, how does one do what seems to be a simple request? > > Set PETSC_CONFIGURE_OPTIONS to any additional flags you want to pass to configure during pip install Thanks. Maybe this info should be added to the install page. From balay at mcs.anl.gov Sat Dec 17 15:42:13 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 17 Dec 2016 15:42:13 -0600 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: <4D9F88A0-9187-4ABD-94F6-0030D5FA295B@mcs.anl.gov> References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> <94503b7b-2a40-ec17-a512-8490d0a6ca38@ifremer.fr> <4D9F88A0-9187-4ABD-94F6-0030D5FA295B@mcs.anl.gov> Message-ID: self.gitcommit = '026d6fa' # maint/3.7 from may-21-2026 self.download = ['git://https://bitbucket.org/petsc/petsc4py','https://bitbucket.org/petsc/petsc4py/get/'+self.gitcommit+'.tar.gz'] self.downloaddirname = 'petsc-petsc4py' Configure is setup to use tarball that is obtained from the gitcommit - i.e not the petsc4py release tarball. i.e use: https://bitbucket.org/petsc/petsc4py/get/026d6fa.tar.gz [or let petsc configure download it for you - instead of wget] Also I think its best to use gnu/mpi to be compatible with gnu/python. Satish On Sat, 17 Dec 2016, Barry Smith wrote: > > Please do > > ls /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages > > and send the results. It looks like we may have a bug in checking for the correct file. > > Sorry about this, it is not suppose to be this difficult. 
> > Barry > > > > On Dec 17, 2016, at 2:18 PM, Aurelien Ponte wrote: > > > > Ok, while waiting for an answer for the pip approach, I am trying another one: > > > > module load python/2.7.10_gnu-4.9.2 > > setenv MPICC mpiicc > > setenv PETSC_DIR /home1/caparmor/aponte/petsc/petsc-3.7.4 > > setenv PETSC_ARCH linux-gnu-intel > > > > wget http://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.7.4.tar.gz > > wget https://bitbucket.org/petsc/petsc4py/downloads/petsc4py-3.7.0.tar.gz > > pip install --user --upgrade cython (not sure cython is required if not in dev mode) > > cd /home1/caparmor/aponte/petsc/petsc-3.7.4 > > ./configure PETSC_ARCH=linux-gnu-intel --with-cc=mpiicc --with-fc=mpiifort --with-blas-lapack-dir=/appli/intel/Compiler/11.1/073/mkl --with-64-bit-indices --download-petsc4py=/home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz > > > > But the latter fails with the following message (which I do not understand as petsc4py-3.7.0.tar.gz is indeed in the right place): > > > > ================================================================================ > > TEST configureLibrary from config.packages.petsc4py(/home1/caparmor/aponte/petsc/petsc-3.7.4/config/BuildSystem/config/packages/petsc4py.py:82) > > TESTING: configureLibrary from config.packages.petsc4py(config/BuildSystem/config/packages/petsc4py.py:82) > > Looking for PETSC4PY at git.petsc4py, hg.petsc4py or a directory starting with petsc-petsc4py > > Could not locate an existing copy of PETSC4PY: > > [] > > Downloading petsc4py > > =============================================================================== > > Trying to download file:///home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz for PETSC4PY > > =============================================================================== > > > > Downloading file:///home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz to /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages/_d_petsc4py-3.7.0.tar.gz > > Extracting /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages/_d_petsc4py-3.7.0.tar.gz > > Executing: cd /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages; chmod -R a+r petsc4py-3.7.0;find petsc4py-3.7.0 -type d -name "*" -exec chmod a+rx {} \; > > Looking for PETSC4PY at git.petsc4py, hg.petsc4py or a directory starting with petsc-petsc4py > > Could not locate an existing copy of PETSC4PY: > > ['petsc4py-3.7.0'] > > ERROR: Failed to download PETSC4PY > > **** Configure header /tmp/petsc-ViI4NW/confdefs.h **** > > > > > > any ideas for this one? > > thanks > > aurelien > > > > > > Le 17/12/2016 ? 20:19, Barry Smith a ?crit : > >> Looks like --install-option= are options for pip not the underlying package. > >> > >> Lisandro, how does one do what seems to be a simple request? > >> > >> > >>> On Dec 17, 2016, at 9:00 AM, Aurelien Ponte wrote: > >>> > >>> Hi all, > >>> > >>> I am trying to install petsc4py and petsc with the --with-64-bit-indices option. > >>> > >>> I followed the pip install described on the petsc4py bitbucket with some slight modifications: > >>> > >>> module load python/2.7.10_gnu-4.9.2 > >>> wget https://bootstrap.pypa.io/get-pip.py > >>> python get-pip.py --user > >>> setenv MPICC mpiicc > >>> pip install --user --upgrade mpi4py > >>> pip install --user --upgrade numpy > >>> pip install --user petsc petsc4py --install-option="--with-64-bit-indices" > >>> > >>> but I do get the error copied below. > >>> > >>> Any ideas on what I could do? 
> >>> > >>> Should I try to use a different method of install? > >>> > >>> thanks > >>> > >>> aurelien > >>> > >>> > >>> > >>> > >>> service7>479% pip install --user petsc petsc4py --install-option="--with-64-bit-indices" > >>> /home1/caparmor/aponte/.local/lib/python2.7/site-packages/pip/commands/install.py:194: UserWarning: Disabling all use of wheels due to the use of --build-options / --global-options / --install-options. > >>> cmdoptions.check_install_build_global(options) > >>> Collecting petsc > >>> Downloading petsc-3.7.2.1.tar.gz (8.7MB) > >>> 100% |################################| 8.7MB 116kB/s > >>> Collecting petsc4py > >>> Downloading petsc4py-3.7.0.tar.gz (1.7MB) > >>> 100% |################################| 1.7MB 415kB/s > >>> Requirement already satisfied: numpy in /home1/caparmor/aponte/.local/lib/python2.7/site-packages (from petsc4py) > >>> Skipping bdist_wheel for petsc, due to binaries being disabled for it. > >>> Skipping bdist_wheel for petsc4py, due to binaries being disabled for it. > >>> Installing collected packages: petsc, petsc4py > >>> Running setup.py install for petsc ... error > >>> Complete output from command /appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=: > >>> usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] > >>> or: -c --help [cmd1 cmd2 ...] > >>> or: -c --help-commands > >>> or: -c cmd --help > >>> > >>> error: option --with-64-bit-indices not recognized > >>> > >>> ---------------------------------------- > >>> Command "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=" failed with error code 1 in /tmp/pip-build-3C49gO/petsc/ > >>> > >>> > >>> -- > >>> Aur?lien Ponte > >>> Tel: (+33) 2 98 22 40 73 > >>> Fax: (+33) 2 98 22 44 96 > >>> UMR 6523, IFREMER > >>> ZI de la Pointe du Diable > >>> CS 10070 > >>> 29280 Plouzan? > >>> > > > > > > -- > > Aur?lien Ponte > > Tel: (+33) 2 98 22 40 73 > > Fax: (+33) 2 98 22 44 96 > > UMR 6523, IFREMER > > ZI de la Pointe du Diable > > CS 10070 > > 29280 Plouzan? > > > > From bsmith at mcs.anl.gov Sat Dec 17 16:35:59 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 17 Dec 2016 16:35:59 -0600 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> <94503b7b-2a40-ec17-a512-8490d0a6ca38@ifremer.fr> <4D9F88A0-9187-4ABD-94F6-0030D5FA295B@mcs.anl.gov> Message-ID: <18614B33-1728-4F72-82AF-9FBC4348D2F6@mcs.anl.gov> > On Dec 17, 2016, at 3:42 PM, Satish Balay wrote: > > self.gitcommit = '026d6fa' # maint/3.7 from may-21-2026 > self.download = ['git://https://bitbucket.org/petsc/petsc4py','https://bitbucket.org/petsc/petsc4py/get/'+self.gitcommit+'.tar.gz'] > self.downloaddirname = 'petsc-petsc4py' > > Configure is setup to use tarball that is obtained from the gitcommit - i.e not the petsc4py release tarball. 
This is nuts, how is anyone suppose to know this obscure mis-feature? I have fixed it in the branch barry/allow-multiple-downloaddirnames so that it can use the petsc4py release tarball as well. Barry > i.e use: > > https://bitbucket.org/petsc/petsc4py/get/026d6fa.tar.gz > > [or let petsc configure download it for you - instead of wget] > > Also I think its best to use gnu/mpi to be compatible with gnu/python. > > Satish > > On Sat, 17 Dec 2016, Barry Smith wrote: > >> >> Please do >> >> ls /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages >> >> and send the results. It looks like we may have a bug in checking for the correct file. >> >> Sorry about this, it is not suppose to be this difficult. >> >> Barry >> >> >>> On Dec 17, 2016, at 2:18 PM, Aurelien Ponte wrote: >>> >>> Ok, while waiting for an answer for the pip approach, I am trying another one: >>> >>> module load python/2.7.10_gnu-4.9.2 >>> setenv MPICC mpiicc >>> setenv PETSC_DIR /home1/caparmor/aponte/petsc/petsc-3.7.4 >>> setenv PETSC_ARCH linux-gnu-intel >>> >>> wget http://ftp.mcs.anl.gov/pub/petsc/release-snapshots/petsc-3.7.4.tar.gz >>> wget https://bitbucket.org/petsc/petsc4py/downloads/petsc4py-3.7.0.tar.gz >>> pip install --user --upgrade cython (not sure cython is required if not in dev mode) >>> cd /home1/caparmor/aponte/petsc/petsc-3.7.4 >>> ./configure PETSC_ARCH=linux-gnu-intel --with-cc=mpiicc --with-fc=mpiifort --with-blas-lapack-dir=/appli/intel/Compiler/11.1/073/mkl --with-64-bit-indices --download-petsc4py=/home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz >>> >>> But the latter fails with the following message (which I do not understand as petsc4py-3.7.0.tar.gz is indeed in the right place): >>> >>> ================================================================================ >>> TEST configureLibrary from config.packages.petsc4py(/home1/caparmor/aponte/petsc/petsc-3.7.4/config/BuildSystem/config/packages/petsc4py.py:82) >>> TESTING: configureLibrary from config.packages.petsc4py(config/BuildSystem/config/packages/petsc4py.py:82) >>> Looking for PETSC4PY at git.petsc4py, hg.petsc4py or a directory starting with petsc-petsc4py >>> Could not locate an existing copy of PETSC4PY: >>> [] >>> Downloading petsc4py >>> =============================================================================== >>> Trying to download file:///home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz for PETSC4PY >>> =============================================================================== >>> >>> Downloading file:///home1/caparmor/aponte/petsc/petsc4py-3.7.0.tar.gz to /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages/_d_petsc4py-3.7.0.tar.gz >>> Extracting /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages/_d_petsc4py-3.7.0.tar.gz >>> Executing: cd /home1/caparmor/aponte/petsc/petsc-3.7.4/linux-gnu-intel/externalpackages; chmod -R a+r petsc4py-3.7.0;find petsc4py-3.7.0 -type d -name "*" -exec chmod a+rx {} \; >>> Looking for PETSC4PY at git.petsc4py, hg.petsc4py or a directory starting with petsc-petsc4py >>> Could not locate an existing copy of PETSC4PY: >>> ['petsc4py-3.7.0'] >>> ERROR: Failed to download PETSC4PY >>> **** Configure header /tmp/petsc-ViI4NW/confdefs.h **** >>> >>> >>> any ideas for this one? >>> thanks >>> aurelien >>> >>> >>> Le 17/12/2016 ? 20:19, Barry Smith a ?crit : >>>> Looks like --install-option= are options for pip not the underlying package. >>>> >>>> Lisandro, how does one do what seems to be a simple request? 
>>>> >>>> >>>>> On Dec 17, 2016, at 9:00 AM, Aurelien Ponte wrote: >>>>> >>>>> Hi all, >>>>> >>>>> I am trying to install petsc4py and petsc with the --with-64-bit-indices option. >>>>> >>>>> I followed the pip install described on the petsc4py bitbucket with some slight modifications: >>>>> >>>>> module load python/2.7.10_gnu-4.9.2 >>>>> wget https://bootstrap.pypa.io/get-pip.py >>>>> python get-pip.py --user >>>>> setenv MPICC mpiicc >>>>> pip install --user --upgrade mpi4py >>>>> pip install --user --upgrade numpy >>>>> pip install --user petsc petsc4py --install-option="--with-64-bit-indices" >>>>> >>>>> but I do get the error copied below. >>>>> >>>>> Any ideas on what I could do? >>>>> >>>>> Should I try to use a different method of install? >>>>> >>>>> thanks >>>>> >>>>> aurelien >>>>> >>>>> >>>>> >>>>> >>>>> service7>479% pip install --user petsc petsc4py --install-option="--with-64-bit-indices" >>>>> /home1/caparmor/aponte/.local/lib/python2.7/site-packages/pip/commands/install.py:194: UserWarning: Disabling all use of wheels due to the use of --build-options / --global-options / --install-options. >>>>> cmdoptions.check_install_build_global(options) >>>>> Collecting petsc >>>>> Downloading petsc-3.7.2.1.tar.gz (8.7MB) >>>>> 100% |################################| 8.7MB 116kB/s >>>>> Collecting petsc4py >>>>> Downloading petsc4py-3.7.0.tar.gz (1.7MB) >>>>> 100% |################################| 1.7MB 415kB/s >>>>> Requirement already satisfied: numpy in /home1/caparmor/aponte/.local/lib/python2.7/site-packages (from petsc4py) >>>>> Skipping bdist_wheel for petsc, due to binaries being disabled for it. >>>>> Skipping bdist_wheel for petsc4py, due to binaries being disabled for it. >>>>> Installing collected packages: petsc, petsc4py >>>>> Running setup.py install for petsc ... error >>>>> Complete output from command /appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=: >>>>> usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] >>>>> or: -c --help [cmd1 cmd2 ...] >>>>> or: -c --help-commands >>>>> or: -c cmd --help >>>>> >>>>> error: option --with-64-bit-indices not recognized >>>>> >>>>> ---------------------------------------- >>>>> Command "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-3C49gO/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-iuRtIV-record/install-record.txt --single-version-externally-managed --compile --with-64-bit-indices --user --prefix=" failed with error code 1 in /tmp/pip-build-3C49gO/petsc/ >>>>> >>>>> >>>>> -- >>>>> Aur?lien Ponte >>>>> Tel: (+33) 2 98 22 40 73 >>>>> Fax: (+33) 2 98 22 44 96 >>>>> UMR 6523, IFREMER >>>>> ZI de la Pointe du Diable >>>>> CS 10070 >>>>> 29280 Plouzan? >>>>> >>> >>> >>> -- >>> Aur?lien Ponte >>> Tel: (+33) 2 98 22 40 73 >>> Fax: (+33) 2 98 22 44 96 >>> UMR 6523, IFREMER >>> ZI de la Pointe du Diable >>> CS 10070 >>> 29280 Plouzan? 
>>> >> >> From aurelien.ponte at ifremer.fr Sun Dec 18 07:57:35 2016 From: aurelien.ponte at ifremer.fr (Aurelien Ponte) Date: Sun, 18 Dec 2016 14:57:35 +0100 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: <321A525C-FE98-483A-8B8F-7FCB2C343135@imperial.ac.uk> References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> <321A525C-FE98-483A-8B8F-7FCB2C343135@imperial.ac.uk> Message-ID: Allright I got the following to complete: ### install pip: module load python/2.7.10_gnu-4.9.2 wget https://bootstrap.pypa.io/get-pip.py python get-pip.py --user # edit .cshrc: set path = ($path $home/.local/bin) + setenv LD_LIBRARY_PATH /home1/caparmor/aponte/.local/lib:${LD_LIBRARY_PATH} # setenv MPICC mpiicc pip install --user --upgrade mpi4py pip install --user --upgrade numpy setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices --with-blas-lapack-dir=/appli/intel/Compiler/11.1/073/mkl/lib/em64t' pip install --user petsc petsc4py But now I get the following at run time: *** libmkl_mc3.so *** failed with error : /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_mc3.so: undefined symbol: mkl_dft_commit_descriptor_s_c2c_md_omp *** libmkl_def.so *** failed with error : /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_def.so: undefined symbol: mkl_dft_commit_descriptor_s_c2c_md_omp MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so Any ideas? I will also try the other approach (from source). thanks aurelien Le 17/12/2016 ? 22:36, Lawrence Mitchell a ?crit : > >> On 17 Dec 2016, at 19:19, Barry Smith wrote: >> >> Looks like --install-option= are options for pip not the underlying package. >> >> Lisandro, how does one do what seems to be a simple request? > Set PETSC_CONFIGURE_OPTIONS to any additional flags you want to pass to configure during pip install -- Aur?lien Ponte Tel: (+33) 2 98 22 40 73 Fax: (+33) 2 98 22 44 96 UMR 6523, IFREMER ZI de la Pointe du Diable CS 10070 29280 Plouzan? From balay at mcs.anl.gov Sun Dec 18 09:52:04 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 18 Dec 2016 09:52:04 -0600 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> <321A525C-FE98-483A-8B8F-7FCB2C343135@imperial.ac.uk> Message-ID: How about using --download-fblaslapack instead of MKL? Satish On Sun, 18 Dec 2016, Aurelien Ponte wrote: > Allright I got the following to complete: > > ### install pip: > module load python/2.7.10_gnu-4.9.2 > wget https://bootstrap.pypa.io/get-pip.py > python get-pip.py --user > # edit .cshrc: set path = ($path $home/.local/bin) + setenv LD_LIBRARY_PATH > /home1/caparmor/aponte/.local/lib:${LD_LIBRARY_PATH} > > # > setenv MPICC mpiicc > pip install --user --upgrade mpi4py > pip install --user --upgrade numpy > setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices > --with-blas-lapack-dir=/appli/intel/Compiler/11.1/073/mkl/lib/em64t' > pip install --user petsc petsc4py > > But now I get the following at run time: > *** libmkl_mc3.so *** failed with error : > /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_mc3.so: undefined symbol: > mkl_dft_commit_descriptor_s_c2c_md_omp > *** libmkl_def.so *** failed with error : > /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_def.so: undefined symbol: > mkl_dft_commit_descriptor_s_c2c_md_omp > MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so > MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so > > > Any ideas? 
> > I will also try the other approach (from source). > > thanks > > aurelien > > > > > Le 17/12/2016 ? 22:36, Lawrence Mitchell a ?crit : > > > > > On 17 Dec 2016, at 19:19, Barry Smith wrote: > > > > > > Looks like --install-option= are options for pip not the underlying > > > package. > > > > > > Lisandro, how does one do what seems to be a simple request? > > Set PETSC_CONFIGURE_OPTIONS to any additional flags you want to pass to > > configure during pip install > > > From aurelien.ponte at ifremer.fr Sun Dec 18 13:54:09 2016 From: aurelien.ponte at ifremer.fr (Aurelien Ponte) Date: Sun, 18 Dec 2016 20:54:09 +0100 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> <321A525C-FE98-483A-8B8F-7FCB2C343135@imperial.ac.uk> Message-ID: Here is what I am trying now: setenv MPICC mpiicc pip install --user --upgrade --ignore-installed --no-cache-dir mpi4py pip install --user --upgrade --ignore-installed --no-cache-dir numpy setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices --with-fc=mpif90 --download-fblaslapack' pip install --user --upgrade --ignore-installed --no-cache-dir petsc petsc4py I get now the following error message (I'll send the logs to petsc-maint at mcs.anl.gov as suggested): CC arch-python-linux-x86_64/obj/src/vec/is/is/interface/index.o Fatal Error: Reading module mpi at line 1 column 2: Unexpected EOF gmake[2]: *** [arch-python-linux-x86_64/obj/src/sys/f90-mod/petscsysmod.o] Error 1 gmake[2]: *** Waiting for unfinished jobs.... /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/client.c(3): warning #161: unrecognized #pragma #pragma clang diagnostic ignored "-Wdeprecated-declarations" ^ /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/client.c(4): warning #161: unrecognized #pragma #pragma gcc diagnostic ignored "-Wdeprecated-declarations" ^ /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/google.c(3): warning #161: unrecognized #pragma #pragma clang diagnostic ignored "-Wdeprecated-declarations" ^ /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/google.c(4): warning #161: unrecognized #pragma #pragma gcc diagnostic ignored "-Wdeprecated-declarations" ^ /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/box.c(3): warning #161: unrecognized #pragma #pragma clang diagnostic ignored "-Wdeprecated-declarations" ^ /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/box.c(4): warning #161: unrecognized #pragma #pragma gcc diagnostic ignored "-Wdeprecated-declarations" ^ /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/globus.c(2): warning #161: unrecognized #pragma #pragma clang diagnostic ignored "-Wdeprecated-declarations" ^ /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/globus.c(3): warning #161: unrecognized #pragma #pragma gcc diagnostic ignored "-Wdeprecated-declarations" ^ gmake[2]: Leaving directory `/tmp/pip-build-t7pV1u/petsc' gmake[1]: *** [gnumake] Error 2 gmake[1]: Leaving directory `/tmp/pip-build-t7pV1u/petsc' **************************ERROR************************************* Error during compile, check arch-python-linux-x86_64/lib/petsc/conf/make.log Send it and arch-python-linux-x86_64/lib/petsc/conf/configure.log to petsc-maint at mcs.anl.gov ******************************************************************** make: *** [all] Error 1 Traceback (most recent call last): File "", line 1, in File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 302, in **metadata) File "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/core.py", line 151, in setup dist.run_commands() File 
"/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 219, in run build(self.dry_run) File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 164, in build if status != 0: raise RuntimeError(status) RuntimeError: 512 ---------------------------------------- Command "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-t7pV1u/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-ebXwB3-record/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-build-t7pV1u/petsc/ Le 18/12/2016 ? 16:52, Satish Balay a ?crit : > How about using --download-fblaslapack instead of MKL? > > Satish > > On Sun, 18 Dec 2016, Aurelien Ponte wrote: > >> Allright I got the following to complete: >> >> ### install pip: >> module load python/2.7.10_gnu-4.9.2 >> wget https://bootstrap.pypa.io/get-pip.py >> python get-pip.py --user >> # edit .cshrc: set path = ($path $home/.local/bin) + setenv LD_LIBRARY_PATH >> /home1/caparmor/aponte/.local/lib:${LD_LIBRARY_PATH} >> >> # >> setenv MPICC mpiicc >> pip install --user --upgrade mpi4py >> pip install --user --upgrade numpy >> setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices >> --with-blas-lapack-dir=/appli/intel/Compiler/11.1/073/mkl/lib/em64t' >> pip install --user petsc petsc4py >> >> But now I get the following at run time: >> *** libmkl_mc3.so *** failed with error : >> /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_mc3.so: undefined symbol: >> mkl_dft_commit_descriptor_s_c2c_md_omp >> *** libmkl_def.so *** failed with error : >> /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_def.so: undefined symbol: >> mkl_dft_commit_descriptor_s_c2c_md_omp >> MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so >> MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so >> >> >> Any ideas? >> >> I will also try the other approach (from source). >> >> thanks >> >> aurelien >> >> >> >> >> Le 17/12/2016 ? 22:36, Lawrence Mitchell a ?crit : >>>> On 17 Dec 2016, at 19:19, Barry Smith wrote: >>>> >>>> Looks like --install-option= are options for pip not the underlying >>>> package. >>>> >>>> Lisandro, how does one do what seems to be a simple request? >>> Set PETSC_CONFIGURE_OPTIONS to any additional flags you want to pass to >>> configure during pip install >> >> -- Aur?lien Ponte Tel: (+33) 2 98 22 40 73 Fax: (+33) 2 98 22 44 96 UMR 6523, IFREMER ZI de la Pointe du Diable CS 10070 29280 Plouzan? From bsmith at mcs.anl.gov Sun Dec 18 13:59:48 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 18 Dec 2016 13:59:48 -0600 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> <321A525C-FE98-483A-8B8F-7FCB2C343135@imperial.ac.uk> Message-ID: Do you need Fortran? Since you are using PETSc from python. 
If you do not need it then change --with-fc=mpif90 to --with-fc=0 Looks like something is wrong with the mpi Fortran 90 module > Fatal Error: Reading module mpi at line 1 column 2: Unexpected EOF > On Dec 18, 2016, at 1:54 PM, Aurelien Ponte wrote: > > Here is what I am trying now: > > setenv MPICC mpiicc > pip install --user --upgrade --ignore-installed --no-cache-dir mpi4py > pip install --user --upgrade --ignore-installed --no-cache-dir numpy > setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices --with-fc=mpif90 --download-fblaslapack' > pip install --user --upgrade --ignore-installed --no-cache-dir petsc petsc4py > > I get now the following error message (I'll send the logs to petsc-maint at mcs.anl.gov as suggested): > > CC arch-python-linux-x86_64/obj/src/vec/is/is/interface/index.o > Fatal Error: Reading module mpi at line 1 column 2: Unexpected EOF > gmake[2]: *** [arch-python-linux-x86_64/obj/src/sys/f90-mod/petscsysmod.o] Error 1 > gmake[2]: *** Waiting for unfinished jobs.... > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/client.c(3): warning #161: unrecognized #pragma > #pragma clang diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/client.c(4): warning #161: unrecognized #pragma > #pragma gcc diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/google.c(3): warning #161: unrecognized #pragma > #pragma clang diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/google.c(4): warning #161: unrecognized #pragma > #pragma gcc diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/box.c(3): warning #161: unrecognized #pragma > #pragma clang diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/box.c(4): warning #161: unrecognized #pragma > #pragma gcc diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/globus.c(2): warning #161: unrecognized #pragma > #pragma clang diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/globus.c(3): warning #161: unrecognized #pragma > #pragma gcc diagnostic ignored "-Wdeprecated-declarations" > ^ > > gmake[2]: Leaving directory `/tmp/pip-build-t7pV1u/petsc' > gmake[1]: *** [gnumake] Error 2 > gmake[1]: Leaving directory `/tmp/pip-build-t7pV1u/petsc' > **************************ERROR************************************* > Error during compile, check arch-python-linux-x86_64/lib/petsc/conf/make.log > Send it and arch-python-linux-x86_64/lib/petsc/conf/configure.log to petsc-maint at mcs.anl.gov > ******************************************************************** > make: *** [all] Error 1 > Traceback (most recent call last): > File "", line 1, in > File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 302, in > **metadata) > File "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/core.py", line 151, in setup > dist.run_commands() > File "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/dist.py", line 953, in run_commands > self.run_command(cmd) > File "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/dist.py", line 972, in run_command > cmd_obj.run() > File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 219, in run > build(self.dry_run) > File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 164, in build > if status != 0: raise RuntimeError(status) > RuntimeError: 512 > 
> ---------------------------------------- > Command "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-t7pV1u/petsc/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-ebXwB3-record/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-build-t7pV1u/petsc/ > > > > > Le 18/12/2016 ? 16:52, Satish Balay a ?crit : >> How about using --download-fblaslapack instead of MKL? >> >> Satish >> >> On Sun, 18 Dec 2016, Aurelien Ponte wrote: >> >>> Allright I got the following to complete: >>> >>> ### install pip: >>> module load python/2.7.10_gnu-4.9.2 >>> wget https://bootstrap.pypa.io/get-pip.py >>> python get-pip.py --user >>> # edit .cshrc: set path = ($path $home/.local/bin) + setenv LD_LIBRARY_PATH >>> /home1/caparmor/aponte/.local/lib:${LD_LIBRARY_PATH} >>> >>> # >>> setenv MPICC mpiicc >>> pip install --user --upgrade mpi4py >>> pip install --user --upgrade numpy >>> setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices >>> --with-blas-lapack-dir=/appli/intel/Compiler/11.1/073/mkl/lib/em64t' >>> pip install --user petsc petsc4py >>> >>> But now I get the following at run time: >>> *** libmkl_mc3.so *** failed with error : >>> /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_mc3.so: undefined symbol: >>> mkl_dft_commit_descriptor_s_c2c_md_omp >>> *** libmkl_def.so *** failed with error : >>> /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_def.so: undefined symbol: >>> mkl_dft_commit_descriptor_s_c2c_md_omp >>> MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so >>> MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so >>> >>> >>> Any ideas? >>> >>> I will also try the other approach (from source). >>> >>> thanks >>> >>> aurelien >>> >>> >>> >>> >>> Le 17/12/2016 ? 22:36, Lawrence Mitchell a ?crit : >>>>> On 17 Dec 2016, at 19:19, Barry Smith wrote: >>>>> >>>>> Looks like --install-option= are options for pip not the underlying >>>>> package. >>>>> >>>>> Lisandro, how does one do what seems to be a simple request? >>>> Set PETSC_CONFIGURE_OPTIONS to any additional flags you want to pass to >>>> configure during pip install >>> >>> > > > -- > Aur?lien Ponte > Tel: (+33) 2 98 22 40 73 > Fax: (+33) 2 98 22 44 96 > UMR 6523, IFREMER > ZI de la Pointe du Diable > CS 10070 > 29280 Plouzan? 
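
For reference, the whole thread is chasing the effect of --with-64-bit-indices, and once any of the configurations above actually builds it is easy to confirm that the option took effect: PetscInt becomes an 8-byte integer. Below is a minimal C check; the file name and printed text are invented for illustration, and from petsc4py the analogous check is, if I recall the petsc4py API correctly, inspecting the PETSc.IntType dtype.

/* checkint.c: print the width of PetscInt (illustrative sketch, not from the thread) */
#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);
  if (ierr) return ierr;
  /* With --with-64-bit-indices this should report 8 bytes, otherwise 4 */
  ierr = PetscPrintf(PETSC_COMM_WORLD, "sizeof(PetscInt) = %d bytes\n",
                     (int)sizeof(PetscInt));CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
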
> From balay at mcs.anl.gov Sun Dec 18 14:00:36 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 18 Dec 2016 14:00:36 -0600 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> <321A525C-FE98-483A-8B8F-7FCB2C343135@imperial.ac.uk> Message-ID: On Sun, 18 Dec 2016, Aurelien Ponte wrote: > Here is what I am trying now: > > setenv MPICC mpiicc Suggest not using intel compiler [petsc configure ignores MPICC env variable anyway] > pip install --user --upgrade --ignore-installed --no-cache-dir mpi4py > pip install --user --upgrade --ignore-installed --no-cache-dir numpy > setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices --with-fc=mpif90 Also specify compatible mpicc,mpicxx [i.e ones that use gcc/g++/gfortran - so that they are compatible with python] i.e --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 > --download-fblaslapack' > pip install --user --upgrade --ignore-installed --no-cache-dir petsc petsc4py > > I get now the following error message (I'll send the logs to > petsc-maint at mcs.anl.gov as suggested): > > CC arch-python-linux-x86_64/obj/src/vec/is/is/interface/index.o > Fatal Error: Reading module mpi at line 1 column 2: Unexpected EOF Some fortran module issue - happens if the compile and mpi have a mismatch.. Satish > gmake[2]: *** > [arch-python-linux-x86_64/obj/src/sys/f90-mod/petscsysmod.o] Error 1 > gmake[2]: *** Waiting for unfinished jobs.... > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/client.c(3): warning > #161: unrecognized #pragma > #pragma clang diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/client.c(4): warning #161: > unrecognized #pragma > #pragma gcc diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/google.c(3): warning #161: > unrecognized #pragma > #pragma clang diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/google.c(4): warning #161: > unrecognized #pragma > #pragma gcc diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/box.c(3): warning #161: > unrecognized #pragma > #pragma clang diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/box.c(4): warning #161: > unrecognized #pragma > #pragma gcc diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/globus.c(2): warning #161: > unrecognized #pragma > #pragma clang diagnostic ignored "-Wdeprecated-declarations" > ^ > > /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/globus.c(3): warning #161: > unrecognized #pragma > #pragma gcc diagnostic ignored "-Wdeprecated-declarations" > ^ > > gmake[2]: Leaving directory `/tmp/pip-build-t7pV1u/petsc' > gmake[1]: *** [gnumake] Error 2 > gmake[1]: Leaving directory `/tmp/pip-build-t7pV1u/petsc' > **************************ERROR************************************* > Error during compile, check > arch-python-linux-x86_64/lib/petsc/conf/make.log > Send it and arch-python-linux-x86_64/lib/petsc/conf/configure.log to > petsc-maint at mcs.anl.gov > ******************************************************************** > make: *** [all] Error 1 > Traceback (most recent call last): > File "", line 1, in > File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 302, in > **metadata) > File > "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/core.py", > line 151, in setup > 
dist.run_commands() > File > "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/dist.py", > line 953, in run_commands > self.run_command(cmd) > File > "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/dist.py", > line 972, in run_command > cmd_obj.run() > File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 219, in run > build(self.dry_run) > File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 164, in build > if status != 0: raise RuntimeError(status) > RuntimeError: 512 > > ---------------------------------------- > Command "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import > setuptools, > tokenize;__file__='/tmp/pip-build-t7pV1u/petsc/setup.py';f=getattr(tokenize, > 'open', open)(__file__);code=f.read().replace('\r\n', > '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record > /tmp/pip-ebXwB3-record/install-record.txt --single-version-externally-managed > --compile --user --prefix=" failed with error code 1 in > /tmp/pip-build-t7pV1u/petsc/ > > > > > Le 18/12/2016 ? 16:52, Satish Balay a ?crit : > > How about using --download-fblaslapack instead of MKL? > > > > Satish > > > > On Sun, 18 Dec 2016, Aurelien Ponte wrote: > > > > > Allright I got the following to complete: > > > > > > ### install pip: > > > module load python/2.7.10_gnu-4.9.2 > > > wget https://bootstrap.pypa.io/get-pip.py > > > python get-pip.py --user > > > # edit .cshrc: set path = ($path $home/.local/bin) + setenv > > > LD_LIBRARY_PATH > > > /home1/caparmor/aponte/.local/lib:${LD_LIBRARY_PATH} > > > > > > # > > > setenv MPICC mpiicc > > > pip install --user --upgrade mpi4py > > > pip install --user --upgrade numpy > > > setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices > > > --with-blas-lapack-dir=/appli/intel/Compiler/11.1/073/mkl/lib/em64t' > > > pip install --user petsc petsc4py > > > > > > But now I get the following at run time: > > > *** libmkl_mc3.so *** failed with error : > > > /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_mc3.so: undefined > > > symbol: > > > mkl_dft_commit_descriptor_s_c2c_md_omp > > > *** libmkl_def.so *** failed with error : > > > /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_def.so: undefined > > > symbol: > > > mkl_dft_commit_descriptor_s_c2c_md_omp > > > MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so > > > MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so > > > > > > > > > Any ideas? > > > > > > I will also try the other approach (from source). > > > > > > thanks > > > > > > aurelien > > > > > > > > > > > > > > > Le 17/12/2016 ? 22:36, Lawrence Mitchell a ?crit : > > > > > On 17 Dec 2016, at 19:19, Barry Smith wrote: > > > > > > > > > > Looks like --install-option= are options for pip not the underlying > > > > > package. > > > > > > > > > > Lisandro, how does one do what seems to be a simple request? 
> > > > Set PETSC_CONFIGURE_OPTIONS to any additional flags you want to pass to > > > > configure during pip install > > > > > > > > > From aurelien.ponte at ifremer.fr Sun Dec 18 14:22:02 2016 From: aurelien.ponte at ifremer.fr (Aurelien Ponte) Date: Sun, 18 Dec 2016 21:22:02 +0100 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> <321A525C-FE98-483A-8B8F-7FCB2C343135@imperial.ac.uk> Message-ID: I am trying now trying with: setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices --with-fc=0 --download-f2cblaslapack' Unfortunately I believe there is only intel mpi on the cluster I am working on (judging from the module avail command). Do you believe I should try to compile openmpi with gcc? aurelien Le 18/12/2016 ? 21:00, Satish Balay a ?crit : > On Sun, 18 Dec 2016, Aurelien Ponte wrote: > >> Here is what I am trying now: >> >> setenv MPICC mpiicc > Suggest not using intel compiler [petsc configure ignores MPICC env variable anyway] > >> pip install --user --upgrade --ignore-installed --no-cache-dir mpi4py >> pip install --user --upgrade --ignore-installed --no-cache-dir numpy >> setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices --with-fc=mpif90 > Also specify compatible mpicc,mpicxx [i.e ones that use gcc/g++/gfortran - so that they are compatible with python] > > i.e --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 > >> --download-fblaslapack' >> pip install --user --upgrade --ignore-installed --no-cache-dir petsc petsc4py >> >> I get now the following error message (I'll send the logs to >> petsc-maint at mcs.anl.gov as suggested): >> >> CC arch-python-linux-x86_64/obj/src/vec/is/is/interface/index.o >> Fatal Error: Reading module mpi at line 1 column 2: Unexpected EOF > Some fortran module issue - happens if the compile and mpi have a mismatch.. > > Satish > >> gmake[2]: *** >> [arch-python-linux-x86_64/obj/src/sys/f90-mod/petscsysmod.o] Error 1 >> gmake[2]: *** Waiting for unfinished jobs.... 
>> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/client.c(3): warning >> #161: unrecognized #pragma >> #pragma clang diagnostic ignored "-Wdeprecated-declarations" >> ^ >> >> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/client.c(4): warning #161: >> unrecognized #pragma >> #pragma gcc diagnostic ignored "-Wdeprecated-declarations" >> ^ >> >> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/google.c(3): warning #161: >> unrecognized #pragma >> #pragma clang diagnostic ignored "-Wdeprecated-declarations" >> ^ >> >> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/google.c(4): warning #161: >> unrecognized #pragma >> #pragma gcc diagnostic ignored "-Wdeprecated-declarations" >> ^ >> >> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/box.c(3): warning #161: >> unrecognized #pragma >> #pragma clang diagnostic ignored "-Wdeprecated-declarations" >> ^ >> >> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/box.c(4): warning #161: >> unrecognized #pragma >> #pragma gcc diagnostic ignored "-Wdeprecated-declarations" >> ^ >> >> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/globus.c(2): warning #161: >> unrecognized #pragma >> #pragma clang diagnostic ignored "-Wdeprecated-declarations" >> ^ >> >> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/globus.c(3): warning #161: >> unrecognized #pragma >> #pragma gcc diagnostic ignored "-Wdeprecated-declarations" >> ^ >> >> gmake[2]: Leaving directory `/tmp/pip-build-t7pV1u/petsc' >> gmake[1]: *** [gnumake] Error 2 >> gmake[1]: Leaving directory `/tmp/pip-build-t7pV1u/petsc' >> **************************ERROR************************************* >> Error during compile, check >> arch-python-linux-x86_64/lib/petsc/conf/make.log >> Send it and arch-python-linux-x86_64/lib/petsc/conf/configure.log to >> petsc-maint at mcs.anl.gov >> ******************************************************************** >> make: *** [all] Error 1 >> Traceback (most recent call last): >> File "", line 1, in >> File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 302, in >> **metadata) >> File >> "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/core.py", >> line 151, in setup >> dist.run_commands() >> File >> "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/dist.py", >> line 953, in run_commands >> self.run_command(cmd) >> File >> "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/dist.py", >> line 972, in run_command >> cmd_obj.run() >> File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 219, in run >> build(self.dry_run) >> File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 164, in build >> if status != 0: raise RuntimeError(status) >> RuntimeError: 512 >> >> ---------------------------------------- >> Command "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u -c "import >> setuptools, >> tokenize;__file__='/tmp/pip-build-t7pV1u/petsc/setup.py';f=getattr(tokenize, >> 'open', open)(__file__);code=f.read().replace('\r\n', >> '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record >> /tmp/pip-ebXwB3-record/install-record.txt --single-version-externally-managed >> --compile --user --prefix=" failed with error code 1 in >> /tmp/pip-build-t7pV1u/petsc/ >> >> >> >> >> Le 18/12/2016 ? 16:52, Satish Balay a ?crit : >>> How about using --download-fblaslapack instead of MKL? 
>>> >>> Satish >>> >>> On Sun, 18 Dec 2016, Aurelien Ponte wrote: >>> >>>> Allright I got the following to complete: >>>> >>>> ### install pip: >>>> module load python/2.7.10_gnu-4.9.2 >>>> wget https://bootstrap.pypa.io/get-pip.py >>>> python get-pip.py --user >>>> # edit .cshrc: set path = ($path $home/.local/bin) + setenv >>>> LD_LIBRARY_PATH >>>> /home1/caparmor/aponte/.local/lib:${LD_LIBRARY_PATH} >>>> >>>> # >>>> setenv MPICC mpiicc >>>> pip install --user --upgrade mpi4py >>>> pip install --user --upgrade numpy >>>> setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices >>>> --with-blas-lapack-dir=/appli/intel/Compiler/11.1/073/mkl/lib/em64t' >>>> pip install --user petsc petsc4py >>>> >>>> But now I get the following at run time: >>>> *** libmkl_mc3.so *** failed with error : >>>> /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_mc3.so: undefined >>>> symbol: >>>> mkl_dft_commit_descriptor_s_c2c_md_omp >>>> *** libmkl_def.so *** failed with error : >>>> /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_def.so: undefined >>>> symbol: >>>> mkl_dft_commit_descriptor_s_c2c_md_omp >>>> MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so >>>> MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so >>>> >>>> >>>> Any ideas? >>>> >>>> I will also try the other approach (from source). >>>> >>>> thanks >>>> >>>> aurelien >>>> >>>> >>>> >>>> >>>> Le 17/12/2016 ? 22:36, Lawrence Mitchell a ?crit : >>>>>> On 17 Dec 2016, at 19:19, Barry Smith wrote: >>>>>> >>>>>> Looks like --install-option= are options for pip not the underlying >>>>>> package. >>>>>> >>>>>> Lisandro, how does one do what seems to be a simple request? >>>>> Set PETSC_CONFIGURE_OPTIONS to any additional flags you want to pass to >>>>> configure during pip install >>>> >> >> -- Aur?lien Ponte Tel: (+33) 2 98 22 40 73 Fax: (+33) 2 98 22 44 96 UMR 6523, IFREMER ZI de la Pointe du Diable CS 10070 29280 Plouzan? From aurelien.ponte at ifremer.fr Sun Dec 18 14:35:45 2016 From: aurelien.ponte at ifremer.fr (Aurelien Ponte) Date: Sun, 18 Dec 2016 21:35:45 +0100 Subject: [petsc-users] petsc4py --with-64-bit-indices In-Reply-To: References: <7A9C9AB9-44F9-4478-B2BE-ED4879B3E4D0@mcs.anl.gov> <321A525C-FE98-483A-8B8F-7FCB2C343135@imperial.ac.uk> Message-ID: All right, the build did succeed and I was able to perform the computations that used to fail without --with-64-bit-indices. So to summarize: setenv MPICC mpiicc (useless according to Satish) pip install --user --upgrade --ignore-installed --no-cache-dir mpi4py pip install --user --upgrade --ignore-installed --no-cache-dir numpy setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices --with-fc=0 --download-f2cblaslapack' pip install --user --upgrade --ignore-installed --no-cache-dir petsc petsc4py thanks to all for your help, aurelien Le 18/12/2016 ? 21:22, Aurelien Ponte a ?crit : > I am trying now trying with: > setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices --with-fc=0 > --download-f2cblaslapack' > > Unfortunately I believe there is only intel mpi on the cluster I am > working on (judging from the module avail command). > Do you believe I should try to compile openmpi with gcc? > > aurelien > > > Le 18/12/2016 ? 
21:00, Satish Balay a ?crit : >> On Sun, 18 Dec 2016, Aurelien Ponte wrote: >> >>> Here is what I am trying now: >>> >>> setenv MPICC mpiicc >> Suggest not using intel compiler [petsc configure ignores MPICC env >> variable anyway] >> >>> pip install --user --upgrade --ignore-installed --no-cache-dir mpi4py >>> pip install --user --upgrade --ignore-installed --no-cache-dir numpy >>> setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices --with-fc=mpif90 >> Also specify compatible mpicc,mpicxx [i.e ones that use >> gcc/g++/gfortran - so that they are compatible with python] >> >> i.e --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 >> >>> --download-fblaslapack' >>> pip install --user --upgrade --ignore-installed --no-cache-dir petsc >>> petsc4py >>> >>> I get now the following error message (I'll send the logs to >>> petsc-maint at mcs.anl.gov as suggested): >>> >>> CC >>> arch-python-linux-x86_64/obj/src/vec/is/is/interface/index.o >>> Fatal Error: Reading module mpi at line 1 column 2: Unexpected EOF >> Some fortran module issue - happens if the compile and mpi have a >> mismatch.. >> >> Satish >> >>> gmake[2]: *** >>> [arch-python-linux-x86_64/obj/src/sys/f90-mod/petscsysmod.o] Error 1 >>> gmake[2]: *** Waiting for unfinished jobs.... >>> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/client.c(3): warning >>> #161: unrecognized #pragma >>> #pragma clang diagnostic ignored "-Wdeprecated-declarations" >>> ^ >>> >>> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/client.c(4): warning >>> #161: >>> unrecognized #pragma >>> #pragma gcc diagnostic ignored "-Wdeprecated-declarations" >>> ^ >>> >>> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/google.c(3): warning >>> #161: >>> unrecognized #pragma >>> #pragma clang diagnostic ignored "-Wdeprecated-declarations" >>> ^ >>> >>> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/google.c(4): warning >>> #161: >>> unrecognized #pragma >>> #pragma gcc diagnostic ignored "-Wdeprecated-declarations" >>> ^ >>> >>> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/box.c(3): warning >>> #161: >>> unrecognized #pragma >>> #pragma clang diagnostic ignored "-Wdeprecated-declarations" >>> ^ >>> >>> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/box.c(4): warning >>> #161: >>> unrecognized #pragma >>> #pragma gcc diagnostic ignored "-Wdeprecated-declarations" >>> ^ >>> >>> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/globus.c(2): warning >>> #161: >>> unrecognized #pragma >>> #pragma clang diagnostic ignored "-Wdeprecated-declarations" >>> ^ >>> >>> /tmp/pip-build-t7pV1u/petsc/src/sys/webclient/globus.c(3): warning >>> #161: >>> unrecognized #pragma >>> #pragma gcc diagnostic ignored "-Wdeprecated-declarations" >>> ^ >>> >>> gmake[2]: Leaving directory `/tmp/pip-build-t7pV1u/petsc' >>> gmake[1]: *** [gnumake] Error 2 >>> gmake[1]: Leaving directory `/tmp/pip-build-t7pV1u/petsc' >>> **************************ERROR************************************* >>> Error during compile, check >>> arch-python-linux-x86_64/lib/petsc/conf/make.log >>> Send it and >>> arch-python-linux-x86_64/lib/petsc/conf/configure.log to >>> petsc-maint at mcs.anl.gov >>> ******************************************************************** >>> make: *** [all] Error 1 >>> Traceback (most recent call last): >>> File "", line 1, in >>> File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 302, in >>> >>> **metadata) >>> File >>> "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/core.py", >>> >>> line 151, in setup >>> dist.run_commands() >>> File >>> 
"/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/dist.py", >>> >>> line 953, in run_commands >>> self.run_command(cmd) >>> File >>> "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/lib/python2.7/distutils/dist.py", >>> >>> line 972, in run_command >>> cmd_obj.run() >>> File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 219, in run >>> build(self.dry_run) >>> File "/tmp/pip-build-t7pV1u/petsc/setup.py", line 164, in build >>> if status != 0: raise RuntimeError(status) >>> RuntimeError: 512 >>> >>> ---------------------------------------- >>> Command "/appli/python/2.7.10_gcc-4.9.2/python-2.7.10/bin/python -u >>> -c "import >>> setuptools, >>> tokenize;__file__='/tmp/pip-build-t7pV1u/petsc/setup.py';f=getattr(tokenize, >>> >>> 'open', open)(__file__);code=f.read().replace('\r\n', >>> '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record >>> /tmp/pip-ebXwB3-record/install-record.txt >>> --single-version-externally-managed >>> --compile --user --prefix=" failed with error code 1 in >>> /tmp/pip-build-t7pV1u/petsc/ >>> >>> >>> >>> >>> Le 18/12/2016 ? 16:52, Satish Balay a ?crit : >>>> How about using --download-fblaslapack instead of MKL? >>>> >>>> Satish >>>> >>>> On Sun, 18 Dec 2016, Aurelien Ponte wrote: >>>> >>>>> Allright I got the following to complete: >>>>> >>>>> ### install pip: >>>>> module load python/2.7.10_gnu-4.9.2 >>>>> wget https://bootstrap.pypa.io/get-pip.py >>>>> python get-pip.py --user >>>>> # edit .cshrc: set path = ($path $home/.local/bin) + setenv >>>>> LD_LIBRARY_PATH >>>>> /home1/caparmor/aponte/.local/lib:${LD_LIBRARY_PATH} >>>>> >>>>> # >>>>> setenv MPICC mpiicc >>>>> pip install --user --upgrade mpi4py >>>>> pip install --user --upgrade numpy >>>>> setenv PETSC_CONFIGURE_OPTIONS '--with-64-bit-indices >>>>> --with-blas-lapack-dir=/appli/intel/Compiler/11.1/073/mkl/lib/em64t' >>>>> pip install --user petsc petsc4py >>>>> >>>>> But now I get the following at run time: >>>>> *** libmkl_mc3.so *** failed with error : >>>>> /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_mc3.so: undefined >>>>> symbol: >>>>> mkl_dft_commit_descriptor_s_c2c_md_omp >>>>> *** libmkl_def.so *** failed with error : >>>>> /appli/intel/Compiler/11.1/073/mkl/lib/em64t/libmkl_def.so: undefined >>>>> symbol: >>>>> mkl_dft_commit_descriptor_s_c2c_md_omp >>>>> MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so >>>>> MKL FATAL ERROR: Cannot load neither libmkl_mc3.so nor libmkl_def.so >>>>> >>>>> >>>>> Any ideas? >>>>> >>>>> I will also try the other approach (from source). >>>>> >>>>> thanks >>>>> >>>>> aurelien >>>>> >>>>> >>>>> >>>>> >>>>> Le 17/12/2016 ? 22:36, Lawrence Mitchell a ?crit : >>>>>>> On 17 Dec 2016, at 19:19, Barry Smith wrote: >>>>>>> >>>>>>> Looks like --install-option= are options for pip not the >>>>>>> underlying >>>>>>> package. >>>>>>> >>>>>>> Lisandro, how does one do what seems to be a simple request? >>>>>> Set PETSC_CONFIGURE_OPTIONS to any additional flags you want to >>>>>> pass to >>>>>> configure during pip install >>>>> >>> >>> > > -- Aur?lien Ponte Tel: (+33) 2 98 22 40 73 Fax: (+33) 2 98 22 44 96 UMR 6523, IFREMER ZI de la Pointe du Diable CS 10070 29280 Plouzan? From andreas at ices.utexas.edu Mon Dec 19 14:10:18 2016 From: andreas at ices.utexas.edu (Andreas Mang) Date: Mon, 19 Dec 2016 14:10:18 -0600 Subject: [petsc-users] traceback & error handling Message-ID: <220B732B-87A7-440E-AC7C-6792CF7E41CF@ices.utexas.edu> Hey guys: I have some problems with the error handling. 
On my local machine (where I debug) I get a million warning messages if I do

#undef __FUNCT__
#define __FUNCT__ "ClassName::FunctionName"

(i.e., file.cpp:XXX: __FUNCT__="ClassName::FunctionName" does not agree with __func__="FunctionName")

If I run the same code using intel15 compilers it's the opposite (which I discovered just now). That is, I get an error for

#undef __FUNCT__
#define __FUNCT__ "FunctionName"

(i.e., file.cpp:XXX: __FUNCT__="FunctionName" does not agree with __func__="ClassName::FunctionName")

I do like the error handling by PETSc. I think it's quite helpful. Obviously, I can write my own stack trace but why bother if it's already there. I did check your online documentation and I could no longer find these definitions in your code. So, should I just remove all of these definitions? Is there a quick fix? Is this deprecated?

Second of all, I saw that you no longer use error handling in your examples at all, i.e.,

ierr = FunctionCall(); CHKERRQ(ierr);

and friends have vanished. Why is that? Is it just to keep the examples simple, or are you moving away from using these macros for error handling?

I hope I did not miss any changes in this regard in one of your announcements. I could not find anything in the documentation.

Thanks
Andreas

From balay at mcs.anl.gov Mon Dec 19 16:09:17 2016
From: balay at mcs.anl.gov (Satish Balay)
Date: Mon, 19 Dec 2016 16:09:17 -0600
Subject: [petsc-users] traceback & error handling
In-Reply-To: <220B732B-87A7-440E-AC7C-6792CF7E41CF@ices.utexas.edu>
References: <220B732B-87A7-440E-AC7C-6792CF7E41CF@ices.utexas.edu>
Message-ID:

PETSc code doesn't use classes - so we don't see this issue.

One way to fix this is:

#undef __FUNCT__
#if (__INTEL_COMPILER)
#define __FUNCT__ "ClassName::FunctionName"
#else
#define __FUNCT__ "FunctionName"
#endif

An alternative is to not do this check. For compilers that define __func__ [like intel, gcc] __FUNCT__ is not used anyway. So perhaps the following will work? [without having to modify petsc include files]

#undef PetscCheck__FUNCT__
#define PetscCheck__FUNCT__()

Wrt CHKERRQ() - which code are you referring to?

Satish

On Mon, 19 Dec 2016, Andreas Mang wrote:

> Hey guys:
> 
> I have some problems with the error handling. On my local machine (where I debug) I get a million warning messages if I do
> 
> #undef __FUNCT__
> #define __FUNCT__ "ClassName::FunctionName"
> 
> (i.e., file.cpp:XXX: __FUNCT__="ClassName::FunctionName" does not agree with __func__="FunctionName")
> 
> If I run the same code using intel15 compilers it's the opposite (which I discovered just now). That is, I get an error for
> 
> #undef __FUNCT__
> #define __FUNCT__ "FunctionName"
> 
> (i.e., file.cpp:XXX: __FUNCT__="FunctionName" does not agree with __func__="ClassName::FunctionName")
> 
> I do like the error handling by PETSc. I think it's quite helpful. Obviously, I can write my own stack trace but why bother if it's already there. I did check your online documentation and I could no longer find these definitions in your code. So, should I just remove all of these definitions? Is there a quick fix? Is this deprecated?
> 
> Second of all, I saw that you no longer use error handling in your examples at all, i.e.,
> 
> ierr = FunctionCall(); CHKERRQ(ierr);
> 
> and friends have vanished. Why is that? Is it just to keep the examples simple, or are you moving away from using these macros for error handling?
> 
> I hope I did not miss any changes in this regard in one of your announcements. I could not find anything in the documentation.
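
Putting Satish's two suggestions side by side, a C++ translation unit with a class method might look like the sketch below. The class and method names (Solver::Evaluate) and the printed message are invented for illustration; only the compiler-conditional __FUNCT__ definition and the PetscCheck__FUNCT__ override come from the reply above.

#include <petscsys.h>

class Solver {
public:
  PetscErrorCode Evaluate();
};

/* Option 1: keep __FUNCT__ consistent with what each compiler puts in __func__.
   The Intel compilers in this thread report "Solver::Evaluate", gcc reports
   just "Evaluate". */
#undef __FUNCT__
#if defined(__INTEL_COMPILER)
#define __FUNCT__ "Solver::Evaluate"
#else
#define __FUNCT__ "Evaluate"
#endif
PetscErrorCode Solver::Evaluate()
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscPrintf(PETSC_COMM_WORLD, "inside Solver::Evaluate\n");CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* Option 2 (Satish's second suggestion): after including the PETSc headers,
   disable the __FUNCT__/__func__ consistency check altogether:

   #undef  PetscCheck__FUNCT__
   #define PetscCheck__FUNCT__()
*/

With either variant the stack trace printed via CHKERRQ keeps pointing at the intended routine name.
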
> > Thanks > Andreas > > > From andreas at ices.utexas.edu Mon Dec 19 16:30:07 2016 From: andreas at ices.utexas.edu (Andreas Mang) Date: Mon, 19 Dec 2016 16:30:07 -0600 Subject: [petsc-users] traceback & error handling In-Reply-To: References: <220B732B-87A7-440E-AC7C-6792CF7E41CF@ices.utexas.edu> Message-ID: <05E38C25-CA97-4669-B1A0-4882B2C2FF58@ices.utexas.edu> Hey Satish: Thanks for your help. I did not mention this, but it?s both for intel compilers (modules intel15 and intel17). I have not checked in detail. I think I?ll opt for your second suggestion. I do not want to introduce compiler specific defines excessively throughout my code. Considering the error handling: Sorry for not being precise. I am referring to any of the examples in the documentation (I did not check all, obviously). Here are two: http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/examples/tutorials/ex1.c.html or a more sophisticated one: http://www.mcs.anl.gov/petsc/petsc-current/src/tao/bound/examples/tutorials/plate2.c.html None of these examples for user routines contain any CHKERRQ calls anymore. A [Metainfo: A student asked me why I use these; I explained and referred him to the documentation; we then discovered that the examples do not use the error handling any longer] > On Dec 19, 2016, at 4:09 PM, Satish Balay wrote: > > PETSc code doesn't use classes - so we don't see this isssue. > > One way to fix this is: > > #undef #undef __FUNCT__ > > #if (__INTEL_COMPILER) > #define __FUNCT__ ?ClassName::FunctionName? > #else > #define __FUNCT__ ?FunctionName? > #endif > > Alternative is to not do this check For compiles that define __func__ > [like intel, gcc] __FUNCT__ is not used anyway. So perhaps the following > will work? [without having to modify petsc include files] > > #undef PetscCheck__FUNCT__ > #define PetscCheck__FUNCT__() > > Wrt CHKERRQ() - which code are you refering to? > > Satish > > On Mon, 19 Dec 2016, Andreas Mang wrote: > >> Hey guys: >> >> I have some problems with the error handling. On my local machine (where I debug) I get a million warning messages if I do >> >> #undef __FUNCT__ >> #define __FUNCT__ ?ClassName::FunctionName? >> >> (i.e., file.cpp:XXX: __FUNCT__=?ClassName::FunctionName" does not agree with __func__=?FunctionName?) >> >> If I run the same code using intel15 compilers it?s the opposite (which I discovered just now). That is, I get an error for >> >> #undef __FUNCT__ >> #define __FUNCT__ ?FunctionName? >> >> (i.e., file.cpp:XXX: __FUNCT__=?FunctionName" does not agree with __func__=?ClassName::FunctionName?) >> >> I do like the error handling by PETSc. I think it?s quite helpful. Obviously, I can write my own stack trace but why bother if it?s already there. I did check your online documentation and I could no longer find these definitions in your code. So, should I just remove all of these definitions? Is there a quick fix? Is this depreciated? >> >> >> Second of all, I saw you do no longer use error handling in your examples at all, i.e., >> >> ierr = FunctionCall(); CHKERRQ(ierr); >> >> and friends have vanished. Why is that? Is it just to keep the examples simple or are you moving away from using these Macros for error handling. >> >> I hope I did not miss any changes in this regard in one of your announcements. I could not find anything in the documentation. >> >> Thanks >> Andreas >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Mon Dec 19 16:34:35 2016 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 19 Dec 2016 16:34:35 -0600 Subject: [petsc-users] traceback & error handling In-Reply-To: <05E38C25-CA97-4669-B1A0-4882B2C2FF58@ices.utexas.edu> References: <220B732B-87A7-440E-AC7C-6792CF7E41CF@ices.utexas.edu> <05E38C25-CA97-4669-B1A0-4882B2C2FF58@ices.utexas.edu> Message-ID: On Mon, Dec 19, 2016 at 4:30 PM, Andreas Mang wrote: > Hey Satish: > > Thanks for your help. I did not mention this, but it?s both for intel > compilers (modules intel15 and intel17). I have not checked in detail. I > think I?ll opt for your second suggestion. I do not want to introduce > compiler specific defines excessively throughout my code. > > Considering the error handling: Sorry for not being precise. I am > referring to any of the examples in the documentation (I did not check all, > obviously). Here are two: > > http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/ > examples/tutorials/ex1.c.html > > or a more sophisticated one: > > http://www.mcs.anl.gov/petsc/petsc-current/src/tao/bound/ > examples/tutorials/plate2.c.html > > None of these examples for user routines contain any CHKERRQ calls anymore. > > A > > [Metainfo: A student asked me why I use these; I explained and referred > him to the documentation; we then discovered that the examples do not use > the error handling any longer] > Barry wrote a script that strips out the error checking from the HTML. Follow the link at the top to raw source: http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/pc/examples/tutorials/ex1.c and it has error checking. Matt > > > On Dec 19, 2016, at 4:09 PM, Satish Balay wrote: > > PETSc code doesn't use classes - so we don't see this isssue. > > One way to fix this is: > > #undef #undef __FUNCT__ > > #if (__INTEL_COMPILER) > #define __FUNCT__ ?ClassName::FunctionName? > #else > #define __FUNCT__ ?FunctionName? > #endif > > Alternative is to not do this check For compiles that define __func__ > [like intel, gcc] __FUNCT__ is not used anyway. So perhaps the following > will work? [without having to modify petsc include files] > > #undef PetscCheck__FUNCT__ > #define PetscCheck__FUNCT__() > > Wrt CHKERRQ() - which code are you refering to? > > Satish > > On Mon, 19 Dec 2016, Andreas Mang wrote: > > Hey guys: > > I have some problems with the error handling. On my local machine (where I > debug) I get a million warning messages if I do > > #undef __FUNCT__ > #define __FUNCT__ ?ClassName::FunctionName? > > (i.e., file.cpp:XXX: __FUNCT__=?ClassName::FunctionName" does not agree > with __func__=?FunctionName?) > > If I run the same code using intel15 compilers it?s the opposite (which I > discovered just now). That is, I get an error for > > #undef __FUNCT__ > #define __FUNCT__ ?FunctionName? > > (i.e., file.cpp:XXX: __FUNCT__=?FunctionName" does not agree with > __func__=?ClassName::FunctionName?) > > I do like the error handling by PETSc. I think it?s quite helpful. > Obviously, I can write my own stack trace but why bother if it?s already > there. I did check your online documentation and I could no longer find > these definitions in your code. So, should I just remove all of these > definitions? Is there a quick fix? Is this depreciated? > > > Second of all, I saw you do no longer use error handling in your examples > at all, i.e., > > ierr = FunctionCall(); CHKERRQ(ierr); > > and friends have vanished. Why is that? 
Is it just to keep the examples > simple or are you moving away from using these Macros for error handling. > > I hope I did not miss any changes in this regard in one of your > announcements. I could not find anything in the documentation. > > Thanks > Andreas > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From gotofd at gmail.com Tue Dec 20 10:50:02 2016 From: gotofd at gmail.com (Ji Zhang) Date: Wed, 21 Dec 2016 00:50:02 +0800 Subject: [petsc-users] fast multipole method using petsc Message-ID: Dear all, I'm a petsc user. Currently, the system I face is so huge that is out of memory. I want to perform fast multipole method using petsc. Is it possible for me to tell the solver the result of matrix-vector product, which is gmres need indeed, instead of generate the whole matrix directly? For example, give a function headle to the solver. Thanks. ?? ?? ????????? ?????????? ???????????10????9?? ?100193? Best, Regards, Zhang Ji, PhD student Beijing Computational Science Research Center Zhongguancun Software Park II, No. 10 Dongbeiwang West Road, Haidian District, Beijing 100193, China -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Dec 20 10:58:10 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 20 Dec 2016 10:58:10 -0600 Subject: [petsc-users] fast multipole method using petsc In-Reply-To: References: Message-ID: <2BB1B7DE-65CB-416D-85D4-7020079B2DEA@mcs.anl.gov> > On Dec 20, 2016, at 10:50 AM, Ji Zhang wrote: > > Dear all, > > I'm a petsc user. Currently, the system I face is so huge that is out of memory. I want to perform fast multipole method using petsc. Is it possible for me to tell the solver the result of matrix-vector product, which is gmres need indeed, instead of generate the whole matrix directly? Absolutely. http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateShell.html http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatShellSetOperation.html follow the examples links for some simple examples. Barry > For example, give a function headle to the solver. > > Thanks. > > ?? > ?? > ????????? > ?????????? > ???????????10????9?? ?100193? > > Best, > Regards, > Zhang Ji, PhD student > Beijing Computational Science Research Center > Zhongguancun Software Park II, No. 10 Dongbeiwang West Road, Haidian District, Beijing 100193, China From gotofd at gmail.com Tue Dec 20 11:42:03 2016 From: gotofd at gmail.com (Ji Zhang) Date: Wed, 21 Dec 2016 01:42:03 +0800 Subject: [petsc-users] fast multipole method using petsc In-Reply-To: <2BB1B7DE-65CB-416D-85D4-7020079B2DEA@mcs.anl.gov> References: <2BB1B7DE-65CB-416D-85D4-7020079B2DEA@mcs.anl.gov> Message-ID: Dear Barry, Thanks a lot for your information. It's great. ?? ?? ????????? ?????????? ???????????10????9?? ?100193? Best, Regards, Zhang Ji, PhD student Beijing Computational Science Research Center Zhongguancun Software Park II, No. 10 Dongbeiwang West Road, Haidian District, Beijing 100193, China On Wed, Dec 21, 2016 at 12:58 AM, Barry Smith wrote: > > > On Dec 20, 2016, at 10:50 AM, Ji Zhang wrote: > > > > Dear all, > > > > I'm a petsc user. Currently, the system I face is so huge that is out of > memory. I want to perform fast multipole method using petsc. 
Is it possible > for me to tell the solver the result of matrix-vector product, which is > gmres need indeed, instead of generate the whole matrix directly? > > > Absolutely. > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/ > MatCreateShell.html > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/ > MatShellSetOperation.html > > follow the examples links for some simple examples. > > Barry > > > For example, give a function headle to the solver. > > > > Thanks. > > > > ?? > > ?? > > ????????? > > ?????????? > > ???????????10????9?? ?100193? > > > > Best, > > Regards, > > Zhang Ji, PhD student > > Beijing Computational Science Research Center > > Zhongguancun Software Park II, No. 10 Dongbeiwang West Road, Haidian > District, Beijing 100193, China > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fangbowa at buffalo.edu Wed Dec 21 14:30:22 2016 From: fangbowa at buffalo.edu (Fangbo Wang) Date: Wed, 21 Dec 2016 15:30:22 -0500 Subject: [petsc-users] How to automatically create many Petsc matrices with similar names(A1, A2, A3....)? Message-ID: Hi, *Background:* I have a global matrix (sparse) which is very large (2 million by million), it is generated from local block matrix with size around 30,000 by 30,000 (also sparse) using stochastic galerkin projection method. In the global matrix, I have 700 local block matrices which most of them are similar. At the end of the day, I only need to save 45 different local block matrices which can save a lot of memory than saving the global matrix. *my problem:* I don't want to mannually create 45 petsc matrices with similar names, for example, A1, A2, A3, A4, etc. I want to automatically create these matrices and be able to call these matrices according to its name. Any one have experiences on this? Thank you very much. Fangbo Wang -- Fangbo Wang, PhD student Stochastic Geomechanics Research Group Department of Civil, Structural and Environmental Engineering University at Buffalo Email: *fangbowa at buffalo.edu * -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Dec 21 14:33:33 2016 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 21 Dec 2016 14:33:33 -0600 Subject: [petsc-users] How to automatically create many Petsc matrices with similar names(A1, A2, A3....)? In-Reply-To: References: Message-ID: On Wed, Dec 21, 2016 at 2:30 PM, Fangbo Wang wrote: > Hi, > > *Background:* > I have a global matrix (sparse) which is very large (2 million by > million), it is generated from local block matrix with size around 30,000 > by 30,000 (also sparse) using stochastic galerkin projection method. > > In the global matrix, I have 700 local block matrices which most of them > are similar. At the end of the day, I only need to save 45 different local > block matrices which can save a lot of memory than saving the global matrix. > > *my problem:* > I don't want to mannually create 45 petsc matrices with similar names, for > example, A1, A2, A3, A4, etc. I want to automatically create these > matrices and be able to call these matrices according to its name. > > Any one have experiences on this? Thank you very much. > Try using http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatCreateNest.html#MatCreateNest although its usually intended for s smaller number of block matrices. 
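As a minimal sketch of such a call (the block names and the 2x2 layout are only illustrative, not taken from the question; the blocks are assumed to be assembled already):

  Mat            blocks[4] = {A00, A01, NULL, A11}; /* row-major block layout; NULL marks an empty block */
  Mat            Anest;
  PetscErrorCode ierr;
  ierr = MatCreateNest(PETSC_COMM_WORLD, 2, NULL, 2, NULL, blocks, &Anest);CHKERRQ(ierr);

Since MATNEST only keeps references to the submatrices, the same Mat object can in principle appear in several slots, which is what would let the 45 distinct local blocks be reused across many positions; that reuse is worth verifying for the particular operations you need.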
Matt Fangbo Wang > > -- > Fangbo Wang, PhD student > Stochastic Geomechanics Research Group > Department of Civil, Structural and Environmental Engineering > University at Buffalo > Email: *fangbowa at buffalo.edu * > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Dec 21 14:34:49 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 21 Dec 2016 14:34:49 -0600 Subject: [petsc-users] How to automatically create many Petsc matrices with similar names(A1, A2, A3....)? In-Reply-To: References: Message-ID: > On Dec 21, 2016, at 2:30 PM, Fangbo Wang wrote: > > Hi, > > Background: > I have a global matrix (sparse) which is very large (2 million by million), it is generated from local block matrix with size around 30,000 by 30,000 (also sparse) using stochastic galerkin projection method. > > In the global matrix, I have 700 local block matrices which most of them are similar. At the end of the day, I only need to save 45 different local block matrices which can save a lot of memory than saving the global matrix. > > my problem: > I don't want to mannually create 45 petsc matrices with similar names, for example, A1, A2, A3, A4, etc. I want to automatically create these matrices and be able to call these matrices according to its name. Have an array of matrices? Mat A[45]; for (i=0; i<45; i++) { MatCreate(PETSC_COMM_SELF,&A[i]); MatSetType(A[i],... .... Barry > > Any one have experiences on this? Thank you very much. > > Fangbo Wang > > -- > Fangbo Wang, PhD student > Stochastic Geomechanics Research Group > Department of Civil, Structural and Environmental Engineering > University at Buffalo > Email: fangbowa at buffalo.edu From mailinglists at xgm.de Thu Dec 22 07:07:35 2016 From: mailinglists at xgm.de (Florian Lindner) Date: Thu, 22 Dec 2016 14:07:35 +0100 Subject: [petsc-users] How to compute y = y - A*b Message-ID: <6bcca0f3-ae5d-4223-4c68-2c354554cdba@xgm.de> Hello, what is the best / most efficient way to compute: y = y - A * b with vectors b, y and matrix A: * VecAXPY: I need to compute A*b first MatMult(A, b, r); VecAXPY(y, -1, r); * VecWAXPY: Same case, but I don't reuse y MatMult(A, b, r); VecWAXPY(w, -1, r, y); * VecAYPX: Don't work, because I need to multiply r = A*b with -1 Is there anything else I have overseen, or should I just go with VecAXPY? Best, Florian From timothee.nicolas at gmail.com Thu Dec 22 07:15:07 2016 From: timothee.nicolas at gmail.com (=?UTF-8?Q?Timoth=C3=A9e_Nicolas?=) Date: Thu, 22 Dec 2016 14:15:07 +0100 Subject: [petsc-users] How to compute y = y - A*b In-Reply-To: <6bcca0f3-ae5d-4223-4c68-2c354554cdba@xgm.de> References: <6bcca0f3-ae5d-4223-4c68-2c354554cdba@xgm.de> Message-ID: What exactly is the problem? Don't you get good performances with the first option? I think it is good enough. However, I think it is safer to make sure that the scalars have the same type. i.e. sometimes you will get problems passing numerical values to PETSc functions if they are written as integers when a scalar is expected and vice versa. 
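For instance, in C the first variant could be written with the scalar held in a PetscScalar (A, b, y as in the message above, assumed already created and assembled; r is a work vector):

  PetscErrorCode ierr;
  PetscScalar    minus_one = -1.0;
  Vec            r;
  ierr = VecDuplicate(y, &r);CHKERRQ(ierr);       /* r gets the same layout as y */
  ierr = MatMult(A, b, r);CHKERRQ(ierr);          /* r = A*b                     */
  ierr = VecAXPY(y, minus_one, r);CHKERRQ(ierr);  /* y = y - r                   */
  ierr = VecDestroy(&r);CHKERRQ(ierr);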
So I think you should rather define one=1.0d0 and then VecAXPY(y, -one, r); Best Timoth?e 2016-12-22 14:07 GMT+01:00 Florian Lindner : > Hello, > > what is the best / most efficient way to compute: > > y = y - A * b > > with vectors b, y and matrix A: > > * VecAXPY: I need to compute A*b first > > MatMult(A, b, r); > VecAXPY(y, -1, r); > > * VecWAXPY: Same case, but I don't reuse y > > MatMult(A, b, r); > VecWAXPY(w, -1, r, y); > > * VecAYPX: Don't work, because I need to multiply r = A*b with -1 > > Is there anything else I have overseen, or should I just go with VecAXPY? > > Best, > Florian > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Dec 22 07:27:52 2016 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 22 Dec 2016 07:27:52 -0600 Subject: [petsc-users] How to compute y = y - A*b In-Reply-To: <6bcca0f3-ae5d-4223-4c68-2c354554cdba@xgm.de> References: <6bcca0f3-ae5d-4223-4c68-2c354554cdba@xgm.de> Message-ID: On Thu, Dec 22, 2016 at 7:07 AM, Florian Lindner wrote: > Hello, > > what is the best / most efficient way to compute: > > y = y - A * b > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatMultAdd.html Matt > with vectors b, y and matrix A: > > * VecAXPY: I need to compute A*b first > > MatMult(A, b, r); > VecAXPY(y, -1, r); > > * VecWAXPY: Same case, but I don't reuse y > > MatMult(A, b, r); > VecWAXPY(w, -1, r, y); > > * VecAYPX: Don't work, because I need to multiply r = A*b with -1 > > Is there anything else I have overseen, or should I just go with VecAXPY? > > Best, > Florian > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From thronesf at gmail.com Thu Dec 22 11:01:55 2016 From: thronesf at gmail.com (Sharp Stone) Date: Thu, 22 Dec 2016 12:01:55 -0500 Subject: [petsc-users] Petsc+Chombo problem Message-ID: Dear folks, I'm now using Chombo with Petsc solver. When compiling the Chombo examples of Petsc, I always got errors below. I wonder if anyone has such experience to get rid of these. Thanks very much in advance! In file included from PetscCompGrid.cpp:13: PetscCompGrid.H:19:34: error: petsc-private/pcimpl.h: No such file or directory In file included from PetscCompGrid.cpp:13: PetscCompGrid.H:55: error: ?Mat? does not name a type PetscCompGrid.H:60: error: ?PetscErrorCode? does not name a type PetscCompGrid.H:62: error: ?PetscErrorCode? does not name a type PetscCompGrid.H:63: error: ?PetscErrorCode? does not name a type PetscCompGrid.H:66: error: ?PetscInt? was not declared in this scope PetscCompGrid.H:66: error: template argument 1 is invalid PetscCompGrid.H:76: error: ?PetscErrorCode? does not name a type PetscCompGrid.H:90: error: ?PetscInt? was not declared in this scope PetscCompGrid.H:90: error: template argument 1 is invalid PetscCompGrid.H:90: error: template argument 1 is invalid PetscCompGrid.H:90: error: template argument 1 is invalid PetscCompGrid.H:90: error: template argument 1 is invalid PetscCompGrid.H:92: error: ?PetscInt? was not declared in this scope PetscCompGrid.H:92: error: template argument 1 is invalid PetscCompGrid.H:92: error: template argument 1 is invalid PetscCompGrid.H:92: error: template argument 1 is invalid PetscCompGrid.H:92: error: template argument 1 is invalid PetscCompGrid.H:93: error: ?PetscInt? 
was not declared in this scope PetscCompGrid.H:93: error: template argument 1 is invalid PetscCompGrid.H:93: error: template argument 1 is invalid PetscCompGrid.H:93: error: template argument 1 is invalid PetscCompGrid.H:93: error: template argument 1 is invalid PetscCompGrid.H:97: error: ?Mat? does not name a type PetscCompGrid.H: In constructor ?PetscCompGrid::PetscCompGrid(int)?: PetscCompGrid.H:35: error: class ?PetscCompGrid? does not have any field named ?m_mat? PetscCompGrid.H: At global scope: PetscCompGrid.H:138: error: ?PetscReal? does not name a type PetscCompGrid.H:140: error: ISO C++ forbids declaration of ?PetscReal? with no type PetscCompGrid.H:140: error: expected ?;? before ?*? token PetscCompGrid.cpp: In destructor ?virtual CompBC::~CompBC()?: PetscCompGrid.cpp:38: error: ?m_Rcoefs? was not declared in this scope PetscCompGrid.cpp:38: error: ?PetscFree? was not declared in this scope PetscCompGrid.cpp: In constructor ?CompBC::CompBC(int, IntVect)?: PetscCompGrid.cpp:41: error: class ?CompBC? does not have any field named ?m_Rcoefs? PetscCompGrid.cpp: In member function ?void CompBC::define(int, IntVect)?: PetscCompGrid.cpp:49: error: ?m_Rcoefs? was not declared in this scope PetscCompGrid.cpp:49: error: ?PetscFree? was not declared in this scope PetscCompGrid.cpp:54: error: ?PetscReal? was not declared in this scope PetscCompGrid.cpp:54: error: ?m_Rcoefs? was not declared in this scope PetscCompGrid.cpp:54: error: ?PetscMalloc? was not declared in this scope PetscCompGrid.cpp: At global scope: PetscCompGrid.cpp:59: error: ?PetscReal? does not name a type PetscCompGrid.cpp: In member function ?virtual void ConstDiriBC::createCoefs()?: PetscCompGrid.cpp:72: error: ?m_Rcoefs? was not declared in this scope PetscCompGrid.cpp:77: error: ?m_Rcoefs? was not declared in this scope PetscCompGrid.cpp:85: error: ?m_Rcoefs? was not declared in this scope PetscCompGrid.cpp:93: error: ?m_Rcoefs? was not declared in this scope PetscCompGrid.cpp: In member function ?virtual void ConstDiriBC::operator()(FArrayBox&, const Box&, const ProblemDomain&, Real, bool)?: PetscCompGrid.cpp:137: error: ?m_Rcoefs? was not declared in this scope PetscCompGrid.cpp: In member function ?virtual void PetscCompGrid::clean()?: PetscCompGrid.cpp:154: error: ?m_mat? was not declared in this scope PetscCompGrid.cpp:156: error: ?MatDestroy? was not declared in this scope PetscCompGrid.cpp: In member function ?virtual void PetscCompGrid::define(const ProblemDomain&, Vector&, Vector&, BCHolder, const RealVect&, int, int)?: PetscCompGrid.cpp:218: error: request for member ?resize? in ?((PetscCompGrid*)this)->PetscCompGrid::m_GIDs?, which is of non-class type ?int? PetscCompGrid.cpp:219: error: request for member ?resize? in ?((PetscCompGrid*)this)->PetscCompGrid::m_crsSupportGIDs?, which is of non-class type ?int? PetscCompGrid.cpp:220: error: request for member ?resize? in ?((PetscCompGrid*)this)->PetscCompGrid::m_fineCoverGIDs?, which is of non-class type ?int? PetscCompGrid.cpp:249: error: ?PetscInt? was not declared in this scope PetscCompGrid.cpp:249: error: expected `;' before ?my0? PetscCompGrid.cpp:256: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:256: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:256: error: template argument 1 is invalid PetscCompGrid.cpp:256: error: template argument 1 is invalid PetscCompGrid.cpp:256: error: template argument 1 is invalid PetscCompGrid.cpp:257: error: ?PetscInt? 
cannot appear in a constant-expression PetscCompGrid.cpp:257: error: template argument 1 is invalid PetscCompGrid.cpp:257: error: template argument 1 is invalid PetscCompGrid.cpp:257: error: new initializer expression list treated as compound expression PetscCompGrid.cpp:257: error: cannot convert ?IntVect? to ?int? in initialization PetscCompGrid.cpp:261: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:261: error: template argument 1 is invalid PetscCompGrid.cpp:261: error: invalid type in declaration before ?=? token PetscCompGrid.cpp:261: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:262: error: request for member ?setVal? in ?gidfab?, which is of non-class type ?int? PetscCompGrid.cpp:276: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:281: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:288: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:288: error: template argument 1 is invalid PetscCompGrid.cpp:288: error: invalid type in declaration before ?=? token PetscCompGrid.cpp:288: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:295: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:295: error: template argument 1 is invalid PetscCompGrid.cpp:295: error: invalid type in declaration before ?=? token PetscCompGrid.cpp:295: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:299: error: ?gidfab? cannot be used as a function PetscCompGrid.cpp:299: error: ?my0? was not declared in this scope PetscCompGrid.cpp:308: error: ?MPI_Comm? was not declared in this scope PetscCompGrid.cpp:308: error: expected `;' before ?wcomm? PetscCompGrid.cpp:310: error: ?wcomm? was not declared in this scope PetscCompGrid.cpp:310: error: ?m_mat? was not declared in this scope PetscCompGrid.cpp:310: error: ?MatCreate? was not declared in this scope PetscCompGrid.cpp:311: error: ?my0? was not declared in this scope PetscCompGrid.cpp:311: error: ?PETSC_DECIDE? was not declared in this scope PetscCompGrid.cpp:311: error: ?MatSetSizes? was not declared in this scope PetscCompGrid.cpp:312: error: ?MatSetBlockSize? was not declared in this scope PetscCompGrid.cpp:313: error: ?MATAIJ? was not declared in this scope PetscCompGrid.cpp:313: error: ?MatSetType? was not declared in this scope PetscCompGrid.cpp:314: error: ?MatSetFromOptions? was not declared in this scope PetscCompGrid.cpp:334: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:334: error: template argument 1 is invalid PetscCompGrid.cpp:334: error: invalid type in declaration before ?=? token PetscCompGrid.cpp:334: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:338: error: ?gidfab? cannot be used as a function PetscCompGrid.cpp:338: error: ?gidfab? cannot be used as a function PetscCompGrid.cpp:339: error: ?gidfab? cannot be used as a function PetscCompGrid.cpp:342: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:348: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:350: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:350: error: template argument 1 is invalid PetscCompGrid.cpp:350: error: template argument 1 is invalid PetscCompGrid.cpp:350: error: invalid type in declaration before ?;? token PetscCompGrid.cpp:358: error: ?PetscInt? 
cannot appear in a constant-expression PetscCompGrid.cpp:358: error: template argument 1 is invalid PetscCompGrid.cpp:358: error: template argument 1 is invalid PetscCompGrid.cpp:358: error: new initializer expression list treated as compound expression PetscCompGrid.cpp:358: error: cannot convert ?const IntVect? to ?int? in initialization PetscCompGrid.cpp:359: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:359: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:359: error: template argument 1 is invalid PetscCompGrid.cpp:359: error: template argument 1 is invalid PetscCompGrid.cpp:359: error: template argument 1 is invalid PetscCompGrid.cpp:363: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:363: error: template argument 1 is invalid PetscCompGrid.cpp:363: error: invalid type in declaration before ?=? token PetscCompGrid.cpp:363: error: no match for ?operator[]? in ?* pl[dit]? PetscCompGrid.cpp:364: error: request for member ?setVal? in ?gidfab?, which is of non-class type ?int? PetscCompGrid.cpp:367: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:367: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:379: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:379: error: template argument 1 is invalid PetscCompGrid.cpp:379: error: template argument 1 is invalid PetscCompGrid.cpp:379: error: new initializer expression list treated as compound expression PetscCompGrid.cpp:379: error: cannot convert ?const IntVect? to ?int? in initialization PetscCompGrid.cpp:380: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:380: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:380: error: template argument 1 is invalid PetscCompGrid.cpp:380: error: template argument 1 is invalid PetscCompGrid.cpp:380: error: template argument 1 is invalid PetscCompGrid.cpp:384: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:384: error: template argument 1 is invalid PetscCompGrid.cpp:384: error: invalid type in declaration before ?=? token PetscCompGrid.cpp:384: error: no match for ?operator[]? in ?* pl[dit]? PetscCompGrid.cpp:385: error: request for member ?setVal? in ?gidfab?, which is of non-class type ?int? PetscCompGrid.cpp:388: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:388: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:394: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:394: error: template argument 1 is invalid PetscCompGrid.cpp:394: error: invalid type in declaration before ?=? token PetscCompGrid.cpp:394: error: no match for ?operator[]? in ?* pl[dit]? PetscCompGrid.cpp:398: error: request for member ?box? in ?gidfab?, which is of non-class type ?int? PetscCompGrid.cpp: At global scope: PetscCompGrid.cpp:409: error: ?PetscErrorCode? does not name a type PetscCompGrid.cpp: In member function ?virtual void PetscCompGrid::applyBCs(IntVect, int, const DataIndex&, Box, StencilTensor&)?: PetscCompGrid.cpp:611: error: ?class ConstDiriBC? has no member named ?getCoef? PetscCompGrid.cpp: In member function ?virtual void PetscCompGrid::InterpToCoarse(IntVect, int, const DataIndex&, StencilTensor&)?: PetscCompGrid.cpp:699: error: ?PetscInt? was not declared in this scope PetscCompGrid.cpp:699: error: template argument 1 is invalid PetscCompGrid.cpp:699: error: invalid type in declaration before ?=? 
token PetscCompGrid.cpp:699: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:700: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:700: error: template argument 1 is invalid PetscCompGrid.cpp:700: error: invalid type in declaration before ?=? token PetscCompGrid.cpp:700: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:709: error: ?gidfab? cannot be used as a function PetscCompGrid.cpp:720: error: expected `;' before ?cidx? PetscCompGrid.cpp:720: error: ?cidx? was not declared in this scope PetscCompGrid.cpp:723: error: ?kk? was not declared in this scope PetscCompGrid.cpp:741: error: ?supgidfab? cannot be used as a function PetscCompGrid.cpp:743: error: ?supgidfab? cannot be used as a function PetscCompGrid.cpp: In member function ?virtual void PetscCompGrid::InterpToFine(IntVect, int, const DataIndex&, StencilTensor&)?: PetscCompGrid.cpp:772: error: ?PetscInt? was not declared in this scope PetscCompGrid.cpp:772: error: template argument 1 is invalid PetscCompGrid.cpp:772: error: invalid type in declaration before ?=? token PetscCompGrid.cpp:772: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:773: error: ?PetscInt? cannot appear in a constant-expression PetscCompGrid.cpp:773: error: template argument 1 is invalid PetscCompGrid.cpp:773: error: invalid type in declaration before ?=? token PetscCompGrid.cpp:773: error: invalid types ?int[int]? for array subscript PetscCompGrid.cpp:774: error: request for member ?box? in ?covergidfab?, which is of non-class type ?int? PetscCompGrid.cpp:783: error: ?gidfab? cannot be used as a function PetscCompGrid.cpp: At global scope: PetscCompGrid.cpp:812: error: ?PetscErrorCode? does not name a type PetscCompGrid.cpp:917: error: ?PetscErrorCode? does not name a type PetscCompGrid.cpp:964: error: ?PetscErrorCode? does not name a type make[3]: *** [o/2d.Linux.64.mpicxx.mpif90.DEBUG.PETSC/PetscCompGrid.o] Error 1 make[2]: *** [AMRElliptic] Error 2 make[1]: *** [AMRElliptic] Error 2 make: *** [execPETSc] Error 2 -- Best regards, Feng -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Dec 22 11:05:52 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 22 Dec 2016 11:05:52 -0600 Subject: [petsc-users] Petsc+Chombo problem In-Reply-To: References: Message-ID: On Thu, 22 Dec 2016, Sharp Stone wrote: > Dear folks, > > I'm now using Chombo with Petsc solver. When compiling the Chombo examples > of Petsc, I always got errors below. I wonder if anyone has such experience > to get rid of these. Thanks very much in advance! > > > In file included from PetscCompGrid.cpp:13: > PetscCompGrid.H:19:34: error: petsc-private/pcimpl.h: No such file or directory Thats a petsc-3.5 path. Which version of chombo/petsc are you using? Are they supporsed to be compatible? Satish > In file included from PetscCompGrid.cpp:13: > PetscCompGrid.H:55: error: ?Mat? does not name a type > PetscCompGrid.H:60: error: ?PetscErrorCode? does not name a type > PetscCompGrid.H:62: error: ?PetscErrorCode? does not name a type > PetscCompGrid.H:63: error: ?PetscErrorCode? does not name a type > PetscCompGrid.H:66: error: ?PetscInt? was not declared in this scope > PetscCompGrid.H:66: error: template argument 1 is invalid > PetscCompGrid.H:76: error: ?PetscErrorCode? does not name a type > PetscCompGrid.H:90: error: ?PetscInt? 
was not declared in this scope > PetscCompGrid.H:90: error: template argument 1 is invalid > PetscCompGrid.H:90: error: template argument 1 is invalid > PetscCompGrid.H:90: error: template argument 1 is invalid > PetscCompGrid.H:90: error: template argument 1 is invalid > PetscCompGrid.H:92: error: ?PetscInt? was not declared in this scope > PetscCompGrid.H:92: error: template argument 1 is invalid > PetscCompGrid.H:92: error: template argument 1 is invalid > PetscCompGrid.H:92: error: template argument 1 is invalid > PetscCompGrid.H:92: error: template argument 1 is invalid > PetscCompGrid.H:93: error: ?PetscInt? was not declared in this scope > PetscCompGrid.H:93: error: template argument 1 is invalid > PetscCompGrid.H:93: error: template argument 1 is invalid > PetscCompGrid.H:93: error: template argument 1 is invalid > PetscCompGrid.H:93: error: template argument 1 is invalid > PetscCompGrid.H:97: error: ?Mat? does not name a type > PetscCompGrid.H: In constructor ?PetscCompGrid::PetscCompGrid(int)?: > PetscCompGrid.H:35: error: class ?PetscCompGrid? does not have any field > named ?m_mat? > PetscCompGrid.H: At global scope: > PetscCompGrid.H:138: error: ?PetscReal? does not name a type > PetscCompGrid.H:140: error: ISO C++ forbids declaration of ?PetscReal? with > no type > PetscCompGrid.H:140: error: expected ?;? before ?*? token > PetscCompGrid.cpp: In destructor ?virtual CompBC::~CompBC()?: > PetscCompGrid.cpp:38: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp:38: error: ?PetscFree? was not declared in this scope > PetscCompGrid.cpp: In constructor ?CompBC::CompBC(int, IntVect)?: > PetscCompGrid.cpp:41: error: class ?CompBC? does not have any field named > ?m_Rcoefs? > PetscCompGrid.cpp: In member function ?void CompBC::define(int, IntVect)?: > PetscCompGrid.cpp:49: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp:49: error: ?PetscFree? was not declared in this scope > PetscCompGrid.cpp:54: error: ?PetscReal? was not declared in this scope > PetscCompGrid.cpp:54: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp:54: error: ?PetscMalloc? was not declared in this scope > PetscCompGrid.cpp: At global scope: > PetscCompGrid.cpp:59: error: ?PetscReal? does not name a type > PetscCompGrid.cpp: In member function ?virtual void > ConstDiriBC::createCoefs()?: > PetscCompGrid.cpp:72: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp:77: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp:85: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp:93: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp: In member function ?virtual void > ConstDiriBC::operator()(FArrayBox&, > const Box&, const ProblemDomain&, Real, bool)?: > PetscCompGrid.cpp:137: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp: In member function ?virtual void PetscCompGrid::clean()?: > PetscCompGrid.cpp:154: error: ?m_mat? was not declared in this scope > PetscCompGrid.cpp:156: error: ?MatDestroy? was not declared in this scope > PetscCompGrid.cpp: In member function ?virtual void > PetscCompGrid::define(const ProblemDomain&, Vector&, > Vector&, BCHolder, const RealVect&, int, int)?: > PetscCompGrid.cpp:218: error: request for member ?resize? in > ?((PetscCompGrid*)this)->PetscCompGrid::m_GIDs?, which is of non-class type > ?int? > PetscCompGrid.cpp:219: error: request for member ?resize? 
in > ?((PetscCompGrid*)this)->PetscCompGrid::m_crsSupportGIDs?, which is of > non-class type ?int? > PetscCompGrid.cpp:220: error: request for member ?resize? in > ?((PetscCompGrid*)this)->PetscCompGrid::m_fineCoverGIDs?, which is of > non-class type ?int? > PetscCompGrid.cpp:249: error: ?PetscInt? was not declared in this scope > PetscCompGrid.cpp:249: error: expected `;' before ?my0? > PetscCompGrid.cpp:256: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:256: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:256: error: template argument 1 is invalid > PetscCompGrid.cpp:256: error: template argument 1 is invalid > PetscCompGrid.cpp:256: error: template argument 1 is invalid > PetscCompGrid.cpp:257: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:257: error: template argument 1 is invalid > PetscCompGrid.cpp:257: error: template argument 1 is invalid > PetscCompGrid.cpp:257: error: new initializer expression list treated as > compound expression > PetscCompGrid.cpp:257: error: cannot convert ?IntVect? to ?int? in > initialization > PetscCompGrid.cpp:261: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:261: error: template argument 1 is invalid > PetscCompGrid.cpp:261: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:261: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:262: error: request for member ?setVal? in ?gidfab?, > which is of non-class type ?int? > PetscCompGrid.cpp:276: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:281: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:288: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:288: error: template argument 1 is invalid > PetscCompGrid.cpp:288: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:288: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:295: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:295: error: template argument 1 is invalid > PetscCompGrid.cpp:295: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:295: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:299: error: ?gidfab? cannot be used as a function > PetscCompGrid.cpp:299: error: ?my0? was not declared in this scope > PetscCompGrid.cpp:308: error: ?MPI_Comm? was not declared in this scope > PetscCompGrid.cpp:308: error: expected `;' before ?wcomm? > PetscCompGrid.cpp:310: error: ?wcomm? was not declared in this scope > PetscCompGrid.cpp:310: error: ?m_mat? was not declared in this scope > PetscCompGrid.cpp:310: error: ?MatCreate? was not declared in this scope > PetscCompGrid.cpp:311: error: ?my0? was not declared in this scope > PetscCompGrid.cpp:311: error: ?PETSC_DECIDE? was not declared in this scope > PetscCompGrid.cpp:311: error: ?MatSetSizes? was not declared in this scope > PetscCompGrid.cpp:312: error: ?MatSetBlockSize? was not declared in this > scope > PetscCompGrid.cpp:313: error: ?MATAIJ? was not declared in this scope > PetscCompGrid.cpp:313: error: ?MatSetType? was not declared in this scope > PetscCompGrid.cpp:314: error: ?MatSetFromOptions? was not declared in this > scope > PetscCompGrid.cpp:334: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:334: error: template argument 1 is invalid > PetscCompGrid.cpp:334: error: invalid type in declaration before ?=? 
token > PetscCompGrid.cpp:334: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:338: error: ?gidfab? cannot be used as a function > PetscCompGrid.cpp:338: error: ?gidfab? cannot be used as a function > PetscCompGrid.cpp:339: error: ?gidfab? cannot be used as a function > PetscCompGrid.cpp:342: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:348: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:350: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:350: error: template argument 1 is invalid > PetscCompGrid.cpp:350: error: template argument 1 is invalid > PetscCompGrid.cpp:350: error: invalid type in declaration before ?;? token > PetscCompGrid.cpp:358: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:358: error: template argument 1 is invalid > PetscCompGrid.cpp:358: error: template argument 1 is invalid > PetscCompGrid.cpp:358: error: new initializer expression list treated as > compound expression > PetscCompGrid.cpp:358: error: cannot convert ?const IntVect? to ?int? in > initialization > PetscCompGrid.cpp:359: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:359: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:359: error: template argument 1 is invalid > PetscCompGrid.cpp:359: error: template argument 1 is invalid > PetscCompGrid.cpp:359: error: template argument 1 is invalid > PetscCompGrid.cpp:363: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:363: error: template argument 1 is invalid > PetscCompGrid.cpp:363: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:363: error: no match for ?operator[]? in ?* pl[dit]? > PetscCompGrid.cpp:364: error: request for member ?setVal? in ?gidfab?, > which is of non-class type ?int? > PetscCompGrid.cpp:367: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:367: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:379: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:379: error: template argument 1 is invalid > PetscCompGrid.cpp:379: error: template argument 1 is invalid > PetscCompGrid.cpp:379: error: new initializer expression list treated as > compound expression > PetscCompGrid.cpp:379: error: cannot convert ?const IntVect? to ?int? in > initialization > PetscCompGrid.cpp:380: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:380: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:380: error: template argument 1 is invalid > PetscCompGrid.cpp:380: error: template argument 1 is invalid > PetscCompGrid.cpp:380: error: template argument 1 is invalid > PetscCompGrid.cpp:384: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:384: error: template argument 1 is invalid > PetscCompGrid.cpp:384: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:384: error: no match for ?operator[]? in ?* pl[dit]? > PetscCompGrid.cpp:385: error: request for member ?setVal? in ?gidfab?, > which is of non-class type ?int? > PetscCompGrid.cpp:388: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:388: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:394: error: ?PetscInt? 
cannot appear in a > constant-expression > PetscCompGrid.cpp:394: error: template argument 1 is invalid > PetscCompGrid.cpp:394: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:394: error: no match for ?operator[]? in ?* pl[dit]? > PetscCompGrid.cpp:398: error: request for member ?box? in ?gidfab?, which > is of non-class type ?int? > PetscCompGrid.cpp: At global scope: > PetscCompGrid.cpp:409: error: ?PetscErrorCode? does not name a type > PetscCompGrid.cpp: In member function ?virtual void > PetscCompGrid::applyBCs(IntVect, > int, const DataIndex&, Box, StencilTensor&)?: > PetscCompGrid.cpp:611: error: ?class ConstDiriBC? has no member named > ?getCoef? > PetscCompGrid.cpp: In member function ?virtual void > PetscCompGrid::InterpToCoarse(IntVect, int, const DataIndex&, > StencilTensor&)?: > PetscCompGrid.cpp:699: error: ?PetscInt? was not declared in this scope > PetscCompGrid.cpp:699: error: template argument 1 is invalid > PetscCompGrid.cpp:699: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:699: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:700: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:700: error: template argument 1 is invalid > PetscCompGrid.cpp:700: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:700: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:709: error: ?gidfab? cannot be used as a function > PetscCompGrid.cpp:720: error: expected `;' before ?cidx? > PetscCompGrid.cpp:720: error: ?cidx? was not declared in this scope > PetscCompGrid.cpp:723: error: ?kk? was not declared in this scope > PetscCompGrid.cpp:741: error: ?supgidfab? cannot be used as a function > PetscCompGrid.cpp:743: error: ?supgidfab? cannot be used as a function > PetscCompGrid.cpp: In member function ?virtual void > PetscCompGrid::InterpToFine(IntVect, int, const DataIndex&, > StencilTensor&)?: > PetscCompGrid.cpp:772: error: ?PetscInt? was not declared in this scope > PetscCompGrid.cpp:772: error: template argument 1 is invalid > PetscCompGrid.cpp:772: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:772: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:773: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:773: error: template argument 1 is invalid > PetscCompGrid.cpp:773: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:773: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:774: error: request for member ?box? in ?covergidfab?, > which is of non-class type ?int? > PetscCompGrid.cpp:783: error: ?gidfab? cannot be used as a function > PetscCompGrid.cpp: At global scope: > PetscCompGrid.cpp:812: error: ?PetscErrorCode? does not name a type > PetscCompGrid.cpp:917: error: ?PetscErrorCode? does not name a type > PetscCompGrid.cpp:964: error: ?PetscErrorCode? does not name a type > make[3]: *** [o/2d.Linux.64.mpicxx.mpif90.DEBUG.PETSC/PetscCompGrid.o] > Error 1 > make[2]: *** [AMRElliptic] Error 2 > make[1]: *** [AMRElliptic] Error 2 > make: *** [execPETSc] Error 2 > > From knepley at gmail.com Thu Dec 22 11:05:56 2016 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 22 Dec 2016 11:05:56 -0600 Subject: [petsc-users] Petsc+Chombo problem In-Reply-To: References: Message-ID: On Thu, Dec 22, 2016 at 11:01 AM, Sharp Stone wrote: > Dear folks, > > I'm now using Chombo with Petsc solver. 
When compiling the Chombo examples > of Petsc, I always got errors below. I wonder if anyone has such experience > to get rid of these. Thanks very much in advance! > > > In file included from PetscCompGrid.cpp:13: > PetscCompGrid.H:19:34: error: petsc-private/pcimpl.h: No such file or > directory > You might have a version mismatch here. PETSc changed the directory structure to conform better to the Linux standard $PETSC_DIR/include/petsc-private became $PETSC_DIR/include/petsc/private Thanks, Matt > In file included from PetscCompGrid.cpp:13: > PetscCompGrid.H:55: error: ?Mat? does not name a type > PetscCompGrid.H:60: error: ?PetscErrorCode? does not name a type > PetscCompGrid.H:62: error: ?PetscErrorCode? does not name a type > PetscCompGrid.H:63: error: ?PetscErrorCode? does not name a type > PetscCompGrid.H:66: error: ?PetscInt? was not declared in this scope > PetscCompGrid.H:66: error: template argument 1 is invalid > PetscCompGrid.H:76: error: ?PetscErrorCode? does not name a type > PetscCompGrid.H:90: error: ?PetscInt? was not declared in this scope > PetscCompGrid.H:90: error: template argument 1 is invalid > PetscCompGrid.H:90: error: template argument 1 is invalid > PetscCompGrid.H:90: error: template argument 1 is invalid > PetscCompGrid.H:90: error: template argument 1 is invalid > PetscCompGrid.H:92: error: ?PetscInt? was not declared in this scope > PetscCompGrid.H:92: error: template argument 1 is invalid > PetscCompGrid.H:92: error: template argument 1 is invalid > PetscCompGrid.H:92: error: template argument 1 is invalid > PetscCompGrid.H:92: error: template argument 1 is invalid > PetscCompGrid.H:93: error: ?PetscInt? was not declared in this scope > PetscCompGrid.H:93: error: template argument 1 is invalid > PetscCompGrid.H:93: error: template argument 1 is invalid > PetscCompGrid.H:93: error: template argument 1 is invalid > PetscCompGrid.H:93: error: template argument 1 is invalid > PetscCompGrid.H:97: error: ?Mat? does not name a type > PetscCompGrid.H: In constructor ?PetscCompGrid::PetscCompGrid(int)?: > PetscCompGrid.H:35: error: class ?PetscCompGrid? does not have any field > named ?m_mat? > PetscCompGrid.H: At global scope: > PetscCompGrid.H:138: error: ?PetscReal? does not name a type > PetscCompGrid.H:140: error: ISO C++ forbids declaration of ?PetscReal? > with no type > PetscCompGrid.H:140: error: expected ?;? before ?*? token > PetscCompGrid.cpp: In destructor ?virtual CompBC::~CompBC()?: > PetscCompGrid.cpp:38: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp:38: error: ?PetscFree? was not declared in this scope > PetscCompGrid.cpp: In constructor ?CompBC::CompBC(int, IntVect)?: > PetscCompGrid.cpp:41: error: class ?CompBC? does not have any field named > ?m_Rcoefs? > PetscCompGrid.cpp: In member function ?void CompBC::define(int, IntVect)?: > PetscCompGrid.cpp:49: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp:49: error: ?PetscFree? was not declared in this scope > PetscCompGrid.cpp:54: error: ?PetscReal? was not declared in this scope > PetscCompGrid.cpp:54: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp:54: error: ?PetscMalloc? was not declared in this scope > PetscCompGrid.cpp: At global scope: > PetscCompGrid.cpp:59: error: ?PetscReal? does not name a type > PetscCompGrid.cpp: In member function ?virtual void > ConstDiriBC::createCoefs()?: > PetscCompGrid.cpp:72: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp:77: error: ?m_Rcoefs? 
was not declared in this scope > PetscCompGrid.cpp:85: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp:93: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp: In member function ?virtual void > ConstDiriBC::operator()(FArrayBox&, const Box&, const ProblemDomain&, > Real, bool)?: > PetscCompGrid.cpp:137: error: ?m_Rcoefs? was not declared in this scope > PetscCompGrid.cpp: In member function ?virtual void > PetscCompGrid::clean()?: > PetscCompGrid.cpp:154: error: ?m_mat? was not declared in this scope > PetscCompGrid.cpp:156: error: ?MatDestroy? was not declared in this scope > PetscCompGrid.cpp: In member function ?virtual void > PetscCompGrid::define(const ProblemDomain&, Vector&, > Vector&, BCHolder, const RealVect&, int, int)?: > PetscCompGrid.cpp:218: error: request for member ?resize? in > ?((PetscCompGrid*)this)->PetscCompGrid::m_GIDs?, which is of non-class > type ?int? > PetscCompGrid.cpp:219: error: request for member ?resize? in > ?((PetscCompGrid*)this)->PetscCompGrid::m_crsSupportGIDs?, which is of > non-class type ?int? > PetscCompGrid.cpp:220: error: request for member ?resize? in > ?((PetscCompGrid*)this)->PetscCompGrid::m_fineCoverGIDs?, which is of > non-class type ?int? > PetscCompGrid.cpp:249: error: ?PetscInt? was not declared in this scope > PetscCompGrid.cpp:249: error: expected `;' before ?my0? > PetscCompGrid.cpp:256: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:256: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:256: error: template argument 1 is invalid > PetscCompGrid.cpp:256: error: template argument 1 is invalid > PetscCompGrid.cpp:256: error: template argument 1 is invalid > PetscCompGrid.cpp:257: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:257: error: template argument 1 is invalid > PetscCompGrid.cpp:257: error: template argument 1 is invalid > PetscCompGrid.cpp:257: error: new initializer expression list treated as > compound expression > PetscCompGrid.cpp:257: error: cannot convert ?IntVect? to ?int? in > initialization > PetscCompGrid.cpp:261: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:261: error: template argument 1 is invalid > PetscCompGrid.cpp:261: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:261: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:262: error: request for member ?setVal? in ?gidfab?, > which is of non-class type ?int? > PetscCompGrid.cpp:276: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:281: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:288: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:288: error: template argument 1 is invalid > PetscCompGrid.cpp:288: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:288: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:295: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:295: error: template argument 1 is invalid > PetscCompGrid.cpp:295: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:295: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:299: error: ?gidfab? cannot be used as a function > PetscCompGrid.cpp:299: error: ?my0? was not declared in this scope > PetscCompGrid.cpp:308: error: ?MPI_Comm? 
was not declared in this scope > PetscCompGrid.cpp:308: error: expected `;' before ?wcomm? > PetscCompGrid.cpp:310: error: ?wcomm? was not declared in this scope > PetscCompGrid.cpp:310: error: ?m_mat? was not declared in this scope > PetscCompGrid.cpp:310: error: ?MatCreate? was not declared in this scope > PetscCompGrid.cpp:311: error: ?my0? was not declared in this scope > PetscCompGrid.cpp:311: error: ?PETSC_DECIDE? was not declared in this scope > PetscCompGrid.cpp:311: error: ?MatSetSizes? was not declared in this scope > PetscCompGrid.cpp:312: error: ?MatSetBlockSize? was not declared in this > scope > PetscCompGrid.cpp:313: error: ?MATAIJ? was not declared in this scope > PetscCompGrid.cpp:313: error: ?MatSetType? was not declared in this scope > PetscCompGrid.cpp:314: error: ?MatSetFromOptions? was not declared in this > scope > PetscCompGrid.cpp:334: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:334: error: template argument 1 is invalid > PetscCompGrid.cpp:334: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:334: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:338: error: ?gidfab? cannot be used as a function > PetscCompGrid.cpp:338: error: ?gidfab? cannot be used as a function > PetscCompGrid.cpp:339: error: ?gidfab? cannot be used as a function > PetscCompGrid.cpp:342: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:348: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:350: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:350: error: template argument 1 is invalid > PetscCompGrid.cpp:350: error: template argument 1 is invalid > PetscCompGrid.cpp:350: error: invalid type in declaration before ?;? token > PetscCompGrid.cpp:358: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:358: error: template argument 1 is invalid > PetscCompGrid.cpp:358: error: template argument 1 is invalid > PetscCompGrid.cpp:358: error: new initializer expression list treated as > compound expression > PetscCompGrid.cpp:358: error: cannot convert ?const IntVect? to ?int? in > initialization > PetscCompGrid.cpp:359: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:359: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:359: error: template argument 1 is invalid > PetscCompGrid.cpp:359: error: template argument 1 is invalid > PetscCompGrid.cpp:359: error: template argument 1 is invalid > PetscCompGrid.cpp:363: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:363: error: template argument 1 is invalid > PetscCompGrid.cpp:363: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:363: error: no match for ?operator[]? in ?* pl[dit]? > PetscCompGrid.cpp:364: error: request for member ?setVal? in ?gidfab?, > which is of non-class type ?int? > PetscCompGrid.cpp:367: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:367: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:379: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:379: error: template argument 1 is invalid > PetscCompGrid.cpp:379: error: template argument 1 is invalid > PetscCompGrid.cpp:379: error: new initializer expression list treated as > compound expression > PetscCompGrid.cpp:379: error: cannot convert ?const IntVect? to ?int? 
in > initialization > PetscCompGrid.cpp:380: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:380: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:380: error: template argument 1 is invalid > PetscCompGrid.cpp:380: error: template argument 1 is invalid > PetscCompGrid.cpp:380: error: template argument 1 is invalid > PetscCompGrid.cpp:384: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:384: error: template argument 1 is invalid > PetscCompGrid.cpp:384: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:384: error: no match for ?operator[]? in ?* pl[dit]? > PetscCompGrid.cpp:385: error: request for member ?setVal? in ?gidfab?, > which is of non-class type ?int? > PetscCompGrid.cpp:388: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:388: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:394: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:394: error: template argument 1 is invalid > PetscCompGrid.cpp:394: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:394: error: no match for ?operator[]? in ?* pl[dit]? > PetscCompGrid.cpp:398: error: request for member ?box? in ?gidfab?, which > is of non-class type ?int? > PetscCompGrid.cpp: At global scope: > PetscCompGrid.cpp:409: error: ?PetscErrorCode? does not name a type > PetscCompGrid.cpp: In member function ?virtual void > PetscCompGrid::applyBCs(IntVect, int, const DataIndex&, Box, > StencilTensor&)?: > PetscCompGrid.cpp:611: error: ?class ConstDiriBC? has no member named > ?getCoef? > PetscCompGrid.cpp: In member function ?virtual void > PetscCompGrid::InterpToCoarse(IntVect, int, const DataIndex&, > StencilTensor&)?: > PetscCompGrid.cpp:699: error: ?PetscInt? was not declared in this scope > PetscCompGrid.cpp:699: error: template argument 1 is invalid > PetscCompGrid.cpp:699: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:699: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:700: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:700: error: template argument 1 is invalid > PetscCompGrid.cpp:700: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:700: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:709: error: ?gidfab? cannot be used as a function > PetscCompGrid.cpp:720: error: expected `;' before ?cidx? > PetscCompGrid.cpp:720: error: ?cidx? was not declared in this scope > PetscCompGrid.cpp:723: error: ?kk? was not declared in this scope > PetscCompGrid.cpp:741: error: ?supgidfab? cannot be used as a function > PetscCompGrid.cpp:743: error: ?supgidfab? cannot be used as a function > PetscCompGrid.cpp: In member function ?virtual void > PetscCompGrid::InterpToFine(IntVect, int, const DataIndex&, > StencilTensor&)?: > PetscCompGrid.cpp:772: error: ?PetscInt? was not declared in this scope > PetscCompGrid.cpp:772: error: template argument 1 is invalid > PetscCompGrid.cpp:772: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:772: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:773: error: ?PetscInt? cannot appear in a > constant-expression > PetscCompGrid.cpp:773: error: template argument 1 is invalid > PetscCompGrid.cpp:773: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:773: error: invalid types ?int[int]? 
for array subscript
> PetscCompGrid.cpp:774: error: request for member ?box? in ?covergidfab?,
> which is of non-class type ?int?
> PetscCompGrid.cpp:783: error: ?gidfab? cannot be used as a function
> PetscCompGrid.cpp: At global scope:
> PetscCompGrid.cpp:812: error: ?PetscErrorCode? does not name a type
> PetscCompGrid.cpp:917: error: ?PetscErrorCode? does not name a type
> PetscCompGrid.cpp:964: error: ?PetscErrorCode? does not name a type
> make[3]: *** [o/2d.Linux.64.mpicxx.mpif90.DEBUG.PETSC/PetscCompGrid.o]
> Error 1
> make[2]: *** [AMRElliptic] Error 2
> make[1]: *** [AMRElliptic] Error 2
> make: *** [execPETSc] Error 2
>
> --
> Best regards,
>
> Feng

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

From thronesf at gmail.com  Thu Dec 22 11:54:44 2016
From: thronesf at gmail.com (Sharp Stone)
Date: Thu, 22 Dec 2016 12:54:44 -0500
Subject: [petsc-users] Petsc+Chombo problem
In-Reply-To:
References:
Message-ID:

Mat and Satish,

Thank you for your replies. I'm using Chombo-3.2 + Petsc-3.7.4. I have not
yet received replies from Chombo. So I have to ask you guys to see if my
paths have been correctly set up. When I changed the #include to #include,
I still got the errors "PetscCompGrid.H:19:34: error: petsc-private/pcimpl.h:
No such file or directory". Highly appreciate your help!

On Thu, Dec 22, 2016 at 12:05 PM, Matthew Knepley wrote:

> On Thu, Dec 22, 2016 at 11:01 AM, Sharp Stone wrote:
>
>> Dear folks,
>>
>> I'm now using Chombo with Petsc solver. When compiling the Chombo
>> examples of Petsc, I always got errors below. I wonder if anyone has such
>> experience to get rid of these. Thanks very much in advance!
>>
>> In file included from PetscCompGrid.cpp:13:
>> PetscCompGrid.H:19:34: error: petsc-private/pcimpl.h: No such file or
>> directory
>>
> You might have a version mismatch here. PETSc changed the directory
> structure to conform better to the Linux standard
>
>    $PETSC_DIR/include/petsc-private
>
> became
>
>    $PETSC_DIR/include/petsc/private
>
> Thanks,
>
>    Matt

-- 
Best regards,

Feng
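For reference, the directory change described in the quoted reply above can be handled with a version guard, so that a Chombo-side header such as PetscCompGrid.H builds against both layouts. This is only a sketch: the 3.6 cutoff is an assumption based on the PETSc release notes, and the surrounding file contents are not taken from the actual Chombo source.

/* Version-portable include for the PC private header (sketch only). */
#include <petscversion.h>
#if PETSC_VERSION_MAJOR > 3 || (PETSC_VERSION_MAJOR == 3 && PETSC_VERSION_MINOR >= 6)
  /* PETSc 3.6 and later: private headers live under include/petsc/private */
  #include <petsc/private/pcimpl.h>
#else
  /* Older PETSc releases: private headers live under include/petsc-private */
  #include <petsc-private/pcimpl.h>
#endif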
From knepley at gmail.com  Thu Dec 22 12:07:34 2016
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 22 Dec 2016 12:07:34 -0600
Subject: [petsc-users] Petsc+Chombo problem
In-Reply-To:
References:
Message-ID:

On Thu, Dec 22, 2016 at 11:54 AM, Sharp Stone wrote:

> Mat and Satish,
>
> Thank you for your replies. I'm using Chombo-3.2 + Petsc-3.7.4. I have not
> yet received replies from Chombo. So I have to ask you guys to see if my
> paths have been correctly set up. When I changed the #include
> to #include, I still got
> the errors "PetscCompGrid.H:19:34: error: petsc-private/pcimpl.h: No such
> file or directory". Highly appreciate your help!
>
Clearly there was another include. Satish is right that you need to get
the right version for Chombo.

   Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
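One way to confirm which PETSc headers a build is actually picking up is a small throwaway program like the one below, compiled with the same include flags the Chombo build uses. The file name and the compile line are illustrative assumptions, not something from this thread; petscversion.h itself only needs -I$PETSC_DIR/include to be found.

/* petsc_version_check.c (illustrative): prints the version of the PETSc
   headers that the compiler's include path resolves to, e.g.
       cc petsc_version_check.c -I$PETSC_DIR/include && ./a.out   */
#include <stdio.h>
#include <petscversion.h>

int main(void)
{
  printf("PETSc headers seen by this build: %d.%d.%d\n",
         PETSC_VERSION_MAJOR, PETSC_VERSION_MINOR, PETSC_VERSION_SUBMINOR);
  return 0;
}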
From thronesf at gmail.com  Thu Dec 22 12:14:17 2016
From: thronesf at gmail.com (Sharp Stone)
Date: Thu, 22 Dec 2016 13:14:17 -0500
Subject: [petsc-users] Petsc+Chombo problem
In-Reply-To:
References:
Message-ID:

Thanks, Matt! Both libraries are the latest, and I guess they are
compatible except for the path setups. I'm waiting for their reply.

You are right. There's another file that needs to be modified. But the
error can't go away "PetscCompGrid.H:19:34: error: petsc/private/pcimpl.h:
No such file or directory". If you guys have any ideas, please let me know.

Thank you very much!

On Thu, Dec 22, 2016 at 1:07 PM, Matthew Knepley wrote:

> Clearly there was another include. Satish is right that you need to get
> the right version for Chombo.
>
>    Matt

-- 
Best regards,

Feng
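Since PETSc 3.7.x does ship $PETSC_DIR/include/petsc/private/pcimpl.h, a "No such file or directory" for that path usually means the compile line is not seeing the intended installation (missing PETSc include flags, or PETSC_DIR/PETSC_ARCH pointing somewhere else) rather than a missing file. A minimal compile-only check, with an assumed file name and assumed flags:

/* include_path_check.c (illustrative): compile only, for example
       mpicc -c include_path_check.c \
             -I$PETSC_DIR/include -I$PETSC_DIR/$PETSC_ARCH/include
   If this step reports "petsc/private/pcimpl.h: No such file or directory",
   the include flags (or PETSC_DIR) are not pointing at a PETSc 3.6+ install. */
#include <petsc/private/pcimpl.h>

int main(void)
{
  return 0;
}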
From bsmith at mcs.anl.gov  Thu Dec 22 12:19:50 2016
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 22 Dec 2016 12:19:50 -0600
Subject: [petsc-users] Petsc+Chombo problem
In-Reply-To:
References:
Message-ID: <8123B4EF-900D-4E11-BEFD-525D9E4F3DF0@mcs.anl.gov>

   It is very likely that the latest Chombo DOES NOT WORK with the latest
PETSc. You need to try earlier versions of PETSc with that version of
Chombo. Chombo should have this documented somewhere.

> On Dec 22, 2016, at 12:14 PM, Sharp Stone wrote:
>
> Thanks, Matt! Both libraries are the latest, and I guess they are
> compatible except for the path setups. I'm waiting for their reply.
>
> You are right. There's another file that needs to be modified.
But the error can't go away "PetscCompGrid.H:19:34: error: petsc/private/pcimpl.h: No such file or directory". If you guys have any ideas, please let me know. > > Thank you very much! > > On Thu, Dec 22, 2016 at 1:07 PM, Matthew Knepley wrote: > On Thu, Dec 22, 2016 at 11:54 AM, Sharp Stone wrote: > Mat and Satish, > > Thank you for your replies. I'm using Chombo-3.2 + Petsc-3.7.4. I have not yet received replies from Chombo. So I have to ask you guys to see if my paths have been correctly set up. When I changed the #include to #include, I still got the errors "PetscCompGrid.H:19:34: error: petsc-private/pcimpl.h: No such file or directory". Highly appreciate your help! > > Clearly there was another include. Satish is right that you need to get the right version for Chombo. > > Matt > > On Thu, Dec 22, 2016 at 12:05 PM, Matthew Knepley wrote: > On Thu, Dec 22, 2016 at 11:01 AM, Sharp Stone wrote: > Dear folks, > > I'm now using Chombo with Petsc solver. When compiling the Chombo examples of Petsc, I always got errors below. I wonder if anyone has such experience to get rid of these. Thanks very much in advance! > > > In file included from PetscCompGrid.cpp:13: > PetscCompGrid.H:19:34: error: petsc-private/pcimpl.h: No such file or directory > > You might have a version mismatch here. PETSc changed the directory structure to conform better to the Linux standard > > $PETSC_DIR/include/petsc-private > > became > > $PETSC_DIR/include/petsc/private > > Thanks, > > Matt > > In file included from PetscCompGrid.cpp:13: > PetscCompGrid.H:55: error: ?Mat? does not name a type > PetscCompGrid.H:60: error: ?PetscErrorCode? does not name a type > PetscCompGrid.H:62: error: ?PetscErrorCode? does not name a type > PetscCompGrid.H:63: error: ?PetscErrorCode? does not name a type > PetscCompGrid.H:66: error: ?PetscInt? was not declared in this scope > PetscCompGrid.H:66: error: template argument 1 is invalid > PetscCompGrid.H:76: error: ?PetscErrorCode? does not name a type > PetscCompGrid.H:90: error: ?PetscInt? was not declared in this scope > PetscCompGrid.H:90: error: template argument 1 is invalid > PetscCompGrid.H:90: error: template argument 1 is invalid > PetscCompGrid.H:90: error: template argument 1 is invalid > PetscCompGrid.H:90: error: template argument 1 is invalid > PetscCompGrid.H:92: error: ?PetscInt? was not declared in this scope > PetscCompGrid.H:92: error: template argument 1 is invalid > PetscCompGrid.H:92: error: template argument 1 is invalid > PetscCompGrid.H:92: error: template argument 1 is invalid > PetscCompGrid.H:92: error: template argument 1 is invalid > PetscCompGrid.H:93: error: ?PetscInt? was not declared in this scope > PetscCompGrid.H:93: error: template argument 1 is invalid > PetscCompGrid.H:93: error: template argument 1 is invalid > PetscCompGrid.H:93: error: template argument 1 is invalid > PetscCompGrid.H:93: error: template argument 1 is invalid > PetscCompGrid.H:97: error: ?Mat? does not name a type > PetscCompGrid.H: In constructor ?PetscCompGrid::PetscCompGrid(int)?: > PetscCompGrid.H:35: error: class ?PetscCompGrid? does not have any field named ?m_mat? > PetscCompGrid.H: At global scope: > PetscCompGrid.H:138: error: ?PetscReal? does not name a type > PetscCompGrid.H:140: error: ISO C++ forbids declaration of ?PetscReal? with no type > PetscCompGrid.H:140: error: expected ?;? before ?*? token > PetscCompGrid.cpp: In destructor ?virtual CompBC::~CompBC()?: > PetscCompGrid.cpp:38: error: ?m_Rcoefs? 
was not declared in this scope > [... intervening lines of the quoted compiler error listing trimmed; they repeat the original report verbatim ...] > PetscCompGrid.cpp:723: error: ?kk?
was not declared in this scope > PetscCompGrid.cpp:741: error: ?supgidfab? cannot be used as a function > PetscCompGrid.cpp:743: error: ?supgidfab? cannot be used as a function > PetscCompGrid.cpp: In member function ?virtual void PetscCompGrid::InterpToFine(IntVect, int, const DataIndex&, StencilTensor&)?: > PetscCompGrid.cpp:772: error: ?PetscInt? was not declared in this scope > PetscCompGrid.cpp:772: error: template argument 1 is invalid > PetscCompGrid.cpp:772: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:772: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:773: error: ?PetscInt? cannot appear in a constant-expression > PetscCompGrid.cpp:773: error: template argument 1 is invalid > PetscCompGrid.cpp:773: error: invalid type in declaration before ?=? token > PetscCompGrid.cpp:773: error: invalid types ?int[int]? for array subscript > PetscCompGrid.cpp:774: error: request for member ?box? in ?covergidfab?, which is of non-class type ?int? > PetscCompGrid.cpp:783: error: ?gidfab? cannot be used as a function > PetscCompGrid.cpp: At global scope: > PetscCompGrid.cpp:812: error: ?PetscErrorCode? does not name a type > PetscCompGrid.cpp:917: error: ?PetscErrorCode? does not name a type > PetscCompGrid.cpp:964: error: ?PetscErrorCode? does not name a type > make[3]: *** [o/2d.Linux.64.mpicxx.mpif90.DEBUG.PETSC/PetscCompGrid.o] Error 1 > make[2]: *** [AMRElliptic] Error 2 > make[1]: *** [AMRElliptic] Error 2 > make: *** [execPETSc] Error 2 > > -- > Best regards, > > Feng > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > -- > Best regards, > > Feng > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > > > -- > Best regards, > > Feng From thronesf at gmail.com Thu Dec 22 12:23:17 2016 From: thronesf at gmail.com (Sharp Stone) Date: Thu, 22 Dec 2016 13:23:17 -0500 Subject: [petsc-users] Petsc+Chombo problem In-Reply-To: <8123B4EF-900D-4E11-BEFD-525D9E4F3DF0@mcs.anl.gov> References: <8123B4EF-900D-4E11-BEFD-525D9E4F3DF0@mcs.anl.gov> Message-ID: All right. Thanks Barry! I'll try older versions of PETSc. Merry Christmas to you guys! On Thu, Dec 22, 2016 at 1:19 PM, Barry Smith wrote: > > It is very likely the lastest Chombo DOES NOT WORK with the latest > PETSc. You need to try early versions of PETSc with that version of Chombo. > Chombo should have this documented somewhere. > > > > On Dec 22, 2016, at 12:14 PM, Sharp Stone wrote: > > > > Thanks, Matt! Both libraries are the latest, and I guess they are > compatible except for the path setups. I'm waiting for their reply. > > > > You are right. There's another file that needs to be modified. But the > error can't go away "PetscCompGrid.H:19:34: error: petsc/private/pcimpl.h: > No such file or directory". If you guys have any ideas, please let me know. > > > > Thank you very much! > > > > On Thu, Dec 22, 2016 at 1:07 PM, Matthew Knepley > wrote: > > On Thu, Dec 22, 2016 at 11:54 AM, Sharp Stone > wrote: > > Mat and Satish, > > > > Thank you for your replies. I'm using Chombo-3.2 + Petsc-3.7.4. I have > not yet received replies from Chombo. So I have to ask you guys to see if > my paths have been correctly set up. 
When I changed the #include > to #include, I still got > the errors "PetscCompGrid.H:19:34: error: petsc-private/pcimpl.h: No such > file or directory". Highly appreciate your help! > > > > Clearly there was another include. Satish is right that you need to get > the right version for Chombo. > > > > Matt > > > > On Thu, Dec 22, 2016 at 12:05 PM, Matthew Knepley > wrote: > > On Thu, Dec 22, 2016 at 11:01 AM, Sharp Stone > wrote: > > Dear folks, > > > > I'm now using Chombo with Petsc solver. When compiling the Chombo > examples of Petsc, I always got errors below. I wonder if anyone has such > experience to get rid of these. Thanks very much in advance! > > > > > > In file included from PetscCompGrid.cpp:13: > > PetscCompGrid.H:19:34: error: petsc-private/pcimpl.h: No such file or > directory > > > > You might have a version mismatch here. PETSc changed the directory > structure to conform better to the Linux standard > > > > $PETSC_DIR/include/petsc-private > > > > became > > > > $PETSC_DIR/include/petsc/private > > > > Thanks, > > > > Matt > > > > In file included from PetscCompGrid.cpp:13: > > PetscCompGrid.H:55: error: ?Mat? does not name a type > > PetscCompGrid.H:60: error: ?PetscErrorCode? does not name a type > > PetscCompGrid.H:62: error: ?PetscErrorCode? does not name a type > > PetscCompGrid.H:63: error: ?PetscErrorCode? does not name a type > > PetscCompGrid.H:66: error: ?PetscInt? was not declared in this scope > > PetscCompGrid.H:66: error: template argument 1 is invalid > > PetscCompGrid.H:76: error: ?PetscErrorCode? does not name a type > > PetscCompGrid.H:90: error: ?PetscInt? was not declared in this scope > > PetscCompGrid.H:90: error: template argument 1 is invalid > > PetscCompGrid.H:90: error: template argument 1 is invalid > > PetscCompGrid.H:90: error: template argument 1 is invalid > > PetscCompGrid.H:90: error: template argument 1 is invalid > > PetscCompGrid.H:92: error: ?PetscInt? was not declared in this scope > > PetscCompGrid.H:92: error: template argument 1 is invalid > > PetscCompGrid.H:92: error: template argument 1 is invalid > > PetscCompGrid.H:92: error: template argument 1 is invalid > > PetscCompGrid.H:92: error: template argument 1 is invalid > > PetscCompGrid.H:93: error: ?PetscInt? was not declared in this scope > > PetscCompGrid.H:93: error: template argument 1 is invalid > > PetscCompGrid.H:93: error: template argument 1 is invalid > > PetscCompGrid.H:93: error: template argument 1 is invalid > > PetscCompGrid.H:93: error: template argument 1 is invalid > > PetscCompGrid.H:97: error: ?Mat? does not name a type > > PetscCompGrid.H: In constructor ?PetscCompGrid::PetscCompGrid(int)?: > > PetscCompGrid.H:35: error: class ?PetscCompGrid? does not have any field > named ?m_mat? > > PetscCompGrid.H: At global scope: > > PetscCompGrid.H:138: error: ?PetscReal? does not name a type > > PetscCompGrid.H:140: error: ISO C++ forbids declaration of ?PetscReal? > with no type > > PetscCompGrid.H:140: error: expected ?;? before ?*? token > > PetscCompGrid.cpp: In destructor ?virtual CompBC::~CompBC()?: > > PetscCompGrid.cpp:38: error: ?m_Rcoefs? was not declared in this scope > > PetscCompGrid.cpp:38: error: ?PetscFree? was not declared in this scope > > PetscCompGrid.cpp: In constructor ?CompBC::CompBC(int, IntVect)?: > > PetscCompGrid.cpp:41: error: class ?CompBC? does not have any field > named ?m_Rcoefs? > > PetscCompGrid.cpp: In member function ?void CompBC::define(int, > IntVect)?: > > PetscCompGrid.cpp:49: error: ?m_Rcoefs? 
was not declared in this scope > > [... intervening lines of the quoted compiler error listing trimmed; they repeat the original report verbatim ...]
> > PetscCompGrid.cpp:720: error: ?cidx? was not declared in this scope > > PetscCompGrid.cpp:723: error: ?kk? was not declared in this scope > > PetscCompGrid.cpp:741: error: ?supgidfab? cannot be used as a function > > PetscCompGrid.cpp:743: error: ?supgidfab? cannot be used as a function > > PetscCompGrid.cpp: In member function ?virtual void > PetscCompGrid::InterpToFine(IntVect, int, const DataIndex&, > StencilTensor&)?: > > PetscCompGrid.cpp:772: error: ?PetscInt? was not declared in this scope > > PetscCompGrid.cpp:772: error: template argument 1 is invalid > > PetscCompGrid.cpp:772: error: invalid type in declaration before ?=? > token > > PetscCompGrid.cpp:772: error: invalid types ?int[int]? for array > subscript > > PetscCompGrid.cpp:773: error: ?PetscInt? cannot appear in a > constant-expression > > PetscCompGrid.cpp:773: error: template argument 1 is invalid > > PetscCompGrid.cpp:773: error: invalid type in declaration before ?=? > token > > PetscCompGrid.cpp:773: error: invalid types ?int[int]? for array > subscript > > PetscCompGrid.cpp:774: error: request for member ?box? in ?covergidfab?, > which is of non-class type ?int? > > PetscCompGrid.cpp:783: error: ?gidfab? cannot be used as a function > > PetscCompGrid.cpp: At global scope: > > PetscCompGrid.cpp:812: error: ?PetscErrorCode? does not name a type > > PetscCompGrid.cpp:917: error: ?PetscErrorCode? does not name a type > > PetscCompGrid.cpp:964: error: ?PetscErrorCode? does not name a type > > make[3]: *** [o/2d.Linux.64.mpicxx.mpif90.DEBUG.PETSC/PetscCompGrid.o] > Error 1 > > make[2]: *** [AMRElliptic] Error 2 > > make[1]: *** [AMRElliptic] Error 2 > > make: *** [execPETSc] Error 2 > > > > -- > > Best regards, > > > > Feng > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > > > > > -- > > Best regards, > > > > Feng > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > > > > > -- > > Best regards, > > > > Feng > > -- Best regards, Feng -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Dec 22 12:23:54 2016 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 22 Dec 2016 12:23:54 -0600 Subject: [petsc-users] Petsc+Chombo problem In-Reply-To: References: Message-ID: On Thu, Dec 22, 2016 at 12:14 PM, Sharp Stone wrote: > Thanks, Matt! Both libraries are the latest, and I guess they are > compatible except for the path setups. I'm waiting for their reply. > No they are clearly not compatible. > You are right. There's another file that needs to be modified. But the > error can't go away "PetscCompGrid.H:19:34: error: petsc/private/pcimpl.h: > No such file or directory". If you guys have any ideas, please let me know. > 1) CLEARLY, you must replace all instances of this in Chombo in order to get rid of this error 2) EVEN if you do that, it won't matter at all. If Chombo is including those headers, it is using a different interface, and if you include the current headers, you will get type mismatches. 2) The ONLY solution here is to talk to the Chombo people and see which version is compatible. If the Chombo people do not respond, you should really think about whether it is a piece of software you want to be using. Matt > Thank you very much! 
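[A hedged illustration, not part of the original message: the "replace all instances" step above can be made version-proof inside Chombo's sources with a small preprocessor guard. This is only a sketch; it assumes the petsc-private -> petsc/private rename landed around PETSc 3.6, and it only fixes the include path, not the private-API mismatch warned about above.

    /* Hypothetical guard for Chombo's PetscCompGrid.H -- illustration only.
       petscversion.h lives at include/petscversion.h in both layouts, so it
       can be pulled in first just for the version macros. */
    #include <petscversion.h>
    #if PETSC_VERSION_MAJOR < 3 || (PETSC_VERSION_MAJOR == 3 && PETSC_VERSION_MINOR < 6)
    #include <petsc-private/pcimpl.h>   /* older layout (assumed: pre-3.6 releases such as 3.4/3.5) */
    #else
    #include <petsc/private/pcimpl.h>   /* newer layout (assumed: 3.6 and later) */
    #endif

Even with such a guard in every affected file, the type/interface mismatch described above would remain, so building against a PETSc release that Chombo 3.2 was developed for is still the safer route.]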
> > [... the earlier messages of this thread, quoted here in full (including another verbatim copy of the compiler error listing), trimmed ...]
>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>> -- Norbert Wiener >> > > > > -- > Best regards, > > Feng > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Dec 22 12:45:04 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 22 Dec 2016 12:45:04 -0600 Subject: [petsc-users] Petsc+Chombo problem In-Reply-To: References: <8123B4EF-900D-4E11-BEFD-525D9E4F3DF0@mcs.anl.gov> Message-ID: >From https://commons.lbl.gov/display/chombo/Chombo+Download+Page >>> The 3.2 version of this software was released March 25. 2014 Chombo 3.2 Release Notes: We have implemented interfaces to the PETSc solver library << So it might be compatible with petsc-3.4 http://www.mcs.anl.gov/petsc/documentation/changes/index.html Satish On Thu, 22 Dec 2016, Sharp Stone wrote: > All right. Thanks Barry! I'll try older versions of PETSc. > > Merry Christmas to you guys! > > On Thu, Dec 22, 2016 at 1:19 PM, Barry Smith wrote: > > > > > It is very likely the lastest Chombo DOES NOT WORK with the latest > > PETSc. You need to try early versions of PETSc with that version of Chombo. > > Chombo should have this documented somewhere. > > > > > > > On Dec 22, 2016, at 12:14 PM, Sharp Stone wrote: > > > > > > Thanks, Matt! Both libraries are the latest, and I guess they are > > compatible except for the path setups. I'm waiting for their reply. > > > > > > You are right. There's another file that needs to be modified. But the > > error can't go away "PetscCompGrid.H:19:34: error: petsc/private/pcimpl.h: > > No such file or directory". If you guys have any ideas, please let me know. > > > > > > Thank you very much! > > > > > > On Thu, Dec 22, 2016 at 1:07 PM, Matthew Knepley > > wrote: > > > On Thu, Dec 22, 2016 at 11:54 AM, Sharp Stone > > wrote: > > > Mat and Satish, > > > > > > Thank you for your replies. I'm using Chombo-3.2 + Petsc-3.7.4. I have > > not yet received replies from Chombo. So I have to ask you guys to see if > > my paths have been correctly set up. When I changed the #include > > to #include, I still got > > the errors "PetscCompGrid.H:19:34: error: petsc-private/pcimpl.h: No such > > file or directory". Highly appreciate your help! > > > > > > Clearly there was another include. Satish is right that you need to get > > the right version for Chombo. > > > > > > Matt > > > > > > On Thu, Dec 22, 2016 at 12:05 PM, Matthew Knepley > > wrote: > > > On Thu, Dec 22, 2016 at 11:01 AM, Sharp Stone > > wrote: > > > Dear folks, > > > > > > I'm now using Chombo with Petsc solver. When compiling the Chombo > > examples of Petsc, I always got errors below. I wonder if anyone has such > > experience to get rid of these. Thanks very much in advance! > > > > > > > > > In file included from PetscCompGrid.cpp:13: > > > PetscCompGrid.H:19:34: error: petsc-private/pcimpl.h: No such file or > > directory > > > > > > You might have a version mismatch here. PETSc changed the directory > > structure to conform better to the Linux standard > > > > > > $PETSC_DIR/include/petsc-private > > > > > > became > > > > > > $PETSC_DIR/include/petsc/private > > > > > > Thanks, > > > > > > Matt > > > > > > In file included from PetscCompGrid.cpp:13: > > > PetscCompGrid.H:55: error: ?Mat? does not name a type > > > PetscCompGrid.H:60: error: ?PetscErrorCode? does not name a type > > > PetscCompGrid.H:62: error: ?PetscErrorCode? 
does not name a type > > > PetscCompGrid.H:63: error: ?PetscErrorCode? does not name a type > > > PetscCompGrid.H:66: error: ?PetscInt? was not declared in this scope > > > PetscCompGrid.H:66: error: template argument 1 is invalid > > > PetscCompGrid.H:76: error: ?PetscErrorCode? does not name a type > > > PetscCompGrid.H:90: error: ?PetscInt? was not declared in this scope > > > PetscCompGrid.H:90: error: template argument 1 is invalid > > > PetscCompGrid.H:90: error: template argument 1 is invalid > > > PetscCompGrid.H:90: error: template argument 1 is invalid > > > PetscCompGrid.H:90: error: template argument 1 is invalid > > > PetscCompGrid.H:92: error: ?PetscInt? was not declared in this scope > > > PetscCompGrid.H:92: error: template argument 1 is invalid > > > PetscCompGrid.H:92: error: template argument 1 is invalid > > > PetscCompGrid.H:92: error: template argument 1 is invalid > > > PetscCompGrid.H:92: error: template argument 1 is invalid > > > PetscCompGrid.H:93: error: ?PetscInt? was not declared in this scope > > > PetscCompGrid.H:93: error: template argument 1 is invalid > > > PetscCompGrid.H:93: error: template argument 1 is invalid > > > PetscCompGrid.H:93: error: template argument 1 is invalid > > > PetscCompGrid.H:93: error: template argument 1 is invalid > > > PetscCompGrid.H:97: error: ?Mat? does not name a type > > > PetscCompGrid.H: In constructor ?PetscCompGrid::PetscCompGrid(int)?: > > > PetscCompGrid.H:35: error: class ?PetscCompGrid? does not have any field > > named ?m_mat? > > > PetscCompGrid.H: At global scope: > > > PetscCompGrid.H:138: error: ?PetscReal? does not name a type > > > PetscCompGrid.H:140: error: ISO C++ forbids declaration of ?PetscReal? > > with no type > > > PetscCompGrid.H:140: error: expected ?;? before ?*? token > > > PetscCompGrid.cpp: In destructor ?virtual CompBC::~CompBC()?: > > > PetscCompGrid.cpp:38: error: ?m_Rcoefs? was not declared in this scope > > > PetscCompGrid.cpp:38: error: ?PetscFree? was not declared in this scope > > > PetscCompGrid.cpp: In constructor ?CompBC::CompBC(int, IntVect)?: > > > PetscCompGrid.cpp:41: error: class ?CompBC? does not have any field > > named ?m_Rcoefs? > > > PetscCompGrid.cpp: In member function ?void CompBC::define(int, > > IntVect)?: > > > PetscCompGrid.cpp:49: error: ?m_Rcoefs? was not declared in this scope > > > PetscCompGrid.cpp:49: error: ?PetscFree? was not declared in this scope > > > PetscCompGrid.cpp:54: error: ?PetscReal? was not declared in this scope > > > PetscCompGrid.cpp:54: error: ?m_Rcoefs? was not declared in this scope > > > PetscCompGrid.cpp:54: error: ?PetscMalloc? was not declared in this scope > > > PetscCompGrid.cpp: At global scope: > > > PetscCompGrid.cpp:59: error: ?PetscReal? does not name a type > > > PetscCompGrid.cpp: In member function ?virtual void > > ConstDiriBC::createCoefs()?: > > > PetscCompGrid.cpp:72: error: ?m_Rcoefs? was not declared in this scope > > > PetscCompGrid.cpp:77: error: ?m_Rcoefs? was not declared in this scope > > > PetscCompGrid.cpp:85: error: ?m_Rcoefs? was not declared in this scope > > > PetscCompGrid.cpp:93: error: ?m_Rcoefs? was not declared in this scope > > > PetscCompGrid.cpp: In member function ?virtual void > > ConstDiriBC::operator()(FArrayBox&, const Box&, const ProblemDomain&, > > Real, bool)?: > > > PetscCompGrid.cpp:137: error: ?m_Rcoefs? was not declared in this scope > > > PetscCompGrid.cpp: In member function ?virtual void > > PetscCompGrid::clean()?: > > > PetscCompGrid.cpp:154: error: ?m_mat? 
was not declared in this scope > > > PetscCompGrid.cpp:156: error: ?MatDestroy? was not declared in this scope > > > PetscCompGrid.cpp: In member function ?virtual void > > PetscCompGrid::define(const ProblemDomain&, Vector&, > > Vector&, BCHolder, const RealVect&, int, int)?: > > > PetscCompGrid.cpp:218: error: request for member ?resize? in > > ?((PetscCompGrid*)this)->PetscCompGrid::m_GIDs?, which is of non-class > > type ?int? > > > PetscCompGrid.cpp:219: error: request for member ?resize? in > > ?((PetscCompGrid*)this)->PetscCompGrid::m_crsSupportGIDs?, which is of > > non-class type ?int? > > > PetscCompGrid.cpp:220: error: request for member ?resize? in > > ?((PetscCompGrid*)this)->PetscCompGrid::m_fineCoverGIDs?, which is of > > non-class type ?int? > > > PetscCompGrid.cpp:249: error: ?PetscInt? was not declared in this scope > > > PetscCompGrid.cpp:249: error: expected `;' before ?my0? > > > PetscCompGrid.cpp:256: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:256: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:256: error: template argument 1 is invalid > > > PetscCompGrid.cpp:256: error: template argument 1 is invalid > > > PetscCompGrid.cpp:256: error: template argument 1 is invalid > > > PetscCompGrid.cpp:257: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:257: error: template argument 1 is invalid > > > PetscCompGrid.cpp:257: error: template argument 1 is invalid > > > PetscCompGrid.cpp:257: error: new initializer expression list treated as > > compound expression > > > PetscCompGrid.cpp:257: error: cannot convert ?IntVect? to ?int? in > > initialization > > > PetscCompGrid.cpp:261: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:261: error: template argument 1 is invalid > > > PetscCompGrid.cpp:261: error: invalid type in declaration before ?=? > > token > > > PetscCompGrid.cpp:261: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:262: error: request for member ?setVal? in ?gidfab?, > > which is of non-class type ?int? > > > PetscCompGrid.cpp:276: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:281: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:288: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:288: error: template argument 1 is invalid > > > PetscCompGrid.cpp:288: error: invalid type in declaration before ?=? > > token > > > PetscCompGrid.cpp:288: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:295: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:295: error: template argument 1 is invalid > > > PetscCompGrid.cpp:295: error: invalid type in declaration before ?=? > > token > > > PetscCompGrid.cpp:295: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:299: error: ?gidfab? cannot be used as a function > > > PetscCompGrid.cpp:299: error: ?my0? was not declared in this scope > > > PetscCompGrid.cpp:308: error: ?MPI_Comm? was not declared in this scope > > > PetscCompGrid.cpp:308: error: expected `;' before ?wcomm? > > > PetscCompGrid.cpp:310: error: ?wcomm? was not declared in this scope > > > PetscCompGrid.cpp:310: error: ?m_mat? was not declared in this scope > > > PetscCompGrid.cpp:310: error: ?MatCreate? was not declared in this scope > > > PetscCompGrid.cpp:311: error: ?my0? 
was not declared in this scope > > > PetscCompGrid.cpp:311: error: ?PETSC_DECIDE? was not declared in this > > scope > > > PetscCompGrid.cpp:311: error: ?MatSetSizes? was not declared in this > > scope > > > PetscCompGrid.cpp:312: error: ?MatSetBlockSize? was not declared in this > > scope > > > PetscCompGrid.cpp:313: error: ?MATAIJ? was not declared in this scope > > > PetscCompGrid.cpp:313: error: ?MatSetType? was not declared in this scope > > > PetscCompGrid.cpp:314: error: ?MatSetFromOptions? was not declared in > > this scope > > > PetscCompGrid.cpp:334: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:334: error: template argument 1 is invalid > > > PetscCompGrid.cpp:334: error: invalid type in declaration before ?=? > > token > > > PetscCompGrid.cpp:334: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:338: error: ?gidfab? cannot be used as a function > > > PetscCompGrid.cpp:338: error: ?gidfab? cannot be used as a function > > > PetscCompGrid.cpp:339: error: ?gidfab? cannot be used as a function > > > PetscCompGrid.cpp:342: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:348: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:350: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:350: error: template argument 1 is invalid > > > PetscCompGrid.cpp:350: error: template argument 1 is invalid > > > PetscCompGrid.cpp:350: error: invalid type in declaration before ?;? > > token > > > PetscCompGrid.cpp:358: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:358: error: template argument 1 is invalid > > > PetscCompGrid.cpp:358: error: template argument 1 is invalid > > > PetscCompGrid.cpp:358: error: new initializer expression list treated as > > compound expression > > > PetscCompGrid.cpp:358: error: cannot convert ?const IntVect? to ?int? in > > initialization > > > PetscCompGrid.cpp:359: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:359: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:359: error: template argument 1 is invalid > > > PetscCompGrid.cpp:359: error: template argument 1 is invalid > > > PetscCompGrid.cpp:359: error: template argument 1 is invalid > > > PetscCompGrid.cpp:363: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:363: error: template argument 1 is invalid > > > PetscCompGrid.cpp:363: error: invalid type in declaration before ?=? > > token > > > PetscCompGrid.cpp:363: error: no match for ?operator[]? in ?* pl[dit]? > > > PetscCompGrid.cpp:364: error: request for member ?setVal? in ?gidfab?, > > which is of non-class type ?int? > > > PetscCompGrid.cpp:367: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:367: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:379: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:379: error: template argument 1 is invalid > > > PetscCompGrid.cpp:379: error: template argument 1 is invalid > > > PetscCompGrid.cpp:379: error: new initializer expression list treated as > > compound expression > > > PetscCompGrid.cpp:379: error: cannot convert ?const IntVect? to ?int? in > > initialization > > > PetscCompGrid.cpp:380: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:380: error: ?PetscInt? 
cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:380: error: template argument 1 is invalid > > > PetscCompGrid.cpp:380: error: template argument 1 is invalid > > > PetscCompGrid.cpp:380: error: template argument 1 is invalid > > > PetscCompGrid.cpp:384: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:384: error: template argument 1 is invalid > > > PetscCompGrid.cpp:384: error: invalid type in declaration before ?=? > > token > > > PetscCompGrid.cpp:384: error: no match for ?operator[]? in ?* pl[dit]? > > > PetscCompGrid.cpp:385: error: request for member ?setVal? in ?gidfab?, > > which is of non-class type ?int? > > > PetscCompGrid.cpp:388: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:388: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:394: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:394: error: template argument 1 is invalid > > > PetscCompGrid.cpp:394: error: invalid type in declaration before ?=? > > token > > > PetscCompGrid.cpp:394: error: no match for ?operator[]? in ?* pl[dit]? > > > PetscCompGrid.cpp:398: error: request for member ?box? in ?gidfab?, > > which is of non-class type ?int? > > > PetscCompGrid.cpp: At global scope: > > > PetscCompGrid.cpp:409: error: ?PetscErrorCode? does not name a type > > > PetscCompGrid.cpp: In member function ?virtual void > > PetscCompGrid::applyBCs(IntVect, int, const DataIndex&, Box, > > StencilTensor&)?: > > > PetscCompGrid.cpp:611: error: ?class ConstDiriBC? has no member named > > ?getCoef? > > > PetscCompGrid.cpp: In member function ?virtual void > > PetscCompGrid::InterpToCoarse(IntVect, int, const DataIndex&, > > StencilTensor&)?: > > > PetscCompGrid.cpp:699: error: ?PetscInt? was not declared in this scope > > > PetscCompGrid.cpp:699: error: template argument 1 is invalid > > > PetscCompGrid.cpp:699: error: invalid type in declaration before ?=? > > token > > > PetscCompGrid.cpp:699: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:700: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:700: error: template argument 1 is invalid > > > PetscCompGrid.cpp:700: error: invalid type in declaration before ?=? > > token > > > PetscCompGrid.cpp:700: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:709: error: ?gidfab? cannot be used as a function > > > PetscCompGrid.cpp:720: error: expected `;' before ?cidx? > > > PetscCompGrid.cpp:720: error: ?cidx? was not declared in this scope > > > PetscCompGrid.cpp:723: error: ?kk? was not declared in this scope > > > PetscCompGrid.cpp:741: error: ?supgidfab? cannot be used as a function > > > PetscCompGrid.cpp:743: error: ?supgidfab? cannot be used as a function > > > PetscCompGrid.cpp: In member function ?virtual void > > PetscCompGrid::InterpToFine(IntVect, int, const DataIndex&, > > StencilTensor&)?: > > > PetscCompGrid.cpp:772: error: ?PetscInt? was not declared in this scope > > > PetscCompGrid.cpp:772: error: template argument 1 is invalid > > > PetscCompGrid.cpp:772: error: invalid type in declaration before ?=? > > token > > > PetscCompGrid.cpp:772: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:773: error: ?PetscInt? cannot appear in a > > constant-expression > > > PetscCompGrid.cpp:773: error: template argument 1 is invalid > > > PetscCompGrid.cpp:773: error: invalid type in declaration before ?=? 
> > token > > > PetscCompGrid.cpp:773: error: invalid types ?int[int]? for array > > subscript > > > PetscCompGrid.cpp:774: error: request for member ?box? in ?covergidfab?, > > which is of non-class type ?int? > > > PetscCompGrid.cpp:783: error: ?gidfab? cannot be used as a function > > > PetscCompGrid.cpp: At global scope: > > > PetscCompGrid.cpp:812: error: ?PetscErrorCode? does not name a type > > > PetscCompGrid.cpp:917: error: ?PetscErrorCode? does not name a type > > > PetscCompGrid.cpp:964: error: ?PetscErrorCode? does not name a type > > > make[3]: *** [o/2d.Linux.64.mpicxx.mpif90.DEBUG.PETSC/PetscCompGrid.o] > > Error 1 > > > make[2]: *** [AMRElliptic] Error 2 > > > make[1]: *** [AMRElliptic] Error 2 > > > make: *** [execPETSc] Error 2 > > > > > > -- > > > Best regards, > > > > > > Feng > > > > > > > > > > > > -- > > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which their > > experiments lead. > > > -- Norbert Wiener > > > > > > > > > > > > -- > > > Best regards, > > > > > > Feng > > > > > > > > > > > > -- > > > What most experimenters take for granted before they begin their > > experiments is infinitely more interesting than any results to which their > > experiments lead. > > > -- Norbert Wiener > > > > > > > > > > > > -- > > > Best regards, > > > > > > Feng > > > > > > > From bsmith at mcs.anl.gov Thu Dec 22 14:25:11 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 22 Dec 2016 14:25:11 -0600 Subject: [petsc-users] traceback & error handling In-Reply-To: <220B732B-87A7-440E-AC7C-6792CF7E41CF@ices.utexas.edu> References: <220B732B-87A7-440E-AC7C-6792CF7E41CF@ices.utexas.edu> Message-ID: Andreas, Since almost all compilers now provide a variant of __funct__ automatically I've decided we can now remove all these definitions and still provide the tracebacks. Thus we are removing the need to define __FUNCT__ in the next couple of days in the master branch. One still needs, of course, to provide the CHKERRQ(ierr); Barry > On Dec 19, 2016, at 2:10 PM, Andreas Mang wrote: > > Hey guys: > > I have some problems with the error handling. On my local machine (where I debug) I get a million warning messages if I do > > #undef __FUNCT__ > #define __FUNCT__ ?ClassName::FunctionName? > > (i.e., file.cpp:XXX: __FUNCT__=?ClassName::FunctionName" does not agree with __func__=?FunctionName?) > > If I run the same code using intel15 compilers it?s the opposite (which I discovered just now). That is, I get an error for > > #undef __FUNCT__ > #define __FUNCT__ ?FunctionName? > > (i.e., file.cpp:XXX: __FUNCT__=?FunctionName" does not agree with __func__=?ClassName::FunctionName?) > > I do like the error handling by PETSc. I think it?s quite helpful. Obviously, I can write my own stack trace but why bother if it?s already there. I did check your online documentation and I could no longer find these definitions in your code. So, should I just remove all of these definitions? Is there a quick fix? Is this depreciated? > > > Second of all, I saw you do no longer use error handling in your examples at all, i.e., > > ierr = FunctionCall(); CHKERRQ(ierr); > > and friends have vanished. Why is that? Is it just to keep the examples simple or are you moving away from using these Macros for error handling. > > I hope I did not miss any changes in this regard in one of your announcements. I could not find anything in the documentation. 
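As a small illustration of the pattern Barry describes above -- the CHKERRQ() checks are kept, the __FUNCT__ definitions are simply dropped -- a user routine could now be written as in the following sketch (the function name and the vector calls here are invented for illustration, they are not taken from Andreas's code):

  #include <petscvec.h>

  /* No "#undef __FUNCT__ / #define __FUNCT__ ..." block is needed anymore:
     the compiler-provided function name is used for the traceback. */
  PetscErrorCode FillAndSum(Vec v, PetscScalar alpha, PetscScalar *sum)
  {
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = VecSet(v, alpha);CHKERRQ(ierr);   /* every PETSc call is still checked */
    ierr = VecSum(v, sum);CHKERRQ(ierr);     /* so the error traceback is kept    */
    PetscFunctionReturn(0);
  }

A failing call still produces the usual PETSc stack trace; only the function names in it now come from the compiler rather than from the hand-written macro.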
> > Thanks > Andreas > > From zonexo at gmail.com Sun Dec 25 05:27:07 2016 From: zonexo at gmail.com (TAY wee-beng) Date: Sun, 25 Dec 2016 19:27:07 +0800 Subject: [petsc-users] unsubscribe Message-ID: <0a17f1b2-45d6-3b02-593e-798fd1de74c0@gmail.com> -- Thank you. Yours sincerely, TAY wee-beng From zonexo at gmail.com Tue Dec 27 02:02:40 2016 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 27 Dec 2016 16:02:40 +0800 Subject: [petsc-users] Problem linking code with MS MPI + PETSc Message-ID: Hi, I am now trying to use MS MPI after compiling with PETSc. I was using MPICH before. During linking, I got the error: 1>Compiling manifest to resources... 1>Microsoft (R) Windows (R) Resource Compiler Version 6.1.6723.1 1>Copyright (C) Microsoft Corporation. All rights reserved. 1>Linking... 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol MPIFCMB1 imported 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol MPIFCMB1 imported 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol MPIFCMB2 imported 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol MPIFCMB2 imported 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol MPIFCMB3 imported 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol MPIFCMB3 imported 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol MPIFCMB4 imported 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol MPIFCMB4 imported 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol MPIFCMB7 imported 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol MPIFCMB7 imported 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol MPIFCMB8 imported 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol MPIFCMB8 imported 1>libpetsc.lib(pinit.o) : error LNK2019: unresolved external symbol MPI_Reduce_scatter_block referenced in function PetscMaxSum 1>libpetsc.lib(mpits.o) : error LNK2001: unresolved external symbol MPI_Reduce_scatter_block 1>C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe : fatal error LNK1120: 1 unresolved externals 1> May I know what's wrong? -- Thank you Yours sincerely, TAY wee-beng From zonexo at gmail.com Tue Dec 27 02:24:03 2016 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 27 Dec 2016 16:24:03 +0800 Subject: [petsc-users] Fwd: Problem linking code with MS MPI + PETSc In-Reply-To: References: Message-ID: <2f0c30bf-07cd-75b5-e52a-4876cbf1e84f@gmail.com> Hi, Sorry I just realised that I was pointing to the wrong library. It's ok now. -------- Forwarded Message -------- Subject: Problem linking code with MS MPI + PETSc Date: Tue, 27 Dec 2016 16:02:40 +0800 From: TAY wee-beng To: PETSc list Hi, I am now trying to use MS MPI after compiling with PETSc. I was using MPICH before. During linking, I got the error: 1>Compiling manifest to resources... 1>Microsoft (R) Windows (R) Resource Compiler Version 6.1.6723.1 1>Copyright (C) Microsoft Corporation. All rights reserved. 1>Linking... 
1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol MPIFCMB1 imported 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol MPIFCMB1 imported 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol MPIFCMB2 imported 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol MPIFCMB2 imported 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol MPIFCMB3 imported 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol MPIFCMB3 imported 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol MPIFCMB4 imported 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol MPIFCMB4 imported 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol MPIFCMB7 imported 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol MPIFCMB7 imported 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol MPIFCMB8 imported 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol MPIFCMB8 imported 1>libpetsc.lib(pinit.o) : error LNK2019: unresolved external symbol MPI_Reduce_scatter_block referenced in function PetscMaxSum 1>libpetsc.lib(mpits.o) : error LNK2001: unresolved external symbol MPI_Reduce_scatter_block 1>C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe : fatal error LNK1120: 1 unresolved externals 1> May I know what's wrong? -- Thank you Yours sincerely, TAY wee-beng -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Tue Dec 27 02:38:03 2016 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 27 Dec 2016 16:38:03 +0800 Subject: [petsc-users] Problem with "If the actual argument is scalar, the dummy argument shall be scalar unless the actual argument is of type charac..." Message-ID: <979d2932-ae00-185c-3847-7474b5ca3b7f@gmail.com> Hi, I'm now trying to compile an old code and I got the error: "If the actual argument is scalar, the dummy argument shall be scalar unless the actual argument is of type charac... which happens at: call MatSetValues(A_mat_uv,1,II,1,int_impl(k,3),impl_mat_A,INSERT_VALUES,ierr) I remember having this error. However, I can't find my solution. I recall it's due to some changes in the new PETSc version. Can you help? -- Thank you Yours sincerely, TAY wee-beng From hongzhang at anl.gov Tue Dec 27 11:07:17 2016 From: hongzhang at anl.gov (Zhang, Hong) Date: Tue, 27 Dec 2016 17:07:17 +0000 Subject: [petsc-users] Problem with "If the actual argument is scalar, the dummy argument shall be scalar unless the actual argument is of type charac..." In-Reply-To: <979d2932-ae00-185c-3847-7474b5ca3b7f@gmail.com> References: <979d2932-ae00-185c-3847-7474b5ca3b7f@gmail.com> Message-ID: PetscInt IIA(1) IIA(1) = II call MatSetValues(A_mat_uv,1,IIA,1,int_impl(k,3),impl_mat_A,INSERT_VALUES,ierr) Hong (Mr) > On Dec 27, 2016, at 2:38 AM, TAY wee-beng wrote: > > Hi, > > I'm now trying to compile an old code and I got the error: > > "If the actual argument is scalar, the dummy argument shall be scalar unless the actual argument is of type charac... > > which happens at: > > call MatSetValues(A_mat_uv,1,II,1,int_impl(k,3),impl_mat_A,INSERT_VALUES,ierr) > > I remember having this error. However, I can't find my solution. I recall it's due to some changes in the new PETSc version. > > Can you help? 
> > > -- > Thank you > > Yours sincerely, > > TAY wee-beng > From balay at mcs.anl.gov Tue Dec 27 11:17:08 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 27 Dec 2016 11:17:08 -0600 Subject: [petsc-users] Problem with "If the actual argument is scalar, the dummy argument shall be scalar unless the actual argument is of type charac..." In-Reply-To: References: <979d2932-ae00-185c-3847-7474b5ca3b7f@gmail.com> Message-ID: Looks like this issue was previously discussed http://lists.mcs.anl.gov/pipermail/petsc-users/2015-October/027365.html Satish On Tue, 27 Dec 2016, Zhang, Hong wrote: > PetscInt IIA(1) > IIA(1) = II > call MatSetValues(A_mat_uv,1,IIA,1,int_impl(k,3),impl_mat_A,INSERT_VALUES,ierr) > > Hong (Mr) > > > On Dec 27, 2016, at 2:38 AM, TAY wee-beng wrote: > > > > Hi, > > > > I'm now trying to compile an old code and I got the error: > > > > "If the actual argument is scalar, the dummy argument shall be scalar unless the actual argument is of type charac... > > > > which happens at: > > > > call MatSetValues(A_mat_uv,1,II,1,int_impl(k,3),impl_mat_A,INSERT_VALUES,ierr) > > > > I remember having this error. However, I can't find my solution. I recall it's due to some changes in the new PETSc version. > > > > Can you help? > > > > > > -- > > Thank you > > > > Yours sincerely, > > > > TAY wee-beng > > > > From balay at mcs.anl.gov Tue Dec 27 11:19:08 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 27 Dec 2016 11:19:08 -0600 Subject: [petsc-users] Fwd: Problem linking code with MS MPI + PETSc In-Reply-To: <2f0c30bf-07cd-75b5-e52a-4876cbf1e84f@gmail.com> References: <2f0c30bf-07cd-75b5-e52a-4876cbf1e84f@gmail.com> Message-ID: Its good to always verify if a petsc example with makefile works [or reproduces the same error] If the petsc example with petsc makefile works - then you would look at the differences bettween this compile - and yours.. Satish On Tue, 27 Dec 2016, TAY wee-beng wrote: > Hi, > > Sorry I just realised that I was pointing to the wrong library. It's ok now. > > > > -------- Forwarded Message -------- > Subject: Problem linking code with MS MPI + PETSc > Date: Tue, 27 Dec 2016 16:02:40 +0800 > From: TAY wee-beng > To: PETSc list > > > > Hi, > > I am now trying to use MS MPI after compiling with PETSc. I was using > MPICH before. > > During linking, I got the error: > > 1>Compiling manifest to resources... > 1>Microsoft (R) Windows (R) Resource Compiler Version 6.1.6723.1 > 1>Copyright (C) Microsoft Corporation. All rights reserved. > 1>Linking... 
> 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol > MPIFCMB1 imported > 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol > MPIFCMB1 imported > 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol > MPIFCMB2 imported > 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol > MPIFCMB2 imported > 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol > MPIFCMB3 imported > 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol > MPIFCMB3 imported > 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol > MPIFCMB4 imported > 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol > MPIFCMB4 imported > 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol > MPIFCMB7 imported > 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol > MPIFCMB7 imported > 1>libpetsc.lib(somefort.o) : warning LNK4049: locally defined symbol > MPIFCMB8 imported > 1>libpetsc.lib(f90_fwrap.o) : warning LNK4049: locally defined symbol > MPIFCMB8 imported > 1>libpetsc.lib(pinit.o) : error LNK2019: unresolved external symbol > MPI_Reduce_scatter_block referenced in function PetscMaxSum > 1>libpetsc.lib(mpits.o) : error LNK2001: unresolved external symbol > MPI_Reduce_scatter_block > 1>C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe : fatal error > LNK1120: 1 unresolved externals > 1> > > > May I know what's wrong? > > > From zonexo at gmail.com Tue Dec 27 19:30:37 2016 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 28 Dec 2016 09:30:37 +0800 Subject: [petsc-users] Problem with "If the actual argument is scalar, the dummy argument shall be scalar unless the actual argument is of type charac..." In-Reply-To: References: <979d2932-ae00-185c-3847-7474b5ca3b7f@gmail.com> Message-ID: <172e4bd5-de09-a059-079b-af47fd57fe81@gmail.com> Thanks Satish and Hong! Also realised that besides IIA, impl_mat_A has to be an array too. Yours sincerely, TAY wee-beng On 28/12/2016 1:17 AM, Satish Balay wrote: > Looks like this issue was previously discussed > > http://lists.mcs.anl.gov/pipermail/petsc-users/2015-October/027365.html > > Satish > > On Tue, 27 Dec 2016, Zhang, Hong wrote: > >> PetscInt IIA(1) >> IIA(1) = II >> call MatSetValues(A_mat_uv,1,IIA,1,int_impl(k,3),impl_mat_A,INSERT_VALUES,ierr) >> >> Hong (Mr) >> >>> On Dec 27, 2016, at 2:38 AM, TAY wee-beng wrote: >>> >>> Hi, >>> >>> I'm now trying to compile an old code and I got the error: >>> >>> "If the actual argument is scalar, the dummy argument shall be scalar unless the actual argument is of type charac... >>> >>> which happens at: >>> >>> call MatSetValues(A_mat_uv,1,II,1,int_impl(k,3),impl_mat_A,INSERT_VALUES,ierr) >>> >>> I remember having this error. However, I can't find my solution. I recall it's due to some changes in the new PETSc version. >>> >>> Can you help? >>> >>> >>> -- >>> Thank you >>> >>> Yours sincerely, >>> >>> TAY wee-beng >>> >> From vijay.gopal.c at gmail.com Wed Dec 28 09:26:21 2016 From: vijay.gopal.c at gmail.com (Vijay Gopal Chilkuri) Date: Wed, 28 Dec 2016 16:26:21 +0100 Subject: [petsc-users] Benchmarking Message-ID: Dear developers, I'm doing exact diagonalization studies of some phenomenological model Hamiltonian. In this study I have to diagonalize large sparse matrices in Hilbert space of Slater determinants many times. I've successfully used PETSc + SLEPc to get few smallest eigenvalues. For example I've been able to diagonalize a matrix of rank *91454220* with 990 processors. 
This diagonalization took *15328.695847 *Sec (or *4.25* Hrs.) I have two questions: 1. Is this time reasonable, if not, is it possible to optimize further ? 2. I've tried a quick google search but could not find a comprehensive benchmarking of the SLEPc library for sparse matrix diagonalization. Could you point me to a publication/resource which has such a benchmarking ? Thanks for your help. PETSc Version: master branch commit: b33322e SLEPc Version: master branch commit: c596d1c Best, Vijay -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Dec 28 09:42:29 2016 From: jed at jedbrown.org (Jed Brown) Date: Wed, 28 Dec 2016 08:42:29 -0700 Subject: [petsc-users] Benchmarking In-Reply-To: References: Message-ID: <871swsf4be.fsf@jedbrown.org> Vijay Gopal Chilkuri writes: > Dear developers, > > I'm doing exact diagonalization studies of some phenomenological model > Hamiltonian. In this study I have to diagonalize large sparse matrices in > Hilbert space of Slater determinants many times. > > I've successfully used PETSc + SLEPc to get few smallest eigenvalues. > For example I've been able to diagonalize a matrix of rank *91454220* with > 990 processors. This diagonalization took *15328.695847 *Sec (or *4.25* > Hrs.) How sparse is your matrix, where does it come from (affects spectrum and thus convergence rate), how many eigenvalues did you request, and what preconditioner did you use? Sending the output of running with -eps_view -log_view is necessary to start understanding the performance. > I have two questions: > > 1. Is this time reasonable, if not, is it possible to optimize further ? > > 2. I've tried a quick google search but could not find a comprehensive > benchmarking of the SLEPc library for sparse matrix diagonalization. Could > you point me to a publication/resource which has such a benchmarking ? > > Thanks for your help. > > PETSc Version: master branch commit: b33322e > SLEPc Version: master branch commit: c596d1c > > Best, > Vijay -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 800 bytes Desc: not available URL: From hzhang at mcs.anl.gov Wed Dec 28 09:42:59 2016 From: hzhang at mcs.anl.gov (Hong) Date: Wed, 28 Dec 2016 09:42:59 -0600 Subject: [petsc-users] Benchmarking In-Reply-To: References: Message-ID: Vijay: The performance of eigenvalue computation depends on many factors - matrix features, location of eigenvalues, orthogonalization of eigenvectors - how many eigensolutions do you compute, largest/smallest spectrum, accuracy - algorithms used - computer used ... > > > I'm doing exact diagonalization studies of some phenomenological model > Hamiltonian. In this study I have to diagonalize large sparse matrices in > Hilbert space of Slater determinants many times. > Why do you carry out these experiments? For solving this type of problem, I would suggest searching related research publications and compare your results. > > I've successfully used PETSc + SLEPc to get few smallest eigenvalues. > For example I've been able to diagonalize a matrix of rank *91454220* > with 990 processors. This diagonalization took *15328.695847 *Sec (or > *4.25* Hrs.) > The matrix size 91M is quite amazing. Hong > > I have two questions: > > 1. Is this time reasonable, if not, is it possible to optimize further ? > > 2. 
I've tried a quick google search but could not find a comprehensive > benchmarking of the SLEPc library for sparse matrix diagonalization. Could > you point me to a publication/resource which has such a benchmarking ? > > Thanks for your help. > > PETSc Version: master branch commit: b33322e > SLEPc Version: master branch commit: c596d1c > > Best, > Vijay > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Wed Dec 28 09:44:28 2016 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 28 Dec 2016 16:44:28 +0100 Subject: [petsc-users] Benchmarking In-Reply-To: References: Message-ID: <9F17F484-BAAE-48BB-9E9F-93AC849BAE2F@dsic.upv.es> > El 28 dic 2016, a las 16:26, Vijay Gopal Chilkuri escribi?: > > Dear developers, > > I'm doing exact diagonalization studies of some phenomenological model Hamiltonian. In this study I have to diagonalize large sparse matrices in Hilbert space of Slater determinants many times. > > I've successfully used PETSc + SLEPc to get few smallest eigenvalues. > For example I've been able to diagonalize a matrix of rank 91454220 with 990 processors. This diagonalization took 15328.695847 Sec (or 4.25 Hrs.) > > I have two questions: > > 1. Is this time reasonable, if not, is it possible to optimize further ? It depends on how many eigenvalues are being computed. If computing more than 1000 eigenvalues it is very important to set the mpd parameter, see section 2.6.5. > > 2. I've tried a quick google search but could not find a comprehensive benchmarking of the SLEPc library for sparse matrix diagonalization. Could you point me to a publication/resource which has such a benchmarking ? Some papers in the list of applications have performance results. http://slepc.upv.es/material/appli.htm See for instance [Moran et al 2011] for results up to 2048 cores. See also [Steiger et al 2011]. Jose > > Thanks for your help. > > PETSc Version: master branch commit: b33322e > SLEPc Version: master branch commit: c596d1c > > Best, > Vijay > > From vijay.gopal.c at gmail.com Wed Dec 28 10:01:40 2016 From: vijay.gopal.c at gmail.com (Vijay Gopal Chilkuri) Date: Wed, 28 Dec 2016 17:01:40 +0100 Subject: [petsc-users] Benchmarking In-Reply-To: <871swsf4be.fsf@jedbrown.org> References: <871swsf4be.fsf@jedbrown.org> Message-ID: Dear Jed, I will try to answer your questions. I've started a calculation with the -eps_view and -log_view and send you the output once it is done. On Wed, Dec 28, 2016 at 4:42 PM, Jed Brown wrote: > Vijay Gopal Chilkuri writes: > > > How sparse is your matrix, where does it come from (affects spectrum and > thus convergence rate), The matrix is a variant of the Double Exchange model Hamiltonian. It has at most 48 non-zero elements per row. > how many eigenvalues did you request, and what > preconditioner did you use? I requested for 2 eigenvalues. I used the krylovshur solver from the SLEPc package with the default preconditioner. I'm curious about the preconditioner thing. Can you suggest some suitable for my system ? Thanks a lot ! Vijay > Sending the output of running with > -eps_view -log_view is necessary to start understanding the performance. > > > I have two questions: > > > > 1. Is this time reasonable, if not, is it possible to optimize further ? > > > > 2. I've tried a quick google search but could not find a comprehensive > > benchmarking of the SLEPc library for sparse matrix diagonalization. > Could > > you point me to a publication/resource which has such a benchmarking ? 
> > > > Thanks for your help. > > > > PETSc Version: master branch commit: b33322e > > SLEPc Version: master branch commit: c596d1c > > > > Best, > > Vijay > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vijay.gopal.c at gmail.com Wed Dec 28 10:12:15 2016 From: vijay.gopal.c at gmail.com (Vijay Gopal Chilkuri) Date: Wed, 28 Dec 2016 17:12:15 +0100 Subject: [petsc-users] Benchmarking In-Reply-To: References: Message-ID: Dear Hong, On Wed, Dec 28, 2016 at 4:42 PM, Hong wrote: > Vijay: > The performance of eigenvalue computation depends on many factors > - matrix features, location of eigenvalues, orthogonalization of > eigenvectors > - how many eigensolutions do you compute, largest/smallest spectrum, > accuracy > - algorithms used > - computer used ... > I've used the krylovshur solver from SLEPc. I've asked for two lowest roots within the 1e-10 error bar. The matrix has at most 48 nonzero elements per row. Here are some details about the cluster: Processor: Intel(r) IVYBRIDGE 2,8 Ghz 10 (bisocket) Ram : 64Gb Interconnection: Infiniband (Full Data Rate ~ 6.89Gb/s) > >> >> I'm doing exact diagonalization studies of some phenomenological model >> Hamiltonian. In this study I have to diagonalize large sparse matrices in >> Hilbert space of Slater determinants many times. >> > Why do you carry out these experiments? For solving this type of problem, > I would suggest searching related research publications and compare your > results. > I'm using a variant of the traditional Double Exchange Hamiltonian. I'm interested in a specific part of the parameter space which is not fully explored in the literature. In this region the low energy spectrum is unusually dense (thus the exact diagonalization technique.) To my knowledge such a set of parameters has not been explored before. Hope this answers your question... Thanks, Vijay >> I've successfully used PETSc + SLEPc to get few smallest eigenvalues. >> For example I've been able to diagonalize a matrix of rank *91454220* >> with 990 processors. This diagonalization took *15328.695847 *Sec (or >> *4.25* Hrs.) >> > > The matrix size 91M is quite amazing. > > Hong > >> >> I have two questions: >> >> 1. Is this time reasonable, if not, is it possible to optimize further ? >> >> 2. I've tried a quick google search but could not find a comprehensive >> benchmarking of the SLEPc library for sparse matrix diagonalization. Could >> you point me to a publication/resource which has such a benchmarking ? >> >> Thanks for your help. >> >> PETSc Version: master branch commit: b33322e >> SLEPc Version: master branch commit: c596d1c >> >> Best, >> Vijay >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vijay.gopal.c at gmail.com Wed Dec 28 10:14:12 2016 From: vijay.gopal.c at gmail.com (Vijay Gopal Chilkuri) Date: Wed, 28 Dec 2016 17:14:12 +0100 Subject: [petsc-users] Benchmarking In-Reply-To: <9F17F484-BAAE-48BB-9E9F-93AC849BAE2F@dsic.upv.es> References: <9F17F484-BAAE-48BB-9E9F-93AC849BAE2F@dsic.upv.es> Message-ID: Dear Jose, Thanks for the references. I'm only calculation the smallest 10-100 eigenvalues using the krylovshur algorithm. How important is it to use a preconditioner ? My matrix has at most 48 nonzero elements per row. Best, Vijay On Wed, Dec 28, 2016 at 4:44 PM, Jose E. 
Roman wrote: > > > El 28 dic 2016, a las 16:26, Vijay Gopal Chilkuri < > vijay.gopal.c at gmail.com> escribi?: > > > > Dear developers, > > > > I'm doing exact diagonalization studies of some phenomenological model > Hamiltonian. In this study I have to diagonalize large sparse matrices in > Hilbert space of Slater determinants many times. > > > > I've successfully used PETSc + SLEPc to get few smallest eigenvalues. > > For example I've been able to diagonalize a matrix of rank 91454220 with > 990 processors. This diagonalization took 15328.695847 Sec (or 4.25 Hrs.) > > > > I have two questions: > > > > 1. Is this time reasonable, if not, is it possible to optimize further ? > > It depends on how many eigenvalues are being computed. If computing more > than 1000 eigenvalues it is very important to set the mpd parameter, see > section 2.6.5. > > > > > 2. I've tried a quick google search but could not find a comprehensive > benchmarking of the SLEPc library for sparse matrix diagonalization. Could > you point me to a publication/resource which has such a benchmarking ? > > Some papers in the list of applications have performance results. > http://slepc.upv.es/material/appli.htm > See for instance [Moran et al 2011] for results up to 2048 cores. See also > [Steiger et al 2011]. > > Jose > > > > > > Thanks for your help. > > > > PETSc Version: master branch commit: b33322e > > SLEPc Version: master branch commit: c596d1c > > > > Best, > > Vijay > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Wed Dec 28 10:21:07 2016 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 28 Dec 2016 17:21:07 +0100 Subject: [petsc-users] Benchmarking In-Reply-To: References: <9F17F484-BAAE-48BB-9E9F-93AC849BAE2F@dsic.upv.es> Message-ID: <2EC847A7-690A-4FFE-BF22-BABBEC8DFEDF@dsic.upv.es> > El 28 dic 2016, a las 17:14, Vijay Gopal Chilkuri escribi?: > > Dear Jose, > > Thanks for the references. > > I'm only calculation the smallest 10-100 eigenvalues using the krylovshur algorithm. > > How important is it to use a preconditioner ? My matrix has at most 48 nonzero elements per row. > > With Krylov-Schur you don't need a preconditioner if computing smallest-real eigenvalues (which I think is your case). Preconditioners are relevant only for interior eigenvalues (chapter 3 of the users guide). Jose From bikash at umich.edu Wed Dec 28 14:50:29 2016 From: bikash at umich.edu (Bikash Kanungo) Date: Wed, 28 Dec 2016 15:50:29 -0500 Subject: [petsc-users] Shell matrix in BVOrthogonalize Message-ID: Hi, I have been using BVOrthogonalize in SLEPc with a symmetric positive definite matrix (say B) in my BVSetMatrix. I was wondering if I can create a shell matrix for B and use it to overload the MatMult operations. If yes, what MATOPs like MATOP_MULT, MATOP_MULT_TRANSPOSE, MATOP_GET_DIAGONAL should I provide? Thanks, Bikash -- Bikash S. Kanungo PhD Student Computational Materials Physics Group Mechanical Engineering University of Michigan -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Wed Dec 28 15:06:29 2016 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 28 Dec 2016 22:06:29 +0100 Subject: [petsc-users] Shell matrix in BVOrthogonalize In-Reply-To: References: Message-ID: > El 28 dic 2016, a las 21:50, Bikash Kanungo escribi?: > > Hi, > > I have been using BVOrthogonalize in SLEPc with a symmetric positive definite matrix (say B) in my BVSetMatrix. 
I was wondering if I can create a shell matrix for B and use it to overload the MatMult operations. If yes, what MATOPs like MATOP_MULT, MATOP_MULT_TRANSPOSE, MATOP_GET_DIAGONAL should I provide? > > Thanks, > Bikash > Yes, it should be enough to provide MATOP_MULT. Jose From bikash at umich.edu Wed Dec 28 15:24:26 2016 From: bikash at umich.edu (Bikash Kanungo) Date: Wed, 28 Dec 2016 16:24:26 -0500 Subject: [petsc-users] Shell matrix in BVOrthogonalize In-Reply-To: References: Message-ID: Thank you so much, Jose. On Wed, Dec 28, 2016 at 4:06 PM, Jose E. Roman wrote: > > > El 28 dic 2016, a las 21:50, Bikash Kanungo escribi?: > > > > Hi, > > > > I have been using BVOrthogonalize in SLEPc with a symmetric positive > definite matrix (say B) in my BVSetMatrix. I was wondering if I can create > a shell matrix for B and use it to overload the MatMult operations. If yes, > what MATOPs like MATOP_MULT, MATOP_MULT_TRANSPOSE, MATOP_GET_DIAGONAL > should I provide? > > > > Thanks, > > Bikash > > > > Yes, it should be enough to provide MATOP_MULT. > Jose > > > -- Bikash S. Kanungo PhD Student Computational Materials Physics Group Mechanical Engineering University of Michigan -------------- next part -------------- An HTML attachment was scrubbed... URL: From sb020287 at gmail.com Wed Dec 28 15:49:22 2016 From: sb020287 at gmail.com (Somdeb Bandopadhyay) Date: Thu, 29 Dec 2016 05:49:22 +0800 Subject: [petsc-users] Interaction between multiple DMDA grid Message-ID: Hi all, I am trying to modify a single block incompressible solver for flow over open cavity. One way I could do it, is to set part of the domain as flagged and apply velocity field=0 all over it. But it does not seem to be appropriate way (I am wasting alot of grid, just to have a small cavity). So I was wondering, is there any example to interact between two DMDA grids? I am using petsc to solve the poisson equation so if I can just create two set of grid (say DA_grid1 and DA_grid2) and have a interface between them, my problem would be alot more easier. Thanks a lot in advance. Somdeb -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Dec 28 19:37:51 2016 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 28 Dec 2016 19:37:51 -0600 Subject: [petsc-users] Interaction between multiple DMDA grid In-Reply-To: References: Message-ID: <83646EAD-3A1E-4778-96E6-1793CF9F2924@mcs.anl.gov> Sadly this type of situation is not as easy to handle as one would hope. What you suggest is possible, there are basically two different approaches 1) have both DA live on all processes 2) Create a subcommunicator for DAtop and its complement for DAcavity. In either case I would suggest using the "ghost" boundary conditions for the top of the DAcavity and the bottom of the DAtop and use two VecScatters to communicate from/to them and the regular parts of the domain. Then essentially ghost point updates are done with DAGlobalToLocal() on each DA and the VecScatters. For parts of DATop that are not above the DACavity you would just ignore the "extra" ghost locations. I would start by figuring out how to do this and not yet worry about Jacobians etc. Just be able to go from global to local on the composite grid vector to the local vectors. Use a very small number of points and run starting on one process. I would use 1) not 2) since it is simpler to understand. 
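Following this outline, a very small serial sketch of approach 1) could look like the code below. The grid sizes, the assumption that the cavity opening sits under columns 2..5 of the channel, and all names are invented for illustration; only the mechanism -- DM_BOUNDARY_GHOSTED on the shared edge plus an extra VecScatter that fills one grid's ghost row from the other grid's global vector -- follows the description in this reply.

  #include <petscdmda.h>

  int main(int argc, char **argv)
  {
    DM             datop, dacav;            /* channel above, cavity below        */
    Vec            gtop, gcav, lcav;        /* global vectors + cavity local form */
    VecScatter     top2cav;
    IS             istop, iscav;
    PetscInt       i, ni = 4, idxtop[4], idxcav[4];
    PetscInt       gxs, gys, gxm, gym, Mtop = 8, Ntop = 4, Mcav = 4, Ncav = 3, i0 = 2;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

    /* Channel: ghosted in y so its local form has room below its bottom row */
    ierr = DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_GHOSTED,
                        DMDA_STENCIL_STAR, Mtop, Ntop, PETSC_DECIDE, PETSC_DECIDE,
                        1, 1, NULL, NULL, &datop);CHKERRQ(ierr);
    /* Cavity: ghosted in y so its local form has room above its top row */
    ierr = DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_GHOSTED,
                        DMDA_STENCIL_STAR, Mcav, Ncav, PETSC_DECIDE, PETSC_DECIDE,
                        1, 1, NULL, NULL, &dacav);CHKERRQ(ierr);
    ierr = DMCreateGlobalVector(datop, &gtop);CHKERRQ(ierr);
    ierr = DMCreateGlobalVector(dacav, &gcav);CHKERRQ(ierr);
    ierr = DMCreateLocalVector(dacav, &lcav);CHKERRQ(ierr);

    /* On one process the global ordering of a dof=1 DMDA is the natural one,
       so entry (i,j) of the channel is j*Mtop+i; its bottom row (j=0) above
       the cavity covers columns i0..i0+Mcav-1.  The matching targets are the
       cavity's top ghost row (j=Ncav) in its local (ghosted) vector.          */
    ierr = DMDAGetGhostCorners(dacav, &gxs, &gys, NULL, &gxm, &gym, NULL);CHKERRQ(ierr);
    for (i = 0; i < ni; i++) {
      idxtop[i] = i0 + i;                          /* = j*Mtop+i with j=0       */
      idxcav[i] = (Ncav - gys)*gxm + (i - gxs);    /* local index of ghost row  */
    }
    ierr = ISCreateGeneral(PETSC_COMM_SELF, ni, idxtop, PETSC_COPY_VALUES, &istop);CHKERRQ(ierr);
    ierr = ISCreateGeneral(PETSC_COMM_SELF, ni, idxcav, PETSC_COPY_VALUES, &iscav);CHKERRQ(ierr);
    ierr = VecScatterCreate(gtop, istop, lcav, iscav, &top2cav);CHKERRQ(ierr);

    /* A "ghost update" of the cavity is then the usual DM update ...           */
    ierr = DMGlobalToLocalBegin(dacav, gcav, INSERT_VALUES, lcav);CHKERRQ(ierr);
    ierr = DMGlobalToLocalEnd(dacav, gcav, INSERT_VALUES, lcav);CHKERRQ(ierr);
    /* ... plus the extra scatter that fills its top ghost row from the channel */
    ierr = VecScatterBegin(top2cav, gtop, lcav, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);
    ierr = VecScatterEnd(top2cav, gtop, lcav, INSERT_VALUES, SCATTER_FORWARD);CHKERRQ(ierr);

    ierr = VecScatterDestroy(&top2cav);CHKERRQ(ierr);
    ierr = ISDestroy(&istop);CHKERRQ(ierr);
    ierr = ISDestroy(&iscav);CHKERRQ(ierr);
    ierr = VecDestroy(&gtop);CHKERRQ(ierr);
    ierr = VecDestroy(&gcav);CHKERRQ(ierr);
    ierr = VecDestroy(&lcav);CHKERRQ(ierr);
    ierr = DMDestroy(&datop);CHKERRQ(ierr);
    ierr = DMDestroy(&dacav);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }

The reverse coupling (the channel's bottom ghost row filled from the cavity's top interior row) is built with a second scatter in the same way, and in parallel each process would contribute only the interface indices it owns.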
Good luck, Barry > On Dec 28, 2016, at 3:49 PM, Somdeb Bandopadhyay wrote: > > Hi all, > I am trying to modify a single block incompressible solver for flow over open cavity. One way I could do it, is to set part of the domain as flagged and apply velocity field=0 all over it. But it does not seem to be appropriate way (I am wasting alot of grid, just to have a small cavity). So I was wondering, is there any example to interact between two DMDA grids? I am using petsc to solve the poisson equation so if I can just create two set of grid (say DA_grid1 and DA_grid2) and have a interface between them, my problem would be alot more easier. > > Thanks a lot in advance. > > Somdeb From sb020287 at gmail.com Wed Dec 28 20:00:58 2016 From: sb020287 at gmail.com (Somdeb Bandopadhyay) Date: Thu, 29 Dec 2016 10:00:58 +0800 Subject: [petsc-users] Interaction between multiple DMDA grid In-Reply-To: <83646EAD-3A1E-4778-96E6-1793CF9F2924@mcs.anl.gov> References: <83646EAD-3A1E-4778-96E6-1793CF9F2924@mcs.anl.gov> Message-ID: Hi Sir, Thanks a lot for the clarification. I will give it a try, let's see how things go. Somdeb On Thu, Dec 29, 2016 at 9:37 AM, Barry Smith wrote: > > Sadly this type of situation is not as easy to handle as one would > hope. What you suggest is possible, there are basically two different > approaches > > 1) have both DA live on all processes > > 2) Create a subcommunicator for DAtop and its complement for DAcavity. > > In either case I would suggest using the "ghost" boundary conditions for > the top of the DAcavity and the bottom of the DAtop and use two > VecScatters to communicate from/to them and the regular parts of the > domain. Then essentially ghost point updates are done with > DAGlobalToLocal() on each DA and the VecScatters. For parts of DATop that > are not above the DACavity you would just ignore the "extra" ghost > locations. > > I would start by figuring out how to do this and not yet worry about > Jacobians etc. Just be able to go from global to local on the composite > grid vector to the local vectors. Use a very small number of points and run > starting on one process. I would use 1) not 2) since it is simpler to > understand. > > Good luck, > > Barry > > > > On Dec 28, 2016, at 3:49 PM, Somdeb Bandopadhyay > wrote: > > > > Hi all, > > I am trying to modify a single block incompressible solver for flow > over open cavity. One way I could do it, is to set part of the domain as > flagged and apply velocity field=0 all over it. But it does not seem to be > appropriate way (I am wasting alot of grid, just to have a small cavity). > So I was wondering, is there any example to interact between two DMDA > grids? I am using petsc to solve the poisson equation so if I can just > create two set of grid (say DA_grid1 and DA_grid2) and have a interface > between them, my problem would be alot more easier. > > > > Thanks a lot in advance. > > > > Somdeb > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From Eric.Chamberland at giref.ulaval.ca Sat Dec 31 09:53:57 2016 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Sat, 31 Dec 2016 10:53:57 -0500 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) Message-ID: Hi, I am just starting to debug a bug encountered with and only with SuperLU_Dist combined with MKL on a 2 processes validation test. (the same test works fine with MUMPS on 2 processes). I just noticed that the SuperLU_Dist version installed by PETSc configure script is 5.1.0 and the latest SuperLU_DIST is 5.1.3. Before going further, I just want to ask: Is there any specific reason to stick to 5.1.0? Here is some more information: On process 2 I have this printed in stdout: Intel MKL ERROR: Parameter 6 was incorrect on entry to DTRSM . and in stderr: Test.ProblemeEFGen.opt: malloc.c:2369: sysmalloc: Assertion `(old_top == (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) (old_size) >= (unsigned long)((((__builtin_offsetof (struct malloc_chunk, fd_nextsize))+((2 *(sizeof(size_t))) - 1)) & ~((2 *(sizeof(size_t))) - 1))) && ((old_top)->size & 0x1) && ((unsigned long) old_end & pagemask) == 0)' failed. [saruman:15771] *** Process received signal *** This is the 7th call to KSPSolve in the same execution. Here is the last KSPView: KSP Object:(o_slin) 2 MPI processes type: preonly maximum iterations=10000, initial guess is zero tolerances: relative=1e-05, absolute=1e-50, divergence=10000. left preconditioning using NONE norm type for convergence test PC Object:(o_slin) 2 MPI processes type: lu LU: out-of-place factorization tolerance for zero pivot 2.22045e-14 matrix ordering: natural factor fill ratio given 0., needed 0. 
Factored matrix follows: Mat Object: 2 MPI processes type: mpiaij rows=382, cols=382 package used to perform factorization: superlu_dist total: nonzeros=0, allocated nonzeros=0 total number of mallocs used during MatSetValues calls =0 SuperLU_DIST run parameters: Process grid nprow 2 x npcol 1 Equilibrate matrix TRUE Matrix input mode 1 Replace tiny pivots FALSE Use iterative refinement FALSE Processors in row 2 col partition 1 Row permutation LargeDiag Column permutation METIS_AT_PLUS_A Parallel symbolic factorization FALSE Repeated factorization SamePattern linear system matrix = precond matrix: Mat Object: (o_slin) 2 MPI processes type: mpiaij rows=382, cols=382 total: nonzeros=4458, allocated nonzeros=4458 total number of mallocs used during MatSetValues calls =0 using I-node (on process 0) routines: found 109 nodes, limit used is 5 I know this information is not enough to help debug, but I would like to know if PETSc guys will upgrade to 5.1.3 before trying to debug anything. Thanks, Eric From knepley at gmail.com Sat Dec 31 10:51:47 2016 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 31 Dec 2016 10:51:47 -0600 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) In-Reply-To: References: Message-ID: On Sat, Dec 31, 2016 at 9:53 AM, Eric Chamberland < Eric.Chamberland at giref.ulaval.ca> wrote: > Hi, > > I am just starting to debug a bug encountered with and only with > SuperLU_Dist combined with MKL on a 2 processes validation test. > > (the same test works fine with MUMPS on 2 processes). > > I just noticed that the SuperLU_Dist version installed by PETSc configure > script is 5.1.0 and the latest SuperLU_DIST is 5.1.3. > > Before going further, I just want to ask: > > Is there any specific reason to stick to 5.1.0? > Can you debug in 'master' which does have 5.1.3, including an important bug fix? Matt > > Here is some more information: > > On process 2 I have this printed in stdout: > > Intel MKL ERROR: Parameter 6 was incorrect on entry to DTRSM . > > and in stderr: > > Test.ProblemeEFGen.opt: malloc.c:2369: sysmalloc: Assertion `(old_top == > (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof > (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) > (old_size) >= (unsigned long)((((__builtin_offsetof (struct malloc_chunk, > fd_nextsize))+((2 *(sizeof(size_t))) - 1)) & ~((2 *(sizeof(size_t))) - 1))) > && ((old_top)->size & 0x1) && ((unsigned long) old_end & pagemask) == 0)' > failed. > [saruman:15771] *** Process received signal *** > > This is the 7th call to KSPSolve in the same execution. Here is the last > KSPView: > > KSP Object:(o_slin) 2 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > left preconditioning > using NONE norm type for convergence test > PC Object:(o_slin) 2 MPI processes > type: lu > LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: natural > factor fill ratio given 0., needed 0. 
> Factored matrix follows: > Mat Object: 2 MPI processes > type: mpiaij > rows=382, cols=382 > package used to perform factorization: superlu_dist > total: nonzeros=0, allocated nonzeros=0 > total number of mallocs used during MatSetValues calls =0 > SuperLU_DIST run parameters: > Process grid nprow 2 x npcol 1 > Equilibrate matrix TRUE > Matrix input mode 1 > Replace tiny pivots FALSE > Use iterative refinement FALSE > Processors in row 2 col partition 1 > Row permutation LargeDiag > Column permutation METIS_AT_PLUS_A > Parallel symbolic factorization FALSE > Repeated factorization SamePattern > linear system matrix = precond matrix: > Mat Object: (o_slin) 2 MPI processes > type: mpiaij > rows=382, cols=382 > total: nonzeros=4458, allocated nonzeros=4458 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 109 nodes, limit used is > 5 > > I know this information is not enough to help debug, but I would like to > know if PETSc guys will upgrade to 5.1.3 before trying to debug anything. > > Thanks, > Eric > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Sat Dec 31 10:52:47 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 31 Dec 2016 10:52:47 -0600 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) In-Reply-To: References: Message-ID: On Sat, 31 Dec 2016, Eric Chamberland wrote: > Hi, > > I am just starting to debug a bug encountered with and only with SuperLU_Dist > combined with MKL on a 2 processes validation test. > > (the same test works fine with MUMPS on 2 processes). > > I just noticed that the SuperLU_Dist version installed by PETSc configure > script is 5.1.0 and the latest SuperLU_DIST is 5.1.3. If you use petsc-master - it will install 5.1.3 by default. > > Before going further, I just want to ask: > > Is there any specific reason to stick to 5.1.0? We don't usually upgrade externalpackage version in PETSc releases [unless its tested to work and fixes known bugs]. There could be API changes - or build changes that can potentially conflict. >From what I know - 5.1.3 should work with petsc-3.7 [it fixes a couple of bugs]. You might be able to do the following with petsc-3.7 [with git externalpackage repos] --download-superlu_dist --download-superlu_dit-commit=v5.1.3 Satish > Here is some more information: > > On process 2 I have this printed in stdout: > > Intel MKL ERROR: Parameter 6 was incorrect on entry to DTRSM . > > and in stderr: > > Test.ProblemeEFGen.opt: malloc.c:2369: sysmalloc: Assertion `(old_top == > (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof > (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) (old_size) > >= (unsigned long)((((__builtin_offsetof (struct malloc_chunk, > fd_nextsize))+((2 *(sizeof(size_t))) - 1)) & ~((2 *(sizeof(size_t))) - 1))) && > ((old_top)->size & 0x1) && ((unsigned long) old_end & pagemask) == 0)' failed. > [saruman:15771] *** Process received signal *** > > This is the 7th call to KSPSolve in the same execution. Here is the last > KSPView: > > KSP Object:(o_slin) 2 MPI processes > type: preonly > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
> left preconditioning > using NONE norm type for convergence test > PC Object:(o_slin) 2 MPI processes > type: lu > LU: out-of-place factorization > tolerance for zero pivot 2.22045e-14 > matrix ordering: natural > factor fill ratio given 0., needed 0. > Factored matrix follows: > Mat Object: 2 MPI processes > type: mpiaij > rows=382, cols=382 > package used to perform factorization: superlu_dist > total: nonzeros=0, allocated nonzeros=0 > total number of mallocs used during MatSetValues calls =0 > SuperLU_DIST run parameters: > Process grid nprow 2 x npcol 1 > Equilibrate matrix TRUE > Matrix input mode 1 > Replace tiny pivots FALSE > Use iterative refinement FALSE > Processors in row 2 col partition 1 > Row permutation LargeDiag > Column permutation METIS_AT_PLUS_A > Parallel symbolic factorization FALSE > Repeated factorization SamePattern > linear system matrix = precond matrix: > Mat Object: (o_slin) 2 MPI processes > type: mpiaij > rows=382, cols=382 > total: nonzeros=4458, allocated nonzeros=4458 > total number of mallocs used during MatSetValues calls =0 > using I-node (on process 0) routines: found 109 nodes, limit used is 5 > > I know this information is not enough to help debug, but I would like to know > if PETSc guys will upgrade to 5.1.3 before trying to debug anything. > > Thanks, > Eric > > From balay at mcs.anl.gov Sat Dec 31 11:17:18 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 31 Dec 2016 11:17:18 -0600 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) In-Reply-To: References: Message-ID: On Sat, 31 Dec 2016, Satish Balay wrote: > From what I know - 5.1.3 should work with petsc-3.7 [it fixes a couple of bugs]. ok - updated maint to use 5.1.3 Satish From Eric.Chamberland at giref.ulaval.ca Sat Dec 31 12:10:34 2016 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Sat, 31 Dec 2016 13:10:34 -0500 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) In-Reply-To: References: Message-ID: Hi, ok I will test with 5.1.3 with the option you gave me (--download-superlu_dit-commit=v5.1.3). But from what you and Matthew said, I should have 5.1.3 with petsc-master, but the last night log shows me library file name 5.1.0: http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2016.12.31.02h00m01s_configure.log So I am a bit confused: Why did I got 5.1.0 last night? (I use the petsc-master tarball, is it the reason?) Thanks, Eric Le 2016-12-31 ? 11:52, Satish Balay a ?crit : > On Sat, 31 Dec 2016, Eric Chamberland wrote: > >> Hi, >> >> I am just starting to debug a bug encountered with and only with SuperLU_Dist >> combined with MKL on a 2 processes validation test. >> >> (the same test works fine with MUMPS on 2 processes). >> >> I just noticed that the SuperLU_Dist version installed by PETSc configure >> script is 5.1.0 and the latest SuperLU_DIST is 5.1.3. > If you use petsc-master - it will install 5.1.3 by default. >> Before going further, I just want to ask: >> >> Is there any specific reason to stick to 5.1.0? > We don't usually upgrade externalpackage version in PETSc releases > [unless its tested to work and fixes known bugs]. There could be API > changes - or build changes that can potentially conflict. > > >From what I know - 5.1.3 should work with petsc-3.7 [it fixes a couple of bugs]. 
> > You might be able to do the following with petsc-3.7 [with git externalpackage repos] > > --download-superlu_dist --download-superlu_dit-commit=v5.1.3 > > Satish > >> Here is some more information: >> >> On process 2 I have this printed in stdout: >> >> Intel MKL ERROR: Parameter 6 was incorrect on entry to DTRSM . >> >> and in stderr: >> >> Test.ProblemeEFGen.opt: malloc.c:2369: sysmalloc: Assertion `(old_top == >> (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof >> (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) (old_size) >>> = (unsigned long)((((__builtin_offsetof (struct malloc_chunk, >> fd_nextsize))+((2 *(sizeof(size_t))) - 1)) & ~((2 *(sizeof(size_t))) - 1))) && >> ((old_top)->size & 0x1) && ((unsigned long) old_end & pagemask) == 0)' failed. >> [saruman:15771] *** Process received signal *** >> >> This is the 7th call to KSPSolve in the same execution. Here is the last >> KSPView: >> >> KSP Object:(o_slin) 2 MPI processes >> type: preonly >> maximum iterations=10000, initial guess is zero >> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >> left preconditioning >> using NONE norm type for convergence test >> PC Object:(o_slin) 2 MPI processes >> type: lu >> LU: out-of-place factorization >> tolerance for zero pivot 2.22045e-14 >> matrix ordering: natural >> factor fill ratio given 0., needed 0. >> Factored matrix follows: >> Mat Object: 2 MPI processes >> type: mpiaij >> rows=382, cols=382 >> package used to perform factorization: superlu_dist >> total: nonzeros=0, allocated nonzeros=0 >> total number of mallocs used during MatSetValues calls =0 >> SuperLU_DIST run parameters: >> Process grid nprow 2 x npcol 1 >> Equilibrate matrix TRUE >> Matrix input mode 1 >> Replace tiny pivots FALSE >> Use iterative refinement FALSE >> Processors in row 2 col partition 1 >> Row permutation LargeDiag >> Column permutation METIS_AT_PLUS_A >> Parallel symbolic factorization FALSE >> Repeated factorization SamePattern >> linear system matrix = precond matrix: >> Mat Object: (o_slin) 2 MPI processes >> type: mpiaij >> rows=382, cols=382 >> total: nonzeros=4458, allocated nonzeros=4458 >> total number of mallocs used during MatSetValues calls =0 >> using I-node (on process 0) routines: found 109 nodes, limit used is 5 >> >> I know this information is not enough to help debug, but I would like to know >> if PETSc guys will upgrade to 5.1.3 before trying to debug anything. >> >> Thanks, >> Eric >> >> From knepley at gmail.com Sat Dec 31 12:14:35 2016 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 31 Dec 2016 12:14:35 -0600 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) In-Reply-To: References: Message-ID: On Sat, Dec 31, 2016 at 12:10 PM, Eric Chamberland < Eric.Chamberland at giref.ulaval.ca> wrote: > Hi, > > ok I will test with 5.1.3 with the option you gave me > (--download-superlu_dit-commit=v5.1.3). > > But from what you and Matthew said, I should have 5.1.3 with petsc-master, > but the last night log shows me library file name 5.1.0: > > http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2016 > .12.31.02h00m01s_configure.log > > So I am a bit confused: Why did I got 5.1.0 last night? (I use the > petsc-master tarball, is it the reason?) > We do not automatically upgrade the version of dependent packages. You have to delete them and reconfigure if you want us to download the new thing. Matt > Thanks, > > Eric > > > Le 2016-12-31 ? 
11:52, Satish Balay a ?crit : > >> On Sat, 31 Dec 2016, Eric Chamberland wrote: >> >> Hi, >>> >>> I am just starting to debug a bug encountered with and only with >>> SuperLU_Dist >>> combined with MKL on a 2 processes validation test. >>> >>> (the same test works fine with MUMPS on 2 processes). >>> >>> I just noticed that the SuperLU_Dist version installed by PETSc configure >>> script is 5.1.0 and the latest SuperLU_DIST is 5.1.3. >>> >> If you use petsc-master - it will install 5.1.3 by default. >> >>> Before going further, I just want to ask: >>> >>> Is there any specific reason to stick to 5.1.0? >>> >> We don't usually upgrade externalpackage version in PETSc releases >> [unless its tested to work and fixes known bugs]. There could be API >> changes - or build changes that can potentially conflict. >> >> >From what I know - 5.1.3 should work with petsc-3.7 [it fixes a couple >> of bugs]. >> >> You might be able to do the following with petsc-3.7 [with git >> externalpackage repos] >> >> --download-superlu_dist --download-superlu_dit-commit=v5.1.3 >> >> Satish >> >> Here is some more information: >>> >>> On process 2 I have this printed in stdout: >>> >>> Intel MKL ERROR: Parameter 6 was incorrect on entry to DTRSM . >>> >>> and in stderr: >>> >>> Test.ProblemeEFGen.opt: malloc.c:2369: sysmalloc: Assertion `(old_top == >>> (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof >>> (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) >>> (old_size) >>> >>>> = (unsigned long)((((__builtin_offsetof (struct malloc_chunk, >>>> >>> fd_nextsize))+((2 *(sizeof(size_t))) - 1)) & ~((2 *(sizeof(size_t))) - >>> 1))) && >>> ((old_top)->size & 0x1) && ((unsigned long) old_end & pagemask) == 0)' >>> failed. >>> [saruman:15771] *** Process received signal *** >>> >>> This is the 7th call to KSPSolve in the same execution. Here is the last >>> KSPView: >>> >>> KSP Object:(o_slin) 2 MPI processes >>> type: preonly >>> maximum iterations=10000, initial guess is zero >>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>> left preconditioning >>> using NONE norm type for convergence test >>> PC Object:(o_slin) 2 MPI processes >>> type: lu >>> LU: out-of-place factorization >>> tolerance for zero pivot 2.22045e-14 >>> matrix ordering: natural >>> factor fill ratio given 0., needed 0. >>> Factored matrix follows: >>> Mat Object: 2 MPI processes >>> type: mpiaij >>> rows=382, cols=382 >>> package used to perform factorization: superlu_dist >>> total: nonzeros=0, allocated nonzeros=0 >>> total number of mallocs used during MatSetValues calls =0 >>> SuperLU_DIST run parameters: >>> Process grid nprow 2 x npcol 1 >>> Equilibrate matrix TRUE >>> Matrix input mode 1 >>> Replace tiny pivots FALSE >>> Use iterative refinement FALSE >>> Processors in row 2 col partition 1 >>> Row permutation LargeDiag >>> Column permutation METIS_AT_PLUS_A >>> Parallel symbolic factorization FALSE >>> Repeated factorization SamePattern >>> linear system matrix = precond matrix: >>> Mat Object: (o_slin) 2 MPI processes >>> type: mpiaij >>> rows=382, cols=382 >>> total: nonzeros=4458, allocated nonzeros=4458 >>> total number of mallocs used during MatSetValues calls =0 >>> using I-node (on process 0) routines: found 109 nodes, limit used >>> is 5 >>> >>> I know this information is not enough to help debug, but I would like to >>> know >>> if PETSc guys will upgrade to 5.1.3 before trying to debug anything. 
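For readers following along, the KSPView quoted just above (KSPPREONLY with an LU preconditioner factored by SuperLU_DIST) is the kind of output produced by a setup along the following lines in the PETSc 3.7-era C API. This is only a minimal sketch: the function name is invented for illustration, A, b and x are assumed to already exist, and the (o_slin) options prefix seen in the view is omitted.

/* Minimal sketch: direct solve through SuperLU_DIST, matching the KSPView
 * quoted above (preonly + LU, factorization package superlu_dist). */
#include <petscksp.h>

PetscErrorCode SolveWithSuperLUDist(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp,KSPPREONLY);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverPackage(pc,MATSOLVERSUPERLU_DIST);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* allow -ksp_view and other
                                                   command-line overrides */
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  return 0;
}

Running with -ksp_view then prints a summary of the same form as the one quoted in this thread.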
>>> >>> Thanks, >>> Eric >>> >>> >>> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Sat Dec 31 12:17:27 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 31 Dec 2016 12:17:27 -0600 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) In-Reply-To: References: Message-ID: >>>>>>>> =============================================================================== Trying to download git://https://github.com/xiaoyeli/superlu_dist for SUPERLU_DIST =============================================================================== Executing: git clone https://github.com/xiaoyeli/superlu_dist /pmi/cmpbib/compilation_BIB_gcc_redhat_petsc-master_debug/COMPILE_AUTO/petsc-master-debug/arch-linux2-c-debug/externalpackages/git.superlu_dist stdout: Cloning into '/pmi/cmpbib/compilation_BIB_gcc_redhat_petsc-master_debug/COMPILE_AUTO/petsc-master-debug/arch-linux2-c-debug/externalpackages/git.superlu_dist'... Looking for SUPERLU_DIST at git.superlu_dist, hg.superlu_dist or a directory starting with ['superlu_dist'] Found a copy of SUPERLU_DIST in git.superlu_dist Executing: ['git', 'rev-parse', '--git-dir'] stdout: .git Executing: ['git', 'cat-file', '-e', 'v5.1.3^{commit}'] Executing: ['git', 'rev-parse', 'v5.1.3'] stdout: 7306f704c6c8d5113def649b76def3c8eb607690 Executing: ['git', 'stash'] stdout: No local changes to save Executing: ['git', 'clean', '-f', '-d', '-x'] Executing: ['git', 'checkout', '-f', '7306f704c6c8d5113def649b76def3c8eb607690'] <<<<<<<< Per log below - its using 5.1.3. Why did you think you got 5.1.0? Satish On Sat, 31 Dec 2016, Eric Chamberland wrote: > Hi, > > ok I will test with 5.1.3 with the option you gave me > (--download-superlu_dit-commit=v5.1.3). > > But from what you and Matthew said, I should have 5.1.3 with petsc-master, but > the last night log shows me library file name 5.1.0: > > http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2016.12.31.02h00m01s_configure.log > > So I am a bit confused: Why did I got 5.1.0 last night? (I use the > petsc-master tarball, is it the reason?) > > Thanks, > > Eric > > > Le 2016-12-31 ? 11:52, Satish Balay a ?crit : > > On Sat, 31 Dec 2016, Eric Chamberland wrote: > > > > > Hi, > > > > > > I am just starting to debug a bug encountered with and only with > > > SuperLU_Dist > > > combined with MKL on a 2 processes validation test. > > > > > > (the same test works fine with MUMPS on 2 processes). > > > > > > I just noticed that the SuperLU_Dist version installed by PETSc configure > > > script is 5.1.0 and the latest SuperLU_DIST is 5.1.3. > > If you use petsc-master - it will install 5.1.3 by default. > > > Before going further, I just want to ask: > > > > > > Is there any specific reason to stick to 5.1.0? > > We don't usually upgrade externalpackage version in PETSc releases > > [unless its tested to work and fixes known bugs]. There could be API > > changes - or build changes that can potentially conflict. > > > > >From what I know - 5.1.3 should work with petsc-3.7 [it fixes a couple of > > bugs]. 
> > > > You might be able to do the following with petsc-3.7 [with git > > externalpackage repos] > > > > --download-superlu_dist --download-superlu_dit-commit=v5.1.3 > > > > Satish > > > > > Here is some more information: > > > > > > On process 2 I have this printed in stdout: > > > > > > Intel MKL ERROR: Parameter 6 was incorrect on entry to DTRSM . > > > > > > and in stderr: > > > > > > Test.ProblemeEFGen.opt: malloc.c:2369: sysmalloc: Assertion `(old_top == > > > (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof > > > (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) > > > (old_size) > > > > = (unsigned long)((((__builtin_offsetof (struct malloc_chunk, > > > fd_nextsize))+((2 *(sizeof(size_t))) - 1)) & ~((2 *(sizeof(size_t))) - > > > 1))) && > > > ((old_top)->size & 0x1) && ((unsigned long) old_end & pagemask) == 0)' > > > failed. > > > [saruman:15771] *** Process received signal *** > > > > > > This is the 7th call to KSPSolve in the same execution. Here is the last > > > KSPView: > > > > > > KSP Object:(o_slin) 2 MPI processes > > > type: preonly > > > maximum iterations=10000, initial guess is zero > > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > > > left preconditioning > > > using NONE norm type for convergence test > > > PC Object:(o_slin) 2 MPI processes > > > type: lu > > > LU: out-of-place factorization > > > tolerance for zero pivot 2.22045e-14 > > > matrix ordering: natural > > > factor fill ratio given 0., needed 0. > > > Factored matrix follows: > > > Mat Object: 2 MPI processes > > > type: mpiaij > > > rows=382, cols=382 > > > package used to perform factorization: superlu_dist > > > total: nonzeros=0, allocated nonzeros=0 > > > total number of mallocs used during MatSetValues calls =0 > > > SuperLU_DIST run parameters: > > > Process grid nprow 2 x npcol 1 > > > Equilibrate matrix TRUE > > > Matrix input mode 1 > > > Replace tiny pivots FALSE > > > Use iterative refinement FALSE > > > Processors in row 2 col partition 1 > > > Row permutation LargeDiag > > > Column permutation METIS_AT_PLUS_A > > > Parallel symbolic factorization FALSE > > > Repeated factorization SamePattern > > > linear system matrix = precond matrix: > > > Mat Object: (o_slin) 2 MPI processes > > > type: mpiaij > > > rows=382, cols=382 > > > total: nonzeros=4458, allocated nonzeros=4458 > > > total number of mallocs used during MatSetValues calls =0 > > > using I-node (on process 0) routines: found 109 nodes, limit used > > > is 5 > > > > > > I know this information is not enough to help debug, but I would like to > > > know > > > if PETSc guys will upgrade to 5.1.3 before trying to debug anything. > > > > > > Thanks, > > > Eric > > > > > > > > From Eric.Chamberland at giref.ulaval.ca Sat Dec 31 12:26:47 2016 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Sat, 31 Dec 2016 13:26:47 -0500 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) In-Reply-To: References: Message-ID: <2ef21b36-fa1b-c4c9-a2c9-00ada423a0c7@giref.ulaval.ca> Ah ok, I see! Here look at the file name in the configure.log: Install the project... /usr/bin/cmake -P cmake_install.cmake -- Install configuration: "DEBUG" -- Installing: /opt/petsc-master_debug/lib/libsuperlu_dist.so.5.1.0 -- Installing: /opt/petsc-master_debug/lib/libsuperlu_dist.so.5 It is saying 5.1.0, but in fact you are right: it is 5.1.3 that is downloaded!!! :) And FWIW, the nighlty automatic compilation of PETSc starts within a brand new and empty directory each night... 
Thanks to both of you again! :) Eric Le 2016-12-31 ? 13:17, Satish Balay a ?crit : > =============================================================================== > Trying to download git://https://github.com/xiaoyeli/superlu_dist for SUPERLU_DIST > =============================================================================== > > Executing: git clone https://github.com/xiaoyeli/superlu_dist /pmi/cmpbib/compilation_BIB_gcc_redhat_petsc-master_debug/COMPILE_AUTO/petsc-master-debug/arch-linux2-c-debug/externalpackages/git.superlu_dist > stdout: Cloning into '/pmi/cmpbib/compilation_BIB_gcc_redhat_petsc-master_debug/COMPILE_AUTO/petsc-master-debug/arch-linux2-c-debug/externalpackages/git.superlu_dist'... > Looking for SUPERLU_DIST at git.superlu_dist, hg.superlu_dist or a directory starting with ['superlu_dist'] > Found a copy of SUPERLU_DIST in git.superlu_dist > Executing: ['git', 'rev-parse', '--git-dir'] > stdout: .git > Executing: ['git', 'cat-file', '-e', 'v5.1.3^{commit}'] > Executing: ['git', 'rev-parse', 'v5.1.3'] > stdout: 7306f704c6c8d5113def649b76def3c8eb607690 > Executing: ['git', 'stash'] > stdout: No local changes to save > Executing: ['git', 'clean', '-f', '-d', '-x'] > Executing: ['git', 'checkout', '-f', '7306f704c6c8d5113def649b76def3c8eb607690'] > <<<<<<<< > > Per log below - its using 5.1.3. Why did you think you got 5.1.0? > > Satish > > On Sat, 31 Dec 2016, Eric Chamberland wrote: > >> Hi, >> >> ok I will test with 5.1.3 with the option you gave me >> (--download-superlu_dit-commit=v5.1.3). >> >> But from what you and Matthew said, I should have 5.1.3 with petsc-master, but >> the last night log shows me library file name 5.1.0: >> >> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2016.12.31.02h00m01s_configure.log >> >> So I am a bit confused: Why did I got 5.1.0 last night? (I use the >> petsc-master tarball, is it the reason?) >> >> Thanks, >> >> Eric >> >> >> Le 2016-12-31 ? 11:52, Satish Balay a ?crit : >>> On Sat, 31 Dec 2016, Eric Chamberland wrote: >>> >>>> Hi, >>>> >>>> I am just starting to debug a bug encountered with and only with >>>> SuperLU_Dist >>>> combined with MKL on a 2 processes validation test. >>>> >>>> (the same test works fine with MUMPS on 2 processes). >>>> >>>> I just noticed that the SuperLU_Dist version installed by PETSc configure >>>> script is 5.1.0 and the latest SuperLU_DIST is 5.1.3. >>> If you use petsc-master - it will install 5.1.3 by default. >>>> Before going further, I just want to ask: >>>> >>>> Is there any specific reason to stick to 5.1.0? >>> We don't usually upgrade externalpackage version in PETSc releases >>> [unless its tested to work and fixes known bugs]. There could be API >>> changes - or build changes that can potentially conflict. >>> >>> >From what I know - 5.1.3 should work with petsc-3.7 [it fixes a couple of >>> bugs]. >>> >>> You might be able to do the following with petsc-3.7 [with git >>> externalpackage repos] >>> >>> --download-superlu_dist --download-superlu_dit-commit=v5.1.3 >>> >>> Satish >>> >>>> Here is some more information: >>>> >>>> On process 2 I have this printed in stdout: >>>> >>>> Intel MKL ERROR: Parameter 6 was incorrect on entry to DTRSM . 
>>>> >>>> and in stderr: >>>> >>>> Test.ProblemeEFGen.opt: malloc.c:2369: sysmalloc: Assertion `(old_top == >>>> (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof >>>> (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) >>>> (old_size) >>>>> = (unsigned long)((((__builtin_offsetof (struct malloc_chunk, >>>> fd_nextsize))+((2 *(sizeof(size_t))) - 1)) & ~((2 *(sizeof(size_t))) - >>>> 1))) && >>>> ((old_top)->size & 0x1) && ((unsigned long) old_end & pagemask) == 0)' >>>> failed. >>>> [saruman:15771] *** Process received signal *** >>>> >>>> This is the 7th call to KSPSolve in the same execution. Here is the last >>>> KSPView: >>>> >>>> KSP Object:(o_slin) 2 MPI processes >>>> type: preonly >>>> maximum iterations=10000, initial guess is zero >>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>>> left preconditioning >>>> using NONE norm type for convergence test >>>> PC Object:(o_slin) 2 MPI processes >>>> type: lu >>>> LU: out-of-place factorization >>>> tolerance for zero pivot 2.22045e-14 >>>> matrix ordering: natural >>>> factor fill ratio given 0., needed 0. >>>> Factored matrix follows: >>>> Mat Object: 2 MPI processes >>>> type: mpiaij >>>> rows=382, cols=382 >>>> package used to perform factorization: superlu_dist >>>> total: nonzeros=0, allocated nonzeros=0 >>>> total number of mallocs used during MatSetValues calls =0 >>>> SuperLU_DIST run parameters: >>>> Process grid nprow 2 x npcol 1 >>>> Equilibrate matrix TRUE >>>> Matrix input mode 1 >>>> Replace tiny pivots FALSE >>>> Use iterative refinement FALSE >>>> Processors in row 2 col partition 1 >>>> Row permutation LargeDiag >>>> Column permutation METIS_AT_PLUS_A >>>> Parallel symbolic factorization FALSE >>>> Repeated factorization SamePattern >>>> linear system matrix = precond matrix: >>>> Mat Object: (o_slin) 2 MPI processes >>>> type: mpiaij >>>> rows=382, cols=382 >>>> total: nonzeros=4458, allocated nonzeros=4458 >>>> total number of mallocs used during MatSetValues calls =0 >>>> using I-node (on process 0) routines: found 109 nodes, limit used >>>> is 5 >>>> >>>> I know this information is not enough to help debug, but I would like to >>>> know >>>> if PETSc guys will upgrade to 5.1.3 before trying to debug anything. >>>> >>>> Thanks, >>>> Eric >>>> >>>> >> From balay at mcs.anl.gov Sat Dec 31 12:28:03 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 31 Dec 2016 12:28:03 -0600 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) In-Reply-To: References: Message-ID: On Sat, 31 Dec 2016, Matthew Knepley wrote: > > We do not automatically upgrade the version of dependent packages. If git is installed - then configure prefers git repo - and that will get upgraded [or downgraded] automatically - based on gitcommit in configure [or commandline]. > You have to delete them and reconfigure if you want us to download > the new thing. This is true if configure used the tarball install for the externlapackage. Satish From balay at mcs.anl.gov Sat Dec 31 12:35:58 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 31 Dec 2016 12:35:58 -0600 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) In-Reply-To: <2ef21b36-fa1b-c4c9-a2c9-00ada423a0c7@giref.ulaval.ca> References: <2ef21b36-fa1b-c4c9-a2c9-00ada423a0c7@giref.ulaval.ca> Message-ID: Ah - ok. A bug in superlu_dist. Version string in CMakeLists.txt needs updating for every release.. set(VERSION_MAJOR "5") set(VERSION_MINOR "1") set(VERSION_BugFix "0") cc:ing Sherry. 
Satish On Sat, 31 Dec 2016, Eric Chamberland wrote: > Ah ok, I see! Here look at the file name in the configure.log: > > Install the project... > /usr/bin/cmake -P cmake_install.cmake > -- Install configuration: "DEBUG" > -- Installing: /opt/petsc-master_debug/lib/libsuperlu_dist.so.5.1.0 > -- Installing: /opt/petsc-master_debug/lib/libsuperlu_dist.so.5 > > It is saying 5.1.0, but in fact you are right: it is 5.1.3 that is > downloaded!!! :) > > And FWIW, the nighlty automatic compilation of PETSc starts within a brand new > and empty directory each night... > > Thanks to both of you again! :) > > Eric > > > Le 2016-12-31 ? 13:17, Satish Balay a ?crit : > > =============================================================================== > > Trying to download > > git://https://github.com/xiaoyeli/superlu_dist for > > SUPERLU_DIST > > =============================================================================== > > > > Executing: git clone https://github.com/xiaoyeli/superlu_dist > > Executing: /pmi/cmpbib/compilation_BIB_gcc_redhat_petsc-master_debug/COMPILE_AUTO/petsc-master-debug/arch-linux2-c-debug/externalpackages/git.superlu_dist > > stdout: Cloning into > > '/pmi/cmpbib/compilation_BIB_gcc_redhat_petsc-master_debug/COMPILE_AUTO/petsc-master-debug/arch-linux2-c-debug/externalpackages/git.superlu_dist'... > > Looking for SUPERLU_DIST at git.superlu_dist, > > hg.superlu_dist or a directory starting with > > ['superlu_dist'] > > Found a copy of SUPERLU_DIST in git.superlu_dist > > Executing: ['git', 'rev-parse', '--git-dir'] > > stdout: .git > > Executing: ['git', 'cat-file', '-e', 'v5.1.3^{commit}'] > > Executing: ['git', 'rev-parse', 'v5.1.3'] > > stdout: 7306f704c6c8d5113def649b76def3c8eb607690 > > Executing: ['git', 'stash'] > > stdout: No local changes to save > > Executing: ['git', 'clean', '-f', '-d', '-x'] > > Executing: ['git', 'checkout', '-f', > > Executing: '7306f704c6c8d5113def649b76def3c8eb607690'] > > <<<<<<<< > > > > Per log below - its using 5.1.3. Why did you think you got 5.1.0? > > > > Satish > > > > On Sat, 31 Dec 2016, Eric Chamberland wrote: > > > > > Hi, > > > > > > ok I will test with 5.1.3 with the option you gave me > > > (--download-superlu_dit-commit=v5.1.3). > > > > > > But from what you and Matthew said, I should have 5.1.3 with petsc-master, > > > but > > > the last night log shows me library file name 5.1.0: > > > > > > http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2016.12.31.02h00m01s_configure.log > > > > > > So I am a bit confused: Why did I got 5.1.0 last night? (I use the > > > petsc-master tarball, is it the reason?) > > > > > > Thanks, > > > > > > Eric > > > > > > > > > Le 2016-12-31 ? 11:52, Satish Balay a ?crit : > > > > On Sat, 31 Dec 2016, Eric Chamberland wrote: > > > > > > > > > Hi, > > > > > > > > > > I am just starting to debug a bug encountered with and only with > > > > > SuperLU_Dist > > > > > combined with MKL on a 2 processes validation test. > > > > > > > > > > (the same test works fine with MUMPS on 2 processes). > > > > > > > > > > I just noticed that the SuperLU_Dist version installed by PETSc > > > > > configure > > > > > script is 5.1.0 and the latest SuperLU_DIST is 5.1.3. > > > > If you use petsc-master - it will install 5.1.3 by default. > > > > > Before going further, I just want to ask: > > > > > > > > > > Is there any specific reason to stick to 5.1.0? > > > > We don't usually upgrade externalpackage version in PETSc releases > > > > [unless its tested to work and fixes known bugs]. 
There could be API > > > > changes - or build changes that can potentially conflict. > > > > > > > > >From what I know - 5.1.3 should work with petsc-3.7 [it fixes a couple > > > > of > > > > bugs]. > > > > > > > > You might be able to do the following with petsc-3.7 [with git > > > > externalpackage repos] > > > > > > > > --download-superlu_dist --download-superlu_dit-commit=v5.1.3 > > > > > > > > Satish > > > > > > > > > Here is some more information: > > > > > > > > > > On process 2 I have this printed in stdout: > > > > > > > > > > Intel MKL ERROR: Parameter 6 was incorrect on entry to DTRSM . > > > > > > > > > > and in stderr: > > > > > > > > > > Test.ProblemeEFGen.opt: malloc.c:2369: sysmalloc: Assertion `(old_top > > > > > == > > > > > (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - > > > > > __builtin_offsetof > > > > > (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) > > > > > (old_size) > > > > > > = (unsigned long)((((__builtin_offsetof (struct malloc_chunk, > > > > > fd_nextsize))+((2 *(sizeof(size_t))) - 1)) & ~((2 *(sizeof(size_t))) - > > > > > 1))) && > > > > > ((old_top)->size & 0x1) && ((unsigned long) old_end & pagemask) == 0)' > > > > > failed. > > > > > [saruman:15771] *** Process received signal *** > > > > > > > > > > This is the 7th call to KSPSolve in the same execution. Here is the > > > > > last > > > > > KSPView: > > > > > > > > > > KSP Object:(o_slin) 2 MPI processes > > > > > type: preonly > > > > > maximum iterations=10000, initial guess is zero > > > > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. > > > > > left preconditioning > > > > > using NONE norm type for convergence test > > > > > PC Object:(o_slin) 2 MPI processes > > > > > type: lu > > > > > LU: out-of-place factorization > > > > > tolerance for zero pivot 2.22045e-14 > > > > > matrix ordering: natural > > > > > factor fill ratio given 0., needed 0. > > > > > Factored matrix follows: > > > > > Mat Object: 2 MPI processes > > > > > type: mpiaij > > > > > rows=382, cols=382 > > > > > package used to perform factorization: superlu_dist > > > > > total: nonzeros=0, allocated nonzeros=0 > > > > > total number of mallocs used during MatSetValues calls =0 > > > > > SuperLU_DIST run parameters: > > > > > Process grid nprow 2 x npcol 1 > > > > > Equilibrate matrix TRUE > > > > > Matrix input mode 1 > > > > > Replace tiny pivots FALSE > > > > > Use iterative refinement FALSE > > > > > Processors in row 2 col partition 1 > > > > > Row permutation LargeDiag > > > > > Column permutation METIS_AT_PLUS_A > > > > > Parallel symbolic factorization FALSE > > > > > Repeated factorization SamePattern > > > > > linear system matrix = precond matrix: > > > > > Mat Object: (o_slin) 2 MPI processes > > > > > type: mpiaij > > > > > rows=382, cols=382 > > > > > total: nonzeros=4458, allocated nonzeros=4458 > > > > > total number of mallocs used during MatSetValues calls =0 > > > > > using I-node (on process 0) routines: found 109 nodes, limit > > > > > used > > > > > is 5 > > > > > > > > > > I know this information is not enough to help debug, but I would like > > > > > to > > > > > know > > > > > if PETSc guys will upgrade to 5.1.3 before trying to debug anything. > > > > > > > > > > Thanks, > > > > > Eric > > > > > > > > > > > > > > > From Eric.Chamberland at giref.ulaval.ca Sat Dec 31 12:35:51 2016 From: Eric.Chamberland at giref.ulaval.ca (Eric Chamberland) Date: Sat, 31 Dec 2016 13:35:51 -0500 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) 
In-Reply-To: <2ef21b36-fa1b-c4c9-a2c9-00ada423a0c7@giref.ulaval.ca> References: <2ef21b36-fa1b-c4c9-a2c9-00ada423a0c7@giref.ulaval.ca> Message-ID: I think there is definitly a problem. After looking at the files installed either from petsc-master tarball or the manual configure I just did with --download-superlu_dist-commit=v5.1.3, the file include/superlu_defs.h have these values: #define SUPERLU_DIST_MAJOR_VERSION 5 #define SUPERLU_DIST_MINOR_VERSION 1 #define SUPERLU_DIST_PATCH_VERSION 0 What's wrong? Eric Le 2016-12-31 ? 13:26, Eric Chamberland a ?crit : > Ah ok, I see! Here look at the file name in the configure.log: > > Install the project... > /usr/bin/cmake -P cmake_install.cmake > -- Install configuration: "DEBUG" > -- Installing: /opt/petsc-master_debug/lib/libsuperlu_dist.so.5.1.0 > -- Installing: /opt/petsc-master_debug/lib/libsuperlu_dist.so.5 > > It is saying 5.1.0, but in fact you are right: it is 5.1.3 that is > downloaded!!! :) > > And FWIW, the nighlty automatic compilation of PETSc starts within a > brand new and empty directory each night... > > Thanks to both of you again! :) > > Eric > > > Le 2016-12-31 ? 13:17, Satish Balay a ?crit : >> =============================================================================== >> Trying to download >> git://https://github.com/xiaoyeli/superlu_dist for SUPERLU_DIST >> =============================================================================== >> Executing: git clone >> https://github.com/xiaoyeli/superlu_dist >> /pmi/cmpbib/compilation_BIB_gcc_redhat_petsc-master_debug/COMPILE_AUTO/petsc-master-debug/arch-linux2-c-debug/externalpackages/git.superlu_dist >> stdout: Cloning into >> '/pmi/cmpbib/compilation_BIB_gcc_redhat_petsc-master_debug/COMPILE_AUTO/petsc-master-debug/arch-linux2-c-debug/externalpackages/git.superlu_dist'... >> Looking for SUPERLU_DIST at git.superlu_dist, >> hg.superlu_dist or a directory starting with ['superlu_dist'] >> Found a copy of SUPERLU_DIST in git.superlu_dist >> Executing: ['git', 'rev-parse', '--git-dir'] >> stdout: .git >> Executing: ['git', 'cat-file', '-e', 'v5.1.3^{commit}'] >> Executing: ['git', 'rev-parse', 'v5.1.3'] >> stdout: 7306f704c6c8d5113def649b76def3c8eb607690 >> Executing: ['git', 'stash'] >> stdout: No local changes to save >> Executing: ['git', 'clean', '-f', '-d', '-x'] >> Executing: ['git', 'checkout', '-f', >> '7306f704c6c8d5113def649b76def3c8eb607690'] >> <<<<<<<< >> >> Per log below - its using 5.1.3. Why did you think you got 5.1.0? >> >> Satish >> >> On Sat, 31 Dec 2016, Eric Chamberland wrote: >> >>> Hi, >>> >>> ok I will test with 5.1.3 with the option you gave me >>> (--download-superlu_dit-commit=v5.1.3). >>> >>> But from what you and Matthew said, I should have 5.1.3 with >>> petsc-master, but >>> the last night log shows me library file name 5.1.0: >>> >>> http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2016.12.31.02h00m01s_configure.log >>> >>> >>> So I am a bit confused: Why did I got 5.1.0 last night? (I use the >>> petsc-master tarball, is it the reason?) >>> >>> Thanks, >>> >>> Eric >>> >>> >>> Le 2016-12-31 ? 11:52, Satish Balay a ?crit : >>>> On Sat, 31 Dec 2016, Eric Chamberland wrote: >>>> >>>>> Hi, >>>>> >>>>> I am just starting to debug a bug encountered with and only with >>>>> SuperLU_Dist >>>>> combined with MKL on a 2 processes validation test. >>>>> >>>>> (the same test works fine with MUMPS on 2 processes). 
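A quick way to confirm what the installed headers report is to compile a tiny check against them. This is a minimal sketch only; the file name, the build line in the comment and the /opt/petsc-master_debug include path are assumptions based on the install prefix shown earlier in the thread.

/* versioncheck.c: print the SUPERLU_DIST_*_VERSION macros quoted above from
 * the installed superlu_defs.h. Assumed build line (adjust to your install):
 *   mpicc -I/opt/petsc-master_debug/include versioncheck.c -o versioncheck */
#include <stdio.h>
#include <superlu_defs.h>

int main(void)
{
  printf("superlu_defs.h reports SuperLU_DIST %d.%d.%d\n",
         SUPERLU_DIST_MAJOR_VERSION,
         SUPERLU_DIST_MINOR_VERSION,
         SUPERLU_DIST_PATCH_VERSION);
  return 0;
}

With the header Eric quotes this prints 5.1.0 even though the v5.1.3 tag was checked out, which is exactly the mismatch being discussed.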
>>>>> >>>>> I just noticed that the SuperLU_Dist version installed by PETSc >>>>> configure >>>>> script is 5.1.0 and the latest SuperLU_DIST is 5.1.3. >>>> If you use petsc-master - it will install 5.1.3 by default. >>>>> Before going further, I just want to ask: >>>>> >>>>> Is there any specific reason to stick to 5.1.0? >>>> We don't usually upgrade externalpackage version in PETSc releases >>>> [unless its tested to work and fixes known bugs]. There could be API >>>> changes - or build changes that can potentially conflict. >>>> >>>> >From what I know - 5.1.3 should work with petsc-3.7 [it fixes a >>>> couple of >>>> bugs]. >>>> >>>> You might be able to do the following with petsc-3.7 [with git >>>> externalpackage repos] >>>> >>>> --download-superlu_dist --download-superlu_dit-commit=v5.1.3 >>>> >>>> Satish >>>> >>>>> Here is some more information: >>>>> >>>>> On process 2 I have this printed in stdout: >>>>> >>>>> Intel MKL ERROR: Parameter 6 was incorrect on entry to DTRSM . >>>>> >>>>> and in stderr: >>>>> >>>>> Test.ProblemeEFGen.opt: malloc.c:2369: sysmalloc: Assertion >>>>> `(old_top == >>>>> (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - >>>>> __builtin_offsetof >>>>> (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) >>>>> (old_size) >>>>>> = (unsigned long)((((__builtin_offsetof (struct malloc_chunk, >>>>> fd_nextsize))+((2 *(sizeof(size_t))) - 1)) & ~((2 >>>>> *(sizeof(size_t))) - >>>>> 1))) && >>>>> ((old_top)->size & 0x1) && ((unsigned long) old_end & pagemask) == >>>>> 0)' >>>>> failed. >>>>> [saruman:15771] *** Process received signal *** >>>>> >>>>> This is the 7th call to KSPSolve in the same execution. Here is >>>>> the last >>>>> KSPView: >>>>> >>>>> KSP Object:(o_slin) 2 MPI processes >>>>> type: preonly >>>>> maximum iterations=10000, initial guess is zero >>>>> tolerances: relative=1e-05, absolute=1e-50, divergence=10000. >>>>> left preconditioning >>>>> using NONE norm type for convergence test >>>>> PC Object:(o_slin) 2 MPI processes >>>>> type: lu >>>>> LU: out-of-place factorization >>>>> tolerance for zero pivot 2.22045e-14 >>>>> matrix ordering: natural >>>>> factor fill ratio given 0., needed 0. >>>>> Factored matrix follows: >>>>> Mat Object: 2 MPI processes >>>>> type: mpiaij >>>>> rows=382, cols=382 >>>>> package used to perform factorization: superlu_dist >>>>> total: nonzeros=0, allocated nonzeros=0 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> SuperLU_DIST run parameters: >>>>> Process grid nprow 2 x npcol 1 >>>>> Equilibrate matrix TRUE >>>>> Matrix input mode 1 >>>>> Replace tiny pivots FALSE >>>>> Use iterative refinement FALSE >>>>> Processors in row 2 col partition 1 >>>>> Row permutation LargeDiag >>>>> Column permutation METIS_AT_PLUS_A >>>>> Parallel symbolic factorization FALSE >>>>> Repeated factorization SamePattern >>>>> linear system matrix = precond matrix: >>>>> Mat Object: (o_slin) 2 MPI processes >>>>> type: mpiaij >>>>> rows=382, cols=382 >>>>> total: nonzeros=4458, allocated nonzeros=4458 >>>>> total number of mallocs used during MatSetValues calls =0 >>>>> using I-node (on process 0) routines: found 109 nodes, >>>>> limit used >>>>> is 5 >>>>> >>>>> I know this information is not enough to help debug, but I would >>>>> like to >>>>> know >>>>> if PETSc guys will upgrade to 5.1.3 before trying to debug anything. 
>>>>> >>>>> Thanks, >>>>> Eric >>>>> >>>>> >>> From balay at mcs.anl.gov Sat Dec 31 12:36:42 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 31 Dec 2016 12:36:42 -0600 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) In-Reply-To: References: Message-ID: On Sat, 31 Dec 2016, Eric Chamberland wrote: > ok I will test with 5.1.3 with the option you gave me > (--download-superlu_dit-commit=v5.1.3). BTW: I have a typo here - it should be: --download-superlu_dist-commit=v5.1.3 Satish From balay at mcs.anl.gov Sat Dec 31 12:39:10 2016 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 31 Dec 2016 12:39:10 -0600 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) In-Reply-To: References: <2ef21b36-fa1b-c4c9-a2c9-00ada423a0c7@giref.ulaval.ca> Message-ID: Ok - one more place superlu_dist stores version number - that needs updating with every release. cc:ing Sherry Satish On Sat, 31 Dec 2016, Eric Chamberland wrote: > I think there is definitly a problem. > > After looking at the files installed either from petsc-master tarball or the > manual configure I just did with --download-superlu_dist-commit=v5.1.3, the > file include/superlu_defs.h have these values: > > #define SUPERLU_DIST_MAJOR_VERSION 5 > #define SUPERLU_DIST_MINOR_VERSION 1 > #define SUPERLU_DIST_PATCH_VERSION 0 > > What's wrong? > > Eric > > > Le 2016-12-31 ? 13:26, Eric Chamberland a ?crit : > > Ah ok, I see! Here look at the file name in the configure.log: > > > > Install the project... > > /usr/bin/cmake -P cmake_install.cmake > > -- Install configuration: "DEBUG" > > -- Installing: /opt/petsc-master_debug/lib/libsuperlu_dist.so.5.1.0 > > -- Installing: /opt/petsc-master_debug/lib/libsuperlu_dist.so.5 > > > > It is saying 5.1.0, but in fact you are right: it is 5.1.3 that is > > downloaded!!! :) > > > > And FWIW, the nighlty automatic compilation of PETSc starts within a brand > > new and empty directory each night... > > > > Thanks to both of you again! :) > > > > Eric > > > > > > Le 2016-12-31 ? 13:17, Satish Balay a ?crit : > > > =============================================================================== > > > Trying to download > > > git://https://github.com/xiaoyeli/superlu_dist for SUPERLU_DIST > > > =============================================================================== > > > Executing: git clone > > > https://github.com/xiaoyeli/superlu_dist > > > /pmi/cmpbib/compilation_BIB_gcc_redhat_petsc-master_debug/COMPILE_AUTO/petsc-master-debug/arch-linux2-c-debug/externalpackages/git.superlu_dist > > > stdout: Cloning into > > > '/pmi/cmpbib/compilation_BIB_gcc_redhat_petsc-master_debug/COMPILE_AUTO/petsc-master-debug/arch-linux2-c-debug/externalpackages/git.superlu_dist'... > > > Looking for SUPERLU_DIST at git.superlu_dist, > > > hg.superlu_dist or a directory starting with ['superlu_dist'] > > > Found a copy of SUPERLU_DIST in git.superlu_dist > > > Executing: ['git', 'rev-parse', '--git-dir'] > > > stdout: .git > > > Executing: ['git', 'cat-file', '-e', 'v5.1.3^{commit}'] > > > Executing: ['git', 'rev-parse', 'v5.1.3'] > > > stdout: 7306f704c6c8d5113def649b76def3c8eb607690 > > > Executing: ['git', 'stash'] > > > stdout: No local changes to save > > > Executing: ['git', 'clean', '-f', '-d', '-x'] > > > Executing: ['git', 'checkout', '-f', > > > '7306f704c6c8d5113def649b76def3c8eb607690'] > > > <<<<<<<< > > > > > > Per log below - its using 5.1.3. Why did you think you got 5.1.0? 
> > > > > > Satish > > > > > > On Sat, 31 Dec 2016, Eric Chamberland wrote: > > > > > > > Hi, > > > > > > > > ok I will test with 5.1.3 with the option you gave me > > > > (--download-superlu_dit-commit=v5.1.3). > > > > > > > > But from what you and Matthew said, I should have 5.1.3 with > > > > petsc-master, but > > > > the last night log shows me library file name 5.1.0: > > > > > > > > http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/2016.12.31.02h00m01s_configure.log > > > > > > > > > > > > So I am a bit confused: Why did I got 5.1.0 last night? (I use the > > > > petsc-master tarball, is it the reason?) > > > > > > > > Thanks, > > > > > > > > Eric > > > > > > > > > > > > Le 2016-12-31 ? 11:52, Satish Balay a ?crit : > > > > > On Sat, 31 Dec 2016, Eric Chamberland wrote: > > > > > > > > > > > Hi, > > > > > > > > > > > > I am just starting to debug a bug encountered with and only with > > > > > > SuperLU_Dist > > > > > > combined with MKL on a 2 processes validation test. > > > > > > > > > > > > (the same test works fine with MUMPS on 2 processes). > > > > > > > > > > > > I just noticed that the SuperLU_Dist version installed by PETSc > > > > > > configure > > > > > > script is 5.1.0 and the latest SuperLU_DIST is 5.1.3. > > > > > If you use petsc-master - it will install 5.1.3 by default. > > > > > > Before going further, I just want to ask: > > > > > > > > > > > > Is there any specific reason to stick to 5.1.0? > > > > > We don't usually upgrade externalpackage version in PETSc releases > > > > > [unless its tested to work and fixes known bugs]. There could be API > > > > > changes - or build changes that can potentially conflict. > > > > > > > > > > >From what I know - 5.1.3 should work with petsc-3.7 [it fixes a > > > > > couple of > > > > > bugs]. > > > > > > > > > > You might be able to do the following with petsc-3.7 [with git > > > > > externalpackage repos] > > > > > > > > > > --download-superlu_dist --download-superlu_dit-commit=v5.1.3 > > > > > > > > > > Satish > > > > > > > > > > > Here is some more information: > > > > > > > > > > > > On process 2 I have this printed in stdout: > > > > > > > > > > > > Intel MKL ERROR: Parameter 6 was incorrect on entry to DTRSM . > > > > > > > > > > > > and in stderr: > > > > > > > > > > > > Test.ProblemeEFGen.opt: malloc.c:2369: sysmalloc: Assertion > > > > > > `(old_top == > > > > > > (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - > > > > > > __builtin_offsetof > > > > > > (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) > > > > > > (old_size) > > > > > > > = (unsigned long)((((__builtin_offsetof (struct malloc_chunk, > > > > > > fd_nextsize))+((2 *(sizeof(size_t))) - 1)) & ~((2 *(sizeof(size_t))) > > > > > > - > > > > > > 1))) && > > > > > > ((old_top)->size & 0x1) && ((unsigned long) old_end & pagemask) == > > > > > > 0)' > > > > > > failed. > > > > > > [saruman:15771] *** Process received signal *** > > > > > > > > > > > > This is the 7th call to KSPSolve in the same execution. Here is the > > > > > > last > > > > > > KSPView: > > > > > > > > > > > > KSP Object:(o_slin) 2 MPI processes > > > > > > type: preonly > > > > > > maximum iterations=10000, initial guess is zero > > > > > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000. 
> > > > > > left preconditioning > > > > > > using NONE norm type for convergence test > > > > > > PC Object:(o_slin) 2 MPI processes > > > > > > type: lu > > > > > > LU: out-of-place factorization > > > > > > tolerance for zero pivot 2.22045e-14 > > > > > > matrix ordering: natural > > > > > > factor fill ratio given 0., needed 0. > > > > > > Factored matrix follows: > > > > > > Mat Object: 2 MPI processes > > > > > > type: mpiaij > > > > > > rows=382, cols=382 > > > > > > package used to perform factorization: superlu_dist > > > > > > total: nonzeros=0, allocated nonzeros=0 > > > > > > total number of mallocs used during MatSetValues calls > > > > > > =0 > > > > > > SuperLU_DIST run parameters: > > > > > > Process grid nprow 2 x npcol 1 > > > > > > Equilibrate matrix TRUE > > > > > > Matrix input mode 1 > > > > > > Replace tiny pivots FALSE > > > > > > Use iterative refinement FALSE > > > > > > Processors in row 2 col partition 1 > > > > > > Row permutation LargeDiag > > > > > > Column permutation METIS_AT_PLUS_A > > > > > > Parallel symbolic factorization FALSE > > > > > > Repeated factorization SamePattern > > > > > > linear system matrix = precond matrix: > > > > > > Mat Object: (o_slin) 2 MPI processes > > > > > > type: mpiaij > > > > > > rows=382, cols=382 > > > > > > total: nonzeros=4458, allocated nonzeros=4458 > > > > > > total number of mallocs used during MatSetValues calls =0 > > > > > > using I-node (on process 0) routines: found 109 nodes, limit > > > > > > used > > > > > > is 5 > > > > > > > > > > > > I know this information is not enough to help debug, but I would > > > > > > like to > > > > > > know > > > > > > if PETSc guys will upgrade to 5.1.3 before trying to debug anything. > > > > > > > > > > > > Thanks, > > > > > > Eric > > > > > > > > > > > > > > > > > > > From xsli at lbl.gov Sat Dec 31 15:18:16 2016 From: xsli at lbl.gov (Xiaoye S. Li) Date: Sat, 31 Dec 2016 13:18:16 -0800 Subject: [petsc-users] Error with SuperLU_DIST (mkl related?) In-Reply-To: References: <2ef21b36-fa1b-c4c9-a2c9-00ada423a0c7@giref.ulaval.ca> Message-ID: I just updated version string in git repo and tarball. Sherry On Sat, Dec 31, 2016 at 10:39 AM, Satish Balay wrote: > Ok - one more place superlu_dist stores version number - that needs > updating with every release. > > cc:ing Sherry > > Satish > > On Sat, 31 Dec 2016, Eric Chamberland wrote: > > > I think there is definitly a problem. > > > > After looking at the files installed either from petsc-master tarball or > the > > manual configure I just did with --download-superlu_dist-commit=v5.1.3, > the > > file include/superlu_defs.h have these values: > > > > #define SUPERLU_DIST_MAJOR_VERSION 5 > > #define SUPERLU_DIST_MINOR_VERSION 1 > > #define SUPERLU_DIST_PATCH_VERSION 0 > > > > What's wrong? > > > > Eric > > > > > > Le 2016-12-31 ? 13:26, Eric Chamberland a ?crit : > > > Ah ok, I see! Here look at the file name in the configure.log: > > > > > > Install the project... > > > /usr/bin/cmake -P cmake_install.cmake > > > -- Install configuration: "DEBUG" > > > -- Installing: /opt/petsc-master_debug/lib/libsuperlu_dist.so.5.1.0 > > > -- Installing: /opt/petsc-master_debug/lib/libsuperlu_dist.so.5 > > > > > > It is saying 5.1.0, but in fact you are right: it is 5.1.3 that is > > > downloaded!!! :) > > > > > > And FWIW, the nighlty automatic compilation of PETSc starts within a > brand > > > new and empty directory each night... > > > > > > Thanks to both of you again! :) > > > > > > Eric > > > > > > > > > Le 2016-12-31 ? 
13:17, Satish Balay a ?crit : > > > > ============================================================ > =================== > > > > Trying to download > > > > git://https://github.com/xiaoyeli/superlu_dist for SUPERLU_DIST > > > > ============================================================ > =================== > > > > Executing: git clone > > > > https://github.com/xiaoyeli/superlu_dist > > > > /pmi/cmpbib/compilation_BIB_gcc_redhat_petsc-master_debug/ > COMPILE_AUTO/petsc-master-debug/arch-linux2-c-debug/ > externalpackages/git.superlu_dist > > > > stdout: Cloning into > > > > '/pmi/cmpbib/compilation_BIB_gcc_redhat_petsc-master_debug/ > COMPILE_AUTO/petsc-master-debug/arch-linux2-c-debug/ > externalpackages/git.superlu_dist'... > > > > Looking for SUPERLU_DIST at git.superlu_dist, > > > > hg.superlu_dist or a directory starting with ['superlu_dist'] > > > > Found a copy of SUPERLU_DIST in git.superlu_dist > > > > Executing: ['git', 'rev-parse', '--git-dir'] > > > > stdout: .git > > > > Executing: ['git', 'cat-file', '-e', 'v5.1.3^{commit}'] > > > > Executing: ['git', 'rev-parse', 'v5.1.3'] > > > > stdout: 7306f704c6c8d5113def649b76def3c8eb607690 > > > > Executing: ['git', 'stash'] > > > > stdout: No local changes to save > > > > Executing: ['git', 'clean', '-f', '-d', '-x'] > > > > Executing: ['git', 'checkout', '-f', > > > > '7306f704c6c8d5113def649b76def3c8eb607690'] > > > > <<<<<<<< > > > > > > > > Per log below - its using 5.1.3. Why did you think you got 5.1.0? > > > > > > > > Satish > > > > > > > > On Sat, 31 Dec 2016, Eric Chamberland wrote: > > > > > > > > > Hi, > > > > > > > > > > ok I will test with 5.1.3 with the option you gave me > > > > > (--download-superlu_dit-commit=v5.1.3). > > > > > > > > > > But from what you and Matthew said, I should have 5.1.3 with > > > > > petsc-master, but > > > > > the last night log shows me library file name 5.1.0: > > > > > > > > > > http://www.giref.ulaval.ca/~cmpgiref/petsc-master-debug/ > 2016.12.31.02h00m01s_configure.log > > > > > > > > > > > > > > > So I am a bit confused: Why did I got 5.1.0 last night? (I use the > > > > > petsc-master tarball, is it the reason?) > > > > > > > > > > Thanks, > > > > > > > > > > Eric > > > > > > > > > > > > > > > Le 2016-12-31 ? 11:52, Satish Balay a ?crit : > > > > > > On Sat, 31 Dec 2016, Eric Chamberland wrote: > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > I am just starting to debug a bug encountered with and only > with > > > > > > > SuperLU_Dist > > > > > > > combined with MKL on a 2 processes validation test. > > > > > > > > > > > > > > (the same test works fine with MUMPS on 2 processes). > > > > > > > > > > > > > > I just noticed that the SuperLU_Dist version installed by PETSc > > > > > > > configure > > > > > > > script is 5.1.0 and the latest SuperLU_DIST is 5.1.3. > > > > > > If you use petsc-master - it will install 5.1.3 by default. > > > > > > > Before going further, I just want to ask: > > > > > > > > > > > > > > Is there any specific reason to stick to 5.1.0? > > > > > > We don't usually upgrade externalpackage version in PETSc > releases > > > > > > [unless its tested to work and fixes known bugs]. There could be > API > > > > > > changes - or build changes that can potentially conflict. > > > > > > > > > > > > >From what I know - 5.1.3 should work with petsc-3.7 [it fixes a > > > > > > couple of > > > > > > bugs]. 
> > > > > > > > > > > > You might be able to do the following with petsc-3.7 [with git > > > > > > externalpackage repos] > > > > > > > > > > > > --download-superlu_dist --download-superlu_dit-commit=v5.1.3 > > > > > > > > > > > > Satish > > > > > > > > > > > > > Here is some more information: > > > > > > > > > > > > > > On process 2 I have this printed in stdout: > > > > > > > > > > > > > > Intel MKL ERROR: Parameter 6 was incorrect on entry to DTRSM . > > > > > > > > > > > > > > and in stderr: > > > > > > > > > > > > > > Test.ProblemeEFGen.opt: malloc.c:2369: sysmalloc: Assertion > > > > > > > `(old_top == > > > > > > > (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - > > > > > > > __builtin_offsetof > > > > > > > (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned > long) > > > > > > > (old_size) > > > > > > > > = (unsigned long)((((__builtin_offsetof (struct malloc_chunk, > > > > > > > fd_nextsize))+((2 *(sizeof(size_t))) - 1)) & ~((2 > *(sizeof(size_t))) > > > > > > > - > > > > > > > 1))) && > > > > > > > ((old_top)->size & 0x1) && ((unsigned long) old_end & > pagemask) == > > > > > > > 0)' > > > > > > > failed. > > > > > > > [saruman:15771] *** Process received signal *** > > > > > > > > > > > > > > This is the 7th call to KSPSolve in the same execution. Here > is the > > > > > > > last > > > > > > > KSPView: > > > > > > > > > > > > > > KSP Object:(o_slin) 2 MPI processes > > > > > > > type: preonly > > > > > > > maximum iterations=10000, initial guess is zero > > > > > > > tolerances: relative=1e-05, absolute=1e-50, > divergence=10000. > > > > > > > left preconditioning > > > > > > > using NONE norm type for convergence test > > > > > > > PC Object:(o_slin) 2 MPI processes > > > > > > > type: lu > > > > > > > LU: out-of-place factorization > > > > > > > tolerance for zero pivot 2.22045e-14 > > > > > > > matrix ordering: natural > > > > > > > factor fill ratio given 0., needed 0. > > > > > > > Factored matrix follows: > > > > > > > Mat Object: 2 MPI processes > > > > > > > type: mpiaij > > > > > > > rows=382, cols=382 > > > > > > > package used to perform factorization: superlu_dist > > > > > > > total: nonzeros=0, allocated nonzeros=0 > > > > > > > total number of mallocs used during MatSetValues > calls > > > > > > > =0 > > > > > > > SuperLU_DIST run parameters: > > > > > > > Process grid nprow 2 x npcol 1 > > > > > > > Equilibrate matrix TRUE > > > > > > > Matrix input mode 1 > > > > > > > Replace tiny pivots FALSE > > > > > > > Use iterative refinement FALSE > > > > > > > Processors in row 2 col partition 1 > > > > > > > Row permutation LargeDiag > > > > > > > Column permutation METIS_AT_PLUS_A > > > > > > > Parallel symbolic factorization FALSE > > > > > > > Repeated factorization SamePattern > > > > > > > linear system matrix = precond matrix: > > > > > > > Mat Object: (o_slin) 2 MPI processes > > > > > > > type: mpiaij > > > > > > > rows=382, cols=382 > > > > > > > total: nonzeros=4458, allocated nonzeros=4458 > > > > > > > total number of mallocs used during MatSetValues calls =0 > > > > > > > using I-node (on process 0) routines: found 109 nodes, > limit > > > > > > > used > > > > > > > is 5 > > > > > > > > > > > > > > I know this information is not enough to help debug, but I > would > > > > > > > like to > > > > > > > know > > > > > > > if PETSc guys will upgrade to 5.1.3 before trying to debug > anything. 
> > > > > > > > > > > > > > Thanks, > > > > > > > Eric > > > > > > > > > > > > > > > > > > > > >