From aamor at pa.uc3m.es Thu Feb 1 01:46:25 2018 From: aamor at pa.uc3m.es (=?UTF-8?B?QWRyacOhbiBBbW9y?=) Date: Thu, 1 Feb 2018 08:46:25 +0100 Subject: [petsc-users] Subscribe Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: From aamor at pa.uc3m.es Thu Feb 1 02:45:21 2018 From: aamor at pa.uc3m.es (=?UTF-8?B?QWRyacOhbiBBbW9y?=) Date: Thu, 1 Feb 2018 09:45:21 +0100 Subject: [petsc-users] Problem when solving matrices with identity matrices as diagonal block domains Message-ID: Hi, First, I am a novice in the use of PETSC so apologies for having a newbie mistake, but maybe you can help me! I am solving a matrix of the kind: (Identity (50% dense)block (50% dense)block Identity) I have found a problem in the performance of the solver when I treat the diagonal blocks as sparse matrices in FORTRAN. In other words, I use the routine: MatCreateSeqAIJ To preallocate the matrix, and then I have tried: 1. To call MatSetValues for all the values of the identity matrices. I mean, if the identity matrix has a dimension of 22x22, I call MatSetValues 22*22 times. 2. To call MatSetValues only once per row. If the identity matrix has a dimension of 22x22, I call MatSetValues only 22 times. With the case 1, the iterative solver (I have tried with the default one and KSPBCGS) only takes one iteration to converge and it converges with a residual of 1E-14. However, with the case 2, the iterative solver takes, say, 9 iterations and converges with a residual of 1E-04. The matrices that are loaded into PETSC are exactly the same (I have written them to a file from the matrix which is solved, getting it with MatGetValues). What can be happening? I know that the fact that only takes one iteration is because the iterative solver is "lucky" and its first guess is the right one, but I don't understand the difference in the performance since the matrix is the same. I would like to use the case 2 since my matrices are quite large and it's much more efficient. Please help me! Thanks! Adrian. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Feb 1 09:20:37 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Thu, 1 Feb 2018 15:20:37 +0000 Subject: [petsc-users] Problem when solving matrices with identity matrices as diagonal block domains In-Reply-To: References: Message-ID: <7CBAEA6B-2526-4A5E-88B9-417B2581A7EF@anl.gov> 1) By default if you call MatSetValues() with a zero element the sparse Mat will store the 0 into the matrix. If you do not call it with zero elements then it does not create a zero entry for that location. 2) Many of the preconditioners in PETSc are based on "nonzero entries" in sparse matrices (here a nonzero entry simply means any location in a matrix where a value is stored -- even if the value is zero). In particular ILU(0) does a LU on the "nonzero" structure of the matrix Hence in your case it is doing ILU(0) on a dense matrix since you set all the entries in the matrix and thus producing a direct solver. The lesson is you should only be setting true nonzero values into the matrix, not zero entries. There is a MatOption MAT_IGNORE_ZERO_ENTRIES which, if you set it, prevents the matrix from creating a location for the zero values. If you set this first on the matrix then your two approaches will result in the same preconditioner and same iterative convergence. 
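As a concrete illustration of the advice above, here is a minimal C sketch (the original code is Fortran, but the calls map one-to-one). The tiny matrix size and pure-identity values are placeholders; the point is only that MatSetOption() with MAT_IGNORE_ZERO_ENTRIES is called before any MatSetValues(), so explicit zeros never become stored "nonzeros" that ILU(0) would factor.

/* Sketch: assemble a small AIJ matrix, passing explicit zeros to MatSetValues().
 * With MAT_IGNORE_ZERO_ENTRIES set, the zeros are dropped, so ILU(0) only sees
 * the true sparsity pattern (here, the diagonal). */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscInt       n = 4, i, j;
  PetscScalar    v;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  /* preallocating 1 entry per row is enough here because only the diagonal survives */
  ierr = MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 1, NULL, &A);CHKERRQ(ierr);
  /* must be set before MatSetValues(): zero values passed in are then ignored */
  ierr = MatSetOption(A, MAT_IGNORE_ZERO_ENTRIES, PETSC_TRUE);CHKERRQ(ierr);
  for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
      v    = (i == j) ? 1.0 : 0.0;   /* identity block: off-diagonal values are zero */
      ierr = MatSetValues(A, 1, &i, 1, &j, &v, INSERT_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatView(A, PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr); /* only diagonal entries are stored */
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}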
Barry > On Feb 1, 2018, at 2:45 AM, Adri?n Amor wrote: > > Hi, > > First, I am a novice in the use of PETSC so apologies for having a newbie mistake, but maybe you can help me! I am solving a matrix of the kind: > (Identity (50% dense)block > (50% dense)block Identity) > > I have found a problem in the performance of the solver when I treat the diagonal blocks as sparse matrices in FORTRAN. In other words, I use the routine: > MatCreateSeqAIJ > To preallocate the matrix, and then I have tried: > 1. To call MatSetValues for all the values of the identity matrices. I mean, if the identity matrix has a dimension of 22x22, I call MatSetValues 22*22 times. > 2. To call MatSetValues only once per row. If the identity matrix has a dimension of 22x22, I call MatSetValues only 22 times. > > With the case 1, the iterative solver (I have tried with the default one and KSPBCGS) only takes one iteration to converge and it converges with a residual of 1E-14. However, with the case 2, the iterative solver takes, say, 9 iterations and converges with a residual of 1E-04. The matrices that are loaded into PETSC are exactly the same (I have written them to a file from the matrix which is solved, getting it with MatGetValues). > > What can be happening? I know that the fact that only takes one iteration is because the iterative solver is "lucky" and its first guess is the right one, but I don't understand the difference in the performance since the matrix is the same. I would like to use the case 2 since my matrices are quite large and it's much more efficient. > > Please help me! Thanks! > > Adrian. From aamor at pa.uc3m.es Thu Feb 1 11:34:03 2018 From: aamor at pa.uc3m.es (=?UTF-8?B?QWRyacOhbiBBbW9y?=) Date: Thu, 1 Feb 2018 18:34:03 +0100 Subject: [petsc-users] Problem when solving matrices with identity matrices as diagonal block domains In-Reply-To: <7CBAEA6B-2526-4A5E-88B9-417B2581A7EF@anl.gov> References: <7CBAEA6B-2526-4A5E-88B9-417B2581A7EF@anl.gov> Message-ID: Thanks, it's true that with MAT_IGNORE_ZERO_ENTRIES I get the same performance. I assumed that explicitly calling to KSPSetType(petsc_ksp, KSPBCGS, petsc_ierr) it wouldn't use the direct solver from PETSC. Thank you for the detailed response, it was really convenient! 2018-02-01 16:20 GMT+01:00 Smith, Barry F. : > > 1) By default if you call MatSetValues() with a zero element the sparse > Mat will store the 0 into the matrix. If you do not call it with zero > elements then it does not create a zero entry for that location. > > 2) Many of the preconditioners in PETSc are based on "nonzero entries" > in sparse matrices (here a nonzero entry simply means any location in a > matrix where a value is stored -- even if the value is zero). In particular > ILU(0) does a LU on the "nonzero" structure of the matrix > > Hence in your case it is doing ILU(0) on a dense matrix since you set all > the entries in the matrix and thus producing a direct solver. > > The lesson is you should only be setting true nonzero values into the > matrix, not zero entries. There is a MatOption MAT_IGNORE_ZERO_ENTRIES > which, if you set it, prevents the matrix from creating a location for the > zero values. If you set this first on the matrix then your two approaches > will result in the same preconditioner and same iterative convergence. > > Barry > > > On Feb 1, 2018, at 2:45 AM, Adri?n Amor wrote: > > > > Hi, > > > > First, I am a novice in the use of PETSC so apologies for having a > newbie mistake, but maybe you can help me! 
I am solving a matrix of the > kind: > > (Identity (50% dense)block > > (50% dense)block Identity) > > > > I have found a problem in the performance of the solver when I treat the > diagonal blocks as sparse matrices in FORTRAN. In other words, I use the > routine: > > MatCreateSeqAIJ > > To preallocate the matrix, and then I have tried: > > 1. To call MatSetValues for all the values of the identity matrices. I > mean, if the identity matrix has a dimension of 22x22, I call MatSetValues > 22*22 times. > > 2. To call MatSetValues only once per row. If the identity matrix has a > dimension of 22x22, I call MatSetValues only 22 times. > > > > With the case 1, the iterative solver (I have tried with the default one > and KSPBCGS) only takes one iteration to converge and it converges with a > residual of 1E-14. However, with the case 2, the iterative solver takes, > say, 9 iterations and converges with a residual of 1E-04. The matrices that > are loaded into PETSC are exactly the same (I have written them to a file > from the matrix which is solved, getting it with MatGetValues). > > > > What can be happening? I know that the fact that only takes one > iteration is because the iterative solver is "lucky" and its first guess is > the right one, but I don't understand the difference in the performance > since the matrix is the same. I would like to use the case 2 since my > matrices are quite large and it's much more efficient. > > > > Please help me! Thanks! > > > > Adrian. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcrean01 at gmail.com Thu Feb 1 11:48:23 2018 From: jcrean01 at gmail.com (Jared Crean) Date: Thu, 1 Feb 2018 12:48:23 -0500 Subject: [petsc-users] Error Using Matrix-Free Linear Operator with Matrix-Explicit Preconditioner Message-ID: Hello, ??? I am trying to use a matrix-free linear operator with a matrix-explicit preconditioner, but when I try to do the KSP solve it gives the error: [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for possible LU and Cholesky solvers [0]PETSC ERROR: Could not locate a solver package. Perhaps you must ./configure with --download- [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 [0]PETSC ERROR: ./testcase on a arch-linux2-c-debug named jared-r15 by jared Thu Feb? 1 12:40:57 2018 [0]PETSC ERROR: Configure options [0]PETSC ERROR: #1 MatGetFactor() line 4346 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/mat/interface/matrix.c [0]PETSC ERROR: #2 PCSetUp_ILU() line 142 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/impls/factor/ilu/ilu.c [0]PETSC ERROR: #3 PCSetUp() line 924 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #4 KSPSetUp() line 381 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #5 PCSetUpOnBlocks_BJacobi_Singleblock() line 618 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/impls/bjacobi/bjacobi.c [0]PETSC ERROR: #6 PCSetUpOnBlocks() line 955 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/interface/precon.c [0]PETSC ERROR: #7 KSPSetUpOnBlocks() line 213 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: #8 KSPSolve() line 613 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c ? The code to reproduce this error is attached.? 
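Since the attached testcase is not reproduced in the archive, the following is only a generic C sketch of the kind of setup being described: a shell (matrix-free) operator handed to the Krylov method together with an assembled matrix for the preconditioner. The 1D Laplacian, the ShellMult stand-in and all names are placeholders, and the Pmat is created as plain MATAIJ, which is what the replies later in this thread converge on; with that choice the same code runs on one rank or in parallel.

/* Sketch: shell operator A for the Krylov method, assembled AIJ matrix P for the
 * preconditioner.  ShellMult just applies P as a stand-in for a real matrix-free product. */
#include <petscksp.h>

typedef struct { Mat P; } AppCtx;                      /* hypothetical user context */

static PetscErrorCode ShellMult(Mat A, Vec x, Vec y)   /* placeholder matrix-free product */
{
  AppCtx        *ctx;
  PetscErrorCode ierr;
  ierr = MatShellGetContext(A, &ctx);CHKERRQ(ierr);
  ierr = MatMult(ctx->P, x, y);CHKERRQ(ierr);          /* stands in for the real operator */
  return 0;
}

int main(int argc, char **argv)
{
  Mat            A, P;
  KSP            ksp;
  Vec            b, x;
  AppCtx         ctx;
  PetscInt       i, m, n = 10, col[3];
  PetscScalar    v[3];
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  /* Assembled matrix used only to build the preconditioner (toy 1D Laplacian). */
  ierr = MatCreate(PETSC_COMM_WORLD, &P);CHKERRQ(ierr);
  ierr = MatSetSizes(P, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetType(P, MATAIJ);CHKERRQ(ierr);          /* AIJ, not explicitly SEQAIJ/MPIAIJ */
  ierr = MatSetUp(P);CHKERRQ(ierr);
  for (i = 0; i < n; i++) {                            /* every rank redundantly sets all rows; fine for a toy */
    col[0] = i - 1; col[1] = i; col[2] = i + 1;
    v[0] = -1.0; v[1] = 2.0; v[2] = -1.0;
    if (i == 0)          { ierr = MatSetValues(P, 1, &i, 2, &col[1], &v[1], INSERT_VALUES);CHKERRQ(ierr); }
    else if (i == n - 1) { ierr = MatSetValues(P, 1, &i, 2, col, v, INSERT_VALUES);CHKERRQ(ierr); }
    else                 { ierr = MatSetValues(P, 1, &i, 3, col, v, INSERT_VALUES);CHKERRQ(ierr); }
  }
  ierr = MatAssemblyBegin(P, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(P, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* Matrix-free operator with the same layout as P. */
  ctx.P = P;
  ierr = MatGetLocalSize(P, &m, NULL);CHKERRQ(ierr);
  ierr = MatCreateShell(PETSC_COMM_WORLD, m, m, n, n, &ctx, &A);CHKERRQ(ierr);
  ierr = MatShellSetOperation(A, MATOP_MULT, (void (*)(void))ShellMult);CHKERRQ(ierr);

  ierr = MatCreateVecs(P, &x, &b);CHKERRQ(ierr);
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, P);CHKERRQ(ierr);     /* shell operator, assembled Pmat */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = MatDestroy(&P);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}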
The error is present on Petsc 3.7.6 and 3.8.3.? I noticed two things while creating the test case: 1) using a jacobi preconditioner works (using block jacobi with ILU on each block does not), and 2) if I replace the shell matrix with the preconditioner matrix in KSPSetOperators(), there is no error (with the block jacobi ILU preconditioner). ? Is this a bug in Petsc or did I setup the preconditioner incorrectly? ??? Jared Crean -------------- next part -------------- A non-text attachment was scrubbed... Name: testcase.c Type: text/x-csrc Size: 3133 bytes Desc: not available URL: From bsmith at mcs.anl.gov Thu Feb 1 13:03:10 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Thu, 1 Feb 2018 19:03:10 +0000 Subject: [petsc-users] Problem when solving matrices with identity matrices as diagonal block domains In-Reply-To: References: <7CBAEA6B-2526-4A5E-88B9-417B2581A7EF@anl.gov> Message-ID: <8AFD7C13-E094-4B26-B33A-76BF51D2C235@anl.gov> > On Feb 1, 2018, at 11:34 AM, Adri?n Amor wrote: > > Thanks, it's true that with MAT_IGNORE_ZERO_ENTRIES I get the same performance. I assumed that explicitly calling to KSPSetType(petsc_ksp, KSPBCGS, petsc_ierr) it wouldn't use the direct solver from PETSC. It is not really a direct solver, it is just the default ILU(0) solver for sparse matrix is a direct solver if the all the zeros are also stored with the sparse matrix. Barry > Thank you for the detailed response, it was really convenient! > > 2018-02-01 16:20 GMT+01:00 Smith, Barry F. : > > 1) By default if you call MatSetValues() with a zero element the sparse Mat will store the 0 into the matrix. If you do not call it with zero elements then it does not create a zero entry for that location. > > 2) Many of the preconditioners in PETSc are based on "nonzero entries" in sparse matrices (here a nonzero entry simply means any location in a matrix where a value is stored -- even if the value is zero). In particular ILU(0) does a LU on the "nonzero" structure of the matrix > > Hence in your case it is doing ILU(0) on a dense matrix since you set all the entries in the matrix and thus producing a direct solver. > > The lesson is you should only be setting true nonzero values into the matrix, not zero entries. There is a MatOption MAT_IGNORE_ZERO_ENTRIES which, if you set it, prevents the matrix from creating a location for the zero values. If you set this first on the matrix then your two approaches will result in the same preconditioner and same iterative convergence. > > Barry > > > On Feb 1, 2018, at 2:45 AM, Adri?n Amor wrote: > > > > Hi, > > > > First, I am a novice in the use of PETSC so apologies for having a newbie mistake, but maybe you can help me! I am solving a matrix of the kind: > > (Identity (50% dense)block > > (50% dense)block Identity) > > > > I have found a problem in the performance of the solver when I treat the diagonal blocks as sparse matrices in FORTRAN. In other words, I use the routine: > > MatCreateSeqAIJ > > To preallocate the matrix, and then I have tried: > > 1. To call MatSetValues for all the values of the identity matrices. I mean, if the identity matrix has a dimension of 22x22, I call MatSetValues 22*22 times. > > 2. To call MatSetValues only once per row. If the identity matrix has a dimension of 22x22, I call MatSetValues only 22 times. > > > > With the case 1, the iterative solver (I have tried with the default one and KSPBCGS) only takes one iteration to converge and it converges with a residual of 1E-14. 
However, with the case 2, the iterative solver takes, say, 9 iterations and converges with a residual of 1E-04. The matrices that are loaded into PETSC are exactly the same (I have written them to a file from the matrix which is solved, getting it with MatGetValues). > > > > What can be happening? I know that the fact that only takes one iteration is because the iterative solver is "lucky" and its first guess is the right one, but I don't understand the difference in the performance since the matrix is the same. I would like to use the case 2 since my matrices are quite large and it's much more efficient. > > > > Please help me! Thanks! > > > > Adrian. > > From stefano.zampini at gmail.com Thu Feb 1 13:08:09 2018 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Thu, 1 Feb 2018 22:08:09 +0300 Subject: [petsc-users] Problem when solving matrices with identity matrices as diagonal block domains In-Reply-To: References: <7CBAEA6B-2526-4A5E-88B9-417B2581A7EF@anl.gov> Message-ID: <5E4265BD-D50E-4717-B08C-FA9686128CF6@gmail.com> Note that you don?t need to assemble the 2x2 block matrix, as the solution can be computed via a Schur complement argument given the matrix [I B; C I] and rhs [f1,f2], you can solve S x_2 = f1 - B f2, with S = I - CB, and then obtain x_1 = f1 - B x_2. > On Feb 1, 2018, at 8:34 PM, Adri?n Amor wrote: > > Thanks, it's true that with MAT_IGNORE_ZERO_ENTRIES I get the same performance. I assumed that explicitly calling to KSPSetType(petsc_ksp, KSPBCGS, petsc_ierr) it wouldn't use the direct solver from PETSC. Thank you for the detailed response, it was really convenient! > > 2018-02-01 16:20 GMT+01:00 Smith, Barry F. >: > > 1) By default if you call MatSetValues() with a zero element the sparse Mat will store the 0 into the matrix. If you do not call it with zero elements then it does not create a zero entry for that location. > > 2) Many of the preconditioners in PETSc are based on "nonzero entries" in sparse matrices (here a nonzero entry simply means any location in a matrix where a value is stored -- even if the value is zero). In particular ILU(0) does a LU on the "nonzero" structure of the matrix > > Hence in your case it is doing ILU(0) on a dense matrix since you set all the entries in the matrix and thus producing a direct solver. > > The lesson is you should only be setting true nonzero values into the matrix, not zero entries. There is a MatOption MAT_IGNORE_ZERO_ENTRIES which, if you set it, prevents the matrix from creating a location for the zero values. If you set this first on the matrix then your two approaches will result in the same preconditioner and same iterative convergence. > > Barry > > > On Feb 1, 2018, at 2:45 AM, Adri?n Amor > wrote: > > > > Hi, > > > > First, I am a novice in the use of PETSC so apologies for having a newbie mistake, but maybe you can help me! I am solving a matrix of the kind: > > (Identity (50% dense)block > > (50% dense)block Identity) > > > > I have found a problem in the performance of the solver when I treat the diagonal blocks as sparse matrices in FORTRAN. In other words, I use the routine: > > MatCreateSeqAIJ > > To preallocate the matrix, and then I have tried: > > 1. To call MatSetValues for all the values of the identity matrices. I mean, if the identity matrix has a dimension of 22x22, I call MatSetValues 22*22 times. > > 2. To call MatSetValues only once per row. If the identity matrix has a dimension of 22x22, I call MatSetValues only 22 times. 
> > > > With the case 1, the iterative solver (I have tried with the default one and KSPBCGS) only takes one iteration to converge and it converges with a residual of 1E-14. However, with the case 2, the iterative solver takes, say, 9 iterations and converges with a residual of 1E-04. The matrices that are loaded into PETSC are exactly the same (I have written them to a file from the matrix which is solved, getting it with MatGetValues). > > > > What can be happening? I know that the fact that only takes one iteration is because the iterative solver is "lucky" and its first guess is the right one, but I don't understand the difference in the performance since the matrix is the same. I would like to use the case 2 since my matrices are quite large and it's much more efficient. > > > > Please help me! Thanks! > > > > Adrian. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aamor at pa.uc3m.es Fri Feb 2 02:27:55 2018 From: aamor at pa.uc3m.es (=?UTF-8?B?QWRyacOhbiBBbW9y?=) Date: Fri, 2 Feb 2018 09:27:55 +0100 Subject: [petsc-users] Problem when solving matrices with identity matrices as diagonal block domains In-Reply-To: <5E4265BD-D50E-4717-B08C-FA9686128CF6@gmail.com> References: <7CBAEA6B-2526-4A5E-88B9-417B2581A7EF@anl.gov> <5E4265BD-D50E-4717-B08C-FA9686128CF6@gmail.com> Message-ID: Thanks for the clarification Barry! And Stefano, thanks for your suggestion! 2018-02-01 20:08 GMT+01:00 Stefano Zampini : > Note that you don?t need to assemble the 2x2 block matrix, as the solution > can be computed via a Schur complement argument > > given the matrix [I B; C I] and rhs [f1,f2], you can solve S x_2 = f1 - B > f2, with S = I - CB, and then obtain x_1 = f1 - B x_2. > > On Feb 1, 2018, at 8:34 PM, Adri?n Amor wrote: > > Thanks, it's true that with MAT_IGNORE_ZERO_ENTRIES I get the same > performance. I assumed that explicitly calling to KSPSetType(petsc_ksp, > KSPBCGS, petsc_ierr) it wouldn't use the direct solver from PETSC. Thank > you for the detailed response, it was really convenient! > > 2018-02-01 16:20 GMT+01:00 Smith, Barry F. : > >> >> 1) By default if you call MatSetValues() with a zero element the sparse >> Mat will store the 0 into the matrix. If you do not call it with zero >> elements then it does not create a zero entry for that location. >> >> 2) Many of the preconditioners in PETSc are based on "nonzero entries" >> in sparse matrices (here a nonzero entry simply means any location in a >> matrix where a value is stored -- even if the value is zero). In particular >> ILU(0) does a LU on the "nonzero" structure of the matrix >> >> Hence in your case it is doing ILU(0) on a dense matrix since you set all >> the entries in the matrix and thus producing a direct solver. >> >> The lesson is you should only be setting true nonzero values into the >> matrix, not zero entries. There is a MatOption MAT_IGNORE_ZERO_ENTRIES >> which, if you set it, prevents the matrix from creating a location for the >> zero values. If you set this first on the matrix then your two approaches >> will result in the same preconditioner and same iterative convergence. >> >> Barry >> >> > On Feb 1, 2018, at 2:45 AM, Adri?n Amor wrote: >> > >> > Hi, >> > >> > First, I am a novice in the use of PETSC so apologies for having a >> newbie mistake, but maybe you can help me! 
I am solving a matrix of the >> kind: >> > (Identity (50% dense)block >> > (50% dense)block Identity) >> > >> > I have found a problem in the performance of the solver when I treat >> the diagonal blocks as sparse matrices in FORTRAN. In other words, I use >> the routine: >> > MatCreateSeqAIJ >> > To preallocate the matrix, and then I have tried: >> > 1. To call MatSetValues for all the values of the identity matrices. I >> mean, if the identity matrix has a dimension of 22x22, I call MatSetValues >> 22*22 times. >> > 2. To call MatSetValues only once per row. If the identity matrix has a >> dimension of 22x22, I call MatSetValues only 22 times. >> > >> > With the case 1, the iterative solver (I have tried with the default >> one and KSPBCGS) only takes one iteration to converge and it converges with >> a residual of 1E-14. However, with the case 2, the iterative solver takes, >> say, 9 iterations and converges with a residual of 1E-04. The matrices that >> are loaded into PETSC are exactly the same (I have written them to a file >> from the matrix which is solved, getting it with MatGetValues). >> > >> > What can be happening? I know that the fact that only takes one >> iteration is because the iterative solver is "lucky" and its first guess is >> the right one, but I don't understand the difference in the performance >> since the matrix is the same. I would like to use the case 2 since my >> matrices are quite large and it's much more efficient. >> > >> > Please help me! Thanks! >> > >> > Adrian. >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefano.zampini at gmail.com Fri Feb 2 12:27:48 2018 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Fri, 2 Feb 2018 21:27:48 +0300 Subject: [petsc-users] Error Using Matrix-Free Linear Operator with Matrix-Explicit Preconditioner In-Reply-To: References: Message-ID: Jared, the problem arises using MPIAIJ for the uniprocessor case and it is related with BJacobi, nothing to do with your code. If you set the type to MATAIJ and call MatSeqAIJSetPreallocation, it works fine. Barry, should we change the default pc type for MPIAIJ if the size is 1? MatGetFactor and MatSolverTypeGet cannot take this decision. Stefano > On Feb 1, 2018, at 8:48 PM, Jared Crean wrote: > > Hello, > > I am trying to use a matrix-free linear operator with a matrix-explicit preconditioner, but when I try to do the KSP solve it gives the error: > > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for possible LU and Cholesky solvers > [0]PETSC ERROR: Could not locate a solver package. Perhaps you must ./configure with --download- > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 > [0]PETSC ERROR: ./testcase on a arch-linux2-c-debug named jared-r15 by jared Thu Feb 1 12:40:57 2018 > [0]PETSC ERROR: Configure options > [0]PETSC ERROR: #1 MatGetFactor() line 4346 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/mat/interface/matrix.c > [0]PETSC ERROR: #2 PCSetUp_ILU() line 142 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/impls/factor/ilu/ilu.c > [0]PETSC ERROR: #3 PCSetUp() line 924 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #4 KSPSetUp() line 381 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #5 PCSetUpOnBlocks_BJacobi_Singleblock() line 618 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/impls/bjacobi/bjacobi.c > [0]PETSC ERROR: #6 PCSetUpOnBlocks() line 955 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/interface/precon.c > [0]PETSC ERROR: #7 KSPSetUpOnBlocks() line 213 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: #8 KSPSolve() line 613 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c > > > The code to reproduce this error is attached. The error is present on Petsc 3.7.6 and 3.8.3. I noticed two things while creating the test case: 1) using a jacobi preconditioner works (using block jacobi with ILU on each block does not), and 2) if I replace the shell matrix with the preconditioner matrix in KSPSetOperators(), there is no error (with the block jacobi ILU preconditioner). > > Is this a bug in Petsc or did I setup the preconditioner incorrectly? > > Jared Crean > > > From bsmith at mcs.anl.gov Fri Feb 2 12:56:45 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Fri, 2 Feb 2018 18:56:45 +0000 Subject: [petsc-users] Error Using Matrix-Free Linear Operator with Matrix-Explicit Preconditioner In-Reply-To: References: Message-ID: <0BD08EA0-548A-4F08-8CE4-BDAEC60DB82A@anl.gov> > On Feb 2, 2018, at 12:27 PM, Stefano Zampini wrote: > > Jared, > > the problem arises using MPIAIJ for the uniprocessor case and it is related with BJacobi, nothing to do with your code. > If you set the type to MATAIJ and call MatSeqAIJSetPreallocation, it works fine. > > Barry, should we change the default pc type for MPIAIJ if the size is 1? MatGetFactor and MatSolverTypeGet cannot take this decision. I think we need to improve the documentation and examples to emphasis the use of MATAIJ and not the particular Seq or MPI version so that people always think in terms of MATAIJ and so naturally write code that works for all cases. Barry We tried once a very long time ago to have MPIAIJ become SeqAIJ on a single process but it ended up causing its own headaches. > > Stefano > >> On Feb 1, 2018, at 8:48 PM, Jared Crean wrote: >> >> Hello, >> >> I am trying to use a matrix-free linear operator with a matrix-explicit preconditioner, but when I try to do the KSP solve it gives the error: >> >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for possible LU and Cholesky solvers >> [0]PETSC ERROR: Could not locate a solver package. Perhaps you must ./configure with --download- >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >> [0]PETSC ERROR: ./testcase on a arch-linux2-c-debug named jared-r15 by jared Thu Feb 1 12:40:57 2018 >> [0]PETSC ERROR: Configure options >> [0]PETSC ERROR: #1 MatGetFactor() line 4346 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/mat/interface/matrix.c >> [0]PETSC ERROR: #2 PCSetUp_ILU() line 142 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/impls/factor/ilu/ilu.c >> [0]PETSC ERROR: #3 PCSetUp() line 924 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: #4 KSPSetUp() line 381 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: #5 PCSetUpOnBlocks_BJacobi_Singleblock() line 618 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/impls/bjacobi/bjacobi.c >> [0]PETSC ERROR: #6 PCSetUpOnBlocks() line 955 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/interface/precon.c >> [0]PETSC ERROR: #7 KSPSetUpOnBlocks() line 213 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c >> [0]PETSC ERROR: #8 KSPSolve() line 613 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c >> >> >> The code to reproduce this error is attached. The error is present on Petsc 3.7.6 and 3.8.3. I noticed two things while creating the test case: 1) using a jacobi preconditioner works (using block jacobi with ILU on each block does not), and 2) if I replace the shell matrix with the preconditioner matrix in KSPSetOperators(), there is no error (with the block jacobi ILU preconditioner). >> >> Is this a bug in Petsc or did I setup the preconditioner incorrectly? >> >> Jared Crean >> >> >> > From gnw20 at cam.ac.uk Sun Feb 4 12:16:36 2018 From: gnw20 at cam.ac.uk (Garth N. Wells) Date: Sun, 4 Feb 2018 18:16:36 +0000 Subject: [petsc-users] MatZeroRowsColumnsLocal for MatNest Message-ID: Should MatZeroRowsColumnsLocal work for matrices of type MATNEST? Garth -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sun Feb 4 12:43:14 2018 From: jed at jedbrown.org (Jed Brown) Date: Sun, 04 Feb 2018 11:43:14 -0700 Subject: [petsc-users] MatZeroRowsColumnsLocal for MatNest In-Reply-To: References: Message-ID: <87k1vsoat9.fsf@jedbrown.org> It is not implemented and it's somewhat inconsistent with the MatNest philosophy of specifying everything in terms of local indices, though it could be implemented. "Garth N. Wells" writes: > Should MatZeroRowsColumnsLocal work for matrices of type MATNEST? > > Garth From gnw20 at cam.ac.uk Sun Feb 4 12:49:42 2018 From: gnw20 at cam.ac.uk (Garth N. Wells) Date: Sun, 4 Feb 2018 18:49:42 +0000 Subject: [petsc-users] MatZeroRowsColumnsLocal for MatNest In-Reply-To: <87k1vsoat9.fsf@jedbrown.org> References: <87k1vsoat9.fsf@jedbrown.org> Message-ID: As a note, with MatNest and MatZeroRowsColumns I get [0]PETSC ERROR: No support for this operation for this object type [0]PETSC ERROR: Mat type nest which is informative, but with MatNest and and MatZeroRowsColumnsLocal I get [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range Garth On 4 February 2018 at 18:43, Jed Brown wrote: > It is not implemented and it's somewhat inconsistent with the MatNest > philosophy of specifying everything in terms of local indices, though it > could be implemented. 
> > "Garth N. Wells" writes: > > > Should MatZeroRowsColumnsLocal work for matrices of type MATNEST? > > > > Garth > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Sun Feb 4 13:39:32 2018 From: jed at jedbrown.org (Jed Brown) Date: Sun, 04 Feb 2018 12:39:32 -0700 Subject: [petsc-users] MatZeroRowsColumnsLocal for MatNest In-Reply-To: References: <87k1vsoat9.fsf@jedbrown.org> Message-ID: <87fu6go87f.fsf@jedbrown.org> "Garth N. Wells" writes: > As a note, with MatNest and MatZeroRowsColumns I get > > [0]PETSC ERROR: No support for this operation for this object type > [0]PETSC ERROR: Mat type nest > > which is informative, but with MatNest and and MatZeroRowsColumnsLocal I get > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range Gross. I assume Barry wrote it to call the ops->zerorowscolumns directly to follow the same pattern as MatZeroRowsLocal (which first checks for a special ops pointer, and then still doesn't check if ops->zerorows is NULL). If you need this, we'll need to add a new ops pointer and implement it for MatNest. If you just need a better error message, we can just fix up the dispatch. > Garth > > On 4 February 2018 at 18:43, Jed Brown wrote: > >> It is not implemented and it's somewhat inconsistent with the MatNest >> philosophy of specifying everything in terms of local indices, though it >> could be implemented. >> >> "Garth N. Wells" writes: >> >> > Should MatZeroRowsColumnsLocal work for matrices of type MATNEST? >> > >> > Garth >> From stefano.zampini at gmail.com Sun Feb 4 13:43:17 2018 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Sun, 4 Feb 2018 22:43:17 +0300 Subject: [petsc-users] MatZeroRowsColumnsLocal for MatNest In-Reply-To: <87fu6go87f.fsf@jedbrown.org> References: <87k1vsoat9.fsf@jedbrown.org> <87fu6go87f.fsf@jedbrown.org> Message-ID: Also, it uses mat->cmap->mapping, which is wrong. It shall use mat->rmap->mapping, and check if mat->rmap->mapping == mat->cmap->mapping > On Feb 4, 2018, at 10:39 PM, Jed Brown wrote: > > "Garth N. Wells" writes: > >> As a note, with MatNest and MatZeroRowsColumns I get >> >> [0]PETSC ERROR: No support for this operation for this object type >> [0]PETSC ERROR: Mat type nest >> >> which is informative, but with MatNest and and MatZeroRowsColumnsLocal I get >> >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range > > Gross. I assume Barry wrote it to call the ops->zerorowscolumns > directly to follow the same pattern as MatZeroRowsLocal (which first > checks for a special ops pointer, and then still doesn't check if > ops->zerorows is NULL). If you need this, we'll need to add a new ops > pointer and implement it for MatNest. If you just need a better error > message, we can just fix up the dispatch. > >> Garth >> >> On 4 February 2018 at 18:43, Jed Brown wrote: >> >>> It is not implemented and it's somewhat inconsistent with the MatNest >>> philosophy of specifying everything in terms of local indices, though it >>> could be implemented. >>> >>> "Garth N. Wells" writes: >>> >>>> Should MatZeroRowsColumnsLocal work for matrices of type MATNEST? 
>>>> >>>> Garth >>> From jcrean01 at gmail.com Mon Feb 5 11:43:19 2018 From: jcrean01 at gmail.com (Jared Crean) Date: Mon, 5 Feb 2018 12:43:19 -0500 Subject: [petsc-users] Error Using Matrix-Free Linear Operator with Matrix-Explicit Preconditioner In-Reply-To: <0BD08EA0-548A-4F08-8CE4-BDAEC60DB82A@anl.gov> References: <0BD08EA0-548A-4F08-8CE4-BDAEC60DB82A@anl.gov> Message-ID: <604eb548-e9a6-08c4-0b37-3babf8011546@gmail.com> ??? Switching to MPIAIJ fixed the problem. ??? Thanks for looking into this, ??? ??? Jared Crean On 02/02/2018 01:56 PM, Smith, Barry F. wrote: > >> On Feb 2, 2018, at 12:27 PM, Stefano Zampini wrote: >> >> Jared, >> >> the problem arises using MPIAIJ for the uniprocessor case and it is related with BJacobi, nothing to do with your code. >> If you set the type to MATAIJ and call MatSeqAIJSetPreallocation, it works fine. >> >> Barry, should we change the default pc type for MPIAIJ if the size is 1? MatGetFactor and MatSolverTypeGet cannot take this decision. > I think we need to improve the documentation and examples to emphasis the use of MATAIJ and not the particular Seq or MPI version so that people always think in terms of MATAIJ and so naturally write code that works for all cases. > > Barry > > We tried once a very long time ago to have MPIAIJ become SeqAIJ on a single process but it ended up causing its own headaches. > > >> Stefano >> >>> On Feb 1, 2018, at 8:48 PM, Jared Crean wrote: >>> >>> Hello, >>> >>> I am trying to use a matrix-free linear operator with a matrix-explicit preconditioner, but when I try to do the KSP solve it gives the error: >>> >>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for possible LU and Cholesky solvers >>> [0]PETSC ERROR: Could not locate a solver package. Perhaps you must ./configure with --download- >>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. >>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >>> [0]PETSC ERROR: ./testcase on a arch-linux2-c-debug named jared-r15 by jared Thu Feb 1 12:40:57 2018 >>> [0]PETSC ERROR: Configure options >>> [0]PETSC ERROR: #1 MatGetFactor() line 4346 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/mat/interface/matrix.c >>> [0]PETSC ERROR: #2 PCSetUp_ILU() line 142 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/impls/factor/ilu/ilu.c >>> [0]PETSC ERROR: #3 PCSetUp() line 924 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/interface/precon.c >>> [0]PETSC ERROR: #4 KSPSetUp() line 381 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: #5 PCSetUpOnBlocks_BJacobi_Singleblock() line 618 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/impls/bjacobi/bjacobi.c >>> [0]PETSC ERROR: #6 PCSetUpOnBlocks() line 955 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/interface/precon.c >>> [0]PETSC ERROR: #7 KSPSetUpOnBlocks() line 213 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: #8 KSPSolve() line 613 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c >>> >>> >>> The code to reproduce this error is attached. The error is present on Petsc 3.7.6 and 3.8.3. 
I noticed two things while creating the test case: 1) using a jacobi preconditioner works (using block jacobi with ILU on each block does not), and 2) if I replace the shell matrix with the preconditioner matrix in KSPSetOperators(), there is no error (with the block jacobi ILU preconditioner). >>> >>> Is this a bug in Petsc or did I setup the preconditioner incorrectly? >>> >>> Jared Crean >>> >>> >>> From stefano.zampini at gmail.com Mon Feb 5 11:46:20 2018 From: stefano.zampini at gmail.com (Stefano Zampini) Date: Mon, 5 Feb 2018 20:46:20 +0300 Subject: [petsc-users] Error Using Matrix-Free Linear Operator with Matrix-Explicit Preconditioner In-Reply-To: <604eb548-e9a6-08c4-0b37-3babf8011546@gmail.com> References: <0BD08EA0-548A-4F08-8CE4-BDAEC60DB82A@anl.gov> <604eb548-e9a6-08c4-0b37-3babf8011546@gmail.com> Message-ID: You mean MATAIJ? Note that you can run the same code in sequential or in parallel with MATAIJ, i.e. MatCreate(comm,&A); MatSetType(A,MATAIJ); MatSeqAIJSetPreallocation(A,dnz,ddnz); // dummy call if comm.size > 1 MatMPIAIJSetPreallocation(A,dnz,ddnz,onz,oonz); // dummy call if comm.size == 1 or by using MatXAIJSetPreallocation > On Feb 5, 2018, at 8:43 PM, Jared Crean wrote: > > Switching to MPIAIJ fixed the problem. > > Thanks for looking into this, > Jared Crean > > On 02/02/2018 01:56 PM, Smith, Barry F. wrote: >> >>> On Feb 2, 2018, at 12:27 PM, Stefano Zampini wrote: >>> >>> Jared, >>> >>> the problem arises using MPIAIJ for the uniprocessor case and it is related with BJacobi, nothing to do with your code. >>> If you set the type to MATAIJ and call MatSeqAIJSetPreallocation, it works fine. >>> >>> Barry, should we change the default pc type for MPIAIJ if the size is 1? MatGetFactor and MatSolverTypeGet cannot take this decision. >> I think we need to improve the documentation and examples to emphasis the use of MATAIJ and not the particular Seq or MPI version so that people always think in terms of MATAIJ and so naturally write code that works for all cases. >> >> Barry >> >> We tried once a very long time ago to have MPIAIJ become SeqAIJ on a single process but it ended up causing its own headaches. >> >> >>> Stefano >>> >>>> On Feb 1, 2018, at 8:48 PM, Jared Crean wrote: >>>> >>>> Hello, >>>> >>>> I am trying to use a matrix-free linear operator with a matrix-explicit preconditioner, but when I try to do the KSP solve it gives the error: >>>> >>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for possible LU and Cholesky solvers >>>> [0]PETSC ERROR: Could not locate a solver package. Perhaps you must ./configure with --download- >>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
>>>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >>>> [0]PETSC ERROR: ./testcase on a arch-linux2-c-debug named jared-r15 by jared Thu Feb 1 12:40:57 2018 >>>> [0]PETSC ERROR: Configure options >>>> [0]PETSC ERROR: #1 MatGetFactor() line 4346 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/mat/interface/matrix.c >>>> [0]PETSC ERROR: #2 PCSetUp_ILU() line 142 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/impls/factor/ilu/ilu.c >>>> [0]PETSC ERROR: #3 PCSetUp() line 924 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/interface/precon.c >>>> [0]PETSC ERROR: #4 KSPSetUp() line 381 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c >>>> [0]PETSC ERROR: #5 PCSetUpOnBlocks_BJacobi_Singleblock() line 618 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/impls/bjacobi/bjacobi.c >>>> [0]PETSC ERROR: #6 PCSetUpOnBlocks() line 955 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/interface/precon.c >>>> [0]PETSC ERROR: #7 KSPSetUpOnBlocks() line 213 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c >>>> [0]PETSC ERROR: #8 KSPSolve() line 613 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c >>>> >>>> >>>> The code to reproduce this error is attached. The error is present on Petsc 3.7.6 and 3.8.3. I noticed two things while creating the test case: 1) using a jacobi preconditioner works (using block jacobi with ILU on each block does not), and 2) if I replace the shell matrix with the preconditioner matrix in KSPSetOperators(), there is no error (with the block jacobi ILU preconditioner). >>>> >>>> Is this a bug in Petsc or did I setup the preconditioner incorrectly? >>>> >>>> Jared Crean >>>> >>>> >>>> > > From jcrean01 at gmail.com Mon Feb 5 12:23:15 2018 From: jcrean01 at gmail.com (Jared Crean) Date: Mon, 5 Feb 2018 13:23:15 -0500 Subject: [petsc-users] Error Using Matrix-Free Linear Operator with Matrix-Explicit Preconditioner In-Reply-To: References: <0BD08EA0-548A-4F08-8CE4-BDAEC60DB82A@anl.gov> <604eb548-e9a6-08c4-0b37-3babf8011546@gmail.com> Message-ID: <84bce56e-59e8-6b4a-a2a8-ab5d12711f17@gmail.com> ??? Yes, I meant MATAIJ. Sorry for the typo. ??? Jared Crean On 02/05/2018 12:46 PM, Stefano Zampini wrote: > You mean MATAIJ? > > Note that you can run the same code in sequential or in parallel with MATAIJ, i.e. > > MatCreate(comm,&A); > MatSetType(A,MATAIJ); > MatSeqAIJSetPreallocation(A,dnz,ddnz); // dummy call if comm.size > 1 > MatMPIAIJSetPreallocation(A,dnz,ddnz,onz,oonz); // dummy call if comm.size == 1 > > or by using MatXAIJSetPreallocation > >> On Feb 5, 2018, at 8:43 PM, Jared Crean wrote: >> >> Switching to MPIAIJ fixed the problem. >> >> Thanks for looking into this, >> Jared Crean >> >> On 02/02/2018 01:56 PM, Smith, Barry F. wrote: >>>> On Feb 2, 2018, at 12:27 PM, Stefano Zampini wrote: >>>> >>>> Jared, >>>> >>>> the problem arises using MPIAIJ for the uniprocessor case and it is related with BJacobi, nothing to do with your code. >>>> If you set the type to MATAIJ and call MatSeqAIJSetPreallocation, it works fine. >>>> >>>> Barry, should we change the default pc type for MPIAIJ if the size is 1? MatGetFactor and MatSolverTypeGet cannot take this decision. >>> I think we need to improve the documentation and examples to emphasis the use of MATAIJ and not the particular Seq or MPI version so that people always think in terms of MATAIJ and so naturally write code that works for all cases. 
>>> >>> Barry >>> >>> We tried once a very long time ago to have MPIAIJ become SeqAIJ on a single process but it ended up causing its own headaches. >>> >>> >>>> Stefano >>>> >>>>> On Feb 1, 2018, at 8:48 PM, Jared Crean wrote: >>>>> >>>>> Hello, >>>>> >>>>> I am trying to use a matrix-free linear operator with a matrix-explicit preconditioner, but when I try to do the KSP solve it gives the error: >>>>> >>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/linearsolvertable.html for possible LU and Cholesky solvers >>>>> [0]PETSC ERROR: Could not locate a solver package. Perhaps you must ./configure with --download- >>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. >>>>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >>>>> [0]PETSC ERROR: ./testcase on a arch-linux2-c-debug named jared-r15 by jared Thu Feb 1 12:40:57 2018 >>>>> [0]PETSC ERROR: Configure options >>>>> [0]PETSC ERROR: #1 MatGetFactor() line 4346 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/mat/interface/matrix.c >>>>> [0]PETSC ERROR: #2 PCSetUp_ILU() line 142 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/impls/factor/ilu/ilu.c >>>>> [0]PETSC ERROR: #3 PCSetUp() line 924 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/interface/precon.c >>>>> [0]PETSC ERROR: #4 KSPSetUp() line 381 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c >>>>> [0]PETSC ERROR: #5 PCSetUpOnBlocks_BJacobi_Singleblock() line 618 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/impls/bjacobi/bjacobi.c >>>>> [0]PETSC ERROR: #6 PCSetUpOnBlocks() line 955 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/pc/interface/precon.c >>>>> [0]PETSC ERROR: #7 KSPSetUpOnBlocks() line 213 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c >>>>> [0]PETSC ERROR: #8 KSPSolve() line 613 in /home/jared/.julia/v0.4/PETSc2/deps/petsc-3.8.3/src/ksp/ksp/interface/itfunc.c >>>>> >>>>> >>>>> The code to reproduce this error is attached. The error is present on Petsc 3.7.6 and 3.8.3. I noticed two things while creating the test case: 1) using a jacobi preconditioner works (using block jacobi with ILU on each block does not), and 2) if I replace the shell matrix with the preconditioner matrix in KSPSetOperators(), there is no error (with the block jacobi ILU preconditioner). >>>>> >>>>> Is this a bug in Petsc or did I setup the preconditioner incorrectly? >>>>> >>>>> Jared Crean >>>>> >>>>> >>>>> >> From mhbaghaei at mail.sjtu.edu.cn Tue Feb 6 15:31:00 2018 From: mhbaghaei at mail.sjtu.edu.cn (Mohammad Hassan Baghaei) Date: Wed, 7 Feb 2018 05:31:00 +0800 (CST) Subject: [petsc-users] pseudo-transient ? Message-ID: <002601d39f91$c6e35d60$54aa1820$@mail.sjtu.edu.cn> Hi I wanted to use my solve the system of equation using pseudo-transient continuation. I found that we can refer to this version for driven cavity example. However, I could not find it within files (src/snes/examples/tutorials/ex27.c). Would it be possible for you to share this file? Thanks Amir -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Feb 6 15:53:10 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 6 Feb 2018 21:53:10 +0000 Subject: [petsc-users] pseudo-transient ? 
In-Reply-To: <002601d39f91$c6e35d60$54aa1820$@mail.sjtu.edu.cn> References: <002601d39f91$c6e35d60$54aa1820$@mail.sjtu.edu.cn> Message-ID: <55BDF21E-6A4D-4DB8-8915-591537396B9A@anl.gov> That example has not likely existed for many year. You can use src/ts/examples/tutorials/ex1.c as a starting point. > On Feb 6, 2018, at 3:31 PM, Mohammad Hassan Baghaei wrote: > > Hi > I wanted to use my solve the system of equation using pseudo-transient continuation. I found that we can refer to this version for driven cavity example. However, I could not find it within files (src/snes/examples/tutorials/ex27.c). Would it be possible for you to share this file? > Thanks > Amir From jed at jedbrown.org Tue Feb 6 16:40:08 2018 From: jed at jedbrown.org (Jed Brown) Date: Tue, 06 Feb 2018 15:40:08 -0700 Subject: [petsc-users] pseudo-transient ? In-Reply-To: <55BDF21E-6A4D-4DB8-8915-591537396B9A@anl.gov> References: <002601d39f91$c6e35d60$54aa1820$@mail.sjtu.edu.cn> <55BDF21E-6A4D-4DB8-8915-591537396B9A@anl.gov> Message-ID: <87372dn3nb.fsf@jedbrown.org> src/ts/examples/tutorials/ex26.c solves the same problem as that (since removed) src/snes/examples/tutorials/ex27.c referenced in the Coffey paper. Unfortunately, there is still a dangling reference from the TS example that needs to be fixed. Note that there is a slight mistake in the paper -- they describe the algorithm as linearly implicit but the numerical results were created using nonlinearly implicit Euler. The algorithm as described is more efficient and the default using TSPSEUDO in the TS example. If you want to reproduce the numerical results, you'll have to change the SNES to converge the nonlinear solve instead of just the first step. "Smith, Barry F." writes: > That example has not likely existed for many year. You can use src/ts/examples/tutorials/ex1.c as a starting point. > > > >> On Feb 6, 2018, at 3:31 PM, Mohammad Hassan Baghaei wrote: >> >> Hi >> I wanted to use my solve the system of equation using pseudo-transient continuation. I found that we can refer to this version for driven cavity example. However, I could not find it within files (src/snes/examples/tutorials/ex27.c). Would it be possible for you to share this file? >> Thanks >> Amir From jed at jedbrown.org Tue Feb 6 17:07:38 2018 From: jed at jedbrown.org (Jed Brown) Date: Tue, 06 Feb 2018 16:07:38 -0700 Subject: [petsc-users] pseudo-transient ? In-Reply-To: <002d01d39f9e$485bc7e0$d91357a0$@mail.sjtu.edu.cn> References: <002601d39f91$c6e35d60$54aa1820$@mail.sjtu.edu.cn> <55BDF21E-6A4D-4DB8-8915-591537396B9A@anl.gov> <87372dn3nb.fsf@jedbrown.org> <002d01d39f9e$485bc7e0$d91357a0$@mail.sjtu.edu.cn> Message-ID: <87vaf9lnt1.fsf@jedbrown.org> Please always use "reply-all" so that your messages go to the list. This is standard mailing list etiquette. It is important to preserve threading for people who find this discussion later and so that we do not waste our time re-answering the same questions that have already been answered in private side-conversations. You'll likely get an answer faster that way too. It doesn't affect the final result, just the algorithm to get there. Read this doc update. https://bitbucket.org/petsc/petsc/commits/b813ae3b0839847a3f9c27668d2815c374454af5 I have an email thread with the authors from 2010, but this was the conclusion. 
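For reference, a minimal sketch of driving a steady-state solve with TSPSEUDO; this is not ex26 itself, the residual below is a toy placeholder, and since no Jacobian routine is supplied one would either run it with something like -snes_mf or add a TSSetIJacobian() callback.

/* Sketch: pseudo-transient continuation with TSPSEUDO on a toy residual
 * F(t,U,Udot) = Udot + U, whose steady state is U = 0. */
#include <petscts.h>

static PetscErrorCode FormIFunction(TS ts, PetscReal t, Vec U, Vec Udot, Vec F, void *ctx)
{
  PetscErrorCode ierr;
  ierr = VecWAXPY(F, 1.0, Udot, U);CHKERRQ(ierr);   /* F = Udot + U (placeholder physics) */
  return 0;
}

int main(int argc, char **argv)
{
  TS             ts;
  Vec            U;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 10, &U);CHKERRQ(ierr);
  ierr = VecSet(U, 1.0);CHKERRQ(ierr);              /* initial guess */

  ierr = TSCreate(PETSC_COMM_WORLD, &ts);CHKERRQ(ierr);
  ierr = TSSetType(ts, TSPSEUDO);CHKERRQ(ierr);
  ierr = TSSetIFunction(ts, NULL, FormIFunction, NULL);CHKERRQ(ierr);
  ierr = TSSetTimeStep(ts, 1e-3);CHKERRQ(ierr);               /* initial pseudo-timestep */
  ierr = TSPseudoSetTimeStepIncrement(ts, 1.1);CHKERRQ(ierr); /* grow dt as the residual drops */
  ierr = TSSetMaxSteps(ts, 1000);CHKERRQ(ierr);
  ierr = TSSetExactFinalTime(ts, TS_EXACTFINALTIME_STEPOVER);CHKERRQ(ierr);
  ierr = TSSetFromOptions(ts);CHKERRQ(ierr);                  /* e.g. -snes_mf, -snes_max_it 1 vs. full solves */
  ierr = TSSolve(ts, U);CHKERRQ(ierr);

  ierr = TSDestroy(&ts);CHKERRQ(ierr);
  ierr = VecDestroy(&U);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}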
I don't have time to test reproducing the figures from the paper and the above-mentioned linearly implicit variant (the algorithm they intended to run), but if you're experimenting with it, we would certainly welcome a patch to that effect. Mohammad Hassan Baghaei writes: > Thank you very much for your help! I am currently reading the Coffey paper > on the pseudo-transient continuation method. Thanks for making clear for > me. In fact, I need that the final converged solution will be in fully > implicit form. So, I think I have to configure the SNES. > > -----Original Message----- > From: Jed Brown [mailto:jed at jedbrown.org] > Sent: Wednesday, February 7, 2018 6:40 AM > To: Smith, Barry F. ; Mohammad Hassan Baghaei > > Cc: petsc-users at mcs.anl.gov > Subject: Re: [petsc-users] pseudo-transient ? > > src/ts/examples/tutorials/ex26.c solves the same problem as that (since > removed) src/snes/examples/tutorials/ex27.c referenced in the Coffey paper. > > Unfortunately, there is still a dangling reference from the TS example that > needs to be fixed. > > Note that there is a slight mistake in the paper -- they describe the > algorithm as linearly implicit but the numerical results were created using > nonlinearly implicit Euler. The algorithm as described is more efficient > and the default using TSPSEUDO in the TS example. If you want to reproduce > the numerical results, you'll have to change the SNES to converge the > nonlinear solve instead of just the first step. > > "Smith, Barry F." writes: > >> That example has not likely existed for many year. You can use > src/ts/examples/tutorials/ex1.c as a starting point. >> >> >> >>> On Feb 6, 2018, at 3:31 PM, Mohammad Hassan Baghaei > wrote: >>> >>> Hi >>> I wanted to use my solve the system of equation using pseudo-transient > continuation. I found that we can refer to this version for driven cavity > example. However, I could not find it within files > (src/snes/examples/tutorials/ex27.c). Would it be possible for you to share > this file? >>> Thanks >>> Amir From hbuesing at eonerc.rwth-aachen.de Wed Feb 7 04:08:17 2018 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Wed, 7 Feb 2018 10:08:17 +0000 Subject: [petsc-users] Visualizing structured cell-centered data VTK Message-ID: Dear all, I have structured cell-centered data and would like to visualize this with Paraview. Up to now I use PetscViewerVTKOpen and VecView to write data in *.vts format. I would like to tell PETSc that the fieldtype is PETSC_VTK_CELL_FIELD. I have found PetscViewerVTKAddField. Is this the way to go? I was thinking maybe a DMDASetFieldType exists, but did not find any. If yes, what is the PetscViewerVTKWriteFunction I need to provide? Thank you! Henrik -- Dipl.-Math. Henrik B?sing Institute for Applied Geophysics and Geothermal Energy E.ON Energy Research Center RWTH Aachen University Mathieustr. 10 | Tel +49 (0)241 80 49907 52074 Aachen, Germany | Fax +49 (0)241 80 49889 http://www.eonerc.rwth-aachen.de/GGE hbuesing at eonerc.rwth-aachen.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbuesing at eonerc.rwth-aachen.de Wed Feb 7 05:23:58 2018 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Wed, 7 Feb 2018 11:23:58 +0000 Subject: [petsc-users] Parallel output in PETSc with pHDF5 and VTK Message-ID: Dear all, I would like to write HDF5 and VTK files in parallel. I found Vec example 19: "Parallel HDF5 Vec Viewing". 
But I do not understand how I tell PETSc to write a DMDA in parallel when doing VecView. At the moment everything is done on process 0. Can PETSc use parallel HDF5? Regarding VTK: Is it possible that every process dumps his part of the Vec in a separate file and let Paraview combine this. I think Firedrake does it in this way. Thank you! Henrik -- Dipl.-Math. Henrik B?sing Institute for Applied Geophysics and Geothermal Energy E.ON Energy Research Center RWTH Aachen University Mathieustr. 10 | Tel +49 (0)241 80 49907 52074 Aachen, Germany | Fax +49 (0)241 80 49889 http://www.eonerc.rwth-aachen.de/GGE hbuesing at eonerc.rwth-aachen.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Feb 7 05:54:55 2018 From: jed at jedbrown.org (Jed Brown) Date: Wed, 07 Feb 2018 04:54:55 -0700 Subject: [petsc-users] Visualizing structured cell-centered data VTK In-Reply-To: References: Message-ID: <87o9l1koa8.fsf@jedbrown.org> "Buesing, Henrik" writes: > Dear all, > > I have structured cell-centered data and would like to visualize this with Paraview. Up to now I use PetscViewerVTKOpen and VecView to write data in *.vts format. I would like to tell PETSc that the fieldtype is PETSC_VTK_CELL_FIELD. I have found PetscViewerVTKAddField. > > Is this the way to go? I was thinking maybe a DMDASetFieldType exists, but did not find any. If yes, what is the PetscViewerVTKWriteFunction I need to provide? DMDA does not explicitly support distinguishing between cell and point values. PetscViewerVTKAddField is a developer level routine and you would need to implement a function similar to DMDAVTKWriteAll_VTS (not at all trivial and you need to read the code because it is responsible for almost everything). From jed at jedbrown.org Wed Feb 7 06:09:31 2018 From: jed at jedbrown.org (Jed Brown) Date: Wed, 07 Feb 2018 05:09:31 -0700 Subject: [petsc-users] Parallel output in PETSc with pHDF5 and VTK In-Reply-To: References: Message-ID: <87inb9knlw.fsf@jedbrown.org> "Buesing, Henrik" writes: > Dear all, > > I would like to write HDF5 and VTK files in parallel. I found Vec example 19: "Parallel HDF5 Vec Viewing". But I do not understand how I tell PETSc to write a DMDA in parallel when doing VecView. At the moment everything is done on process 0. Can PETSc use parallel HDF5? Yeah, the implementation is in VecView_MPI_HDF5_DA and uses H5FD_MPIO_COLLECTIVE if supported. Did you build your HDF5 with MPI? > Regarding VTK: Is it possible that every process dumps his part of the Vec in a separate file and let Paraview combine this. I think Firedrake does it in this way. This creates a filesystem metadata problem and is not supported by PETSc's VTK viewers. It would be possible to write binary-appended files by scattering the metadata back to the processes which then open the output file and seek to the appropriate place (instead of serializing their data through rank 0). This would be worthwhile and not particularly hard to implement if you want to try. The problem is that VTK readers usually don't parallelize over such files so you've just moved the problem. Our usual advice is to use HDF5 with collective IO if running at large scale. 
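For reference, a minimal sketch of the DMDA-plus-HDF5 path discussed in this thread. It assumes PETSc was configured with an MPI-enabled HDF5 (e.g. --download-hdf5); the grid size, dataset name and file name are placeholders.

/* Sketch: write a DMDA global Vec to HDF5 via VecView.  With MPI-enabled HDF5
 * the DA-specific viewer writes each rank's portion in parallel (collectively
 * when supported). */
#include <petscdmda.h>
#include <petscviewerhdf5.h>

int main(int argc, char **argv)
{
  DM             da;
  Vec            u;
  PetscViewer    viewer;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_BOX, 16, 16, 16, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                      1, 1, NULL, NULL, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);
  ierr = DMCreateGlobalVector(da, &u);CHKERRQ(ierr);
  ierr = VecSet(u, 1.0);CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)u, "pressure");CHKERRQ(ierr); /* dataset name in the file */

  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "output.h5", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
  ierr = VecView(u, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

  ierr = VecDestroy(&u);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}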
From hbuesing at eonerc.rwth-aachen.de Wed Feb 7 06:50:45 2018 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Wed, 7 Feb 2018 12:50:45 +0000 Subject: [petsc-users] Parallel output in PETSc with pHDF5 and VTK In-Reply-To: <87inb9knlw.fsf@jedbrown.org> References: <87inb9knlw.fsf@jedbrown.org> Message-ID: >> Can PETSc use parallel HDF5? > Yeah, the implementation is in VecView_MPI_HDF5_DA and uses > H5FD_MPIO_COLLECTIVE if supported. Did you build your HDF5 with MPI? I just --download-hdf5. Do I need to do something else? > > > Regarding VTK: Is it possible that every process dumps his part of the Vec in > a separate file and let Paraview combine this. I think Firedrake does it in this > way. > > This creates a filesystem metadata problem and is not supported by PETSc's > VTK viewers. > ... > Our usual advice is to use HDF5 with > collective IO if running at large scale. I understand. I will stick to HDF5 then. Thank you! Henrik From jed at jedbrown.org Wed Feb 7 06:53:41 2018 From: jed at jedbrown.org (Jed Brown) Date: Wed, 07 Feb 2018 05:53:41 -0700 Subject: [petsc-users] Parallel output in PETSc with pHDF5 and VTK In-Reply-To: References: <87inb9knlw.fsf@jedbrown.org> Message-ID: <87fu6dklka.fsf@jedbrown.org> "Buesing, Henrik" writes: >>> Can PETSc use parallel HDF5? >> Yeah, the implementation is in VecView_MPI_HDF5_DA and uses >> H5FD_MPIO_COLLECTIVE if supported. Did you build your HDF5 with MPI? > > I just --download-hdf5. Do I need to do something else? That should configure it with whichever MPI PETSc is using. How are you concluding that it's only using rank 0? From hbuesing at eonerc.rwth-aachen.de Wed Feb 7 07:03:12 2018 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Wed, 7 Feb 2018 13:03:12 +0000 Subject: [petsc-users] Visualizing structured cell-centered data VTK In-Reply-To: <87o9l1koa8.fsf@jedbrown.org> References: <87o9l1koa8.fsf@jedbrown.org> Message-ID: > > I have structured cell-centered data and would like to visualize this with > Paraview. Up to now I use PetscViewerVTKOpen and VecView to write data in > *.vts format. I would like to tell PETSc that the fieldtype is > PETSC_VTK_CELL_FIELD. I have found PetscViewerVTKAddField. > > > > Is this the way to go? I was thinking maybe a DMDASetFieldType exists, but > did not find any. If yes, what is the PetscViewerVTKWriteFunction I need to > provide? > > DMDA does not explicitly support distinguishing between cell and point > values. PetscViewerVTKAddField is a developer level routine and you would > need to implement a function similar to DMDAVTKWriteAll_VTS (not at all > trivial and you need to read the code because it is responsible for almost > everything). I am looking at src/sys/classes/viewer/impls/vtk/vtkv.c. There is a reference to PETSC_VTK_POINT_FIELD vs. PETSC_VTK_CELL_FIELD. Judging from the output I get, I was assuming fieldtype=PETSC_VTK_POINT_FIELD. I would be totally fine with replacing POINT by CELL everywhere, since all my data is cell-centered. The only other references I find are in src/dm/impls/plex/plex.c and plexvtk.c and plexvtu.c. But do I go through plex when just using DMDACreate3d? Thank you! 
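For reference, the write path in question is roughly the sketch below (the file name "output.vts" and the field name are placeholders; X is the global Vec from the DMDA):

  PetscViewer viewer;
  ierr = PetscViewerVTKOpen(PETSC_COMM_WORLD,"output.vts",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)X,"temperature");CHKERRQ(ierr); /* array name in the .vts file */
  ierr = VecView(X,viewer);CHKERRQ(ierr); /* ends up in DMDAVTKWriteAll_VTS, written as <PointData> */
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
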
Henrik From hbuesing at eonerc.rwth-aachen.de Wed Feb 7 07:11:53 2018 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Wed, 7 Feb 2018 13:11:53 +0000 Subject: [petsc-users] Parallel output in PETSc with pHDF5 and VTK In-Reply-To: <87fu6dklka.fsf@jedbrown.org> References: <87inb9knlw.fsf@jedbrown.org> <87fu6dklka.fsf@jedbrown.org> Message-ID: > >>> Can PETSc use parallel HDF5? > >> Yeah, the implementation is in VecView_MPI_HDF5_DA and uses > >> H5FD_MPIO_COLLECTIVE if supported. Did you build your HDF5 with > MPI? > > > > I just --download-hdf5. Do I need to do something else? > > That should configure it with whichever MPI PETSc is using. How are you > concluding that it's only using rank 0? I have to ask back to the person working on the parallel IO. But if built and linked correctly, it should work in parallel with PETSCViewerHDF5Open and VecView? From jed at jedbrown.org Wed Feb 7 07:16:00 2018 From: jed at jedbrown.org (Jed Brown) Date: Wed, 07 Feb 2018 06:16:00 -0700 Subject: [petsc-users] Visualizing structured cell-centered data VTK In-Reply-To: References: <87o9l1koa8.fsf@jedbrown.org> Message-ID: <87a7wlkkj3.fsf@jedbrown.org> "Buesing, Henrik" writes: >> > I have structured cell-centered data and would like to visualize this with >> Paraview. Up to now I use PetscViewerVTKOpen and VecView to write data in >> *.vts format. I would like to tell PETSc that the fieldtype is >> PETSC_VTK_CELL_FIELD. I have found PetscViewerVTKAddField. >> > >> > Is this the way to go? I was thinking maybe a DMDASetFieldType exists, but >> did not find any. If yes, what is the PetscViewerVTKWriteFunction I need to >> provide? >> >> DMDA does not explicitly support distinguishing between cell and point >> values. PetscViewerVTKAddField is a developer level routine and you would >> need to implement a function similar to DMDAVTKWriteAll_VTS (not at all >> trivial and you need to read the code because it is responsible for almost >> everything). > > I am looking at src/sys/classes/viewer/impls/vtk/vtkv.c. There is a reference to PETSC_VTK_POINT_FIELD vs. PETSC_VTK_CELL_FIELD. Judging from the output I get, I was assuming fieldtype=PETSC_VTK_POINT_FIELD. I would be totally fine with replacing POINT by CELL everywhere, since all my data is cell-centered. I think coordinates need to be PointData, not coordinates of cell centroids. DMDA doesn't have that concept. (Maybe it should, but adding it is no small task and hacking the output is likely to create a lot of edge cases.) > The only other references I find are in src/dm/impls/plex/plex.c and plexvtk.c and plexvtu.c. But do I go through plex when just using DMDACreate3d? DMPlex is a different DM and can't be used to write structured output. From jed at jedbrown.org Wed Feb 7 07:16:27 2018 From: jed at jedbrown.org (Jed Brown) Date: Wed, 07 Feb 2018 06:16:27 -0700 Subject: [petsc-users] Parallel output in PETSc with pHDF5 and VTK In-Reply-To: References: <87inb9knlw.fsf@jedbrown.org> <87fu6dklka.fsf@jedbrown.org> Message-ID: <877erpkkic.fsf@jedbrown.org> "Buesing, Henrik" writes: >> >>> Can PETSc use parallel HDF5? >> >> Yeah, the implementation is in VecView_MPI_HDF5_DA and uses >> >> H5FD_MPIO_COLLECTIVE if supported. Did you build your HDF5 with >> MPI? >> > >> > I just --download-hdf5. Do I need to do something else? >> >> That should configure it with whichever MPI PETSc is using. How are you >> concluding that it's only using rank 0? > > I have to ask back to the person working on the parallel IO. 
But if built and linked correctly, it should work in parallel with PETSCViewerHDF5Open and VecView? Yes. From hbuesing at eonerc.rwth-aachen.de Wed Feb 7 07:19:15 2018 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Wed, 7 Feb 2018 13:19:15 +0000 Subject: [petsc-users] Parallel output in PETSc with pHDF5 and VTK In-Reply-To: <877erpkkic.fsf@jedbrown.org> References: <87inb9knlw.fsf@jedbrown.org> <87fu6dklka.fsf@jedbrown.org> <877erpkkic.fsf@jedbrown.org> Message-ID: > >> >>> Can PETSc use parallel HDF5? > >> >> Yeah, the implementation is in VecView_MPI_HDF5_DA and uses > >> >> H5FD_MPIO_COLLECTIVE if supported. Did you build your HDF5 with > >> MPI? > >> > > >> > I just --download-hdf5. Do I need to do something else? > >> > >> That should configure it with whichever MPI PETSc is using. How are > >> you concluding that it's only using rank 0? > > > > I have to ask back to the person working on the parallel IO. But if built and > linked correctly, it should work in parallel with PETSCViewerHDF5Open and > VecView? > > Yes. Awesome! Thank you! Henrik From hbuesing at eonerc.rwth-aachen.de Wed Feb 7 07:39:24 2018 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Wed, 7 Feb 2018 13:39:24 +0000 Subject: [petsc-users] Parallel output in PETSc with pHDF5 and VTK In-Reply-To: References: <87inb9knlw.fsf@jedbrown.org> <87fu6dklka.fsf@jedbrown.org> Message-ID: > > >>> Can PETSc use parallel HDF5? > > >> Yeah, the implementation is in VecView_MPI_HDF5_DA and uses > > >> H5FD_MPIO_COLLECTIVE if supported. Did you build your HDF5 with > > MPI? > > > > > > I just --download-hdf5. Do I need to do something else? > > > > That should configure it with whichever MPI PETSc is using. How are > > you concluding that it's only using rank 0? > I have to ask back to the person working on the parallel IO. Ok. The guy was judging from a code part, that was irrelevant and not from doing a profile. Solved! Thank you! Henrik From marco.cisternino at optimad.it Wed Feb 7 11:43:37 2018 From: marco.cisternino at optimad.it (Marco Cisternino) Date: Wed, 7 Feb 2018 17:43:37 +0000 Subject: [petsc-users] Elliptic operator with Neumann conditions Message-ID: Hi everybody, I would like to ask what solution is computed if I try to solve the linear system relative to the problem in subject without creating the null space. I tried with and without the call to MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace); and I get zero averaged solution with and the same solution plus a constant without. How does PETSc work in the second case? Does it check the matrix singularity? And is it able to create the null space with the constant automatically? Thanks. Marco Cisternino, PhD marco.cisternino at optimad.it _______________________________ OPTIMAD Engineering srl Via Giacinto Collegno 18, Torino, Italia. +3901119719782 www.optimad.it From knepley at gmail.com Wed Feb 7 11:57:56 2018 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 8 Feb 2018 04:57:56 +1100 Subject: [petsc-users] Elliptic operator with Neumann conditions In-Reply-To: References: Message-ID: On Thu, Feb 8, 2018 at 4:43 AM, Marco Cisternino < marco.cisternino at optimad.it> wrote: > Hi everybody, > I would like to ask what solution is computed if I try to solve the linear > system relative to the problem in subject without creating the null space. 
> I tried with and without the call to > MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace); > and I get zero averaged solution with and the same solution plus a > constant without. > How does PETSc work in the second case? > It depends on the Krylov method you use and the initial residual. We do not do anything special. Thanks, Matt > Does it check the matrix singularity? And is it able to create the null > space with the constant automatically? > Thanks. > > > Marco Cisternino, PhD > marco.cisternino at optimad.it > _______________________________ > OPTIMAD Engineering srl > Via Giacinto Collegno 18, Torino, Italia. > +3901119719782 > www.optimad.it > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From marco.cisternino at optimad.it Wed Feb 7 12:29:43 2018 From: marco.cisternino at optimad.it (Marco Cisternino) Date: Wed, 7 Feb 2018 18:29:43 +0000 Subject: [petsc-users] Elliptic operator with Neumann conditions In-Reply-To: References: , Message-ID: Thanks Matt, I'm using KSPFGMRES and I'm sorry but what do you mean with initial residual? I also force a non-zero initial guess. Thanks again Marco Cisternino, PhD marco.cisternino at optimad.it _______________________________ OPTIMAD Engineering srl Via Giacinto Collegno 18, Torino, Italia. +3901119719782 www.optimad.it ________________________________________ Da: Matthew Knepley Inviato: mercoled? 7 febbraio 2018 18:57:56 A: Marco Cisternino Cc: petsc-users Oggetto: Re: [petsc-users] Elliptic operator with Neumann conditions On Thu, Feb 8, 2018 at 4:43 AM, Marco Cisternino > wrote: Hi everybody, I would like to ask what solution is computed if I try to solve the linear system relative to the problem in subject without creating the null space. I tried with and without the call to MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace); and I get zero averaged solution with and the same solution plus a constant without. How does PETSc work in the second case? It depends on the Krylov method you use and the initial residual. We do not do anything special. Thanks, Matt Does it check the matrix singularity? And is it able to create the null space with the constant automatically? Thanks. Marco Cisternino, PhD marco.cisternino at optimad.it _______________________________ OPTIMAD Engineering srl Via Giacinto Collegno 18, Torino, Italia. +3901119719782 www.optimad.it -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ From knepley at gmail.com Wed Feb 7 12:38:08 2018 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 8 Feb 2018 05:38:08 +1100 Subject: [petsc-users] Elliptic operator with Neumann conditions In-Reply-To: References: Message-ID: On Thu, Feb 8, 2018 at 5:29 AM, Marco Cisternino < marco.cisternino at optimad.it> wrote: > Thanks Matt, > I'm using KSPFGMRES and I'm sorry but what do you mean with initial > residual? > I also force a non-zero initial guess. > If your initial residual has a component in the null space of the operator, it is likely to stay there. 
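For reference, the minimal pattern for attaching the constant null space is sketched below (untested; it assumes A, ksp, b and x already exist, and the MatNullSpaceRemove call on the right-hand side is optional but makes the system consistent):

  MatNullSpace nullspace;
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,NULL,&nullspace);CHKERRQ(ierr);
  ierr = MatSetNullSpace(A,nullspace);CHKERRQ(ierr);    /* KSP picks this up and projects it out during the solve */
  ierr = MatNullSpaceRemove(nullspace,b);CHKERRQ(ierr); /* drop the inconsistent part of the RHS */
  ierr = MatNullSpaceDestroy(&nullspace);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);

With this in place the solution returned by KSPSolve should be the zero-mean one you already see when you create the null space explicitly.
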
Matt > Thanks again > > > Marco Cisternino, PhD > marco.cisternino at optimad.it > _______________________________ > OPTIMAD Engineering srl > Via Giacinto Collegno 18, Torino, Italia. > +3901119719782 > www.optimad.it > > > ________________________________________ > Da: Matthew Knepley > Inviato: mercoled? 7 febbraio 2018 18:57:56 > A: Marco Cisternino > Cc: petsc-users > Oggetto: Re: [petsc-users] Elliptic operator with Neumann conditions > > On Thu, Feb 8, 2018 at 4:43 AM, Marco Cisternino < > marco.cisternino at optimad.it> wrote: > Hi everybody, > I would like to ask what solution is computed if I try to solve the linear > system relative to the problem in subject without creating the null space. > I tried with and without the call to > MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace); > and I get zero averaged solution with and the same solution plus a > constant without. > How does PETSc work in the second case? > > It depends on the Krylov method you use and the initial residual. We do > not do anything special. > > Thanks, > > Matt > > Does it check the matrix singularity? And is it able to create the null > space with the constant automatically? > Thanks. > > > Marco Cisternino, PhD > marco.cisternino at optimad.it > _______________________________ > OPTIMAD Engineering srl > Via Giacinto Collegno 18, Torino, Italia. > +3901119719782 > www.optimad.it > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From marco.cisternino at optimad.it Wed Feb 7 12:45:11 2018 From: marco.cisternino at optimad.it (Marco Cisternino) Date: Wed, 7 Feb 2018 18:45:11 +0000 Subject: [petsc-users] Elliptic operator with Neumann conditions In-Reply-To: References: , Message-ID: I'm sorry Matt but I cannot understand what flexible gmres computes when no null space is created. Could you give me some hints, please? Even in very simple cases... Thanks Marco Cisternino, PhD marco.cisternino at optimad.it _______________________________ OPTIMAD Engineering srl Via Giacinto Collegno 18, Torino, Italia. +3901119719782 www.optimad.it ________________________________________ Da: Matthew Knepley Inviato: mercoled? 7 febbraio 2018 19:38:08 A: Marco Cisternino Cc: petsc-users Oggetto: Re: [petsc-users] Elliptic operator with Neumann conditions On Thu, Feb 8, 2018 at 5:29 AM, Marco Cisternino > wrote: Thanks Matt, I'm using KSPFGMRES and I'm sorry but what do you mean with initial residual? I also force a non-zero initial guess. If your initial residual has a component in the null space of the operator, it is likely to stay there. Matt Thanks again Marco Cisternino, PhD marco.cisternino at optimad.it _______________________________ OPTIMAD Engineering srl Via Giacinto Collegno 18, Torino, Italia. +3901119719782 www.optimad.it ________________________________________ Da: Matthew Knepley > Inviato: mercoled? 
7 febbraio 2018 18:57:56 A: Marco Cisternino Cc: petsc-users Oggetto: Re: [petsc-users] Elliptic operator with Neumann conditions On Thu, Feb 8, 2018 at 4:43 AM, Marco Cisternino >> wrote: Hi everybody, I would like to ask what solution is computed if I try to solve the linear system relative to the problem in subject without creating the null space. I tried with and without the call to MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace); and I get zero averaged solution with and the same solution plus a constant without. How does PETSc work in the second case? It depends on the Krylov method you use and the initial residual. We do not do anything special. Thanks, Matt Does it check the matrix singularity? And is it able to create the null space with the constant automatically? Thanks. Marco Cisternino, PhD marco.cisternino at optimad.it> _______________________________ OPTIMAD Engineering srl Via Giacinto Collegno 18, Torino, Italia. +3901119719782 www.optimad.it -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ From knepley at gmail.com Wed Feb 7 12:58:55 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 7 Feb 2018 13:58:55 -0500 Subject: [petsc-users] Elliptic operator with Neumann conditions In-Reply-To: References: Message-ID: On Wed, Feb 7, 2018 at 1:45 PM, Marco Cisternino < marco.cisternino at optimad.it> wrote: > I'm sorry Matt but I cannot understand what flexible gmres computes when > no null space is created. > Could you give me some hints, please? Even in very simple cases... > I don't think I can clear it up any further. Thanks, Matt > Thanks > > > Marco Cisternino, PhD > marco.cisternino at optimad.it > _______________________________ > OPTIMAD Engineering srl > Via Giacinto Collegno 18, Torino, Italia. > +3901119719782 > www.optimad.it > > > ________________________________________ > Da: Matthew Knepley > Inviato: mercoled? 7 febbraio 2018 19:38:08 > A: Marco Cisternino > Cc: petsc-users > Oggetto: Re: [petsc-users] Elliptic operator with Neumann conditions > > On Thu, Feb 8, 2018 at 5:29 AM, Marco Cisternino < > marco.cisternino at optimad.it> wrote: > Thanks Matt, > I'm using KSPFGMRES and I'm sorry but what do you mean with initial > residual? > I also force a non-zero initial guess. > > If your initial residual has a component in the null space of the > operator, it is likely to stay there. > > Matt > > Thanks again > > > Marco Cisternino, PhD > marco.cisternino at optimad.it > _______________________________ > OPTIMAD Engineering srl > Via Giacinto Collegno 18, Torino, Italia. > +3901119719782 > www.optimad.it > > > ________________________________________ > Da: Matthew Knepley > > Inviato: mercoled? 7 febbraio 2018 18:57:56 > A: Marco Cisternino > Cc: petsc-users > Oggetto: Re: [petsc-users] Elliptic operator with Neumann conditions > > On Thu, Feb 8, 2018 at 4:43 AM, Marco Cisternino < > marco.cisternino at optimad.it marco.cisternino at optimad.it>> wrote: > Hi everybody, > I would like to ask what solution is computed if I try to solve the linear > system relative to the problem in subject without creating the null space. 
> I tried with and without the call to > MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace); > and I get zero averaged solution with and the same solution plus a > constant without. > How does PETSc work in the second case? > > It depends on the Krylov method you use and the initial residual. We do > not do anything special. > > Thanks, > > Matt > > Does it check the matrix singularity? And is it able to create the null > space with the constant automatically? > Thanks. > > > Marco Cisternino, PhD > marco.cisternino at optimad.it marco.cisternino at optimad.it> > _______________________________ > OPTIMAD Engineering srl > Via Giacinto Collegno 18, Torino, Italia. > +3901119719782 > www.optimad.it > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From paulasan at gmail.com Wed Feb 7 13:08:46 2018 From: paulasan at gmail.com (Paula Sanematsu) Date: Wed, 7 Feb 2018 14:08:46 -0500 Subject: [petsc-users] Meaning of the order of PETSC_VIEWER_ASCII_INDEX format in PETSc Message-ID: I am using PETSc 3.7.6 and Fortran. I am trying to output a PETSc vector that contains the solution of a linear system. I am using VecView with the PETSC_VIEWER_ASCII_INDEX format as follows: call PetscViewerASCIIOpen(PETSC_COMM_WORLD,"output.dat",viewer,ierr) call PetscViewerPushFormat(viewer,PETSC_VIEWER_ASCII_INDEX,ierr) call VecView(myVec,viewer,ierr) When I run with 4 processors, my output file looks like: Vec Object: 4 MPI processes type: mpi Process [0] 0: 30.7501 1: 164.001 2: 41.0001 3: 164.001 . . . Process [1] 4988: 60.1443 4989: 157.257 4990: 271.518 4991: 366.669 . . . Process [2] 9977: 114.948 9978: -77.2896 9979: 823.142 9980: -1096.19 . . . Process [3] 14916: 0. 14917: 4.4056 14918: 2.08151 14919: -0.110862 . . . 19843: 0. My question is: each processor outputs the part of the vector that it owns? Or does PETSc collects each processor's parts and then processor 0 sequentially outputs the 1st quarter of the global vector, processor 1 outputs the 2nd quarter of the global vector, processor 2 outputs the 3rd quarter of the global vector, and so on? Or, does PETSc do something else? Thank you! Paula -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 7 13:16:46 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 7 Feb 2018 14:16:46 -0500 Subject: [petsc-users] Meaning of the order of PETSC_VIEWER_ASCII_INDEX format in PETSc In-Reply-To: References: Message-ID: On Wed, Feb 7, 2018 at 2:08 PM, Paula Sanematsu wrote: > I am using PETSc 3.7.6 and Fortran. > > I am trying to output a PETSc vector that contains the solution of a > linear system. 
I am using VecView with the PETSC_VIEWER_ASCII_INDEX format > as follows: > > call PetscViewerASCIIOpen(PETSC_COMM_WORLD,"output.dat",viewer,ierr) > call PetscViewerPushFormat(viewer,PETSC_VIEWER_ASCII_INDEX,ierr) > call VecView(myVec,viewer,ierr) > > > When I run with 4 processors, my output file looks like: > > Vec Object: 4 MPI processes > type: mpi > Process [0] > 0: 30.7501 > 1: 164.001 > 2: 41.0001 > 3: 164.001 > . > . > . > Process [1] > 4988: 60.1443 > 4989: 157.257 > 4990: 271.518 > 4991: 366.669 > . > . > . > Process [2] > 9977: 114.948 > 9978: -77.2896 > 9979: 823.142 > 9980: -1096.19 > . > . > . > Process [3] > 14916: 0. > 14917: 4.4056 > 14918: 2.08151 > 14919: -0.110862 > . > . > . > 19843: 0. > > > My question is: each processor outputs the part of the vector that it > owns? Or does PETSc collects each processor's parts and then processor 0 > sequentially outputs the 1st quarter of the global vector, processor 1 > outputs the 2nd quarter of the global vector, processor 2 outputs the 3rd > quarter of the global vector, and so on? Or, does PETSc do something else? > There is no difference between those two. The process portions are contiguous. Where are you going to read this in? It seems like there must be a more appropriate format for you. Thanks, Matt > Thank you! > > Paula > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 7 13:24:22 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 7 Feb 2018 19:24:22 +0000 Subject: [petsc-users] Elliptic operator with Neumann conditions In-Reply-To: References: Message-ID: <0B0AEC45-5B43-4D7B-BA73-AA9BE68B68B7@anl.gov> A square matrix with a null space results in an underdetermined system, that is a solution with more than one solution. The solutions can be written as x + alpha_1 v_1 + ... alpha_n v_n where the v_n form an orthonormal basis for the null space and x is orthogonal to the null space. When you provide the null space KSP Krylov methods find the norm minimizing solution (x) , that is it finds the x with the smallest norm that satisfies the system. This is exactly the same as saying you take any solution of the system and remove all the components in the directions of the null space. If you do not provide the null space then the Krylov space may find you a solution that is not the norm minimizing solution, thus that solution has a component of the null space within it. What component of the null space in the solution depends on what you use for an initial guess and right hand side. When you have a preconditioner then things can get trickier because the preconditioner can (unless you remove them) components in the direction of the null space. These components can get amplified with each iteration of the Krylov method so it looks like the Krylov method is not converging since the norm of the solution is getting larger and larger (these larger components are in the null space.) This is why one should always provide the null space when solving singular systems with singular matrices. Barry > On Feb 7, 2018, at 11:43 AM, Marco Cisternino wrote: > > Hi everybody, > I would like to ask what solution is computed if I try to solve the linear system relative to the problem in subject without creating the null space. 
> I tried with and without the call to > MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace); > and I get zero averaged solution with and the same solution plus a constant without. > How does PETSc work in the second case? > Does it check the matrix singularity? And is it able to create the null space with the constant automatically? > Thanks. > > > Marco Cisternino, PhD > marco.cisternino at optimad.it > _______________________________ > OPTIMAD Engineering srl > Via Giacinto Collegno 18, Torino, Italia. > +3901119719782 > www.optimad.it > From bsmith at mcs.anl.gov Wed Feb 7 13:29:51 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 7 Feb 2018 19:29:51 +0000 Subject: [petsc-users] Meaning of the order of PETSC_VIEWER_ASCII_INDEX format in PETSc In-Reply-To: References: Message-ID: <644B410E-24DA-4471-9ED7-223A194DF6F0@anl.gov> Here is the code VecView_MPI_ASCII() if (format == PETSC_VIEWER_ASCII_INDEX) { ierr = PetscViewerASCIIPrintf(viewer,"%D: ",cnt++);CHKERRQ(ierr); } #if defined(PETSC_USE_COMPLEX) if (PetscImaginaryPart(xarray[i]) > 0.0) { ierr = PetscViewerASCIIPrintf(viewer,"%g + %g i\n",(double)PetscRealPart(xarray[i]),(double)PetscImaginaryPart(xarray[i]));CHKERRQ(ierr); } else if (PetscImaginaryPart(xarray[i]) < 0.0) { ierr = PetscViewerASCIIPrintf(viewer,"%g - %g i\n",(double)PetscRealPart(xarray[i]),-(double)PetscImaginaryPart(xarray[i]));CHKERRQ(ierr); } else { ierr = PetscViewerASCIIPrintf(viewer,"%g\n",(double)PetscRealPart(xarray[i]));CHKERRQ(ierr); } #else ierr = PetscViewerASCIIPrintf(viewer,"%g\n",(double)xarray[i]);CHKERRQ(ierr); #endif } /* receive and print messages */ for (j=1; j 0.0) { ierr = PetscViewerASCIIPrintf(viewer,"%g + %g i\n",(double)PetscRealPart(values[i]),(double)PetscImaginaryPart(values[i]));CHKERRQ(ierr); } else if (PetscImaginaryPart(values[i]) < 0.0) { ierr = PetscViewerASCIIPrintf(viewer,"%g - %g i\n",(double)PetscRealPart(values[i]),-(double)PetscImaginaryPart(values[i]));CHKERRQ(ierr); } else { ierr = PetscViewerASCIIPrintf(viewer,"%g\n",(double)PetscRealPart(values[i]));CHKERRQ(ierr); } #else ierr = PetscViewerASCIIPrintf(viewer,"%g\n",(double)values[i]);CHKERRQ(ierr); #endif } } So each process ships its values to process zero who prints them in order. Note that printing out vectors and matrices as ASCII is just for toys and to help debug. For large runs one should always use some variant of binary output. Barry > On Feb 7, 2018, at 1:08 PM, Paula Sanematsu wrote: > > I am using PETSc 3.7.6 and Fortran. > > I am trying to output a PETSc vector that contains the solution of a linear system. I am using VecView with the PETSC_VIEWER_ASCII_INDEX format as follows: > > call PetscViewerASCIIOpen(PETSC_COMM_WORLD,"output.dat",viewer,ierr) > call PetscViewerPushFormat(viewer,PETSC_VIEWER_ASCII_INDEX,ierr) > call VecView(myVec,viewer,ierr) > > When I run with 4 processors, my output file looks like: > > Vec Object: 4 MPI processes > type: mpi > Process [0] > 0: 30.7501 > 1: 164.001 > 2: 41.0001 > 3: 164.001 > . > . > . > Process [1] > 4988: 60.1443 > 4989: 157.257 > 4990: 271.518 > 4991: 366.669 > . > . > . > Process [2] > 9977: 114.948 > 9978: -77.2896 > 9979: 823.142 > 9980: -1096.19 > . > . > . > Process [3] > 14916: 0. > 14917: 4.4056 > 14918: 2.08151 > 14919: -0.110862 > . > . > . > 19843: 0. > > My question is: each processor outputs the part of the vector that it owns? 
Or does PETSc collects each processor's parts and then processor 0 sequentially outputs the 1st quarter of the global vector, processor 1 outputs the 2nd quarter of the global vector, processor 2 outputs the 3rd quarter of the global vector, and so on? Or, does PETSc do something else? > > Thank you! > > Paula > From paulasan at gmail.com Wed Feb 7 13:33:50 2018 From: paulasan at gmail.com (Paula Sanematsu) Date: Wed, 7 Feb 2018 14:33:50 -0500 Subject: [petsc-users] Meaning of the order of PETSC_VIEWER_ASCII_INDEX format in PETSc In-Reply-To: <644B410E-24DA-4471-9ED7-223A194DF6F0@anl.gov> References: <644B410E-24DA-4471-9ED7-223A194DF6F0@anl.gov> Message-ID: I see. Thanks for the explanation. Yes, that's the stage I'm in now. I am developing the code and only testing+validating small samples, so I am trying to visualize the results in Matlab. But in the future, Matlab will probably not be feasible so I will probably need to use the binary format and visualize in Avizo. On Wed, Feb 7, 2018 at 2:29 PM, Smith, Barry F. wrote: > > Here is the code VecView_MPI_ASCII() > > if (format == PETSC_VIEWER_ASCII_INDEX) { > ierr = PetscViewerASCIIPrintf(viewer,"%D: > ",cnt++);CHKERRQ(ierr); > } > #if defined(PETSC_USE_COMPLEX) > if (PetscImaginaryPart(xarray[i]) > 0.0) { > ierr = PetscViewerASCIIPrintf(viewer,"%g + %g > i\n",(double)PetscRealPart(xarray[i]),(double) > PetscImaginaryPart(xarray[i]));CHKERRQ(ierr); > } else if (PetscImaginaryPart(xarray[i]) < 0.0) { > ierr = PetscViewerASCIIPrintf(viewer,"%g - %g > i\n",(double)PetscRealPart(xarray[i]),-(double) > PetscImaginaryPart(xarray[i]));CHKERRQ(ierr); > } else { > ierr = PetscViewerASCIIPrintf(viewer, > "%g\n",(double)PetscRealPart(xarray[i]));CHKERRQ(ierr); > } > #else > ierr = PetscViewerASCIIPrintf(viewer,"%g\n",(double)xarray[i]); > CHKERRQ(ierr); > #endif > } > /* receive and print messages */ > for (j=1; j ierr = MPI_Recv(values,(PetscMPIInt)len,MPIU_SCALAR,j,tag, > PetscObjectComm((PetscObject)xin),&status);CHKERRQ(ierr); > ierr = MPI_Get_count(&status,MPIU_SCALAR,&n);CHKERRQ(ierr); > if (format != PETSC_VIEWER_ASCII_COMMON) { > ierr = PetscViewerASCIIPrintf(viewer,"Process > [%d]\n",j);CHKERRQ(ierr); > } > for (i=0; i if (format == PETSC_VIEWER_ASCII_INDEX) { > ierr = PetscViewerASCIIPrintf(viewer,"%D: > ",cnt++);CHKERRQ(ierr); > } > #if defined(PETSC_USE_COMPLEX) > if (PetscImaginaryPart(values[i]) > 0.0) { > ierr = PetscViewerASCIIPrintf(viewer,"%g + %g > i\n",(double)PetscRealPart(values[i]),(double) > PetscImaginaryPart(values[i]));CHKERRQ(ierr); > } else if (PetscImaginaryPart(values[i]) < 0.0) { > ierr = PetscViewerASCIIPrintf(viewer,"%g - %g > i\n",(double)PetscRealPart(values[i]),-(double) > PetscImaginaryPart(values[i]));CHKERRQ(ierr); > } else { > ierr = PetscViewerASCIIPrintf(viewer, > "%g\n",(double)PetscRealPart(values[i]));CHKERRQ(ierr); > } > #else > ierr = PetscViewerASCIIPrintf(viewer,"%g\n",(double)values[i]); > CHKERRQ(ierr); > #endif > } > } > > So each process ships its values to process zero who prints them in order. > > Note that printing out vectors and matrices as ASCII is just for toys and > to help debug. For large runs one should always use some variant of binary > output. > > Barry > > > > On Feb 7, 2018, at 1:08 PM, Paula Sanematsu wrote: > > > > I am using PETSc 3.7.6 and Fortran. > > > > I am trying to output a PETSc vector that contains the solution of a > linear system. 
I am using VecView with the PETSC_VIEWER_ASCII_INDEX format > as follows: > > > > call PetscViewerASCIIOpen(PETSC_COMM_WORLD,"output.dat",viewer,ierr) > > call PetscViewerPushFormat(viewer,PETSC_VIEWER_ASCII_INDEX,ierr) > > call VecView(myVec,viewer,ierr) > > > > When I run with 4 processors, my output file looks like: > > > > Vec Object: 4 MPI processes > > type: mpi > > Process [0] > > 0: 30.7501 > > 1: 164.001 > > 2: 41.0001 > > 3: 164.001 > > . > > . > > . > > Process [1] > > 4988: 60.1443 > > 4989: 157.257 > > 4990: 271.518 > > 4991: 366.669 > > . > > . > > . > > Process [2] > > 9977: 114.948 > > 9978: -77.2896 > > 9979: 823.142 > > 9980: -1096.19 > > . > > . > > . > > Process [3] > > 14916: 0. > > 14917: 4.4056 > > 14918: 2.08151 > > 14919: -0.110862 > > . > > . > > . > > 19843: 0. > > > > My question is: each processor outputs the part of the vector that it > owns? Or does PETSc collects each processor's parts and then processor 0 > sequentially outputs the 1st quarter of the global vector, processor 1 > outputs the 2nd quarter of the global vector, processor 2 outputs the 3rd > quarter of the global vector, and so on? Or, does PETSc do something else? > > > > Thank you! > > > > Paula > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbuesing at eonerc.rwth-aachen.de Wed Feb 7 15:07:15 2018 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Wed, 7 Feb 2018 21:07:15 +0000 Subject: [petsc-users] Visualizing structured cell-centered data VTK In-Reply-To: <87a7wlkkj3.fsf@jedbrown.org> References: <87o9l1koa8.fsf@jedbrown.org> <87a7wlkkj3.fsf@jedbrown.org> Message-ID: > >> > I have structured cell-centered data and would like to visualize > >> > this with > >> Paraview. Up to now I use PetscViewerVTKOpen and VecView to write > >> data in *.vts format. I would like to tell PETSc that the fieldtype > >> is PETSC_VTK_CELL_FIELD. I have found PetscViewerVTKAddField. > >> > > >> > Is this the way to go? I was thinking maybe a DMDASetFieldType > >> > exists, but > >> did not find any. If yes, what is the PetscViewerVTKWriteFunction I > >> need to provide? > >> > >> DMDA does not explicitly support distinguishing between cell and > >> point values. PetscViewerVTKAddField is a developer level routine > >> and you would need to implement a function similar to > >> DMDAVTKWriteAll_VTS (not at all trivial and you need to read the code > >> because it is responsible for almost everything). > > > > I am looking at src/sys/classes/viewer/impls/vtk/vtkv.c. There is a > reference to PETSC_VTK_POINT_FIELD vs. PETSC_VTK_CELL_FIELD. Judging > from the output I get, I was assuming fieldtype=PETSC_VTK_POINT_FIELD. I > would be totally fine with replacing POINT by CELL everywhere, since all my > data is cell-centered. > > I think coordinates need to be PointData, not coordinates of cell centroids. > DMDA doesn't have that concept. (Maybe it should, but adding it is no small > task and hacking the output is likely to create a lot of edge cases.) > I had a look at DMDAVTKWriteAll_VTS. You are totally right that this is not generic. www.vtk.org/VTK/img/file-formats.pdf describes the StructuredGrid format. "Points" contain only one DataArray specifying the coordinates whereas Cells contain three DataArrays with connectivity, offsets and types. What if I use DMDAVTKWriteAll_VTR and write a RectliniearGrid? Then I could just replace PointData with CellData and "hack" only the extent which should leave no edge cases... 
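In user code that plan would look roughly like the untested sketch below; it assumes a locally patched DMDAVTKWriteAll_VTR that writes the field under <CellData> and adjusts the extents as described above.

  PetscViewer viewer;
  ierr = PetscViewerVTKOpen(PETSC_COMM_WORLD,"output.vtr",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
  ierr = VecView(X,viewer);CHKERRQ(ierr); /* a .vtr file name should dispatch to DMDAVTKWriteAll_VTR */
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
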
From marco.cisternino at optimad.it Wed Feb 7 15:13:55 2018 From: marco.cisternino at optimad.it (Marco Cisternino) Date: Wed, 7 Feb 2018 21:13:55 +0000 Subject: [petsc-users] Elliptic operator with Neumann conditions In-Reply-To: <0B0AEC45-5B43-4D7B-BA73-AA9BE68B68B7@anl.gov> References: , <0B0AEC45-5B43-4D7B-BA73-AA9BE68B68B7@anl.gov> Message-ID: Barry, thanks a lot! Exactly what I wanted to understand and clearly explained. Again thank you very much. Marco Ottieni Outlook per Android ________________________________ From: Smith, Barry F. Sent: Wednesday, February 7, 2018 8:24:22 PM To: Marco Cisternino Cc: petsc-users Subject: Re: [petsc-users] Elliptic operator with Neumann conditions A square matrix with a null space results in an underdetermined system, that is a solution with more than one solution. The solutions can be written as x + alpha_1 v_1 + ... alpha_n v_n where the v_n form an orthonormal basis for the null space and x is orthogonal to the null space. When you provide the null space KSP Krylov methods find the norm minimizing solution (x) , that is it finds the x with the smallest norm that satisfies the system. This is exactly the same as saying you take any solution of the system and remove all the components in the directions of the null space. If you do not provide the null space then the Krylov space may find you a solution that is not the norm minimizing solution, thus that solution has a component of the null space within it. What component of the null space in the solution depends on what you use for an initial guess and right hand side. When you have a preconditioner then things can get trickier because the preconditioner can (unless you remove them) components in the direction of the null space. These components can get amplified with each iteration of the Krylov method so it looks like the Krylov method is not converging since the norm of the solution is getting larger and larger (these larger components are in the null space.) This is why one should always provide the null space when solving singular systems with singular matrices. Barry > On Feb 7, 2018, at 11:43 AM, Marco Cisternino wrote: > > Hi everybody, > I would like to ask what solution is computed if I try to solve the linear system relative to the problem in subject without creating the null space. > I tried with and without the call to > MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace); > and I get zero averaged solution with and the same solution plus a constant without. > How does PETSc work in the second case? > Does it check the matrix singularity? And is it able to create the null space with the constant automatically? > Thanks. > > > Marco Cisternino, PhD > marco.cisternino at optimad.it > _______________________________ > OPTIMAD Engineering srl > Via Giacinto Collegno 18, Torino, Italia. > +3901119719782 > www.optimad.it > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Wed Feb 7 15:17:08 2018 From: hzhang at mcs.anl.gov (Hong) Date: Wed, 7 Feb 2018 15:17:08 -0600 Subject: [petsc-users] petsc and mumps for compute user-specified set of entries in inv(A) In-Reply-To: References: Message-ID: Marius : I added MatMumpsGetInverse(), see https://bitbucket.org/petsc/petsc/commits/f5e16c35adb7810c6c977ea1c9cd95e4afb8512b It works in sequential and parallel. It is in the branch hzhang/mumps-invA, and will be merged to petsc-master after it passed our regression tests. Let me know if you see any problems. 
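A rough sketch of the intended calling sequence is below; the exact interface of MatMumpsGetInverse() may differ from this guess, so please check the example that comes with the branch. Here spRHS stands for a sparse matrix whose nonzero pattern selects the wanted entries of inv(A), following MUMPS ICNTL(30).

  Mat           F;
  MatFactorInfo info;
  ierr = MatFactorInfoInitialize(&info);CHKERRQ(ierr);
  ierr = MatGetFactor(A,MATSOLVERMUMPS,MAT_FACTOR_LU,&F);CHKERRQ(ierr);
  ierr = MatLUFactorSymbolic(F,A,NULL,NULL,&info);CHKERRQ(ierr);
  ierr = MatLUFactorNumeric(F,A,&info);CHKERRQ(ierr);
  ierr = MatMumpsGetInverse(F,spRHS);CHKERRQ(ierr); /* fills the entries of inv(A) requested by spRHS */
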
Hong On Sat, Jan 20, 2018 at 11:42 AM, Hong wrote: > Marius : > Current petsc-mumps interface supports ICNTL(30), e.g., runtime option '-mat_mumps_icntl_30 > 0' as default. > However, no user ever has tested it. I tested it > using petsc/src/mat/examples/tests/ex130.c with > '-mat_mumps_icntl_30 1' and got an error in MatSolve() -- additional > work needs to be down here. > > I'll read mumps user manual and investigate it next week. > > Meanwhile, you may give it a try and figure out how to set other > parameters, such as icntl_27 and allocate correct rhs and solution > vectors. > > Hong > >> Hi, >> >> Is it possible to interface MUMPS to compute user-specified set of >> entries in inv(A) (ICNTL(30)) using petsc ? >> >> best, >> marius >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Feb 7 15:27:10 2018 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 7 Feb 2018 16:27:10 -0500 Subject: [petsc-users] Elliptic operator with Neumann conditions In-Reply-To: <0B0AEC45-5B43-4D7B-BA73-AA9BE68B68B7@anl.gov> References: <0B0AEC45-5B43-4D7B-BA73-AA9BE68B68B7@anl.gov> Message-ID: On Wed, Feb 7, 2018 at 2:24 PM, Smith, Barry F. wrote: > > A square matrix with a null space results in an underdetermined system, > that is a solution with more than one solution. The solutions can be > written as x + alpha_1 v_1 + ... alpha_n v_n where the v_n form an > orthonormal basis for the null space and x is orthogonal to the null space. > > When you provide the null space KSP Krylov methods find the norm > minimizing solution (x) , that is it finds the x with the smallest norm > that satisfies the system. This is exactly the same as saying you take any > solution of the system and remove all the components in the directions of > the null space. > > If you do not provide the null space then the Krylov space may find you > a solution that is not the norm minimizing solution, thus that solution has > a component of the null space within it. What component of the null space > in the solution depends on what you use for an initial guess and right hand > side. > Additionally, assuming your initial guess is orthogonal to the null space, of course, your solution can "float" away from roundoff error. This is what you were seeing initially w/o the null space. As you saw you can just project it out yourself but as Barry said it is better to let KSP do it. > > When you have a preconditioner then things can get trickier because the > preconditioner can (unless you remove them) components in the direction of > the null space. These components can get amplified with each iteration of > the Krylov method so it looks like the Krylov method is not converging > since the norm of the solution is getting larger and larger (these larger > components are in the null space.) This is why one should always provide > the null space when solving singular systems with singular matrices. > > Barry > > > > On Feb 7, 2018, at 11:43 AM, Marco Cisternino < > marco.cisternino at optimad.it> wrote: > > > > Hi everybody, > > I would like to ask what solution is computed if I try to solve the > linear system relative to the problem in subject without creating the null > space. > > I tried with and without the call to > > MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace); > > and I get zero averaged solution with and the same solution plus a > constant without. > > How does PETSc work in the second case? > > Does it check the matrix singularity? 
And is it able to create the null > space with the constant automatically? > > Thanks. > > > > > > Marco Cisternino, PhD > > marco.cisternino at optimad.it > > _______________________________ > > OPTIMAD Engineering srl > > Via Giacinto Collegno 18, Torino, Italia. > > +3901119719782 > > www.optimad.it > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Feb 7 15:36:45 2018 From: jed at jedbrown.org (Jed Brown) Date: Wed, 07 Feb 2018 14:36:45 -0700 Subject: [petsc-users] Visualizing structured cell-centered data VTK In-Reply-To: References: <87o9l1koa8.fsf@jedbrown.org> <87a7wlkkj3.fsf@jedbrown.org> Message-ID: <87zi4kiis2.fsf@jedbrown.org> "Buesing, Henrik" writes: >> >> > I have structured cell-centered data and would like to visualize >> >> > this with >> >> Paraview. Up to now I use PetscViewerVTKOpen and VecView to write >> >> data in *.vts format. I would like to tell PETSc that the fieldtype >> >> is PETSC_VTK_CELL_FIELD. I have found PetscViewerVTKAddField. >> >> > >> >> > Is this the way to go? I was thinking maybe a DMDASetFieldType >> >> > exists, but >> >> did not find any. If yes, what is the PetscViewerVTKWriteFunction I >> >> need to provide? >> >> >> >> DMDA does not explicitly support distinguishing between cell and >> >> point values. PetscViewerVTKAddField is a developer level routine >> >> and you would need to implement a function similar to >> >> DMDAVTKWriteAll_VTS (not at all trivial and you need to read the code >> >> because it is responsible for almost everything). >> > >> > I am looking at src/sys/classes/viewer/impls/vtk/vtkv.c. There is a >> reference to PETSC_VTK_POINT_FIELD vs. PETSC_VTK_CELL_FIELD. Judging >> from the output I get, I was assuming fieldtype=PETSC_VTK_POINT_FIELD. I >> would be totally fine with replacing POINT by CELL everywhere, since all my >> data is cell-centered. >> >> I think coordinates need to be PointData, not coordinates of cell centroids. >> DMDA doesn't have that concept. (Maybe it should, but adding it is no small >> task and hacking the output is likely to create a lot of edge cases.) >> > I had a look at DMDAVTKWriteAll_VTS. You are totally right that this > is not generic. www.vtk.org/VTK/img/file-formats.pdf describes the > StructuredGrid format. "Points" contain only one DataArray specifying > the coordinates whereas Cells contain three DataArrays with > connectivity, offsets and types. Are you mixing up CellData from the StructuredGrid XML spec with Cells from the UnstructuredGrid? > What if I use DMDAVTKWriteAll_VTR and write a RectliniearGrid? Then I could just replace PointData with CellData and "hack" only the extent which should leave no edge cases... I think it's feasible. You could use DMDAGetInterpolationType() to determine which variant to use. From hbuesing at eonerc.rwth-aachen.de Wed Feb 7 15:57:11 2018 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Wed, 7 Feb 2018 21:57:11 +0000 Subject: [petsc-users] Visualizing structured cell-centered data VTK In-Reply-To: <87zi4kiis2.fsf@jedbrown.org> References: <87o9l1koa8.fsf@jedbrown.org> <87a7wlkkj3.fsf@jedbrown.org> <87zi4kiis2.fsf@jedbrown.org> Message-ID: > >> >> > I have structured cell-centered data and would like to visualize > >> >> > this with > >> >> Paraview. Up to now I use PetscViewerVTKOpen and VecView to write > >> >> data in *.vts format. I would like to tell PETSc that the > >> >> fieldtype is PETSC_VTK_CELL_FIELD. I have found > PetscViewerVTKAddField. 
> >> >> > > >> >> > Is this the way to go? I was thinking maybe a DMDASetFieldType > >> >> > exists, but > >> >> did not find any. If yes, what is the PetscViewerVTKWriteFunction > >> >> I need to provide? > >> >> > >> >> DMDA does not explicitly support distinguishing between cell and > >> >> point values. PetscViewerVTKAddField is a developer level routine > >> >> and you would need to implement a function similar to > >> >> DMDAVTKWriteAll_VTS (not at all trivial and you need to read the > >> >> code because it is responsible for almost everything). > >> > > >> > I am looking at src/sys/classes/viewer/impls/vtk/vtkv.c. There is a > >> reference to PETSC_VTK_POINT_FIELD vs. PETSC_VTK_CELL_FIELD. Judging > >> from the output I get, I was assuming > >> fieldtype=PETSC_VTK_POINT_FIELD. I would be totally fine with > >> replacing POINT by CELL everywhere, since all my data is cell-centered. > >> > >> I think coordinates need to be PointData, not coordinates of cell centroids. > >> DMDA doesn't have that concept. (Maybe it should, but adding it is > >> no small task and hacking the output is likely to create a lot of > >> edge cases.) > >> > > I had a look at DMDAVTKWriteAll_VTS. You are totally right that this > > is not generic. www.vtk.org/VTK/img/file-formats.pdf describes the > > StructuredGrid format. "Points" contain only one DataArray specifying > > the coordinates whereas Cells contain three DataArrays with > > connectivity, offsets and types. > > Are you mixing up CellData from the StructuredGrid XML spec with Cells from > the UnstructuredGrid? Ah, yes. You are right! Ok wrong reasoning, but maybe good conclusion nevertheless. > > What if I use DMDAVTKWriteAll_VTR and write a RectliniearGrid? Then I > could just replace PointData with CellData and "hack" only the extent which > should leave no edge cases... > > I think it's feasible. You could use DMDAGetInterpolationType() to > determine which variant to use. I will try it and see where I can go with it. From griesser.jan at googlemail.com Sun Feb 11 11:35:10 2018 From: griesser.jan at googlemail.com (=?iso-8859-1?Q?Jan_Grie=DFer?=) Date: Sun, 11 Feb 2018 18:35:10 +0100 Subject: [petsc-users] Transform scipy sparse to partioned, parallel petsc matrix in PETSc4py Message-ID: <005001d3a35e$aaa0e200$ffe2a600$@googlemail.com> Hey, i have a precomputed scipy sparse matrix for which I want to solve the eigenvalue problem for a matrix of size 35000x35000. I don?t really get how to parallelize this problem correctly. Similar to another thread(https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2013-August/01850 1.html) I tried the following code: D = scipy.sparse.load_npz("sparse.npz") B = D.tocsr() # Construct the matrix Ds in parallel Ds = PETSc.Mat().create() Ds.setSizes(CSRmatrix.shape) Ds.assemble() # Fill the matrix rstart, rend = Ds.getOwnershipRange() csr = ( B.indptr[rstart:rend+1] - B.indptr[rstart], B.indices[B.indptr[rstart]:B.indptr[rend]], B.data[B.indptr[rstart]:B.indptr[rend]] ) Ds = PETSc.Mat().createAIJ(size=CSRmatrix.shape, csr=csr) Ds.assemble() # Solve the eigenvalue problem solve_eigensystem(Ds) This code works for 1 processor with mpiexec ?n 1 python example.py, however for increasing number of processors it appears as if al processors try to solve the overall problem instead of splitting it into blocks and solve for a subset of eigenvalues and eigenvectors. Why is this the case or did I miss something? Greetings Jan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aliberkkahraman at yahoo.com Tue Feb 13 06:24:39 2018 From: aliberkkahraman at yahoo.com (Ali Kahraman) Date: Tue, 13 Feb 2018 12:24:39 +0000 (UTC) Subject: [petsc-users] Write Non-Zero Values of MPI Matrix on an MPI Vector References: <1102941651.15868.1518524679257.ref@mail.yahoo.com> Message-ID: <1102941651.15868.1518524679257@mail.yahoo.com> Dear All, My problem definition is as follows, I? have an MPI matrix with a random sparsity pattern i.e. I do not know how many nonzeros there are on any row unless I call MatGetRow to learn it. There are possibly unequal numbers of nonzeros on every row. I want to write all the nonzero values of this matrix onto a parallel vector. An example can be as follows. Imagine I have a 4x4 matrix (; denotes next row, . denotes sparse "zeros") [3 . 2 . ; .? 1 .? . ; 4 5 3 2; . . . .]. I want to obtain the vector [3 2 1 4 5 3 2]. I could not find any function that does this. Any idea is appreciated. My thought was to get the number of nonzeros on each process by MatGetInfo, then broadcast this to all processes so all processes know the nonzero number of each other. However, I could not find the MPI command to do this either (this may because I did not know how to look). Any help with this is also appreciated. Best Regards, Ali Berk KahramanM.Sc. Student, Mechanical EngineeringBo?azi?i University, Istanbul, Turkey -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Tue Feb 13 07:46:35 2018 From: jed at jedbrown.org (Jed Brown) Date: Tue, 13 Feb 2018 06:46:35 -0700 Subject: [petsc-users] Write Non-Zero Values of MPI Matrix on an MPI Vector In-Reply-To: <1102941651.15868.1518524679257@mail.yahoo.com> References: <1102941651.15868.1518524679257.ref@mail.yahoo.com> <1102941651.15868.1518524679257@mail.yahoo.com> Message-ID: <87wozh3suc.fsf@jedbrown.org> Ali Kahraman writes: > > Dear All, > > My problem definition is as follows, > > I? have an MPI matrix with a random sparsity pattern i.e. I do not know how many nonzeros there are on any row unless I call MatGetRow to learn it. There are possibly unequal numbers of nonzeros on every row. I want to write all the nonzero values of this matrix onto a parallel vector. An example can be as follows. > > > Imagine I have a 4x4 matrix (; denotes next row, . denotes sparse "zeros") [3 . 2 . ; .? 1 .? . ; 4 5 3 2; . . . .]. I want to obtain the vector [3 2 1 4 5 3 2]. I could not find any function that does this. Any idea is appreciated. This seems like an odd thing to want. What are you trying to do? From dalcinl at gmail.com Tue Feb 13 08:45:20 2018 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Tue, 13 Feb 2018 17:45:20 +0300 Subject: [petsc-users] Transform scipy sparse to partioned, parallel petsc matrix in PETSc4py In-Reply-To: <005001d3a35e$aaa0e200$ffe2a600$@googlemail.com> References: <005001d3a35e$aaa0e200$ffe2a600$@googlemail.com> Message-ID: On 11 February 2018 at 20:35, Jan Grie?er wrote: > Hey, > > i have a precomputed scipy sparse matrix for which I want to solve the > eigenvalue problem for a matrix of size 35000x35000. I don?t really get how > to parallelize this problem correctly. 
> Similar to another > thread(https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2013-August/018501.html) > I tried the following code: > > > > D = scipy.sparse.load_npz("sparse.npz") > > B = D.tocsr() > > > > # Construct the matrix Ds in parallel > > Ds = PETSc.Mat().create() > > Ds.setSizes(CSRmatrix.shape) > > Ds.assemble() > Use DS.setUp() > > > # Fill the matrix > > rstart, rend = Ds.getOwnershipRange() > > csr = ( > > B.indptr[rstart:rend+1] - B.indptr[rstart], > > B.indices[B.indptr[rstart]:B.indptr[rend]], > > B.data[B.indptr[rstart]:B.indptr[rend]] > > ) > This looks just fine > > > Ds = PETSc.Mat().createAIJ(size=CSRmatrix.shape, csr=csr) > > Ds.assemble() > I think you don't need to assemble here. > > > # Solve the eigenvalue problem > > solve_eigensystem(Ds) > > > > This code works for 1 processor with mpiexec ?n 1 python example.py, however > for increasing number of processors it appears as if al processors try to > solve the overall problem instead of splitting it into blocks and solve for > a subset of eigenvalues and eigenvectors. > Why is this the case or did I miss something? > I guess you are using `mpiexec` from a different MPI implementation than the one you used to build PETSc and petsc4py. -- Lisandro Dalcin ============ Research Scientist Computer, Electrical and Mathematical Sciences & Engineering (CEMSE) Extreme Computing Research Center (ECRC) King Abdullah University of Science and Technology (KAUST) http://ecrc.kaust.edu.sa/ 4700 King Abdullah University of Science and Technology al-Khawarizmi Bldg (Bldg 1), Office # 0109 Thuwal 23955-6900, Kingdom of Saudi Arabia http://www.kaust.edu.sa Office Phone: +966 12 808-0459 From aliberkkahraman at yahoo.com Tue Feb 13 09:12:35 2018 From: aliberkkahraman at yahoo.com (Ali Berk Kahraman) Date: Tue, 13 Feb 2018 18:12:35 +0300 Subject: [petsc-users] Write Non-Zero Values of MPI Matrix on an MPI Vector In-Reply-To: <87wozh3suc.fsf@jedbrown.org> References: <1102941651.15868.1518524679257.ref@mail.yahoo.com> <1102941651.15868.1518524679257@mail.yahoo.com> <87wozh3suc.fsf@jedbrown.org> Message-ID: <03ef7e58-81d7-0b3f-0a46-d8010cb2fba8@yahoo.com> OK, here is the thing. I have a 2D cartesian regular grid. I am working on wavelet method collocation method, which creates an irregular adaptive grid by turning grid points on an off on the previously mentioned cartesian grid. I store the grid and the values as sparse Mat objects, where each entry to the matrix denotes the x and y location of the value (x:row, y:column). However, to feed the values into PETSc's solver contexts, I have to turn them into vectors. By the way, I believe I have solved the problem. For future reference who looks for this, the algorithm is as follows; For each processor, 1.)Get the local number of nonzero entries on the matrix using MatGetInfo 2.)Call MPI_Allgather so that every process will know exactly how many nonzero entries each other has 3.)Create the Vector and set its size using the data from MPI_Allgather from step 2 (sum of all local nonzero sizes) 4.)Call MatMPIAIJGetLocalMat to get the local portion of the matrix, then call MatSeqAIJGetArray on the local portion to extract its nonzero values as an array 5.)Using the info from step 2 and 4, set the according values on the vector (e.g. if process 0 has 4 nonzeros, process 1 will set the values on Vector's row 4 onwards) I am always open to ideas for improvements. 
Ali On 13-02-2018 16:46, Jed Brown wrote: > Ali Kahraman writes: > >> >> Dear All, >> >> My problem definition is as follows, >> >> I? have an MPI matrix with a random sparsity pattern i.e. I do not know how many nonzeros there are on any row unless I call MatGetRow to learn it. There are possibly unequal numbers of nonzeros on every row. I want to write all the nonzero values of this matrix onto a parallel vector. An example can be as follows. >> >> >> Imagine I have a 4x4 matrix (; denotes next row, . denotes sparse "zeros") [3 . 2 . ; .? 1 .? . ; 4 5 3 2; . . . .]. I want to obtain the vector [3 2 1 4 5 3 2]. I could not find any function that does this. Any idea is appreciated. > This seems like an odd thing to want. What are you trying to do? From knepley at gmail.com Tue Feb 13 09:19:54 2018 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 13 Feb 2018 10:19:54 -0500 Subject: [petsc-users] Write Non-Zero Values of MPI Matrix on an MPI Vector In-Reply-To: <03ef7e58-81d7-0b3f-0a46-d8010cb2fba8@yahoo.com> References: <1102941651.15868.1518524679257.ref@mail.yahoo.com> <1102941651.15868.1518524679257@mail.yahoo.com> <87wozh3suc.fsf@jedbrown.org> <03ef7e58-81d7-0b3f-0a46-d8010cb2fba8@yahoo.com> Message-ID: On Tue, Feb 13, 2018 at 10:12 AM, Ali Berk Kahraman < aliberkkahraman at yahoo.com> wrote: > OK, here is the thing. I have a 2D cartesian regular grid. I am working on > wavelet method collocation method, which creates an irregular adaptive grid > by turning grid points on an off on the previously mentioned cartesian > grid. I store the grid and the values as sparse Mat objects, where each > entry to the matrix denotes the x and y location of the value (x:row, > y:column). However, to feed the values into PETSc's solver contexts, I have > to turn them into vectors. > > > By the way, I believe I have solved the problem. For future reference who > looks for this, the algorithm is as follows; > > For each processor, > > 1.)Get the local number of nonzero entries on the matrix using MatGetInfo > > 2.)Call MPI_Allgather so that every process will know exactly how many > nonzero entries each other has > > 3.)Create the Vector and set its size using the data from MPI_Allgather > from step 2 (sum of all local nonzero sizes) > You do not need 2) since you can just give the local size and PETSC_DETERMINE to VecSetSizes(). Thanks, Matt > 4.)Call MatMPIAIJGetLocalMat to get the local portion of the matrix, then > call MatSeqAIJGetArray on the local portion to extract its nonzero values > as an array > > 5.)Using the info from step 2 and 4, set the according values on the > vector (e.g. if process 0 has 4 nonzeros, process 1 will set the values on > Vector's row 4 onwards) > > > I am always open to ideas for improvements. > > > Ali > > > On 13-02-2018 16:46, Jed Brown wrote: > >> Ali Kahraman writes: >> >> Dear All, >>> My problem definition is as follows, >>> I have an MPI matrix with a random sparsity pattern i.e. I do not >>> know how many nonzeros there are on any row unless I call MatGetRow to >>> learn it. There are possibly unequal numbers of nonzeros on every row. I >>> want to write all the nonzero values of this matrix onto a parallel vector. >>> An example can be as follows. >>> Imagine I have a 4x4 matrix (; denotes next row, . denotes sparse >>> "zeros") [3 . 2 . ; . 1 . . ; 4 5 3 2; . . . .]. I want to obtain the >>> vector [3 2 1 4 5 3 2]. I could not find any function that does this. Any >>> idea is appreciated. >>> >> This seems like an odd thing to want. 
What are you trying to do? >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Tue Feb 13 10:12:07 2018 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 13 Feb 2018 11:12:07 -0500 Subject: [petsc-users] with-openmp error with hypre In-Reply-To: References: Message-ID: FYI, we were able to get hypre with threads working on KNL on Cori by going down to -O1 optimization. We are getting about 2x speedup with 4 threads and 16 MPI processes per socket. Not bad. There error, flatlined or slightly diverging hypre solves, occurred even in flat MPI runs with openmp=1. We are going to test the Haswell nodes next. On Thu, Jan 25, 2018 at 4:16 PM, Mark Adams wrote: > Baky (cc'ed) is getting a strange error on Cori/KNL at NERSC. Using maint > it runs fine with -with-openmp=0, it runs fine with -with-openmp=1 and > gamg, but with hypre and -with-openmp=1, even running with flat MPI, the > solver seems flatline (see attached and notice that the residual starts to > creep after a few time steps). > > Maybe you can suggest a hypre test that I can run? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Feb 13 10:30:18 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 13 Feb 2018 16:30:18 +0000 Subject: [petsc-users] with-openmp error with hypre In-Reply-To: References: Message-ID: <2109537E-FC24-4879-912E-101181A2F0FC@anl.gov> > On Feb 13, 2018, at 10:12 AM, Mark Adams wrote: > > FYI, we were able to get hypre with threads working on KNL on Cori by going down to -O1 optimization. We are getting about 2x speedup with 4 threads and 16 MPI processes per socket. Not bad. In other works using 16 MPI processes with 4 threads per process is twice as fast as running with 64 mpi processes? Could you send the -log_view output for these two cases? > > There error, flatlined or slightly diverging hypre solves, occurred even in flat MPI runs with openmp=1. But the answers are wrong as soon as you turn on OpenMP? Thanks Barry > > We are going to test the Haswell nodes next. > > On Thu, Jan 25, 2018 at 4:16 PM, Mark Adams wrote: > Baky (cc'ed) is getting a strange error on Cori/KNL at NERSC. Using maint it runs fine with -with-openmp=0, it runs fine with -with-openmp=1 and gamg, but with hypre and -with-openmp=1, even running with flat MPI, the solver seems flatline (see attached and notice that the residual starts to creep after a few time steps). > > Maybe you can suggest a hypre test that I can run? > From mfadams at lbl.gov Tue Feb 13 11:01:10 2018 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 13 Feb 2018 12:01:10 -0500 Subject: [petsc-users] with-openmp error with hypre In-Reply-To: <2109537E-FC24-4879-912E-101181A2F0FC@anl.gov> References: <2109537E-FC24-4879-912E-101181A2F0FC@anl.gov> Message-ID: > > > > > > There error, flatlined or slightly diverging hypre solves, occurred even > in flat MPI runs with openmp=1. > > But the answers are wrong as soon as you turn on OpenMP? > > No, that is the funny thing, the problem occurs with flat MPI, no OMP. Just an openmp=1 build. I am trying to reproduce this with PETSc tests now. > Thanks > > Barry > > > > > > We are going to test the Haswell nodes next. 
> > > > On Thu, Jan 25, 2018 at 4:16 PM, Mark Adams wrote: > > Baky (cc'ed) is getting a strange error on Cori/KNL at NERSC. Using > maint it runs fine with -with-openmp=0, it runs fine with -with-openmp=1 > and gamg, but with hypre and -with-openmp=1, even running with flat MPI, > the solver seems flatline (see attached and notice that the residual starts > to creep after a few time steps). > > > > Maybe you can suggest a hypre test that I can run? > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mpovolot at purdue.edu Tue Feb 13 11:10:00 2018 From: mpovolot at purdue.edu (Michael Povolotskyi) Date: Tue, 13 Feb 2018 12:10:00 -0500 Subject: [petsc-users] check status of reading matrix from a file Message-ID: <4e790ba1-2afa-ed5f-1f5c-e1b073e45421@purdue.edu> Dear Petsc developers, I'm reading a matrix from a file like this: PetscViewer viewer; PetscErrorCode ierr; MatCreate(comm,&matrix); MatSetType(matrix,MATDENSE); ierr = PetscViewerBinaryOpen(comm,file_name,FILE_MODE_READ,&viewer); ierr = MatLoad(matrix,viewer); Sometimes the file that is needed is not present and the code execution becomes frozen. Is it possible to catch this situation looking at the ierr variable? Thank you, Michael. From bsmith at mcs.anl.gov Tue Feb 13 11:31:56 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 13 Feb 2018 17:31:56 +0000 Subject: [petsc-users] check status of reading matrix from a file In-Reply-To: <4e790ba1-2afa-ed5f-1f5c-e1b073e45421@purdue.edu> References: <4e790ba1-2afa-ed5f-1f5c-e1b073e45421@purdue.edu> Message-ID: <4E2C95CE-C1C6-4BA3-B00F-89A6201B13C4@anl.gov> Hmm, it shouldn't hang but should crash if the file does not exist. If you want the code to continue running with or without the file you can use PetscTestFile() to see if the file exists and do something else if it does not exist. Barry > On Feb 13, 2018, at 11:10 AM, Michael Povolotskyi wrote: > > Dear Petsc developers, > > I'm reading a matrix from a file like this: > > PetscViewer viewer; > PetscErrorCode ierr; > > MatCreate(comm,&matrix); > > MatSetType(matrix,MATDENSE); > > ierr = PetscViewerBinaryOpen(comm,file_name,FILE_MODE_READ,&viewer); > ierr = MatLoad(matrix,viewer); > > > Sometimes the file that is needed is not present and the code execution becomes frozen. > > Is it possible to catch this situation looking at the ierr variable? > > Thank you, > > Michael. > From jed at jedbrown.org Tue Feb 13 12:23:24 2018 From: jed at jedbrown.org (Jed Brown) Date: Tue, 13 Feb 2018 11:23:24 -0700 Subject: [petsc-users] Write Non-Zero Values of MPI Matrix on an MPI Vector In-Reply-To: <03ef7e58-81d7-0b3f-0a46-d8010cb2fba8@yahoo.com> References: <1102941651.15868.1518524679257.ref@mail.yahoo.com> <1102941651.15868.1518524679257@mail.yahoo.com> <87wozh3suc.fsf@jedbrown.org> <03ef7e58-81d7-0b3f-0a46-d8010cb2fba8@yahoo.com> Message-ID: <87r2po4ulf.fsf@jedbrown.org> Ali Berk Kahraman writes: > OK, here is the thing. I have a 2D cartesian regular grid. I am working > on wavelet method collocation method, which creates an irregular > adaptive grid by turning grid points on an off on the previously > mentioned cartesian grid. I store the grid and the values as sparse Mat > objects, where each entry to the matrix denotes the x and y location of > the value (x:row, y:column). This will have terrible load balance. Better to have a legit data structure (p4est/DMForest may be useful) to represent your adaptive grid. 
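[Editorial sketch] Returning to the MatLoad question above: a hedged sketch of the PetscTestFile() guard Barry suggests, using the same comm and file_name variables as the original snippet; the fallback message is only illustrative.

  /* Check that the file exists and is readable before trying to load it,
     so a missing file is reported instead of crashing or hanging. */
  PetscBool   found;
  PetscViewer viewer;
  Mat         matrix;

  PetscTestFile(file_name, 'r', &found);
  if (!found) {
    PetscPrintf(comm, "Matrix file %s not found, skipping load\n", file_name);
  } else {
    MatCreate(comm, &matrix);
    MatSetType(matrix, MATDENSE);
    PetscViewerBinaryOpen(comm, file_name, FILE_MODE_READ, &viewer);
    MatLoad(matrix, viewer);
    PetscViewerDestroy(&viewer);
  }

On a shared filesystem every rank sees the same result from PetscTestFile(), so all ranks take the same branch and the collective calls stay matched.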
From knepley at gmail.com Tue Feb 13 12:28:32 2018 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 13 Feb 2018 13:28:32 -0500 Subject: [petsc-users] with-openmp error with hypre In-Reply-To: <2109537E-FC24-4879-912E-101181A2F0FC@anl.gov> References: <2109537E-FC24-4879-912E-101181A2F0FC@anl.gov> Message-ID: On Tue, Feb 13, 2018 at 11:30 AM, Smith, Barry F. wrote: > > > On Feb 13, 2018, at 10:12 AM, Mark Adams wrote: > > > > FYI, we were able to get hypre with threads working on KNL on Cori by > going down to -O1 optimization. We are getting about 2x speedup with 4 > threads and 16 MPI processes per socket. Not bad. > > In other works using 16 MPI processes with 4 threads per process is > twice as fast as running with 64 mpi processes? Could you send the > -log_view output for these two cases? Is that what you mean? I took it to mean We ran 16MPI processes and got time T. We ran 16MPI processes with 4 threads each and got time T/2. I would likely eat my shirt if 16x4 was 2x faster than 64. Matt > > > > > There error, flatlined or slightly diverging hypre solves, occurred even > in flat MPI runs with openmp=1. > > But the answers are wrong as soon as you turn on OpenMP? > > Thanks > > Barry > > > > > > We are going to test the Haswell nodes next. > > > > On Thu, Jan 25, 2018 at 4:16 PM, Mark Adams wrote: > > Baky (cc'ed) is getting a strange error on Cori/KNL at NERSC. Using > maint it runs fine with -with-openmp=0, it runs fine with -with-openmp=1 > and gamg, but with hypre and -with-openmp=1, even running with flat MPI, > the solver seems flatline (see attached and notice that the residual starts > to creep after a few time steps). > > > > Maybe you can suggest a hypre test that I can run? > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkallemov at lbl.gov Tue Feb 13 12:44:45 2018 From: bkallemov at lbl.gov (Bakytzhan Kallemov) Date: Tue, 13 Feb 2018 10:44:45 -0800 Subject: [petsc-users] with-openmp error with hypre In-Reply-To: References: <2109537E-FC24-4879-912E-101181A2F0FC@anl.gov> Message-ID: Hi, I am not sure about 64 flat run, unfortunately I did not save logs since it's easy to run,? but for 16 - here is the plot I got for different number of threads for KSPSolve time Baky On 02/13/2018 10:28 AM, Matthew Knepley wrote: > On Tue, Feb 13, 2018 at 11:30 AM, Smith, Barry F. > wrote: > > > On Feb 13, 2018, at 10:12 AM, Mark Adams > wrote: > > > > FYI, we were able to get hypre with threads working on KNL on > Cori by going down to -O1 optimization. We are getting about 2x > speedup with 4 threads and 16 MPI processes per socket. Not bad. > > ? In other works using 16 MPI processes with 4 threads per process > is twice as fast as running with 64 mpi processes?? Could you send > the -log_view output for these two cases? > > > Is that what you mean? I took it to mean > > ? We ran 16MPI processes and got time T. > ? We ran 16MPI processes with 4 threads each and got time T/2. > > I would likely eat my shirt if 16x4 was 2x faster than 64. > > ? Matt > > > > > > There error, flatlined or slightly diverging hypre solves, > occurred even in flat MPI runs with openmp=1. > > ? But the answers are wrong as soon as you turn on OpenMP? > > ? ?Thanks > > ? ? Barry > > > > > > We are going to test the Haswell nodes next. 
> > > > On Thu, Jan 25, 2018 at 4:16 PM, Mark Adams > wrote: > > Baky (cc'ed) is getting a strange error on Cori/KNL at NERSC. > Using maint it runs fine with -with-openmp=0, it runs fine with > -with-openmp=1 and gamg, but with hypre and -with-openmp=1, even > running with flat MPI, the solver seems flatline (see attached and > notice that the residual starts to creep after a few time steps). > > > > Maybe you can suggest a hypre test that I can run? > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: scaling.eps Type: image/x-eps Size: 20035 bytes Desc: not available URL: From fande.kong at inl.gov Tue Feb 13 13:02:50 2018 From: fande.kong at inl.gov (Kong, Fande) Date: Tue, 13 Feb 2018 12:02:50 -0700 Subject: [petsc-users] with-openmp error with hypre In-Reply-To: References: <2109537E-FC24-4879-912E-101181A2F0FC@anl.gov> Message-ID: Curious about the comparison of 16x4 VS 64. Fande, On Tue, Feb 13, 2018 at 11:44 AM, Bakytzhan Kallemov wrote: > Hi, > > I am not sure about 64 flat run, > > unfortunately I did not save logs since it's easy to run, but for 16 - > here is the plot I got for different number of threads for KSPSolve time > > Baky > > On 02/13/2018 10:28 AM, Matthew Knepley wrote: > > On Tue, Feb 13, 2018 at 11:30 AM, Smith, Barry F. > wrote: >> >> > On Feb 13, 2018, at 10:12 AM, Mark Adams wrote: >> > >> > FYI, we were able to get hypre with threads working on KNL on Cori by >> going down to -O1 optimization. We are getting about 2x speedup with 4 >> threads and 16 MPI processes per socket. Not bad. >> >> In other works using 16 MPI processes with 4 threads per process is >> twice as fast as running with 64 mpi processes? Could you send the >> -log_view output for these two cases? > > > Is that what you mean? I took it to mean > > We ran 16MPI processes and got time T. > We ran 16MPI processes with 4 threads each and got time T/2. > > I would likely eat my shirt if 16x4 was 2x faster than 64. > > Matt > > >> >> > >> > There error, flatlined or slightly diverging hypre solves, occurred >> even in flat MPI runs with openmp=1. >> >> But the answers are wrong as soon as you turn on OpenMP? >> >> Thanks >> >> Barry >> >> >> > >> > We are going to test the Haswell nodes next. >> > >> > On Thu, Jan 25, 2018 at 4:16 PM, Mark Adams wrote: >> > Baky (cc'ed) is getting a strange error on Cori/KNL at NERSC. Using >> maint it runs fine with -with-openmp=0, it runs fine with -with-openmp=1 >> and gamg, but with hypre and -with-openmp=1, even running with flat MPI, >> the solver seems flatline (see attached and notice that the residual starts >> to creep after a few time steps). >> > >> > Maybe you can suggest a hypre test that I can run? >> > >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mhbaghaei at mail.sjtu.edu.cn Tue Feb 13 14:21:59 2018 From: mhbaghaei at mail.sjtu.edu.cn (Mohammad Hassan Baghaei) Date: Wed, 14 Feb 2018 04:21:59 +0800 (CST) Subject: [petsc-users] Accessing a field values of Staggered grid Message-ID: <001e01d3a508$4ba014a0$e2e03de0$@mail.sjtu.edu.cn> Hi I am filling the local vector from dm , has a section layout. The thing is I want to know how I can see the field variable values defined on edges, the staggered grid. In fact, Whenever I output to VTK in preview, I would be able to see the main grid. But the values which are defined on edges, I could not see them and that makes me unsure about the way I fill the local vector. How I would be babe to check the field value on staggered grid? Thanks Amir -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Feb 13 15:17:28 2018 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 13 Feb 2018 16:17:28 -0500 Subject: [petsc-users] Accessing a field values of Staggered grid In-Reply-To: <001e01d3a508$4ba014a0$e2e03de0$@mail.sjtu.edu.cn> References: <001e01d3a508$4ba014a0$e2e03de0$@mail.sjtu.edu.cn> Message-ID: On Tue, Feb 13, 2018 at 3:21 PM, Mohammad Hassan Baghaei < mhbaghaei at mail.sjtu.edu.cn> wrote: > Hi > > I am filling the local vector from dm , has a section layout. The thing is > I want to know how I can see the field variable values defined on edges, > the staggered grid. In fact, Whenever I output to VTK in preview, I would > be able to see the main grid. But the values which are defined on edges, I > could not see them and that makes me unsure about the way I fill the local > vector. How I would be babe to check the field value on staggered grid? > VTK does not have a way to specify data on edges, only on cells or vertices. Thanks, Matt > Thanks > > Amir > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dave.mayhem23 at gmail.com Tue Feb 13 15:36:48 2018 From: dave.mayhem23 at gmail.com (Dave May) Date: Tue, 13 Feb 2018 21:36:48 +0000 Subject: [petsc-users] Accessing a field values of Staggered grid In-Reply-To: References: <001e01d3a508$4ba014a0$e2e03de0$@mail.sjtu.edu.cn> Message-ID: On 13 February 2018 at 21:17, Matthew Knepley wrote: > On Tue, Feb 13, 2018 at 3:21 PM, Mohammad Hassan Baghaei < > mhbaghaei at mail.sjtu.edu.cn> wrote: > >> Hi >> >> I am filling the local vector from dm , has a section layout. The thing >> is I want to know how I can see the field variable values defined on edges, >> the staggered grid. In fact, Whenever I output to VTK in preview, I would >> be able to see the main grid. But the values which are defined on edges, I >> could not see them and that makes me unsure about the way I fill the local >> vector. How I would be babe to check the field value on staggered grid? >> > > VTK does not have a way to specify data on edges, only on cells or > vertices. > This is not entirely true. At least for a staggered grid, where you have one DOF per edge, you can represent the edge data via the type VTK_VERTEX. You won't generate a beautiful picture, as your field will be rendered as a set of points (your edge faces) - but you can at least inspect the values within ParaView. 
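[Editorial sketch] As a concrete illustration of the VTK_VERTEX idea: a hand-written legacy-VTK file with two edge-centred unknowns, where the point coordinates (0.5, 1.5) and the values of the field u are made up for the example. Cell type 1 is VTK_VERTEX, so each edge value is rendered as a point in ParaView.

  # vtk DataFile Version 2.0
  edge-centred values written as VTK_VERTEX cells
  ASCII
  DATASET UNSTRUCTURED_GRID
  POINTS 2 float
  0.5 0.0 0.0
  1.5 0.0 0.0
  CELLS 2 4
  1 0
  1 1
  CELL_TYPES 2
  1
  1
  POINT_DATA 2
  SCALARS u float 1
  LOOKUP_TABLE default
  1.25
  2.50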
Thanks, Dave > > Thanks, > > Matt > > >> Thanks >> >> Amir >> > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhbaghaei at mail.sjtu.edu.cn Tue Feb 13 16:05:41 2018 From: mhbaghaei at mail.sjtu.edu.cn (Mohammad Hassan Baghaei) Date: Wed, 14 Feb 2018 06:05:41 +0800 (CST) Subject: [petsc-users] Accessing a field values of Staggered grid In-Reply-To: References: <001e01d3a508$4ba014a0$e2e03de0$@mail.sjtu.edu.cn> Message-ID: <002901d3a516$c7b0e980$5712bc80$@mail.sjtu.edu.cn> Thanks for your great note, Dave. Yeah! I would be able to at least view the edge values, although I could not specify the data. Previously I searched in points data, I now view the values by Surface With Edges option. From: Dave May [mailto:dave.mayhem23 at gmail.com] Sent: Wednesday, February 14, 2018 5:37 AM To: Matthew Knepley Cc: Mohammad Hassan Baghaei ; PETSc Subject: Re: [petsc-users] Accessing a field values of Staggered grid On 13 February 2018 at 21:17, Matthew Knepley > wrote: On Tue, Feb 13, 2018 at 3:21 PM, Mohammad Hassan Baghaei > wrote: Hi I am filling the local vector from dm , has a section layout. The thing is I want to know how I can see the field variable values defined on edges, the staggered grid. In fact, Whenever I output to VTK in preview, I would be able to see the main grid. But the values which are defined on edges, I could not see them and that makes me unsure about the way I fill the local vector. How I would be babe to check the field value on staggered grid? VTK does not have a way to specify data on edges, only on cells or vertices. This is not entirely true. At least for a staggered grid, where you have one DOF per edge, you can represent the edge data via the type VTK_VERTEX. You won't generate a beautiful picture, as your field will be rendered as a set of points (your edge faces) - but you can at least inspect the values within ParaView. Thanks, Dave Thanks, Matt Thanks Amir -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Tue Feb 13 17:21:04 2018 From: epscodes at gmail.com (Xiangdong) Date: Tue, 13 Feb 2018 18:21:04 -0500 Subject: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse Message-ID: Hello everyone, I have a block sparse matrices A created from the DMDA3d. Before passing the matrix to ksp solver, I want to apply a transformation to this matrix: namely A:= invdiag(A)*A. Here invdiag(A) is the inverse of the block diagonal of A. What is the best way to get the transformed matrix? At this moment, I created a new mat IDA=inv(diag(A)) by looping through each row and call MatMatMult to get B=invdiag(A)*A, then destroy the temporary matrix B. However, I prefer the in-place transformation if possible, namely, without the additional matrix B for memory saving purpose. Do you have any suggestion on compute invdiag(A)*A for mpibaij matrix? Thanks for your help. Best, Xiangdong -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Tue Feb 13 17:27:08 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 13 Feb 2018 23:27:08 +0000 Subject: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse In-Reply-To: References: Message-ID: In general you probably don't want to do this. Most good preconditioners (like AMG) rely on the matrix having the "natural" scaling that arises from the discretization and doing a scaling like you describe destroys that natural scaling. You can use PCPBJACOBI to use point block Jacobi preconditioner on the matrix without needing to do the scaling up front. The ILU preconditioners for BAIJ matrices work directly with the block structure so again pre-scaling the matrix buys you nothing. PETSc doesn't have any particularly efficient routines for computing what you desire, the only way to get something truly efficient is to write the code directly using the BAIJ data structure, doable but probably not worth it. Barry > On Feb 13, 2018, at 5:21 PM, Xiangdong wrote: > > Hello everyone, > > I have a block sparse matrices A created from the DMDA3d. Before passing the matrix to ksp solver, I want to apply a transformation to this matrix: namely A:= invdiag(A)*A. Here invdiag(A) is the inverse of the block diagonal of A. What is the best way to get the transformed matrix? > > At this moment, I created a new mat IDA=inv(diag(A)) by looping through each row and call MatMatMult to get B=invdiag(A)*A, then destroy the temporary matrix B. However, I prefer the in-place transformation if possible, namely, without the additional matrix B for memory saving purpose. > > Do you have any suggestion on compute invdiag(A)*A for mpibaij matrix? > > Thanks for your help. > > Best, > Xiangdong > > > > From mfadams at lbl.gov Tue Feb 13 20:56:28 2018 From: mfadams at lbl.gov (Mark Adams) Date: Tue, 13 Feb 2018 21:56:28 -0500 Subject: [petsc-users] with-openmp error with hypre In-Reply-To: References: <2109537E-FC24-4879-912E-101181A2F0FC@anl.gov> Message-ID: I agree with Matt, flat 64 will be faster, I would expect, but this code has global metadata that would have to be replicated in a full scale run. We are just doing single socket test now (I think). We have been tracking down what look like compiler bugs and we have only taken at peak performance to make sure we are not wasting our time with threads. I agree 16x4 VS 64 would be interesting to see. Mark On Tue, Feb 13, 2018 at 2:02 PM, Kong, Fande wrote: > Curious about the comparison of 16x4 VS 64. > > Fande, > > On Tue, Feb 13, 2018 at 11:44 AM, Bakytzhan Kallemov > wrote: > >> Hi, >> >> I am not sure about 64 flat run, >> >> unfortunately I did not save logs since it's easy to run, but for 16 - >> here is the plot I got for different number of threads for KSPSolve time >> >> Baky >> >> On 02/13/2018 10:28 AM, Matthew Knepley wrote: >> >> On Tue, Feb 13, 2018 at 11:30 AM, Smith, Barry F. >> wrote: >>> >>> > On Feb 13, 2018, at 10:12 AM, Mark Adams wrote: >>> > >>> > FYI, we were able to get hypre with threads working on KNL on Cori by >>> going down to -O1 optimization. We are getting about 2x speedup with 4 >>> threads and 16 MPI processes per socket. Not bad. >>> >>> In other works using 16 MPI processes with 4 threads per process is >>> twice as fast as running with 64 mpi processes? Could you send the >>> -log_view output for these two cases? >> >> >> Is that what you mean? I took it to mean >> >> We ran 16MPI processes and got time T. >> We ran 16MPI processes with 4 threads each and got time T/2. 
>> >> I would likely eat my shirt if 16x4 was 2x faster than 64. >> >> Matt >> >> >>> >>> > >>> > There error, flatlined or slightly diverging hypre solves, occurred >>> even in flat MPI runs with openmp=1. >>> >>> But the answers are wrong as soon as you turn on OpenMP? >>> >>> Thanks >>> >>> Barry >>> >>> >>> > >>> > We are going to test the Haswell nodes next. >>> > >>> > On Thu, Jan 25, 2018 at 4:16 PM, Mark Adams wrote: >>> > Baky (cc'ed) is getting a strange error on Cori/KNL at NERSC. Using >>> maint it runs fine with -with-openmp=0, it runs fine with -with-openmp=1 >>> and gamg, but with hypre and -with-openmp=1, even running with flat MPI, >>> the solver seems flatline (see attached and notice that the residual starts >>> to creep after a few time steps). >>> > >>> > Maybe you can suggest a hypre test that I can run? >>> > >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Feb 13 21:07:48 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 14 Feb 2018 03:07:48 +0000 Subject: [petsc-users] with-openmp error with hypre In-Reply-To: References: <2109537E-FC24-4879-912E-101181A2F0FC@anl.gov> Message-ID: > On Feb 13, 2018, at 8:56 PM, Mark Adams wrote: > > I agree with Matt, flat 64 will be faster, I would expect, but this code has global metadata that would have to be replicated in a full scale run.\ Use MPI 3 shared memory to expose the "global metadata" and forget this thread nonsense. > We are just doing single socket test now (I think). > > We have been tracking down what look like compiler bugs and we have only taken at peak performance to make sure we are not wasting our time with threads. You are wasting your time. There are better ways to deal with global metadata than with threads. > > I agree 16x4 VS 64 would be interesting to see. > > Mark > > > > On Tue, Feb 13, 2018 at 2:02 PM, Kong, Fande wrote: > Curious about the comparison of 16x4 VS 64. > > Fande, > > On Tue, Feb 13, 2018 at 11:44 AM, Bakytzhan Kallemov wrote: > Hi, > I am not sure about 64 flat run, > unfortunately I did not save logs since it's easy to run, but for 16 - here is the plot I got for different number of threads for KSPSolve time > Baky > > On 02/13/2018 10:28 AM, Matthew Knepley wrote: >> On Tue, Feb 13, 2018 at 11:30 AM, Smith, Barry F. wrote: >> > On Feb 13, 2018, at 10:12 AM, Mark Adams wrote: >> > >> > FYI, we were able to get hypre with threads working on KNL on Cori by going down to -O1 optimization. We are getting about 2x speedup with 4 threads and 16 MPI processes per socket. Not bad. >> >> In other works using 16 MPI processes with 4 threads per process is twice as fast as running with 64 mpi processes? Could you send the -log_view output for these two cases? >> >> Is that what you mean? I took it to mean >> >> We ran 16MPI processes and got time T. >> We ran 16MPI processes with 4 threads each and got time T/2. >> >> I would likely eat my shirt if 16x4 was 2x faster than 64. >> >> Matt >> >> >> > >> > There error, flatlined or slightly diverging hypre solves, occurred even in flat MPI runs with openmp=1. >> >> But the answers are wrong as soon as you turn on OpenMP? >> >> Thanks >> >> Barry >> >> >> > >> > We are going to test the Haswell nodes next. 
>> > >> > On Thu, Jan 25, 2018 at 4:16 PM, Mark Adams wrote: >> > Baky (cc'ed) is getting a strange error on Cori/KNL at NERSC. Using maint it runs fine with -with-openmp=0, it runs fine with -with-openmp=1 and gamg, but with hypre and -with-openmp=1, even running with flat MPI, the solver seems flatline (see attached and notice that the residual starts to creep after a few time steps). >> > >> > Maybe you can suggest a hypre test that I can run? >> > >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ > > > From mfadams at lbl.gov Wed Feb 14 04:36:09 2018 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 14 Feb 2018 05:36:09 -0500 Subject: [petsc-users] with-openmp error with hypre In-Reply-To: References: <2109537E-FC24-4879-912E-101181A2F0FC@anl.gov> Message-ID: > > > > > > We have been tracking down what look like compiler bugs and we have only > taken at peak performance to make sure we are not wasting our time with > threads. > > You are wasting your time. There are better ways to deal with global > metadata than with threads. > OK while agree with Barry let me just add for Baky's benefit if nothing else. You can write efficient code with thread programing models (data shared by default) but a thread PM does not help in developing the good data models that are required for efficient programs. And you can write crappy code with MPI shared memory. While a good start, just putting your shared memory in an MPI shared memory window will not make your code faster. Experience indicates that in general thread models are less efficient in terms of programmer resources. Threads are a pain in the long run. While this experience (Petsc/hypre fails when going from -O1 to -O2 on KNL and -with-openmp=1 even on flat MPI runs) is only anecdotal and HPC is going to involve pain no matter what you do, this may be an example of threads biting you. It is easier for everyone, compiler writers and programmers, to reason about a program where threads live in their own address space, you need to decompose your data at a fine level to get good performance anyway, and you can use MPI shared memory when you really need it. I wish Chombo would get rid of OpenMP but that is not likely to happen any time soon. Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 14 07:27:15 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Feb 2018 08:27:15 -0500 Subject: [petsc-users] with-openmp error with hypre In-Reply-To: References: <2109537E-FC24-4879-912E-101181A2F0FC@anl.gov> Message-ID: On Wed, Feb 14, 2018 at 5:36 AM, Mark Adams wrote: > >> > >> > We have been tracking down what look like compiler bugs and we have >> only taken at peak performance to make sure we are not wasting our time >> with threads. >> >> You are wasting your time. There are better ways to deal with global >> metadata than with threads. >> > > OK while agree with Barry let me just add for Baky's benefit if nothing > else. > > You can write efficient code with thread programing models (data shared by > default) but a thread PM does not help in developing the good data models > that are required for efficient programs. And you can write crappy code > with MPI shared memory. 
While a good start, just putting your shared memory > in an MPI shared memory window will not make your code faster. Experience > indicates that in general thread models are less efficient in terms of > programmer resources. Threads are a pain in the long run. > > While this experience (Petsc/hypre fails when going from -O1 to -O2 on KNL > and -with-openmp=1 even on flat MPI runs) is only anecdotal and HPC is > going to involve pain no matter what you do, this may be an example of > threads biting you. > > It is easier for everyone, compiler writers and programmers, to reason > about a program where threads live in their own address space, you need to > decompose your data at a fine level to get good performance anyway, and you > can use MPI shared memory when you really need it. I wish Chombo would get > rid of OpenMP but that is not likely to happen any time soon. > Your point about data decomposition is a good one. Even if you want to run with threads, you must decompose your data intelligently to get good performance. Can't you do the MPI shared work and still pass it off as work necessary for threading anyway? Matt > Mark > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Wed Feb 14 08:29:32 2018 From: epscodes at gmail.com (Xiangdong) Date: Wed, 14 Feb 2018 09:29:32 -0500 Subject: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse In-Reply-To: References: Message-ID: The reason for the operation invdiag(A)*A is to have a decoupled matrix/physics for preconditioning. For example, after the transformation, the diagonal block is identity matrix ( e.g. [1,0,0;0,1,0;0,0,1] for bs=3). One can extract a submatrix (e.g. corresponding to only first unknown) and apply special preconditioners for the extracted/decoupled matrix. The motivation is that after the transformation, one can get a better decoupled matrix to preserve the properties of the unknowns. Thanks. Xiangdong On Tue, Feb 13, 2018 at 6:27 PM, Smith, Barry F. wrote: > > In general you probably don't want to do this. Most good preconditioners > (like AMG) rely on the matrix having the "natural" scaling that arises from > the discretization and doing a scaling like you describe destroys that > natural scaling. You can use PCPBJACOBI to use point block Jacobi > preconditioner on the matrix without needing to do the scaling up front. > The ILU preconditioners for BAIJ matrices work directly with the block > structure so again pre-scaling the matrix buys you nothing. PETSc doesn't > have any particularly efficient routines for computing what you desire, the > only way to get something truly efficient is to write the code directly > using the BAIJ data structure, doable but probably not worth it. > > Barry > > > > On Feb 13, 2018, at 5:21 PM, Xiangdong wrote: > > > > Hello everyone, > > > > I have a block sparse matrices A created from the DMDA3d. Before passing > the matrix to ksp solver, I want to apply a transformation to this matrix: > namely A:= invdiag(A)*A. Here invdiag(A) is the inverse of the block > diagonal of A. What is the best way to get the transformed matrix? > > > > At this moment, I created a new mat IDA=inv(diag(A)) by looping through > each row and call MatMatMult to get B=invdiag(A)*A, then destroy the > temporary matrix B. 
However, I prefer the in-place transformation if > possible, namely, without the additional matrix B for memory saving purpose. > > > > Do you have any suggestion on compute invdiag(A)*A for mpibaij matrix? > > > > Thanks for your help. > > > > Best, > > Xiangdong > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 14 08:39:52 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Feb 2018 09:39:52 -0500 Subject: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse In-Reply-To: References: Message-ID: On Wed, Feb 14, 2018 at 9:29 AM, Xiangdong wrote: > The reason for the operation invdiag(A)*A is to have a decoupled > matrix/physics for preconditioning. For example, after the transformation, > the diagonal block is identity matrix ( e.g. [1,0,0;0,1,0;0,0,1] for > bs=3). One can extract a submatrix (e.g. corresponding to only first > unknown) and apply special preconditioners for the extracted/decoupled > matrix. The motivation is that after the transformation, one can get a > better decoupled matrix to preserve the properties of the unknowns. > Barry's point is that this operation is usually rolled into the preconditioner itself, as in his example of PBJACOBI. Are you building this preconditioner yourself? Matt > Thanks. > > Xiangdong > > On Tue, Feb 13, 2018 at 6:27 PM, Smith, Barry F. > wrote: > >> >> In general you probably don't want to do this. Most good preconditioners >> (like AMG) rely on the matrix having the "natural" scaling that arises from >> the discretization and doing a scaling like you describe destroys that >> natural scaling. You can use PCPBJACOBI to use point block Jacobi >> preconditioner on the matrix without needing to do the scaling up front. >> The ILU preconditioners for BAIJ matrices work directly with the block >> structure so again pre-scaling the matrix buys you nothing. PETSc doesn't >> have any particularly efficient routines for computing what you desire, the >> only way to get something truly efficient is to write the code directly >> using the BAIJ data structure, doable but probably not worth it. >> >> Barry >> >> >> > On Feb 13, 2018, at 5:21 PM, Xiangdong wrote: >> > >> > Hello everyone, >> > >> > I have a block sparse matrices A created from the DMDA3d. Before >> passing the matrix to ksp solver, I want to apply a transformation to this >> matrix: namely A:= invdiag(A)*A. Here invdiag(A) is the inverse of the >> block diagonal of A. What is the best way to get the transformed matrix? >> > >> > At this moment, I created a new mat IDA=inv(diag(A)) by looping through >> each row and call MatMatMult to get B=invdiag(A)*A, then destroy the >> temporary matrix B. However, I prefer the in-place transformation if >> possible, namely, without the additional matrix B for memory saving purpose. >> > >> > Do you have any suggestion on compute invdiag(A)*A for mpibaij matrix? >> > >> > Thanks for your help. >> > >> > Best, >> > Xiangdong >> > >> > >> > >> > >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
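[Editorial sketch] For completeness, Barry's PCPBJACOBI suggestion from earlier in this thread looks roughly like the following; A, b and x are assumed to already exist, and error checking is omitted.

  /* Point-block Jacobi preconditioning: the solver applies invdiag(A)
     blockwise itself, so no explicit pre-scaling of A is needed. */
  KSP ksp;
  PC  pc;

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCPBJACOBI);
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);

The same thing can be selected at run time with -pc_type pbjacobi.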
URL: From epscodes at gmail.com Wed Feb 14 08:45:52 2018 From: epscodes at gmail.com (Xiangdong) Date: Wed, 14 Feb 2018 09:45:52 -0500 Subject: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse In-Reply-To: References: Message-ID: Yes, the preconditioner is like a multi-stage pccomposite preconditioner. In one stage, I just precondition a subset of matrix corresponding to certain physical unknowns. To get a better decoupled submatrix, I need to apply that operation. Thanks. Xiangdong On Wed, Feb 14, 2018 at 9:39 AM, Matthew Knepley wrote: > On Wed, Feb 14, 2018 at 9:29 AM, Xiangdong wrote: > >> The reason for the operation invdiag(A)*A is to have a decoupled >> matrix/physics for preconditioning. For example, after the transformation, >> the diagonal block is identity matrix ( e.g. [1,0,0;0,1,0;0,0,1] for >> bs=3). One can extract a submatrix (e.g. corresponding to only first >> unknown) and apply special preconditioners for the extracted/decoupled >> matrix. The motivation is that after the transformation, one can get a >> better decoupled matrix to preserve the properties of the unknowns. >> > > Barry's point is that this operation is usually rolled into the > preconditioner itself, as in his example of PBJACOBI. Are you building this > preconditioner yourself? > > Matt > > >> Thanks. >> >> Xiangdong >> >> On Tue, Feb 13, 2018 at 6:27 PM, Smith, Barry F. >> wrote: >> >>> >>> In general you probably don't want to do this. Most good >>> preconditioners (like AMG) rely on the matrix having the "natural" scaling >>> that arises from the discretization and doing a scaling like you describe >>> destroys that natural scaling. You can use PCPBJACOBI to use point block >>> Jacobi preconditioner on the matrix without needing to do the scaling up >>> front. The ILU preconditioners for BAIJ matrices work directly with the >>> block structure so again pre-scaling the matrix buys you nothing. PETSc >>> doesn't have any particularly efficient routines for computing what you >>> desire, the only way to get something truly efficient is to write the code >>> directly using the BAIJ data structure, doable but probably not worth it. >>> >>> Barry >>> >>> >>> > On Feb 13, 2018, at 5:21 PM, Xiangdong wrote: >>> > >>> > Hello everyone, >>> > >>> > I have a block sparse matrices A created from the DMDA3d. Before >>> passing the matrix to ksp solver, I want to apply a transformation to this >>> matrix: namely A:= invdiag(A)*A. Here invdiag(A) is the inverse of the >>> block diagonal of A. What is the best way to get the transformed matrix? >>> > >>> > At this moment, I created a new mat IDA=inv(diag(A)) by looping >>> through each row and call MatMatMult to get B=invdiag(A)*A, then destroy >>> the temporary matrix B. However, I prefer the in-place transformation if >>> possible, namely, without the additional matrix B for memory saving purpose. >>> > >>> > Do you have any suggestion on compute invdiag(A)*A for mpibaij matrix? >>> > >>> > Thanks for your help. >>> > >>> > Best, >>> > Xiangdong >>> > >>> > >>> > >>> > >>> >>> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 14 08:49:54 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) 
Date: Wed, 14 Feb 2018 14:49:54 +0000 Subject: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse In-Reply-To: References: Message-ID: Hmm, I never had this idea presented to me, I have no way to know if it is particularly good or bad. So essentially you transform the matrix "decoupling the physics alone the diagonal" and then do PCFIELDSPLIT instead of using PCFIELDSPLIT directly on the original equations. Maybe in the long run this should be an option to PCFIEDLSPLIT. In general we like the solvers to manage any transformations, not require transformations before calling the solvers. I have to think about this. Barry > On Feb 14, 2018, at 8:29 AM, Xiangdong wrote: > > The reason for the operation invdiag(A)*A is to have a decoupled matrix/physics for preconditioning. For example, after the transformation, the diagonal block is identity matrix ( e.g. [1,0,0;0,1,0;0,0,1] for bs=3). One can extract a submatrix (e.g. corresponding to only first unknown) and apply special preconditioners for the extracted/decoupled matrix. The motivation is that after the transformation, one can get a better decoupled matrix to preserve the properties of the unknowns. > > Thanks. > > Xiangdong > > On Tue, Feb 13, 2018 at 6:27 PM, Smith, Barry F. wrote: > > In general you probably don't want to do this. Most good preconditioners (like AMG) rely on the matrix having the "natural" scaling that arises from the discretization and doing a scaling like you describe destroys that natural scaling. You can use PCPBJACOBI to use point block Jacobi preconditioner on the matrix without needing to do the scaling up front. The ILU preconditioners for BAIJ matrices work directly with the block structure so again pre-scaling the matrix buys you nothing. PETSc doesn't have any particularly efficient routines for computing what you desire, the only way to get something truly efficient is to write the code directly using the BAIJ data structure, doable but probably not worth it. > > Barry > > > > On Feb 13, 2018, at 5:21 PM, Xiangdong wrote: > > > > Hello everyone, > > > > I have a block sparse matrices A created from the DMDA3d. Before passing the matrix to ksp solver, I want to apply a transformation to this matrix: namely A:= invdiag(A)*A. Here invdiag(A) is the inverse of the block diagonal of A. What is the best way to get the transformed matrix? > > > > At this moment, I created a new mat IDA=inv(diag(A)) by looping through each row and call MatMatMult to get B=invdiag(A)*A, then destroy the temporary matrix B. However, I prefer the in-place transformation if possible, namely, without the additional matrix B for memory saving purpose. > > > > Do you have any suggestion on compute invdiag(A)*A for mpibaij matrix? > > > > Thanks for your help. > > > > Best, > > Xiangdong > > > > > > > > > > From epscodes at gmail.com Wed Feb 14 09:10:43 2018 From: epscodes at gmail.com (Xiangdong) Date: Wed, 14 Feb 2018 10:10:43 -0500 Subject: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse In-Reply-To: References: Message-ID: The idea goes back to the alternate-block-factorization (ABF) method https://link.springer.com/article/10.1007/BF01932753 and is widely used in the reservoir simulation, where the unknowns are pressure and saturation. Although the coupled equations are parabolic, the pressure equations/variables are more elliptic and the saturation equations are more hyperbolic. 
People always decouple the transformed linear equation to obtain a better (more elliptical) pressure matrix and then apply the AMG preconditioner on the decoupled matrix. https://link.springer.com/article/10.1007/s00791-016-0273-3 Thanks. Xiangdong On Wed, Feb 14, 2018 at 9:49 AM, Smith, Barry F. wrote: > > Hmm, I never had this idea presented to me, I have no way to know if it > is particularly good or bad. So essentially you transform the matrix > "decoupling the physics alone the diagonal" and then do PCFIELDSPLIT > instead of using PCFIELDSPLIT directly on the original equations. > > Maybe in the long run this should be an option to PCFIEDLSPLIT. In > general we like the solvers to manage any transformations, not require > transformations before calling the solvers. I have to think about this. > > Barry > > > > On Feb 14, 2018, at 8:29 AM, Xiangdong wrote: > > > > The reason for the operation invdiag(A)*A is to have a decoupled > matrix/physics for preconditioning. For example, after the transformation, > the diagonal block is identity matrix ( e.g. [1,0,0;0,1,0;0,0,1] for > bs=3). One can extract a submatrix (e.g. corresponding to only first > unknown) and apply special preconditioners for the extracted/decoupled > matrix. The motivation is that after the transformation, one can get a > better decoupled matrix to preserve the properties of the unknowns. > > > > Thanks. > > > > Xiangdong > > > > On Tue, Feb 13, 2018 at 6:27 PM, Smith, Barry F. > wrote: > > > > In general you probably don't want to do this. Most good > preconditioners (like AMG) rely on the matrix having the "natural" scaling > that arises from the discretization and doing a scaling like you describe > destroys that natural scaling. You can use PCPBJACOBI to use point block > Jacobi preconditioner on the matrix without needing to do the scaling up > front. The ILU preconditioners for BAIJ matrices work directly with the > block structure so again pre-scaling the matrix buys you nothing. PETSc > doesn't have any particularly efficient routines for computing what you > desire, the only way to get something truly efficient is to write the code > directly using the BAIJ data structure, doable but probably not worth it. > > > > Barry > > > > > > > On Feb 13, 2018, at 5:21 PM, Xiangdong wrote: > > > > > > Hello everyone, > > > > > > I have a block sparse matrices A created from the DMDA3d. Before > passing the matrix to ksp solver, I want to apply a transformation to this > matrix: namely A:= invdiag(A)*A. Here invdiag(A) is the inverse of the > block diagonal of A. What is the best way to get the transformed matrix? > > > > > > At this moment, I created a new mat IDA=inv(diag(A)) by looping > through each row and call MatMatMult to get B=invdiag(A)*A, then destroy > the temporary matrix B. However, I prefer the in-place transformation if > possible, namely, without the additional matrix B for memory saving purpose. > > > > > > Do you have any suggestion on compute invdiag(A)*A for mpibaij matrix? > > > > > > Thanks for your help. > > > > > > Best, > > > Xiangdong > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Wed Feb 14 09:27:57 2018 From: jed at jedbrown.org (Jed Brown) Date: Wed, 14 Feb 2018 08:27:57 -0700 Subject: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse In-Reply-To: References: Message-ID: <87r2pnsi9u.fsf@jedbrown.org> "Smith, Barry F." 
writes: > Hmm, I never had this idea presented to me, I have no way to know if > it is particularly good or bad. So essentially you transform the > matrix "decoupling the physics alone the diagonal" and then do > PCFIELDSPLIT instead of using PCFIELDSPLIT directly on the original > equations. We talked about this several years ago, e.g., in the context of a low-Mach preconditioner (acting in the pressure space) when using conservative variables. From danyang.su at gmail.com Wed Feb 14 12:47:14 2018 From: danyang.su at gmail.com (Danyang Su) Date: Wed, 14 Feb 2018 10:47:14 -0800 Subject: [petsc-users] Add unstructured grid capability to existing structured grid code Message-ID: Dear All, I have a reactive transport code that was first developed using structured grid and parallelized using PETSc. Both sequential version (with or without PETSc) and parallel version work fine. Recently I have finished the unstructured grid capability for the sequential version. Next step work is to modify the necessary part to make the code parallelized using unstructured grid. For the structured grid code, it follows the following steps. !domain decomposition DMDACreate3D() DMDAGetInfo() DMDAGetCorners() !timeloop begins !calculate matrix entry and rhs ... Solve Ax=b using PETSc DMGlobalToLocalBegin() DMGlobalToLocalEnd() ... !end of timeloop So far as I know, the domain decomposition part need to be modified. I plan to use PETSc DMPlex class to do this job. Is this the best way to port the code? DMPlexCreateFromFile() DMPlexDistribute() !timeloop begins !calculate matrix entry and rhs ... Solve Ax=b using PETSc DMGlobalToLocalBegin() DMGlobalToLocalEnd() ... !end of timeloop Thanks, Danyang From bsmith at mcs.anl.gov Wed Feb 14 13:57:20 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 14 Feb 2018 19:57:20 +0000 Subject: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse In-Reply-To: References: Message-ID: <42432050-8F39-4AC7-8FAD-B53F0D27B0CF@anl.gov> In the PETSc git branch barry/feature-baij-blockdiagonal-scale I have done the "heavy lifting" for what you need. See https://bitbucket.org/petsc/petsc/branch/barry/feature-baij-blockdiagonal-scale It scales the Seq BAIJ matrix by its block diagonal. You will need to write a routine to also scale the right hand side vector by the block diagonal and then you can try the preconditioner for sequential code. Write something like VecBlockDiagonalScale(Vec,const PetscScalar *). You get the block size from the vector. Later you or I can add the parallel version (not much more difficult). I don't have time to work on it now. Let us know if you have any difficulties. Barry > On Feb 14, 2018, at 9:10 AM, Xiangdong wrote: > > The idea goes back to the alternate-block-factorization (ABF) method > > https://link.springer.com/article/10.1007/BF01932753 > > and is widely used in the reservoir simulation, where the unknowns are pressure and saturation. Although the coupled equations are parabolic, the pressure equations/variables are more elliptic and the saturation equations are more hyperbolic. People always decouple the transformed linear equation to obtain a better (more elliptical) pressure matrix and then apply the AMG preconditioner on the decoupled matrix. > > https://link.springer.com/article/10.1007/s00791-016-0273-3 > > Thanks. > > Xiangdong > > On Wed, Feb 14, 2018 at 9:49 AM, Smith, Barry F. wrote: > > Hmm, I never had this idea presented to me, I have no way to know if it is particularly good or bad. 
So essentially you transform the matrix "decoupling the physics alone the diagonal" and then do PCFIELDSPLIT instead of using PCFIELDSPLIT directly on the original equations. > > Maybe in the long run this should be an option to PCFIEDLSPLIT. In general we like the solvers to manage any transformations, not require transformations before calling the solvers. I have to think about this. > > Barry > > > > On Feb 14, 2018, at 8:29 AM, Xiangdong wrote: > > > > The reason for the operation invdiag(A)*A is to have a decoupled matrix/physics for preconditioning. For example, after the transformation, the diagonal block is identity matrix ( e.g. [1,0,0;0,1,0;0,0,1] for bs=3). One can extract a submatrix (e.g. corresponding to only first unknown) and apply special preconditioners for the extracted/decoupled matrix. The motivation is that after the transformation, one can get a better decoupled matrix to preserve the properties of the unknowns. > > > > Thanks. > > > > Xiangdong > > > > On Tue, Feb 13, 2018 at 6:27 PM, Smith, Barry F. wrote: > > > > In general you probably don't want to do this. Most good preconditioners (like AMG) rely on the matrix having the "natural" scaling that arises from the discretization and doing a scaling like you describe destroys that natural scaling. You can use PCPBJACOBI to use point block Jacobi preconditioner on the matrix without needing to do the scaling up front. The ILU preconditioners for BAIJ matrices work directly with the block structure so again pre-scaling the matrix buys you nothing. PETSc doesn't have any particularly efficient routines for computing what you desire, the only way to get something truly efficient is to write the code directly using the BAIJ data structure, doable but probably not worth it. > > > > Barry > > > > > > > On Feb 13, 2018, at 5:21 PM, Xiangdong wrote: > > > > > > Hello everyone, > > > > > > I have a block sparse matrices A created from the DMDA3d. Before passing the matrix to ksp solver, I want to apply a transformation to this matrix: namely A:= invdiag(A)*A. Here invdiag(A) is the inverse of the block diagonal of A. What is the best way to get the transformed matrix? > > > > > > At this moment, I created a new mat IDA=inv(diag(A)) by looping through each row and call MatMatMult to get B=invdiag(A)*A, then destroy the temporary matrix B. However, I prefer the in-place transformation if possible, namely, without the additional matrix B for memory saving purpose. > > > > > > Do you have any suggestion on compute invdiag(A)*A for mpibaij matrix? > > > > > > Thanks for your help. > > > > > > Best, > > > Xiangdong > > > > > > > > > > > > > > > > > > From epscodes at gmail.com Wed Feb 14 14:52:42 2018 From: epscodes at gmail.com (Xiangdong) Date: Wed, 14 Feb 2018 15:52:42 -0500 Subject: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse In-Reply-To: <42432050-8F39-4AC7-8FAD-B53F0D27B0CF@anl.gov> References: <42432050-8F39-4AC7-8FAD-B53F0D27B0CF@anl.gov> Message-ID: Thanks a lot, Barry! I see that you had implemented the bs=3 special case. I will play with these codes and add at least bs=2 case and try to get it working for parallel baij. I will let you know the update. Thank you. Xiangdong On Wed, Feb 14, 2018 at 2:57 PM, Smith, Barry F. wrote: > > In the PETSc git branch barry/feature-baij-blockdiagonal-scale I have > done the "heavy lifting" for what you need. See > https://bitbucket.org/petsc/petsc/branch/barry/feature- > baij-blockdiagonal-scale > > It scales the Seq BAIJ matrix by its block diagonal. 
You will need to > write a routine to also scale the right hand side vector by the block > diagonal and then you can try the preconditioner for sequential code. Write > something like VecBlockDiagonalScale(Vec,const PetscScalar *). You get > the block size from the vector. > > > Later you or I can add the parallel version (not much more difficult). I > don't have time to work on it now. > > Let us know if you have any difficulties. > > > Barry > > > > On Feb 14, 2018, at 9:10 AM, Xiangdong wrote: > > > > The idea goes back to the alternate-block-factorization (ABF) method > > > > https://link.springer.com/article/10.1007/BF01932753 > > > > and is widely used in the reservoir simulation, where the unknowns are > pressure and saturation. Although the coupled equations are parabolic, the > pressure equations/variables are more elliptic and the saturation equations > are more hyperbolic. People always decouple the transformed linear equation > to obtain a better (more elliptical) pressure matrix and then apply the AMG > preconditioner on the decoupled matrix. > > > > https://link.springer.com/article/10.1007/s00791-016-0273-3 > > > > Thanks. > > > > Xiangdong > > > > On Wed, Feb 14, 2018 at 9:49 AM, Smith, Barry F. > wrote: > > > > Hmm, I never had this idea presented to me, I have no way to know if > it is particularly good or bad. So essentially you transform the matrix > "decoupling the physics alone the diagonal" and then do PCFIELDSPLIT > instead of using PCFIELDSPLIT directly on the original equations. > > > > Maybe in the long run this should be an option to PCFIEDLSPLIT. In > general we like the solvers to manage any transformations, not require > transformations before calling the solvers. I have to think about this. > > > > Barry > > > > > > > On Feb 14, 2018, at 8:29 AM, Xiangdong wrote: > > > > > > The reason for the operation invdiag(A)*A is to have a decoupled > matrix/physics for preconditioning. For example, after the transformation, > the diagonal block is identity matrix ( e.g. [1,0,0;0,1,0;0,0,1] for > bs=3). One can extract a submatrix (e.g. corresponding to only first > unknown) and apply special preconditioners for the extracted/decoupled > matrix. The motivation is that after the transformation, one can get a > better decoupled matrix to preserve the properties of the unknowns. > > > > > > Thanks. > > > > > > Xiangdong > > > > > > On Tue, Feb 13, 2018 at 6:27 PM, Smith, Barry F. > wrote: > > > > > > In general you probably don't want to do this. Most good > preconditioners (like AMG) rely on the matrix having the "natural" scaling > that arises from the discretization and doing a scaling like you describe > destroys that natural scaling. You can use PCPBJACOBI to use point block > Jacobi preconditioner on the matrix without needing to do the scaling up > front. The ILU preconditioners for BAIJ matrices work directly with the > block structure so again pre-scaling the matrix buys you nothing. PETSc > doesn't have any particularly efficient routines for computing what you > desire, the only way to get something truly efficient is to write the code > directly using the BAIJ data structure, doable but probably not worth it. > > > > > > Barry > > > > > > > > > > On Feb 13, 2018, at 5:21 PM, Xiangdong wrote: > > > > > > > > Hello everyone, > > > > > > > > I have a block sparse matrices A created from the DMDA3d. Before > passing the matrix to ksp solver, I want to apply a transformation to this > matrix: namely A:= invdiag(A)*A. 
Here invdiag(A) is the inverse of the > block diagonal of A. What is the best way to get the transformed matrix? > > > > > > > > At this moment, I created a new mat IDA=inv(diag(A)) by looping > through each row and call MatMatMult to get B=invdiag(A)*A, then destroy > the temporary matrix B. However, I prefer the in-place transformation if > possible, namely, without the additional matrix B for memory saving purpose. > > > > > > > > Do you have any suggestion on compute invdiag(A)*A for mpibaij > matrix? > > > > > > > > Thanks for your help. > > > > > > > > Best, > > > > Xiangdong > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Feb 14 14:58:59 2018 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 14 Feb 2018 15:58:59 -0500 Subject: [petsc-users] with-openmp error with hypre In-Reply-To: References: <2109537E-FC24-4879-912E-101181A2F0FC@anl.gov> Message-ID: > > > Your point about data decomposition is a good one. Even if you want to run > with threads, you must decompose your data intelligently > to get good performance. Can't you do the MPI shared work and still pass > it off as work necessary for threading anyway? > > We don't have any resources to change the code. Baky is an application PD and just has time and interest to work with me to optimize parameters. We are just grabbing low hanging fruit. Then we can see where we are and quantify the potential benefits of implementing a better data model. Mark > Matt > > >> Mark >> > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Wed Feb 14 15:57:22 2018 From: fdkong.jd at gmail.com (Fande Kong) Date: Wed, 14 Feb 2018 14:57:22 -0700 Subject: [petsc-users] How to efficiently represent a diagonal matrix? Message-ID: Hi All, If a matrix is always diagonal, what a good way to represent the matrix? Still MPIAIJ, MPIBAIJ? Can we have a specific implementation for this? Fande, -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 14 16:26:53 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Feb 2018 17:26:53 -0500 Subject: [petsc-users] Add unstructured grid capability to existing structured grid code In-Reply-To: References: Message-ID: On Wed, Feb 14, 2018 at 1:47 PM, Danyang Su wrote: > Dear All, > > I have a reactive transport code that was first developed using structured > grid and parallelized using PETSc. Both sequential version (with or without > PETSc) and parallel version work fine. Recently I have finished the > unstructured grid capability for the sequential version. Next step work is > to modify the necessary part to make the code parallelized using > unstructured grid. > > For the structured grid code, it follows the following steps. > > !domain decomposition > > DMDACreate3D() > > DMDAGetInfo() > > DMDAGetCorners() > > !timeloop begins > > !calculate matrix entry and rhs > ... > Solve Ax=b using PETSc > > DMGlobalToLocalBegin() > DMGlobalToLocalEnd() > ... > !end of timeloop > > > So far as I know, the domain decomposition part need to be modified. I > plan to use PETSc DMPlex class to do this job. Is this the best way to port > the code? 
> Yes, this is basically correct. Thanks, Matt > > DMPlexCreateFromFile() > > DMPlexDistribute() > > !timeloop begins > > !calculate matrix entry and rhs > ... > Solve Ax=b using PETSc > > DMGlobalToLocalBegin() > DMGlobalToLocalEnd() > ... > !end of timeloop > > Thanks, > > Danyang > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Feb 14 16:27:50 2018 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 14 Feb 2018 17:27:50 -0500 Subject: [petsc-users] with-openmp error with hypre In-Reply-To: References: <2109537E-FC24-4879-912E-101181A2F0FC@anl.gov> Message-ID: And we found that the code runs fine on Haswell. A KNL compiler bug not a PETSc/hypre bug. Mark On Wed, Feb 14, 2018 at 3:58 PM, Mark Adams wrote: > >> Your point about data decomposition is a good one. Even if you want to >> run with threads, you must decompose your data intelligently >> to get good performance. Can't you do the MPI shared work and still pass >> it off as work necessary for threading anyway? >> >> > We don't have any resources to change the code. Baky is an application PD > and just has time and interest to work with me to optimize parameters. We > are just grabbing low hanging fruit. Then we can see where we are and > quantify the potential benefits of implementing a better data model. > > Mark > > >> Matt >> >> >>> Mark >>> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From heeho.park at gmail.com Wed Feb 14 17:05:37 2018 From: heeho.park at gmail.com (HeeHo Park) Date: Wed, 14 Feb 2018 17:05:37 -0600 Subject: [petsc-users] Fwd: what is the equivalent DMDAVecRestoreArray() function in petsc4py? In-Reply-To: References: Message-ID: I just found a user group on PETSc website. Can someone please answer the question below? Thanks! ---------- Forwarded message ---------- From: HeeHo Park Date: Wed, Feb 14, 2018 at 5:04 PM Subject: what is the equivalent DMDAVecRestoreArray() function in petsc4py? To: dalcinl at gmail.com Hi Lisandro, I cannot find DMDAVecRestoreArray() equivalent in petsc4py. I'm trying to set a 1D initial condition like this. def initial_conditions(ts, U, appctx): da = ts.getDM() mstart,xm = da.getCorners() mstart = mstart[0] xm = xm[0] M = da.getSizes()[0] h = 1.0/M mend = mstart + xm u = da.getVecArray(U) for i in range(mstart, mend): u[i] = np.sin(np.pi*i*6.*h) + 3.*np.sin(np.pi*i*2.*h) da.getVecRestoreArray(u) Also, is there a better way to ask questions about petsc4py? a forum? or google-group? Thanks, -- HeeHo Daniel Park -- HeeHo Daniel Park -------------- next part -------------- An HTML attachment was scrubbed... URL: From bikash at umich.edu Wed Feb 14 17:25:16 2018 From: bikash at umich.edu (Bikash Kanungo) Date: Wed, 14 Feb 2018 18:25:16 -0500 Subject: [petsc-users] SNESQN number of past states Message-ID: Hi, I'm using the L-BFGS QN solver. In order to set the number of past states (also the restart size if I use Periodic restart), to say 50, I'm using PetscOptionsSetValue("-snes_qn_m", "50"). 
However while running, it still shows "Stored subspace size: 10", i.e., the default value of 10 is not overwritten. Additionally, I would like to know more about the the -snes_qn_powell_descent option. For Powell restart, one uses a gamma parameter which I believe is defined by the -snes_qn_powell_gamma option*. *What exactly does the descent condition do? It would be useful if there are good references to it. Thanks, Biksah -- Bikash S. Kanungo PhD Student Computational Materials Physics Group Mechanical Engineering University of Michigan -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 14 17:35:17 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 14 Feb 2018 23:35:17 +0000 Subject: [petsc-users] How to efficiently represent a diagonal matrix? In-Reply-To: References: Message-ID: <3EBBBE1A-237B-40B5-A498-845D7F1972A9@anl.gov> What are you doing with the matrix? We don't have a diagonal matrix but it would be easy to add such a beast if it was performance critical, which it probably isn't. Barry > On Feb 14, 2018, at 3:57 PM, Fande Kong wrote: > > Hi All, > > If a matrix is always diagonal, what a good way to represent the matrix? Still MPIAIJ, MPIBAIJ? Can we have a specific implementation for this? > > > Fande, From knepley at gmail.com Wed Feb 14 17:41:04 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Feb 2018 18:41:04 -0500 Subject: [petsc-users] FEM & conformal mesh In-Reply-To: <6daf01cd-d224-62bb-70f6-44c5670451e5@univ-amu.fr> References: <6daf01cd-d224-62bb-70f6-44c5670451e5@univ-amu.fr> Message-ID: On Tue, Jan 23, 2018 at 11:14 AM, Yann Jobic wrote: > Hello, > > I'm trying to understand the numbering of quadrature points in order to > solve the FEM system, and how you manage this numbering in order to allow > conformal mesh. I looked in several files in order to understand. Here's > what I need to understand what you mean by "quadrature points". I mean the following thing: I want to do an integral over the domain for a variational form: = \int_\Omega v . f(u, x) Now I can break this up into a sum of integrals over each element because integrals are additive = \sum_T \int_T v . f(u) And we normally integrate over a reference element T_r instead = \sum_T \int_{T_r} v_r . f_r(u_r, x) |J| And then we approximate these cell integrals with quadrature = \sum_T \sum_q v_r(x_q) . f_r(u_r(x_q), x_q) |J(x_q)| w_q The quadrature points x_q and weights w_q are defined on the reference element. This means they are not shared by definition. Does this make sense? Thanks, Matt > i understood so far (which is not far...) > I took the example of the jacobian calculus. > > I found this comment in dmplexsnes.c, which explains the basic idea: > 1725: /* 1: Get sizes from dm and dmAux */ > 1726: /* 2: Get geometric data */ > 1727: /* 3: Handle boundary values */ > 1728: /* 4: Loop over domain */ > 1729: /* Extract coefficients */ > 1730: /* Loop over fields */ > 1731: /* Set tiling for FE*/ > 1732: /* Integrate FE residual to get elemVec */ > [...] > 1740: /* Loop over domain */ > 1741: /* Add elemVec to locX */ > > I almost get that. 
The critical part should be : > loop over fieldI > 2434: PetscFEGetQuadrature(fe, &quad); > 2435: PetscFEGetDimension(fe, &Nb); > 2436: PetscFEGetTileSizes(fe, NULL, &numBlocks, NULL, &numBatches); > 2437: PetscQuadratureGetData(quad, NULL, NULL, &numQuadPoints, NULL, > NULL); > 2438: blockSize = Nb*numQuadPoints; > 2439: batchSize = numBlocks * blockSize; > 2440: PetscFESetTileSizes(fe, blockSize, numBlocks, batchSize, > numBatches); > 2441: numChunks = numCells / (numBatches*batchSize); > 2442: Ne = numChunks*numBatches*batchSize; > 2443: Nr = numCells % (numBatches*batchSize); > 2444: offset = numCells - Nr; > 2445: for (fieldJ = 0; fieldJ < Nf; ++fieldJ) { > > From there, we can have the numbering with (in dtfe.c) > basic idea : > 6560: $ Loop over element matrix entries (f,fc,g,gc --> i,j): > Which leads to : > 4511: PetscPrintf(PETSC_COMM_SELF, "Element matrix for fields %d and > %d\n", fieldI, fieldJ); > 4512: for (fc = 0; fc < NcI; ++fc) { > 4513: for (f = 0; f < NbI; ++f) { > 4514: const PetscInt i = offsetI + f*NcI+fc; > 4515: for (gc = 0; gc < NcJ; ++gc) { > 4516: for (g = 0; g < NbJ; ++g) { > 4517: const PetscInt j = offsetJ + g*NcJ+gc; > 4518: PetscPrintf(PETSC_COMM_SELF, " elemMat[%d,%d,%d,%d]: > %g\n", f, fc, g, gc, PetscRealPart(elemMat[eOffset+i*totDim+j])); > [...] > 4525: cOffset += totDim; > 4526: cOffsetAux += totDimAux; > 4527: eOffset += PetscSqr(totDim); > 4528: } > > But i didn't get how you can find that there are duplicates quadrature > nodes, and how you manage them. > Maybe i looked at the wrong part of the code ? > > Thanks ! > > Best regards, > > Yann > > > --- > L'absence de virus dans ce courrier ?lectronique a ?t? v?rifi?e par le > logiciel antivirus Avast. > https://www.avast.com/antivirus > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 14 17:43:03 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 14 Feb 2018 23:43:03 +0000 Subject: [petsc-users] SNESQN number of past states In-Reply-To: References: Message-ID: Hmm, 1) make sure you call PetscOptionsSetValue() before you call to SNESSetFromOptions() 2) make sure you call SNESSetFromOptions() 3) did you add a prefix to the SNES object? If so make sure you include it in the PetscOptionsSetValue() call. I can't see a reason why it won't work. Does it work with the PETSc examples for you or not? Regarding the Powell descent option, I'm afraid you'll need to examine the code for exact details. src/snes/impls/qn/qn.c Barry > On Feb 14, 2018, at 5:25 PM, Bikash Kanungo wrote: > > Hi, > > I'm using the L-BFGS QN solver. In order to set the number of past states (also the restart size if I use Periodic restart), to say 50, I'm using PetscOptionsSetValue("-snes_qn_m", "50"). However while running, it still shows "Stored subspace size: 10", i.e., the default value of 10 is not overwritten. > > Additionally, I would like to know more about the the -snes_qn_powell_descent option. For Powell restart, one uses a gamma parameter which I believe is defined by the -snes_qn_powell_gamma option. What exactly does the descent condition do? It would be useful if there are good references to it. > > Thanks, > Biksah > > -- > Bikash S. 
Kanungo > PhD Student > Computational Materials Physics Group > Mechanical Engineering > University of Michigan > From knepley at gmail.com Wed Feb 14 17:57:09 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Feb 2018 18:57:09 -0500 Subject: [petsc-users] Fwd: what is the equivalent DMDAVecRestoreArray() function in petsc4py? In-Reply-To: References: Message-ID: On Wed, Feb 14, 2018 at 6:05 PM, HeeHo Park wrote: > I just found a user group on PETSc website. Can someone please answer the > question below? > I think it will work using with da.getVecArray(U) as u for i in range(mstart, mend): u[i] = np.sin(np.pi*i*6.*h) + 3.*np.sin(np.pi*i*2.*h) Does it? Thanks, Matt Thanks! > > ---------- Forwarded message ---------- > From: HeeHo Park > Date: Wed, Feb 14, 2018 at 5:04 PM > Subject: what is the equivalent DMDAVecRestoreArray() function in petsc4py? > To: dalcinl at gmail.com > > > Hi Lisandro, > > I cannot find DMDAVecRestoreArray() equivalent in petsc4py. > I'm trying to set a 1D initial condition like this. > > def initial_conditions(ts, U, appctx): > da = ts.getDM() > mstart,xm = da.getCorners() > mstart = mstart[0] > xm = xm[0] > M = da.getSizes()[0] > h = 1.0/M > mend = mstart + xm > > u = da.getVecArray(U) > for i in range(mstart, mend): > u[i] = np.sin(np.pi*i*6.*h) + 3.*np.sin(np.pi*i*2.*h) > > da.getVecRestoreArray(u) > > Also, is there a better way to ask questions about petsc4py? a forum? or > google-group? > > Thanks, > > -- > HeeHo Daniel Park > > > > -- > HeeHo Daniel Park > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 14 18:02:32 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Feb 2018 19:02:32 -0500 Subject: [petsc-users] SNESQN number of past states In-Reply-To: References: Message-ID: On Wed, Feb 14, 2018 at 6:43 PM, Smith, Barry F. wrote: > > Hmm, > > 1) make sure you call PetscOptionsSetValue() before you call to > SNESSetFromOptions() > > 2) make sure you call SNESSetFromOptions() > > 3) did you add a prefix to the SNES object? If so make sure you include it > in the PetscOptionsSetValue() call. > > I can't see a reason why it won't work. Does it work with the PETSc > examples for you or not? > > Regarding the Powell descent option, I'm afraid you'll need to examine > the code for exact details. src/snes/impls/qn/qn.c Here is the description https://bitbucket.org/petsc/petsc/src/939b553f045c5ba32242d0d49e80e4934ed3bf76/src/snes/impls/qn/qn.c?at=master&fileviewer=file-view-default#qn.c-451 Thanks, Matt > > Barry > > > > On Feb 14, 2018, at 5:25 PM, Bikash Kanungo wrote: > > > > Hi, > > > > I'm using the L-BFGS QN solver. In order to set the number of past > states (also the restart size if I use Periodic restart), to say 50, I'm > using PetscOptionsSetValue("-snes_qn_m", "50"). However while running, it > still shows "Stored subspace size: 10", i.e., the default value of 10 is > not overwritten. > > > > Additionally, I would like to know more about the the > -snes_qn_powell_descent option. For Powell restart, one uses a gamma > parameter which I believe is defined by the -snes_qn_powell_gamma option. > What exactly does the descent condition do? It would be useful if there are > good references to it. > > > > Thanks, > > Biksah > > > > -- > > Bikash S. 
Kanungo > > PhD Student > > Computational Materials Physics Group > > Mechanical Engineering > > University of Michigan > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From fdkong.jd at gmail.com Wed Feb 14 18:11:40 2018 From: fdkong.jd at gmail.com (Fande Kong) Date: Wed, 14 Feb 2018 17:11:40 -0700 Subject: [petsc-users] How to efficiently represent a diagonal matrix? In-Reply-To: <3EBBBE1A-237B-40B5-A498-845D7F1972A9@anl.gov> References: <3EBBBE1A-237B-40B5-A498-845D7F1972A9@anl.gov> Message-ID: On Wed, Feb 14, 2018 at 4:35 PM, Smith, Barry F. wrote: > > What are you doing with the matrix? > We are doing an explicit method. PDEs are discretized using a finite element method, so there is a mass matrix. The mass matrix will be lumped, and it becomes diagonal. We want to compute the inverse of the lumped matrix, and also do a few of matrix-vector multiplications using the lumped matrix or its inverse. The specific implementation won't make this more efficient? Fande, > > We don't have a diagonal matrix but it would be easy to add such a beast > if it was performance critical, which it probably isn't. > > Barry > > > > > > On Feb 14, 2018, at 3:57 PM, Fande Kong wrote: > > > > Hi All, > > > > If a matrix is always diagonal, what a good way to represent the > matrix? Still MPIAIJ, MPIBAIJ? Can we have a specific implementation for > this? > > > > > > Fande, > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 14 18:18:15 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 14 Feb 2018 19:18:15 -0500 Subject: [petsc-users] How to efficiently represent a diagonal matrix? In-Reply-To: References: <3EBBBE1A-237B-40B5-A498-845D7F1972A9@anl.gov> Message-ID: On Wed, Feb 14, 2018 at 7:11 PM, Fande Kong wrote: > On Wed, Feb 14, 2018 at 4:35 PM, Smith, Barry F. > wrote: > >> >> What are you doing with the matrix? >> > > We are doing an explicit method. PDEs are discretized using a finite > element method, so there is a mass matrix. The mass matrix will be lumped, > and it becomes diagonal. We want to compute the inverse of the lumped > matrix, and also do a few of matrix-vector multiplications using the > lumped matrix or its inverse. > > The specific implementation won't make this more efficient? > I am doing this for Pylith. I think you should just do Vec operations, and pull the inverse mass matrix to the rhs. Matt > Fande, > > >> >> We don't have a diagonal matrix but it would be easy to add such a >> beast if it was performance critical, which it probably isn't. >> >> Barry >> >> >> >> >> > On Feb 14, 2018, at 3:57 PM, Fande Kong wrote: >> > >> > Hi All, >> > >> > If a matrix is always diagonal, what a good way to represent the >> matrix? Still MPIAIJ, MPIBAIJ? Can we have a specific implementation for >> this? >> > >> > >> > Fande, >> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
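A minimal sketch of the Vec-based approach suggested above, assuming the lumped mass has already been assembled into a Vec; the function and variable names (ApplyLumpedMassInverse, mlumped, rhs, udot) are illustrative and not part of PETSc or of the code discussed in this thread:

#include <petscvec.h>

/* udot = M_lumped^{-1} * rhs, done entrywise with Vec operations only */
PetscErrorCode ApplyLumpedMassInverse(Vec mlumped, Vec rhs, Vec udot)
{
  Vec            minv;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecDuplicate(mlumped, &minv);CHKERRQ(ierr);
  ierr = VecCopy(mlumped, minv);CHKERRQ(ierr);
  ierr = VecReciprocal(minv);CHKERRQ(ierr);               /* 1/M_ii, entrywise */
  ierr = VecPointwiseMult(udot, minv, rhs);CHKERRQ(ierr); /* udot_i = rhs_i / M_ii */
  ierr = VecDestroy(&minv);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

In a real time loop one would form the reciprocal vector once, outside the loop, and reuse it every step (or simply call VecPointwiseDivide(udot, rhs, mlumped) each step); either way no Mat is needed for the lumped mass.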
URL: From jed at jedbrown.org Wed Feb 14 20:29:58 2018 From: jed at jedbrown.org (Jed Brown) Date: Wed, 14 Feb 2018 19:29:58 -0700 Subject: [petsc-users] How to efficiently represent a diagonal matrix? In-Reply-To: References: <3EBBBE1A-237B-40B5-A498-845D7F1972A9@anl.gov> Message-ID: <87fu63q921.fsf@jedbrown.org> Fande Kong writes: > On Wed, Feb 14, 2018 at 4:35 PM, Smith, Barry F. wrote: > >> >> What are you doing with the matrix? >> > > We are doing an explicit method. PDEs are discretized using a finite > element method, so there is a mass matrix. The mass matrix will be lumped, > and it becomes diagonal. We want to compute the inverse of the lumped > matrix, and also do a few of matrix-vector multiplications using the > lumped matrix or its inverse. > > The specific implementation won't make this more efficient? You can use pretty much any representation and you won't notice the time because you still have to apply the RHS operator and that is vastly more expensive. From bikash at umich.edu Wed Feb 14 21:15:15 2018 From: bikash at umich.edu (Bikash Kanungo) Date: Wed, 14 Feb 2018 22:15:15 -0500 Subject: [petsc-users] SNESQN number of past states In-Reply-To: References: Message-ID: Thanks Barry and Matthew. @Barry: I'm following the same procedure as you've mentioned - PetscOptionsSetValue() precede SNESSetFromOptions. Here's the snippet for my code: ----------------------------------------------------------------------------------------------------------- error = SNESCreate(PETSC_COMM_WORLD,&snes); checkPETScError(error, "SNESCreate failed."); error = SNESSetType(snes, SNESQN); checkPETScError(error, "SNESSetType failed."); error = SNESQNSetType(snes, SNES_QN_LBFGS); checkPETScError(error, "SNESQNSetType failed."); error = SNESQNSetScaleType(snes, SNES_QN_SCALE_SHANNO); checkPETScError(error, "SNESQNSetScaleType failed."); error = SNESQNSetRestartType(snes, SNES_QN_RESTART_PERIODIC); checkPETScError(error, "SNESQNSetRestartType failed."); error = PetscOptionsSetValue("-snes_qn_m","500"); checkPETScError(error, "PETScOptionsSetValue failed."); SNESLineSearch linesearch; error = SNESGetLineSearch(snes,&linesearch); checkPETScError(error, "SNESGetLineSearch failed."); error = SNESLineSearchSetType(linesearch,SNESLINESEARCHCP); checkPETScError(error, "SNESLineSearchSetType failed."); error = PetscOptionsSetValue("-snes_linesearch_max_it", "1"); checkPETScError(error, "PetscOptionsSetValue failed."); error = SNESLineSearchView(linesearch, PETSC_VIEWER_STDOUT_WORLD); checkPETScError(error, "SNESLineSearchView failed."); error =SNESLineSearchSetMonitor(linesearch, PETSC_TRUE); checkPETScError(error, "SNESLineSearchSet Monitor failed."); error = SNESLineSearchSetFromOptions(linesearch); checkPETScError(error, "SNESLineSearchSetFromOptions failed."); SNESLineSearchReason lineSearchReason; error = SNESLineSearchGetReason(linesearch, &lineSearchReason); checkPETScError(error, "SNESLineSearchGetReason failed."); error = SNESSetFunction(snes,r,FormFunction,&petscData); checkPETScError(error, "SNESSetFunction failed."); // // Customize KSP // error = SNESGetKSP(snes,&ksp); checkPETScError(error, "SNESGetKSP failed."); error = KSPSetType(ksp,KSPGMRES); checkPETScError(error, "KSPSetType failed."); error = KSPGMRESSetRestart(ksp,300); checkPETScError(error, "KSPGMRESSetRestart failed."); error = KSPSetInitialGuessNonzero(ksp,PETSC_TRUE); checkPETScError(error, "KSPSetInitialGuessNonzero failed."); error = KSPGetPC(ksp,&pc); checkPETScError(error, "KSPGetPC failed."); error = PCSetType(pc,PCJACOBI); 
checkPETScError(error, "PCSetType failed."); error = PCSetReusePreconditioner(pc,PETSC_TRUE); checkPETScError(error, "PCSetReusePreconditioner failed."); error = KSPSetTolerances(ksp, PETSC_DEFAULT, 1e-15, 1e7, 10000); checkPETScError(error, "KSPSetTolerances failed."); error = KSPSetFromOptions(ksp); checkPETScError(error, "Call to KSPSetFromOptions() failed."); // //get reason for non-convergence // KSPConvergedReason kspReason; error = KSPGetConvergedReason(ksp, &kspReason); checkPETScError(error, "Call to KSPGetConvergedReason() failed."); if(kspReason < 0) { if(debugLevel != 0) std::cout<<"Other kind of divergence in SNES-KSP : "<< kspReason < wrote: > On Wed, Feb 14, 2018 at 6:43 PM, Smith, Barry F. > wrote: > >> >> Hmm, >> >> 1) make sure you call PetscOptionsSetValue() before you call to >> SNESSetFromOptions() >> >> 2) make sure you call SNESSetFromOptions() >> >> 3) did you add a prefix to the SNES object? If so make sure you include >> it in the PetscOptionsSetValue() call. >> >> I can't see a reason why it won't work. Does it work with the PETSc >> examples for you or not? >> >> Regarding the Powell descent option, I'm afraid you'll need to examine >> the code for exact details. src/snes/impls/qn/qn.c > > > Here is the description > > https://bitbucket.org/petsc/petsc/src/939b553f045c5ba32242d0d49e80e4 > 934ed3bf76/src/snes/impls/qn/qn.c?at=master&fileviewer= > file-view-default#qn.c-451 > > Thanks, > > Matt > > >> >> Barry >> >> >> > On Feb 14, 2018, at 5:25 PM, Bikash Kanungo wrote: >> > >> > Hi, >> > >> > I'm using the L-BFGS QN solver. In order to set the number of past >> states (also the restart size if I use Periodic restart), to say 50, I'm >> using PetscOptionsSetValue("-snes_qn_m", "50"). However while running, >> it still shows "Stored subspace size: 10", i.e., the default value of 10 is >> not overwritten. >> > >> > Additionally, I would like to know more about the the >> -snes_qn_powell_descent option. For Powell restart, one uses a gamma >> parameter which I believe is defined by the -snes_qn_powell_gamma option. >> What exactly does the descent condition do? It would be useful if there are >> good references to it. >> > >> > Thanks, >> > Biksah >> > >> > -- >> > Bikash S. Kanungo >> > PhD Student >> > Computational Materials Physics Group >> > Mechanical Engineering >> > University of Michigan >> > >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -- Bikash S. Kanungo PhD Student Computational Materials Physics Group Mechanical Engineering University of Michigan -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 14 22:19:28 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Thu, 15 Feb 2018 04:19:28 +0000 Subject: [petsc-users] How to efficiently represent a diagonal matrix? 
In-Reply-To: <87fu63q921.fsf@jedbrown.org> References: <3EBBBE1A-237B-40B5-A498-845D7F1972A9@anl.gov> <87fu63q921.fsf@jedbrown.org> Message-ID: Fande, I think you should just use AIJ, all the algorithms MatMult, MatFactor, MatSolve when the matrix is diagonal are order n work with a relatively small constant, and the overhead of using AIJ instead of a custom format is probably at most a factor of three and since work is order n and it is a small constant any gain would be lost in the much bigger constants for the rest of the computation. Barry I know Rich doesn't have unlimited money and suspect spending it on almost anything else (like improving the load balancing in libMesh) will pay off far far more. > On Feb 14, 2018, at 8:29 PM, Jed Brown wrote: > > Fande Kong writes: > >> On Wed, Feb 14, 2018 at 4:35 PM, Smith, Barry F. wrote: >> >>> >>> What are you doing with the matrix? >>> >> >> We are doing an explicit method. PDEs are discretized using a finite >> element method, so there is a mass matrix. The mass matrix will be lumped, >> and it becomes diagonal. We want to compute the inverse of the lumped >> matrix, and also do a few of matrix-vector multiplications using the >> lumped matrix or its inverse. >> >> The specific implementation won't make this more efficient? > > You can use pretty much any representation and you won't notice the time > because you still have to apply the RHS operator and that is vastly more > expensive. From bsmith at mcs.anl.gov Wed Feb 14 22:28:35 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Thu, 15 Feb 2018 04:28:35 +0000 Subject: [petsc-users] SNESQN number of past states In-Reply-To: References: Message-ID: I stuck the line PetscOptionsSetValue(NULL,"-snes_qn_m", "50"); in src/snes/examples/tutorials/ex19.c and called it with -da_refine 2 -snes_monitor -snes_type qn -snes_view and the results showed Stored subspace size: 50 so I am afraid it is something unique to exactly your code that is causing it to be not used. If you can send us a complete code that reproduces the problem we can track it down and fix it but without a reproducing code we can't do anything to resolve the problem. Barry > On Feb 14, 2018, at 9:15 PM, Bikash Kanungo wrote: > > Thanks Barry and Matthew. > > @Barry: I'm following the same procedure as you've mentioned - PetscOptionsSetValue() precede SNESSetFromOptions. 
Here's the snippet for my code: > > ----------------------------------------------------------------------------------------------------------- > > error = SNESCreate(PETSC_COMM_WORLD,&snes); > checkPETScError(error, > "SNESCreate failed."); > > error = SNESSetType(snes, SNESQN); > checkPETScError(error, > "SNESSetType failed."); > > error = SNESQNSetType(snes, SNES_QN_LBFGS); > checkPETScError(error, > "SNESQNSetType failed."); > > error = SNESQNSetScaleType(snes, SNES_QN_SCALE_SHANNO); > checkPETScError(error, > "SNESQNSetScaleType failed."); > > error = SNESQNSetRestartType(snes, SNES_QN_RESTART_PERIODIC); > checkPETScError(error, > "SNESQNSetRestartType failed."); > > error = PetscOptionsSetValue("-snes_qn_m","500"); > checkPETScError(error, > "PETScOptionsSetValue failed."); > > SNESLineSearch linesearch; > error = SNESGetLineSearch(snes,&linesearch); > checkPETScError(error, > "SNESGetLineSearch failed."); > > error = SNESLineSearchSetType(linesearch,SNESLINESEARCHCP); > checkPETScError(error, > "SNESLineSearchSetType failed."); > > error = PetscOptionsSetValue("-snes_linesearch_max_it", "1"); > checkPETScError(error, > "PetscOptionsSetValue failed."); > > error = SNESLineSearchView(linesearch, PETSC_VIEWER_STDOUT_WORLD); > checkPETScError(error, > "SNESLineSearchView failed."); > > error =SNESLineSearchSetMonitor(linesearch, > PETSC_TRUE); > checkPETScError(error, > "SNESLineSearchSet Monitor failed."); > > error = SNESLineSearchSetFromOptions(linesearch); > checkPETScError(error, > "SNESLineSearchSetFromOptions failed."); > > SNESLineSearchReason lineSearchReason; > error = SNESLineSearchGetReason(linesearch, &lineSearchReason); > checkPETScError(error, > "SNESLineSearchGetReason failed."); > > error = SNESSetFunction(snes,r,FormFunction,&petscData); > checkPETScError(error, > "SNESSetFunction failed."); > > // > // Customize KSP > // > error = SNESGetKSP(snes,&ksp); > checkPETScError(error, > "SNESGetKSP failed."); > > error = KSPSetType(ksp,KSPGMRES); > checkPETScError(error, > "KSPSetType failed."); > > error = KSPGMRESSetRestart(ksp,300); > checkPETScError(error, > "KSPGMRESSetRestart failed."); > > error = KSPSetInitialGuessNonzero(ksp,PETSC_TRUE); > checkPETScError(error, > "KSPSetInitialGuessNonzero failed."); > > error = KSPGetPC(ksp,&pc); > checkPETScError(error, > "KSPGetPC failed."); > > error = PCSetType(pc,PCJACOBI); > checkPETScError(error, > "PCSetType failed."); > > error = PCSetReusePreconditioner(pc,PETSC_TRUE); > checkPETScError(error, > "PCSetReusePreconditioner failed."); > > error = KSPSetTolerances(ksp, > PETSC_DEFAULT, > 1e-15, > 1e7, > 10000); > checkPETScError(error, > "KSPSetTolerances failed."); > > error = KSPSetFromOptions(ksp); > checkPETScError(error, > "Call to KSPSetFromOptions() failed."); > > // > //get reason for non-convergence > // > KSPConvergedReason kspReason; > error = KSPGetConvergedReason(ksp, &kspReason); > checkPETScError(error, > "Call to KSPGetConvergedReason() failed."); > > if(kspReason < 0) > { > if(debugLevel != 0) > std::cout<<"Other kind of divergence in SNES-KSP : "<< kspReason < > } > > PetscInt lag = 1; > error = SNESSetLagPreconditioner(snes, > lag); > checkPETScError(error, > "Call to SNESSetLagPreconditioner() failed."); > > PetscInt maxFails = 2; > error = SNESSetMaxLinearSolveFailures(snes,maxFails); > checkPETScError(error, > "Call to SNESSetMaxLinearSolveFailures() failed."); > > PetscReal abstol = 1e-13; // absolute convergence tolerance > PetscInt maxit = 100000; > error = SNESSetTolerances(snes, > abstol, > 
PETSC_DEFAULT, > PETSC_DEFAULT, > maxit, > maxit); > checkPETScError(error, > "SNESSetTolerances failed."); > > error = SNESView(snes, > PETSC_VIEWER_STDOUT_WORLD); > checkPETScError(error, > "Call to SNESView() failed."); > > error = SNESMonitorSet(snes,SNESMonitorDefault,PETSC_NULL,PETSC_NULL); > checkPETScError(error, > "Call to SNESMonitorSet() failed."); > > error = SNESSetFromOptions(snes); > checkPETScError(error, > "Call to SNESSetFromOptions() failed."); > > > // > // Solve the system > // > error = SNESSolve(snes,PETSC_NULL,x); > checkPETScError(error, > "Call to SNESSolve() failed."); > > SNESConvergedReason reason; > error = SNESGetConvergedReason(snes,&reason); > checkPETScError(error, > "Call to SNESGetConvergedReason() failed."); > > ------------------------------------------------------------------------------------------------------------------------------------ > > Also, I didn't find any SNESQN examples in my snes/examples folder (using petsc-3.6.3). > Moreover, the Powell descent condition seems to be only declared and then assigned a value through the PetscOptionsReal call. Beyond that I didn't find any other mention of it. I was grepping for powell_downhill variable. (Note: powell_downhill features in 3.6.3 and not in 3.7 version). > > Thanks, > Bikash > > > On Wed, Feb 14, 2018 at 7:02 PM, Matthew Knepley wrote: > On Wed, Feb 14, 2018 at 6:43 PM, Smith, Barry F. wrote: > > Hmm, > > 1) make sure you call PetscOptionsSetValue() before you call to SNESSetFromOptions() > > 2) make sure you call SNESSetFromOptions() > > 3) did you add a prefix to the SNES object? If so make sure you include it in the PetscOptionsSetValue() call. > > I can't see a reason why it won't work. Does it work with the PETSc examples for you or not? > > Regarding the Powell descent option, I'm afraid you'll need to examine the code for exact details. src/snes/impls/qn/qn.c > > Here is the description > > https://bitbucket.org/petsc/petsc/src/939b553f045c5ba32242d0d49e80e4934ed3bf76/src/snes/impls/qn/qn.c?at=master&fileviewer=file-view-default#qn.c-451 > > Thanks, > > Matt > > > Barry > > > > On Feb 14, 2018, at 5:25 PM, Bikash Kanungo wrote: > > > > Hi, > > > > I'm using the L-BFGS QN solver. In order to set the number of past states (also the restart size if I use Periodic restart), to say 50, I'm using PetscOptionsSetValue("-snes_qn_m", "50"). However while running, it still shows "Stored subspace size: 10", i.e., the default value of 10 is not overwritten. > > > > Additionally, I would like to know more about the the -snes_qn_powell_descent option. For Powell restart, one uses a gamma parameter which I believe is defined by the -snes_qn_powell_gamma option. What exactly does the descent condition do? It would be useful if there are good references to it. > > > > Thanks, > > Biksah > > > > -- > > Bikash S. Kanungo > > PhD Student > > Computational Materials Physics Group > > Mechanical Engineering > > University of Michigan > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > -- > Bikash S. Kanungo > PhD Student > Computational Materials Physics Group > Mechanical Engineering > University of Michigan From bsmith at mcs.anl.gov Wed Feb 14 22:44:41 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) 
Date: Thu, 15 Feb 2018 04:44:41 +0000 Subject: [petsc-users] Fourth annual PETSc users meeting Imperial College, London UK, June 4-6, 2018. Message-ID: PETSc users, We are pleased to announce the fourth annual PETSc user meeting will take place at Imperial College, London UK, June 4-6, 2018. For more information and to register please go to http://www.mcs.anl.gov/petsc/meetings/2018/index.html There is some money available for travel support. Key dates: 25 March 2018: Abstract deadline.11 April 2018: Early registration deadline. Hope to see many of you there, The organizing committee From bikash at umich.edu Thu Feb 15 02:48:25 2018 From: bikash at umich.edu (Bikash Kanungo) Date: Thu, 15 Feb 2018 03:48:25 -0500 Subject: [petsc-users] SNESQN number of past states In-Reply-To: References: Message-ID: Thanks again Barry. I figured it out. My SNESView was called before SNESSetFromOptions and hence was showing the default value. Regards, Bikash On Wed, Feb 14, 2018 at 11:28 PM, Smith, Barry F. wrote: > > I stuck the line > > PetscOptionsSetValue(NULL,"-snes_qn_m", "50"); > > in src/snes/examples/tutorials/ex19.c > > and called it with > > -da_refine 2 -snes_monitor -snes_type qn -snes_view > > and the results showed > > Stored subspace size: 50 > > so I am afraid it is something unique to exactly your code that is causing > it to be not used. If you can send us a complete code that reproduces the > problem we can track it down and fix it but without a reproducing code we > can't do anything to resolve the problem. > > Barry > > > > On Feb 14, 2018, at 9:15 PM, Bikash Kanungo wrote: > > > > Thanks Barry and Matthew. > > > > @Barry: I'm following the same procedure as you've mentioned - > PetscOptionsSetValue() precede SNESSetFromOptions. Here's the snippet for > my code: > > > > ------------------------------------------------------------ > ----------------------------------------------- > > > > error = SNESCreate(PETSC_COMM_WORLD,&snes); > > checkPETScError(error, > > "SNESCreate failed."); > > > > error = SNESSetType(snes, SNESQN); > > checkPETScError(error, > > "SNESSetType failed."); > > > > error = SNESQNSetType(snes, SNES_QN_LBFGS); > > checkPETScError(error, > > "SNESQNSetType failed."); > > > > error = SNESQNSetScaleType(snes, SNES_QN_SCALE_SHANNO); > > checkPETScError(error, > > "SNESQNSetScaleType failed."); > > > > error = SNESQNSetRestartType(snes, SNES_QN_RESTART_PERIODIC); > > checkPETScError(error, > > "SNESQNSetRestartType failed."); > > > > error = PetscOptionsSetValue("-snes_qn_m","500"); > > checkPETScError(error, > > "PETScOptionsSetValue failed."); > > > > SNESLineSearch linesearch; > > error = SNESGetLineSearch(snes,&linesearch); > > checkPETScError(error, > > "SNESGetLineSearch failed."); > > > > error = SNESLineSearchSetType(linesearch,SNESLINESEARCHCP); > > checkPETScError(error, > > "SNESLineSearchSetType failed."); > > > > error = PetscOptionsSetValue("-snes_linesearch_max_it", "1"); > > checkPETScError(error, > > "PetscOptionsSetValue failed."); > > > > error = SNESLineSearchView(linesearch, PETSC_VIEWER_STDOUT_WORLD); > > checkPETScError(error, > > "SNESLineSearchView failed."); > > > > error =SNESLineSearchSetMonitor(linesearch, > > PETSC_TRUE); > > checkPETScError(error, > > "SNESLineSearchSet Monitor failed."); > > > > error = SNESLineSearchSetFromOptions(linesearch); > > checkPETScError(error, > > "SNESLineSearchSetFromOptions failed."); > > > > SNESLineSearchReason lineSearchReason; > > error = SNESLineSearchGetReason(linesearch, &lineSearchReason); > > 
checkPETScError(error, > > "SNESLineSearchGetReason failed."); > > > > error = SNESSetFunction(snes,r,FormFunction,&petscData); > > checkPETScError(error, > > "SNESSetFunction failed."); > > > > // > > // Customize KSP > > // > > error = SNESGetKSP(snes,&ksp); > > checkPETScError(error, > > "SNESGetKSP failed."); > > > > error = KSPSetType(ksp,KSPGMRES); > > checkPETScError(error, > > "KSPSetType failed."); > > > > error = KSPGMRESSetRestart(ksp,300); > > checkPETScError(error, > > "KSPGMRESSetRestart failed."); > > > > error = KSPSetInitialGuessNonzero(ksp,PETSC_TRUE); > > checkPETScError(error, > > "KSPSetInitialGuessNonzero failed."); > > > > error = KSPGetPC(ksp,&pc); > > checkPETScError(error, > > "KSPGetPC failed."); > > > > error = PCSetType(pc,PCJACOBI); > > checkPETScError(error, > > "PCSetType failed."); > > > > error = PCSetReusePreconditioner(pc,PETSC_TRUE); > > checkPETScError(error, > > "PCSetReusePreconditioner failed."); > > > > error = KSPSetTolerances(ksp, > > PETSC_DEFAULT, > > 1e-15, > > 1e7, > > 10000); > > checkPETScError(error, > > "KSPSetTolerances failed."); > > > > error = KSPSetFromOptions(ksp); > > checkPETScError(error, > > "Call to KSPSetFromOptions() failed."); > > > > // > > //get reason for non-convergence > > // > > KSPConvergedReason kspReason; > > error = KSPGetConvergedReason(ksp, &kspReason); > > checkPETScError(error, > > "Call to KSPGetConvergedReason() failed."); > > > > if(kspReason < 0) > > { > > if(debugLevel != 0) > > std::cout<<"Other kind of divergence in SNES-KSP : "<< > kspReason < > > > } > > > > PetscInt lag = 1; > > error = SNESSetLagPreconditioner(snes, > > lag); > > checkPETScError(error, > > "Call to SNESSetLagPreconditioner() failed."); > > > > PetscInt maxFails = 2; > > error = SNESSetMaxLinearSolveFailures(snes,maxFails); > > checkPETScError(error, > > "Call to SNESSetMaxLinearSolveFailures() failed."); > > > > PetscReal abstol = 1e-13; // absolute convergence tolerance > > PetscInt maxit = 100000; > > error = SNESSetTolerances(snes, > > abstol, > > PETSC_DEFAULT, > > PETSC_DEFAULT, > > maxit, > > maxit); > > checkPETScError(error, > > "SNESSetTolerances failed."); > > > > error = SNESView(snes, > > PETSC_VIEWER_STDOUT_WORLD); > > checkPETScError(error, > > "Call to SNESView() failed."); > > > > error = SNESMonitorSet(snes,SNESMonitorDefault,PETSC_NULL, > PETSC_NULL); > > checkPETScError(error, > > "Call to SNESMonitorSet() failed."); > > > > error = SNESSetFromOptions(snes); > > checkPETScError(error, > > "Call to SNESSetFromOptions() failed."); > > > > > > // > > // Solve the system > > // > > error = SNESSolve(snes,PETSC_NULL,x); > > checkPETScError(error, > > "Call to SNESSolve() failed."); > > > > SNESConvergedReason reason; > > error = SNESGetConvergedReason(snes,&reason); > > checkPETScError(error, > > "Call to SNESGetConvergedReason() failed."); > > > > ------------------------------------------------------------ > ------------------------------------------------------------------------ > > > > Also, I didn't find any SNESQN examples in my snes/examples folder > (using petsc-3.6.3). > > Moreover, the Powell descent condition seems to be only declared and > then assigned a value through the PetscOptionsReal call. Beyond that I > didn't find any other mention of it. I was grepping for powell_downhill > variable. (Note: powell_downhill features in 3.6.3 and not in 3.7 version). > > > > Thanks, > > Bikash > > > > > > On Wed, Feb 14, 2018 at 7:02 PM, Matthew Knepley > wrote: > > On Wed, Feb 14, 2018 at 6:43 PM, Smith, Barry F. 
> wrote: > > > > Hmm, > > > > 1) make sure you call PetscOptionsSetValue() before you call to > SNESSetFromOptions() > > > > 2) make sure you call SNESSetFromOptions() > > > > 3) did you add a prefix to the SNES object? If so make sure you include > it in the PetscOptionsSetValue() call. > > > > I can't see a reason why it won't work. Does it work with the PETSc > examples for you or not? > > > > Regarding the Powell descent option, I'm afraid you'll need to > examine the code for exact details. src/snes/impls/qn/qn.c > > > > Here is the description > > > > https://bitbucket.org/petsc/petsc/src/939b553f045c5ba32242d0d49e80e4 > 934ed3bf76/src/snes/impls/qn/qn.c?at=master&fileviewer= > file-view-default#qn.c-451 > > > > Thanks, > > > > Matt > > > > > > Barry > > > > > > > On Feb 14, 2018, at 5:25 PM, Bikash Kanungo wrote: > > > > > > Hi, > > > > > > I'm using the L-BFGS QN solver. In order to set the number of past > states (also the restart size if I use Periodic restart), to say 50, I'm > using PetscOptionsSetValue("-snes_qn_m", "50"). However while running, it > still shows "Stored subspace size: 10", i.e., the default value of 10 is > not overwritten. > > > > > > Additionally, I would like to know more about the the > -snes_qn_powell_descent option. For Powell restart, one uses a gamma > parameter which I believe is defined by the -snes_qn_powell_gamma option. > What exactly does the descent condition do? It would be useful if there are > good references to it. > > > > > > Thanks, > > > Biksah > > > > > > -- > > > Bikash S. Kanungo > > > PhD Student > > > Computational Materials Physics Group > > > Mechanical Engineering > > > University of Michigan > > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > > > > -- > > Bikash S. Kanungo > > PhD Student > > Computational Materials Physics Group > > Mechanical Engineering > > University of Michigan > > -- Bikash S. Kanungo PhD Student Computational Materials Physics Group Mechanical Engineering University of Michigan -------------- next part -------------- An HTML attachment was scrubbed... URL: From yann.jobic at univ-amu.fr Thu Feb 15 07:38:25 2018 From: yann.jobic at univ-amu.fr (Yann Jobic) Date: Thu, 15 Feb 2018 14:38:25 +0100 Subject: [petsc-users] FEM & conformal mesh In-Reply-To: References: <6daf01cd-d224-62bb-70f6-44c5670451e5@univ-amu.fr> Message-ID: Hello Matt, Le 15/02/2018 ? 00:41, Matthew Knepley a ?crit?: > On Tue, Jan 23, 2018 at 11:14 AM, Yann Jobic > wrote: > > Hello, > > I'm trying to understand the numbering of quadrature points in > order to solve the FEM system, and how you manage this numbering > in order to allow conformal mesh. I looked in several files in > order to understand. Here's what > > > I need to understand what you mean by "quadrature points". I mean the > following thing: > > ? I want to do an integral over the domain for a variational form: > ? ? = \int_\Omega v . f(u, x) > ? Now I can break this up into a sum of integrals over each element > because integrals are additive > ? ? = \sum_T \int_T v . f(u) > ? And we normally integrate over a reference element T_r instead > ? ? = \sum_T \int_{T_r} v_r . f_r(u_r, x) |J| > ? And then we approximate these cell integrals with quadrature > ? ? = \sum_T \sum_q v_r(x_q) . f_r(u_r(x_q), x_q) |J(x_q)| w_q > ? 
The quadrature points x_q and weights w_q are defined on the > reference element. This means they > ? are not shared by definition. > > Does this make sense? Perfectly. I was mixing the notions of nodes of an element, and the quadrature points. Thanks a lot for the clarification ! Best regards, Yann > > ? Thanks, > > ? ? ?Matt > > i understood so far (which is not far...) > I took the example of the jacobian calculus. > > I found this comment in dmplexsnes.c, which explains the basic idea: > 1725:?? /* 1: Get sizes from dm and dmAux */ > 1726:?? /* 2: Get geometric data */ > 1727:?? /* 3: Handle boundary values */ > 1728:?? /* 4: Loop over domain */ > 1729:?? /*?? Extract coefficients */ > 1730:?? /* Loop over fields */ > 1731:?? /*?? Set tiling for FE*/ > 1732:?? /*?? Integrate FE residual to get elemVec */ > [...] > 1740:?? /* Loop over domain */ > 1741:?? /*?? Add elemVec to locX */ > > I almost get that. The critical part should be : > loop over fieldI > 2434:???? PetscFEGetQuadrature(fe, &quad); > 2435:???? PetscFEGetDimension(fe, &Nb); > 2436:???? PetscFEGetTileSizes(fe, NULL, &numBlocks, NULL, > &numBatches); > 2437:???? PetscQuadratureGetData(quad, NULL, NULL, &numQuadPoints, > NULL, NULL); > 2438:???? blockSize = Nb*numQuadPoints; > 2439:???? batchSize = numBlocks * blockSize; > 2440:???? PetscFESetTileSizes(fe, blockSize, numBlocks, batchSize, > numBatches); > 2441:???? numChunks = numCells / (numBatches*batchSize); > 2442:???? Ne??????? = numChunks*numBatches*batchSize; > 2443:???? Nr??????? = numCells % (numBatches*batchSize); > 2444:???? offset??? = numCells - Nr; > 2445:???? for (fieldJ = 0; fieldJ < Nf; ++fieldJ) { > > From there, we can have the numbering with (in dtfe.c) > basic idea : > 6560: $?? Loop over element matrix entries (f,fc,g,gc --> i,j): > Which leads to : > 4511:?????? PetscPrintf(PETSC_COMM_SELF, "Element matrix for > fields %d and %d\n", fieldI, fieldJ); > 4512:?????? for (fc = 0; fc < NcI; ++fc) { > 4513:???????? for (f = 0; f < NbI; ++f) { > 4514:?????????? const PetscInt i = offsetI + f*NcI+fc; > 4515:?????????? for (gc = 0; gc < NcJ; ++gc) { > 4516:???????????? for (g = 0; g < NbJ; ++g) { > 4517:?????????????? const PetscInt j = offsetJ + g*NcJ+gc; > 4518:?????????????? PetscPrintf(PETSC_COMM_SELF, " > elemMat[%d,%d,%d,%d]: %g\n", f, fc, g, gc, > PetscRealPart(elemMat[eOffset+i*totDim+j])); > [...] > 4525:???? cOffset??? += totDim; > 4526:???? cOffsetAux += totDimAux; > 4527:???? eOffset??? += PetscSqr(totDim); > 4528:?? } > > But i didn't get how you can find that there are duplicates > quadrature nodes, and how you manage them. > Maybe i looked at the wrong part of the code ? > > Thanks ! > > Best regards, > > Yann > > > --- > L'absence de virus dans ce courrier ?lectronique a ?t? v?rifi?e > par le logiciel antivirus Avast. > https://www.avast.com/antivirus > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -- ___________________________ Yann JOBIC HPC engineer IUSTI-CNRS UMR 7343 - Polytech Marseille Technop?le de Ch?teau Gombert 5 rue Enrico Fermi 13453 Marseille cedex 13 Tel : (33) 4 91 10 69 43 Fax : (33) 4 91 10 69 69 --- L'absence de virus dans ce courrier ?lectronique a ?t? v?rifi?e par le logiciel antivirus Avast. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... 
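As a schematic illustration of the reference-element quadrature described above, the loop below sums w_q f(x_q) |J| for one cell; f_ref is a placeholder for the integrand, fe is assumed to be the PetscFE in use, and |J| is taken as a constant for brevity, none of which comes from this thread:

  PetscQuadrature  quad;
  PetscInt         qdim, Nc, Nq, q;
  const PetscReal *qpoints, *qweights;
  PetscReal        integral = 0.0;
  const PetscReal  detJ     = 1.0;              /* |J(x_q)|, assumed constant on this cell */

  PetscFEGetQuadrature(fe, &quad);
  PetscQuadratureGetData(quad, &qdim, &Nc, &Nq, &qpoints, &qweights);
  for (q = 0; q < Nq; ++q) {
    const PetscReal *xq = &qpoints[q*qdim];     /* reference coordinates of point q */
    integral += f_ref(xq) * detJ * qweights[q]; /* accumulate w_q f(x_q) |J| */
  }

The points and weights live on the reference cell, so the same arrays are reused for every element; only the Jacobian and the mapped coordinates change from cell to cell, which is why quadrature points are never shared between elements.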
URL: From heeho.park at gmail.com Thu Feb 15 15:13:58 2018 From: heeho.park at gmail.com (HeeHo Park) Date: Thu, 15 Feb 2018 15:13:58 -0600 Subject: [petsc-users] Fwd: what is the equivalent DMDAVecRestoreArray() function in petsc4py? In-Reply-To: References: Message-ID: Yes, this works. u = da.getVecArray(U) for i in range(mstart, mend): u[i] = np.sin(np.pi*i*6.*h) + 3.*np.sin(np.pi*i*2.*h) The code above also worked without restoreVecArray. I guess the u just points at the array U. I think your code is clearer to understand what is happening. Thank you, On Wed, Feb 14, 2018 at 5:57 PM, Matthew Knepley wrote: > On Wed, Feb 14, 2018 at 6:05 PM, HeeHo Park wrote: > >> I just found a user group on PETSc website. Can someone please answer the >> question below? >> > > I think it will work using > > with da.getVecArray(U) as u > for i in range(mstart, mend): > u[i] = np.sin(np.pi*i*6.*h) + 3.*np.sin(np.pi*i*2.*h) > > Does it? > > Thanks, > > Matt > > Thanks! >> >> ---------- Forwarded message ---------- >> From: HeeHo Park >> Date: Wed, Feb 14, 2018 at 5:04 PM >> Subject: what is the equivalent DMDAVecRestoreArray() function in >> petsc4py? >> To: dalcinl at gmail.com >> >> >> Hi Lisandro, >> >> I cannot find DMDAVecRestoreArray() equivalent in petsc4py. >> I'm trying to set a 1D initial condition like this. >> >> def initial_conditions(ts, U, appctx): >> da = ts.getDM() >> mstart,xm = da.getCorners() >> mstart = mstart[0] >> xm = xm[0] >> M = da.getSizes()[0] >> h = 1.0/M >> mend = mstart + xm >> >> u = da.getVecArray(U) >> for i in range(mstart, mend): >> u[i] = np.sin(np.pi*i*6.*h) + 3.*np.sin(np.pi*i*2.*h) >> >> da.getVecRestoreArray(u) >> >> Also, is there a better way to ask questions about petsc4py? a forum? or >> google-group? >> >> Thanks, >> >> -- >> HeeHo Daniel Park >> >> >> >> -- >> HeeHo Daniel Park >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > -- HeeHo Daniel Park -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at jedbrown.org Thu Feb 15 15:33:27 2018 From: jed at jedbrown.org (Jed Brown) Date: Thu, 15 Feb 2018 14:33:27 -0700 Subject: [petsc-users] Fwd: what is the equivalent DMDAVecRestoreArray() function in petsc4py? In-Reply-To: References: Message-ID: <87y3jundjs.fsf@jedbrown.org> HeeHo Park writes: > Yes, this works. > > u = da.getVecArray(U) > for i in range(mstart, mend): > u[i] = np.sin(np.pi*i*6.*h) + 3.*np.sin(np.pi*i*2.*h) > > The code above also worked without restoreVecArray. I guess the u just > points at the array U. It won't be restored until u goes out of scope, and even that would depend on the Python implementation. Use the "with" context manager. From mhbaghaei at mail.sjtu.edu.cn Thu Feb 15 15:40:59 2018 From: mhbaghaei at mail.sjtu.edu.cn (Mohammad Hassan Baghaei) Date: Fri, 16 Feb 2018 05:40:59 +0800 (CST) Subject: [petsc-users] Dealing with DMPlexDistribute() Message-ID: <000001d3a6a5$aa1cb8a0$fe5629e0$@mail.sjtu.edu.cn> Hi I am using DMPlex as interface for mesh generation. On single core, I got around 30000 mesh cells. Whenever, I run on multiple core, say 3, in the output file for DM, VTK, I got 3 times mesh cell and point numbers. Does it mean that DMPlexDistribute() does not work properly! Thanks Amir -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Thu Feb 15 16:33:31 2018 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 15 Feb 2018 17:33:31 -0500 Subject: [petsc-users] Dealing with DMPlexDistribute() In-Reply-To: <000001d3a6a5$aa1cb8a0$fe5629e0$@mail.sjtu.edu.cn> References: <000001d3a6a5$aa1cb8a0$fe5629e0$@mail.sjtu.edu.cn> Message-ID: On Thu, Feb 15, 2018 at 4:40 PM, Mohammad Hassan Baghaei < mhbaghaei at mail.sjtu.edu.cn> wrote: > Hi > > I am using DMPlex as interface for mesh generation. On single core, I got > around 30000 mesh cells. Whenever, I run on multiple core, say 3, in the > output file for DM, VTK, I got 3 times mesh cell and point numbers. Does it > mean that DMPlexDistribute() does not work properly! > > Hi Amir, It sounds like you are generating a mesh on every process. How are you generating the mesh? Thanks, Matt > Thanks > > Amir > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhbaghaei at mail.sjtu.edu.cn Thu Feb 15 16:43:38 2018 From: mhbaghaei at mail.sjtu.edu.cn (Mohammad Hassan Baghaei) Date: Fri, 16 Feb 2018 06:43:38 +0800 (CST) Subject: [petsc-users] Dealing with DMPlexDistribute() In-Reply-To: References: <000001d3a6a5$aa1cb8a0$fe5629e0$@mail.sjtu.edu.cn> Message-ID: <000901d3a6ae$68751420$395f3c60$@mail.sjtu.edu.cn> Hi Matt In fact, I created a routine for my specific mesh generation. I firstly, create the DMPlex object at first, then setting the chart, then doing symmetrize and interpolate. Next, I created coordinate section and giving the coordinates. After finishing giving the coordinates, I declare the distribution in routine, with this two lines: DMPlexDistribute(*dm, 0, NULL, &dmDist); if (dmDist) {DMDestroy(dm); *dm = dmDist;} Thanks Amir From: Matthew Knepley [mailto:knepley at gmail.com] Sent: Friday, February 16, 2018 6:34 AM To: Mohammad Hassan Baghaei Cc: PETSc Subject: Re: [petsc-users] Dealing with DMPlexDistribute() On Thu, Feb 15, 2018 at 4:40 PM, Mohammad Hassan Baghaei > wrote: Hi I am using DMPlex as interface for mesh generation. On single core, I got around 30000 mesh cells. Whenever, I run on multiple core, say 3, in the output file for DM, VTK, I got 3 times mesh cell and point numbers. Does it mean that DMPlexDistribute() does not work properly! Hi Amir, It sounds like you are generating a mesh on every process. How are you generating the mesh? Thanks, Matt Thanks Amir -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Feb 15 16:51:07 2018 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 15 Feb 2018 17:51:07 -0500 Subject: [petsc-users] Dealing with DMPlexDistribute() In-Reply-To: <000901d3a6ae$68751420$395f3c60$@mail.sjtu.edu.cn> References: <000001d3a6a5$aa1cb8a0$fe5629e0$@mail.sjtu.edu.cn> <000901d3a6ae$68751420$395f3c60$@mail.sjtu.edu.cn> Message-ID: On Thu, Feb 15, 2018 at 5:43 PM, Mohammad Hassan Baghaei < mhbaghaei at mail.sjtu.edu.cn> wrote: > Hi Matt > > In fact, I created a routine for my specific mesh generation. 
I firstly, > create the DMPlex object at first, then setting the chart, then doing > symmetrize and interpolate. Next, I created coordinate section and giving > the coordinates. After finishing giving the coordinates, I declare the > distribution in routine, with this two lines: > > If you are creating the whole mesh, then you want to enclose the creation steps in if (!rank) { } else { DMCreate() DMSetType() } DMPlexSymmetrize() DMPlexInterpolate() There are examples of me doing this in the Plex tests. Thanks, Matt > DMPlexDistribute(*dm, 0, NULL, &dmDist); > > if (dmDist) {DMDestroy(dm); *dm = dmDist;} > > Thanks > > Amir > > > > *From:* Matthew Knepley [mailto:knepley at gmail.com] > *Sent:* Friday, February 16, 2018 6:34 AM > *To:* Mohammad Hassan Baghaei > *Cc:* PETSc > *Subject:* Re: [petsc-users] Dealing with DMPlexDistribute() > > > > On Thu, Feb 15, 2018 at 4:40 PM, Mohammad Hassan Baghaei < > mhbaghaei at mail.sjtu.edu.cn> wrote: > > Hi > > I am using DMPlex as interface for mesh generation. On single core, I got > around 30000 mesh cells. Whenever, I run on multiple core, say 3, in the > output file for DM, VTK, I got 3 times mesh cell and point numbers. Does it > mean that DMPlexDistribute() does not work properly! > > > > Hi Amir, > > > > It sounds like you are generating a mesh on every process. How are you > generating the mesh? > > > > Thanks, > > > > Matt > > > > Thanks > > Amir > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhbaghaei at mail.sjtu.edu.cn Thu Feb 15 17:10:29 2018 From: mhbaghaei at mail.sjtu.edu.cn (Mohammad Hassan Baghaei) Date: Fri, 16 Feb 2018 07:10:29 +0800 (CST) Subject: [petsc-users] Dealing with DMPlexDistribute() In-Reply-To: References: <000001d3a6a5$aa1cb8a0$fe5629e0$@mail.sjtu.edu.cn> <000901d3a6ae$68751420$395f3c60$@mail.sjtu.edu.cn> Message-ID: <001201d3a6b2$28d39220$7a7ab660$@mail.sjtu.edu.cn> Oh! Thanks for your help! I am currently looking at the test examples. I would definitely try your method. In the first impression, I thought it was DMPlexDistribute() problem. From: Matthew Knepley [mailto:knepley at gmail.com] Sent: Friday, February 16, 2018 6:51 AM To: Mohammad Hassan Baghaei Cc: PETSc Subject: Re: [petsc-users] Dealing with DMPlexDistribute() On Thu, Feb 15, 2018 at 5:43 PM, Mohammad Hassan Baghaei > wrote: Hi Matt In fact, I created a routine for my specific mesh generation. I firstly, create the DMPlex object at first, then setting the chart, then doing symmetrize and interpolate. Next, I created coordinate section and giving the coordinates. After finishing giving the coordinates, I declare the distribution in routine, with this two lines: If you are creating the whole mesh, then you want to enclose the creation steps in if (!rank) { } else { DMCreate() DMSetType() } DMPlexSymmetrize() DMPlexInterpolate() There are examples of me doing this in the Plex tests. 
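A minimal C sketch of this serial-create-then-distribute pattern, not taken from the thread, could look as follows; the one-triangle chart, cone and dimension are made-up placeholders for whatever an application's mesh generator produces, and error handling is kept to CHKERRQ only:

#include <petscdmplex.h>

/* Sketch: rank 0 defines a tiny one-triangle mesh, every other rank
   starts with an empty chart, and DMPlexDistribute() partitions it. */
int main(int argc, char **argv)
{
  DM             dm, dmInt, dmDist = NULL;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  ierr = DMCreate(PETSC_COMM_WORLD, &dm);CHKERRQ(ierr);
  ierr = DMSetType(dm, DMPLEX);CHKERRQ(ierr);
  ierr = DMSetDimension(dm, 2);CHKERRQ(ierr);
  if (!rank) {                          /* only rank 0 holds the mesh */
    const PetscInt cone[3] = {1, 2, 3}; /* cell 0 uses vertices 1,2,3 */
    ierr = DMPlexSetChart(dm, 0, 4);CHKERRQ(ierr);
    ierr = DMPlexSetConeSize(dm, 0, 3);CHKERRQ(ierr);
    ierr = DMSetUp(dm);CHKERRQ(ierr);
    ierr = DMPlexSetCone(dm, 0, cone);CHKERRQ(ierr);
  } else {                              /* all other ranks stay empty */
    ierr = DMPlexSetChart(dm, 0, 0);CHKERRQ(ierr);
    ierr = DMSetUp(dm);CHKERRQ(ierr);
  }
  ierr = DMPlexSymmetrize(dm);CHKERRQ(ierr);
  ierr = DMPlexStratify(dm);CHKERRQ(ierr);
  ierr = DMPlexInterpolate(dm, &dmInt);CHKERRQ(ierr);
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  dm   = dmInt;
  ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);
  if (dmDist) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmDist;}
  ierr = DMDestroy(&dm);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Coordinates are omitted in this sketch; a real code would also build a coordinate section (as Amir does) or call DMSetCoordinatesLocal() before writing VTK output.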
Thanks, Matt

DMPlexDistribute(*dm, 0, NULL, &dmDist);
if (dmDist) {DMDestroy(dm); *dm = dmDist;}

Thanks Amir

From: Matthew Knepley [mailto:knepley at gmail.com ] Sent: Friday, February 16, 2018 6:34 AM To: Mohammad Hassan Baghaei > Cc: PETSc > Subject: Re: [petsc-users] Dealing with DMPlexDistribute() On Thu, Feb 15, 2018 at 4:40 PM, Mohammad Hassan Baghaei > wrote: Hi I am using DMPlex as interface for mesh generation. On single core, I got around 30000 mesh cells. Whenever, I run on multiple core, say 3, in the output file for DM, VTK, I got 3 times mesh cell and point numbers. Does it mean that DMPlexDistribute() does not work properly! Hi Amir, It sounds like you are generating a mesh on every process. How are you generating the mesh? Thanks, Matt Thanks Amir -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL:

From danyang.su at gmail.com Thu Feb 15 18:40:21 2018 From: danyang.su at gmail.com (Danyang Su) Date: Thu, 15 Feb 2018 16:40:21 -0800 Subject: [petsc-users] Question on DMPlexCreateFromCellList and DMPlexCreateFromFile Message-ID:

Hi Matt, I have a question on DMPlexCreateFromCellList and DMPlexCreateFromFile. When use DMPlexCreateFromFile with Gmsh file input, it works fine and each processor gets its own part. However, when use DMPlexCreateFromCellList, all the processors have the same global mesh. To my understand, I should put the global mesh as input, right? Otherwise, I should use DMPlexCreateFromCellListParallel instead if the input is local mesh. Below is the test code I use, results from method 1 is wrong and that from method 2 is correct. Would you please help to check if I did anything wrong with DMPlexCreateFromCellList input?

!test with 4 processor, global num_cells = 8268, global num_nodes = 4250

!correct results
 check rank            2  istart         2034  iend         3116
 check rank            3  istart         2148  iend         3293
 check rank            1  istart         2044  iend         3133
 check rank            0  istart         2042  iend         3131

!wrong results
 check rank            0  istart         8268  iend        12518
 check rank            1  istart         8268  iend        12518
 check rank            2  istart         8268  iend        12518
 check rank            3  istart         8268  iend        12518

      !c *************    test part    *********************
      !c method 1: create DMPlex from cell list, same duplicated global meshes over all processors
      !c the input parameters num_cells, num_nodes, dmplex_cells, dmplex_verts are all global parameters (global mesh data)
      call DMPlexCreateFromCellList(Petsc_Comm_World,ndim,num_cells,   &
                                    num_nodes,num_nodes_per_cell,      &
                                    Petsc_True,dmplex_cells,ndim,      &
                                    dmplex_verts,dmda_flow%da,ierr)
      CHKERRQ(ierr)

      !c method 2: create DMPlex from Gmsh file, for test purpose, this works fine, each processor gets its own part
      call DMPlexCreateFromFile(Petsc_Comm_World,                      &
                                prefix(:l_prfx)//'.msh',0,             &
                                dmda_flow%da,ierr)
      CHKERRQ(ierr)

      !c *************end of test part*********************

      distributedMesh = PETSC_NULL_OBJECT

      !c distribute mesh over processes
      call DMPlexDistribute(dmda_flow%da,0,PETSC_NULL_OBJECT,          &
                            distributedMesh,ierr)
      CHKERRQ(ierr)

      !c destroy original global mesh after distribution
      if (distributedMesh /= PETSC_NULL_OBJECT) then
        call DMDestroy(dmda_flow%da,ierr)
        CHKERRQ(ierr)
        !c set the global mesh as distributed mesh
        dmda_flow%da = distributedMesh
      end if

      !c get coordinates
      call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr)
      CHKERRQ(ierr)

      call DMGetCoordinateDM(dmda_flow%da,cda,ierr)
      CHKERRQ(ierr)

      call DMGetDefaultSection(cda,cs,ierr)
      CHKERRQ(ierr)

      call PetscSectionGetChart(cs,istart,iend,ierr)
      CHKERRQ(ierr)

#ifdef DEBUG
        if(info_debug > 0) then
          write(*,*) "check rank ",rank," istart ",istart," iend ",iend
        end if
#endif

Thanks and regards,

Danyang

-------------- next part -------------- A non-text attachment was scrubbed... Name: stripf.msh Type: model/mesh Size: 382393 bytes Desc: not available URL:

From knepley at gmail.com Thu Feb 15 19:57:09 2018 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 15 Feb 2018 20:57:09 -0500 Subject: [petsc-users] Question on DMPlexCreateFromCellList and DMPlexCreateFromFile In-Reply-To: References: Message-ID:

On Thu, Feb 15, 2018 at 7:40 PM, Danyang Su wrote: > Hi Matt, > > I have a question on DMPlexCreateFromCellList and DMPlexCreateFromFile. > When use DMPlexCreateFromFile with Gmsh file input, it works fine and each > processor gets its own part. However, when use DMPlexCreateFromCellList, > all the processors have the same global mesh. To my understand, I should > put the global mesh as input, right?

No. Each process should get part of the mesh in CreateFromCellList(), but the most common thing to do is to feed the whole mesh in on proc 0, and nothing in on the other procs.

Thanks, Matt

> Otherwise, I should use DMPlexCreateFromCellListParallel instead if the > input is local mesh. > > Below is the test code I use, results from method 1 is wrong and that from > method 2 is correct. Would you please help to check if I did anything wrong > with DMPlexCreateFromCellList input?
> > !test with 4 processor, global num_cells = 8268, global num_nodes = 4250 > > !correct results > > check rank 2 istart 2034 iend 3116 > check rank 3 istart 2148 iend 3293 > check rank 1 istart 2044 iend 3133 > check rank 0 istart 2042 iend 3131 > > !wrong results > > check rank 0 istart 8268 iend 12518 > check rank 1 istart 8268 iend 12518 > check rank 2 istart 8268 iend 12518 > check rank 3 istart 8268 iend 12518 > > > !c ************* test part ********************* > !c method 1: create DMPlex from cell list, same duplicated global > meshes over all processors > !c the input parameters num_cells, num_nodes, dmplex_cells, > dmplex_verts are all global parameters (global mesh data) > call DMPlexCreateFromCellList(Petsc_Comm_World,ndim,num_cells, & > num_nodes,num_nodes_per_cell, & > Petsc_True,dmplex_cells,ndim, & > dmplex_verts,dmda_flow%da,ierr) > CHKERRQ(ierr) > > > !c method 2: create DMPlex from Gmsh file, for test purpose, this > works fine, each processor gets its own part > call DMPlexCreateFromFile(Petsc_Comm_World, & > prefix(:l_prfx)//'.msh',0, & > dmda_flow%da,ierr) > CHKERRQ(ierr) > > !c *************end of test part********************* > > > distributedMesh = PETSC_NULL_OBJECT > > !c distribute mesh over processes > call DMPlexDistribute(dmda_flow%da,0,PETSC_NULL_OBJECT, & > distributedMesh,ierr) > CHKERRQ(ierr) > > !c destroy original global mesh after distribution > if (distributedMesh /= PETSC_NULL_OBJECT) then > call DMDestroy(dmda_flow%da,ierr) > CHKERRQ(ierr) > !c set the global mesh as distributed mesh > dmda_flow%da = distributedMesh > end if > > !c get coordinates > call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr) > CHKERRQ(ierr) > > call DMGetCoordinateDM(dmda_flow%da,cda,ierr) > CHKERRQ(ierr) > > call DMGetDefaultSection(cda,cs,ierr) > CHKERRQ(ierr) > > call PetscSectionGetChart(cs,istart,iend,ierr) > CHKERRQ(ierr) > > #ifdef DEBUG > if(info_debug > 0) then > write(*,*) "check rank ",rank," istart ",istart," iend ",iend > end if > #endif > > > Thanks and regards, > > Danyang > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From angelinajohn67 at gmail.com Thu Feb 15 22:30:42 2018 From: angelinajohn67 at gmail.com (Sangeeta Verma) Date: Fri, 16 Feb 2018 10:00:42 +0530 Subject: [petsc-users] Regional Language Translation Services Message-ID: <5a865e7a.1c90620a.adb1e.4af8@mx.google.com> Hello Ma?am/Sir, Hope you doing well. Have you any requirement in Translation or Interpretation? We are an ISO certified and Crisil Rated Translation Company and we actually deal with Multilingual Translation, Content writing, Interpretation and Content Moderation Services. We are offering our services in more than 100 languages. We also provide translators for business meetings, conferences, exhibitions, machine installations, trade fair, seminars etc. for all language. Our areas of excellence: Medical Translation, Marketing Material Translation, Academic Translation, Book Translation, Financial Translation, Technical Translation, Legal Translation, E-learning course translation, Website and Software Localization and much more. Our Clientele:Alstom, HP, NIIT, Samsung Engineering, Fluor,Schniedr Electric, ABB Ltd, Posco, TOYO, Sulzer, Emerson, TATA, Petrofac, BHEL, Siemens Ltd, Flowserve India Controls Pvt. 
Ltd, Heavy Engineering Corporation, THDC India Ltd, Larsen & Toubro, Honeywell and UOP India Pvt. Ltd. Please share your requirements of translation or interpretation. Thanks with Regards Sangeeta Verma Online Services Executive -------------- next part -------------- An HTML attachment was scrubbed... URL: From hbuesing at eonerc.rwth-aachen.de Fri Feb 16 04:46:02 2018 From: hbuesing at eonerc.rwth-aachen.de (Buesing, Henrik) Date: Fri, 16 Feb 2018 10:46:02 +0000 Subject: [petsc-users] Set Field names in HDF5 output Message-ID: Dear all, I set fieldnames with DMDASetFieldName. These appear in my VTK output, but not in my HDF5 output. Is it possible to set fieldnames in the HDF5 output files? Thank you! Henrik -- Dipl.-Math. Henrik B?sing Institute for Applied Geophysics and Geothermal Energy E.ON Energy Research Center RWTH Aachen University Mathieustr. 10 | Tel +49 (0)241 80 49907 52074 Aachen, Germany | Fax +49 (0)241 80 49889 http://www.eonerc.rwth-aachen.de/GGE hbuesing at eonerc.rwth-aachen.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From danyang.su at gmail.com Fri Feb 16 10:36:55 2018 From: danyang.su at gmail.com (Danyang Su) Date: Fri, 16 Feb 2018 08:36:55 -0800 Subject: [petsc-users] Question on DMPlexCreateFromCellList and DMPlexCreateFromFile In-Reply-To: References: Message-ID: On 18-02-15 05:57 PM, Matthew Knepley wrote: > On Thu, Feb 15, 2018 at 7:40 PM, Danyang Su > wrote: > > Hi Matt, > > I have a question on DMPlexCreateFromCellList and > DMPlexCreateFromFile. When use DMPlexCreateFromFile with Gmsh file > input, it works fine and each processor gets its own part. > However, when use DMPlexCreateFromCellList, all the processors > have the same global mesh. To my understand, I should put the > global mesh as input, right? > > > No. Each process should get part of the mesh in CreateFromCellList(), > but the most common thing to do is to > feed the whole mesh in on proc 0, and nothing in on the other procs. Thanks for the explanation. It works now. Danyang > > ? Thanks, > > ? ? Matt > > Otherwise, I should use DMPlexCreateFromCellListParallel instead > if the input is local mesh. > > Below is the test code I use, results from method 1 is wrong and > that from method 2 is correct. Would you please help to check if I > did anything wrong with DMPlexCreateFromCellList input? > > !test with 4 processor, global num_cells = 8268, global num_nodes > = 4250 > > !correct results > > ?check rank??????????? 2? istart???????? 2034 iend???????? 3116 > ?check rank??????????? 3? istart???????? 2148 iend???????? 3293 > ?check rank??????????? 1? istart???????? 2044 iend???????? 3133 > ?check rank??????????? 0? istart???????? 2042 iend???????? 3131 > > !wrong results > > ? check rank??????????? 0? istart???????? 8268 iend??????? 12518 > ? check rank??????????? 1? istart???????? 8268 iend??????? 12518 > ? check rank??????????? 2? istart???????? 8268 iend??????? 12518 > ? check rank??????????? 3? istart???????? 8268 iend??????? 12518 > > > ????? !c ************* ?? test part ********************* > ????? !c method 1: create DMPlex from cell list, same duplicated > global meshes over all processors > ????? !c the input parameters num_cells, num_nodes, dmplex_cells, > dmplex_verts are all global parameters (global mesh data) > ????? call DMPlexCreateFromCellList(Petsc_Comm_World,ndim,num_cells, & > num_nodes,num_nodes_per_cell, ???? & > Petsc_True,dmplex_cells,ndim, ???? & > dmplex_verts,dmda_flow%da,ierr) > ????? CHKERRQ(ierr) > > > ????? 
!c method 2: create DMPlex from Gmsh file, for test purpose, > this works fine, each processor gets its own part > ????? call DMPlexCreateFromFile(Petsc_Comm_World, & > prefix(:l_prfx)//'.msh',0, ???????? & > ? dmda_flow%da,ierr) > ????? CHKERRQ(ierr) > > ????? !c *************end of test part********************* > > > ????? distributedMesh = PETSC_NULL_OBJECT > > ????? !c distribute mesh over processes > ????? call DMPlexDistribute(dmda_flow%da,0,PETSC_NULL_OBJECT, & > ??????????????????????????? distributedMesh,ierr) > ????? CHKERRQ(ierr) > > ????? !c destroy original global mesh after distribution > ????? if (distributedMesh /= PETSC_NULL_OBJECT) then > ??????? call DMDestroy(dmda_flow%da,ierr) > ??????? CHKERRQ(ierr) > ??????? !c set the global mesh as distributed mesh > ??????? dmda_flow%da = distributedMesh > ????? end if > > ????? !c get coordinates > ????? call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr) > ????? CHKERRQ(ierr) > > ????? call DMGetCoordinateDM(dmda_flow%da,cda,ierr) > ????? CHKERRQ(ierr) > > ????? call DMGetDefaultSection(cda,cs,ierr) > ????? CHKERRQ(ierr) > > ????? call PetscSectionGetChart(cs,istart,iend,ierr) > ????? CHKERRQ(ierr) > > #ifdef DEBUG > ??????? if(info_debug > 0) then > ????????? write(*,*) "check rank ",rank," istart ",istart," iend > ",iend > ??????? end if > #endif > > > Thanks and regards, > > Danyang > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Feb 16 12:13:38 2018 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 16 Feb 2018 13:13:38 -0500 Subject: [petsc-users] Question on DMPlexCreateFromCellList and DMPlexCreateFromFile In-Reply-To: References: Message-ID: On Fri, Feb 16, 2018 at 11:36 AM, Danyang Su wrote: > On 18-02-15 05:57 PM, Matthew Knepley wrote: > > On Thu, Feb 15, 2018 at 7:40 PM, Danyang Su wrote: > >> Hi Matt, >> >> I have a question on DMPlexCreateFromCellList and DMPlexCreateFromFile. >> When use DMPlexCreateFromFile with Gmsh file input, it works fine and each >> processor gets its own part. However, when use DMPlexCreateFromCellList, >> all the processors have the same global mesh. To my understand, I should >> put the global mesh as input, right? > > > No. Each process should get part of the mesh in CreateFromCellList(), but > the most common thing to do is to > feed the whole mesh in on proc 0, and nothing in on the other procs. > > Thanks for the explanation. It works now. > Great. Also feel free to suggest improvements, examples, or better documentation. Thanks, Matt > Danyang > > > Thanks, > > Matt > > >> Otherwise, I should use DMPlexCreateFromCellListParallel instead if the >> input is local mesh. >> >> Below is the test code I use, results from method 1 is wrong and that >> from method 2 is correct. Would you please help to check if I did anything >> wrong with DMPlexCreateFromCellList input? 
>> >> !test with 4 processor, global num_cells = 8268, global num_nodes = 4250 >> >> !correct results >> >> check rank 2 istart 2034 iend 3116 >> check rank 3 istart 2148 iend 3293 >> check rank 1 istart 2044 iend 3133 >> check rank 0 istart 2042 iend 3131 >> >> !wrong results >> >> check rank 0 istart 8268 iend 12518 >> check rank 1 istart 8268 iend 12518 >> check rank 2 istart 8268 iend 12518 >> check rank 3 istart 8268 iend 12518 >> >> >> !c ************* test part ********************* >> !c method 1: create DMPlex from cell list, same duplicated global >> meshes over all processors >> !c the input parameters num_cells, num_nodes, dmplex_cells, >> dmplex_verts are all global parameters (global mesh data) >> call DMPlexCreateFromCellList(Petsc_Comm_World,ndim,num_cells, & >> num_nodes,num_nodes_per_cell, & >> Petsc_True,dmplex_cells,ndim, & >> dmplex_verts,dmda_flow%da,ierr) >> CHKERRQ(ierr) >> >> >> !c method 2: create DMPlex from Gmsh file, for test purpose, this >> works fine, each processor gets its own part >> call DMPlexCreateFromFile(Petsc_Comm_World, & >> prefix(:l_prfx)//'.msh',0, & >> dmda_flow%da,ierr) >> CHKERRQ(ierr) >> >> !c *************end of test part********************* >> >> >> distributedMesh = PETSC_NULL_OBJECT >> >> !c distribute mesh over processes >> call DMPlexDistribute(dmda_flow%da,0,PETSC_NULL_OBJECT, & >> distributedMesh,ierr) >> CHKERRQ(ierr) >> >> !c destroy original global mesh after distribution >> if (distributedMesh /= PETSC_NULL_OBJECT) then >> call DMDestroy(dmda_flow%da,ierr) >> CHKERRQ(ierr) >> !c set the global mesh as distributed mesh >> dmda_flow%da = distributedMesh >> end if >> >> !c get coordinates >> call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr) >> CHKERRQ(ierr) >> >> call DMGetCoordinateDM(dmda_flow%da,cda,ierr) >> CHKERRQ(ierr) >> >> call DMGetDefaultSection(cda,cs,ierr) >> CHKERRQ(ierr) >> >> call PetscSectionGetChart(cs,istart,iend,ierr) >> CHKERRQ(ierr) >> >> #ifdef DEBUG >> if(info_debug > 0) then >> write(*,*) "check rank ",rank," istart ",istart," iend ",iend >> end if >> #endif >> >> >> Thanks and regards, >> >> Danyang >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From danyang.su at gmail.com Fri Feb 16 12:18:48 2018 From: danyang.su at gmail.com (Danyang Su) Date: Fri, 16 Feb 2018 10:18:48 -0800 Subject: [petsc-users] Question on DMPlexCreateFromCellList and DMPlexCreateFromFile In-Reply-To: References: Message-ID: <3de82b2c-6aca-3f8e-badd-18c64580d820@gmail.com> On 18-02-16 10:13 AM, Matthew Knepley wrote: > On Fri, Feb 16, 2018 at 11:36 AM, Danyang Su > wrote: > > On 18-02-15 05:57 PM, Matthew Knepley wrote: > >> On Thu, Feb 15, 2018 at 7:40 PM, Danyang Su > > wrote: >> >> Hi Matt, >> >> I have a question on DMPlexCreateFromCellList and >> DMPlexCreateFromFile. When use DMPlexCreateFromFile with Gmsh >> file input, it works fine and each processor gets its own >> part. However, when use DMPlexCreateFromCellList, all the >> processors have the same global mesh. To my understand, I >> should put the global mesh as input, right? 
>> >> >> No. Each process should get part of the mesh in >> CreateFromCellList(), but the most common thing to do is to >> feed the whole mesh in on proc 0, and nothing in on the other procs. > Thanks for the explanation. It works now. > > > Great. Also feel free to suggest improvements, examples, or better > documentation. Thanks, I will bother you a lot recently while porting the code from structured grid version to unstructured grid version. Thanks in advance. Danyang > > ? Thanks, > > ? ? ?Matt > > Danyang >> >> ? Thanks, >> >> ? ? Matt >> >> Otherwise, I should use DMPlexCreateFromCellListParallel >> instead if the input is local mesh. >> >> Below is the test code I use, results from method 1 is wrong >> and that from method 2 is correct. Would you please help to >> check if I did anything wrong with DMPlexCreateFromCellList >> input? >> >> !test with 4 processor, global num_cells = 8268, global >> num_nodes = 4250 >> >> !correct results >> >> ?check rank??????????? 2? istart???????? 2034 iend???????? 3116 >> ?check rank??????????? 3? istart???????? 2148 iend???????? 3293 >> ?check rank??????????? 1? istart???????? 2044 iend???????? 3133 >> ?check rank??????????? 0? istart???????? 2042 iend???????? 3131 >> >> !wrong results >> >> ? check rank??????????? 0? istart 8268? iend??????? 12518 >> ? check rank??????????? 1? istart 8268? iend??????? 12518 >> ? check rank??????????? 2? istart 8268? iend??????? 12518 >> ? check rank??????????? 3? istart 8268? iend??????? 12518 >> >> >> ????? !c ************* ?? test part ********************* >> ????? !c method 1: create DMPlex from cell list, same >> duplicated global meshes over all processors >> ????? !c the input parameters num_cells, num_nodes, >> dmplex_cells, dmplex_verts are all global parameters (global >> mesh data) >> ????? call >> DMPlexCreateFromCellList(Petsc_Comm_World,ndim,num_cells, & >> num_nodes,num_nodes_per_cell, ???? & >> Petsc_True,dmplex_cells,ndim, ???? & >> dmplex_verts,dmda_flow%da,ierr) >> ????? CHKERRQ(ierr) >> >> >> ????? !c method 2: create DMPlex from Gmsh file, for test >> purpose, this works fine, each processor gets its own part >> ????? call DMPlexCreateFromFile(Petsc_Comm_World, & >> prefix(:l_prfx)//'.msh',0, & >> dmda_flow%da,ierr) >> ????? CHKERRQ(ierr) >> >> ????? !c *************end of test part********************* >> >> >> ????? distributedMesh = PETSC_NULL_OBJECT >> >> ????? !c distribute mesh over processes >> ????? call DMPlexDistribute(dmda_flow%da,0,PETSC_NULL_OBJECT, & >> distributedMesh,ierr) >> ????? CHKERRQ(ierr) >> >> ????? !c destroy original global mesh after distribution >> ????? if (distributedMesh /= PETSC_NULL_OBJECT) then >> ??????? call DMDestroy(dmda_flow%da,ierr) >> ??????? CHKERRQ(ierr) >> ??????? !c set the global mesh as distributed mesh >> ??????? dmda_flow%da = distributedMesh >> ????? end if >> >> ????? !c get coordinates >> ????? call DMGetCoordinatesLocal(dmda_flow%da,gc,ierr) >> ????? CHKERRQ(ierr) >> >> ????? call DMGetCoordinateDM(dmda_flow%da,cda,ierr) >> ????? CHKERRQ(ierr) >> >> ????? call DMGetDefaultSection(cda,cs,ierr) >> ????? CHKERRQ(ierr) >> >> ????? call PetscSectionGetChart(cs,istart,iend,ierr) >> ????? CHKERRQ(ierr) >> >> #ifdef DEBUG >> ??????? if(info_debug > 0) then >> ????????? write(*,*) "check rank ",rank," istart ",istart," >> iend ",iend >> ??????? 
end if >> #endif >> >> >> Thanks and regards, >> >> Danyang >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From danyang.su at gmail.com Fri Feb 16 12:45:39 2018 From: danyang.su at gmail.com (Danyang Su) Date: Fri, 16 Feb 2018 10:45:39 -0800 Subject: [petsc-users] Error when use DMPlexGetVertexNumbering Message-ID: <785b4050-1cac-5e7a-2d6e-8db6a75b978f@gmail.com> Hi Matt, I try to get the global vertex index and cell index from local mesh and run into problem. What I need is local to global index (the original index used in DMPlexCreateFromCellList is best, as user know exactly where the node/cell is) for vertices and cells, which will be used to assign material properties and some parameters to the specified cell/vertex. I can use coordinates to select vertex/cell which has already included, but still want to keep this feature. This is pretty straightforward when using structured grid. For the unstructured grid, I just got compiling error saying "You need a ISO C conforming compiler to use the glibc headers" Would you please let me know if I need to change the configuration of PETSc or is there any alternative ways to avoid using DMPlexGetVertexNumbering and DMPlexGetCellNumbering but get local to global index? The error information during compilation is shown below, followed by PETSc configuration. ?-o ../../solver/solver_ddmethod.o ../../solver/solver_ddmethod.F90 In file included from /usr/include/features.h:375:0, ???????????????? from /usr/include/stdio.h:28, ???????????????? from /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscsys.h:161, ???????????????? from /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscis.h:8, ???????????????? from /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscvec.h:10, ???????????????? from /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscmat.h:7, ???????????????? from /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/private/dmpleximpl.h:5, ???????????????? from ../../solver/solver_ddmethod.F90:4122: /usr/include/x86_64-linux-gnu/sys/cdefs.h:30:3: error: #error "You need a ISO C conforming compiler to us\ e the glibc headers" ?# error "You need a ISO C conforming compiler to use the glibc headers" ?? ^ In file included from /usr/include/features.h:399:0, ???????????????? from /usr/include/stdio.h:28, ???????????????? from /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscsys.h:161, ???????????????? from /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscis.h:8, ???????????????? from /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscvec.h:10, ???????????????? from /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscmat.h:7, ???????????????? from /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/private/dmpleximpl.h:5, ???????????????? from ../../solver/solver_ddmethod.F90:4122: /usr/include/x86_64-linux-gnu/gnu/stubs.h:7:0: fatal error: gnu/stubs-32.h: No such file or directory ?# include ?^ compilation terminated. 
make: [../../solver/solver_ddmethod.o] Error 1 (ignored) PETSc configuration --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mumps --download-scalapack --download-parmetis --download-metis --download-ptscotch --download-fblaslapack --download-mpich --download-hypre --download-superlu_dist --download-hdf5=yes --with-debugging=0 COPTFLAGS="-O3 -march=native -mtune=native" CXXOPTFLAGS="-O3 -march=native -mtune=native" FOPTFLAGS="-O3 -march=native -mtune=native" Thanks and regards, Danyang From knepley at gmail.com Fri Feb 16 12:50:48 2018 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 16 Feb 2018 13:50:48 -0500 Subject: [petsc-users] Error when use DMPlexGetVertexNumbering In-Reply-To: <785b4050-1cac-5e7a-2d6e-8db6a75b978f@gmail.com> References: <785b4050-1cac-5e7a-2d6e-8db6a75b978f@gmail.com> Message-ID: On Fri, Feb 16, 2018 at 1:45 PM, Danyang Su wrote: > Hi Matt, > > I try to get the global vertex index and cell index from local mesh and > run into problem. What I need is local to global index (the original index > used in DMPlexCreateFromCellList is best, as user know exactly where the > node/cell is) for vertices and cells, which will be used to assign material > properties and some parameters to the specified cell/vertex. I would recommend doing this before you distribute the mesh. Just set these properties using a DMLabel and it will be automatically distributed. > I can use coordinates to select vertex/cell which has already included, > but still want to keep this feature. This is pretty straightforward when > using structured grid. For the unstructured grid, I just got compiling > error saying "You need a ISO C conforming compiler to use the glibc headers" > For any compilation problem, you have to send the configure.log and make.log. However, it appears that you are not using the same compiler that you configured with. Matt > Would you please let me know if I need to change the configuration of > PETSc or is there any alternative ways to avoid using > DMPlexGetVertexNumbering and DMPlexGetCellNumbering but get local to global > index? > > The error information during compilation is shown below, followed by PETSc > configuration. 
> > -o ../../solver/solver_ddmethod.o ../../solver/solver_ddmethod.F90 > > In file included from /usr/include/features.h:375:0, > from /usr/include/stdio.h:28, > from /home/dsu/Soft/PETSc/petsc-3.7 > .5/include/petscsys.h:161, > from /home/dsu/Soft/PETSc/petsc-3.7 > .5/include/petscis.h:8, > from /home/dsu/Soft/PETSc/petsc-3.7 > .5/include/petscvec.h:10, > from /home/dsu/Soft/PETSc/petsc-3.7 > .5/include/petscmat.h:7, > from /home/dsu/Soft/PETSc/petsc-3.7 > .5/include/petsc/private/dmpleximpl.h:5, > from ../../solver/solver_ddmethod.F90:4122: > /usr/include/x86_64-linux-gnu/sys/cdefs.h:30:3: error: #error "You need a > ISO C conforming compiler to us\ > e the glibc headers" > # error "You need a ISO C conforming compiler to use the glibc headers" > ^ > In file included from /usr/include/features.h:399:0, > from /usr/include/stdio.h:28, > from /home/dsu/Soft/PETSc/petsc-3.7 > .5/include/petscsys.h:161, > from /home/dsu/Soft/PETSc/petsc-3.7 > .5/include/petscis.h:8, > from /home/dsu/Soft/PETSc/petsc-3.7 > .5/include/petscvec.h:10, > from /home/dsu/Soft/PETSc/petsc-3.7 > .5/include/petscmat.h:7, > from /home/dsu/Soft/PETSc/petsc-3.7 > .5/include/petsc/private/dmpleximpl.h:5, > from ../../solver/solver_ddmethod.F90:4122: > /usr/include/x86_64-linux-gnu/gnu/stubs.h:7:0: fatal error: > gnu/stubs-32.h: No such file or directory > # include > ^ > compilation terminated. > make: [../../solver/solver_ddmethod.o] Error 1 (ignored) > > > PETSc configuration > --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mumps > --download-scalapack --download-parmetis --download-metis > --download-ptscotch --download-fblaslapack --download-mpich > --download-hypre --download-superlu_dist --download-hdf5=yes > --with-debugging=0 COPTFLAGS="-O3 -march=native -mtune=native" > CXXOPTFLAGS="-O3 -march=native -mtune=native" FOPTFLAGS="-O3 -march=native > -mtune=native" > > Thanks and regards, > > Danyang > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From danyang.su at gmail.com Fri Feb 16 12:56:10 2018 From: danyang.su at gmail.com (Danyang Su) Date: Fri, 16 Feb 2018 10:56:10 -0800 Subject: [petsc-users] Error when use DMPlexGetVertexNumbering In-Reply-To: References: <785b4050-1cac-5e7a-2d6e-8db6a75b978f@gmail.com> Message-ID: On 18-02-16 10:50 AM, Matthew Knepley wrote: > On Fri, Feb 16, 2018 at 1:45 PM, Danyang Su > wrote: > > Hi Matt, > > I try to get the global vertex index and cell index from local > mesh and run into problem. What I need is local to global index > (the original index used in DMPlexCreateFromCellList is best, as > user know exactly where the node/cell is) for vertices and cells, > which will be used to assign material properties and some > parameters to the specified cell/vertex. > > > I would recommend doing this before you distribute the mesh. Just set > these properties using a DMLabel and it will be automatically distributed. I will try this. Thanks. > > I can use coordinates to select vertex/cell which has already > included, but still want to keep this feature. This is pretty > straightforward when using structured grid. For the unstructured > grid, I just got compiling error saying "You need a ISO C > conforming compiler to use the glibc headers" > > > For any compilation problem, you have to send the configure.log and > make.log. 
However, it appears that you are not using the same compiler > that you configured with. Sorry for confusion. There are linux-gnu-dbg (debug version) and linux-gnu-opt (optimized version) configuration, the one I used to compile is debug version and the attached one is optimized version. I will try you recommendation first. Thanks, Danyang > > ? ?Matt > > Would you please let me know if I need to change the configuration > of PETSc or is there any alternative ways to avoid using > DMPlexGetVertexNumbering and DMPlexGetCellNumbering but get local > to global index? > > The error information during compilation is shown below, followed > by PETSc configuration. > > ?-o ../../solver/solver_ddmethod.o ../../solver/solver_ddmethod.F90 > > In file included from /usr/include/features.h:375:0, > ???????????????? from /usr/include/stdio.h:28, > ???????????????? from > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscsys.h:161, > ???????????????? from > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscis.h:8, > ???????????????? from > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscvec.h:10, > ???????????????? from > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscmat.h:7, > ???????????????? from > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/private/dmpleximpl.h:5, > ???????????????? from ../../solver/solver_ddmethod.F90:4122: > /usr/include/x86_64-linux-gnu/sys/cdefs.h:30:3: error: #error "You > need a ISO C conforming compiler to us\ > e the glibc headers" > ?# error "You need a ISO C conforming compiler to use the glibc > headers" > ?? ^ > In file included from /usr/include/features.h:399:0, > ???????????????? from /usr/include/stdio.h:28, > ???????????????? from > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscsys.h:161, > ???????????????? from > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscis.h:8, > ???????????????? from > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscvec.h:10, > ???????????????? from > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petscmat.h:7, > ???????????????? from > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/private/dmpleximpl.h:5, > ???????????????? from ../../solver/solver_ddmethod.F90:4122: > /usr/include/x86_64-linux-gnu/gnu/stubs.h:7:0: fatal error: > gnu/stubs-32.h: No such file or directory > ?# include > ?^ > compilation terminated. > make: [../../solver/solver_ddmethod.o] Error 1 (ignored) > > > PETSc configuration > --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mumps > --download-scalapack --download-parmetis --download-metis > --download-ptscotch --download-fblaslapack --download-mpich > --download-hypre --download-superlu_dist --download-hdf5=yes > --with-debugging=0 COPTFLAGS="-O3 -march=native -mtune=native" > CXXOPTFLAGS="-O3 -march=native -mtune=native" FOPTFLAGS="-O3 > -march=native -mtune=native" > > Thanks and regards, > > Danyang > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
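To make the DMLabel suggestion from earlier in this thread concrete, here is a minimal C sketch; the label name "material", the one-id tagging rule and the helper name TagAndDistribute are invented for the example, and the same routines are callable from the Fortran interface:

#include <petscdmplex.h>

/* Sketch: tag every cell of the still-serial mesh with a material id,
   then let DMPlexDistribute() migrate the label along with the cells. */
PetscErrorCode TagAndDistribute(DM *dm)
{
  DM             dmDist = NULL;
  PetscInt       cStart, cEnd, c;
  PetscErrorCode ierr;

  ierr = DMCreateLabel(*dm, "material");CHKERRQ(ierr);
  ierr = DMPlexGetHeightStratum(*dm, 0, &cStart, &cEnd);CHKERRQ(ierr);
  for (c = cStart; c < cEnd; ++c) {
    /* placeholder rule: every cell gets material id 1 */
    ierr = DMSetLabelValue(*dm, "material", c, 1);CHKERRQ(ierr);
  }
  ierr = DMPlexDistribute(*dm, 0, NULL, &dmDist);CHKERRQ(ierr);
  if (dmDist) {ierr = DMDestroy(dm);CHKERRQ(ierr); *dm = dmDist;}
  /* after distribution, DMGetLabelValue(*dm, "material", cell, &id)
     returns the id for each locally owned cell */
  return 0;
}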
URL: From epscodes at gmail.com Fri Feb 16 14:15:08 2018 From: epscodes at gmail.com (Xiangdong) Date: Fri, 16 Feb 2018 15:15:08 -0500 Subject: [petsc-users] multiply a mpibaij matrix by its block diagonal inverse In-Reply-To: <42432050-8F39-4AC7-8FAD-B53F0D27B0CF@anl.gov> References: <42432050-8F39-4AC7-8FAD-B53F0D27B0CF@anl.gov> Message-ID: Hi Barry, I need some help on the parallel version of MatBAIJBlockDiagonalScale. My understanding is that MatBAIJBlockDiagonalScale_MPIBAIJ would be a wrapper on the MatBAIJBlockDiagonalScale_SeqBAIJ. However, I am not clear about how to get the local part of MPIBAIJ. Does the local part of MPIBAIJ consist of one or two SeqBAIJ? Can you show me a similar example of writing a method for MPIBAIJ based on the SeqBAIJ method? The MatInvertBlockDiagonal is not similar, as that method only involves the diagonal part A and without the off-diagonal part B. Thank you. Xiangdong On Wed, Feb 14, 2018 at 2:57 PM, Smith, Barry F. wrote: > > In the PETSc git branch barry/feature-baij-blockdiagonal-scale I have > done the "heavy lifting" for what you need. See > https://bitbucket.org/petsc/petsc/branch/barry/feature- > baij-blockdiagonal-scale > > It scales the Seq BAIJ matrix by its block diagonal. You will need to > write a routine to also scale the right hand side vector by the block > diagonal and then you can try the preconditioner for sequential code. Write > something like VecBlockDiagonalScale(Vec,const PetscScalar *). You get > the block size from the vector. > > > Later you or I can add the parallel version (not much more difficult). I > don't have time to work on it now. > > Let us know if you have any difficulties. > > > Barry > > > > On Feb 14, 2018, at 9:10 AM, Xiangdong wrote: > > > > The idea goes back to the alternate-block-factorization (ABF) method > > > > https://link.springer.com/article/10.1007/BF01932753 > > > > and is widely used in the reservoir simulation, where the unknowns are > pressure and saturation. Although the coupled equations are parabolic, the > pressure equations/variables are more elliptic and the saturation equations > are more hyperbolic. People always decouple the transformed linear equation > to obtain a better (more elliptical) pressure matrix and then apply the AMG > preconditioner on the decoupled matrix. > > > > https://link.springer.com/article/10.1007/s00791-016-0273-3 > > > > Thanks. > > > > Xiangdong > > > > On Wed, Feb 14, 2018 at 9:49 AM, Smith, Barry F. > wrote: > > > > Hmm, I never had this idea presented to me, I have no way to know if > it is particularly good or bad. So essentially you transform the matrix > "decoupling the physics alone the diagonal" and then do PCFIELDSPLIT > instead of using PCFIELDSPLIT directly on the original equations. > > > > Maybe in the long run this should be an option to PCFIEDLSPLIT. In > general we like the solvers to manage any transformations, not require > transformations before calling the solvers. I have to think about this. > > > > Barry > > > > > > > On Feb 14, 2018, at 8:29 AM, Xiangdong wrote: > > > > > > The reason for the operation invdiag(A)*A is to have a decoupled > matrix/physics for preconditioning. For example, after the transformation, > the diagonal block is identity matrix ( e.g. [1,0,0;0,1,0;0,0,1] for > bs=3). One can extract a submatrix (e.g. corresponding to only first > unknown) and apply special preconditioners for the extracted/decoupled > matrix. 
The motivation is that after the transformation, one can get a > better decoupled matrix to preserve the properties of the unknowns. > > > > > > Thanks. > > > > > > Xiangdong > > > > > > On Tue, Feb 13, 2018 at 6:27 PM, Smith, Barry F. > wrote: > > > > > > In general you probably don't want to do this. Most good > preconditioners (like AMG) rely on the matrix having the "natural" scaling > that arises from the discretization and doing a scaling like you describe > destroys that natural scaling. You can use PCPBJACOBI to use point block > Jacobi preconditioner on the matrix without needing to do the scaling up > front. The ILU preconditioners for BAIJ matrices work directly with the > block structure so again pre-scaling the matrix buys you nothing. PETSc > doesn't have any particularly efficient routines for computing what you > desire, the only way to get something truly efficient is to write the code > directly using the BAIJ data structure, doable but probably not worth it. > > > > > > Barry > > > > > > > > > > On Feb 13, 2018, at 5:21 PM, Xiangdong wrote: > > > > > > > > Hello everyone, > > > > > > > > I have a block sparse matrices A created from the DMDA3d. Before > passing the matrix to ksp solver, I want to apply a transformation to this > matrix: namely A:= invdiag(A)*A. Here invdiag(A) is the inverse of the > block diagonal of A. What is the best way to get the transformed matrix? > > > > > > > > At this moment, I created a new mat IDA=inv(diag(A)) by looping > through each row and call MatMatMult to get B=invdiag(A)*A, then destroy > the temporary matrix B. However, I prefer the in-place transformation if > possible, namely, without the additional matrix B for memory saving purpose. > > > > > > > > Do you have any suggestion on compute invdiag(A)*A for mpibaij > matrix? > > > > > > > > Thanks for your help. > > > > > > > > Best, > > > > Xiangdong > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From damon at ices.utexas.edu Mon Feb 19 11:12:06 2018 From: damon at ices.utexas.edu (Damon McDougall) Date: Mon, 19 Feb 2018 11:12:06 -0600 Subject: [petsc-users] The 9th Annual Scientific Software Days Conference 26th-27th April 2018 Message-ID: <1519060326.495503.1275976040.39C0924A@webmail.messagingengine.com> I believe this conference would be of interest to folks on this mailing list. We have travel support available. Please pass along this information to any interested parties. Here are the details: ============================= The 9th Annual Scientific Software Days Conference (SSD) targets users and developers of scientific software. The conference will be held at the University of Texas at Austin Thursday Apr 26 - Friday Apr 27, 2018 and focuses on two themes: a) sharing best practices across scientific software communities; b) sharing the latest tools and technology relevant to scientific software. In addition, we solicit poster submissions that share novel uses of scientific software. Please send an abstract of less than 250 words to ssd-organizers at googlegroups.com. Limited travel funding for (a) students and early career researchers presenting posters; and (b) members of under-represented groups is available. If you'd like to apply for funding please fill out the funding application form on our website: http://scisoftdays.org Registration fees: Students: $25 Everyone else: $50 More details are on the website: http://scisoftdays.org/ Regards, S. Fomel (UTexas), A. Vargas (LLNL), M. 
Knepley (Rice), R. Kirby (Baylor), J. Proft (UTexas), D. McDougall (UTexas), S. Pierce (UTexas), B. Adams (Sandia) ============================= From t.appel17 at imperial.ac.uk Mon Feb 19 12:15:57 2018 From: t.appel17 at imperial.ac.uk (Thibaut Appel) Date: Mon, 19 Feb 2018 18:15:57 +0000 Subject: [petsc-users] [SLEPc] Performance of Krylov-Schur with MUMPS-based shift-and-invert Message-ID: <31fdfe68-4e4c-804f-225f-2a34f47210e8@imperial.ac.uk> Good afternoon, I am solving generalized eigenvalue problems {Ax = omegaBx} in complex arithmetic, where A is non-hermitian and B is singular. I think the only way to get round the singularity is to employ a shift-and-invert method, where I am using MUMPS to invert the shifted matrix. I am using the Fortran interface of PETSc 3.8.3 and SLEPc 3.8.2 where my ./configure line was ./configure --with-fortran-kernels=1 --with-scalar-type=complex --with-blaslapack-dir=/home/linuxbrew/.linuxbrew/opt/openblas --PETSC_ARCH=cplx_dble_optim --with-cmake-dir=/home/linuxbrew/.linuxbrew/opt/cmake --with-mpi-dir=/home/linuxbrew/.linuxbrew/opt/openmpi --with-debugging=0 --download-scalapack --download-mumps --COPTFLAGS="-O3 -march=native" --CXXOPTFLAGS="-O3 -march=native" --FOPTFLAGS="-O3 -march=native" My matrices A and B are assembled correctly in parallel and my preallocation is quasi-optimal in the sense that I don't have any called to mallocs but I may overestimate the required memory for some rows of the matrices. Here is how I setup the EPS problem and solve: ??? CALL EPSSetProblemType(eps,EPS_GNHEP,ierr) ??? CALL EPSSetOperators(eps,MatA,MatB,ierr) ??? CALL EPSSetType(eps,EPSKRYLOVSCHUR,ierr) ??? CALL EPSSetDimensions(eps,nev,ncv,PETSC_DECIDE,ierr) ??? CALL EPSSetTolerances(eps,tol_ev,PETSC_DECIDE,ierr) ??? CALL EPSSetFromOptions(eps,ierr) ??? CALL EPSSetTarget(eps,shift,ierr) ??? CALL EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE,ierr) ??? CALL EPSGetST(eps,st,ierr) ??? CALL STGetKSP(st,ksp,ierr) ??? CALL KSPGetPC(ksp,pc,ierr) ??? CALL STSetType(st,STSINVERT,ierr) ??? CALL KSPSetType(ksp,KSPPREONLY,ierr) ??? CALL PCSetType(pc,PCLU,ierr) ??? CALL PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS,ierr) ??? CALL PCSetFromOptions(pc,ierr) ??? CALL EPSSolve(eps,ierr) ??? CALL EPSGetIterationNumber(eps,iter,ierr) ??? CALL EPSGetConverged(eps,nev_conv,ierr) ??? - Using one MPI process, it takes 1 hour and 22 minutes to retrieve 250 eigenvalues with a Krylov subspace of size 500, a tolerance of 10^-12 when the leading dimension of the matrices is 405000. My matrix A has 98,415,000 non-zero elements and B has 1,215,000 non zero elements. Would you be shocked by that computation time? I would have expected something much lower given the values of nev and ncv I have but could be completely wrong in my understanding of the Krylov-Schur method. ??? - My goal is speed and reliability. Is there anything you notice in my EPS solver that could be improved or corrected? I remember an exchange with Jose E. Roman where he said that the parameters of MUMPS are not worth being changed, however I notice some people play with the -mat_mumps_cntl_1 and? -mat_mumps_cntl_3 which control the relative/absolute pivoting threshold? ??? - Would you advise the use of EPSSetTrueResidual and EPSSetBalance since I am using a spectral transformation? ??? - Would you see anything that would prevent me from getting speedup in parallel executions? 
Thank you very much in advance and I look forward to exchanging with you about these different points, Thibaut From jroman at dsic.upv.es Mon Feb 19 12:36:18 2018 From: jroman at dsic.upv.es (Jose E. Roman) Date: Mon, 19 Feb 2018 19:36:18 +0100 Subject: [petsc-users] [SLEPc] Performance of Krylov-Schur with MUMPS-based shift-and-invert In-Reply-To: <31fdfe68-4e4c-804f-225f-2a34f47210e8@imperial.ac.uk> References: <31fdfe68-4e4c-804f-225f-2a34f47210e8@imperial.ac.uk> Message-ID: > El 19 feb 2018, a las 19:15, Thibaut Appel escribi?: > > Good afternoon, > > I am solving generalized eigenvalue problems {Ax = omegaBx} in complex arithmetic, where A is non-hermitian and B is singular. I think the only way to get round the singularity is to employ a shift-and-invert method, where I am using MUMPS to invert the shifted matrix. > > I am using the Fortran interface of PETSc 3.8.3 and SLEPc 3.8.2 where my ./configure line was > ./configure --with-fortran-kernels=1 --with-scalar-type=complex --with-blaslapack-dir=/home/linuxbrew/.linuxbrew/opt/openblas --PETSC_ARCH=cplx_dble_optim --with-cmake-dir=/home/linuxbrew/.linuxbrew/opt/cmake --with-mpi-dir=/home/linuxbrew/.linuxbrew/opt/openmpi --with-debugging=0 --download-scalapack --download-mumps --COPTFLAGS="-O3 -march=native" --CXXOPTFLAGS="-O3 -march=native" --FOPTFLAGS="-O3 -march=native" > > My matrices A and B are assembled correctly in parallel and my preallocation is quasi-optimal in the sense that I don't have any called to mallocs but I may overestimate the required memory for some rows of the matrices. Here is how I setup the EPS problem and solve: > > CALL EPSSetProblemType(eps,EPS_GNHEP,ierr) > CALL EPSSetOperators(eps,MatA,MatB,ierr) > CALL EPSSetType(eps,EPSKRYLOVSCHUR,ierr) > CALL EPSSetDimensions(eps,nev,ncv,PETSC_DECIDE,ierr) > CALL EPSSetTolerances(eps,tol_ev,PETSC_DECIDE,ierr) > > CALL EPSSetFromOptions(eps,ierr) > CALL EPSSetTarget(eps,shift,ierr) > CALL EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE,ierr) > > CALL EPSGetST(eps,st,ierr) > CALL STGetKSP(st,ksp,ierr) > CALL KSPGetPC(ksp,pc,ierr) > > CALL STSetType(st,STSINVERT,ierr) > CALL KSPSetType(ksp,KSPPREONLY,ierr) > CALL PCSetType(pc,PCLU,ierr) > > CALL PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS,ierr) > CALL PCSetFromOptions(pc,ierr) > > CALL EPSSolve(eps,ierr) > CALL EPSGetIterationNumber(eps,iter,ierr) > CALL EPSGetConverged(eps,nev_conv,ierr) The settings seem ok. You can use -eps_view to make sure that everything is set as you want. > > - Using one MPI process, it takes 1 hour and 22 minutes to retrieve 250 eigenvalues with a Krylov subspace of size 500, a tolerance of 10^-12 when the leading dimension of the matrices is 405000. My matrix A has 98,415,000 non-zero elements and B has 1,215,000 non zero elements. Would you be shocked by that computation time? I would have expected something much lower given the values of nev and ncv I have but could be completely wrong in my understanding of the Krylov-Schur method. If you run with -log_view you will see the breakup in the different steps. Most probably a large percentage of the time is in the factorization of the matrix (MatLUFactorSym and MatLUFactorNum). The matrix is quite dense (about 250 nonzero elements per row), so factorizing it is costly. You may want to try inexact shift-and-invert with an iterative method, but you will need a good preconditioner. 
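For reference, one way to experiment with that inexact shift-and-invert suggestion, assuming the inner KSP and PC are left to the options database (for example by dropping the hard-coded KSPPREONLY/PCLU calls and relying on STSetFromOptions), is to pass options with the ST prefix at run time. The gmres/asm/ilu combination and the tolerance below are only illustrative placeholders, not a recommendation specific to this problem:

  -st_ksp_type gmres -st_ksp_rtol 1e-9 -st_pc_type asm -st_sub_pc_type ilu

Whether this beats the MUMPS factorization depends entirely on how well the chosen preconditioner handles the shifted matrix (A - sigma*B).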
The time needed for the other steps may be reduced a little bit by setting a smaller subspace size, for instance with -eps_mpd 200 > > - My goal is speed and reliability. Is there anything you notice in my EPS solver that could be improved or corrected? I remember an exchange with Jose E. Roman where he said that the parameters of MUMPS are not worth being changed, however I notice some people play with the -mat_mumps_cntl_1 and -mat_mumps_cntl_3 which control the relative/absolute pivoting threshold? Yes, you can try tuning MUMPS options. Maybe they are relevant in your application. > > - Would you advise the use of EPSSetTrueResidual and EPSSetBalance since I am using a spectral transformation? Are you getting large residual norms? I would not suggest using EPSSetTrueResidual() because it may prevent convergence, especially if the target is not very close to the wanted eigenvalues. EPSSetBalance() sometimes helps, even in the case of using spectral transformation. It is intended for ill-conditioned problems where the obtained residuals are not so good. > > - Would you see anything that would prevent me from getting speedup in parallel executions? I guess it will depend on how MUMPS scales for your problem. Jose > > Thank you very much in advance and I look forward to exchanging with you about these different points, > > Thibaut > From danyang.su at gmail.com Mon Feb 19 14:11:18 2018 From: danyang.su at gmail.com (Danyang Su) Date: Mon, 19 Feb 2018 12:11:18 -0800 Subject: [petsc-users] how to check if cell is local owned in DMPlex Message-ID: Hi Matt, Would you please let me know how to check if a cell is local owned? When overlap is 0 in DMPlexDistribute, all the cells are local owned. How about overlap > 0? It sounds like impossible to check by node because a cell can be local owned even if none of the nodes in this cell is local owned. Thanks, Danyang From tsltaywb at nus.edu.sg Tue Feb 20 01:56:07 2018 From: tsltaywb at nus.edu.sg (TAY Wee Beng) Date: Tue, 20 Feb 2018 15:56:07 +0800 Subject: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3 Message-ID: Hi, I was previously using PETSc 3.7.6 on different clusters with both Intel Fortran and GNU Fortran. After upgrading, I met some problems when trying to compile: On Intel Fortran: Previously, I was using: #include "petsc/finclude/petsc.h90" in *.F90 when requires the use of PETSc I read in the change log that h90 is no longer there and so I replaced with #include "petsc/finclude/petsc.h" It worked. But I also have some *.F90 which do not use PETSc. However, they use some modules which uses PETSc. Now I can't compile them. The error is : math_routine.f90(3): error #7002: Error in opening the compiled module file. Check INCLUDE paths. [PETSC] use mpi_subroutines mpi_subroutines is a module which uses PETSc, and it compiled w/o problem. The solution is that I have to compile e.g. math_routine.F90 as if they use PETSc, by including PETSc include and lib files. May I know why this is so? It was not necessary before. Anyway, it managed to compile until it reached hypre.F90. Previously, due to some bugs, I have to compile hypre with the -r8 option. Also, I have to use: integer(8) mpi_comm mpi_comm = MPI_COMM_WORLD to make my codes work with HYPRE. But now, compiling gives the error: hypre.F90(11): error #6401: The attributes of this name conflict with those made accessible by a USE statement. 
[MPI_COMM]
integer(8) mpi_comm
--------------------------------------^
hypre.F90(84): error #6478: A type-name must not be used as a variable. [MPI_COMM]
mpi_comm = MPI_COMM_WORLD
----^
hypre.F90(84): error #6303: The assignment operation or the binary expression operation is invalid for the data types of the two operands. [1140850688]
mpi_comm = MPI_COMM_WORLD
---------------^
hypre.F90(100): error #6478: A type-name must not be used as a variable. [MPI_COMM]
call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr)
...

What's actually happening? Why can't I compile now?

On GNU gfortran:

I tried to use similar tactics as above here. However, when compiling math_routine.F90, I got the error:

math_routine.F90:1333:21:
call subb(orig,vert1,tvec)
                     1
Error: Invalid procedure argument at (1)
math_routine.F90:1339:18:
qvec = cross_pdt2(tvec,edge1)
                  1
Error: Invalid procedure argument at (1)
math_routine.F90:1345:21:
uu = dot_product(tvec,pvec)
                     1
Error: 'vector_a' argument of 'dot_product' intrinsic at (1) must be numeric or LOGICAL
math_routine.F90:1371:21:
uu = dot_product(tvec,pvec)

These errors were not present before. My variables are mostly vectors:

real(8), intent(in) :: orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3)
real(8) :: uu,vv,dir(3)
real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t

I wonder what happened? Please advise.

--
Thank you very much.

Yours sincerely,

================================================
TAY Wee-Beng
Research Scientist
Experimental AeroScience Group
Temasek Laboratories
National University of Singapore
T-Lab Building
5A, Engineering Drive 1, #02-02
Singapore 117411
Phone: +65 65167330
E-mail: tsltaywb at nus.edu.sg
http://www.temasek-labs.nus.edu.sg/program/program_aeroexperimental_tsltaywb.php
Personal research webpage: http://tayweebeng.wixsite.com/website
Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA
linkedin: www.linkedin.com/in/tay-weebeng
================================================

________________________________

Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you.

From zonexo at gmail.com Tue Feb 20 02:16:56 2018 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 20 Feb 2018 16:16:56 +0800 Subject: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3 Message-ID: <3415e841-36ee-51dd-678a-f6c7172093bf@gmail.com>

Sorry I sent an email from the wrong email address. Anyway, here's the email:

Hi,

I was previously using PETSc 3.7.6 on different clusters with both Intel Fortran and GNU Fortran. After upgrading, I met some problems when trying to compile:

On Intel Fortran:

Previously, I was using:

#include "petsc/finclude/petsc.h90"

in *.F90 files that require the use of PETSc.

I read in the change log that h90 is no longer there and so I replaced it with #include "petsc/finclude/petsc.h"

It worked. But I also have some *.F90 files which do not use PETSc. However, they use some modules which use PETSc.

Now I can't compile them. The error is:

math_routine.f90(3): error #7002: Error in opening the compiled module file. Check INCLUDE paths. [PETSC]
use mpi_subroutines

mpi_subroutines is a module which uses PETSc, and it compiled w/o problem.

The solution is that I have to compile e.g. math_routine.F90 as if they use PETSc, by including PETSc include and lib files.
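A minimal sketch of what such a wrapper module looks like under the 3.8 conventions is below; the module and routine names are illustrative only and are not taken from the actual code discussed here:

module petsc_wrappers
! 3.8 convention: preprocess the finclude header, then use the matching PETSc module
#include "petsc/finclude/petscsys.h"
   use petscsys
   implicit none
   PetscErrorCode :: ierr
contains
   subroutine start_petsc()
      ! initialize PETSc once for the whole application
      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      CHKERRQ(ierr)
   end subroutine start_petsc
end module petsc_wrappers

A source file that only does "use petsc_wrappers" still needs the PETSc include path at compile time, since the compiler has to locate the petsc*.mod files referenced by the wrapper module; that is presumably what the workaround above amounts to.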
May I know why this is so? It was not necessary before. Anyway, it managed to compile until it reached hypre.F90. Previously, due to some bugs, I have to compile hypre with the -r8 option. Also, I have to use: integer(8) mpi_comm mpi_comm = MPI_COMM_WORLD to make my codes work with HYPRE. But now, compiling gives the error: hypre.F90(11): error #6401: The attributes of this name conflict with those made accessible by a USE statement.?? [MPI_COMM] integer(8) mpi_comm --------------------------------------^ hypre.F90(84): error #6478: A type-name must not be used as a variable.?? [MPI_COMM] ??? mpi_comm = MPI_COMM_WORLD ----^ hypre.F90(84): error #6303: The assignment operation or the binary expression operation is invalid for the data types of the two operands.?? [1140850688] ??? mpi_comm = MPI_COMM_WORLD ---------------^ hypre.F90(100): error #6478: A type-name must not be used as a variable.?? [MPI_COMM] ??????? call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr) ... What's actually happening? Why can't I compile now? On GNU gfortran: I tried to use similar tactics as above here. However, when compiling math_routine.F90, I got the error: math_routine.F90:1333:21: ?call subb(orig,vert1,tvec) ???????????????????? 1 Error: Invalid procedure argument at (1) math_routine.F90:1339:18: ?qvec = cross_pdt2(tvec,edge1) ????????????????? 1 Error: Invalid procedure argument at (1) math_routine.F90:1345:21: ???? uu = dot_product(tvec,pvec) ???????????????????? 1 Error: ?vector_a? argument of ?dot_product? intrinsic at (1) must be numeric or LOGICAL math_routine.F90:1371:21: ???? uu = dot_product(tvec,pvec) These errors were not present before. My variables are mostly vectors: real(8), intent(in) :: orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3) real(8) :: uu,vv,dir(3) real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t I wonder what happened? Please advice. -- Thank you very much. Yours sincerely, ================================================ TAY Wee-Beng (Zheng Weiming) ??? Personal research webpage: http://tayweebeng.wixsite.com/website Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA linkedin: www.linkedin.com/in/tay-weebeng ================================================ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Feb 20 08:35:09 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 20 Feb 2018 14:35:09 +0000 Subject: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3 In-Reply-To: References: Message-ID: <34377631-9561-4B61-A350-6F597356C30F@anl.gov> Please run a clean compile of everything and cut and paste all the output. This will make it much easier to debug than trying to understand your snippets of what is going wrong. > On Feb 20, 2018, at 1:56 AM, TAY Wee Beng wrote: > > Hi, > > I was previously using PETSc 3.7.6 on different clusters with both Intel > Fortran and GNU Fortran. After upgrading, I met some problems when > trying to compile: > > On Intel Fortran: > > Previously, I was using: > > #include "petsc/finclude/petsc.h90" > > in *.F90 when requires the use of PETSc > > I read in the change log that h90 is no longer there and so I replaced > with #include "petsc/finclude/petsc.h" > > It worked. But I also have some *.F90 which do not use PETSc. However, > they use some modules which uses PETSc. > > Now I can't compile them. The error is : > > math_routine.f90(3): error #7002: Error in opening the compiled module > file. 
Check INCLUDE paths. [PETSC] > use mpi_subroutines > > mpi_subroutines is a module which uses PETSc, and it compiled w/o problem. > > The solution is that I have to compile e.g. math_routine.F90 as if they > use PETSc, by including PETSc include and lib files. > > May I know why this is so? It was not necessary before. > > Anyway, it managed to compile until it reached hypre.F90. > > Previously, due to some bugs, I have to compile hypre with the -r8 > option. Also, I have to use: > > integer(8) mpi_comm > > mpi_comm = MPI_COMM_WORLD > > to make my codes work with HYPRE. > > But now, compiling gives the error: > > hypre.F90(11): error #6401: The attributes of this name conflict with > those made accessible by a USE statement. [MPI_COMM] > integer(8) mpi_comm > --------------------------------------^ > hypre.F90(84): error #6478: A type-name must not be used as a > variable. [MPI_COMM] > mpi_comm = MPI_COMM_WORLD > ----^ > hypre.F90(84): error #6303: The assignment operation or the binary > expression operation is invalid for the data types of the two > operands. [1140850688] > mpi_comm = MPI_COMM_WORLD > ---------------^ > hypre.F90(100): error #6478: A type-name must not be used as a > variable. [MPI_COMM] > call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr) > ... > > What's actually happening? Why can't I compile now? > > On GNU gfortran: > > I tried to use similar tactics as above here. However, when compiling > math_routine.F90, I got the error: > > math_routine.F90:1333:21: > > call subb(orig,vert1,tvec) > 1 > Error: Invalid procedure argument at (1) > math_routine.F90:1339:18: > > qvec = cross_pdt2(tvec,edge1) > 1 > Error: Invalid procedure argument at (1) > math_routine.F90:1345:21: > > uu = dot_product(tvec,pvec) > 1 > Error: ?vector_a? argument of ?dot_product? intrinsic at (1) must be > numeric or LOGICAL > math_routine.F90:1371:21: > > uu = dot_product(tvec,pvec) > > These errors were not present before. My variables are mostly vectors: > > real(8), intent(in) :: > orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3) > > real(8) :: uu,vv,dir(3) > > real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t > > I wonder what happened? > > Please advice. > > > -- > Thank you very much. > > Yours sincerely, > > ================================================ > TAY Wee-Beng ??? > Research Scientist > Experimental AeroScience Group > Temasek Laboratories > National University of Singapore > T-Lab Building > 5A, Engineering Drive 1, #02-02 > Singapore 117411 > Phone: +65 65167330 > E-mail: tsltaywb at nus.edu.sg > http://www.temasek-labs.nus.edu.sg/program/program_aeroexperimental_tsltaywb.php > Personal research webpage: http://tayweebeng.wixsite.com/website > Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA > linkedin: www.linkedin.com/in/tay-weebeng > ================================================ > > > ________________________________ > > Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you. From jroman at dsic.upv.es Tue Feb 20 08:46:48 2018 From: jroman at dsic.upv.es (Jose E. 
Roman) Date: Tue, 20 Feb 2018 15:46:48 +0100 Subject: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3 In-Reply-To: <34377631-9561-4B61-A350-6F597356C30F@anl.gov> References: <34377631-9561-4B61-A350-6F597356C30F@anl.gov> Message-ID: <52FBAFFC-573A-42A3-9E09-3F2532F27A62@dsic.upv.es> Probably the first error is produced by using a variable (mpi_comm) with the same name as an MPI type. The second error I guess is due to variable tvec, since a Fortran type tVec is now being defined in src/vec/f90-mod/petscvec.h Jose > El 20 feb 2018, a las 15:35, Smith, Barry F. escribi?: > > > Please run a clean compile of everything and cut and paste all the output. This will make it much easier to debug than trying to understand your snippets of what is going wrong. > >> On Feb 20, 2018, at 1:56 AM, TAY Wee Beng wrote: >> >> Hi, >> >> I was previously using PETSc 3.7.6 on different clusters with both Intel >> Fortran and GNU Fortran. After upgrading, I met some problems when >> trying to compile: >> >> On Intel Fortran: >> >> Previously, I was using: >> >> #include "petsc/finclude/petsc.h90" >> >> in *.F90 when requires the use of PETSc >> >> I read in the change log that h90 is no longer there and so I replaced >> with #include "petsc/finclude/petsc.h" >> >> It worked. But I also have some *.F90 which do not use PETSc. However, >> they use some modules which uses PETSc. >> >> Now I can't compile them. The error is : >> >> math_routine.f90(3): error #7002: Error in opening the compiled module >> file. Check INCLUDE paths. [PETSC] >> use mpi_subroutines >> >> mpi_subroutines is a module which uses PETSc, and it compiled w/o problem. >> >> The solution is that I have to compile e.g. math_routine.F90 as if they >> use PETSc, by including PETSc include and lib files. >> >> May I know why this is so? It was not necessary before. >> >> Anyway, it managed to compile until it reached hypre.F90. >> >> Previously, due to some bugs, I have to compile hypre with the -r8 >> option. Also, I have to use: >> >> integer(8) mpi_comm >> >> mpi_comm = MPI_COMM_WORLD >> >> to make my codes work with HYPRE. >> >> But now, compiling gives the error: >> >> hypre.F90(11): error #6401: The attributes of this name conflict with >> those made accessible by a USE statement. [MPI_COMM] >> integer(8) mpi_comm >> --------------------------------------^ >> hypre.F90(84): error #6478: A type-name must not be used as a >> variable. [MPI_COMM] >> mpi_comm = MPI_COMM_WORLD >> ----^ >> hypre.F90(84): error #6303: The assignment operation or the binary >> expression operation is invalid for the data types of the two >> operands. [1140850688] >> mpi_comm = MPI_COMM_WORLD >> ---------------^ >> hypre.F90(100): error #6478: A type-name must not be used as a >> variable. [MPI_COMM] >> call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr) >> ... >> >> What's actually happening? Why can't I compile now? >> >> On GNU gfortran: >> >> I tried to use similar tactics as above here. However, when compiling >> math_routine.F90, I got the error: >> >> math_routine.F90:1333:21: >> >> call subb(orig,vert1,tvec) >> 1 >> Error: Invalid procedure argument at (1) >> math_routine.F90:1339:18: >> >> qvec = cross_pdt2(tvec,edge1) >> 1 >> Error: Invalid procedure argument at (1) >> math_routine.F90:1345:21: >> >> uu = dot_product(tvec,pvec) >> 1 >> Error: ?vector_a? argument of ?dot_product? intrinsic at (1) must be >> numeric or LOGICAL >> math_routine.F90:1371:21: >> >> uu = dot_product(tvec,pvec) >> >> These errors were not present before. 
My variables are mostly vectors: >> >> real(8), intent(in) :: >> orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3) >> >> real(8) :: uu,vv,dir(3) >> >> real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t >> >> I wonder what happened? >> >> Please advice. >> >> >> -- >> Thank you very much. >> >> Yours sincerely, >> >> ================================================ >> TAY Wee-Beng ??? >> Research Scientist >> Experimental AeroScience Group >> Temasek Laboratories >> National University of Singapore >> T-Lab Building >> 5A, Engineering Drive 1, #02-02 >> Singapore 117411 >> Phone: +65 65167330 >> E-mail: tsltaywb at nus.edu.sg >> http://www.temasek-labs.nus.edu.sg/program/program_aeroexperimental_tsltaywb.php >> Personal research webpage: http://tayweebeng.wixsite.com/website >> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >> linkedin: www.linkedin.com/in/tay-weebeng >> ================================================ >> >> >> ________________________________ >> >> Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you. > From knepley at gmail.com Mon Feb 19 17:30:00 2018 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 19 Feb 2018 18:30:00 -0500 Subject: [petsc-users] *****SPAM*****Re: how to check if cell is local owned in DMPlex In-Reply-To: References: Message-ID: On Mon, Feb 19, 2018 at 3:11 PM, Danyang Su wrote: > Hi Matt, > > Would you please let me know how to check if a cell is local owned? When > overlap is 0 in DMPlexDistribute, all the cells are local owned. How about > overlap > 0? It sounds like impossible to check by node because a cell can > be local owned even if none of the nodes in this cell is local owned. > If a cell is in the PetscSF, then it is not locally owned. The local nodes in the SF are sorted, so I use PetscFindInt ( http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscFindInt.html ). Thanks, Matt > Thanks, > > Danyang > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From aliberkkahraman at yahoo.com Tue Feb 20 10:30:40 2018 From: aliberkkahraman at yahoo.com (Ali Berk Kahraman) Date: Tue, 20 Feb 2018 19:30:40 +0300 Subject: [petsc-users] Installation without BLAS/LAPACK Message-ID: <9807efca-0dae-118e-af80-299812d9f306@yahoo.com> Hello All, I have access to a common computer in my school, and I want to use petsc on it. The problem is that I do not have root access, and neither do I want it. The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc somehow without having any BLAS commands? If not, can I install BLAS somehow only on my own folder (/home/myfolder) without touching anything inside /usr/ folder? 
Best Regards, Ali Berk Kahraman From balay at mcs.anl.gov Tue Feb 20 10:34:26 2018 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 20 Feb 2018 10:34:26 -0600 Subject: [petsc-users] Installation without BLAS/LAPACK In-Reply-To: <9807efca-0dae-118e-af80-299812d9f306@yahoo.com> References: <9807efca-0dae-118e-af80-299812d9f306@yahoo.com> Message-ID: On Tue, 20 Feb 2018, Ali Berk Kahraman wrote: > Hello All, > > I have access to a common computer in my school, and I want to use petsc on > it. The problem is that I do not have root access, and neither do I want it. > The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc > somehow without having any BLAS commands? If not, can I install BLAS somehow > only on my own folder (/home/myfolder) without touching anything inside /usr/ > folder? You don't need root access to install/use PETSc. And you can ask petsc configure to install any required or missing packages. ./configure CC=gcc FC=gfortran CXX=g++ --download-blaslapack --download-mpich make If you wish to install PETSc with a preinstalled mpi - you can do: ./configure CC=mpicc FC=mpif90 CXX=mpicxx --download-blaslapack make Satish From balay at mcs.anl.gov Tue Feb 20 10:36:54 2018 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 20 Feb 2018 10:36:54 -0600 Subject: [petsc-users] Installation without BLAS/LAPACK In-Reply-To: References: <9807efca-0dae-118e-af80-299812d9f306@yahoo.com> Message-ID: On Tue, 20 Feb 2018, Satish Balay wrote: > On Tue, 20 Feb 2018, Ali Berk Kahraman wrote: > > > Hello All, > > > > I have access to a common computer in my school, and I want to use petsc on > > it. The problem is that I do not have root access, and neither do I want it. > > The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc > > somehow without having any BLAS commands? If not, can I install BLAS somehow > > only on my own folder (/home/myfolder) without touching anything inside /usr/ > > folder? > > You don't need root access to install/use PETSc. > > And you can ask petsc configure to install any required or missing packages. > > ./configure CC=gcc FC=gfortran CXX=g++ --download-blaslapack --download-mpich > make > > If you wish to install PETSc with a preinstalled mpi - you can do: > > ./configure CC=mpicc FC=mpif90 CXX=mpicxx --download-blaslapack > make ops - that should be: --download-fblaslapack Satish From aliberkkahraman at yahoo.com Tue Feb 20 10:41:02 2018 From: aliberkkahraman at yahoo.com (Ali Berk Kahraman) Date: Tue, 20 Feb 2018 19:41:02 +0300 Subject: [petsc-users] Installation without BLAS/LAPACK In-Reply-To: References: <9807efca-0dae-118e-af80-299812d9f306@yahoo.com> Message-ID: When I call --download-blaslapack, what does it do exactly? Where does it install the library? Does it touch anything anything else (such as updating versions of mpicc) ? My concern is that if I call download-blaslapack I will "change" some stuff in the /usr/bin directory that might disable some other program, package installed on the computer. On 20-02-2018 19:34, Satish Balay wrote: > On Tue, 20 Feb 2018, Ali Berk Kahraman wrote: > >> Hello All, >> >> I have access to a common computer in my school, and I want to use petsc on >> it. The problem is that I do not have root access, and neither do I want it. >> The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc >> somehow without having any BLAS commands? If not, can I install BLAS somehow >> only on my own folder (/home/myfolder) without touching anything inside /usr/ >> folder? 
> You don't need root access to install/use PETSc. > > And you can ask petsc configure to install any required or missing packages. > > ./configure CC=gcc FC=gfortran CXX=g++ --download-blaslapack --download-mpich > make > > If you wish to install PETSc with a preinstalled mpi - you can do: > > ./configure CC=mpicc FC=mpif90 CXX=mpicxx --download-blaslapack > make > > Satish From bsmith at mcs.anl.gov Tue Feb 20 10:45:44 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 20 Feb 2018 16:45:44 +0000 Subject: [petsc-users] Installation without BLAS/LAPACK In-Reply-To: <9807efca-0dae-118e-af80-299812d9f306@yahoo.com> References: <9807efca-0dae-118e-af80-299812d9f306@yahoo.com> Message-ID: <1F49A4D7-DFB2-4901-891D-681AA4EB7952@anl.gov> > On Feb 20, 2018, at 10:30 AM, Ali Berk Kahraman wrote: > > Hello All, > > I have access to a common computer in my school, and I want to use petsc on it. The problem is that I do not have root access, and neither do I want it. The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc somehow without having any BLAS commands? If not, can I install BLAS somehow only on my own folder (/home/myfolder) without touching anything inside /usr/ folder? Of course, PETSc NEVER requires you to have any kind of root access to install it and its packages. Just use --download-fblaslapack as a ./configure option. It will keep the BLASLAPACK inside the PETSc directory with the PETSc libraries. Barry > > Best Regards, > > Ali Berk Kahraman > > From bsmith at mcs.anl.gov Tue Feb 20 10:46:52 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 20 Feb 2018 16:46:52 +0000 Subject: [petsc-users] Installation without BLAS/LAPACK In-Reply-To: References: <9807efca-0dae-118e-af80-299812d9f306@yahoo.com> Message-ID: > On Feb 20, 2018, at 10:41 AM, Ali Berk Kahraman wrote: > > When I call --download-blaslapack, what does it do exactly? Where does it install the library? Does it touch anything anything else (such as updating versions of mpicc) ? My concern is that if I call download-blaslapack I will "change" some stuff in the /usr/bin directory that might disable some other program, package installed on the computer. It puts everything in your PETSC_DIR and does not, nor could not change anything in /usr Barry > > > On 20-02-2018 19:34, Satish Balay wrote: >> On Tue, 20 Feb 2018, Ali Berk Kahraman wrote: >> >>> Hello All, >>> >>> I have access to a common computer in my school, and I want to use petsc on >>> it. The problem is that I do not have root access, and neither do I want it. >>> The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc >>> somehow without having any BLAS commands? If not, can I install BLAS somehow >>> only on my own folder (/home/myfolder) without touching anything inside /usr/ >>> folder? >> You don't need root access to install/use PETSc. >> >> And you can ask petsc configure to install any required or missing packages. 
>> >> ./configure CC=gcc FC=gfortran CXX=g++ --download-blaslapack --download-mpich >> make >> >> If you wish to install PETSc with a preinstalled mpi - you can do: >> >> ./configure CC=mpicc FC=mpif90 CXX=mpicxx --download-blaslapack >> make >> >> Satish > From aliberkkahraman at yahoo.com Tue Feb 20 10:48:24 2018 From: aliberkkahraman at yahoo.com (Ali Berk Kahraman) Date: Tue, 20 Feb 2018 19:48:24 +0300 Subject: [petsc-users] Installation without BLAS/LAPACK In-Reply-To: References: <9807efca-0dae-118e-af80-299812d9f306@yahoo.com> Message-ID: <27814474-abe9-1fba-ea46-8b4cc5de8b11@yahoo.com> Exactly the words I wanted to hear. Thank you very much. On 20-02-2018 19:46, Smith, Barry F. wrote: > >> On Feb 20, 2018, at 10:41 AM, Ali Berk Kahraman wrote: >> >> When I call --download-blaslapack, what does it do exactly? Where does it install the library? Does it touch anything anything else (such as updating versions of mpicc) ? My concern is that if I call download-blaslapack I will "change" some stuff in the /usr/bin directory that might disable some other program, package installed on the computer. > It puts everything in your PETSC_DIR and does not, nor could not change anything in /usr > > Barry > >> >> On 20-02-2018 19:34, Satish Balay wrote: >>> On Tue, 20 Feb 2018, Ali Berk Kahraman wrote: >>> >>>> Hello All, >>>> >>>> I have access to a common computer in my school, and I want to use petsc on >>>> it. The problem is that I do not have root access, and neither do I want it. >>>> The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc >>>> somehow without having any BLAS commands? If not, can I install BLAS somehow >>>> only on my own folder (/home/myfolder) without touching anything inside /usr/ >>>> folder? >>> You don't need root access to install/use PETSc. >>> >>> And you can ask petsc configure to install any required or missing packages. >>> >>> ./configure CC=gcc FC=gfortran CXX=g++ --download-blaslapack --download-mpich >>> make >>> >>> If you wish to install PETSc with a preinstalled mpi - you can do: >>> >>> ./configure CC=mpicc FC=mpif90 CXX=mpicxx --download-blaslapack >>> make >>> >>> Satish From balay at mcs.anl.gov Tue Feb 20 10:48:59 2018 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 20 Feb 2018 10:48:59 -0600 Subject: [petsc-users] Installation without BLAS/LAPACK In-Reply-To: References: <9807efca-0dae-118e-af80-299812d9f306@yahoo.com> Message-ID: [sorry - its --download-fblaslapack] What it does is: 1. gets the source tarball specified in config/BuildSystem/config/packages/fblaslapack.py i.e http://ftp.mcs.anl.gov/pub/petsc/externalpackages/fblaslapack-3.4.2.tar.gz 2. compiles it 3. installs it in PETSC_DIR/PETSC_ARCH/lib 4. Then configures PETSc to use this library Note: PETSc libraries also get installed in PETSC_DIR/PETSC_ARCH/lib Also check installation instructions at http://www.mcs.anl.gov/petsc/documentation/installation.html Satish On Tue, 20 Feb 2018, Ali Berk Kahraman wrote: > When I call --download-blaslapack, what does it do exactly? Where does it > install the library? Does it touch anything anything else (such as updating > versions of mpicc) ? My concern is that if I call download-blaslapack I will > "change" some stuff in the /usr/bin directory that might disable some other > program, package installed on the computer. 
> > > On 20-02-2018 19:34, Satish Balay wrote: > > On Tue, 20 Feb 2018, Ali Berk Kahraman wrote: > > > >> Hello All, > >> > >> I have access to a common computer in my school, and I want to use petsc on > >> it. The problem is that I do not have root access, and neither do I want > >> it. > >> The machine has OpenMPI installed in it, but no BLAS. Can I configure petsc > >> somehow without having any BLAS commands? If not, can I install BLAS > >> somehow > >> only on my own folder (/home/myfolder) without touching anything inside > >> /usr/ > >> folder? > > You don't need root access to install/use PETSc. > > > > And you can ask petsc configure to install any required or missing packages. > > > > ./configure CC=gcc FC=gfortran CXX=g++ --download-blaslapack > > --download-mpich > > make > > > > If you wish to install PETSc with a preinstalled mpi - you can do: > > > > ./configure CC=mpicc FC=mpif90 CXX=mpicxx --download-blaslapack > > make > > > > Satish > > > From danyang.su at gmail.com Tue Feb 20 11:30:15 2018 From: danyang.su at gmail.com (Danyang Su) Date: Tue, 20 Feb 2018 09:30:15 -0800 Subject: [petsc-users] Question on DMPlexCreateSection for Fortran Message-ID: <72eb7a04-a348-5637-c051-d2d35adf2b8d@gmail.com> Hi All, I tried to compile the DMPlexCreateSection code but got error information as shown below. Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS, then the code can be compiled but run into Segmentation Violation error in DMPlexCreateSection. dmda_flow%da is distributed dm object that works fine. The fortran example I follow is http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90. What parameters should I use if passing null to bcField, bcComps, bcPoints and perm. PetscErrorCode DMPlexCreateSection (DM dm,PetscInt dim,PetscInt numFields,constPetscInt numComp[],constPetscInt numDof[],PetscInt numBC,constPetscInt bcField[], constIS bcComps[], constIS bcPoints[],IS perm,PetscSection *section) #include #include #include ... #ifdef USG ??????? numFields = 1 ??????? numComp(1) = 1 ??????? pNumComp => numComp ??????? do i = 1, numFields*(dmda_flow%dim+1) ????????? numDof(i) = 0 ??????? end do ??????? numDof(0*(dmda_flow%dim+1)+1) = dmda_flow%dof ??????? pNumDof => numDof ??????? numBC = 0 ??????? call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim,????????? & numFields,pNumComp,pNumDof,?????????????????????????? & numBC,PETSC_NULL_INTEGER,?????????????????????????????? & PETSC_NULL_IS,PETSC_NULL_IS, &???????????? !Error here ???????????????????????????????? PETSC_NULL_IS,section,ierr) ??????? CHKERRQ(ierr) ??????? call PetscSectionSetFieldName(section,0,'flow',ierr) ??????? CHKERRQ(ierr) ??????? call DMSetDefaultSection(dmda_flow%da,section,ierr) ??????? CHKERRQ(ierr) ??????? call PetscSectionDestroy(section,ierr) ??????? CHKERRQ(ierr) #endif Thanks, Danyang -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Feb 20 11:52:02 2018 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 20 Feb 2018 12:52:02 -0500 Subject: [petsc-users] Question on DMPlexCreateSection for Fortran In-Reply-To: <72eb7a04-a348-5637-c051-d2d35adf2b8d@gmail.com> References: <72eb7a04-a348-5637-c051-d2d35adf2b8d@gmail.com> Message-ID: On Tue, Feb 20, 2018 at 12:30 PM, Danyang Su wrote: > Hi All, > > I tried to compile the DMPlexCreateSection code but got error information > as shown below. 
> > Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type > > I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS, then the code > can be compiled but run into Segmentation Violation error in > DMPlexCreateSection. > >From the webpage http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexCreateSection.html The F90 version is DMPlexCreateSectionF90. Doing this with F77 arrays would have been too painful. Thanks, Matt > dmda_flow%da is distributed dm object that works fine. > > The fortran example I follow is http://www.mcs.anl.gov/petsc/ > petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90. > > What parameters should I use if passing null to bcField, bcComps, bcPoints > and perm. > > PetscErrorCode DMPlexCreateSection (DM dm, PetscInt dim, PetscInt numFields,const PetscInt numComp[],const PetscInt numDof[], PetscInt numBC,const PetscInt bcField[], > const IS bcComps[], const IS bcPoints[], IS perm, PetscSection *section) > > > #include > #include > #include > > ... > > #ifdef USG > numFields = 1 > numComp(1) = 1 > pNumComp => numComp > > do i = 1, numFields*(dmda_flow%dim+1) > numDof(i) = 0 > end do > numDof(0*(dmda_flow%dim+1)+1) = dmda_flow%dof > pNumDof => numDof > > numBC = 0 > > call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim, & > numFields,pNumComp,pNumDof, > & > numBC,PETSC_NULL_INTEGER, > & > PETSC_NULL_IS,PETSC_NULL_IS, > & !Error here > PETSC_NULL_IS,section,ierr) > CHKERRQ(ierr) > > call PetscSectionSetFieldName(section,0,'flow',ierr) > CHKERRQ(ierr) > > call DMSetDefaultSection(dmda_flow%da,section,ierr) > CHKERRQ(ierr) > > call PetscSectionDestroy(section,ierr) > CHKERRQ(ierr) > #endif > > Thanks, > > Danyang > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From danyang.su at gmail.com Tue Feb 20 12:07:27 2018 From: danyang.su at gmail.com (Danyang Su) Date: Tue, 20 Feb 2018 10:07:27 -0800 Subject: [petsc-users] Question on DMPlexCreateSection for Fortran In-Reply-To: References: <72eb7a04-a348-5637-c051-d2d35adf2b8d@gmail.com> Message-ID: On 18-02-20 09:52 AM, Matthew Knepley wrote: > On Tue, Feb 20, 2018 at 12:30 PM, Danyang Su > wrote: > > Hi All, > > I tried to compile the DMPlexCreateSection code but got error > information as shown below. > > Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type > > I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS, then > the code can be compiled but run into Segmentation Violation error > in DMPlexCreateSection. > > From the webpage > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexCreateSection.html > > > The F90 version is?DMPlexCreateSectionF90. Doing this with F77 arrays > would have been too painful. Hi Matt, Sorry, I still cannot compile the code if use DMPlexCreateSectionF90 instead of DMPlexCreateSection. Would you please tell me in more details? undefined reference to `dmplexcreatesectionf90_' then I #include , but this throws more error during compilation. ??? Included at /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6: ??? Included at ../../solver/solver_ddmethod.F90:62: ????????? PETSCSECTION_HIDE section ????????? 1 Error: Unclassifiable statement at (1) /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:167.10: ??? 
Included at /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6: ??? Included at ../../solver/solver_ddmethod.F90:62: ????????? PETSCSECTION_HIDE section ????????? 1 Error: Unclassifiable statement at (1) /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:179.10: ??? Included at /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6: ??? Included at ../../solver/solver_ddmethod.F90:62: > > ? Thanks, > > ? ? ?Matt > > dmda_flow%da is distributed dm object that works fine. > > The fortran example I follow is > http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90 > . > > > What parameters should I use if passing null to bcField, bcComps, > bcPoints and perm. > > PetscErrorCode > DMPlexCreateSection > (DM > dm,PetscInt > dim,PetscInt > numFields,constPetscInt > numComp[],constPetscInt > numDof[],PetscInt > numBC,constPetscInt > bcField[], > constIS > bcComps[], constIS > bcPoints[],IS > perm,PetscSection > *section) > > #include > #include > #include > > ... > > #ifdef USG > ??????? numFields = 1 > ??????? numComp(1) = 1 > ??????? pNumComp => numComp > > ??????? do i = 1, numFields*(dmda_flow%dim+1) > ????????? numDof(i) = 0 > ??????? end do > ??????? numDof(0*(dmda_flow%dim+1)+1) = dmda_flow%dof > ??????? pNumDof => numDof > > ??????? numBC = 0 > > ??????? call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim, & > numFields,pNumComp,pNumDof, & > numBC,PETSC_NULL_INTEGER, & > PETSC_NULL_IS,PETSC_NULL_IS, &???????????? !Error here > PETSC_NULL_IS,section,ierr) > ??????? CHKERRQ(ierr) > > ??????? call PetscSectionSetFieldName(section,0,'flow',ierr) > ??????? CHKERRQ(ierr) > > ??????? call DMSetDefaultSection(dmda_flow%da,section,ierr) > ??????? CHKERRQ(ierr) > > ??????? call PetscSectionDestroy(section,ierr) > ??????? CHKERRQ(ierr) > #endif > > Thanks, > > Danyang > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Tue Feb 20 18:54:57 2018 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 21 Feb 2018 08:54:57 +0800 Subject: [petsc-users] Compiling with PETSc 64-bit indices Message-ID: Hi, When I run my CFD code with a grid size of 1119x1119x499 ( total grid size =??? 624828339 ), I got the error saying I need to compile PETSc with 64-bit indices. So I tried to compile PETSc again and then compile my CFD code with the newly compiled PETSc. 
However, now I got segmentation error: rm: cannot remove `log': No such file or directory [409]PETSC ERROR: ------------------------------------------------------------------------ [409]PETSC ERROR: [535]PETSC ERROR: [410]PETSC ERROR: ------------------------------------------------------------------------ [410]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [410]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [410]PETSC ERROR: [536]PETSC ERROR: ------------------------------------------------------------------------ [536]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [536]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [536]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [536]PETSC ERROR: likely location of problem given in stack below [536]PETSC ERROR: ---------------------? Stack Frames ------------------------------------ [536]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [536]PETSC ERROR:?????? INSTEAD the line number of the start of the function [536]PETSC ERROR:?????? is given. [536]PETSC ERROR: [536] DMDACheckOwnershipRanges_Private line 581 /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind [410]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [410]PETSC ERROR: likely location of problem given in stack below [410]PETSC ERROR: ---------------------? Stack Frames ------------------------------------ [410]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [897]PETSC ERROR: [536] DMDASetOwnershipRanges line 613 /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c [536]PETSC ERROR: [536] DMDACreate3d line 1434 /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da3.c [536]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- The CFD code worked previously but increasing the problem size results in segmentation error. It seems to be related to DMDACreate3d and DMDASetOwnershipRanges. Any idea where the problem lies? Besides, I want to know when and why do I have to use PETSc with 64-bit indices? Also, can I use the 64-bit indices version with smaller sized problems? And is there a speed difference between using the 32-bit and 64-bit indices ver? -- Thank you very much. Yours sincerely, ================================================ TAY Wee-Beng (Zheng Weiming) ??? Personal research webpage: http://tayweebeng.wixsite.com/website Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA linkedin: www.linkedin.com/in/tay-weebeng ================================================ From knepley at gmail.com Tue Feb 20 19:00:39 2018 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 20 Feb 2018 20:00:39 -0500 Subject: [petsc-users] Compiling with PETSc 64-bit indices In-Reply-To: References: Message-ID: On Tue, Feb 20, 2018 at 7:54 PM, TAY wee-beng wrote: > Hi, > > When I run my CFD code with a grid size of 1119x1119x499 ( total grid size > = 624828339 ), I got the error saying I need to compile PETSc with > 64-bit indices. 
> > So I tried to compile PETSc again and then compile my CFD code with the > newly compiled PETSc. However, now I got segmentation error: > > rm: cannot remove `log': No such file or directory > [409]PETSC ERROR: ------------------------------ > ------------------------------------------ > [409]PETSC ERROR: [535]PETSC ERROR: [410]PETSC ERROR: > ------------------------------------------------------------------------ > [410]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [410]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [410]PETSC ERROR: [536]PETSC ERROR: ------------------------------ > ------------------------------------------ > [536]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [536]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/d > ocumentation/faq.html#valgrind > [536]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac > OS X to find memory corruption errors > [536]PETSC ERROR: likely location of problem given in stack below > [536]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [536]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [536]PETSC ERROR: INSTEAD the line number of the start of the > function > [536]PETSC ERROR: is given. > [536]PETSC ERROR: [536] DMDACheckOwnershipRanges_Private line 581 > /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c > [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/d > ocumentation/faq.html#valgrind > [410]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac > OS X to find memory corruption errors > [410]PETSC ERROR: likely location of problem given in stack below > [410]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [410]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [897]PETSC ERROR: [536] DMDASetOwnershipRanges line 613 > /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c > [536]PETSC ERROR: [536] DMDACreate3d line 1434 > /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da3.c > [536]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > > The CFD code worked previously but increasing the problem size results in > segmentation error. It seems to be related to DMDACreate3d and > DMDASetOwnershipRanges. Any idea where the problem lies? > > Besides, I want to know when and why do I have to use PETSc with 64-bit > indices? > 1) A 32-bit integer can hold numbers up to 2^32 = 4.2e9, so if you have a 3D velocity, pressure, and energy, you already have 3e9 unknowns, before you even start to count nonzero entries in the matrix. 64-bit integers allow you to handle these big sizes. > Also, can I use the 64-bit indices version with smaller sized problems? > 2) Yes > And is there a speed difference between using the 32-bit and 64-bit > indices ver? 3) I have seen no evidence of this 4) My guess is that you have defines regular integers in your code and passed them to PETSc, rather than using PetscInt as the type. Thanks, Matt > > -- > Thank you very much. > > Yours sincerely, > > ================================================ > TAY Wee-Beng (Zheng Weiming) ??? 
> Personal research webpage: http://tayweebeng.wixsite.com/website > Youtube research showcase: https://www.youtube.com/channe > l/UC72ZHtvQNMpNs2uRTSToiLA > linkedin: www.linkedin.com/in/tay-weebeng > ================================================ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Tue Feb 20 19:08:22 2018 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 21 Feb 2018 09:08:22 +0800 Subject: [petsc-users] Compiling with PETSc 64-bit indices In-Reply-To: References: Message-ID: <0918f242-f7d2-15f1-6d9f-9887ecd7ef0f@gmail.com> On 21/2/2018 9:00 AM, Matthew Knepley wrote: > On Tue, Feb 20, 2018 at 7:54 PM, TAY wee-beng > wrote: > > Hi, > > When I run my CFD code with a grid size of 1119x1119x499 ( total > grid size =??? 624828339 ), I got the error saying I need to > compile PETSc with 64-bit indices. > > So I tried to compile PETSc again and then compile my CFD code > with the newly compiled PETSc. However, now I got segmentation error: > > rm: cannot remove `log': No such file or directory > [409]PETSC ERROR: > ------------------------------------------------------------------------ > [409]PETSC ERROR: [535]PETSC ERROR: [410]PETSC ERROR: > ------------------------------------------------------------------------ > [410]PETSC ERROR: Caught signal number 11 SEGV: Segmentation > Violation, probably memory access out of range > [410]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [410]PETSC ERROR: [536]PETSC ERROR: > ------------------------------------------------------------------------ > [536]PETSC ERROR: Caught signal number 11 SEGV: Segmentation > Violation, probably memory access out of range > [536]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > [536]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > [536]PETSC ERROR: or try http://valgrind.org on GNU/linux and > Apple Mac OS X to find memory corruption errors > [536]PETSC ERROR: likely location of problem given in stack below > [536]PETSC ERROR: ---------------------? Stack Frames > ------------------------------------ > [536]PETSC ERROR: Note: The EXACT line numbers in the stack are > not available, > [536]PETSC ERROR:?????? INSTEAD the line number of the start of > the function > [536]PETSC ERROR:?????? is given. > [536]PETSC ERROR: [536] DMDACheckOwnershipRanges_Private line 581 > /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c > [536]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind > > [410]PETSC ERROR: or try http://valgrind.org on GNU/linux and > Apple Mac OS X to find memory corruption errors > [410]PETSC ERROR: likely location of problem given in stack below > [410]PETSC ERROR: ---------------------? 
Stack Frames > ------------------------------------ > [410]PETSC ERROR: Note: The EXACT line numbers in the stack are > not available, > [897]PETSC ERROR: [536] DMDASetOwnershipRanges line 613 > /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c > [536]PETSC ERROR: [536] DMDACreate3d line 1434 > /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da3.c > [536]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > > The CFD code worked previously but increasing the problem size > results in segmentation error. It seems to be related to > DMDACreate3d and DMDASetOwnershipRanges. Any idea where the > problem lies? > > Besides, I want to know when and why do I have to use PETSc with > 64-bit indices? > > > 1) A 32-bit integer can hold numbers up to 2^32 = 4.2e9, so if you > have a 3D velocity, pressure, and energy, you already have 3e9 unknowns, > ? ? before you even start to count nonzero entries in the matrix. > 64-bit integers allow you to handle these big sizes. > > Also, can I use the 64-bit indices version with smaller sized > problems? > > > 2) Yes > > And is there a speed difference between using the 32-bit and > 64-bit indices ver? > > > 3) I have seen no evidence of this > > 4) My guess is that you have defines regular integers in your code and > passed them to PETSc, rather than using PetscInt as the type. Oh that seems probable. So I am still using integer(4) when it should be integer(8) for some values, is that so? If I use PetscInt, is it the same as integer(8)? Or does it depend on the actual number? I wonder if I replace all my integer to PetscInt, will there be a large increase in memory usage, because all integer(4) now becomes integer(8)? Thanks. > > ? Thanks, > > ? ? ?Matt > > > -- > Thank you very much. > > Yours sincerely, > > ================================================ > TAY Wee-Beng (Zheng Weiming) ??? > Personal research webpage: http://tayweebeng.wixsite.com/website > > Youtube research showcase: > https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA > > linkedin: www.linkedin.com/in/tay-weebeng > > ================================================ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Tue Feb 20 19:12:30 2018 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 20 Feb 2018 20:12:30 -0500 Subject: [petsc-users] Compiling with PETSc 64-bit indices In-Reply-To: <0918f242-f7d2-15f1-6d9f-9887ecd7ef0f@gmail.com> References: <0918f242-f7d2-15f1-6d9f-9887ecd7ef0f@gmail.com> Message-ID: On Tue, Feb 20, 2018 at 8:08 PM, TAY wee-beng wrote: > > On 21/2/2018 9:00 AM, Matthew Knepley wrote: > > On Tue, Feb 20, 2018 at 7:54 PM, TAY wee-beng wrote: > >> Hi, >> >> When I run my CFD code with a grid size of 1119x1119x499 ( total grid >> size = 624828339 ), I got the error saying I need to compile PETSc with >> 64-bit indices. >> >> So I tried to compile PETSc again and then compile my CFD code with the >> newly compiled PETSc. 
However, now I got segmentation error: >> >> rm: cannot remove `log': No such file or directory >> [409]PETSC ERROR: ------------------------------ >> ------------------------------------------ >> [409]PETSC ERROR: [535]PETSC ERROR: [410]PETSC ERROR: >> ------------------------------------------------------------------------ >> [410]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range >> [410]PETSC ERROR: Try option -start_in_debugger or >> -on_error_attach_debugger >> [410]PETSC ERROR: [536]PETSC ERROR: ------------------------------ >> ------------------------------------------ >> [536]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >> probably memory access out of range >> [536]PETSC ERROR: Try option -start_in_debugger or >> -on_error_attach_debugger >> [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/d >> ocumentation/faq.html#valgrind >> [536]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac >> OS X to find memory corruption errors >> [536]PETSC ERROR: likely location of problem given in stack below >> [536]PETSC ERROR: --------------------- Stack Frames >> ------------------------------------ >> [536]PETSC ERROR: Note: The EXACT line numbers in the stack are not >> available, >> [536]PETSC ERROR: INSTEAD the line number of the start of the >> function >> [536]PETSC ERROR: is given. >> [536]PETSC ERROR: [536] DMDACheckOwnershipRanges_Private line 581 >> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c >> [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/d >> ocumentation/faq.html#valgrind >> [410]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac >> OS X to find memory corruption errors >> [410]PETSC ERROR: likely location of problem given in stack below >> [410]PETSC ERROR: --------------------- Stack Frames >> ------------------------------------ >> [410]PETSC ERROR: Note: The EXACT line numbers in the stack are not >> available, >> [897]PETSC ERROR: [536] DMDASetOwnershipRanges line 613 >> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c >> [536]PETSC ERROR: [536] DMDACreate3d line 1434 >> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da3.c >> [536]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> >> The CFD code worked previously but increasing the problem size results in >> segmentation error. It seems to be related to DMDACreate3d and >> DMDASetOwnershipRanges. Any idea where the problem lies? >> >> Besides, I want to know when and why do I have to use PETSc with 64-bit >> indices? >> > > 1) A 32-bit integer can hold numbers up to 2^32 = 4.2e9, so if you have a > 3D velocity, pressure, and energy, you already have 3e9 unknowns, > before you even start to count nonzero entries in the matrix. 64-bit > integers allow you to handle these big sizes. > > >> Also, can I use the 64-bit indices version with smaller sized problems? >> > > 2) Yes > > >> And is there a speed difference between using the 32-bit and 64-bit >> indices ver? > > > 3) I have seen no evidence of this > > 4) My guess is that you have defines regular integers in your code and > passed them to PETSc, rather than using PetscInt as the type. > > Oh that seems probable. So I am still using integer(4) when it should be > integer(8) for some values, is that so? If I use PetscInt, is it the same > as integer(8)? Or does it depend on the actual number? 
> PetscInt will be integer(4) if you configure with 32-bit ints, and integer(8) if you configure with 64-bit ints. If you use it consistently, you can avoid problems with matching the PETSc API. I wonder if I replace all my integer to PetscInt, will there be a large > increase in memory usage, because all integer(4) now becomes integer(8)? > Only if you have large integer storage. Most codes do not. Thanks, Matt > Thanks. > > > Thanks, > > Matt > > >> >> -- >> Thank you very much. >> >> Yours sincerely, >> >> ================================================ >> TAY Wee-Beng (Zheng Weiming) ??? >> Personal research webpage: http://tayweebeng.wixsite.com/website >> Youtube research showcase: https://www.youtube.com/channe >> l/UC72ZHtvQNMpNs2uRTSToiLA >> linkedin: www.linkedin.com/in/tay-weebeng >> ================================================ >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Tue Feb 20 20:40:08 2018 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 21 Feb 2018 10:40:08 +0800 Subject: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3 In-Reply-To: <52FBAFFC-573A-42A3-9E09-3F2532F27A62@dsic.upv.es> References: <34377631-9561-4B61-A350-6F597356C30F@anl.gov> <52FBAFFC-573A-42A3-9E09-3F2532F27A62@dsic.upv.es> Message-ID: <7285cdbc-af1b-f116-d11e-72b9e7aba138@gmail.com> Hi, Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to debug step by step. I got into problem when calling: call DMCreateGlobalVector(da_u,u_global,ierr) The error is: [0]PETSC ERROR: --------------------- Error Message ---------------------------- ---------------------------------- [0]PETSC ERROR: Null argument, when expecting valid pointer [0]PETSC ERROR: Null Object: Parameter # 2 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou ble shooting. [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 10:20:20 2018 [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x.... But all I changed is from: module global_data #include "petsc/finclude/petsc.h" use petsc use kdtree2_module implicit none save !grid variables integer :: size_x,s.... ... to module global_data use kdtree2_module implicit none save #include "petsc/finclude/petsc.h90" !grid variables integer :: size_x,s... ... da_u, u_global were declared thru: DM? da_u,da_v,... DM? da_cu_types ... Vec u_local,u_global,v_local... So what could be the problem? Thank you very much. Yours sincerely, ================================================ TAY Wee-Beng (Zheng Weiming) ??? 
Personal research webpage: http://tayweebeng.wixsite.com/website Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA linkedin: www.linkedin.com/in/tay-weebeng ================================================ On 20/2/2018 10:46 PM, Jose E. Roman wrote: > Probably the first error is produced by using a variable (mpi_comm) with the same name as an MPI type. > > The second error I guess is due to variable tvec, since a Fortran type tVec is now being defined in src/vec/f90-mod/petscvec.h > > Jose > > >> El 20 feb 2018, a las 15:35, Smith, Barry F. escribi?: >> >> >> Please run a clean compile of everything and cut and paste all the output. This will make it much easier to debug than trying to understand your snippets of what is going wrong. >> >>> On Feb 20, 2018, at 1:56 AM, TAY Wee Beng wrote: >>> >>> Hi, >>> >>> I was previously using PETSc 3.7.6 on different clusters with both Intel >>> Fortran and GNU Fortran. After upgrading, I met some problems when >>> trying to compile: >>> >>> On Intel Fortran: >>> >>> Previously, I was using: >>> >>> #include "petsc/finclude/petsc.h90" >>> >>> in *.F90 when requires the use of PETSc >>> >>> I read in the change log that h90 is no longer there and so I replaced >>> with #include "petsc/finclude/petsc.h" >>> >>> It worked. But I also have some *.F90 which do not use PETSc. However, >>> they use some modules which uses PETSc. >>> >>> Now I can't compile them. The error is : >>> >>> math_routine.f90(3): error #7002: Error in opening the compiled module >>> file. Check INCLUDE paths. [PETSC] >>> use mpi_subroutines >>> >>> mpi_subroutines is a module which uses PETSc, and it compiled w/o problem. >>> >>> The solution is that I have to compile e.g. math_routine.F90 as if they >>> use PETSc, by including PETSc include and lib files. >>> >>> May I know why this is so? It was not necessary before. >>> >>> Anyway, it managed to compile until it reached hypre.F90. >>> >>> Previously, due to some bugs, I have to compile hypre with the -r8 >>> option. Also, I have to use: >>> >>> integer(8) mpi_comm >>> >>> mpi_comm = MPI_COMM_WORLD >>> >>> to make my codes work with HYPRE. >>> >>> But now, compiling gives the error: >>> >>> hypre.F90(11): error #6401: The attributes of this name conflict with >>> those made accessible by a USE statement. [MPI_COMM] >>> integer(8) mpi_comm >>> --------------------------------------^ >>> hypre.F90(84): error #6478: A type-name must not be used as a >>> variable. [MPI_COMM] >>> mpi_comm = MPI_COMM_WORLD >>> ----^ >>> hypre.F90(84): error #6303: The assignment operation or the binary >>> expression operation is invalid for the data types of the two >>> operands. [1140850688] >>> mpi_comm = MPI_COMM_WORLD >>> ---------------^ >>> hypre.F90(100): error #6478: A type-name must not be used as a >>> variable. [MPI_COMM] >>> call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr) >>> ... >>> >>> What's actually happening? Why can't I compile now? >>> >>> On GNU gfortran: >>> >>> I tried to use similar tactics as above here. However, when compiling >>> math_routine.F90, I got the error: >>> >>> math_routine.F90:1333:21: >>> >>> call subb(orig,vert1,tvec) >>> 1 >>> Error: Invalid procedure argument at (1) >>> math_routine.F90:1339:18: >>> >>> qvec = cross_pdt2(tvec,edge1) >>> 1 >>> Error: Invalid procedure argument at (1) >>> math_routine.F90:1345:21: >>> >>> uu = dot_product(tvec,pvec) >>> 1 >>> Error: ?vector_a? argument of ?dot_product? 
intrinsic at (1) must be >>> numeric or LOGICAL >>> math_routine.F90:1371:21: >>> >>> uu = dot_product(tvec,pvec) >>> >>> These errors were not present before. My variables are mostly vectors: >>> >>> real(8), intent(in) :: >>> orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3) >>> >>> real(8) :: uu,vv,dir(3) >>> >>> real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t >>> >>> I wonder what happened? >>> >>> Please advice. >>> >>> >>> -- >>> Thank you very much. >>> >>> Yours sincerely, >>> >>> ================================================ >>> TAY Wee-Beng ??? >>> Research Scientist >>> Experimental AeroScience Group >>> Temasek Laboratories >>> National University of Singapore >>> T-Lab Building >>> 5A, Engineering Drive 1, #02-02 >>> Singapore 117411 >>> Phone: +65 65167330 >>> E-mail: tsltaywb at nus.edu.sg >>> http://www.temasek-labs.nus.edu.sg/program/program_aeroexperimental_tsltaywb.php >>> Personal research webpage: http://tayweebeng.wixsite.com/website >>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >>> linkedin: www.linkedin.com/in/tay-weebeng >>> ================================================ >>> >>> >>> ________________________________ >>> >>> Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you. From bsmith at mcs.anl.gov Tue Feb 20 20:47:23 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 21 Feb 2018 02:47:23 +0000 Subject: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3 In-Reply-To: <7285cdbc-af1b-f116-d11e-72b9e7aba138@gmail.com> References: <34377631-9561-4B61-A350-6F597356C30F@anl.gov> <52FBAFFC-573A-42A3-9E09-3F2532F27A62@dsic.upv.es> <7285cdbc-af1b-f116-d11e-72b9e7aba138@gmail.com> Message-ID: Try setting u_global = tVec(1) immediately before the call to DMCreateGlobalVector() > On Feb 20, 2018, at 6:40 PM, TAY wee-beng wrote: > > Hi, > > Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to debug step by step. I got into problem when calling: > > call DMCreateGlobalVector(da_u,u_global,ierr) > > The error is: > > [0]PETSC ERROR: --------------------- Error Message ---------------------------- > ---------------------------------- > [0]PETSC ERROR: Null argument, when expecting valid pointer > [0]PETSC ERROR: Null Object: Parameter # 2 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou > ble shooting. > [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 > [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. > 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 10:20:20 2018 > [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo > rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri > ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x.... > > But all I changed is from: > > module global_data > #include "petsc/finclude/petsc.h" > use petsc > use kdtree2_module > implicit none > save > !grid variables > > integer :: size_x,s.... > > ... > > to > > module global_data > use kdtree2_module > implicit none > save > #include "petsc/finclude/petsc.h90" > !grid variables > integer :: size_x,s... > > ... > > da_u, u_global were declared thru: > > DM da_u,da_v,... > DM da_cu_types ... 
> Vec u_local,u_global,v_local... > > So what could be the problem? > > > Thank you very much. > > Yours sincerely, > > ================================================ > TAY Wee-Beng (Zheng Weiming) ??? > Personal research webpage: http://tayweebeng.wixsite.com/website > Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA > linkedin: www.linkedin.com/in/tay-weebeng > ================================================ > > On 20/2/2018 10:46 PM, Jose E. Roman wrote: >> Probably the first error is produced by using a variable (mpi_comm) with the same name as an MPI type. >> >> The second error I guess is due to variable tvec, since a Fortran type tVec is now being defined in src/vec/f90-mod/petscvec.h >> >> Jose >> >> >>> El 20 feb 2018, a las 15:35, Smith, Barry F. escribi?: >>> >>> >>> Please run a clean compile of everything and cut and paste all the output. This will make it much easier to debug than trying to understand your snippets of what is going wrong. >>> >>>> On Feb 20, 2018, at 1:56 AM, TAY Wee Beng wrote: >>>> >>>> Hi, >>>> >>>> I was previously using PETSc 3.7.6 on different clusters with both Intel >>>> Fortran and GNU Fortran. After upgrading, I met some problems when >>>> trying to compile: >>>> >>>> On Intel Fortran: >>>> >>>> Previously, I was using: >>>> >>>> #include "petsc/finclude/petsc.h90" >>>> >>>> in *.F90 when requires the use of PETSc >>>> >>>> I read in the change log that h90 is no longer there and so I replaced >>>> with #include "petsc/finclude/petsc.h" >>>> >>>> It worked. But I also have some *.F90 which do not use PETSc. However, >>>> they use some modules which uses PETSc. >>>> >>>> Now I can't compile them. The error is : >>>> >>>> math_routine.f90(3): error #7002: Error in opening the compiled module >>>> file. Check INCLUDE paths. [PETSC] >>>> use mpi_subroutines >>>> >>>> mpi_subroutines is a module which uses PETSc, and it compiled w/o problem. >>>> >>>> The solution is that I have to compile e.g. math_routine.F90 as if they >>>> use PETSc, by including PETSc include and lib files. >>>> >>>> May I know why this is so? It was not necessary before. >>>> >>>> Anyway, it managed to compile until it reached hypre.F90. >>>> >>>> Previously, due to some bugs, I have to compile hypre with the -r8 >>>> option. Also, I have to use: >>>> >>>> integer(8) mpi_comm >>>> >>>> mpi_comm = MPI_COMM_WORLD >>>> >>>> to make my codes work with HYPRE. >>>> >>>> But now, compiling gives the error: >>>> >>>> hypre.F90(11): error #6401: The attributes of this name conflict with >>>> those made accessible by a USE statement. [MPI_COMM] >>>> integer(8) mpi_comm >>>> --------------------------------------^ >>>> hypre.F90(84): error #6478: A type-name must not be used as a >>>> variable. [MPI_COMM] >>>> mpi_comm = MPI_COMM_WORLD >>>> ----^ >>>> hypre.F90(84): error #6303: The assignment operation or the binary >>>> expression operation is invalid for the data types of the two >>>> operands. [1140850688] >>>> mpi_comm = MPI_COMM_WORLD >>>> ---------------^ >>>> hypre.F90(100): error #6478: A type-name must not be used as a >>>> variable. [MPI_COMM] >>>> call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr) >>>> ... >>>> >>>> What's actually happening? Why can't I compile now? >>>> >>>> On GNU gfortran: >>>> >>>> I tried to use similar tactics as above here. 
However, when compiling >>>> math_routine.F90, I got the error: >>>> >>>> math_routine.F90:1333:21: >>>> >>>> call subb(orig,vert1,tvec) >>>> 1 >>>> Error: Invalid procedure argument at (1) >>>> math_routine.F90:1339:18: >>>> >>>> qvec = cross_pdt2(tvec,edge1) >>>> 1 >>>> Error: Invalid procedure argument at (1) >>>> math_routine.F90:1345:21: >>>> >>>> uu = dot_product(tvec,pvec) >>>> 1 >>>> Error: ?vector_a? argument of ?dot_product? intrinsic at (1) must be >>>> numeric or LOGICAL >>>> math_routine.F90:1371:21: >>>> >>>> uu = dot_product(tvec,pvec) >>>> >>>> These errors were not present before. My variables are mostly vectors: >>>> >>>> real(8), intent(in) :: >>>> orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3) >>>> >>>> real(8) :: uu,vv,dir(3) >>>> >>>> real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t >>>> >>>> I wonder what happened? >>>> >>>> Please advice. >>>> >>>> >>>> -- >>>> Thank you very much. >>>> >>>> Yours sincerely, >>>> >>>> ================================================ >>>> TAY Wee-Beng ??? >>>> Research Scientist >>>> Experimental AeroScience Group >>>> Temasek Laboratories >>>> National University of Singapore >>>> T-Lab Building >>>> 5A, Engineering Drive 1, #02-02 >>>> Singapore 117411 >>>> Phone: +65 65167330 >>>> E-mail: tsltaywb at nus.edu.sg >>>> http://www.temasek-labs.nus.edu.sg/program/program_aeroexperimental_tsltaywb.php >>>> Personal research webpage: http://tayweebeng.wixsite.com/website >>>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >>>> linkedin: www.linkedin.com/in/tay-weebeng >>>> ================================================ >>>> >>>> >>>> ________________________________ >>>> >>>> Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you. > From zonexo at gmail.com Tue Feb 20 21:35:14 2018 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 21 Feb 2018 11:35:14 +0800 Subject: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3 In-Reply-To: References: <34377631-9561-4B61-A350-6F597356C30F@anl.gov> <52FBAFFC-573A-42A3-9E09-3F2532F27A62@dsic.upv.es> <7285cdbc-af1b-f116-d11e-72b9e7aba138@gmail.com> Message-ID: On 21/2/2018 10:47 AM, Smith, Barry F. wrote: > Try setting > > u_global = tVec(1) > > immediately before the call to DMCreateGlobalVector() > > Hi, I added the line in but still got the same error below. Btw, my code is organised as: module global_data #include "petsc/finclude/petsc.h" use petsc use kdtree2_module implicit none save ... Vec u_local,u_global ... ... contains subroutine allo_var ... u_global = tVec(1) call DMCreateGlobalVector(da_u,u_global,ierr) ... [0]PETSC ERROR: --------------------- Error Message ---------------------------- ---------------------------------- [0]PETSC ERROR: Null argument, when expecting valid pointer [0]PETSC ERROR: Null Object: Parameter # 2 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou ble shooting. [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. 
3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 11:18:20 2018 [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x 86)/Microsoft SDKs/MPI/Include/x64]" --with-mpi-mpiexec="/cygdrive/c/Program Fil es/Microsoft MPI/Bin/mpiexec.exe" --with-debugging=1 --with-file-create-pause=1 --prefix=/cygdrive/c/wtay/Lib/petsc-3.8.3_win64_msmpi_vs2008 --with-mpi-lib="[/c ygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib,/cygdrive/ c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib]" --with-shared-libra ries=0 [0]PETSC ERROR: #1 VecSetLocalToGlobalMapping() line 78 in C:\Source\PETSC-~2.3\ src\vec\vec\INTERF~1\vector.c [0]PETSC ERROR: #2 DMCreateGlobalVector_DA() line 41 in C:\Source\PETSC-~2.3\src \dm\impls\da\dadist.c [0]PETSC ERROR: #3 DMCreateGlobalVector() line 844 in C:\Source\PETSC-~2.3\src\d m\INTERF~1\dm.c Thanks. >> On Feb 20, 2018, at 6:40 PM, TAY wee-beng wrote: >> >> Hi, >> >> Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to debug step by step. I got into problem when calling: >> >> call DMCreateGlobalVector(da_u,u_global,ierr) >> >> The error is: >> >> [0]PETSC ERROR: --------------------- Error Message ---------------------------- >> ---------------------------------- >> [0]PETSC ERROR: Null argument, when expecting valid pointer >> [0]PETSC ERROR: Null Object: Parameter # 2 >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou >> ble shooting. >> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. >> 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 10:20:20 2018 >> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo >> rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri >> ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x.... >> >> But all I changed is from: >> >> module global_data >> #include "petsc/finclude/petsc.h" >> use petsc >> use kdtree2_module >> implicit none >> save >> !grid variables >> >> integer :: size_x,s.... >> >> ... >> >> to >> >> module global_data >> use kdtree2_module >> implicit none >> save >> #include "petsc/finclude/petsc.h90" >> !grid variables >> integer :: size_x,s... >> >> ... >> >> da_u, u_global were declared thru: >> >> DM da_u,da_v,... >> DM da_cu_types ... >> Vec u_local,u_global,v_local... >> >> So what could be the problem? >> >> >> Thank you very much. >> >> Yours sincerely, >> >> ================================================ >> TAY Wee-Beng (Zheng Weiming) ??? >> Personal research webpage: http://tayweebeng.wixsite.com/website >> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >> linkedin: www.linkedin.com/in/tay-weebeng >> ================================================ >> >> On 20/2/2018 10:46 PM, Jose E. Roman wrote: >>> Probably the first error is produced by using a variable (mpi_comm) with the same name as an MPI type. >>> >>> The second error I guess is due to variable tvec, since a Fortran type tVec is now being defined in src/vec/f90-mod/petscvec.h >>> >>> Jose >>> >>> >>>> El 20 feb 2018, a las 15:35, Smith, Barry F. escribi?: >>>> >>>> >>>> Please run a clean compile of everything and cut and paste all the output. 
This will make it much easier to debug than trying to understand your snippets of what is going wrong. >>>> >>>>> On Feb 20, 2018, at 1:56 AM, TAY Wee Beng wrote: >>>>> >>>>> Hi, >>>>> >>>>> I was previously using PETSc 3.7.6 on different clusters with both Intel >>>>> Fortran and GNU Fortran. After upgrading, I met some problems when >>>>> trying to compile: >>>>> >>>>> On Intel Fortran: >>>>> >>>>> Previously, I was using: >>>>> >>>>> #include "petsc/finclude/petsc.h90" >>>>> >>>>> in *.F90 when requires the use of PETSc >>>>> >>>>> I read in the change log that h90 is no longer there and so I replaced >>>>> with #include "petsc/finclude/petsc.h" >>>>> >>>>> It worked. But I also have some *.F90 which do not use PETSc. However, >>>>> they use some modules which uses PETSc. >>>>> >>>>> Now I can't compile them. The error is : >>>>> >>>>> math_routine.f90(3): error #7002: Error in opening the compiled module >>>>> file. Check INCLUDE paths. [PETSC] >>>>> use mpi_subroutines >>>>> >>>>> mpi_subroutines is a module which uses PETSc, and it compiled w/o problem. >>>>> >>>>> The solution is that I have to compile e.g. math_routine.F90 as if they >>>>> use PETSc, by including PETSc include and lib files. >>>>> >>>>> May I know why this is so? It was not necessary before. >>>>> >>>>> Anyway, it managed to compile until it reached hypre.F90. >>>>> >>>>> Previously, due to some bugs, I have to compile hypre with the -r8 >>>>> option. Also, I have to use: >>>>> >>>>> integer(8) mpi_comm >>>>> >>>>> mpi_comm = MPI_COMM_WORLD >>>>> >>>>> to make my codes work with HYPRE. >>>>> >>>>> But now, compiling gives the error: >>>>> >>>>> hypre.F90(11): error #6401: The attributes of this name conflict with >>>>> those made accessible by a USE statement. [MPI_COMM] >>>>> integer(8) mpi_comm >>>>> --------------------------------------^ >>>>> hypre.F90(84): error #6478: A type-name must not be used as a >>>>> variable. [MPI_COMM] >>>>> mpi_comm = MPI_COMM_WORLD >>>>> ----^ >>>>> hypre.F90(84): error #6303: The assignment operation or the binary >>>>> expression operation is invalid for the data types of the two >>>>> operands. [1140850688] >>>>> mpi_comm = MPI_COMM_WORLD >>>>> ---------------^ >>>>> hypre.F90(100): error #6478: A type-name must not be used as a >>>>> variable. [MPI_COMM] >>>>> call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr) >>>>> ... >>>>> >>>>> What's actually happening? Why can't I compile now? >>>>> >>>>> On GNU gfortran: >>>>> >>>>> I tried to use similar tactics as above here. However, when compiling >>>>> math_routine.F90, I got the error: >>>>> >>>>> math_routine.F90:1333:21: >>>>> >>>>> call subb(orig,vert1,tvec) >>>>> 1 >>>>> Error: Invalid procedure argument at (1) >>>>> math_routine.F90:1339:18: >>>>> >>>>> qvec = cross_pdt2(tvec,edge1) >>>>> 1 >>>>> Error: Invalid procedure argument at (1) >>>>> math_routine.F90:1345:21: >>>>> >>>>> uu = dot_product(tvec,pvec) >>>>> 1 >>>>> Error: ?vector_a? argument of ?dot_product? intrinsic at (1) must be >>>>> numeric or LOGICAL >>>>> math_routine.F90:1371:21: >>>>> >>>>> uu = dot_product(tvec,pvec) >>>>> >>>>> These errors were not present before. My variables are mostly vectors: >>>>> >>>>> real(8), intent(in) :: >>>>> orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3) >>>>> >>>>> real(8) :: uu,vv,dir(3) >>>>> >>>>> real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t >>>>> >>>>> I wonder what happened? >>>>> >>>>> Please advice. >>>>> >>>>> >>>>> -- >>>>> Thank you very much. 
>>>>> >>>>> Yours sincerely, >>>>> >>>>> ================================================ >>>>> TAY Wee-Beng ??? >>>>> Research Scientist >>>>> Experimental AeroScience Group >>>>> Temasek Laboratories >>>>> National University of Singapore >>>>> T-Lab Building >>>>> 5A, Engineering Drive 1, #02-02 >>>>> Singapore 117411 >>>>> Phone: +65 65167330 >>>>> E-mail: tsltaywb at nus.edu.sg >>>>> http://www.temasek-labs.nus.edu.sg/program/program_aeroexperimental_tsltaywb.php >>>>> Personal research webpage: http://tayweebeng.wixsite.com/website >>>>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >>>>> linkedin: www.linkedin.com/in/tay-weebeng >>>>> ================================================ >>>>> >>>>> >>>>> ________________________________ >>>>> >>>>> Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you. From bsmith at mcs.anl.gov Tue Feb 20 21:44:36 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 21 Feb 2018 03:44:36 +0000 Subject: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3 In-Reply-To: References: <34377631-9561-4B61-A350-6F597356C30F@anl.gov> <52FBAFFC-573A-42A3-9E09-3F2532F27A62@dsic.upv.es> <7285cdbc-af1b-f116-d11e-72b9e7aba138@gmail.com> Message-ID: <6DF75825-0B14-4BE9-8CF6-563418F9CDA4@mcs.anl.gov> Did you follow the directions in the changes file for 3.8?
  • Replace calls to DMDACreateXd() with DMDACreateXd(), [DMSetFromOptions()] DMSetUp()
  • DMDACreateXd() no longer can take negative values for dimensions; instead pass positive values and call DMSetFromOptions() immediately after (a minimal sketch of the new call sequence is given below)
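A minimal sketch of the 3.8-style creation sequence, using the da_u/u_global names from the code quoted below; the grid sizes, dof, stencil width and boundary/stencil types here are placeholders, not taken from this thread:

      call DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, &
           DM_BOUNDARY_NONE,DMDA_STENCIL_STAR,8,8,8,PETSC_DECIDE,PETSC_DECIDE, &
           PETSC_DECIDE,1,1,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, &
           PETSC_NULL_INTEGER,da_u,ierr)
      ! optional, but lets -da_* command-line options override the placeholder sizes
      call DMSetFromOptions(da_u,ierr)
      ! required since 3.8: the DMDA is not usable until DMSetUp() has run
      call DMSetUp(da_u,ierr)
      call DMCreateGlobalVector(da_u,u_global,ierr)

In 3.7 the single DMDACreate3d() call performed this setup internally; in 3.8 vectors such as u_global can only be created after DMSetUp() has been called.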
  • I suspect you are not calling DMSetUp() and this is causing the problem. Barry > On Feb 20, 2018, at 7:35 PM, TAY wee-beng wrote: > > > On 21/2/2018 10:47 AM, Smith, Barry F. wrote: >> Try setting >> >> u_global = tVec(1) >> >> immediately before the call to DMCreateGlobalVector() >> >> > Hi, > > I added the line in but still got the same error below. Btw, my code is organised as: > > module global_data > > #include "petsc/finclude/petsc.h" > use petsc > use kdtree2_module > implicit none > save > ... > Vec u_local,u_global ... > ... > contains > > subroutine allo_var > ... > u_global = tVec(1) > call DMCreateGlobalVector(da_u,u_global,ierr) > ... > > > > > [0]PETSC ERROR: --------------------- Error Message ---------------------------- > ---------------------------------- > [0]PETSC ERROR: Null argument, when expecting valid pointer > [0]PETSC ERROR: Null Object: Parameter # 2 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou > ble shooting. > [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 > [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. > 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 11:18:20 2018 > [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo > rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri > ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x > 86)/Microsoft SDKs/MPI/Include/x64]" --with-mpi-mpiexec="/cygdrive/c/Program Fil > es/Microsoft MPI/Bin/mpiexec.exe" --with-debugging=1 --with-file-create-pause=1 > --prefix=/cygdrive/c/wtay/Lib/petsc-3.8.3_win64_msmpi_vs2008 --with-mpi-lib="[/c > ygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib,/cygdrive/ > c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib]" --with-shared-libra > ries=0 > [0]PETSC ERROR: #1 VecSetLocalToGlobalMapping() line 78 in C:\Source\PETSC-~2.3\ > src\vec\vec\INTERF~1\vector.c > [0]PETSC ERROR: #2 DMCreateGlobalVector_DA() line 41 in C:\Source\PETSC-~2.3\src > \dm\impls\da\dadist.c > [0]PETSC ERROR: #3 DMCreateGlobalVector() line 844 in C:\Source\PETSC-~2.3\src\d > m\INTERF~1\dm.c > > Thanks. >>> On Feb 20, 2018, at 6:40 PM, TAY wee-beng wrote: >>> >>> Hi, >>> >>> Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to debug step by step. I got into problem when calling: >>> >>> call DMCreateGlobalVector(da_u,u_global,ierr) >>> >>> The error is: >>> >>> [0]PETSC ERROR: --------------------- Error Message ---------------------------- >>> ---------------------------------- >>> [0]PETSC ERROR: Null argument, when expecting valid pointer >>> [0]PETSC ERROR: Null Object: Parameter # 2 >>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou >>> ble shooting. >>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >>> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. >>> 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 10:20:20 2018 >>> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo >>> rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri >>> ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x.... >>> >>> But all I changed is from: >>> >>> module global_data >>> #include "petsc/finclude/petsc.h" >>> use petsc >>> use kdtree2_module >>> implicit none >>> save >>> !grid variables >>> >>> integer :: size_x,s.... >>> >>> ... 
>>> >>> to >>> >>> module global_data >>> use kdtree2_module >>> implicit none >>> save >>> #include "petsc/finclude/petsc.h90" >>> !grid variables >>> integer :: size_x,s... >>> >>> ... >>> >>> da_u, u_global were declared thru: >>> >>> DM da_u,da_v,... >>> DM da_cu_types ... >>> Vec u_local,u_global,v_local... >>> >>> So what could be the problem? >>> >>> >>> Thank you very much. >>> >>> Yours sincerely, >>> >>> ================================================ >>> TAY Wee-Beng (Zheng Weiming) ??? >>> Personal research webpage: http://tayweebeng.wixsite.com/website >>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >>> linkedin: www.linkedin.com/in/tay-weebeng >>> ================================================ >>> >>> On 20/2/2018 10:46 PM, Jose E. Roman wrote: >>>> Probably the first error is produced by using a variable (mpi_comm) with the same name as an MPI type. >>>> >>>> The second error I guess is due to variable tvec, since a Fortran type tVec is now being defined in src/vec/f90-mod/petscvec.h >>>> >>>> Jose >>>> >>>> >>>>> El 20 feb 2018, a las 15:35, Smith, Barry F. escribi?: >>>>> >>>>> >>>>> Please run a clean compile of everything and cut and paste all the output. This will make it much easier to debug than trying to understand your snippets of what is going wrong. >>>>> >>>>>> On Feb 20, 2018, at 1:56 AM, TAY Wee Beng wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> I was previously using PETSc 3.7.6 on different clusters with both Intel >>>>>> Fortran and GNU Fortran. After upgrading, I met some problems when >>>>>> trying to compile: >>>>>> >>>>>> On Intel Fortran: >>>>>> >>>>>> Previously, I was using: >>>>>> >>>>>> #include "petsc/finclude/petsc.h90" >>>>>> >>>>>> in *.F90 when requires the use of PETSc >>>>>> >>>>>> I read in the change log that h90 is no longer there and so I replaced >>>>>> with #include "petsc/finclude/petsc.h" >>>>>> >>>>>> It worked. But I also have some *.F90 which do not use PETSc. However, >>>>>> they use some modules which uses PETSc. >>>>>> >>>>>> Now I can't compile them. The error is : >>>>>> >>>>>> math_routine.f90(3): error #7002: Error in opening the compiled module >>>>>> file. Check INCLUDE paths. [PETSC] >>>>>> use mpi_subroutines >>>>>> >>>>>> mpi_subroutines is a module which uses PETSc, and it compiled w/o problem. >>>>>> >>>>>> The solution is that I have to compile e.g. math_routine.F90 as if they >>>>>> use PETSc, by including PETSc include and lib files. >>>>>> >>>>>> May I know why this is so? It was not necessary before. >>>>>> >>>>>> Anyway, it managed to compile until it reached hypre.F90. >>>>>> >>>>>> Previously, due to some bugs, I have to compile hypre with the -r8 >>>>>> option. Also, I have to use: >>>>>> >>>>>> integer(8) mpi_comm >>>>>> >>>>>> mpi_comm = MPI_COMM_WORLD >>>>>> >>>>>> to make my codes work with HYPRE. >>>>>> >>>>>> But now, compiling gives the error: >>>>>> >>>>>> hypre.F90(11): error #6401: The attributes of this name conflict with >>>>>> those made accessible by a USE statement. [MPI_COMM] >>>>>> integer(8) mpi_comm >>>>>> --------------------------------------^ >>>>>> hypre.F90(84): error #6478: A type-name must not be used as a >>>>>> variable. [MPI_COMM] >>>>>> mpi_comm = MPI_COMM_WORLD >>>>>> ----^ >>>>>> hypre.F90(84): error #6303: The assignment operation or the binary >>>>>> expression operation is invalid for the data types of the two >>>>>> operands. 
[1140850688] >>>>>> mpi_comm = MPI_COMM_WORLD >>>>>> ---------------^ >>>>>> hypre.F90(100): error #6478: A type-name must not be used as a >>>>>> variable. [MPI_COMM] >>>>>> call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr) >>>>>> ... >>>>>> >>>>>> What's actually happening? Why can't I compile now? >>>>>> >>>>>> On GNU gfortran: >>>>>> >>>>>> I tried to use similar tactics as above here. However, when compiling >>>>>> math_routine.F90, I got the error: >>>>>> >>>>>> math_routine.F90:1333:21: >>>>>> >>>>>> call subb(orig,vert1,tvec) >>>>>> 1 >>>>>> Error: Invalid procedure argument at (1) >>>>>> math_routine.F90:1339:18: >>>>>> >>>>>> qvec = cross_pdt2(tvec,edge1) >>>>>> 1 >>>>>> Error: Invalid procedure argument at (1) >>>>>> math_routine.F90:1345:21: >>>>>> >>>>>> uu = dot_product(tvec,pvec) >>>>>> 1 >>>>>> Error: ?vector_a? argument of ?dot_product? intrinsic at (1) must be >>>>>> numeric or LOGICAL >>>>>> math_routine.F90:1371:21: >>>>>> >>>>>> uu = dot_product(tvec,pvec) >>>>>> >>>>>> These errors were not present before. My variables are mostly vectors: >>>>>> >>>>>> real(8), intent(in) :: >>>>>> orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3) >>>>>> >>>>>> real(8) :: uu,vv,dir(3) >>>>>> >>>>>> real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t >>>>>> >>>>>> I wonder what happened? >>>>>> >>>>>> Please advice. >>>>>> >>>>>> >>>>>> -- >>>>>> Thank you very much. >>>>>> >>>>>> Yours sincerely, >>>>>> >>>>>> ================================================ >>>>>> TAY Wee-Beng ??? >>>>>> Research Scientist >>>>>> Experimental AeroScience Group >>>>>> Temasek Laboratories >>>>>> National University of Singapore >>>>>> T-Lab Building >>>>>> 5A, Engineering Drive 1, #02-02 >>>>>> Singapore 117411 >>>>>> Phone: +65 65167330 >>>>>> E-mail: tsltaywb at nus.edu.sg >>>>>> http://www.temasek-labs.nus.edu.sg/program/program_aeroexperimental_tsltaywb.php >>>>>> Personal research webpage: http://tayweebeng.wixsite.com/website >>>>>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >>>>>> linkedin: www.linkedin.com/in/tay-weebeng >>>>>> ================================================ >>>>>> >>>>>> >>>>>> ________________________________ >>>>>> >>>>>> Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you. > From knepley at gmail.com Wed Feb 21 05:25:50 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 21 Feb 2018 06:25:50 -0500 Subject: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3 In-Reply-To: <7285cdbc-af1b-f116-d11e-72b9e7aba138@gmail.com> References: <34377631-9561-4B61-A350-6F597356C30F@anl.gov> <52FBAFFC-573A-42A3-9E09-3F2532F27A62@dsic.upv.es> <7285cdbc-af1b-f116-d11e-72b9e7aba138@gmail.com> Message-ID: On Tue, Feb 20, 2018 at 9:40 PM, TAY wee-beng wrote: > Hi, > > Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to > debug step by step. I got into problem when calling: > > call DMCreateGlobalVector(da_u,u_global,ierr) > > The error is: > > [0]PETSC ERROR: --------------------- Error Message > ---------------------------- > ---------------------------------- > [0]PETSC ERROR: Null argument, when expecting valid pointer > [0]PETSC ERROR: Null Object: Parameter # 2 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trou > ble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 > [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a > petsc-3.8. > 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 10:20:20 2018 > [0]PETSC ERROR: Configure options --with-cc="win32fe icl" > --with-fc="win32fe ifo > rt" --with-cxx="win32fe icl" --download-fblaslapack > --with-mpi-include="[/cygdri > ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program > Files (x.... > > But all I changed is from: > > module global_data > #include "petsc/finclude/petsc.h" > use petsc > use kdtree2_module > implicit none > save > !grid variables > > integer :: size_x,s.... > > ... > > to > > module global_data > use kdtree2_module > implicit none > save > #include "petsc/finclude/petsc.h90" > !grid variables > integer :: size_x,s... > > ... > > da_u, u_global were declared thru: > > DM da_u,da_v,... > DM da_cu_types ... > Vec u_local,u_global,v_local... > > So what could be the problem? > If you are using the latest release (or master) then http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/UsingFortran.html you include petsc.h not petsc.h90, and you need to 'use petsc' as well, so the first form looks correct, not the second. Thanks, Matt > > Thank you very much. > > Yours sincerely, > > ================================================ > TAY Wee-Beng (Zheng Weiming) ??? > Personal research webpage: http://tayweebeng.wixsite.com/website > Youtube research showcase: https://www.youtube.com/channe > l/UC72ZHtvQNMpNs2uRTSToiLA > linkedin: www.linkedin.com/in/tay-weebeng > ================================================ > > On 20/2/2018 10:46 PM, Jose E. Roman wrote: > >> Probably the first error is produced by using a variable (mpi_comm) with >> the same name as an MPI type. >> >> The second error I guess is due to variable tvec, since a Fortran type >> tVec is now being defined in src/vec/f90-mod/petscvec.h >> >> Jose >> >> >> El 20 feb 2018, a las 15:35, Smith, Barry F. >>> escribi?: >>> >>> >>> Please run a clean compile of everything and cut and paste all the >>> output. This will make it much easier to debug than trying to understand >>> your snippets of what is going wrong. >>> >>> On Feb 20, 2018, at 1:56 AM, TAY Wee Beng wrote: >>>> >>>> Hi, >>>> >>>> I was previously using PETSc 3.7.6 on different clusters with both Intel >>>> Fortran and GNU Fortran. After upgrading, I met some problems when >>>> trying to compile: >>>> >>>> On Intel Fortran: >>>> >>>> Previously, I was using: >>>> >>>> #include "petsc/finclude/petsc.h90" >>>> >>>> in *.F90 when requires the use of PETSc >>>> >>>> I read in the change log that h90 is no longer there and so I replaced >>>> with #include "petsc/finclude/petsc.h" >>>> >>>> It worked. But I also have some *.F90 which do not use PETSc. However, >>>> they use some modules which uses PETSc. >>>> >>>> Now I can't compile them. The error is : >>>> >>>> math_routine.f90(3): error #7002: Error in opening the compiled module >>>> file. Check INCLUDE paths. [PETSC] >>>> use mpi_subroutines >>>> >>>> mpi_subroutines is a module which uses PETSc, and it compiled w/o >>>> problem. >>>> >>>> The solution is that I have to compile e.g. math_routine.F90 as if they >>>> use PETSc, by including PETSc include and lib files. >>>> >>>> May I know why this is so? It was not necessary before. >>>> >>>> Anyway, it managed to compile until it reached hypre.F90. >>>> >>>> Previously, due to some bugs, I have to compile hypre with the -r8 >>>> option. 
Also, I have to use: >>>> >>>> integer(8) mpi_comm >>>> >>>> mpi_comm = MPI_COMM_WORLD >>>> >>>> to make my codes work with HYPRE. >>>> >>>> But now, compiling gives the error: >>>> >>>> hypre.F90(11): error #6401: The attributes of this name conflict with >>>> those made accessible by a USE statement. [MPI_COMM] >>>> integer(8) mpi_comm >>>> --------------------------------------^ >>>> hypre.F90(84): error #6478: A type-name must not be used as a >>>> variable. [MPI_COMM] >>>> mpi_comm = MPI_COMM_WORLD >>>> ----^ >>>> hypre.F90(84): error #6303: The assignment operation or the binary >>>> expression operation is invalid for the data types of the two >>>> operands. [1140850688] >>>> mpi_comm = MPI_COMM_WORLD >>>> ---------------^ >>>> hypre.F90(100): error #6478: A type-name must not be used as a >>>> variable. [MPI_COMM] >>>> call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr) >>>> ... >>>> >>>> What's actually happening? Why can't I compile now? >>>> >>>> On GNU gfortran: >>>> >>>> I tried to use similar tactics as above here. However, when compiling >>>> math_routine.F90, I got the error: >>>> >>>> math_routine.F90:1333:21: >>>> >>>> call subb(orig,vert1,tvec) >>>> 1 >>>> Error: Invalid procedure argument at (1) >>>> math_routine.F90:1339:18: >>>> >>>> qvec = cross_pdt2(tvec,edge1) >>>> 1 >>>> Error: Invalid procedure argument at (1) >>>> math_routine.F90:1345:21: >>>> >>>> uu = dot_product(tvec,pvec) >>>> 1 >>>> Error: ?vector_a? argument of ?dot_product? intrinsic at (1) must be >>>> numeric or LOGICAL >>>> math_routine.F90:1371:21: >>>> >>>> uu = dot_product(tvec,pvec) >>>> >>>> These errors were not present before. My variables are mostly vectors: >>>> >>>> real(8), intent(in) :: >>>> orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3) >>>> >>>> real(8) :: uu,vv,dir(3) >>>> >>>> real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilo >>>> n,d,t >>>> >>>> I wonder what happened? >>>> >>>> Please advice. >>>> >>>> >>>> -- >>>> Thank you very much. >>>> >>>> Yours sincerely, >>>> >>>> ================================================ >>>> TAY Wee-Beng ??? >>>> Research Scientist >>>> Experimental AeroScience Group >>>> Temasek Laboratories >>>> National University of Singapore >>>> T-Lab Building >>>> 5A, Engineering Drive 1, #02-02 >>>> Singapore 117411 >>>> Phone: +65 65167330 >>>> E-mail: tsltaywb at nus.edu.sg >>>> http://www.temasek-labs.nus.edu.sg/program/program_aeroexper >>>> imental_tsltaywb.php >>>> Personal research webpage: http://tayweebeng.wixsite.com/website >>>> Youtube research showcase: https://www.youtube.com/channe >>>> l/UC72ZHtvQNMpNs2uRTSToiLA >>>> linkedin: www.linkedin.com/in/tay-weebeng >>>> ================================================ >>>> >>>> >>>> ________________________________ >>>> >>>> Important: This email is confidential and may be privileged. If you are >>>> not the intended recipient, please delete it and notify us immediately; you >>>> should not copy or use it for any purpose, nor disclose its contents to any >>>> other person. Thank you. >>>> >>> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From danyang.su at gmail.com Wed Feb 21 11:22:12 2018 From: danyang.su at gmail.com (Danyang Su) Date: Wed, 21 Feb 2018 09:22:12 -0800 Subject: [petsc-users] Question on DMPlexCreateSection for Fortran In-Reply-To: References: <72eb7a04-a348-5637-c051-d2d35adf2b8d@gmail.com> Message-ID: <25e20ee5-23ee-a3fc-7b45-f981563f03b4@gmail.com> Hi Matt, To test the Segmentation Violation problem in my code, I modified the example ex1f90.F to reproduce the problem I have in my own code. If use DMPlexCreateBoxMesh to generate the mesh, the code works fine. However, if I use DMPlexCreateGmshFromFile, using the same mesh exported from "DMPlexCreateBoxMesh", it gives Segmentation Violation error. Did I miss something in the input mesh file? My first guess is the label "marker" used in the code, but I couldn't find any place to set this label. Would you please let me know how to solve this problem. My code is done in a similar way as ex1f90, it reads mesh from external file or creates from cell list, distributes the mesh (these already work), and then creates sections and sets ndof to the nodes. Thanks, Danyang On 18-02-20 10:07 AM, Danyang Su wrote: > On 18-02-20 09:52 AM, Matthew Knepley wrote: >> On Tue, Feb 20, 2018 at 12:30 PM, Danyang Su > > wrote: >> >> Hi All, >> >> I tried to compile the DMPlexCreateSection code but got error >> information as shown below. >> >> Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type >> >> I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS, then >> the code can be compiled but run into Segmentation Violation >> error in DMPlexCreateSection. >> >> From the webpage >> >> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexCreateSection.html >> >> >> The F90 version is?DMPlexCreateSectionF90. Doing this with F77 arrays >> would have been too painful. > Hi Matt, > > Sorry, I still cannot compile the code if use DMPlexCreateSectionF90 > instead of DMPlexCreateSection. Would you please tell me in more details? > > undefined reference to `dmplexcreatesectionf90_' > > then I #include , but this throws more > error during compilation. > > > ??? Included at > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6: > ??? Included at ../../solver/solver_ddmethod.F90:62: > > ????????? PETSCSECTION_HIDE section > ????????? 1 > Error: Unclassifiable statement at (1) > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:167.10: > ??? Included at > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6: > ??? Included at ../../solver/solver_ddmethod.F90:62: > > ????????? PETSCSECTION_HIDE section > ????????? 1 > Error: Unclassifiable statement at (1) > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:179.10: > ??? Included at > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6: > ??? Included at ../../solver/solver_ddmethod.F90:62: > >> >> ? Thanks, >> >> ? ? ?Matt >> >> dmda_flow%da is distributed dm object that works fine. >> >> The fortran example I follow is >> http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90 >> . >> >> >> What parameters should I use if passing null to bcField, bcComps, >> bcPoints and perm. 
>> >> PetscErrorCode >> DMPlexCreateSection >> (DM >> dm,PetscInt >> dim,PetscInt >> numFields,constPetscInt >> numComp[],constPetscInt >> numDof[],PetscInt >> numBC,constPetscInt >> bcField[], >> constIS >> bcComps[], constIS >> bcPoints[],IS >> perm,PetscSection >> *section) >> >> #include >> #include >> #include >> >> ... >> >> #ifdef USG >> ??????? numFields = 1 >> ??????? numComp(1) = 1 >> ??????? pNumComp => numComp >> >> ??????? do i = 1, numFields*(dmda_flow%dim+1) >> ????????? numDof(i) = 0 >> ??????? end do >> ??????? numDof(0*(dmda_flow%dim+1)+1) = dmda_flow%dof >> ??????? pNumDof => numDof >> >> ??????? numBC = 0 >> >> ??????? call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim, & >> numFields,pNumComp,pNumDof, & >> numBC,PETSC_NULL_INTEGER, & >> PETSC_NULL_IS,PETSC_NULL_IS, &???????????? !Error here >> PETSC_NULL_IS,section,ierr) >> ??????? CHKERRQ(ierr) >> >> ??????? call PetscSectionSetFieldName(section,0,'flow',ierr) >> ??????? CHKERRQ(ierr) >> >> ??????? call DMSetDefaultSection(dmda_flow%da,section,ierr) >> ??????? CHKERRQ(ierr) >> >> ??????? call PetscSectionDestroy(section,ierr) >> ??????? CHKERRQ(ierr) >> #endif >> >> Thanks, >> >> Danyang >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ex1f90.F Type: text/x-fortran Size: 4646 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: sol.msh Type: model/mesh Size: 251 bytes Desc: not available URL: From mistloin at unist.ac.kr Wed Feb 21 22:10:09 2018 From: mistloin at unist.ac.kr (=?ks_c_5601-1987?B?vK29wsH4ICix4rDox9ew+LnXv/jA2rfCsPjH0LrOKQ==?=) Date: Thu, 22 Feb 2018 04:10:09 +0000 Subject: [petsc-users] Question about PETSC with finite volume approach Message-ID: Dear Petsc-User, Hi, I am Seungjin Seo, and I am trying to use PETSC to solve my problem. I want to solve a heat conduction equation using finite volume methods in 2D and 3D. I am going to import Gmsh files. Can you recommend me an example case for this purpose? Also, can I set different boundary conditions (two neumann and two dirichlet boundary conditions) to each boundary in 2D and 3D geometry using finite volume methods when I import Gmsh files? Thanks for reading my email. Best regard, Seungjin Seo. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Thu Feb 22 00:23:04 2018 From: zonexo at gmail.com (TAY wee-beng) Date: Thu, 22 Feb 2018 14:23:04 +0800 Subject: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3 In-Reply-To: <6DF75825-0B14-4BE9-8CF6-563418F9CDA4@mcs.anl.gov> References: <34377631-9561-4B61-A350-6F597356C30F@anl.gov> <52FBAFFC-573A-42A3-9E09-3F2532F27A62@dsic.upv.es> <7285cdbc-af1b-f116-d11e-72b9e7aba138@gmail.com> <6DF75825-0B14-4BE9-8CF6-563418F9CDA4@mcs.anl.gov> Message-ID: <4775e526-7824-f601-1c66-fdd82c5395f9@gmail.com> On 21/2/2018 11:44 AM, Smith, Barry F. wrote: > Did you follow the directions in the changes file for 3.8? > >
  • Replace calls to DMDACreateXd() with DMDACreateXd(), [DMSetFromOptions()] DMSetUp()
>
  • DMDACreateXd() no longer can take negative values for dimensions; instead pass positive values and call DMSetFromOptions() immediately after
  • > > I suspect you are not calling DMSetUp() and this is causing the problem. > > Barry Ops sorry, indeed I didn't change that part. Got it compiled now. However, I have got a new problem. Previously, I was using Intel 2016 with PETSc 3.7.6. During compile, I used -O3 for all modules except one, which will give error (due to DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90). Hence, I need to use -O1. Now, I'm using Intel 2018 with PETSc 3.8.3 and I got the error: M Diverged but why?, time =??????????? 2 ?reason =?????????? -9 I tried to change all *.F90 from using -O3 to -O1 and although there's no diverged err printed, my values are different: 1????? 0.01600000????? 0.46655767????? 0.46310378????? 1.42427154 -0.81598016E+02 -0.11854431E-01? 0.42046197E+06 ?????? 2????? 0.00956350????? 0.67395693????? 0.64698638 1.44166606 -0.12828928E+03? 0.12179394E-01? 0.41961824E+06 vs 1????? 0.01600000????? 0.49096543????? 0.46259333????? 1.41828130 -0.81561221E+02 -0.16146574E-01? 0.42046335E+06 ?????? 2????? 0.00956310????? 0.68342495????? 0.63682485 1.44353571 -0.12813998E+03? 0.24226242E+00? 0.41962121E+06 The latter values are obtained using the debug built and they compared correctly with another cluster, which use GNU. What going on and how should I troubleshoot? Thanks > > >> On Feb 20, 2018, at 7:35 PM, TAY wee-beng wrote: >> >> >> On 21/2/2018 10:47 AM, Smith, Barry F. wrote: >>> Try setting >>> >>> u_global = tVec(1) >>> >>> immediately before the call to DMCreateGlobalVector() >>> >>> >> Hi, >> >> I added the line in but still got the same error below. Btw, my code is organised as: >> >> module global_data >> >> #include "petsc/finclude/petsc.h" >> use petsc >> use kdtree2_module >> implicit none >> save >> ... >> Vec u_local,u_global ... >> ... >> contains >> >> subroutine allo_var >> ... >> u_global = tVec(1) >> call DMCreateGlobalVector(da_u,u_global,ierr) >> ... >> >> >> >> >> [0]PETSC ERROR: --------------------- Error Message ---------------------------- >> ---------------------------------- >> [0]PETSC ERROR: Null argument, when expecting valid pointer >> [0]PETSC ERROR: Null Object: Parameter # 2 >> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou >> ble shooting. >> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. >> 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 11:18:20 2018 >> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo >> rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri >> ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x >> 86)/Microsoft SDKs/MPI/Include/x64]" --with-mpi-mpiexec="/cygdrive/c/Program Fil >> es/Microsoft MPI/Bin/mpiexec.exe" --with-debugging=1 --with-file-create-pause=1 >> --prefix=/cygdrive/c/wtay/Lib/petsc-3.8.3_win64_msmpi_vs2008 --with-mpi-lib="[/c >> ygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib,/cygdrive/ >> c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib]" --with-shared-libra >> ries=0 >> [0]PETSC ERROR: #1 VecSetLocalToGlobalMapping() line 78 in C:\Source\PETSC-~2.3\ >> src\vec\vec\INTERF~1\vector.c >> [0]PETSC ERROR: #2 DMCreateGlobalVector_DA() line 41 in C:\Source\PETSC-~2.3\src >> \dm\impls\da\dadist.c >> [0]PETSC ERROR: #3 DMCreateGlobalVector() line 844 in C:\Source\PETSC-~2.3\src\d >> m\INTERF~1\dm.c >> >> Thanks. 
>>>> On Feb 20, 2018, at 6:40 PM, TAY wee-beng wrote: >>>> >>>> Hi, >>>> >>>> Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to debug step by step. I got into problem when calling: >>>> >>>> call DMCreateGlobalVector(da_u,u_global,ierr) >>>> >>>> The error is: >>>> >>>> [0]PETSC ERROR: --------------------- Error Message ---------------------------- >>>> ---------------------------------- >>>> [0]PETSC ERROR: Null argument, when expecting valid pointer >>>> [0]PETSC ERROR: Null Object: Parameter # 2 >>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou >>>> ble shooting. >>>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >>>> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. >>>> 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 10:20:20 2018 >>>> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo >>>> rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri >>>> ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x.... >>>> >>>> But all I changed is from: >>>> >>>> module global_data >>>> #include "petsc/finclude/petsc.h" >>>> use petsc >>>> use kdtree2_module >>>> implicit none >>>> save >>>> !grid variables >>>> >>>> integer :: size_x,s.... >>>> >>>> ... >>>> >>>> to >>>> >>>> module global_data >>>> use kdtree2_module >>>> implicit none >>>> save >>>> #include "petsc/finclude/petsc.h90" >>>> !grid variables >>>> integer :: size_x,s... >>>> >>>> ... >>>> >>>> da_u, u_global were declared thru: >>>> >>>> DM da_u,da_v,... >>>> DM da_cu_types ... >>>> Vec u_local,u_global,v_local... >>>> >>>> So what could be the problem? >>>> >>>> >>>> Thank you very much. >>>> >>>> Yours sincerely, >>>> >>>> ================================================ >>>> TAY Wee-Beng (Zheng Weiming) ??? >>>> Personal research webpage: http://tayweebeng.wixsite.com/website >>>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >>>> linkedin: www.linkedin.com/in/tay-weebeng >>>> ================================================ >>>> >>>> On 20/2/2018 10:46 PM, Jose E. Roman wrote: >>>>> Probably the first error is produced by using a variable (mpi_comm) with the same name as an MPI type. >>>>> >>>>> The second error I guess is due to variable tvec, since a Fortran type tVec is now being defined in src/vec/f90-mod/petscvec.h >>>>> >>>>> Jose >>>>> >>>>> >>>>>> El 20 feb 2018, a las 15:35, Smith, Barry F. escribi?: >>>>>> >>>>>> >>>>>> Please run a clean compile of everything and cut and paste all the output. This will make it much easier to debug than trying to understand your snippets of what is going wrong. >>>>>> >>>>>>> On Feb 20, 2018, at 1:56 AM, TAY Wee Beng wrote: >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I was previously using PETSc 3.7.6 on different clusters with both Intel >>>>>>> Fortran and GNU Fortran. After upgrading, I met some problems when >>>>>>> trying to compile: >>>>>>> >>>>>>> On Intel Fortran: >>>>>>> >>>>>>> Previously, I was using: >>>>>>> >>>>>>> #include "petsc/finclude/petsc.h90" >>>>>>> >>>>>>> in *.F90 when requires the use of PETSc >>>>>>> >>>>>>> I read in the change log that h90 is no longer there and so I replaced >>>>>>> with #include "petsc/finclude/petsc.h" >>>>>>> >>>>>>> It worked. But I also have some *.F90 which do not use PETSc. However, >>>>>>> they use some modules which uses PETSc. >>>>>>> >>>>>>> Now I can't compile them. 
The error is : >>>>>>> >>>>>>> math_routine.f90(3): error #7002: Error in opening the compiled module >>>>>>> file. Check INCLUDE paths. [PETSC] >>>>>>> use mpi_subroutines >>>>>>> >>>>>>> mpi_subroutines is a module which uses PETSc, and it compiled w/o problem. >>>>>>> >>>>>>> The solution is that I have to compile e.g. math_routine.F90 as if they >>>>>>> use PETSc, by including PETSc include and lib files. >>>>>>> >>>>>>> May I know why this is so? It was not necessary before. >>>>>>> >>>>>>> Anyway, it managed to compile until it reached hypre.F90. >>>>>>> >>>>>>> Previously, due to some bugs, I have to compile hypre with the -r8 >>>>>>> option. Also, I have to use: >>>>>>> >>>>>>> integer(8) mpi_comm >>>>>>> >>>>>>> mpi_comm = MPI_COMM_WORLD >>>>>>> >>>>>>> to make my codes work with HYPRE. >>>>>>> >>>>>>> But now, compiling gives the error: >>>>>>> >>>>>>> hypre.F90(11): error #6401: The attributes of this name conflict with >>>>>>> those made accessible by a USE statement. [MPI_COMM] >>>>>>> integer(8) mpi_comm >>>>>>> --------------------------------------^ >>>>>>> hypre.F90(84): error #6478: A type-name must not be used as a >>>>>>> variable. [MPI_COMM] >>>>>>> mpi_comm = MPI_COMM_WORLD >>>>>>> ----^ >>>>>>> hypre.F90(84): error #6303: The assignment operation or the binary >>>>>>> expression operation is invalid for the data types of the two >>>>>>> operands. [1140850688] >>>>>>> mpi_comm = MPI_COMM_WORLD >>>>>>> ---------------^ >>>>>>> hypre.F90(100): error #6478: A type-name must not be used as a >>>>>>> variable. [MPI_COMM] >>>>>>> call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr) >>>>>>> ... >>>>>>> >>>>>>> What's actually happening? Why can't I compile now? >>>>>>> >>>>>>> On GNU gfortran: >>>>>>> >>>>>>> I tried to use similar tactics as above here. However, when compiling >>>>>>> math_routine.F90, I got the error: >>>>>>> >>>>>>> math_routine.F90:1333:21: >>>>>>> >>>>>>> call subb(orig,vert1,tvec) >>>>>>> 1 >>>>>>> Error: Invalid procedure argument at (1) >>>>>>> math_routine.F90:1339:18: >>>>>>> >>>>>>> qvec = cross_pdt2(tvec,edge1) >>>>>>> 1 >>>>>>> Error: Invalid procedure argument at (1) >>>>>>> math_routine.F90:1345:21: >>>>>>> >>>>>>> uu = dot_product(tvec,pvec) >>>>>>> 1 >>>>>>> Error: ?vector_a? argument of ?dot_product? intrinsic at (1) must be >>>>>>> numeric or LOGICAL >>>>>>> math_routine.F90:1371:21: >>>>>>> >>>>>>> uu = dot_product(tvec,pvec) >>>>>>> >>>>>>> These errors were not present before. My variables are mostly vectors: >>>>>>> >>>>>>> real(8), intent(in) :: >>>>>>> orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3) >>>>>>> >>>>>>> real(8) :: uu,vv,dir(3) >>>>>>> >>>>>>> real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t >>>>>>> >>>>>>> I wonder what happened? >>>>>>> >>>>>>> Please advice. >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Thank you very much. >>>>>>> >>>>>>> Yours sincerely, >>>>>>> >>>>>>> ================================================ >>>>>>> TAY Wee-Beng ??? 
>>>>>>> Research Scientist >>>>>>> Experimental AeroScience Group >>>>>>> Temasek Laboratories >>>>>>> National University of Singapore >>>>>>> T-Lab Building >>>>>>> 5A, Engineering Drive 1, #02-02 >>>>>>> Singapore 117411 >>>>>>> Phone: +65 65167330 >>>>>>> E-mail: tsltaywb at nus.edu.sg >>>>>>> http://www.temasek-labs.nus.edu.sg/program/program_aeroexperimental_tsltaywb.php >>>>>>> Personal research webpage: http://tayweebeng.wixsite.com/website >>>>>>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >>>>>>> linkedin: www.linkedin.com/in/tay-weebeng >>>>>>> ================================================ >>>>>>> >>>>>>> >>>>>>> ________________________________ >>>>>>> >>>>>>> Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you. From zonexo at gmail.com Thu Feb 22 00:24:11 2018 From: zonexo at gmail.com (TAY wee-beng) Date: Thu, 22 Feb 2018 14:24:11 +0800 Subject: [petsc-users] Compiling with PETSc 64-bit indices In-Reply-To: References: <0918f242-f7d2-15f1-6d9f-9887ecd7ef0f@gmail.com> Message-ID: <3407d93f-9eac-9675-d6bf-26005e912ca5@gmail.com> On 21/2/2018 9:12 AM, Matthew Knepley wrote: > On Tue, Feb 20, 2018 at 8:08 PM, TAY wee-beng > wrote: > > > On 21/2/2018 9:00 AM, Matthew Knepley wrote: >> On Tue, Feb 20, 2018 at 7:54 PM, TAY wee-beng > > wrote: >> >> Hi, >> >> When I run my CFD code with a grid size of 1119x1119x499 ( >> total grid size =??? 624828339 ), I got the error saying I >> need to compile PETSc with 64-bit indices. >> >> So I tried to compile PETSc again and then compile my CFD >> code with the newly compiled PETSc. However, now I got >> segmentation error: >> >> rm: cannot remove `log': No such file or directory >> [409]PETSC ERROR: >> ------------------------------------------------------------------------ >> [409]PETSC ERROR: [535]PETSC ERROR: [410]PETSC ERROR: >> ------------------------------------------------------------------------ >> [410]PETSC ERROR: Caught signal number 11 SEGV: Segmentation >> Violation, probably memory access out of range >> [410]PETSC ERROR: Try option -start_in_debugger or >> -on_error_attach_debugger >> [410]PETSC ERROR: [536]PETSC ERROR: >> ------------------------------------------------------------------------ >> [536]PETSC ERROR: Caught signal number 11 SEGV: Segmentation >> Violation, probably memory access out of range >> [536]PETSC ERROR: Try option -start_in_debugger or >> -on_error_attach_debugger >> [536]PETSC ERROR: or see >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >> >> [536]PETSC ERROR: or try http://valgrind.org on GNU/linux and >> Apple Mac OS X to find memory corruption errors >> [536]PETSC ERROR: likely location of problem given in stack below >> [536]PETSC ERROR: ---------------------? Stack Frames >> ------------------------------------ >> [536]PETSC ERROR: Note: The EXACT line numbers in the stack >> are not available, >> [536]PETSC ERROR:?????? INSTEAD the line number of the start >> of the function >> [536]PETSC ERROR:?????? is given. 
>> [536]PETSC ERROR: [536] DMDACheckOwnershipRanges_Private line >> 581 >> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c >> [536]PETSC ERROR: or see >> http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind >> >> [410]PETSC ERROR: or try http://valgrind.org on GNU/linux and >> Apple Mac OS X to find memory corruption errors >> [410]PETSC ERROR: likely location of problem given in stack below >> [410]PETSC ERROR: ---------------------? Stack Frames >> ------------------------------------ >> [410]PETSC ERROR: Note: The EXACT line numbers in the stack >> are not available, >> [897]PETSC ERROR: [536] DMDASetOwnershipRanges line 613 >> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c >> [536]PETSC ERROR: [536] DMDACreate3d line 1434 >> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da3.c >> [536]PETSC ERROR: --------------------- Error Message >> -------------------------------------------------------------- >> >> The CFD code worked previously but increasing the problem >> size results in segmentation error. It seems to be related to >> DMDACreate3d and DMDASetOwnershipRanges. Any idea where the >> problem lies? >> >> Besides, I want to know when and why do I have to use PETSc >> with 64-bit indices? >> >> >> 1) A 32-bit integer can hold numbers up to 2^32 = 4.2e9, so if >> you have a 3D velocity, pressure, and energy, you already have >> 3e9 unknowns, >> ? ? before you even start to count nonzero entries in the matrix. >> 64-bit integers allow you to handle these big sizes. >> >> Also, can I use the 64-bit indices version with smaller sized >> problems? >> >> >> 2) Yes >> >> And is there a speed difference between using the 32-bit and >> 64-bit indices ver? >> >> >> 3) I have seen no evidence of this >> >> 4) My guess is that you have defines regular integers in your >> code and passed them to PETSc, rather than using PetscInt as the >> type. > Oh that seems probable. So I am still using integer(4) when it > should be integer(8) for some values, is that so? If I use > PetscInt, is it the same as integer(8)? Or does it depend on the > actual number? > > > PetscInt will be integer(4) if you configure with 32-bit ints, and > integer(8) if you configure with 64-bit ints. If you use it > consistently, you can avoid problems > with matching the PETSc API. > > I wonder if I replace all my integer to PetscInt, will there be a > large increase in memory usage, because all integer(4) now becomes > integer(8)? > > > Only if you have large integer storage. Most codes do not. Hi, What do you mean by "large integer storage"? Btw, I got the following error when I ran a simple small test case with my CFD code: [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Out of memory. This could be due to allocating [0]PETSC ERROR: too large an object or bleeding by not properly [0]PETSC ERROR: destroying unneeded objects. [0]PETSC ERROR: Memory allocated 0 Memory used by process 52858880 [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. [0]PETSC ERROR: Memory requested 6917565139726106624 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 [0]PETSC ERROR: ./a.out on a petsc-3.8.3_intel_64_rel named nus02 by tsltaywb Thu Feb 22 10:34:29 2018 [0]PETSC ERROR: Configure options --with-mpi-dir=/app/intel/xe2018/compilers_and_libraries_2018.0.128/linux/mpi/intel64 --with-blaslapack-dir=/app/intel/xe2018/compilers_and_libraries_2018.0.128/linux/mkl/lib/intel64 --download-hypre=/home/users/nus/tsltaywb/source/git.hypre.tar.gz --with-debugging=0 --prefix=/home/users/nus/tsltaywb/lib/petsc-3.8.3_intel_64_rel --with-shared-libraries=0 --known-mpi-shared-libraries=0 --with-fortran-interfaces=1 --CFLAGS="-xHost -g -O3" --CXXFLAGS="-xHost -g -O3" --FFLAGS="-xHost -g -O3" --with-64-bit-indices [0]PETSC ERROR: #105 DMSetUp_DA() line 18 in /home/users/nus/tsltaywb/source/petsc-3.8.3/src/dm/impls/da/dareg.c [0]PETSC ERROR: #106 DMSetUp_DA() line 18 in /home/users/nus/tsltaywb/source/petsc-3.8.3/src/dm/impls/da/dareg.c [0]PETSC ERROR: #107 DMSetUp() line 720 in /home/users/nus/tsltaywb/source/petsc-3.8.3/src/dm/interface/dm.c [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Arguments are incompatible [0]PETSC ERROR: Ownership ranges sum to 4294967337 but global dimension is 41 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 [0]PETSC ERROR: ./a.out on a petsc-3.8.3_intel_64_rel named nus02 by tsltaywb Thu Feb 22 10:34:29 2018 [0]PETSC ERROR: Configure options --with-mpi-dir=/app/intel/xe2018/compilers_and_libraries_2018.0.128/linux/mpi/intel64 --with-blaslapack-dir=/app/intel/xe2018/compilers_and_libraries_2018.0.128/linux/mkl/lib/intel64 --download-hypre=/home/users/nus/tsltaywb/source/git.hypre.tar.gz --with-debugging=0 --prefix=/home/users/nus/tsltaywb/lib/petsc-3.8.3_intel_64_rel --with-shared-libraries=0 --known-mpi-shared-libraries=0 --with-fortran-interfaces=1 --CFLAGS="-xHost -g -O3" --CXXFLAGS="-xHost -g -O3" --FFLAGS="-xHost -g -O3" --with-64-bit-indices [0]PETSC ERROR: #108 DMDACheckOwnershipRanges_Private() line 548 in /home/users/nus/tsltaywb/source/petsc-3.8.3/src/dm/impls/da/da.c [0]PETSC ERROR: #109 DMDASetOwnershipRanges() line 580 in /home/users/nus/tsltaywb/source/petsc-3.8.3/src/dm/impls/da/da.c [0]PETSC ERROR: #110 DMDACreate3d() line 1444 in /home/users/nus/tsltaywb/source/petsc-3.8.3/src/dm/impls/da/da3.c What's the problem? Thanks. > > ? Thanks, > > ? ? Matt > > Thanks. >> >> ? Thanks, >> >> ? ? ?Matt >> >> >> -- >> Thank you very much. >> >> Yours sincerely, >> >> ================================================ >> TAY Wee-Beng (Zheng Weiming) ??? >> Personal research webpage: >> http://tayweebeng.wixsite.com/website >> >> Youtube research showcase: >> https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >> >> linkedin: www.linkedin.com/in/tay-weebeng >> >> ================================================ >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rubby0605 at hotmail.com Thu Feb 22 01:16:18 2018 From: rubby0605 at hotmail.com (Lin Tu) Date: Thu, 22 Feb 2018 07:16:18 +0000 Subject: [petsc-users] Request for being added into mailing list Message-ID: Dear Petsc team, I'm new and please add me in the mailing list, thank you very much in advance! Sincerely, Lin Tu Lin (Ruby) Tu PhD student Institute for Astrophysics University of Vienna T??rkenschanzstra??e 17 1180 Wien -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Feb 22 04:32:00 2018 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 22 Feb 2018 05:32:00 -0500 Subject: [petsc-users] Compiling with PETSc 64-bit indices In-Reply-To: <3407d93f-9eac-9675-d6bf-26005e912ca5@gmail.com> References: <0918f242-f7d2-15f1-6d9f-9887ecd7ef0f@gmail.com> <3407d93f-9eac-9675-d6bf-26005e912ca5@gmail.com> Message-ID: On Thu, Feb 22, 2018 at 1:24 AM, TAY wee-beng wrote: > > On 21/2/2018 9:12 AM, Matthew Knepley wrote: > > On Tue, Feb 20, 2018 at 8:08 PM, TAY wee-beng wrote: > >> >> On 21/2/2018 9:00 AM, Matthew Knepley wrote: >> >> On Tue, Feb 20, 2018 at 7:54 PM, TAY wee-beng wrote: >> >>> Hi, >>> >>> When I run my CFD code with a grid size of 1119x1119x499 ( total grid >>> size = 624828339 ), I got the error saying I need to compile PETSc with >>> 64-bit indices. >>> >>> So I tried to compile PETSc again and then compile my CFD code with the >>> newly compiled PETSc. However, now I got segmentation error: >>> >>> rm: cannot remove `log': No such file or directory >>> [409]PETSC ERROR: ------------------------------ >>> ------------------------------------------ >>> [409]PETSC ERROR: [535]PETSC ERROR: [410]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [410]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>> probably memory access out of range >>> [410]PETSC ERROR: Try option -start_in_debugger or >>> -on_error_attach_debugger >>> [410]PETSC ERROR: [536]PETSC ERROR: ------------------------------ >>> ------------------------------------------ >>> [536]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>> probably memory access out of range >>> [536]PETSC ERROR: Try option -start_in_debugger or >>> -on_error_attach_debugger >>> [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/d >>> ocumentation/faq.html#valgrind >>> [536]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac >>> OS X to find memory corruption errors >>> [536]PETSC ERROR: likely location of problem given in stack below >>> [536]PETSC ERROR: --------------------- Stack Frames >>> ------------------------------------ >>> [536]PETSC ERROR: Note: The EXACT line numbers in the stack are not >>> available, >>> [536]PETSC ERROR: INSTEAD the line number of the start of the >>> function >>> [536]PETSC ERROR: is given. 
>>> [536]PETSC ERROR: [536] DMDACheckOwnershipRanges_Private line 581 >>> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c >>> [536]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/d >>> ocumentation/faq.html#valgrind >>> [410]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac >>> OS X to find memory corruption errors >>> [410]PETSC ERROR: likely location of problem given in stack below >>> [410]PETSC ERROR: --------------------- Stack Frames >>> ------------------------------------ >>> [410]PETSC ERROR: Note: The EXACT line numbers in the stack are not >>> available, >>> [897]PETSC ERROR: [536] DMDASetOwnershipRanges line 613 >>> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da.c >>> [536]PETSC ERROR: [536] DMDACreate3d line 1434 >>> /home/users/nus/tsltaywb/source/petsc-3.7.6/src/dm/impls/da/da3.c >>> [536]PETSC ERROR: --------------------- Error Message >>> -------------------------------------------------------------- >>> >>> The CFD code worked previously but increasing the problem size results >>> in segmentation error. It seems to be related to DMDACreate3d and >>> DMDASetOwnershipRanges. Any idea where the problem lies? >>> >>> Besides, I want to know when and why do I have to use PETSc with 64-bit >>> indices? >>> >> >> 1) A 32-bit integer can hold numbers up to 2^32 = 4.2e9, so if you have a >> 3D velocity, pressure, and energy, you already have 3e9 unknowns, >> before you even start to count nonzero entries in the matrix. 64-bit >> integers allow you to handle these big sizes. >> >> >>> Also, can I use the 64-bit indices version with smaller sized problems? >>> >> >> 2) Yes >> >> >>> And is there a speed difference between using the 32-bit and 64-bit >>> indices ver? >> >> >> 3) I have seen no evidence of this >> >> 4) My guess is that you have defines regular integers in your code and >> passed them to PETSc, rather than using PetscInt as the type. >> >> Oh that seems probable. So I am still using integer(4) when it should be >> integer(8) for some values, is that so? If I use PetscInt, is it the same >> as integer(8)? Or does it depend on the actual number? >> > > PetscInt will be integer(4) if you configure with 32-bit ints, and > integer(8) if you configure with 64-bit ints. If you use it consistently, > you can avoid problems > with matching the PETSc API. > > I wonder if I replace all my integer to PetscInt, will there be a large >> increase in memory usage, because all integer(4) now becomes integer(8)? >> > > Only if you have large integer storage. Most codes do not. > > Hi, > > What do you mean by "large integer storage"? > You have some structure that stores 1B integers. Most codes do not. > Btw, I got the following error when I ran a simple small test case with my > CFD code: > You requested 7M TB of memory. This looks like an uninitialized integer. Thanks, Matt > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Out of memory. This could be due to allocating > [0]PETSC ERROR: too large an object or bleeding by not properly > [0]PETSC ERROR: destroying unneeded objects. > [0]PETSC ERROR: Memory allocated 0 Memory used by process 52858880 > [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info. > [0]PETSC ERROR: Memory requested 6917565139726106624 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 > [0]PETSC ERROR: ./a.out on a petsc-3.8.3_intel_64_rel named nus02 by > tsltaywb Thu Feb 22 10:34:29 2018 > [0]PETSC ERROR: Configure options --with-mpi-dir=/app/intel/ > xe2018/compilers_and_libraries_2018.0.128/linux/mpi/intel64 > --with-blaslapack-dir=/app/intel/xe2018/compilers_and_ > libraries_2018.0.128/linux/mkl/lib/intel64 --download-hypre=/home/users/ > nus/tsltaywb/source/git.hypre.tar.gz --with-debugging=0 > --prefix=/home/users/nus/tsltaywb/lib/petsc-3.8.3_intel_64_rel > --with-shared-libraries=0 --known-mpi-shared-libraries=0 > --with-fortran-interfaces=1 --CFLAGS="-xHost -g -O3" --CXXFLAGS="-xHost -g > -O3" --FFLAGS="-xHost -g -O3" --with-64-bit-indices > [0]PETSC ERROR: #105 DMSetUp_DA() line 18 in /home/users/nus/tsltaywb/ > source/petsc-3.8.3/src/dm/impls/da/dareg.c > [0]PETSC ERROR: #106 DMSetUp_DA() line 18 in /home/users/nus/tsltaywb/ > source/petsc-3.8.3/src/dm/impls/da/dareg.c > [0]PETSC ERROR: #107 DMSetUp() line 720 in /home/users/nus/tsltaywb/ > source/petsc-3.8.3/src/dm/interface/dm.c > [0]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [0]PETSC ERROR: Arguments are incompatible > [0]PETSC ERROR: Ownership ranges sum to 4294967337 but global dimension is > 41 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 > [0]PETSC ERROR: ./a.out on a petsc-3.8.3_intel_64_rel named nus02 by > tsltaywb Thu Feb 22 10:34:29 2018 > [0]PETSC ERROR: Configure options --with-mpi-dir=/app/intel/ > xe2018/compilers_and_libraries_2018.0.128/linux/mpi/intel64 > --with-blaslapack-dir=/app/intel/xe2018/compilers_and_ > libraries_2018.0.128/linux/mkl/lib/intel64 --download-hypre=/home/users/ > nus/tsltaywb/source/git.hypre.tar.gz --with-debugging=0 > --prefix=/home/users/nus/tsltaywb/lib/petsc-3.8.3_intel_64_rel > --with-shared-libraries=0 --known-mpi-shared-libraries=0 > --with-fortran-interfaces=1 --CFLAGS="-xHost -g -O3" --CXXFLAGS="-xHost -g > -O3" --FFLAGS="-xHost -g -O3" --with-64-bit-indices > [0]PETSC ERROR: #108 DMDACheckOwnershipRanges_Private() line 548 in > /home/users/nus/tsltaywb/source/petsc-3.8.3/src/dm/impls/da/da.c > [0]PETSC ERROR: #109 DMDASetOwnershipRanges() line 580 in > /home/users/nus/tsltaywb/source/petsc-3.8.3/src/dm/impls/da/da.c > [0]PETSC ERROR: #110 DMDACreate3d() line 1444 in /home/users/nus/tsltaywb/ > source/petsc-3.8.3/src/dm/impls/da/da3.c > > What's the problem? > > Thanks. > > > Thanks, > > Matt > > >> Thanks. >> >> >> Thanks, >> >> Matt >> >> >>> >>> -- >>> Thank you very much. >>> >>> Yours sincerely, >>> >>> ================================================ >>> TAY Wee-Beng (Zheng Weiming) ??? >>> Personal research webpage: http://tayweebeng.wixsite.com/website >>> Youtube research showcase: https://www.youtube.com/channe >>> l/UC72ZHtvQNMpNs2uRTSToiLA >>> linkedin: www.linkedin.com/in/tay-weebeng >>> ================================================ >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. 
> -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Feb 22 04:43:17 2018 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 22 Feb 2018 05:43:17 -0500 Subject: [petsc-users] Question about PETSC with finite volume approach In-Reply-To: References: Message-ID: On Wed, Feb 21, 2018 at 11:10 PM, ??? (???????????) wrote: > Dear Petsc-User, > > > Hi, I am Seungjin Seo, and I am trying to use PETSC to solve my problem. > > > I want to solve a heat conduction equation using finite volume methods in > 2D and 3D. > > I am going to import Gmsh files. Can you recommend me an example case for > this purpose? > > > > Also, can I set different boundary conditions (two neumann and two > dirichlet boundary conditions) to each boundary in 2D and 3D geometry using > finite volume methods when I import Gmsh files? > We do not have developed FV support. Anything you do would require significant programming,as opposed to something like http://openfvm.sourceforge.net/, which supports Gmsh. We do have good support for reading in GMsh files, through DMPlex. Thanks, Matt > Thanks for reading my email. > > Best regard, > > Seungjin Seo. > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From epscodes at gmail.com Thu Feb 22 08:55:22 2018 From: epscodes at gmail.com (Xiangdong) Date: Thu, 22 Feb 2018 09:55:22 -0500 Subject: [petsc-users] question about MatInvertBlockDiagonal_SeqBAIJ Message-ID: Hello everyone, I am curious about the purpose of mdiag in MatInverseBlockDiagonal_SeqBAIJ http://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/baij/seq/baij.c.html It seems that the inverse of the diagonal block is stored is a->idiag, and the extra copy of diagonal block itself is stored in mdiag or a->idiag+bs2*mbs. What is the purpose of storing this mdiag as an extra copy of diagonal block? When will this mdiag be used? Thank you. Best, Xiangdong if (!a->idiag) { 38: PetscMalloc1(2*bs2*mbs,&a->idiag); 39: PetscLogObjectMemory((PetscObject)A,2*bs2*mbs*sizeof(PetscScalar)); 40: } 41: diag = a->idiag; 42: mdiag = a->idiag+bs2*mbs; 138: for (i=0; ifactorerrortype = MAT_FACTOR_NUMERIC_ZEROPIVOT; 144: diag += bs2; 145: mdiag += bs2; -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Feb 22 09:33:37 2018 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 22 Feb 2018 09:33:37 -0600 Subject: [petsc-users] Request for being added into mailing list In-Reply-To: References: Message-ID: Added now. Note: Normally one can subscribe with instructions from http://www.mcs.anl.gov/petsc/miscellaneous/mailing-lists.html Satish On Thu, 22 Feb 2018, Lin Tu wrote: > Dear Petsc team, > > > I'm new and please add me in the mailing list, thank you very much in advance! 
> > > Sincerely, > > Lin Tu > > > > Lin (Ruby) Tu > PhD student > Institute for Astrophysics > University of Vienna > T??rkenschanzstra??e 17 > 1180 Wien > > From bsmith at mcs.anl.gov Thu Feb 22 10:15:37 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Thu, 22 Feb 2018 16:15:37 +0000 Subject: [petsc-users] question about MatInvertBlockDiagonal_SeqBAIJ In-Reply-To: References: Message-ID: <18BD3A5F-C16B-4469-8B21-D05490A86BD9@anl.gov> It looks like this was copied from the AIJ format but is never used for BAIJ. Could probably be removed. Barry > On Feb 22, 2018, at 6:55 AM, Xiangdong wrote: > > Hello everyone, > > I am curious about the purpose of mdiag in MatInverseBlockDiagonal_SeqBAIJ > > http://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/baij/seq/baij.c.html > > It seems that the inverse of the diagonal block is stored is a->idiag, and the extra copy of diagonal block itself is stored in mdiag or a->idiag+bs2*mbs. What is the purpose of storing this mdiag as an extra copy of diagonal block? When will this mdiag be used? > > Thank you. > > Best, > Xiangdong > > > if (!a->idiag) { > 38: PetscMalloc1(2*bs2*mbs,&a->idiag); > 39: PetscLogObjectMemory((PetscObject)A,2*bs2*mbs*sizeof(PetscScalar)); > 40: } > 41: diag = a->idiag; > 42: mdiag = a->idiag+bs2*mbs; > > > 138: for (i=0; i 139: odiag = v + bs2*diag_offset[i]; > 140: PetscMemcpy(diag,odiag,bs2*sizeof(PetscScalar)); > 141: PetscMemcpy(mdiag,odiag,bs2*sizeof(PetscScalar)); > 142: PetscKernel_A_gets_inverse_A(bs,diag,v_pivots,v_work,allowzeropivot,&zeropivotdetected); > 143: if (zeropivotdetected) A->factorerrortype = MAT_FACTOR_NUMERIC_ZEROPIVOT; > 144: diag += bs2; > 145: mdiag += bs2; > From bsmith at mcs.anl.gov Thu Feb 22 11:40:05 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Thu, 22 Feb 2018 17:40:05 +0000 Subject: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3 In-Reply-To: <4775e526-7824-f601-1c66-fdd82c5395f9@gmail.com> References: <34377631-9561-4B61-A350-6F597356C30F@anl.gov> <52FBAFFC-573A-42A3-9E09-3F2532F27A62@dsic.upv.es> <7285cdbc-af1b-f116-d11e-72b9e7aba138@gmail.com> <6DF75825-0B14-4BE9-8CF6-563418F9CDA4@mcs.anl.gov> <4775e526-7824-f601-1c66-fdd82c5395f9@gmail.com> Message-ID: First run under valgrind to look for memory issues. Second I would change 1 thing at a time. So use the intel 2017 compiler with PETSc 2.8.3 so the only change is your needed changes to match 2.8.3 and does not include a compiler change. I am not sure what numbers you are printing below but often changing optimization levels can and will change numerical values slightly so change in numerical values may not indicate anything is wrong (or it may indicate something is wrong depending on how different the numerical values are). Barry > On Feb 21, 2018, at 10:23 PM, TAY wee-beng wrote: > > > On 21/2/2018 11:44 AM, Smith, Barry F. wrote: >> Did you follow the directions in the changes file for 3.8? >> >>
  • Replace calls to DMDACreateXd() with the sequence DMDACreateXd(), [DMSetFromOptions()], DMSetUp()
  • DMDACreateXd() can no longer take negative values for the dimensions; instead pass positive values and call DMSetFromOptions() immediately after
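
A minimal Fortran sketch of the 3.8 calling sequence described in the two points above; the grid sizes, dof and stencil width below are placeholders and not values taken from this thread:

      program dmda38
#include "petsc/finclude/petsc.h"
      use petsc
      implicit none
      DM             da
      Vec            u_global
      PetscInt       ni, nj, nk, dof, swidth
      PetscErrorCode ierr

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      ni = 8; nj = 8; nk = 8; dof = 1; swidth = 1   ! illustrative sizes only

      ! Pass positive global sizes; negative values are no longer accepted.
      call DMDACreate3d(PETSC_COMM_WORLD,                                &
           DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,           &
           DMDA_STENCIL_STAR,ni,nj,nk,                                   &
           PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,dof,swidth,            &
           PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,     &
           da,ierr)
      call DMSetFromOptions(da,ierr)   ! optional: picks up -da_grid_x etc.
      call DMSetUp(da,ierr)            ! required in 3.8 before the DMDA is used
      call DMCreateGlobalVector(da,u_global,ierr)

      call VecDestroy(u_global,ierr)
      call DMDestroy(da,ierr)
      call PetscFinalize(ierr)
      end program dmda38
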
  • >> >> I suspect you are not calling DMSetUp() and this is causing the problem. >> >> Barry > Ops sorry, indeed I didn't change that part. Got it compiled now. > > However, I have got a new problem. Previously, I was using Intel 2016 with PETSc 3.7.6. During compile, I used -O3 for all modules except one, which will give error (due to DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90). Hence, I need to use -O1. > > Now, I'm using Intel 2018 with PETSc 3.8.3 and I got the error: > > M Diverged but why?, time = 2 > reason = -9 > > I tried to change all *.F90 from using -O3 to -O1 and although there's no diverged err printed, my values are different: > > 1 0.01600000 0.46655767 0.46310378 1.42427154 -0.81598016E+02 -0.11854431E-01 0.42046197E+06 > 2 0.00956350 0.67395693 0.64698638 1.44166606 -0.12828928E+03 0.12179394E-01 0.41961824E+06 > > vs > > 1 0.01600000 0.49096543 0.46259333 1.41828130 -0.81561221E+02 -0.16146574E-01 0.42046335E+06 > 2 0.00956310 0.68342495 0.63682485 1.44353571 -0.12813998E+03 0.24226242E+00 0.41962121E+06 > > The latter values are obtained using the debug built and they compared correctly with another cluster, which use GNU. > > What going on and how should I troubleshoot? > Thanks >> >> >>> On Feb 20, 2018, at 7:35 PM, TAY wee-beng wrote: >>> >>> >>> On 21/2/2018 10:47 AM, Smith, Barry F. wrote: >>>> Try setting >>>> >>>> u_global = tVec(1) >>>> >>>> immediately before the call to DMCreateGlobalVector() >>>> >>>> >>> Hi, >>> >>> I added the line in but still got the same error below. Btw, my code is organised as: >>> >>> module global_data >>> >>> #include "petsc/finclude/petsc.h" >>> use petsc >>> use kdtree2_module >>> implicit none >>> save >>> ... >>> Vec u_local,u_global ... >>> ... >>> contains >>> >>> subroutine allo_var >>> ... >>> u_global = tVec(1) >>> call DMCreateGlobalVector(da_u,u_global,ierr) >>> ... >>> >>> >>> >>> >>> [0]PETSC ERROR: --------------------- Error Message ---------------------------- >>> ---------------------------------- >>> [0]PETSC ERROR: Null argument, when expecting valid pointer >>> [0]PETSC ERROR: Null Object: Parameter # 2 >>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou >>> ble shooting. >>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >>> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. >>> 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 11:18:20 2018 >>> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo >>> rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri >>> ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x >>> 86)/Microsoft SDKs/MPI/Include/x64]" --with-mpi-mpiexec="/cygdrive/c/Program Fil >>> es/Microsoft MPI/Bin/mpiexec.exe" --with-debugging=1 --with-file-create-pause=1 >>> --prefix=/cygdrive/c/wtay/Lib/petsc-3.8.3_win64_msmpi_vs2008 --with-mpi-lib="[/c >>> ygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib,/cygdrive/ >>> c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib]" --with-shared-libra >>> ries=0 >>> [0]PETSC ERROR: #1 VecSetLocalToGlobalMapping() line 78 in C:\Source\PETSC-~2.3\ >>> src\vec\vec\INTERF~1\vector.c >>> [0]PETSC ERROR: #2 DMCreateGlobalVector_DA() line 41 in C:\Source\PETSC-~2.3\src >>> \dm\impls\da\dadist.c >>> [0]PETSC ERROR: #3 DMCreateGlobalVector() line 844 in C:\Source\PETSC-~2.3\src\d >>> m\INTERF~1\dm.c >>> >>> Thanks. 
>>>>> On Feb 20, 2018, at 6:40 PM, TAY wee-beng wrote: >>>>> >>>>> Hi, >>>>> >>>>> Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to debug step by step. I got into problem when calling: >>>>> >>>>> call DMCreateGlobalVector(da_u,u_global,ierr) >>>>> >>>>> The error is: >>>>> >>>>> [0]PETSC ERROR: --------------------- Error Message ---------------------------- >>>>> ---------------------------------- >>>>> [0]PETSC ERROR: Null argument, when expecting valid pointer >>>>> [0]PETSC ERROR: Null Object: Parameter # 2 >>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou >>>>> ble shooting. >>>>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >>>>> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. >>>>> 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 10:20:20 2018 >>>>> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo >>>>> rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri >>>>> ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x.... >>>>> >>>>> But all I changed is from: >>>>> >>>>> module global_data >>>>> #include "petsc/finclude/petsc.h" >>>>> use petsc >>>>> use kdtree2_module >>>>> implicit none >>>>> save >>>>> !grid variables >>>>> >>>>> integer :: size_x,s.... >>>>> >>>>> ... >>>>> >>>>> to >>>>> >>>>> module global_data >>>>> use kdtree2_module >>>>> implicit none >>>>> save >>>>> #include "petsc/finclude/petsc.h90" >>>>> !grid variables >>>>> integer :: size_x,s... >>>>> >>>>> ... >>>>> >>>>> da_u, u_global were declared thru: >>>>> >>>>> DM da_u,da_v,... >>>>> DM da_cu_types ... >>>>> Vec u_local,u_global,v_local... >>>>> >>>>> So what could be the problem? >>>>> >>>>> >>>>> Thank you very much. >>>>> >>>>> Yours sincerely, >>>>> >>>>> ================================================ >>>>> TAY Wee-Beng (Zheng Weiming) ??? >>>>> Personal research webpage: http://tayweebeng.wixsite.com/website >>>>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >>>>> linkedin: www.linkedin.com/in/tay-weebeng >>>>> ================================================ >>>>> >>>>> On 20/2/2018 10:46 PM, Jose E. Roman wrote: >>>>>> Probably the first error is produced by using a variable (mpi_comm) with the same name as an MPI type. >>>>>> >>>>>> The second error I guess is due to variable tvec, since a Fortran type tVec is now being defined in src/vec/f90-mod/petscvec.h >>>>>> >>>>>> Jose >>>>>> >>>>>> >>>>>>> El 20 feb 2018, a las 15:35, Smith, Barry F. escribi?: >>>>>>> >>>>>>> >>>>>>> Please run a clean compile of everything and cut and paste all the output. This will make it much easier to debug than trying to understand your snippets of what is going wrong. >>>>>>> >>>>>>>> On Feb 20, 2018, at 1:56 AM, TAY Wee Beng wrote: >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> I was previously using PETSc 3.7.6 on different clusters with both Intel >>>>>>>> Fortran and GNU Fortran. After upgrading, I met some problems when >>>>>>>> trying to compile: >>>>>>>> >>>>>>>> On Intel Fortran: >>>>>>>> >>>>>>>> Previously, I was using: >>>>>>>> >>>>>>>> #include "petsc/finclude/petsc.h90" >>>>>>>> >>>>>>>> in *.F90 when requires the use of PETSc >>>>>>>> >>>>>>>> I read in the change log that h90 is no longer there and so I replaced >>>>>>>> with #include "petsc/finclude/petsc.h" >>>>>>>> >>>>>>>> It worked. But I also have some *.F90 which do not use PETSc. 
However, >>>>>>>> they use some modules which uses PETSc. >>>>>>>> >>>>>>>> Now I can't compile them. The error is : >>>>>>>> >>>>>>>> math_routine.f90(3): error #7002: Error in opening the compiled module >>>>>>>> file. Check INCLUDE paths. [PETSC] >>>>>>>> use mpi_subroutines >>>>>>>> >>>>>>>> mpi_subroutines is a module which uses PETSc, and it compiled w/o problem. >>>>>>>> >>>>>>>> The solution is that I have to compile e.g. math_routine.F90 as if they >>>>>>>> use PETSc, by including PETSc include and lib files. >>>>>>>> >>>>>>>> May I know why this is so? It was not necessary before. >>>>>>>> >>>>>>>> Anyway, it managed to compile until it reached hypre.F90. >>>>>>>> >>>>>>>> Previously, due to some bugs, I have to compile hypre with the -r8 >>>>>>>> option. Also, I have to use: >>>>>>>> >>>>>>>> integer(8) mpi_comm >>>>>>>> >>>>>>>> mpi_comm = MPI_COMM_WORLD >>>>>>>> >>>>>>>> to make my codes work with HYPRE. >>>>>>>> >>>>>>>> But now, compiling gives the error: >>>>>>>> >>>>>>>> hypre.F90(11): error #6401: The attributes of this name conflict with >>>>>>>> those made accessible by a USE statement. [MPI_COMM] >>>>>>>> integer(8) mpi_comm >>>>>>>> --------------------------------------^ >>>>>>>> hypre.F90(84): error #6478: A type-name must not be used as a >>>>>>>> variable. [MPI_COMM] >>>>>>>> mpi_comm = MPI_COMM_WORLD >>>>>>>> ----^ >>>>>>>> hypre.F90(84): error #6303: The assignment operation or the binary >>>>>>>> expression operation is invalid for the data types of the two >>>>>>>> operands. [1140850688] >>>>>>>> mpi_comm = MPI_COMM_WORLD >>>>>>>> ---------------^ >>>>>>>> hypre.F90(100): error #6478: A type-name must not be used as a >>>>>>>> variable. [MPI_COMM] >>>>>>>> call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr) >>>>>>>> ... >>>>>>>> >>>>>>>> What's actually happening? Why can't I compile now? >>>>>>>> >>>>>>>> On GNU gfortran: >>>>>>>> >>>>>>>> I tried to use similar tactics as above here. However, when compiling >>>>>>>> math_routine.F90, I got the error: >>>>>>>> >>>>>>>> math_routine.F90:1333:21: >>>>>>>> >>>>>>>> call subb(orig,vert1,tvec) >>>>>>>> 1 >>>>>>>> Error: Invalid procedure argument at (1) >>>>>>>> math_routine.F90:1339:18: >>>>>>>> >>>>>>>> qvec = cross_pdt2(tvec,edge1) >>>>>>>> 1 >>>>>>>> Error: Invalid procedure argument at (1) >>>>>>>> math_routine.F90:1345:21: >>>>>>>> >>>>>>>> uu = dot_product(tvec,pvec) >>>>>>>> 1 >>>>>>>> Error: ?vector_a? argument of ?dot_product? intrinsic at (1) must be >>>>>>>> numeric or LOGICAL >>>>>>>> math_routine.F90:1371:21: >>>>>>>> >>>>>>>> uu = dot_product(tvec,pvec) >>>>>>>> >>>>>>>> These errors were not present before. My variables are mostly vectors: >>>>>>>> >>>>>>>> real(8), intent(in) :: >>>>>>>> orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3) >>>>>>>> >>>>>>>> real(8) :: uu,vv,dir(3) >>>>>>>> >>>>>>>> real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t >>>>>>>> >>>>>>>> I wonder what happened? >>>>>>>> >>>>>>>> Please advice. >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Thank you very much. >>>>>>>> >>>>>>>> Yours sincerely, >>>>>>>> >>>>>>>> ================================================ >>>>>>>> TAY Wee-Beng ??? 
>>>>>>>> Research Scientist >>>>>>>> Experimental AeroScience Group >>>>>>>> Temasek Laboratories >>>>>>>> National University of Singapore >>>>>>>> T-Lab Building >>>>>>>> 5A, Engineering Drive 1, #02-02 >>>>>>>> Singapore 117411 >>>>>>>> Phone: +65 65167330 >>>>>>>> E-mail: tsltaywb at nus.edu.sg >>>>>>>> http://www.temasek-labs.nus.edu.sg/program/program_aeroexperimental_tsltaywb.php >>>>>>>> Personal research webpage: http://tayweebeng.wixsite.com/website >>>>>>>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >>>>>>>> linkedin: www.linkedin.com/in/tay-weebeng >>>>>>>> ================================================ >>>>>>>> >>>>>>>> >>>>>>>> ________________________________ >>>>>>>> >>>>>>>> Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you. > From bsmith at mcs.anl.gov Thu Feb 22 12:42:10 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Thu, 22 Feb 2018 18:42:10 +0000 Subject: [petsc-users] question about MatInvertBlockDiagonal_SeqBAIJ In-Reply-To: References: Message-ID: Removed in the branch barry/remove-mdiag-baij Barry > On Feb 22, 2018, at 6:55 AM, Xiangdong wrote: > > Hello everyone, > > I am curious about the purpose of mdiag in MatInverseBlockDiagonal_SeqBAIJ > > http://www.mcs.anl.gov/petsc/petsc-current/src/mat/impls/baij/seq/baij.c.html > > It seems that the inverse of the diagonal block is stored is a->idiag, and the extra copy of diagonal block itself is stored in mdiag or a->idiag+bs2*mbs. What is the purpose of storing this mdiag as an extra copy of diagonal block? When will this mdiag be used? > > Thank you. > > Best, > Xiangdong > > > if (!a->idiag) { > 38: PetscMalloc1(2*bs2*mbs,&a->idiag); > 39: PetscLogObjectMemory((PetscObject)A,2*bs2*mbs*sizeof(PetscScalar)); > 40: } > 41: diag = a->idiag; > 42: mdiag = a->idiag+bs2*mbs; > > > 138: for (i=0; i 139: odiag = v + bs2*diag_offset[i]; > 140: PetscMemcpy(diag,odiag,bs2*sizeof(PetscScalar)); > 141: PetscMemcpy(mdiag,odiag,bs2*sizeof(PetscScalar)); > 142: PetscKernel_A_gets_inverse_A(bs,diag,v_pivots,v_work,allowzeropivot,&zeropivotdetected); > 143: if (zeropivotdetected) A->factorerrortype = MAT_FACTOR_NUMERIC_ZEROPIVOT; > 144: diag += bs2; > 145: mdiag += bs2; > From danyang.su at gmail.com Thu Feb 22 17:10:15 2018 From: danyang.su at gmail.com (Danyang Su) Date: Thu, 22 Feb 2018 15:10:15 -0800 Subject: [petsc-users] Question on DMPlexCreateSection for Fortran In-Reply-To: <25e20ee5-23ee-a3fc-7b45-f981563f03b4@gmail.com> References: <72eb7a04-a348-5637-c051-d2d35adf2b8d@gmail.com> <25e20ee5-23ee-a3fc-7b45-f981563f03b4@gmail.com> Message-ID: <7f5c629f-4258-5f9d-237c-9a1fd9d4c3df@gmail.com> Hi Matt, Just to let you know that after updating to PETSc 3.8.3 version, the DMPlexCreateSection in my code now works. One more question, what is the PETSC_NULL_XXX for IS pointer, as shown below, in C code, it just pass NULL, but in fortran, what is the name of null object for pBcCompIS and pBcPointIS. ??????? call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim,?????????? & numFields,pNumComp,pNumDof,?????????? & numBC,pBcField,?????????????????????? & pBcCompIS,pBcPointIS,???????????????? & ???????????????????????????????? PETSC_NULL_IS,section,ierr) ??????? 
CHKERRQ(ierr) Thanks, Danyang On 18-02-21 09:22 AM, Danyang Su wrote: > > Hi Matt, > > To test the Segmentation Violation problem in my code, I modified the > example ex1f90.F to reproduce the problem I have in my own code. > > If use DMPlexCreateBoxMesh to generate the mesh, the code works fine. > However, if I use DMPlexCreateGmshFromFile, using the same mesh > exported from "DMPlexCreateBoxMesh", it gives Segmentation Violation > error. > > Did I miss something in the input mesh file? My first guess is the > label "marker" used in the code, but I couldn't find any place to set > this label. > > Would you please let me know how to solve this problem. My code is > done in a similar way as ex1f90, it reads mesh from external file or > creates from cell list, distributes the mesh (these already work), and > then creates sections and sets ndof to the nodes. > > Thanks, > > Danyang > > > On 18-02-20 10:07 AM, Danyang Su wrote: >> On 18-02-20 09:52 AM, Matthew Knepley wrote: >>> On Tue, Feb 20, 2018 at 12:30 PM, Danyang Su >> > wrote: >>> >>> Hi All, >>> >>> I tried to compile the DMPlexCreateSection code but got error >>> information as shown below. >>> >>> Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type >>> >>> I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS, then >>> the code can be compiled but run into Segmentation Violation >>> error in DMPlexCreateSection. >>> >>> From the webpage >>> >>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/DMPlexCreateSection.html >>> >>> >>> The F90 version is?DMPlexCreateSectionF90. Doing this with F77 >>> arrays would have been too painful. >> Hi Matt, >> >> Sorry, I still cannot compile the code if use DMPlexCreateSectionF90 >> instead of DMPlexCreateSection. Would you please tell me in more >> details? >> >> undefined reference to `dmplexcreatesectionf90_' >> >> then I #include , but this throws >> more error during compilation. >> >> >> ??? Included at >> /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6: >> ??? Included at ../../solver/solver_ddmethod.F90:62: >> >> ????????? PETSCSECTION_HIDE section >> ????????? 1 >> Error: Unclassifiable statement at (1) >> /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:167.10: >> ??? Included at >> /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6: >> ??? Included at ../../solver/solver_ddmethod.F90:62: >> >> ????????? PETSCSECTION_HIDE section >> ????????? 1 >> Error: Unclassifiable statement at (1) >> /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ftn-custom/petscdmplex.h90:179.10: >> ??? Included at >> /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/petscdmplex.h90:6: >> ??? Included at ../../solver/solver_ddmethod.F90:62: >> >>> >>> ? Thanks, >>> >>> ? ? ?Matt >>> >>> dmda_flow%da is distributed dm object that works fine. >>> >>> The fortran example I follow is >>> http://www.mcs.anl.gov/petsc/petsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90 >>> . >>> >>> >>> What parameters should I use if passing null to bcField, >>> bcComps, bcPoints and perm. >>> >>> PetscErrorCode >>> DMPlexCreateSection >>> (DM >>> dm,PetscInt >>> dim,PetscInt >>> numFields,constPetscInt >>> numComp[],constPetscInt >>> numDof[],PetscInt >>> numBC,constPetscInt >>> bcField[], >>> constIS >>> bcComps[], constIS >>> bcPoints[],IS >>> perm,PetscSection >>> *section) >>> >>> #include >>> #include >>> #include >>> >>> ... >>> >>> #ifdef USG >>> ??????? numFields = 1 >>> ??????? 
numComp(1) = 1 >>> ??????? pNumComp => numComp >>> >>> ??????? do i = 1, numFields*(dmda_flow%dim+1) >>> ????????? numDof(i) = 0 >>> ??????? end do >>> ??????? numDof(0*(dmda_flow%dim+1)+1) = dmda_flow%dof >>> ??????? pNumDof => numDof >>> >>> ??????? numBC = 0 >>> >>> ??????? call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim, & >>> numFields,pNumComp,pNumDof, & >>> numBC,PETSC_NULL_INTEGER, & >>> PETSC_NULL_IS,PETSC_NULL_IS, &???????????? !Error here >>> PETSC_NULL_IS,section,ierr) >>> ??????? CHKERRQ(ierr) >>> >>> ??????? call PetscSectionSetFieldName(section,0,'flow',ierr) >>> ??????? CHKERRQ(ierr) >>> >>> ??????? call DMSetDefaultSection(dmda_flow%da,section,ierr) >>> ??????? CHKERRQ(ierr) >>> >>> ??????? call PetscSectionDestroy(section,ierr) >>> ??????? CHKERRQ(ierr) >>> #endif >>> >>> Thanks, >>> >>> Danyang >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to which >>> their experiments lead. >>> -- Norbert Wiener >>> >>> https://www.cse.buffalo.edu/~knepley/ >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Feb 22 17:16:32 2018 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 22 Feb 2018 18:16:32 -0500 Subject: [petsc-users] Question on DMPlexCreateSection for Fortran In-Reply-To: <7f5c629f-4258-5f9d-237c-9a1fd9d4c3df@gmail.com> References: <72eb7a04-a348-5637-c051-d2d35adf2b8d@gmail.com> <25e20ee5-23ee-a3fc-7b45-f981563f03b4@gmail.com> <7f5c629f-4258-5f9d-237c-9a1fd9d4c3df@gmail.com> Message-ID: On Thu, Feb 22, 2018 at 6:10 PM, Danyang Su wrote: > Hi Matt, > > Just to let you know that after updating to PETSc 3.8.3 version, the > DMPlexCreateSection in my code now works. > > One more question, what is the PETSC_NULL_XXX for IS pointer, as shown > below, in C code, it just pass NULL, but in fortran, what is the name of > null object for pBcCompIS and pBcPointIS. > Fortran does not "use pointers", so we need to pass a real object that we then convert to a NULL pointer for C. PETSC_NULL_IS is a real IS object in Fortran that then gets converted to NULL before calling the C function. Thanks, Matt > call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim, & > numFields,pNumComp,pNumDof, & > numBC,pBcField, & > pBcCompIS,pBcPointIS, & > PETSC_NULL_IS,section,ierr) > CHKERRQ(ierr) > > Thanks, > > Danyang > > > On 18-02-21 09:22 AM, Danyang Su wrote: > > Hi Matt, > > To test the Segmentation Violation problem in my code, I modified the > example ex1f90.F to reproduce the problem I have in my own code. > > If use DMPlexCreateBoxMesh to generate the mesh, the code works fine. > However, if I use DMPlexCreateGmshFromFile, using the same mesh exported > from "DMPlexCreateBoxMesh", it gives Segmentation Violation error. > > Did I miss something in the input mesh file? My first guess is the label > "marker" used in the code, but I couldn't find any place to set this label. > > Would you please let me know how to solve this problem. My code is done in > a similar way as ex1f90, it reads mesh from external file or creates from > cell list, distributes the mesh (these already work), and then creates > sections and sets ndof to the nodes. > > Thanks, > > Danyang > > On 18-02-20 10:07 AM, Danyang Su wrote: > > On 18-02-20 09:52 AM, Matthew Knepley wrote: > > On Tue, Feb 20, 2018 at 12:30 PM, Danyang Su wrote: > >> Hi All, >> >> I tried to compile the DMPlexCreateSection code but got error information >> as shown below. 
>> >> Error: Symbol 'petsc_null_is' at (1) has no IMPLICIT type >> >> I tried to use PETSC_NULL_OBJECT instead of PETSC_NULL_IS, then the code >> can be compiled but run into Segmentation Violation error in >> DMPlexCreateSection. >> > From the webpage > > http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DMPLEX/ > DMPlexCreateSection.html > > The F90 version is DMPlexCreateSectionF90. Doing this with F77 arrays > would have been too painful. > > Hi Matt, > > Sorry, I still cannot compile the code if use DMPlexCreateSectionF90 > instead of DMPlexCreateSection. Would you please tell me in more details? > > undefined reference to `dmplexcreatesectionf90_' > > then I #include , but this throws more > error during compilation. > > > Included at /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ > petscdmplex.h90:6: > Included at ../../solver/solver_ddmethod.F90:62: > > PETSCSECTION_HIDE section > 1 > Error: Unclassifiable statement at (1) > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ > ftn-custom/petscdmplex.h90:167.10: > Included at /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ > petscdmplex.h90:6: > Included at ../../solver/solver_ddmethod.F90:62: > > PETSCSECTION_HIDE section > 1 > Error: Unclassifiable statement at (1) > /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ > ftn-custom/petscdmplex.h90:179.10: > Included at /home/dsu/Soft/PETSc/petsc-3.7.5/include/petsc/finclude/ > petscdmplex.h90:6: > Included at ../../solver/solver_ddmethod.F90:62: > > > Thanks, > > Matt > >> dmda_flow%da is distributed dm object that works fine. >> >> The fortran example I follow is http://www.mcs.anl.gov/petsc/p >> etsc-dev/src/dm/impls/plex/examples/tutorials/ex1f90.F90. >> >> What parameters should I use if passing null to bcField, bcComps, >> bcPoints and perm. >> >> PetscErrorCode DMPlexCreateSection (DM dm, PetscInt dim, PetscInt numFields,const PetscInt numComp[],const PetscInt numDof[], PetscInt numBC,const PetscInt bcField[], >> const IS bcComps[], const IS bcPoints[], IS perm, PetscSection *section) >> >> >> #include >> #include >> #include >> >> ... >> >> #ifdef USG >> numFields = 1 >> numComp(1) = 1 >> pNumComp => numComp >> >> do i = 1, numFields*(dmda_flow%dim+1) >> numDof(i) = 0 >> end do >> numDof(0*(dmda_flow%dim+1)+1) = dmda_flow%dof >> pNumDof => numDof >> >> numBC = 0 >> >> call DMPlexCreateSection(dmda_flow%da,dmda_flow%dim, & >> numFields,pNumComp,pNumDof, >> & >> numBC,PETSC_NULL_INTEGER, >> & >> PETSC_NULL_IS,PETSC_NULL_IS, >> & !Error here >> PETSC_NULL_IS,section,ierr) >> CHKERRQ(ierr) >> >> call PetscSectionSetFieldName(section,0,'flow',ierr) >> CHKERRQ(ierr) >> >> call DMSetDefaultSection(dmda_flow%da,section,ierr) >> CHKERRQ(ierr) >> >> call PetscSectionDestroy(section,ierr) >> CHKERRQ(ierr) >> #endif >> >> Thanks, >> >> Danyang >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Anthony.J.Ruth.12 at nd.edu Thu Feb 22 22:39:05 2018 From: Anthony.J.Ruth.12 at nd.edu (Anthony Ruth) Date: Thu, 22 Feb 2018 23:39:05 -0500 Subject: [petsc-users] Inertia of Hermitian Matrix Message-ID: Hello, I am trying to diagonalize a hermitian matrix using the Eigen Problem Solver in SLEPc, I run into errors on calls to MatGetInertia() with complex hermitian matrices that I did not see with real matrices. The complex and real versions were done with separate PETSC_ARCH. I do not know if the problem is with the set up of the matrix or more generally a problem calculating the inertia for a complex matrix. The matrix is created by: ierr = MatSetType(A,MATAIJ);CHKERRQ(ierr); ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N);CHKERRQ(ierr); ierr = MatSetFromOptions(A);CHKERRQ(ierr); ierr = MatSetUp(A);CHKERRQ(ierr); ierr = MatSetOption(A,MAT_HERMITIAN,PETSC_TRUE);CHKERRQ(ierr); ierr = MatGetOwnershipRange(A,&first_row,&last_row);CHKERRQ(ierr); ierr = MatSetValues(A,m,idxm,n,idxn,data,INSERT_VALUES);CHKERRQ(ierr); ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); For a hermitian matrix, all the eigenvalues are real, so I believe it is possible to calculate an inertia by looking at the signs of the diagonal entries. I believe if it was complex but not hermitian, the complex eigenvalues calculating inertia would be difficult. Is there some problem with doing this through sparse iterative methods? Is there a certain place the matrix needs to be specified as hermitian besides upon assembly? Here is the error stack I see when running: Mat Object: 1 MPI processes type: seqaij row 0: (0, 0.) (1, 1. + 1. i) (2, 0.) (3, 0.) (4, 0.) row 1: (0, 1. - 1. i) (1, 0.) (2, 1. + 1. i) (3, 0.) (4, 0.) row 2: (0, 0.) (1, 1. - 1. i) (2, 0.) (3, 1. + 1. i) (4, 0.) row 3: (0, 0.) (1, 0.) (2, 1. - 1. i) (3, 0.) (4, 1. + 1. i) row 4: (0, 0.) (1, 0.) (2, 0.) (3, 1. - 1. i) (4, 0.) [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: No support for this operation for this object type [0]PETSC ERROR: Mat type mumps [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
[0]PETSC ERROR: Petsc Release Version 3.8.2, Nov, 09, 2017 [0]PETSC ERROR: Configure options --download-metis --download-mumps --download-parmetis --download-scalapack --with-scalar-type=complex [0]PETSC ERROR: #1 MatGetInertia() line 8416 in /home/anthony/DFTB+SIPs/petsc-3.8.2/src/mat/interface/matrix.c [0]PETSC ERROR: #2 EPSSliceGetInertia() line 333 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/ks-slice.c [0]PETSC ERROR: #3 EPSSetUp_KrylovSchur_Slice() line 459 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/ks-slice.c [0]PETSC ERROR: #4 EPSSetUp_KrylovSchur() line 146 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/krylovschur.c [0]PETSC ERROR: #5 EPSSetUp() line 165 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/interface/epssetup.c [0]PETSC ERROR: #6 EPSSliceGetEPS() line 298 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/ks-slice.c [0]PETSC ERROR: #7 EPSSetUp_KrylovSchur_Slice() line 408 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/ks-slice.c [0]PETSC ERROR: #8 EPSSetUp_KrylovSchur() line 146 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/krylovschur.c [0]PETSC ERROR: #9 EPSSetUp() line 165 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/interface/epssetup.c [0]PETSC ERROR: #10 SIPSolve() line 195 in /home/anthony/DFTB+SIPs/dftb-eig15/sips.c [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [0]PETSC ERROR: Invalid argument [0]PETSC ERROR: Wrong type of object: Parameter # 1 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. [0]PETSC ERROR: Petsc Release Version 3.8.2, Nov, 09, 2017 [0]PETSC ERROR: Configure options --download-metis --download-mumps --download-parmetis --download-scalapack --with-scalar-type=complex [0]PETSC ERROR: #11 EPSGetConverged() line 257 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/interface/epssolve.c [0]PETSC ERROR: #12 squareFromEPS() line 131 in /home/anthony/DFTB+SIPs/dftb-eig15/sips_square.c regards, Anthony Ruth Condensed Matter Theory University of Notre Dame -------------- next part -------------- An HTML attachment was scrubbed... URL: From danyang.su at gmail.com Fri Feb 23 00:33:12 2018 From: danyang.su at gmail.com (Danyang Su) Date: Thu, 22 Feb 2018 22:33:12 -0800 Subject: [petsc-users] Cell type for DMPlexCreateFromCellList Message-ID: <38103cfe-8087-2ed3-c55c-7b04619fde11@gmail.com> Hi All, What cell types does DMPlexCreateFromCellList support? I test this with triangle, tetrahedron and prism. Both triangle and tetrahedron work but prism mesh throws error saying "Cone size 6 not supported for dimension 3". Could anyone tell me all the supported cell types? Thanks, Danyang From jroman at dsic.upv.es Fri Feb 23 05:02:41 2018 From: jroman at dsic.upv.es (Jose E. Roman) Date: Fri, 23 Feb 2018 12:02:41 +0100 Subject: [petsc-users] Inertia of Hermitian Matrix In-Reply-To: References: Message-ID: <568A6F8D-B30E-4BC3-8870-403BBA4E0264@dsic.upv.es> Unfortunately MUMPS does not return inertia with complex matrices, it seems that it is not implemented. See the note "Usage with Complex Scalars" in section 3.4.5 of SLEPc users manual. You could use multi-communicators with as many partitions as MPI processes, so that each process performs a factorization sequentially with PETSc's Cholesky (which returns inertia for complex Hermitian matrices). 
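
A rough Fortran fragment of that setup, assuming A is the already-assembled Hermitian matrix, SLEPc has been initialized, and the spectral interval [lo,hi] is known; the names and values here are illustrative only:

      EPS            eps
      ST             st
      KSP            ksp
      PC             pc
      PetscMPIInt    nproc
      PetscInt       npart
      PetscReal      lo, hi
      PetscErrorCode ierr
      ! A: the user's assembled Hermitian matrix, declared and filled elsewhere

      call MPI_Comm_size(PETSC_COMM_WORLD,nproc,ierr)
      npart = nproc                    ! one sequential factorization per rank
      lo = -10.0; hi = 10.0            ! spectral interval, illustrative only

      call EPSCreate(PETSC_COMM_WORLD,eps,ierr)
      call EPSSetOperators(eps,A,PETSC_NULL_MAT,ierr)
      call EPSSetProblemType(eps,EPS_HEP,ierr)
      call EPSSetType(eps,EPSKRYLOVSCHUR,ierr)
      call EPSSetWhichEigenpairs(eps,EPS_ALL,ierr)
      call EPSSetInterval(eps,lo,hi,ierr)
      call EPSKrylovSchurSetPartitions(eps,npart,ierr)
      call EPSGetST(eps,st,ierr)
      call STSetType(st,STSINVERT,ierr)
      call STGetKSP(st,ksp,ierr)
      call KSPSetType(ksp,KSPPREONLY,ierr)
      call KSPGetPC(ksp,pc,ierr)
      call PCSetType(pc,PCCHOLESKY,ierr)
      call EPSSetFromOptions(eps,ierr)
      call EPSSolve(eps,ierr)

The same choices should be selectable at run time with -eps_interval, -eps_krylovschur_partitions, -st_type sinvert, -st_ksp_type preonly and -st_pc_type cholesky.
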
But this is useful only for matrices that are not too large. Jose > El 23 feb 2018, a las 5:39, Anthony Ruth escribi?: > > Hello, > > I am trying to diagonalize a hermitian matrix using the Eigen Problem Solver in SLEPc, I run into errors on calls to MatGetInertia() with complex hermitian matrices that I did not see with real matrices. The complex and real versions were done with separate PETSC_ARCH. I do not know if the problem is with the set up of the matrix or more generally a problem calculating the inertia for a complex matrix. > The matrix is created by: > > ierr = MatSetType(A,MATAIJ);CHKERRQ(ierr); > ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N);CHKERRQ(ierr); > ierr = MatSetFromOptions(A);CHKERRQ(ierr); > ierr = MatSetUp(A);CHKERRQ(ierr); > ierr = MatSetOption(A,MAT_HERMITIAN,PETSC_TRUE);CHKERRQ(ierr); > ierr = MatGetOwnershipRange(A,&first_row,&last_row);CHKERRQ(ierr); > ierr = MatSetValues(A,m,idxm,n,idxn,data,INSERT_VALUES);CHKERRQ(ierr); > ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > > For a hermitian matrix, all the eigenvalues are real, so I believe it is possible to calculate an inertia by looking at the signs of the diagonal entries. I believe if it was complex but not hermitian, the complex eigenvalues calculating inertia would be difficult. Is there some problem with doing this through sparse iterative methods? Is there a certain place the matrix needs to be specified as hermitian besides upon assembly? > > Here is the error stack I see when running: > > > Mat Object: 1 MPI processes > type: seqaij > row 0: (0, 0.) (1, 1. + 1. i) (2, 0.) (3, 0.) (4, 0.) > row 1: (0, 1. - 1. i) (1, 0.) (2, 1. + 1. i) (3, 0.) (4, 0.) > row 2: (0, 0.) (1, 1. - 1. i) (2, 0.) (3, 1. + 1. i) (4, 0.) > row 3: (0, 0.) (1, 0.) (2, 1. - 1. i) (3, 0.) (4, 1. + 1. i) > row 4: (0, 0.) (1, 0.) (2, 0.) (3, 1. - 1. i) (4, 0.) > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: No support for this operation for this object type > [0]PETSC ERROR: Mat type mumps > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. 
> [0]PETSC ERROR: Petsc Release Version 3.8.2, Nov, 09, 2017 > [0]PETSC ERROR: Configure options --download-metis --download-mumps --download-parmetis --download-scalapack --with-scalar-type=complex > [0]PETSC ERROR: #1 MatGetInertia() line 8416 in /home/anthony/DFTB+SIPs/petsc-3.8.2/src/mat/interface/matrix.c > [0]PETSC ERROR: #2 EPSSliceGetInertia() line 333 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/ks-slice.c > [0]PETSC ERROR: #3 EPSSetUp_KrylovSchur_Slice() line 459 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/ks-slice.c > [0]PETSC ERROR: #4 EPSSetUp_KrylovSchur() line 146 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/krylovschur.c > [0]PETSC ERROR: #5 EPSSetUp() line 165 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/interface/epssetup.c > [0]PETSC ERROR: #6 EPSSliceGetEPS() line 298 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/ks-slice.c > [0]PETSC ERROR: #7 EPSSetUp_KrylovSchur_Slice() line 408 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/ks-slice.c > [0]PETSC ERROR: #8 EPSSetUp_KrylovSchur() line 146 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/impls/krylov/krylovschur/krylovschur.c > [0]PETSC ERROR: #9 EPSSetUp() line 165 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/interface/epssetup.c > [0]PETSC ERROR: #10 SIPSolve() line 195 in /home/anthony/DFTB+SIPs/dftb-eig15/sips.c > [0]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- > [0]PETSC ERROR: Invalid argument > [0]PETSC ERROR: Wrong type of object: Parameter # 1 > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. > [0]PETSC ERROR: Petsc Release Version 3.8.2, Nov, 09, 2017 > [0]PETSC ERROR: Configure options --download-metis --download-mumps --download-parmetis --download-scalapack --with-scalar-type=complex > [0]PETSC ERROR: #11 EPSGetConverged() line 257 in /home/anthony/DFTB+SIPs/slepc-3.8.1/src/eps/interface/epssolve.c > [0]PETSC ERROR: #12 squareFromEPS() line 131 in /home/anthony/DFTB+SIPs/dftb-eig15/sips_square.c > > > > regards, > Anthony Ruth > Condensed Matter Theory > University of Notre Dame From knepley at gmail.com Fri Feb 23 05:04:51 2018 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 23 Feb 2018 06:04:51 -0500 Subject: [petsc-users] Cell type for DMPlexCreateFromCellList In-Reply-To: <38103cfe-8087-2ed3-c55c-7b04619fde11@gmail.com> References: <38103cfe-8087-2ed3-c55c-7b04619fde11@gmail.com> Message-ID: On Fri, Feb 23, 2018 at 1:33 AM, Danyang Su wrote: > Hi All, > > What cell types does DMPlexCreateFromCellList support? I test this with > triangle, tetrahedron and prism. Both triangle and tetrahedron work but > prism mesh throws error saying "Cone size 6 not supported for dimension 3". > > Could anyone tell me all the supported cell types? > The limitation occurs in two places: 1) Calculating edges and faces: I only know how to do this for tri, tet, quad, and hex. Give PETSC_FALSE to CreateFromCellList() and this error will go away. You could also provide the information for interpolating prisms, which would depend on how they are ordered when you read in cells. 2) Even if you read them in, I have no geometric routines for prisms. Thanks, Matt Thanks, > > Danyang > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From danyang.su at gmail.com Fri Feb 23 11:23:45 2018 From: danyang.su at gmail.com (Danyang Su) Date: Fri, 23 Feb 2018 09:23:45 -0800 Subject: [petsc-users] Cell type for DMPlexCreateFromCellList In-Reply-To: References: <38103cfe-8087-2ed3-c55c-7b04619fde11@gmail.com> Message-ID: <398e70ab-8fdd-74ea-0d80-8a2020291ca0@gmail.com> On 18-02-23 03:04 AM, Matthew Knepley wrote: > On Fri, Feb 23, 2018 at 1:33 AM, Danyang Su > wrote: > > Hi All, > > What cell types does DMPlexCreateFromCellList support? I test this > with triangle, tetrahedron and prism. Both triangle and > tetrahedron work but prism mesh throws error saying "Cone size 6 > not supported for dimension 3". > > Could anyone tell me all the supported cell types? > > > The limitation occurs in two places: > > ? 1) Calculating edges and faces: I only know how to do this for tri, > tet, quad, and hex. Give PETSC_FALSE to CreateFromCellList() and this > error will go away. Passing PETSC_FALSE to CreateFromCellList() works. There is no problem in the distributed nodes and cells. Is there any plan to add prism(wedge) in the near future? > ? ? ? You could also provide the information for interpolating prisms, > which would depend on how they are ordered when you read in cells. > > ? 2) Even if you read them in, I have no geometric routines for prisms. That's fine for me. I just need to create dmplex based on the given cell/vertex list and distribute over all the processors. Then follow the general routines that have been used in the structured grid. Thanks, Danyang > > ? Thanks, > > ? ? ?Matt > > Thanks, > > Danyang > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri Feb 23 11:32:32 2018 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 23 Feb 2018 12:32:32 -0500 Subject: [petsc-users] Cell type for DMPlexCreateFromCellList In-Reply-To: <398e70ab-8fdd-74ea-0d80-8a2020291ca0@gmail.com> References: <38103cfe-8087-2ed3-c55c-7b04619fde11@gmail.com> <398e70ab-8fdd-74ea-0d80-8a2020291ca0@gmail.com> Message-ID: On Fri, Feb 23, 2018 at 12:23 PM, Danyang Su wrote: > > On 18-02-23 03:04 AM, Matthew Knepley wrote: > > On Fri, Feb 23, 2018 at 1:33 AM, Danyang Su wrote: > >> Hi All, >> >> What cell types does DMPlexCreateFromCellList support? I test this with >> triangle, tetrahedron and prism. Both triangle and tetrahedron work but >> prism mesh throws error saying "Cone size 6 not supported for dimension 3". >> >> Could anyone tell me all the supported cell types? >> > > The limitation occurs in two places: > > 1) Calculating edges and faces: I only know how to do this for tri, tet, > quad, and hex. Give PETSC_FALSE to CreateFromCellList() and this error will > go away. > > Passing PETSC_FALSE to CreateFromCellList() works. There is no problem in > the distributed nodes and cells. Is there any plan to add prism(wedge) in > the near future? > Its not hard. We would just need to add one more function. However, I would want to make sure that it conformed to what most people want. I would like it to match the Firedrake definition for extruded meshes. 
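
A rough Fortran sketch of the no-interpolation route discussed above for a single wedge (prism) cell, assuming the Fortran binding mirrors the C prototype DMPlexCreateFromCellList(comm, dim, numCells, numVertices, numCorners, interpolate, cells, spaceDim, vertexCoords, dm); the cell, its coordinates and the array kinds are illustrative, not taken from this thread:

      program prism_plex
#include "petsc/finclude/petsc.h"
      use petsc
      implicit none
      DM               dm
      PetscInt         ndim, numCells, numVertices, numCorners
      PetscErrorCode   ierr
      integer          cells(6)       ! the C prototype takes const int[]
      double precision coords(18)     ! and const double[]

      call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
      ndim = 3; numCells = 1; numVertices = 6; numCorners = 6
      cells  = (/ 0, 1, 2, 3, 4, 5 /)           ! 0-based, bottom triangle then top
      coords = (/ 0d0,0d0,0d0, 1d0,0d0,0d0, 0d0,1d0,0d0,                 &
                  0d0,0d0,1d0, 1d0,0d0,1d0, 0d0,1d0,1d0 /)

      ! interpolate = PETSC_FALSE: no edges or faces are generated, so the
      ! cone size of 6 for a prism cell is accepted.
      call DMPlexCreateFromCellList(PETSC_COMM_WORLD,ndim,numCells,      &
           numVertices,numCorners,PETSC_FALSE,cells,ndim,coords,dm,ierr)

      call DMDestroy(dm,ierr)
      call PetscFinalize(ierr)
      end program prism_plex
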
> You could also provide the information for interpolating prisms, > which would depend on how they are ordered when you read in cells. > > 2) Even if you read them in, I have no geometric routines for prisms. > > That's fine for me. I just need to create dmplex based on the given > cell/vertex list and distribute over all the processors. Then follow the > general routines that have been used in the structured grid. > Okay, then that should work now. Thanks, Matt > Thanks, > > Danyang > > > Thanks, > > Matt > > Thanks, >> >> Danyang >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_g at berkeley.edu Mon Feb 26 09:41:13 2018 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Mon, 26 Feb 2018 16:41:13 +0100 Subject: [petsc-users] Check if allocated in Fortran Message-ID: <70cb8b81-ecf1-b1b8-72a9-63bb735d220c@berkeley.edu> I am trying to update some code that works in version 3.7.6 to version 3.8.3. What is the recommended way to check if a petsc type, such as Vec, has already been created? Current code looks like: Vec xvec if(xvec.eq.0) then????????? ! Check if xvec needs to be created ? call VecCreate(PETSC_COMM_WORLD, xvec, ierr) endif In PETSc 3.8.3 I am now getting a compiler error parkv.F(60): error #6355: This binary operation is invalid for this data type.?? [XVEC] ????? if(xvec.eq.0) then ---------^ From balay at mcs.anl.gov Mon Feb 26 12:11:41 2018 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 26 Feb 2018 12:11:41 -0600 Subject: [petsc-users] Check if allocated in Fortran In-Reply-To: <70cb8b81-ecf1-b1b8-72a9-63bb735d220c@berkeley.edu> References: <70cb8b81-ecf1-b1b8-72a9-63bb735d220c@berkeley.edu> Message-ID: Perhaps the following code? Vec x x = PETSC_NULL_VEC if (x .eq. PETSC_NULL_VEC) then print*,'vec is null' endif Satish On Mon, 26 Feb 2018, Sanjay Govindjee wrote: > I am trying to update some code that works in version 3.7.6 to version 3.8.3. > > What is the recommended way to check if a petsc type, such as Vec, has already > been created? > > Current code looks like: > > Vec xvec > > if(xvec.eq.0) then????????? ! Check if xvec needs to be created > ? call VecCreate(PETSC_COMM_WORLD, xvec, ierr) > endif > > In PETSc 3.8.3 I am now getting a compiler error > > parkv.F(60): error #6355: This binary operation is invalid for this data > type.?? [XVEC] > ????? if(xvec.eq.0) then > ---------^ > > From s_g at berkeley.edu Mon Feb 26 14:42:28 2018 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Mon, 26 Feb 2018 21:42:28 +0100 Subject: [petsc-users] Check if allocated in Fortran In-Reply-To: References: <70cb8b81-ecf1-b1b8-72a9-63bb735d220c@berkeley.edu> Message-ID: <96fe4d8a-1fdc-19b9-b726-2e3ae04a3358@berkeley.edu> Thanks.? That did the trick. On 2/26/18 7:11 PM, Satish Balay wrote: > Perhaps the following code? > > Vec x > > x = PETSC_NULL_VEC > if (x .eq. PETSC_NULL_VEC) then > print*,'vec is null' > endif > > Satish > > On Mon, 26 Feb 2018, Sanjay Govindjee wrote: > >> I am trying to update some code that works in version 3.7.6 to version 3.8.3. 
>> >> What is the recommended way to check if a petsc type, such as Vec, has already >> been created? >> >> Current code looks like: >> >> Vec xvec >> >> if(xvec.eq.0) then????????? ! Check if xvec needs to be created >> ? call VecCreate(PETSC_COMM_WORLD, xvec, ierr) >> endif >> >> In PETSc 3.8.3 I am now getting a compiler error >> >> parkv.F(60): error #6355: This binary operation is invalid for this data >> type.?? [XVEC] >> ????? if(xvec.eq.0) then >> ---------^ >> >> From hengjiew at uci.edu Mon Feb 26 16:18:22 2018 From: hengjiew at uci.edu (frank) Date: Mon, 26 Feb 2018 14:18:22 -0800 Subject: [petsc-users] Fortran interface of MatNullSpaceCreate Message-ID: Hello, I have a question of the Fortran interface of subroutine MatNullSpaceCreate. I tried to call the subroutine in the following form: Vec???????????????? :: dummyVec, dummyVecs(1) MatNullSpace :: nullspace INTEGER???????? :: ierr (a) call MatNullSpaceCreate( PETSC_COMM_WORLD, PETSC_TRUE, PETSC_NULL_INTEGER, dummyVec, nullspace, ierr) (b) call MatNullSpaceCreate( PETSC_COMM_WORLD, PETSC_TRUE, PETSC_NULL_INTEGER, dummyVecs, nullspace, ierr) (a) and (b) gave me the same error during compilation: no specific subroutine for the generic MatNullSpaceCreate. I am using the latest version of Petsc. I just did a "git pull" and re-build it. How can I call the subroutine ? In addition, I found two 'petscmat.h90' : petsc/include/petsc/finclude/ftn-auto/petscmat.h90 and petsc/src/mat/f90-mod/petscmat.h90. The former defines a subroutine MatNullSpaceCreate in the above form (b). The latter provides generic interface for both (a) and (b). I am not sure if this relates to the error I get. Thank you. Frank From bsmith at mcs.anl.gov Mon Feb 26 16:35:01 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Mon, 26 Feb 2018 22:35:01 +0000 Subject: [petsc-users] Fortran interface of MatNullSpaceCreate In-Reply-To: References: Message-ID: It should be > call MatNullSpaceCreate( PETSC_COMM_WORLD, PETSC_TRUE, 0, dummyVecs, nullspace, ierr) If this doesn't work please send your test code that I can compile myself and figure out what is going on. Barry > On Feb 26, 2018, at 4:18 PM, frank wrote: > > Hello, > > I have a question of the Fortran interface of subroutine MatNullSpaceCreate. > > I tried to call the subroutine in the following form: > > Vec :: dummyVec, dummyVecs(1) > MatNullSpace :: nullspace > INTEGER :: ierr > > (a) call MatNullSpaceCreate( PETSC_COMM_WORLD, PETSC_TRUE, PETSC_NULL_INTEGER, dummyVec, nullspace, ierr) > > (b) call MatNullSpaceCreate( PETSC_COMM_WORLD, PETSC_TRUE, PETSC_NULL_INTEGER, dummyVecs, nullspace, ierr) > > (a) and (b) gave me the same error during compilation: no specific subroutine for the generic MatNullSpaceCreate. > > I am using the latest version of Petsc. I just did a "git pull" and re-build it. > How can I call the subroutine ? > > In addition, I found two 'petscmat.h90' : petsc/include/petsc/finclude/ftn-auto/petscmat.h90 and petsc/src/mat/f90-mod/petscmat.h90. > The former defines a subroutine MatNullSpaceCreate in the above form (b). The latter provides generic interface for both (a) and (b). > I am not sure if this relates to the error I get. > > Thank you. > > Frank From bsmith at mcs.anl.gov Mon Feb 26 16:53:19 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) 
Date: Mon, 26 Feb 2018 22:53:19 +0000 Subject: [petsc-users] Fortran interface of MatNullSpaceCreate In-Reply-To: References: Message-ID: <524E54BA-4CBA-49F5-8C7C-389F4840385E@mcs.anl.gov> > On Feb 26, 2018, at 4:47 PM, frank wrote: > > Hello, > > It works after changing PETSC_NULL_INTEGER to 0. Thank you so much. > > Does this mean the 3rd argument is a Fortran integer rather than PestcInt? No it is actually a PetscInt so technically you should declare PetscInt zero zero = 0 and pass in zero. The reason PETSC_NULL_INTEGER doesn't work is 1) it should not be used as a synonym for 0; it is a "pointer" to null not zero 2) PETSC_NULL_INTEGER is actually declared as an array of size 1 (this is why you get the confusing message about no specific subroutine that matches). PETSC_NULL_INTEGER can only be passed where PETSc expects and integer array, not an integer scalar. Barry > > Regards, > > Frank > > On 02/26/2018 02:35 PM, Smith, Barry F. wrote: >> It should be >> >>> call MatNullSpaceCreate( PETSC_COMM_WORLD, PETSC_TRUE, 0, dummyVecs, nullspace, ierr) >> If this doesn't work please send your test code that I can compile myself and figure out what is going on. >> >> Barry >> >> >> >> >>> On Feb 26, 2018, at 4:18 PM, frank wrote: >>> >>> Hello, >>> >>> I have a question of the Fortran interface of subroutine MatNullSpaceCreate. >>> >>> I tried to call the subroutine in the following form: >>> >>> Vec :: dummyVec, dummyVecs(1) >>> MatNullSpace :: nullspace >>> INTEGER :: ierr >>> >>> (a) call MatNullSpaceCreate( PETSC_COMM_WORLD, PETSC_TRUE, PETSC_NULL_INTEGER, dummyVec, nullspace, ierr) >>> >>> (b) call MatNullSpaceCreate( PETSC_COMM_WORLD, PETSC_TRUE, PETSC_NULL_INTEGER, dummyVecs, nullspace, ierr) >>> >>> (a) and (b) gave me the same error during compilation: no specific subroutine for the generic MatNullSpaceCreate. >>> >>> I am using the latest version of Petsc. I just did a "git pull" and re-build it. >>> How can I call the subroutine ? >>> >>> In addition, I found two 'petscmat.h90' : petsc/include/petsc/finclude/ftn-auto/petscmat.h90 and petsc/src/mat/f90-mod/petscmat.h90. >>> The former defines a subroutine MatNullSpaceCreate in the above form (b). The latter provides generic interface for both (a) and (b). >>> I am not sure if this relates to the error I get. >>> >>> Thank you. >>> >>> Frank > From s_g at berkeley.edu Mon Feb 26 17:38:58 2018 From: s_g at berkeley.edu (Sanjay Govindjee) Date: Tue, 27 Feb 2018 00:38:58 +0100 Subject: [petsc-users] Fortran interface of MatNullSpaceCreate In-Reply-To: <524E54BA-4CBA-49F5-8C7C-389F4840385E@mcs.anl.gov> References: <524E54BA-4CBA-49F5-8C7C-389F4840385E@mcs.anl.gov> Message-ID: <74caf960-7c72-643a-d617-ba2a3b1d8aef@berkeley.edu> If PETSc is expecting an integer, I presume it is fine to pass PETSC_NULL_INTEGER(1)? On 2/26/18 11:53 PM, Smith, Barry F. wrote: > PETSC_NULL_INTEGER is actually declared as an array of size 1 (this is why you get the confusing message about no specific subroutine that matches). PETSC_NULL_INTEGER can only be passed where PETSc expects and integer array, not an integer scalar. 
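Putting the two points above together, a minimal Fortran sketch of the corrected call (untested; names are illustrative and the usual PETSc includes/modules are assumed):

      Vec            :: dummyVecs(1)
      MatNullSpace   :: nullspace
      PetscInt       :: nvecs
      PetscErrorCode :: ierr

      nvecs = 0    ! a PetscInt holding zero, not the literal 0 and not PETSC_NULL_INTEGER
      call MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, nvecs, dummyVecs, &
                              nullspace, ierr)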
> > Barry -- ------------------------------------------------------------------- Sanjay Govindjee, PhD, PE Horace, Dorothy, and Katherine Johnson Professor in Engineering 779 Davis Hall University of California Berkeley, CA 94720-1710 Voice: +1 510 642 6060 FAX: +1 510 643 5264 s_g at berkeley.edu http://faculty.ce.berkeley.edu/sanjay ------------------------------------------------------------------- Books: Engineering Mechanics of Deformable Solids: A Presentation with Exercises http://ukcatalogue.oup.com/product/9780199651641.do http://amzn.com/0199651647 Engineering Mechanics 3 (Dynamics) 2nd Edition http://www.springer.com/978-3-642-53711-0 http://amzn.com/3642537111 Engineering Mechanics 3, Supplementary Problems: Dynamics http://www.amzn.com/B00SOXN8JU ----------------------------------------------- From bsmith at mcs.anl.gov Mon Feb 26 18:07:46 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 27 Feb 2018 00:07:46 +0000 Subject: [petsc-users] Fortran interface of MatNullSpaceCreate In-Reply-To: <74caf960-7c72-643a-d617-ba2a3b1d8aef@berkeley.edu> References: <524E54BA-4CBA-49F5-8C7C-389F4840385E@mcs.anl.gov> <74caf960-7c72-643a-d617-ba2a3b1d8aef@berkeley.edu> Message-ID: > On Feb 26, 2018, at 5:38 PM, Sanjay Govindjee wrote: > > If PETSc is expecting an integer, I presume it is fine to pass PETSC_NULL_INTEGER(1)? Not if the function has an interface definition. Fortran interfaces distinguish between passing an integer a and an integer a(1) so you can only use PETSC_NULL_INTEGER if it expects an array. > > On 2/26/18 11:53 PM, Smith, Barry F. wrote: >> PETSC_NULL_INTEGER is actually declared as an array of size 1 (this is why you get the confusing message about no specific subroutine that matches). PETSC_NULL_INTEGER can only be passed where PETSc expects and integer array, not an integer scalar. >> >> Barry > > -- > ------------------------------------------------------------------- > Sanjay Govindjee, PhD, PE > Horace, Dorothy, and Katherine Johnson Professor in Engineering > > 779 Davis Hall > University of California > Berkeley, CA 94720-1710 > > Voice: +1 510 642 6060 > FAX: +1 510 643 5264 > s_g at berkeley.edu > http://faculty.ce.berkeley.edu/sanjay > ------------------------------------------------------------------- > > Books: > > Engineering Mechanics of Deformable > Solids: A Presentation with Exercises > http://ukcatalogue.oup.com/product/9780199651641.do > http://amzn.com/0199651647 > > Engineering Mechanics 3 (Dynamics) 2nd Edition > http://www.springer.com/978-3-642-53711-0 > http://amzn.com/3642537111 > > Engineering Mechanics 3, Supplementary Problems: Dynamics > http://www.amzn.com/B00SOXN8JU > > ----------------------------------------------- > From zonexo at gmail.com Tue Feb 27 07:01:49 2018 From: zonexo at gmail.com (TAY wee-beng) Date: Tue, 27 Feb 2018 21:01:49 +0800 Subject: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3 In-Reply-To: References: <34377631-9561-4B61-A350-6F597356C30F@anl.gov> <52FBAFFC-573A-42A3-9E09-3F2532F27A62@dsic.upv.es> <7285cdbc-af1b-f116-d11e-72b9e7aba138@gmail.com> <6DF75825-0B14-4BE9-8CF6-563418F9CDA4@mcs.anl.gov> <4775e526-7824-f601-1c66-fdd82c5395f9@gmail.com> Message-ID: On 23/2/2018 1:40 AM, Smith, Barry F. wrote: > First run under valgrind to look for memory issues. > > Second I would change 1 thing at a time. So use the intel 2017 compiler with PETSc 2.8.3 so the only change is your needed changes to match 2.8.3 and does not include a compiler change. 
> > I am not sure what numbers you are printing below but often changing optimization levels can and will change numerical values slightly so change in numerical values may not indicate anything is wrong (or it may indicate something is wrong depending on how different the numerical values are). > > Barry > Hi, I realised finally that it is due to compiling with -O3 and my prev PETSc (3.7.6) build, but using -O2 in the current build But just to confirm, is it ok to compile with -O3 for PETSc? Thanks. >> On Feb 21, 2018, at 10:23 PM, TAY wee-beng wrote: >> >> >> On 21/2/2018 11:44 AM, Smith, Barry F. wrote: >>> Did you follow the directions in the changes file for 3.8? >>> >>>
  • Replace calls to DMDACreateXd() with DMDACreateXd(), [DMSetFromOptions()] DMSetUp()
  • DMDACreateXd() no longer can take negative values for dimensions; instead pass positive values and call DMSetFromOptions() immediately after
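For reference, a minimal Fortran sketch of the 3.8 sequence the two items above describe (untested; the grid sizes, dof and stencil width are only illustrative):

      DM             :: da
      PetscErrorCode :: ierr

      call DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, &
           DMDA_STENCIL_STAR, 64, 64, PETSC_DECIDE, PETSC_DECIDE, 1, 1,       &
           PETSC_NULL_INTEGER, PETSC_NULL_INTEGER, da, ierr)
      call DMSetFromOptions(da, ierr)
      call DMSetUp(da, ierr)   ! without this, calls such as DMCreateGlobalVector() fail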
  • >>> >>> I suspect you are not calling DMSetUp() and this is causing the problem. >>> >>> Barry >> Ops sorry, indeed I didn't change that part. Got it compiled now. >> >> However, I have got a new problem. Previously, I was using Intel 2016 with PETSc 3.7.6. During compile, I used -O3 for all modules except one, which will give error (due to DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90). Hence, I need to use -O1. >> >> Now, I'm using Intel 2018 with PETSc 3.8.3 and I got the error: >> >> M Diverged but why?, time = 2 >> reason = -9 >> >> I tried to change all *.F90 from using -O3 to -O1 and although there's no diverged err printed, my values are different: >> >> 1 0.01600000 0.46655767 0.46310378 1.42427154 -0.81598016E+02 -0.11854431E-01 0.42046197E+06 >> 2 0.00956350 0.67395693 0.64698638 1.44166606 -0.12828928E+03 0.12179394E-01 0.41961824E+06 >> >> vs >> >> 1 0.01600000 0.49096543 0.46259333 1.41828130 -0.81561221E+02 -0.16146574E-01 0.42046335E+06 >> 2 0.00956310 0.68342495 0.63682485 1.44353571 -0.12813998E+03 0.24226242E+00 0.41962121E+06 >> >> The latter values are obtained using the debug built and they compared correctly with another cluster, which use GNU. >> >> What going on and how should I troubleshoot? >> Thanks >>> >>>> On Feb 20, 2018, at 7:35 PM, TAY wee-beng wrote: >>>> >>>> >>>> On 21/2/2018 10:47 AM, Smith, Barry F. wrote: >>>>> Try setting >>>>> >>>>> u_global = tVec(1) >>>>> >>>>> immediately before the call to DMCreateGlobalVector() >>>>> >>>>> >>>> Hi, >>>> >>>> I added the line in but still got the same error below. Btw, my code is organised as: >>>> >>>> module global_data >>>> >>>> #include "petsc/finclude/petsc.h" >>>> use petsc >>>> use kdtree2_module >>>> implicit none >>>> save >>>> ... >>>> Vec u_local,u_global ... >>>> ... >>>> contains >>>> >>>> subroutine allo_var >>>> ... >>>> u_global = tVec(1) >>>> call DMCreateGlobalVector(da_u,u_global,ierr) >>>> ... >>>> >>>> >>>> >>>> >>>> [0]PETSC ERROR: --------------------- Error Message ---------------------------- >>>> ---------------------------------- >>>> [0]PETSC ERROR: Null argument, when expecting valid pointer >>>> [0]PETSC ERROR: Null Object: Parameter # 2 >>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou >>>> ble shooting. >>>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >>>> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. 
>>>> 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 11:18:20 2018 >>>> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo >>>> rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri >>>> ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x >>>> 86)/Microsoft SDKs/MPI/Include/x64]" --with-mpi-mpiexec="/cygdrive/c/Program Fil >>>> es/Microsoft MPI/Bin/mpiexec.exe" --with-debugging=1 --with-file-create-pause=1 >>>> --prefix=/cygdrive/c/wtay/Lib/petsc-3.8.3_win64_msmpi_vs2008 --with-mpi-lib="[/c >>>> ygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib,/cygdrive/ >>>> c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib]" --with-shared-libra >>>> ries=0 >>>> [0]PETSC ERROR: #1 VecSetLocalToGlobalMapping() line 78 in C:\Source\PETSC-~2.3\ >>>> src\vec\vec\INTERF~1\vector.c >>>> [0]PETSC ERROR: #2 DMCreateGlobalVector_DA() line 41 in C:\Source\PETSC-~2.3\src >>>> \dm\impls\da\dadist.c >>>> [0]PETSC ERROR: #3 DMCreateGlobalVector() line 844 in C:\Source\PETSC-~2.3\src\d >>>> m\INTERF~1\dm.c >>>> >>>> Thanks. >>>>>> On Feb 20, 2018, at 6:40 PM, TAY wee-beng wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to debug step by step. I got into problem when calling: >>>>>> >>>>>> call DMCreateGlobalVector(da_u,u_global,ierr) >>>>>> >>>>>> The error is: >>>>>> >>>>>> [0]PETSC ERROR: --------------------- Error Message ---------------------------- >>>>>> ---------------------------------- >>>>>> [0]PETSC ERROR: Null argument, when expecting valid pointer >>>>>> [0]PETSC ERROR: Null Object: Parameter # 2 >>>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou >>>>>> ble shooting. >>>>>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >>>>>> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. >>>>>> 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 10:20:20 2018 >>>>>> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo >>>>>> rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri >>>>>> ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x.... >>>>>> >>>>>> But all I changed is from: >>>>>> >>>>>> module global_data >>>>>> #include "petsc/finclude/petsc.h" >>>>>> use petsc >>>>>> use kdtree2_module >>>>>> implicit none >>>>>> save >>>>>> !grid variables >>>>>> >>>>>> integer :: size_x,s.... >>>>>> >>>>>> ... >>>>>> >>>>>> to >>>>>> >>>>>> module global_data >>>>>> use kdtree2_module >>>>>> implicit none >>>>>> save >>>>>> #include "petsc/finclude/petsc.h90" >>>>>> !grid variables >>>>>> integer :: size_x,s... >>>>>> >>>>>> ... >>>>>> >>>>>> da_u, u_global were declared thru: >>>>>> >>>>>> DM da_u,da_v,... >>>>>> DM da_cu_types ... >>>>>> Vec u_local,u_global,v_local... >>>>>> >>>>>> So what could be the problem? >>>>>> >>>>>> >>>>>> Thank you very much. >>>>>> >>>>>> Yours sincerely, >>>>>> >>>>>> ================================================ >>>>>> TAY Wee-Beng (Zheng Weiming) ??? >>>>>> Personal research webpage: http://tayweebeng.wixsite.com/website >>>>>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >>>>>> linkedin: www.linkedin.com/in/tay-weebeng >>>>>> ================================================ >>>>>> >>>>>> On 20/2/2018 10:46 PM, Jose E. 
Roman wrote: >>>>>>> Probably the first error is produced by using a variable (mpi_comm) with the same name as an MPI type. >>>>>>> >>>>>>> The second error I guess is due to variable tvec, since a Fortran type tVec is now being defined in src/vec/f90-mod/petscvec.h >>>>>>> >>>>>>> Jose >>>>>>> >>>>>>> >>>>>>>> El 20 feb 2018, a las 15:35, Smith, Barry F. escribi?: >>>>>>>> >>>>>>>> >>>>>>>> Please run a clean compile of everything and cut and paste all the output. This will make it much easier to debug than trying to understand your snippets of what is going wrong. >>>>>>>> >>>>>>>>> On Feb 20, 2018, at 1:56 AM, TAY Wee Beng wrote: >>>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I was previously using PETSc 3.7.6 on different clusters with both Intel >>>>>>>>> Fortran and GNU Fortran. After upgrading, I met some problems when >>>>>>>>> trying to compile: >>>>>>>>> >>>>>>>>> On Intel Fortran: >>>>>>>>> >>>>>>>>> Previously, I was using: >>>>>>>>> >>>>>>>>> #include "petsc/finclude/petsc.h90" >>>>>>>>> >>>>>>>>> in *.F90 when requires the use of PETSc >>>>>>>>> >>>>>>>>> I read in the change log that h90 is no longer there and so I replaced >>>>>>>>> with #include "petsc/finclude/petsc.h" >>>>>>>>> >>>>>>>>> It worked. But I also have some *.F90 which do not use PETSc. However, >>>>>>>>> they use some modules which uses PETSc. >>>>>>>>> >>>>>>>>> Now I can't compile them. The error is : >>>>>>>>> >>>>>>>>> math_routine.f90(3): error #7002: Error in opening the compiled module >>>>>>>>> file. Check INCLUDE paths. [PETSC] >>>>>>>>> use mpi_subroutines >>>>>>>>> >>>>>>>>> mpi_subroutines is a module which uses PETSc, and it compiled w/o problem. >>>>>>>>> >>>>>>>>> The solution is that I have to compile e.g. math_routine.F90 as if they >>>>>>>>> use PETSc, by including PETSc include and lib files. >>>>>>>>> >>>>>>>>> May I know why this is so? It was not necessary before. >>>>>>>>> >>>>>>>>> Anyway, it managed to compile until it reached hypre.F90. >>>>>>>>> >>>>>>>>> Previously, due to some bugs, I have to compile hypre with the -r8 >>>>>>>>> option. Also, I have to use: >>>>>>>>> >>>>>>>>> integer(8) mpi_comm >>>>>>>>> >>>>>>>>> mpi_comm = MPI_COMM_WORLD >>>>>>>>> >>>>>>>>> to make my codes work with HYPRE. >>>>>>>>> >>>>>>>>> But now, compiling gives the error: >>>>>>>>> >>>>>>>>> hypre.F90(11): error #6401: The attributes of this name conflict with >>>>>>>>> those made accessible by a USE statement. [MPI_COMM] >>>>>>>>> integer(8) mpi_comm >>>>>>>>> --------------------------------------^ >>>>>>>>> hypre.F90(84): error #6478: A type-name must not be used as a >>>>>>>>> variable. [MPI_COMM] >>>>>>>>> mpi_comm = MPI_COMM_WORLD >>>>>>>>> ----^ >>>>>>>>> hypre.F90(84): error #6303: The assignment operation or the binary >>>>>>>>> expression operation is invalid for the data types of the two >>>>>>>>> operands. [1140850688] >>>>>>>>> mpi_comm = MPI_COMM_WORLD >>>>>>>>> ---------------^ >>>>>>>>> hypre.F90(100): error #6478: A type-name must not be used as a >>>>>>>>> variable. [MPI_COMM] >>>>>>>>> call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr) >>>>>>>>> ... >>>>>>>>> >>>>>>>>> What's actually happening? Why can't I compile now? >>>>>>>>> >>>>>>>>> On GNU gfortran: >>>>>>>>> >>>>>>>>> I tried to use similar tactics as above here. 
However, when compiling >>>>>>>>> math_routine.F90, I got the error: >>>>>>>>> >>>>>>>>> math_routine.F90:1333:21: >>>>>>>>> >>>>>>>>> call subb(orig,vert1,tvec) >>>>>>>>> 1 >>>>>>>>> Error: Invalid procedure argument at (1) >>>>>>>>> math_routine.F90:1339:18: >>>>>>>>> >>>>>>>>> qvec = cross_pdt2(tvec,edge1) >>>>>>>>> 1 >>>>>>>>> Error: Invalid procedure argument at (1) >>>>>>>>> math_routine.F90:1345:21: >>>>>>>>> >>>>>>>>> uu = dot_product(tvec,pvec) >>>>>>>>> 1 >>>>>>>>> Error: ?vector_a? argument of ?dot_product? intrinsic at (1) must be >>>>>>>>> numeric or LOGICAL >>>>>>>>> math_routine.F90:1371:21: >>>>>>>>> >>>>>>>>> uu = dot_product(tvec,pvec) >>>>>>>>> >>>>>>>>> These errors were not present before. My variables are mostly vectors: >>>>>>>>> >>>>>>>>> real(8), intent(in) :: >>>>>>>>> orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3) >>>>>>>>> >>>>>>>>> real(8) :: uu,vv,dir(3) >>>>>>>>> >>>>>>>>> real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t >>>>>>>>> >>>>>>>>> I wonder what happened? >>>>>>>>> >>>>>>>>> Please advice. >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Thank you very much. >>>>>>>>> >>>>>>>>> Yours sincerely, >>>>>>>>> >>>>>>>>> ================================================ >>>>>>>>> TAY Wee-Beng ??? >>>>>>>>> Research Scientist >>>>>>>>> Experimental AeroScience Group >>>>>>>>> Temasek Laboratories >>>>>>>>> National University of Singapore >>>>>>>>> T-Lab Building >>>>>>>>> 5A, Engineering Drive 1, #02-02 >>>>>>>>> Singapore 117411 >>>>>>>>> Phone: +65 65167330 >>>>>>>>> E-mail: tsltaywb at nus.edu.sg >>>>>>>>> http://www.temasek-labs.nus.edu.sg/program/program_aeroexperimental_tsltaywb.php >>>>>>>>> Personal research webpage: http://tayweebeng.wixsite.com/website >>>>>>>>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >>>>>>>>> linkedin: www.linkedin.com/in/tay-weebeng >>>>>>>>> ================================================ >>>>>>>>> >>>>>>>>> >>>>>>>>> ________________________________ >>>>>>>>> >>>>>>>>> Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you. From bsmith at mcs.anl.gov Tue Feb 27 09:54:27 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Tue, 27 Feb 2018 15:54:27 +0000 Subject: [petsc-users] Compiling problem after upgrading to PETSc 3.8.3 In-Reply-To: References: <34377631-9561-4B61-A350-6F597356C30F@anl.gov> <52FBAFFC-573A-42A3-9E09-3F2532F27A62@dsic.upv.es> <7285cdbc-af1b-f116-d11e-72b9e7aba138@gmail.com> <6DF75825-0B14-4BE9-8CF6-563418F9CDA4@mcs.anl.gov> <4775e526-7824-f601-1c66-fdd82c5395f9@gmail.com> Message-ID: What optimization level that generates correct code depends on the compiler, its version and even sub-version. We stumble across compilers that generate bad code with optimization fairly often. If -O3 works, great but often -O2 is the best one can use. Barry > On Feb 27, 2018, at 7:01 AM, TAY wee-beng wrote: > > > On 23/2/2018 1:40 AM, Smith, Barry F. wrote: >> First run under valgrind to look for memory issues. >> >> Second I would change 1 thing at a time. So use the intel 2017 compiler with PETSc 2.8.3 so the only change is your needed changes to match 2.8.3 and does not include a compiler change. 
>> >> I am not sure what numbers you are printing below but often changing optimization levels can and will change numerical values slightly so change in numerical values may not indicate anything is wrong (or it may indicate something is wrong depending on how different the numerical values are). >> >> Barry >> > Hi, > > I realised finally that it is due to compiling with -O3 and my prev PETSc (3.7.6) build, but using -O2 in the current build > > But just to confirm, is it ok to compile with -O3 for PETSc? > > Thanks. >>> On Feb 21, 2018, at 10:23 PM, TAY wee-beng wrote: >>> >>> >>> On 21/2/2018 11:44 AM, Smith, Barry F. wrote: >>>> Did you follow the directions in the changes file for 3.8? >>>> >>>>
  • Replace calls to DMDACreateXd() with DMDACreateXd(), [DMSetFromOptions()] DMSetUp()
  • DMDACreateXd() no longer can take negative values for dimensions; instead pass positive values and call DMSetFromOptions() immediately after
  • >>>> >>>> I suspect you are not calling DMSetUp() and this is causing the problem. >>>> >>>> Barry >>> Ops sorry, indeed I didn't change that part. Got it compiled now. >>> >>> However, I have got a new problem. Previously, I was using Intel 2016 with PETSc 3.7.6. During compile, I used -O3 for all modules except one, which will give error (due to DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90). Hence, I need to use -O1. >>> >>> Now, I'm using Intel 2018 with PETSc 3.8.3 and I got the error: >>> >>> M Diverged but why?, time = 2 >>> reason = -9 >>> >>> I tried to change all *.F90 from using -O3 to -O1 and although there's no diverged err printed, my values are different: >>> >>> 1 0.01600000 0.46655767 0.46310378 1.42427154 -0.81598016E+02 -0.11854431E-01 0.42046197E+06 >>> 2 0.00956350 0.67395693 0.64698638 1.44166606 -0.12828928E+03 0.12179394E-01 0.41961824E+06 >>> >>> vs >>> >>> 1 0.01600000 0.49096543 0.46259333 1.41828130 -0.81561221E+02 -0.16146574E-01 0.42046335E+06 >>> 2 0.00956310 0.68342495 0.63682485 1.44353571 -0.12813998E+03 0.24226242E+00 0.41962121E+06 >>> >>> The latter values are obtained using the debug built and they compared correctly with another cluster, which use GNU. >>> >>> What going on and how should I troubleshoot? >>> Thanks >>>> >>>>> On Feb 20, 2018, at 7:35 PM, TAY wee-beng wrote: >>>>> >>>>> >>>>> On 21/2/2018 10:47 AM, Smith, Barry F. wrote: >>>>>> Try setting >>>>>> >>>>>> u_global = tVec(1) >>>>>> >>>>>> immediately before the call to DMCreateGlobalVector() >>>>>> >>>>>> >>>>> Hi, >>>>> >>>>> I added the line in but still got the same error below. Btw, my code is organised as: >>>>> >>>>> module global_data >>>>> >>>>> #include "petsc/finclude/petsc.h" >>>>> use petsc >>>>> use kdtree2_module >>>>> implicit none >>>>> save >>>>> ... >>>>> Vec u_local,u_global ... >>>>> ... >>>>> contains >>>>> >>>>> subroutine allo_var >>>>> ... >>>>> u_global = tVec(1) >>>>> call DMCreateGlobalVector(da_u,u_global,ierr) >>>>> ... >>>>> >>>>> >>>>> >>>>> >>>>> [0]PETSC ERROR: --------------------- Error Message ---------------------------- >>>>> ---------------------------------- >>>>> [0]PETSC ERROR: Null argument, when expecting valid pointer >>>>> [0]PETSC ERROR: Null Object: Parameter # 2 >>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou >>>>> ble shooting. >>>>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >>>>> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. 
>>>>> 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 11:18:20 2018 >>>>> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo >>>>> rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri >>>>> ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x >>>>> 86)/Microsoft SDKs/MPI/Include/x64]" --with-mpi-mpiexec="/cygdrive/c/Program Fil >>>>> es/Microsoft MPI/Bin/mpiexec.exe" --with-debugging=1 --with-file-create-pause=1 >>>>> --prefix=/cygdrive/c/wtay/Lib/petsc-3.8.3_win64_msmpi_vs2008 --with-mpi-lib="[/c >>>>> ygdrive/c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpifec.lib,/cygdrive/ >>>>> c/Program Files (x86)/Microsoft SDKs/MPI/Lib/x64/msmpi.lib]" --with-shared-libra >>>>> ries=0 >>>>> [0]PETSC ERROR: #1 VecSetLocalToGlobalMapping() line 78 in C:\Source\PETSC-~2.3\ >>>>> src\vec\vec\INTERF~1\vector.c >>>>> [0]PETSC ERROR: #2 DMCreateGlobalVector_DA() line 41 in C:\Source\PETSC-~2.3\src >>>>> \dm\impls\da\dadist.c >>>>> [0]PETSC ERROR: #3 DMCreateGlobalVector() line 844 in C:\Source\PETSC-~2.3\src\d >>>>> m\INTERF~1\dm.c >>>>> >>>>> Thanks. >>>>>>> On Feb 20, 2018, at 6:40 PM, TAY wee-beng wrote: >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> Indeed, replacing tvec with t_vec solves the problem. Now I'm trying to debug step by step. I got into problem when calling: >>>>>>> >>>>>>> call DMCreateGlobalVector(da_u,u_global,ierr) >>>>>>> >>>>>>> The error is: >>>>>>> >>>>>>> [0]PETSC ERROR: --------------------- Error Message ---------------------------- >>>>>>> ---------------------------------- >>>>>>> [0]PETSC ERROR: Null argument, when expecting valid pointer >>>>>>> [0]PETSC ERROR: Null Object: Parameter # 2 >>>>>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trou >>>>>>> ble shooting. >>>>>>> [0]PETSC ERROR: Petsc Release Version 3.8.3, Dec, 09, 2017 >>>>>>> [0]PETSC ERROR: C:\Obj_tmp\ibm3d_IIB_mpi\Debug\ibm3d_IIB_mpi.exe on a petsc-3.8. >>>>>>> 3_win64_msmpi_vs2008 named 1C3YYY1-PC by tsltaywb Wed Feb 21 10:20:20 2018 >>>>>>> [0]PETSC ERROR: Configure options --with-cc="win32fe icl" --with-fc="win32fe ifo >>>>>>> rt" --with-cxx="win32fe icl" --download-fblaslapack --with-mpi-include="[/cygdri >>>>>>> ve/c/Program Files (x86)/Microsoft SDKs/MPI/Include,/cygdrive/c/Program Files (x.... >>>>>>> >>>>>>> But all I changed is from: >>>>>>> >>>>>>> module global_data >>>>>>> #include "petsc/finclude/petsc.h" >>>>>>> use petsc >>>>>>> use kdtree2_module >>>>>>> implicit none >>>>>>> save >>>>>>> !grid variables >>>>>>> >>>>>>> integer :: size_x,s.... >>>>>>> >>>>>>> ... >>>>>>> >>>>>>> to >>>>>>> >>>>>>> module global_data >>>>>>> use kdtree2_module >>>>>>> implicit none >>>>>>> save >>>>>>> #include "petsc/finclude/petsc.h90" >>>>>>> !grid variables >>>>>>> integer :: size_x,s... >>>>>>> >>>>>>> ... >>>>>>> >>>>>>> da_u, u_global were declared thru: >>>>>>> >>>>>>> DM da_u,da_v,... >>>>>>> DM da_cu_types ... >>>>>>> Vec u_local,u_global,v_local... >>>>>>> >>>>>>> So what could be the problem? >>>>>>> >>>>>>> >>>>>>> Thank you very much. >>>>>>> >>>>>>> Yours sincerely, >>>>>>> >>>>>>> ================================================ >>>>>>> TAY Wee-Beng (Zheng Weiming) ??? 
>>>>>>> Personal research webpage: http://tayweebeng.wixsite.com/website >>>>>>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >>>>>>> linkedin: www.linkedin.com/in/tay-weebeng >>>>>>> ================================================ >>>>>>> >>>>>>> On 20/2/2018 10:46 PM, Jose E. Roman wrote: >>>>>>>> Probably the first error is produced by using a variable (mpi_comm) with the same name as an MPI type. >>>>>>>> >>>>>>>> The second error I guess is due to variable tvec, since a Fortran type tVec is now being defined in src/vec/f90-mod/petscvec.h >>>>>>>> >>>>>>>> Jose >>>>>>>> >>>>>>>> >>>>>>>>> El 20 feb 2018, a las 15:35, Smith, Barry F. escribi?: >>>>>>>>> >>>>>>>>> >>>>>>>>> Please run a clean compile of everything and cut and paste all the output. This will make it much easier to debug than trying to understand your snippets of what is going wrong. >>>>>>>>> >>>>>>>>>> On Feb 20, 2018, at 1:56 AM, TAY Wee Beng wrote: >>>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> I was previously using PETSc 3.7.6 on different clusters with both Intel >>>>>>>>>> Fortran and GNU Fortran. After upgrading, I met some problems when >>>>>>>>>> trying to compile: >>>>>>>>>> >>>>>>>>>> On Intel Fortran: >>>>>>>>>> >>>>>>>>>> Previously, I was using: >>>>>>>>>> >>>>>>>>>> #include "petsc/finclude/petsc.h90" >>>>>>>>>> >>>>>>>>>> in *.F90 when requires the use of PETSc >>>>>>>>>> >>>>>>>>>> I read in the change log that h90 is no longer there and so I replaced >>>>>>>>>> with #include "petsc/finclude/petsc.h" >>>>>>>>>> >>>>>>>>>> It worked. But I also have some *.F90 which do not use PETSc. However, >>>>>>>>>> they use some modules which uses PETSc. >>>>>>>>>> >>>>>>>>>> Now I can't compile them. The error is : >>>>>>>>>> >>>>>>>>>> math_routine.f90(3): error #7002: Error in opening the compiled module >>>>>>>>>> file. Check INCLUDE paths. [PETSC] >>>>>>>>>> use mpi_subroutines >>>>>>>>>> >>>>>>>>>> mpi_subroutines is a module which uses PETSc, and it compiled w/o problem. >>>>>>>>>> >>>>>>>>>> The solution is that I have to compile e.g. math_routine.F90 as if they >>>>>>>>>> use PETSc, by including PETSc include and lib files. >>>>>>>>>> >>>>>>>>>> May I know why this is so? It was not necessary before. >>>>>>>>>> >>>>>>>>>> Anyway, it managed to compile until it reached hypre.F90. >>>>>>>>>> >>>>>>>>>> Previously, due to some bugs, I have to compile hypre with the -r8 >>>>>>>>>> option. Also, I have to use: >>>>>>>>>> >>>>>>>>>> integer(8) mpi_comm >>>>>>>>>> >>>>>>>>>> mpi_comm = MPI_COMM_WORLD >>>>>>>>>> >>>>>>>>>> to make my codes work with HYPRE. >>>>>>>>>> >>>>>>>>>> But now, compiling gives the error: >>>>>>>>>> >>>>>>>>>> hypre.F90(11): error #6401: The attributes of this name conflict with >>>>>>>>>> those made accessible by a USE statement. [MPI_COMM] >>>>>>>>>> integer(8) mpi_comm >>>>>>>>>> --------------------------------------^ >>>>>>>>>> hypre.F90(84): error #6478: A type-name must not be used as a >>>>>>>>>> variable. [MPI_COMM] >>>>>>>>>> mpi_comm = MPI_COMM_WORLD >>>>>>>>>> ----^ >>>>>>>>>> hypre.F90(84): error #6303: The assignment operation or the binary >>>>>>>>>> expression operation is invalid for the data types of the two >>>>>>>>>> operands. [1140850688] >>>>>>>>>> mpi_comm = MPI_COMM_WORLD >>>>>>>>>> ---------------^ >>>>>>>>>> hypre.F90(100): error #6478: A type-name must not be used as a >>>>>>>>>> variable. [MPI_COMM] >>>>>>>>>> call HYPRE_StructGridCreate(mpi_comm, 3, grid_hypre, ierr) >>>>>>>>>> ... 
>>>>>>>>>> >>>>>>>>>> What's actually happening? Why can't I compile now? >>>>>>>>>> >>>>>>>>>> On GNU gfortran: >>>>>>>>>> >>>>>>>>>> I tried to use similar tactics as above here. However, when compiling >>>>>>>>>> math_routine.F90, I got the error: >>>>>>>>>> >>>>>>>>>> math_routine.F90:1333:21: >>>>>>>>>> >>>>>>>>>> call subb(orig,vert1,tvec) >>>>>>>>>> 1 >>>>>>>>>> Error: Invalid procedure argument at (1) >>>>>>>>>> math_routine.F90:1339:18: >>>>>>>>>> >>>>>>>>>> qvec = cross_pdt2(tvec,edge1) >>>>>>>>>> 1 >>>>>>>>>> Error: Invalid procedure argument at (1) >>>>>>>>>> math_routine.F90:1345:21: >>>>>>>>>> >>>>>>>>>> uu = dot_product(tvec,pvec) >>>>>>>>>> 1 >>>>>>>>>> Error: ?vector_a? argument of ?dot_product? intrinsic at (1) must be >>>>>>>>>> numeric or LOGICAL >>>>>>>>>> math_routine.F90:1371:21: >>>>>>>>>> >>>>>>>>>> uu = dot_product(tvec,pvec) >>>>>>>>>> >>>>>>>>>> These errors were not present before. My variables are mostly vectors: >>>>>>>>>> >>>>>>>>>> real(8), intent(in) :: >>>>>>>>>> orig(3),infinity(3),vert1(3),vert2(3),vert3(3),normal(3) >>>>>>>>>> >>>>>>>>>> real(8) :: uu,vv,dir(3) >>>>>>>>>> >>>>>>>>>> real(8) :: edge1(3),edge2(3),tvec(3),pvec(3),qvec(3),det,inv_det,epsilon,d,t >>>>>>>>>> >>>>>>>>>> I wonder what happened? >>>>>>>>>> >>>>>>>>>> Please advice. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Thank you very much. >>>>>>>>>> >>>>>>>>>> Yours sincerely, >>>>>>>>>> >>>>>>>>>> ================================================ >>>>>>>>>> TAY Wee-Beng ??? >>>>>>>>>> Research Scientist >>>>>>>>>> Experimental AeroScience Group >>>>>>>>>> Temasek Laboratories >>>>>>>>>> National University of Singapore >>>>>>>>>> T-Lab Building >>>>>>>>>> 5A, Engineering Drive 1, #02-02 >>>>>>>>>> Singapore 117411 >>>>>>>>>> Phone: +65 65167330 >>>>>>>>>> E-mail: tsltaywb at nus.edu.sg >>>>>>>>>> http://www.temasek-labs.nus.edu.sg/program/program_aeroexperimental_tsltaywb.php >>>>>>>>>> Personal research webpage: http://tayweebeng.wixsite.com/website >>>>>>>>>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >>>>>>>>>> linkedin: www.linkedin.com/in/tay-weebeng >>>>>>>>>> ================================================ >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> ________________________________ >>>>>>>>>> >>>>>>>>>> Important: This email is confidential and may be privileged. If you are not the intended recipient, please delete it and notify us immediately; you should not copy or use it for any purpose, nor disclose its contents to any other person. Thank you. From t.appel17 at imperial.ac.uk Tue Feb 27 12:03:44 2018 From: t.appel17 at imperial.ac.uk (Thibaut Appel) Date: Tue, 27 Feb 2018 18:03:44 +0000 Subject: [petsc-users] [SLEPc] Performance of Krylov-Schur with MUMPS-based shift-and-invert In-Reply-To: References: <31fdfe68-4e4c-804f-225f-2a34f47210e8@imperial.ac.uk> Message-ID: Good afternoon Mr Roman, Thank you very much for your detailed and quick answer. I'll make the use of eps_view and log_view and see if I can optimize the preallocation of my matrix. Good to know that it is possible to play with the "mpd" parameter, I thought it was only for when large values of nev were requested - as suggested by the user guide. My residual norms are decent (10E-6 to 10E-8) so I do not think my application code requires significant tuning. When you say "EPSSetBalance() sometimes helps, even in the case of using spectral transformation", did you mean "particularly" instead of "even"? If yes, why? If I misunderstood, what did you want to explain? 
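For reference, a small Fortran sketch of setting the mpd parameter programmatically instead of passing PETSC_DECIDE (untested; eps is assumed to have been created already, and the numbers simply mirror the nev/ncv/-eps_mpd values discussed in this thread):

      EPS            :: eps
      PetscInt       :: nev, ncv, mpd
      PetscErrorCode :: ierr

      nev = 250
      ncv = 500
      mpd = 200    ! same effect as the command-line option -eps_mpd 200
      call EPSSetDimensions(eps, nev, ncv, mpd, ierr)

If EPSSetFromOptions() is called afterwards, an -eps_mpd value given on the command line still takes precedence.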
Regards, Thibaut On 19/02/18 18:36, Jose E. Roman wrote: > >> El 19 feb 2018, a las 19:15, Thibaut Appel escribi?: >> >> Good afternoon, >> >> I am solving generalized eigenvalue problems {Ax = omegaBx} in complex arithmetic, where A is non-hermitian and B is singular. I think the only way to get round the singularity is to employ a shift-and-invert method, where I am using MUMPS to invert the shifted matrix. >> >> I am using the Fortran interface of PETSc 3.8.3 and SLEPc 3.8.2 where my ./configure line was >> ./configure --with-fortran-kernels=1 --with-scalar-type=complex --with-blaslapack-dir=/home/linuxbrew/.linuxbrew/opt/openblas --PETSC_ARCH=cplx_dble_optim --with-cmake-dir=/home/linuxbrew/.linuxbrew/opt/cmake --with-mpi-dir=/home/linuxbrew/.linuxbrew/opt/openmpi --with-debugging=0 --download-scalapack --download-mumps --COPTFLAGS="-O3 -march=native" --CXXOPTFLAGS="-O3 -march=native" --FOPTFLAGS="-O3 -march=native" >> >> My matrices A and B are assembled correctly in parallel and my preallocation is quasi-optimal in the sense that I don't have any called to mallocs but I may overestimate the required memory for some rows of the matrices. Here is how I setup the EPS problem and solve: >> >> CALL EPSSetProblemType(eps,EPS_GNHEP,ierr) >> CALL EPSSetOperators(eps,MatA,MatB,ierr) >> CALL EPSSetType(eps,EPSKRYLOVSCHUR,ierr) >> CALL EPSSetDimensions(eps,nev,ncv,PETSC_DECIDE,ierr) >> CALL EPSSetTolerances(eps,tol_ev,PETSC_DECIDE,ierr) >> >> CALL EPSSetFromOptions(eps,ierr) >> CALL EPSSetTarget(eps,shift,ierr) >> CALL EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE,ierr) >> >> CALL EPSGetST(eps,st,ierr) >> CALL STGetKSP(st,ksp,ierr) >> CALL KSPGetPC(ksp,pc,ierr) >> >> CALL STSetType(st,STSINVERT,ierr) >> CALL KSPSetType(ksp,KSPPREONLY,ierr) >> CALL PCSetType(pc,PCLU,ierr) >> >> CALL PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS,ierr) >> CALL PCSetFromOptions(pc,ierr) >> >> CALL EPSSolve(eps,ierr) >> CALL EPSGetIterationNumber(eps,iter,ierr) >> CALL EPSGetConverged(eps,nev_conv,ierr) > The settings seem ok. You can use -eps_view to make sure that everything is set as you want. > >> - Using one MPI process, it takes 1 hour and 22 minutes to retrieve 250 eigenvalues with a Krylov subspace of size 500, a tolerance of 10^-12 when the leading dimension of the matrices is 405000. My matrix A has 98,415,000 non-zero elements and B has 1,215,000 non zero elements. Would you be shocked by that computation time? I would have expected something much lower given the values of nev and ncv I have but could be completely wrong in my understanding of the Krylov-Schur method. > If you run with -log_view you will see the breakup in the different steps. Most probably a large percentage of the time is in the factorization of the matrix (MatLUFactorSym and MatLUFactorNum). > > The matrix is quite dense (about 250 nonzero elements per row), so factorizing it is costly. You may want to try inexact shift-and-invert with an iterative method, but you will need a good preconditioner. > > The time needed for the other steps may be reduced a little bit by setting a smaller subspace size, for instance with -eps_mpd 200 > >> - My goal is speed and reliability. Is there anything you notice in my EPS solver that could be improved or corrected? I remember an exchange with Jose E. Roman where he said that the parameters of MUMPS are not worth being changed, however I notice some people play with the -mat_mumps_cntl_1 and -mat_mumps_cntl_3 which control the relative/absolute pivoting threshold? 
> Yes, you can try tuning MUMPS options. Maybe they are relevant in your application. > >> - Would you advise the use of EPSSetTrueResidual and EPSSetBalance since I am using a spectral transformation? > Are you getting large residual norms? > I would not suggest using EPSSetTrueResidual() because it may prevent convergence, especially if the target is not very close to the wanted eigenvalues. > EPSSetBalance() sometimes helps, even in the case of using spectral transformation. It is intended for ill-conditioned problems where the obtained residuals are not so good. > >> - Would you see anything that would prevent me from getting speedup in parallel executions? > I guess it will depend on how MUMPS scales for your problem. > > Jose > >> Thank you very much in advance and I look forward to exchanging with you about these different points, >> >> Thibaut >> From t.appel17 at imperial.ac.uk Tue Feb 27 19:05:41 2018 From: t.appel17 at imperial.ac.uk (Appel, Thibaut) Date: Wed, 28 Feb 2018 01:05:41 +0000 Subject: [petsc-users] Malloc error with 'correct' preallocation? Message-ID: <673B72FC-4252-447A-B90F-731002BE3C18@ic.ac.uk> Dear PETSc developers and users, I am forming a sparse matrix in complex, double precision arithmetic and can?t understand why I have ?PETSC ERROR: Argument out of range - New nonzero at (X,X) caused a malloc? during the assembly of the matrix. This matrix discretizes a PDE in 2D using a finite difference method in both spatial directions and, in short, here is the ensemble of routines I call: CALL MatCreate(PETSC_COMM_WORLD,MatA,ierr) CALL MatSetType(MatA,MATAIJ,ierr) CALL MatSetSizes(MatA,PETSC_DECIDE,PETSC_DECIDE,leading_dimension,leading_dimension,ierr) CALL MatSeqAIJSetPreallocation(MatA,0,PETSC_NULL_INTEGER,ierr) CALL MatMPIAIJSetPreallocation(MatA,0,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,ierr) CALL MatGetOwnershipRange(MatA,istart,iend,ierr) Then I preallocate my matrix doing tests on the derivative coefficient arrays like ALLOCATE(nnz(istart:iend-1), dnz(istart:iend-1), onz(istart:iend-1)) nnz = 0 dnz = 0 onz = 0 DO row= istart, iend-1 *detect the nonzero elements* IF (ABS(this_derivative_coef) > 0.D0) THEN nnz(row) = nnz(row) + corresponding_stencil_size DO *elements in stencil* *compute corresponding column* IF ((col >= istart) .AND. (col <= (iend-1))) THEN dnz(row) = dnz(row) + 1 ELSE onz(row) = onz(row) + 1 END IF END DO END IF END DO CALL MatSeqAIJSetPreallocation(MatA,PETSC_DEFAULT_INTEGER,nnz,ierr) CALL MatMPIAIJSetPreallocation(MatA,PETSC_DEFAULT_INTEGER,dnz,PETSC_DEFAULT_INTEGER,onz,ierr) CALL MatSetOption(MatA,MAT_IGNORE_ZERO_ENTRIES,PETSC_TRUE,ierr) And assemble the matrix, at each row, with the different derivatives terms (pure x derivative, pure y derivative, cross xy derivative?) DO row = istart, iend-1 cols(0) = *compute corresponding column* vals(0) = no_derivative_coef CALL MatSetValues(MatA,1,row,1,cols,vals,ADD_VALUES,ierr) DO m=0,x_order cols(m) = *compute corresponding column* vals(m) = x_derivative_coef END DO CALL MatSetValues(MatA,1,row,x_order+1,cols,vals,ADD_VALUES,ierr) DO m=0,y_order cols(m) = *compute corresponding column* vals(m) = y_derivative_coef END DO CALL MatSetValues(MatA,1,row,y_order+1,cols,vals,ADD_VALUES,ierr) DO n=0,y_order DO m=0,x_order cols(..) 
= *compute corresponding column* vals(..)= xy_derivative_coef END DO END DO CALL MatSetValues(MatA,1,row,(x_order+1)*(y_order+1),cols,vals,ADD_VALUES,ierr) END DO CALL MatAssemblyBegin(MatA,MAT_FINAL_ASSEMBLY,ierr) CALL MatAssemblyEnd(MatA,MAT_FINAL_ASSEMBLY,ierr) I am using ADD_VALUES as the different loops here-above can contribute to the same column. The approach I chose is therefore preallocating without overestimating the non-zero elements and hope that the MAT_IGNORE_ZERO_ENTRIES option discards the 'vals(?)' who are exactly zero during the assembly (I read that the criteria PETSc uses to do so is ?== 0.0?) so with the test I used everything should work fine. However, when testing with 1 MPI process, I have this malloc problem appearing at a certain row. I print the corresponding nnz(row) I allocated for this row, say NZ. I print for each (cols, vals) computed in the loops above the test (vals /= zero) to number the non-zero elements and notice that the malloc error appears when I insert a block of non-zero elements in which the last one is the NZth! In other words, why a malloc if the 'correct' number of elements is allocated? Is there something wrong with my understanding of ADD_VALUES? I read somewhere that it is good to call both MatSeqAIJSetPreallocation and MatMPIAIJSetPreallocation as PETSc will automatically call the right one whether it is a serial or parallel computation. Is it better to use MatXAIJSetPreallocation? Thank you in advance for any advice that could put me on the trail to correctness, and I would appreciate any correction should I do something that looks silly. Kind regards, Thibaut From bsmith at mcs.anl.gov Tue Feb 27 19:33:24 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 28 Feb 2018 01:33:24 +0000 Subject: [petsc-users] Malloc error with 'correct' preallocation? In-Reply-To: <673B72FC-4252-447A-B90F-731002BE3C18@ic.ac.uk> References: <673B72FC-4252-447A-B90F-731002BE3C18@ic.ac.uk> Message-ID: <997E68CE-08F5-4981-97BC-17C3BBE4E891@anl.gov> > On Feb 27, 2018, at 7:05 PM, Appel, Thibaut wrote: > > Dear PETSc developers and users, > > I am forming a sparse matrix in complex, double precision arithmetic and can?t understand why I have ?PETSC ERROR: Argument out of range - New nonzero at (X,X) caused a malloc? during the assembly of the matrix. > > This matrix discretizes a PDE in 2D using a finite difference method in both spatial directions and, in short, here is the ensemble of routines I call: > > CALL MatCreate(PETSC_COMM_WORLD,MatA,ierr) > CALL MatSetType(MatA,MATAIJ,ierr) > CALL MatSetSizes(MatA,PETSC_DECIDE,PETSC_DECIDE,leading_dimension,leading_dimension,ierr) > CALL MatSeqAIJSetPreallocation(MatA,0,PETSC_NULL_INTEGER,ierr) > CALL MatMPIAIJSetPreallocation(MatA,0,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,ierr) > CALL MatGetOwnershipRange(MatA,istart,iend,ierr) > > Then I preallocate my matrix doing tests on the derivative coefficient arrays like > > ALLOCATE(nnz(istart:iend-1), dnz(istart:iend-1), onz(istart:iend-1)) > nnz = 0 > dnz = 0 > onz = 0 > DO row= istart, iend-1 > > *detect the nonzero elements* > IF (ABS(this_derivative_coef) > 0.D0) THEN > nnz(row) = nnz(row) + corresponding_stencil_size > DO *elements in stencil* > *compute corresponding column* > IF ((col >= istart) .AND. 
(col <= (iend-1))) THEN > dnz(row) = dnz(row) + 1 > ELSE > onz(row) = onz(row) + 1 > END IF > END DO > END IF > > END DO > CALL MatSeqAIJSetPreallocation(MatA,PETSC_DEFAULT_INTEGER,nnz,ierr) > CALL MatMPIAIJSetPreallocation(MatA,PETSC_DEFAULT_INTEGER,dnz,PETSC_DEFAULT_INTEGER,onz,ierr) > CALL MatSetOption(MatA,MAT_IGNORE_ZERO_ENTRIES,PETSC_TRUE,ierr) > > And assemble the matrix, at each row, with the different derivatives terms (pure x derivative, pure y derivative, cross xy derivative?) > > DO row = istart, iend-1 > > cols(0) = *compute corresponding column* > vals(0) = no_derivative_coef > CALL MatSetValues(MatA,1,row,1,cols,vals,ADD_VALUES,ierr) > > DO m=0,x_order > cols(m) = *compute corresponding column* > vals(m) = x_derivative_coef > END DO > CALL MatSetValues(MatA,1,row,x_order+1,cols,vals,ADD_VALUES,ierr) > > DO m=0,y_order > cols(m) = *compute corresponding column* > vals(m) = y_derivative_coef > END DO > CALL MatSetValues(MatA,1,row,y_order+1,cols,vals,ADD_VALUES,ierr) > > DO n=0,y_order > DO m=0,x_order > cols(..) = *compute corresponding column* > vals(..)= xy_derivative_coef > END DO > END DO > CALL MatSetValues(MatA,1,row,(x_order+1)*(y_order+1),cols,vals,ADD_VALUES,ierr) > > END DO > > CALL MatAssemblyBegin(MatA,MAT_FINAL_ASSEMBLY,ierr) > CALL MatAssemblyEnd(MatA,MAT_FINAL_ASSEMBLY,ierr) > > I am using ADD_VALUES as the different loops here-above can contribute to the same column. > > The approach I chose is therefore preallocating without overestimating the non-zero elements and hope that the MAT_IGNORE_ZERO_ENTRIES option discards the 'vals(?)' who are exactly zero during the assembly (I read that the criteria PETSc uses to do so is ?== 0.0?) so with the test I used everything should work fine. > > However, when testing with 1 MPI process, I have this malloc problem appearing at a certain row. > > I print the corresponding nnz(row) I allocated for this row, say NZ. > I print for each (cols, vals) computed in the loops above the test (vals /= zero) to number the non-zero elements and notice that the malloc error appears when I insert a block of non-zero elements in which the last one is the NZth! > > In other words, why a malloc if the 'correct' number of elements is allocated? Don't know > Is there something wrong with my understanding of ADD_VALUES? Unlikely > > I read somewhere that it is good to call both MatSeqAIJSetPreallocation and MatMPIAIJSetPreallocation as PETSc will automatically call the right one whether it is a serial or parallel computation. Yes this is true > Is it better to use MatXAIJSetPreallocation? In your case no > > Thank you in advance for any advice that could put me on the trail to correctness, and I would appreciate any correction should I do something that looks silly. Nothing looks silly. The easiest approach to tracking down the problem is to run in the debugger and put break points at the key points and make sure it behaves as expected at those points. Could you send the code? Preferably small and easy to build. If I can reproduce the problem I can track down what is going on. Barry > > Kind regards, > > > Thibaut > From danyang.su at gmail.com Tue Feb 27 22:44:52 2018 From: danyang.su at gmail.com (Danyang Su) Date: Tue, 27 Feb 2018 20:44:52 -0800 Subject: [petsc-users] object name overwritten in VecView Message-ID: <24502b17-7101-e7cb-7dd9-2fc981a61213@gmail.com> Hi All, How to set different object names when using multiple VecView? I try to use PetscObjectSetName with multiple output, but the object name is overwritten by the last one. 
As shown below, as well as the enclosed files as example, the vector name in sol.vtk is vec_v for both vector u and v. call PetscViewerCreate(PETSC_COMM_WORLD, viewer, ierr);CHKERRA(ierr) call PetscViewerSetType(viewer, PETSCVIEWERVTK, ierr);CHKERRA(ierr) call PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_VTK, ierr);CHKERRA(ierr) call PetscViewerFileSetName(viewer, 'sol.vtk', ierr);CHKERRA(ierr) call PetscObjectSetName(u, 'vec_u', ierr);CHKERRA(ierr) call VecView(u, viewer, ierr);CHKERRA(ierr) call PetscObjectSetName(v, 'vec_v', ierr);CHKERRA(ierr) call VecView(v, viewer, ierr);CHKERRA(ierr) call PetscViewerDestroy(viewer, ierr);CHKERRA(ierr) call DMRestoreGlobalVector(dm, u, ierr);CHKERRA(ierr) call DMRestoreGlobalVector(dm, v, ierr);CHKERRA(ierr) Thanks, Danyang -------------- next part -------------- A non-text attachment was scrubbed... Name: ex1f90.F90 Type: text/x-fortran Size: 4654 bytes Desc: not available URL: -------------- next part -------------- CFLAGS = FFLAGS = CPPFLAGS = FPPFLAGS = LOCDIR = src/dm/impls/plex/examples/tutorials/ EXAMPLESC = ex1.c ex2.c ex5.c EXAMPLESF = ex1f90.F90 MANSEC = DM include ${PETSC_DIR}/lib/petsc/conf/variables include ${PETSC_DIR}/lib/petsc/conf/rules ex1: ex1.o chkopts -${CLINKER} -o ex1 ex1.o ${PETSC_DM_LIB} ${RM} -f ex1.o ex1f90: ex1f90.o chkopts -${FLINKER} -o ex1f90 ex1f90.o ${PETSC_DM_LIB} ${RM} -f ex1f90.o ex2: ex2.o chkopts -${CLINKER} -o ex2 ex2.o ${PETSC_DM_LIB} ${RM} -f ex2.o ex5: ex5.o chkopts -${CLINKER} -o ex5 ex5.o ${PETSC_DM_LIB} ${RM} -f ex5.o ex6: ex6.o chkopts -${CLINKER} -o ex6 ex6.o ${PETSC_DM_LIB} ${RM} -f ex6.o ex7: ex7.o chkopts -${CLINKER} -o ex7 ex7.o ${PETSC_DM_LIB} ${RM} -f ex7.o #-------------------------------------------------------------------------- runex1: -@${MPIEXEC} -n 2 ./ex1 -dim 3 runex1f90: -@${MPIEXEC} -n 2 ./ex1 -dim 2 runex2: -@${MPIEXEC} -n 1 ./ex2 -dim 3 -dm_refine 2 -dm_view hdf5:ex2.h5 -${PETSC_DIR}/bin/petsc_gen_xdmf.py ex2.h5 runex5: -@${MPIEXEC} -n 2 ./ex5 -filename ${PETSC_DIR}/share/petsc/datafiles/meshes/square.msh -new_dm_view include ${PETSC_DIR}/lib/petsc/conf/test -------------- next part -------------- # vtk DataFile Version 2.0 Simplicial Mesh Example ASCII DATASET UNSTRUCTURED_GRID POINTS 8 double 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 0.000000e+00 1.000000e+00 1.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 1.000000e+00 0.000000e+00 1.000000e+00 0.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 CELLS 6 30 4 0 7 3 2 4 0 5 7 4 4 0 1 3 7 4 5 1 0 7 4 0 6 7 2 4 7 6 0 4 CELL_TYPES 6 10 10 10 10 10 10 POINT_DATA 8 VECTORS vec_v double 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 VECTORS vec_v double 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 From bsmith at mcs.anl.gov Tue Feb 27 23:39:19 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) 
Date: Wed, 28 Feb 2018 05:39:19 +0000 Subject: [petsc-users] object name overwritten in VecView In-Reply-To: <24502b17-7101-e7cb-7dd9-2fc981a61213@gmail.com> References: <24502b17-7101-e7cb-7dd9-2fc981a61213@gmail.com> Message-ID: Matt, I have confirmed this is reproducible and a bug. The problem arises because frame #0: 0x000000010140625a libpetsc.3.8.dylib`PetscViewerVTKAddField_VTK(viewer=0x00007fe66760c750, dm=0x00007fe668810820, PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll at plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, vec=0x00007fe66880ee20) at vtkv.c:140 frame #1: 0x0000000101404e6e libpetsc.3.8.dylib`PetscViewerVTKAddField(viewer=0x00007fe66760c750, dm=0x00007fe668810820, PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll at plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, vec=0x00007fe66880ee20) at vtkv.c:46 frame #2: 0x0000000101e0b7c3 libpetsc.3.8.dylib`VecView_Plex_Local(v=0x00007fe66880ee20, viewer=0x00007fe66760c750) at plex.c:301 frame #3: 0x0000000101e0ead7 libpetsc.3.8.dylib`VecView_Plex(v=0x00007fe66880e820, viewer=0x00007fe66760c750) at plex.c:348 keeps a linked list of vectors that are to be viewed and the vectors are the same Vec because they are obtained with DMGetLocalVector(). The safest fix is to have PetscViewerVTKAddField_VTK() do a VecDuplicate() on the vector passed in and store that in the linked list instead of just storing a pointer to the passed in vector (which might and can be overwritten before all the linked vectors are actually stored). Barry > On Feb 27, 2018, at 10:44 PM, Danyang Su wrote: > > Hi All, > > How to set different object names when using multiple VecView? I try to use PetscObjectSetName with multiple output, but the object name is overwritten by the last one. > > As shown below, as well as the enclosed files as example, the vector name in sol.vtk is vec_v for both vector u and v. > > call PetscViewerCreate(PETSC_COMM_WORLD, viewer, ierr);CHKERRA(ierr) > call PetscViewerSetType(viewer, PETSCVIEWERVTK, ierr);CHKERRA(ierr) > call PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_VTK, ierr);CHKERRA(ierr) > call PetscViewerFileSetName(viewer, 'sol.vtk', ierr);CHKERRA(ierr) > > call PetscObjectSetName(u, 'vec_u', ierr);CHKERRA(ierr) > call VecView(u, viewer, ierr);CHKERRA(ierr) > > call PetscObjectSetName(v, 'vec_v', ierr);CHKERRA(ierr) > call VecView(v, viewer, ierr);CHKERRA(ierr) > > call PetscViewerDestroy(viewer, ierr);CHKERRA(ierr) > > call DMRestoreGlobalVector(dm, u, ierr);CHKERRA(ierr) > call DMRestoreGlobalVector(dm, v, ierr);CHKERRA(ierr) > > Thanks, > > Danyang > > From jroman at dsic.upv.es Wed Feb 28 03:17:21 2018 From: jroman at dsic.upv.es (Jose E. Roman) Date: Wed, 28 Feb 2018 10:17:21 +0100 Subject: [petsc-users] [SLEPc] Performance of Krylov-Schur with MUMPS-based shift-and-invert In-Reply-To: References: <31fdfe68-4e4c-804f-225f-2a34f47210e8@imperial.ac.uk> Message-ID: <0934487B-BAF6-4768-B03C-26FDB2F49EB8@dsic.upv.es> Balancing may reduce the norm of the matrix or the condition number of some eigenvalues. It applies to the ST operator, (A-sigma*B)^{-1}*B in case of shift-and-invert. The case that sigma is close to an eigenvalue is usually not a problem, provided that you use a robust direct solver (MUMPS). In cases where both A and B are singular, balancing may improve residual errors. I think it is not necessary in your case. Jose > El 27 feb 2018, a las 19:03, Thibaut Appel escribi?: > > Good afternoon Mr Roman, > > Thank you very much for your detailed and quick answer. 
I'll make the use of eps_view and log_view and see if I can optimize the preallocation of my matrix. > > Good to know that it is possible to play with the "mpd" parameter, I thought it was only for when large values of nev were requested - as suggested by the user guide. > > My residual norms are decent (10E-6 to 10E-8) so I do not think my application code requires significant tuning. > > When you say "EPSSetBalance() sometimes helps, even in the case of using spectral transformation", did you mean "particularly" instead of "even"? If yes, why? If I misunderstood, what did you want to explain? > > Regards, > > > Thibaut > > On 19/02/18 18:36, Jose E. Roman wrote: >> >>> El 19 feb 2018, a las 19:15, Thibaut Appel escribi?: >>> >>> Good afternoon, >>> >>> I am solving generalized eigenvalue problems {Ax = omegaBx} in complex arithmetic, where A is non-hermitian and B is singular. I think the only way to get round the singularity is to employ a shift-and-invert method, where I am using MUMPS to invert the shifted matrix. >>> >>> I am using the Fortran interface of PETSc 3.8.3 and SLEPc 3.8.2 where my ./configure line was >>> ./configure --with-fortran-kernels=1 --with-scalar-type=complex --with-blaslapack-dir=/home/linuxbrew/.linuxbrew/opt/openblas --PETSC_ARCH=cplx_dble_optim --with-cmake-dir=/home/linuxbrew/.linuxbrew/opt/cmake --with-mpi-dir=/home/linuxbrew/.linuxbrew/opt/openmpi --with-debugging=0 --download-scalapack --download-mumps --COPTFLAGS="-O3 -march=native" --CXXOPTFLAGS="-O3 -march=native" --FOPTFLAGS="-O3 -march=native" >>> >>> My matrices A and B are assembled correctly in parallel and my preallocation is quasi-optimal in the sense that I don't have any called to mallocs but I may overestimate the required memory for some rows of the matrices. Here is how I setup the EPS problem and solve: >>> >>> CALL EPSSetProblemType(eps,EPS_GNHEP,ierr) >>> CALL EPSSetOperators(eps,MatA,MatB,ierr) >>> CALL EPSSetType(eps,EPSKRYLOVSCHUR,ierr) >>> CALL EPSSetDimensions(eps,nev,ncv,PETSC_DECIDE,ierr) >>> CALL EPSSetTolerances(eps,tol_ev,PETSC_DECIDE,ierr) >>> >>> CALL EPSSetFromOptions(eps,ierr) >>> CALL EPSSetTarget(eps,shift,ierr) >>> CALL EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE,ierr) >>> >>> CALL EPSGetST(eps,st,ierr) >>> CALL STGetKSP(st,ksp,ierr) >>> CALL KSPGetPC(ksp,pc,ierr) >>> >>> CALL STSetType(st,STSINVERT,ierr) >>> CALL KSPSetType(ksp,KSPPREONLY,ierr) >>> CALL PCSetType(pc,PCLU,ierr) >>> >>> CALL PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS,ierr) >>> CALL PCSetFromOptions(pc,ierr) >>> >>> CALL EPSSolve(eps,ierr) >>> CALL EPSGetIterationNumber(eps,iter,ierr) >>> CALL EPSGetConverged(eps,nev_conv,ierr) >> The settings seem ok. You can use -eps_view to make sure that everything is set as you want. >> >>> - Using one MPI process, it takes 1 hour and 22 minutes to retrieve 250 eigenvalues with a Krylov subspace of size 500, a tolerance of 10^-12 when the leading dimension of the matrices is 405000. My matrix A has 98,415,000 non-zero elements and B has 1,215,000 non zero elements. Would you be shocked by that computation time? I would have expected something much lower given the values of nev and ncv I have but could be completely wrong in my understanding of the Krylov-Schur method. >> If you run with -log_view you will see the breakup in the different steps. Most probably a large percentage of the time is in the factorization of the matrix (MatLUFactorSym and MatLUFactorNum). >> >> The matrix is quite dense (about 250 nonzero elements per row), so factorizing it is costly. 
You may want to try inexact shift-and-invert with an iterative method, but you will need a good preconditioner. >> >> The time needed for the other steps may be reduced a little bit by setting a smaller subspace size, for instance with -eps_mpd 200 >> >>> - My goal is speed and reliability. Is there anything you notice in my EPS solver that could be improved or corrected? I remember an exchange with Jose E. Roman where he said that the parameters of MUMPS are not worth being changed, however I notice some people play with the -mat_mumps_cntl_1 and -mat_mumps_cntl_3 which control the relative/absolute pivoting threshold? >> Yes, you can try tuning MUMPS options. Maybe they are relevant in your application. >> >>> - Would you advise the use of EPSSetTrueResidual and EPSSetBalance since I am using a spectral transformation? >> Are you getting large residual norms? >> I would not suggest using EPSSetTrueResidual() because it may prevent convergence, especially if the target is not very close to the wanted eigenvalues. >> EPSSetBalance() sometimes helps, even in the case of using spectral transformation. It is intended for ill-conditioned problems where the obtained residuals are not so good. >> >>> - Would you see anything that would prevent me from getting speedup in parallel executions? >> I guess it will depend on how MUMPS scales for your problem. >> >> Jose >> >>> Thank you very much in advance and I look forward to exchanging with you about these different points, >>> >>> Thibaut >>> > From niko.karin at gmail.com Wed Feb 28 03:39:19 2018 From: niko.karin at gmail.com (Karin&NiKo) Date: Wed, 28 Feb 2018 10:39:19 +0100 Subject: [petsc-users] Multi-preconditioned Krylov Message-ID: Dear PETSc team, I would like to experiment multi-preconditioned Krylov methods, as presented in the paper or Bridson and Greif ( https://www.cs.ubc.ca/~rbridson/mpcg/) and more specifically in a context of DD like in the paper of Spillane ( https://hal.archives-ouvertes.fr/hal-01170059/document) or Gosselet ( https://hal.archives-ouvertes.fr/hal-01056928/document). It seems to me this is not a basically supported feature of PETSc. My question : Is there a way of doing it by playing with the runtime options of PETSc or do I have to implement it in the core of the KSP object? If so, what would be the correct design? Thanks, Nicolas -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 28 05:58:08 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 28 Feb 2018 06:58:08 -0500 Subject: [petsc-users] object name overwritten in VecView In-Reply-To: References: <24502b17-7101-e7cb-7dd9-2fc981a61213@gmail.com> Message-ID: On Wed, Feb 28, 2018 at 12:39 AM, Smith, Barry F. wrote: > > Matt, > > I have confirmed this is reproducible and a bug. 
The problem arises > because > > frame #0: 0x000000010140625a libpetsc.3.8.dylib` > PetscViewerVTKAddField_VTK(viewer=0x00007fe66760c750, > dm=0x00007fe668810820, PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll > at plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, > vec=0x00007fe66880ee20) at vtkv.c:140 > frame #1: 0x0000000101404e6e libpetsc.3.8.dylib` > PetscViewerVTKAddField(viewer=0x00007fe66760c750, dm=0x00007fe668810820, > PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll at > plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, vec=0x00007fe66880ee20) > at vtkv.c:46 > frame #2: 0x0000000101e0b7c3 libpetsc.3.8.dylib`VecView_Plex_Local(v=0x00007fe66880ee20, > viewer=0x00007fe66760c750) at plex.c:301 > frame #3: 0x0000000101e0ead7 libpetsc.3.8.dylib`VecView_Plex(v=0x00007fe66880e820, > viewer=0x00007fe66760c750) at plex.c:348 > > keeps a linked list of vectors that are to be viewed and the vectors are > the same Vec because they are obtained with DMGetLocalVector(). > > The safest fix is to have PetscViewerVTKAddField_VTK() do a VecDuplicate() > on the vector passed in and store that in the linked list instead of just > storing a pointer to the passed in vector (which might and can be > overwritten before all the linked vectors are actually stored). > Danyang, Barry is right, and the bug can be fixed the way he says. However, this points out why VTK is bad format. I think a better choice is to use HDF5 and XDMF. For example, in my code now I always use DMVIewFromOptions(dm, NULL, "-dm_view"); and then later (perhaps several times) VecViewFromOptions(u, NULL, "-u_vec_view") VecViewFromOptions(v, NULL, "-v_vec_view") and then on the command line -dm_view hdf5:test.h5 -u_vec_view hdf5:test.h5::append -v_vec_view hdf5:test.h5::append which produces a file test.h5 Then I run $PETSC_DIR/bin/petsc_gen_xdmf.py test.h5 which produces another file test.xmf This can be loaded by Paraview for visualization. Thanks, Matt > > Barry > > > > > On Feb 27, 2018, at 10:44 PM, Danyang Su wrote: > > > > Hi All, > > > > How to set different object names when using multiple VecView? I try to > use PetscObjectSetName with multiple output, but the object name is > overwritten by the last one. > > > > As shown below, as well as the enclosed files as example, the vector > name in sol.vtk is vec_v for both vector u and v. > > > > call PetscViewerCreate(PETSC_COMM_WORLD, viewer, > ierr);CHKERRA(ierr) > > call PetscViewerSetType(viewer, PETSCVIEWERVTK, ierr);CHKERRA(ierr) > > call PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_VTK, > ierr);CHKERRA(ierr) > > call PetscViewerFileSetName(viewer, 'sol.vtk', ierr);CHKERRA(ierr) > > > > call PetscObjectSetName(u, 'vec_u', ierr);CHKERRA(ierr) > > call VecView(u, viewer, ierr);CHKERRA(ierr) > > > > call PetscObjectSetName(v, 'vec_v', ierr);CHKERRA(ierr) > > call VecView(v, viewer, ierr);CHKERRA(ierr) > > > > call PetscViewerDestroy(viewer, ierr);CHKERRA(ierr) > > > > call DMRestoreGlobalVector(dm, u, ierr);CHKERRA(ierr) > > call DMRestoreGlobalVector(dm, v, ierr);CHKERRA(ierr) > > > > Thanks, > > > > Danyang > > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From danyang.su at gmail.com Wed Feb 28 09:07:32 2018 From: danyang.su at gmail.com (Danyang Su) Date: Wed, 28 Feb 2018 07:07:32 -0800 Subject: [petsc-users] object name overwritten in VecView In-Reply-To: References: <24502b17-7101-e7cb-7dd9-2fc981a61213@gmail.com> Message-ID: Hi Matt? Thanks for your suggestion and I will use xmf instead. Regards, Danyang On February 28, 2018 3:58:08 AM PST, Matthew Knepley wrote: >On Wed, Feb 28, 2018 at 12:39 AM, Smith, Barry F. >wrote: > >> >> Matt, >> >> I have confirmed this is reproducible and a bug. The problem arises >> because >> >> frame #0: 0x000000010140625a libpetsc.3.8.dylib` >> PetscViewerVTKAddField_VTK(viewer=0x00007fe66760c750, >> dm=0x00007fe668810820, >PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll >> at plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, >> vec=0x00007fe66880ee20) at vtkv.c:140 >> frame #1: 0x0000000101404e6e libpetsc.3.8.dylib` >> PetscViewerVTKAddField(viewer=0x00007fe66760c750, >dm=0x00007fe668810820, >> PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll at >> plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, >vec=0x00007fe66880ee20) >> at vtkv.c:46 >> frame #2: 0x0000000101e0b7c3 >libpetsc.3.8.dylib`VecView_Plex_Local(v=0x00007fe66880ee20, >> viewer=0x00007fe66760c750) at plex.c:301 >> frame #3: 0x0000000101e0ead7 >libpetsc.3.8.dylib`VecView_Plex(v=0x00007fe66880e820, >> viewer=0x00007fe66760c750) at plex.c:348 >> >> keeps a linked list of vectors that are to be viewed and the vectors >are >> the same Vec because they are obtained with DMGetLocalVector(). >> >> The safest fix is to have PetscViewerVTKAddField_VTK() do a >VecDuplicate() >> on the vector passed in and store that in the linked list instead of >just >> storing a pointer to the passed in vector (which might and can be >> overwritten before all the linked vectors are actually stored). >> > >Danyang, > >Barry is right, and the bug can be fixed the way he says. However, this >points out why VTK is bad format. I think a better choice is >to use HDF5 and XDMF. For example, in my code now I always use > > DMVIewFromOptions(dm, NULL, "-dm_view"); > >and then later (perhaps several times) > > VecViewFromOptions(u, NULL, "-u_vec_view") > VecViewFromOptions(v, NULL, "-v_vec_view") > >and then on the command line > > -dm_view hdf5:test.h5 -u_vec_view hdf5:test.h5::append -v_vec_view >hdf5:test.h5::append > >which produces a file > > test.h5 > >Then I run > > $PETSC_DIR/bin/petsc_gen_xdmf.py test.h5 > >which produces another file > > test.xmf > >This can be loaded by Paraview for visualization. > > Thanks, > > Matt > > > >> >> Barry >> >> >> >> > On Feb 27, 2018, at 10:44 PM, Danyang Su >wrote: >> > >> > Hi All, >> > >> > How to set different object names when using multiple VecView? I >try to >> use PetscObjectSetName with multiple output, but the object name is >> overwritten by the last one. >> > >> > As shown below, as well as the enclosed files as example, the >vector >> name in sol.vtk is vec_v for both vector u and v. 
>> > >> > call PetscViewerCreate(PETSC_COMM_WORLD, viewer, >> ierr);CHKERRA(ierr) >> > call PetscViewerSetType(viewer, PETSCVIEWERVTK, >ierr);CHKERRA(ierr) >> > call PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_VTK, >> ierr);CHKERRA(ierr) >> > call PetscViewerFileSetName(viewer, 'sol.vtk', >ierr);CHKERRA(ierr) >> > >> > call PetscObjectSetName(u, 'vec_u', ierr);CHKERRA(ierr) >> > call VecView(u, viewer, ierr);CHKERRA(ierr) >> > >> > call PetscObjectSetName(v, 'vec_v', ierr);CHKERRA(ierr) >> > call VecView(v, viewer, ierr);CHKERRA(ierr) >> > >> > call PetscViewerDestroy(viewer, ierr);CHKERRA(ierr) >> > >> > call DMRestoreGlobalVector(dm, u, ierr);CHKERRA(ierr) >> > call DMRestoreGlobalVector(dm, v, ierr);CHKERRA(ierr) >> > >> > Thanks, >> > >> > Danyang >> > >> > >> >> > > >-- >What most experimenters take for granted before they begin their >experiments is infinitely more interesting than any results to which >their >experiments lead. >-- Norbert Wiener > >https://www.cse.buffalo.edu/~knepley/ -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Wed Feb 28 09:45:14 2018 From: zonexo at gmail.com (TAY wee-beng) Date: Wed, 28 Feb 2018 23:45:14 +0800 Subject: [petsc-users] Scaling problem when cores > 600 Message-ID: <26ed65df-90d0-1f27-5d66-b6488bef3b90@gmail.com> Hi, I have a CFD code which uses PETSc and HYPRE. I found that for a certain case with grid size of 192,570,048, I encounter scaling problem when my cores > 600. At 600 cores, the code took 10min for 100 time steps. At 960, 1440 and 2880 cores, it still takes around 10min. At 360 cores, it took 15min. So how can I find the bottleneck? Any recommended steps? I must also mention that I partition my grid only in the x and y direction. There is no partitioning in the z direction due to limited code development. I wonder if there is a strong effect in this case. -- Thank you very much Yours sincerely, ================================================ TAY Wee-Beng ??? (Zheng Weiming) Personal research webpage: http://tayweebeng.wixsite.com/website Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA linkedin: www.linkedin.com/in/tay-weebeng ================================================ From knepley at gmail.com Wed Feb 28 10:10:16 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 28 Feb 2018 11:10:16 -0500 Subject: [petsc-users] Scaling problem when cores > 600 In-Reply-To: <26ed65df-90d0-1f27-5d66-b6488bef3b90@gmail.com> References: <26ed65df-90d0-1f27-5d66-b6488bef3b90@gmail.com> Message-ID: On Wed, Feb 28, 2018 at 10:45 AM, TAY wee-beng wrote: > Hi, > > I have a CFD code which uses PETSc and HYPRE. I found that for a certain > case with grid size of 192,570,048, I encounter scaling problem when my > cores > 600. At 600 cores, the code took 10min for 100 time steps. At 960, > 1440 and 2880 cores, it still takes around 10min. At 360 cores, it took > 15min. > > So how can I find the bottleneck? Any recommended steps? > For any performance question, we need to see the output of -log_view for all test cases. > I must also mention that I partition my grid only in the x and y > direction. There is no partitioning in the z direction due to limited code > development. I wonder if there is a strong effect in this case. Maybe. Usually what happens is you fill up memory with a z-column and cannot scale further. 
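While you are collecting the -log_view output for the different core counts, it can also help to wrap the time-stepping loop (or just the solves) in a user-defined logging stage, so that phase gets its own section in the summary. A minimal Fortran sketch (the stage name, the loop and the solve calls are placeholders, and the usual PETSc Fortran includes are assumed):

      PetscLogStage solve_stage
      PetscErrorCode ierr

      call PetscLogStageRegister('Flow solves', solve_stage, ierr)
      do step = 1, nsteps
         call PetscLogStagePush(solve_stage, ierr)
         ! momentum and Poisson solves for this time step go here
         call PetscLogStagePop(ierr)
      end do

Everything executed between the push and the pop is reported under 'Flow solves' in the -log_view table, which makes it easier to see which phase stops scaling.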
Thanks, Matt > > -- > Thank you very much > > Yours sincerely, > > ================================================ > TAY Wee-Beng ??? (Zheng Weiming) > Personal research webpage: http://tayweebeng.wixsite.com/website > Youtube research showcase: https://www.youtube.com/channe > l/UC72ZHtvQNMpNs2uRTSToiLA > linkedin: www.linkedin.com/in/tay-weebeng > ================================================ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 28 10:17:30 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 28 Feb 2018 16:17:30 +0000 Subject: [petsc-users] object name overwritten in VecView In-Reply-To: References: <24502b17-7101-e7cb-7dd9-2fc981a61213@gmail.com> Message-ID: <37E456DA-DC44-4BFE-BAA7-CD74FBA4E377@mcs.anl.gov> It turns out the fix is really easy. Here is a patch. Apply it with patch -p1 < barry-vtk.patch then do make gnumake all in $PETSC_DIR > On Feb 28, 2018, at 9:07 AM, Danyang Su wrote: > > Hi Matt? > > Thanks for your suggestion and I will use xmf instead. > > Regards, > > Danyang > > On February 28, 2018 3:58:08 AM PST, Matthew Knepley wrote: > On Wed, Feb 28, 2018 at 12:39 AM, Smith, Barry F. wrote: > > Matt, > > I have confirmed this is reproducible and a bug. The problem arises because > > frame #0: 0x000000010140625a libpetsc.3.8.dylib`PetscViewerVTKAddField_VTK(viewer=0x00007fe66760c750, dm=0x00007fe668810820, PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll at plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, vec=0x00007fe66880ee20) at vtkv.c:140 > frame #1: 0x0000000101404e6e libpetsc.3.8.dylib`PetscViewerVTKAddField(viewer=0x00007fe66760c750, dm=0x00007fe668810820, PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll at plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, vec=0x00007fe66880ee20) at vtkv.c:46 > frame #2: 0x0000000101e0b7c3 libpetsc.3.8.dylib`VecView_Plex_Local(v=0x00007fe66880ee20, viewer=0x00007fe66760c750) at plex.c:301 > frame #3: 0x0000000101e0ead7 libpetsc.3.8.dylib`VecView_Plex(v=0x00007fe66880e820, viewer=0x00007fe66760c750) at plex.c:348 > > keeps a linked list of vectors that are to be viewed and the vectors are the same Vec because they are obtained with DMGetLocalVector(). > > The safest fix is to have PetscViewerVTKAddField_VTK() do a VecDuplicate() on the vector passed in and store that in the linked list instead of just storing a pointer to the passed in vector (which might and can be overwritten before all the linked vectors are actually stored). > > Danyang, > > Barry is right, and the bug can be fixed the way he says. However, this points out why VTK is bad format. I think a better choice is > to use HDF5 and XDMF. For example, in my code now I always use > > DMVIewFromOptions(dm, NULL, "-dm_view"); > > and then later (perhaps several times) > > VecViewFromOptions(u, NULL, "-u_vec_view") > VecViewFromOptions(v, NULL, "-v_vec_view") > > and then on the command line > > -dm_view hdf5:test.h5 -u_vec_view hdf5:test.h5::append -v_vec_view hdf5:test.h5::append > > which produces a file > > test.h5 > > Then I run > > $PETSC_DIR/bin/petsc_gen_xdmf.py test.h5 > > which produces another file > > test.xmf > > This can be loaded by Paraview for visualization. 
> > Thanks, > > Matt > > > > Barry > > > > > On Feb 27, 2018, at 10:44 PM, Danyang Su wrote: > > > > Hi All, > > > > How to set different object names when using multiple VecView? I try to use PetscObjectSetName with multiple output, but the object name is overwritten by the last one. > > > > As shown below, as well as the enclosed files as example, the vector name in sol.vtk is vec_v for both vector u and v. > > > > call PetscViewerCreate(PETSC_COMM_WORLD, viewer, ierr);CHKERRA(ierr) > > call PetscViewerSetType(viewer, PETSCVIEWERVTK, ierr);CHKERRA(ierr) > > call PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_VTK, ierr);CHKERRA(ierr) > > call PetscViewerFileSetName(viewer, 'sol.vtk', ierr);CHKERRA(ierr) > > > > call PetscObjectSetName(u, 'vec_u', ierr);CHKERRA(ierr) > > call VecView(u, viewer, ierr);CHKERRA(ierr) > > > > call PetscObjectSetName(v, 'vec_v', ierr);CHKERRA(ierr) > > call VecView(v, viewer, ierr);CHKERRA(ierr) > > > > call PetscViewerDestroy(viewer, ierr);CHKERRA(ierr) > > > > call DMRestoreGlobalVector(dm, u, ierr);CHKERRA(ierr) > > call DMRestoreGlobalVector(dm, v, ierr);CHKERRA(ierr) > > > > Thanks, > > > > Danyang > > > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > -- > Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: barry-vtk.patch Type: application/octet-stream Size: 2092 bytes Desc: barry-vtk.patch URL: From danyang.su at gmail.com Wed Feb 28 10:59:37 2018 From: danyang.su at gmail.com (Danyang Su) Date: Wed, 28 Feb 2018 08:59:37 -0800 Subject: [petsc-users] object name overwritten in VecView In-Reply-To: <37E456DA-DC44-4BFE-BAA7-CD74FBA4E377@mcs.anl.gov> References: <24502b17-7101-e7cb-7dd9-2fc981a61213@gmail.com> <37E456DA-DC44-4BFE-BAA7-CD74FBA4E377@mcs.anl.gov> Message-ID: <1df19bff-580f-9b5c-b888-ec337135577b@gmail.com> Hi Barry and Matt, Thanks for your quick response. Considering the output performance, as well as the long-term plan of PETSc development, which format would you suggest? I personally prefer the data format that can be post-processed by Paraview as our sequential code (written without PETSc) is also using Paraview compatible data format. XDMF sounds promising as suggested by Matt. Thanks, Danyang On 18-02-28 08:17 AM, Smith, Barry F. wrote: > > ? It turns out the fix is really easy. Here is a patch. > > ? Apply it with > > ??? patch -p1 < barry-vtk.patch > > ? then do > > ??? make gnumake > > ? all in $PETSC_DIR > > > > > On Feb 28, 2018, at 9:07 AM, Danyang Su wrote: > > > > Hi Matt? > > > > Thanks for your suggestion and I will use xmf instead. > > > > Regards, > > > > Danyang > > > > On February 28, 2018 3:58:08 AM PST, Matthew Knepley > wrote: > > On Wed, Feb 28, 2018 at 12:39 AM, Smith, Barry F. > wrote: > > > >?? Matt, > > > >?? I have confirmed this is reproducible and a bug. The problem > arises because > > > > frame #0: 0x000000010140625a > libpetsc.3.8.dylib`PetscViewerVTKAddField_VTK(viewer=0x00007fe66760c750, > dm=0x00007fe668810820, > PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll at > plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, > vec=0x00007fe66880ee20) at vtkv.c:140 > >???? 
frame #1: 0x0000000101404e6e > libpetsc.3.8.dylib`PetscViewerVTKAddField(viewer=0x00007fe66760c750, > dm=0x00007fe668810820, > PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll at > plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, > vec=0x00007fe66880ee20) at vtkv.c:46 > >???? frame #2: 0x0000000101e0b7c3 > libpetsc.3.8.dylib`VecView_Plex_Local(v=0x00007fe66880ee20, > viewer=0x00007fe66760c750) at plex.c:301 > >???? frame #3: 0x0000000101e0ead7 > libpetsc.3.8.dylib`VecView_Plex(v=0x00007fe66880e820, > viewer=0x00007fe66760c750) at plex.c:348 > > > > keeps a linked list of vectors that are to be viewed and the vectors > are the same Vec because they are obtained with DMGetLocalVector(). > > > > The safest fix is to have PetscViewerVTKAddField_VTK() do a > VecDuplicate() on the vector passed in and store that in the linked > list instead of just storing a pointer to the passed in vector (which > might and can be overwritten before all the linked vectors are > actually stored). > > > > Danyang, > > > > Barry is right, and the bug can be fixed the way he says. However, > this points out why VTK is bad format. I think a better choice is > > to use HDF5 and XDMF. For example, in my code now I always use > > > >?? DMVIewFromOptions(dm, NULL, "-dm_view"); > > > > and then later (perhaps several times) > > > >?? VecViewFromOptions(u, NULL, "-u_vec_view") > >?? VecViewFromOptions(v, NULL, "-v_vec_view") > > > > and then on the command line > > > >?? -dm_view hdf5:test.h5 -u_vec_view hdf5:test.h5::append -v_vec_view > hdf5:test.h5::append > > > > which produces a file > > > >?? test.h5 > > > > Then I run > > > >?? $PETSC_DIR/bin/petsc_gen_xdmf.py test.h5 > > > > which produces another file > > > >?? test.xmf > > > > This can be loaded by Paraview for visualization. > > > >?? Thanks, > > > >????? Matt > > > > > > > >?? Barry > > > > > > > > > On Feb 27, 2018, at 10:44 PM, Danyang Su wrote: > > > > > > Hi All, > > > > > > How to set different object names when using multiple VecView? I > try to use PetscObjectSetName with multiple output, but the object > name is overwritten by the last one. > > > > > > As shown below, as well as the enclosed files as example, the > vector name in sol.vtk is vec_v for both vector u and v. > > > > > >????? call PetscViewerCreate(PETSC_COMM_WORLD, viewer, > ierr);CHKERRA(ierr) > > >????? call PetscViewerSetType(viewer, PETSCVIEWERVTK, > ierr);CHKERRA(ierr) > > >????? call PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_VTK, > ierr);CHKERRA(ierr) > > >????? call PetscViewerFileSetName(viewer, 'sol.vtk', > ierr);CHKERRA(ierr) > > > > > >????? call PetscObjectSetName(u, 'vec_u', ierr);CHKERRA(ierr) > > >????? call VecView(u, viewer, ierr);CHKERRA(ierr) > > > > > >????? call PetscObjectSetName(v, 'vec_v', ierr);CHKERRA(ierr) > > >????? call VecView(v, viewer, ierr);CHKERRA(ierr) > > > > > >????? call PetscViewerDestroy(viewer, ierr);CHKERRA(ierr) > > > > > >????? call DMRestoreGlobalVector(dm, u, ierr);CHKERRA(ierr) > > >????? call DMRestoreGlobalVector(dm, v, ierr);CHKERRA(ierr) > > > > > > Thanks, > > > > > > Danyang > > > > > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > > -- > > Sent from my Android device with K-9 Mail. Please excuse my brevity. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knepley at gmail.com Wed Feb 28 11:16:33 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 28 Feb 2018 12:16:33 -0500 Subject: [petsc-users] object name overwritten in VecView In-Reply-To: <1df19bff-580f-9b5c-b888-ec337135577b@gmail.com> References: <24502b17-7101-e7cb-7dd9-2fc981a61213@gmail.com> <37E456DA-DC44-4BFE-BAA7-CD74FBA4E377@mcs.anl.gov> <1df19bff-580f-9b5c-b888-ec337135577b@gmail.com> Message-ID: On Wed, Feb 28, 2018 at 11:59 AM, Danyang Su wrote: > Hi Barry and Matt, > > Thanks for your quick response. Considering the output performance, as > well as the long-term plan of PETSc development, which format would you > suggest? I personally prefer the data format that can be post-processed by > Paraview as our sequential code (written without PETSc) is also using > Paraview compatible data format. XDMF sounds promising as suggested by Matt. > I definitely suggest the HDF5+XDMF route. Thanks, Matt > Thanks, > > Danyang > On 18-02-28 08:17 AM, Smith, Barry F. wrote: > > > It turns out the fix is really easy. Here is a patch. > > Apply it with > > patch -p1 < barry-vtk.patch > > then do > > make gnumake > > all in $PETSC_DIR > > > > > On Feb 28, 2018, at 9:07 AM, Danyang Su > wrote: > > > > Hi Matt? > > > > Thanks for your suggestion and I will use xmf instead. > > > > Regards, > > > > Danyang > > > > On February 28, 2018 3:58:08 AM PST, Matthew Knepley > wrote: > > On Wed, Feb 28, 2018 at 12:39 AM, Smith, Barry F. > wrote: > > > > Matt, > > > > I have confirmed this is reproducible and a bug. The problem arises > because > > > > frame #0: 0x000000010140625a libpetsc.3.8.dylib` > PetscViewerVTKAddField_VTK(viewer=0x00007fe66760c750, > dm=0x00007fe668810820, PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll > at plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, > vec=0x00007fe66880ee20) at vtkv.c:140 > > frame #1: 0x0000000101404e6e libpetsc.3.8.dylib` > PetscViewerVTKAddField(viewer=0x00007fe66760c750, dm=0x00007fe668810820, > PetscViewerVTKWriteFunction=(libpetsc.3.8.dylib`DMPlexVTKWriteAll at > plexvtk.c:633), fieldtype=PETSC_VTK_POINT_FIELD, vec=0x00007fe66880ee20) > at vtkv.c:46 > > frame #2: 0x0000000101e0b7c3 libpetsc.3.8.dylib`VecView_ > Plex_Local(v=0x00007fe66880ee20, viewer=0x00007fe66760c750) at plex.c:301 > > frame #3: 0x0000000101e0ead7 libpetsc.3.8.dylib`VecView_Plex(v=0x00007fe66880e820, > viewer=0x00007fe66760c750) at plex.c:348 > > > > keeps a linked list of vectors that are to be viewed and the vectors are > the same Vec because they are obtained with DMGetLocalVector(). > > > > The safest fix is to have PetscViewerVTKAddField_VTK() do a > VecDuplicate() on the vector passed in and store that in the linked list > instead of just storing a pointer to the passed in vector (which might and > can be overwritten before all the linked vectors are actually stored). > > > > Danyang, > > > > Barry is right, and the bug can be fixed the way he says. However, this > points out why VTK is bad format. I think a better choice is > > to use HDF5 and XDMF. 
For example, in my code now I always use > > > > DMVIewFromOptions(dm, NULL, "-dm_view"); > > > > and then later (perhaps several times) > > > > VecViewFromOptions(u, NULL, "-u_vec_view") > > VecViewFromOptions(v, NULL, "-v_vec_view") > > > > and then on the command line > > > > -dm_view hdf5:test.h5 -u_vec_view hdf5:test.h5::append -v_vec_view > hdf5:test.h5::append > > > > which produces a file > > > > test.h5 > > > > Then I run > > > > $PETSC_DIR/bin/petsc_gen_xdmf.py test.h5 > > > > which produces another file > > > > test.xmf > > > > This can be loaded by Paraview for visualization. > > > > Thanks, > > > > Matt > > > > > > > > Barry > > > > > > > > > On Feb 27, 2018, at 10:44 PM, Danyang Su > wrote: > > > > > > Hi All, > > > > > > How to set different object names when using multiple VecView? I try > to use PetscObjectSetName with multiple output, but the object name is > overwritten by the last one. > > > > > > As shown below, as well as the enclosed files as example, the vector > name in sol.vtk is vec_v for both vector u and v. > > > > > > call PetscViewerCreate(PETSC_COMM_WORLD, viewer, > ierr);CHKERRA(ierr) > > > call PetscViewerSetType(viewer, PETSCVIEWERVTK, > ierr);CHKERRA(ierr) > > > call PetscViewerPushFormat(viewer, PETSC_VIEWER_ASCII_VTK, > ierr);CHKERRA(ierr) > > > call PetscViewerFileSetName(viewer, 'sol.vtk', ierr);CHKERRA(ierr) > > > > > > call PetscObjectSetName(u, 'vec_u', ierr);CHKERRA(ierr) > > > call VecView(u, viewer, ierr);CHKERRA(ierr) > > > > > > call PetscObjectSetName(v, 'vec_v', ierr);CHKERRA(ierr) > > > call VecView(v, viewer, ierr);CHKERRA(ierr) > > > > > > call PetscViewerDestroy(viewer, ierr);CHKERRA(ierr) > > > > > > call DMRestoreGlobalVector(dm, u, ierr);CHKERRA(ierr) > > > call DMRestoreGlobalVector(dm, v, ierr);CHKERRA(ierr) > > > > > > Thanks, > > > > > > Danyang > > > > > > > > > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > > -- Norbert Wiener > > > > https://www.cse.buffalo.edu/~knepley/ > > > > -- > > Sent from my Android device with K-9 Mail. Please excuse my brevity. > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.appel17 at imperial.ac.uk Wed Feb 28 12:23:54 2018 From: t.appel17 at imperial.ac.uk (Thibaut Appel) Date: Wed, 28 Feb 2018 18:23:54 +0000 Subject: [petsc-users] Malloc error with 'correct' preallocation? In-Reply-To: <997E68CE-08F5-4981-97BC-17C3BBE4E891@anl.gov> References: <673B72FC-4252-447A-B90F-731002BE3C18@ic.ac.uk> <997E68CE-08F5-4981-97BC-17C3BBE4E891@anl.gov> Message-ID: Good afternoon, It looks like, after further investigation, that I wasn't filling the diagonal element on some rows and hence not allocating for it - which triggers an error as you must leave room and set the diagonal entry even it is zero according to the MatMPIAIJSetPreallocation documentation, not but mentioned on the MatSeqAIJSetPreallocation page even though this is pure logic. Besides that I didn't find anything that does not behave as expected! Thanks for your support, Thibaut On 28/02/18 01:33, Smith, Barry F. 
wrote: >> On Feb 27, 2018, at 7:05 PM, Appel, Thibaut wrote: >> >> Dear PETSc developers and users, >> >> I am forming a sparse matrix in complex, double precision arithmetic and can?t understand why I have ?PETSC ERROR: Argument out of range - New nonzero at (X,X) caused a malloc? during the assembly of the matrix. >> >> This matrix discretizes a PDE in 2D using a finite difference method in both spatial directions and, in short, here is the ensemble of routines I call: >> >> CALL MatCreate(PETSC_COMM_WORLD,MatA,ierr) >> CALL MatSetType(MatA,MATAIJ,ierr) >> CALL MatSetSizes(MatA,PETSC_DECIDE,PETSC_DECIDE,leading_dimension,leading_dimension,ierr) >> CALL MatSeqAIJSetPreallocation(MatA,0,PETSC_NULL_INTEGER,ierr) >> CALL MatMPIAIJSetPreallocation(MatA,0,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,ierr) >> CALL MatGetOwnershipRange(MatA,istart,iend,ierr) >> >> Then I preallocate my matrix doing tests on the derivative coefficient arrays like >> >> ALLOCATE(nnz(istart:iend-1), dnz(istart:iend-1), onz(istart:iend-1)) >> nnz = 0 >> dnz = 0 >> onz = 0 >> DO row= istart, iend-1 >> >> *detect the nonzero elements* >> IF (ABS(this_derivative_coef) > 0.D0) THEN >> nnz(row) = nnz(row) + corresponding_stencil_size >> DO *elements in stencil* >> *compute corresponding column* >> IF ((col >= istart) .AND. (col <= (iend-1))) THEN >> dnz(row) = dnz(row) + 1 >> ELSE >> onz(row) = onz(row) + 1 >> END IF >> END DO >> END IF >> >> END DO >> CALL MatSeqAIJSetPreallocation(MatA,PETSC_DEFAULT_INTEGER,nnz,ierr) >> CALL MatMPIAIJSetPreallocation(MatA,PETSC_DEFAULT_INTEGER,dnz,PETSC_DEFAULT_INTEGER,onz,ierr) >> CALL MatSetOption(MatA,MAT_IGNORE_ZERO_ENTRIES,PETSC_TRUE,ierr) >> >> And assemble the matrix, at each row, with the different derivatives terms (pure x derivative, pure y derivative, cross xy derivative?) >> >> DO row = istart, iend-1 >> >> cols(0) = *compute corresponding column* >> vals(0) = no_derivative_coef >> CALL MatSetValues(MatA,1,row,1,cols,vals,ADD_VALUES,ierr) >> >> DO m=0,x_order >> cols(m) = *compute corresponding column* >> vals(m) = x_derivative_coef >> END DO >> CALL MatSetValues(MatA,1,row,x_order+1,cols,vals,ADD_VALUES,ierr) >> >> DO m=0,y_order >> cols(m) = *compute corresponding column* >> vals(m) = y_derivative_coef >> END DO >> CALL MatSetValues(MatA,1,row,y_order+1,cols,vals,ADD_VALUES,ierr) >> >> DO n=0,y_order >> DO m=0,x_order >> cols(..) = *compute corresponding column* >> vals(..)= xy_derivative_coef >> END DO >> END DO >> CALL MatSetValues(MatA,1,row,(x_order+1)*(y_order+1),cols,vals,ADD_VALUES,ierr) >> >> END DO >> >> CALL MatAssemblyBegin(MatA,MAT_FINAL_ASSEMBLY,ierr) >> CALL MatAssemblyEnd(MatA,MAT_FINAL_ASSEMBLY,ierr) >> >> I am using ADD_VALUES as the different loops here-above can contribute to the same column. >> >> The approach I chose is therefore preallocating without overestimating the non-zero elements and hope that the MAT_IGNORE_ZERO_ENTRIES option discards the 'vals(?)' who are exactly zero during the assembly (I read that the criteria PETSc uses to do so is ?== 0.0?) so with the test I used everything should work fine. >> >> However, when testing with 1 MPI process, I have this malloc problem appearing at a certain row. >> >> I print the corresponding nnz(row) I allocated for this row, say NZ. >> I print for each (cols, vals) computed in the loops above the test (vals /= zero) to number the non-zero elements and notice that the malloc error appears when I insert a block of non-zero elements in which the last one is the NZth! 
>> >> In other words, why a malloc if the 'correct' number of elements is allocated? > Don't know > >> Is there something wrong with my understanding of ADD_VALUES? > Unlikely > >> I read somewhere that it is good to call both MatSeqAIJSetPreallocation and MatMPIAIJSetPreallocation as PETSc will automatically call the right one whether it is a serial or parallel computation. > Yes this is true > >> Is it better to use MatXAIJSetPreallocation? > In your case no > >> Thank you in advance for any advice that could put me on the trail to correctness, and I would appreciate any correction should I do something that looks silly. > Nothing looks silly. > > The easiest approach to tracking down the problem is to run in the debugger and put break points at the key points and make sure it behaves as expected at those points. > > Could you send the code? Preferably small and easy to build. If I can reproduce the problem I can track down what is going on. > > Barry > >> Kind regards, >> >> >> Thibaut >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 28 12:59:07 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Wed, 28 Feb 2018 18:59:07 +0000 Subject: [petsc-users] Multi-preconditioned Krylov In-Reply-To: References: Message-ID: <65EF4411-B312-4600-A6B1-D5EB0564FCA6@anl.gov> Tried to read the papers, couldn't follow, them but my guess is you need to copy the PETSc CG KSP routines and rework them as a new KSP type for these algorithms. Barry > On Feb 28, 2018, at 3:39 AM, Karin&NiKo wrote: > > Dear PETSc team, > > I would like to experiment multi-preconditioned Krylov methods, as presented in the paper or Bridson and Greif (https://www.cs.ubc.ca/~rbridson/mpcg/) and more specifically in a context of DD like in the paper of Spillane (https://hal.archives-ouvertes.fr/hal-01170059/document) or Gosselet (https://hal.archives-ouvertes.fr/hal-01056928/document). > > It seems to me this is not a basically supported feature of PETSc. > > My question : > Is there a way of doing it by playing with the runtime options of PETSc or do I have to implement it in the core of the KSP object? If so, what would be the correct design? > > Thanks, > Nicolas > From jed at jedbrown.org Wed Feb 28 13:32:42 2018 From: jed at jedbrown.org (Jed Brown) Date: Wed, 28 Feb 2018 12:32:42 -0700 Subject: [petsc-users] Multi-preconditioned Krylov In-Reply-To: <65EF4411-B312-4600-A6B1-D5EB0564FCA6@anl.gov> References: <65EF4411-B312-4600-A6B1-D5EB0564FCA6@anl.gov> Message-ID: <878tbcucyt.fsf@jedbrown.org> Nicole has expressed interested in helping with a PETSc implementation, she just doesn't have much experience with PETSc yet and I haven't had time to prioritize doing it myself. If you want to develop a PETSc implementation, I would suggest reaching out to her and Cc'ing me. "Smith, Barry F." writes: > Tried to read the papers, couldn't follow, them but my guess is you need to copy the PETSc CG KSP routines and rework them as a new KSP type for these algorithms. > > Barry > > >> On Feb 28, 2018, at 3:39 AM, Karin&NiKo wrote: >> >> Dear PETSc team, >> >> I would like to experiment multi-preconditioned Krylov methods, as presented in the paper or Bridson and Greif (https://www.cs.ubc.ca/~rbridson/mpcg/) and more specifically in a context of DD like in the paper of Spillane (https://hal.archives-ouvertes.fr/hal-01170059/document) or Gosselet (https://hal.archives-ouvertes.fr/hal-01056928/document). 
>> >> It seems to me this is not a basically supported feature of PETSc. >> >> My question : >> Is there a way of doing it by playing with the runtime options of PETSc or do I have to implement it in the core of the KSP object? If so, what would be the correct design? >> >> Thanks, >> Nicolas >> From knepley at gmail.com Wed Feb 28 13:40:17 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 28 Feb 2018 14:40:17 -0500 Subject: [petsc-users] Multi-preconditioned Krylov In-Reply-To: <878tbcucyt.fsf@jedbrown.org> References: <65EF4411-B312-4600-A6B1-D5EB0564FCA6@anl.gov> <878tbcucyt.fsf@jedbrown.org> Message-ID: If you do this, please please please also update NGMRES since it looks like you would do a similar linear least squares thing with the directions. It should be factored out. SLEPc has a nice TSQR that we could pull into PETSc for this. Maty On Feb 28, 2018 14:36, "Jed Brown" wrote: > Nicole has expressed interested in helping with a PETSc implementation, > she just doesn't have much experience with PETSc yet and I haven't had > time to prioritize doing it myself. If you want to develop a PETSc > implementation, I would suggest reaching out to her and Cc'ing me. > > "Smith, Barry F." writes: > > > Tried to read the papers, couldn't follow, them but my guess is > you need to copy the PETSc CG KSP routines and rework them as a new KSP > type for these algorithms. > > > > Barry > > > > > >> On Feb 28, 2018, at 3:39 AM, Karin&NiKo wrote: > >> > >> Dear PETSc team, > >> > >> I would like to experiment multi-preconditioned Krylov methods, as > presented in the paper or Bridson and Greif (https://www.cs.ubc.ca/~ > rbridson/mpcg/) and more specifically in a context of DD like in the > paper of Spillane (https://hal.archives-ouvertes.fr/hal-01170059/document) > or Gosselet (https://hal.archives-ouvertes.fr/hal-01056928/document). > >> > >> It seems to me this is not a basically supported feature of PETSc. > >> > >> My question : > >> Is there a way of doing it by playing with the runtime options of PETSc > or do I have to implement it in the core of the KSP object? If so, what > would be the correct design? > >> > >> Thanks, > >> Nicolas > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Wed Feb 28 20:01:01 2018 From: zonexo at gmail.com (TAY wee-beng) Date: Thu, 1 Mar 2018 10:01:01 +0800 Subject: [petsc-users] Scaling problem when cores > 600 In-Reply-To: References: <26ed65df-90d0-1f27-5d66-b6488bef3b90@gmail.com> Message-ID: <5a76371a-0eb1-f204-4974-549865ce726d@gmail.com> On 1/3/2018 12:10 AM, Matthew Knepley wrote: > On Wed, Feb 28, 2018 at 10:45 AM, TAY wee-beng > wrote: > > Hi, > > I have a CFD code which uses PETSc and HYPRE. I found that for a > certain case with grid size of 192,570,048, I encounter scaling > problem when my cores > 600. At 600 cores, the code took 10min for > 100 time steps. At 960, 1440 and 2880 cores, it still takes around > 10min. At 360 cores, it took 15min. > > So how can I find the bottleneck? Any recommended steps? > > > For any performance question, we need to see the output of -log_view > for all test cases. Hi, To be more specific, I use PETSc KSPBCGS and HYPRE geometric multigrid (entirely based on HYPRE, no PETSc) for the momentum and Poisson eqns in my code. So can log_view be used in this case to give a meaningful? Since part of the code uses HYPRE? I also program another subroutine in the past which uses PETSc to solve the Poisson eqn. 
It uses either HYPRE's boomeramg, KSPBCGS or KSPGMRES. If I use boomeramg, can log_view be used in this case? Or do I have to use KSPBCGS or KSPGMRES, which is directly from PETSc? However, I ran KSPGMRES yesterday with the Poisson eqn and my ans didn't converge. Thanks. > > I must also mention that I partition my grid only in the x and y > direction. There is no partitioning in the z direction due to > limited code development. I wonder if there is a strong effect in > this case. > > > Maybe. Usually what happens is you fill up memory with a z-column and > cannot scale further. > > ? Thanks, > > ? ? ?Matt > > > -- > Thank you very much > > Yours sincerely, > > ================================================ > TAY Wee-Beng ??? (Zheng Weiming) > Personal research webpage: http://tayweebeng.wixsite.com/website > > Youtube research showcase: > https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA > > linkedin: www.linkedin.com/in/tay-weebeng > > ================================================ > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Wed Feb 28 20:07:10 2018 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 28 Feb 2018 21:07:10 -0500 Subject: [petsc-users] Scaling problem when cores > 600 In-Reply-To: <5a76371a-0eb1-f204-4974-549865ce726d@gmail.com> References: <26ed65df-90d0-1f27-5d66-b6488bef3b90@gmail.com> <5a76371a-0eb1-f204-4974-549865ce726d@gmail.com> Message-ID: On Wed, Feb 28, 2018 at 9:01 PM, TAY wee-beng wrote: > > On 1/3/2018 12:10 AM, Matthew Knepley wrote: > > On Wed, Feb 28, 2018 at 10:45 AM, TAY wee-beng wrote: > >> Hi, >> >> I have a CFD code which uses PETSc and HYPRE. I found that for a certain >> case with grid size of 192,570,048, I encounter scaling problem when my >> cores > 600. At 600 cores, the code took 10min for 100 time steps. At 960, >> 1440 and 2880 cores, it still takes around 10min. At 360 cores, it took >> 15min. >> >> So how can I find the bottleneck? Any recommended steps? >> > > For any performance question, we need to see the output of -log_view for > all test cases. > > Hi, > > To be more specific, I use PETSc KSPBCGS and HYPRE geometric multigrid > (entirely based on HYPRE, no PETSc) for the momentum and Poisson eqns in my > code. > > So can log_view be used in this case to give a meaningful? Since part of > the code uses HYPRE? > Make an event to time the HYPRE solve. It only takes a few lines of code. > I also program another subroutine in the past which uses PETSc to solve > the Poisson eqn. It uses either HYPRE's boomeramg, KSPBCGS or KSPGMRES. > > If I use boomeramg, can log_view be used in this case? > Yes, its automatic. > Or do I have to use KSPBCGS or KSPGMRES, which is directly from PETSc? > However, I ran KSPGMRES yesterday with the Poisson eqn and my ans didn't > converge. > Plain GMRES is not good for Poisson. You would be better off with GMRES/GAMG. Thanks, Matt > Thanks. > > > >> I must also mention that I partition my grid only in the x and y >> direction. There is no partitioning in the z direction due to limited code >> development. I wonder if there is a strong effect in this case. > > > Maybe. Usually what happens is you fill up memory with a z-column and > cannot scale further. 
> > Thanks, > > Matt > > >> >> -- >> Thank you very much >> >> Yours sincerely, >> >> ================================================ >> TAY Wee-Beng ??? (Zheng Weiming) >> Personal research webpage: http://tayweebeng.wixsite.com/website >> Youtube research showcase: https://www.youtube.com/channe >> l/UC72ZHtvQNMpNs2uRTSToiLA >> linkedin: www.linkedin.com/in/tay-weebeng >> ================================================ >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > https://www.cse.buffalo.edu/~knepley/ > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener https://www.cse.buffalo.edu/~knepley/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfadams at lbl.gov Wed Feb 28 21:16:08 2018 From: mfadams at lbl.gov (Mark Adams) Date: Wed, 28 Feb 2018 22:16:08 -0500 Subject: [petsc-users] Scaling problem when cores > 600 In-Reply-To: <5a76371a-0eb1-f204-4974-549865ce726d@gmail.com> References: <26ed65df-90d0-1f27-5d66-b6488bef3b90@gmail.com> <5a76371a-0eb1-f204-4974-549865ce726d@gmail.com> Message-ID: > > > Or do I have to use KSPBCGS or KSPGMRES, which is directly from PETSc? > However, I ran KSPGMRES yesterday with the Poisson eqn and my ans didn't > converge. > As Matt said GMRES is not great for symmetric operators like Poisson and you can use CG for the KSP method. HYPRE and GAMG are both fine preconditioners. Try them both. And I would suggest not worrying about scaling until you get a solver that converges. Get your solver working well on small problems and as long as you are not relying on fundamentally unscalable algorithms you should be able to get scalability later. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Feb 28 22:14:38 2018 From: bsmith at mcs.anl.gov (Smith, Barry F.) Date: Thu, 1 Mar 2018 04:14:38 +0000 Subject: [petsc-users] Scaling problem when cores > 600 In-Reply-To: <5a76371a-0eb1-f204-4974-549865ce726d@gmail.com> References: <26ed65df-90d0-1f27-5d66-b6488bef3b90@gmail.com> <5a76371a-0eb1-f204-4974-549865ce726d@gmail.com> Message-ID: > On Feb 28, 2018, at 8:01 PM, TAY wee-beng wrote: > > > On 1/3/2018 12:10 AM, Matthew Knepley wrote: >> On Wed, Feb 28, 2018 at 10:45 AM, TAY wee-beng wrote: >> Hi, >> >> I have a CFD code which uses PETSc and HYPRE. I found that for a certain case with grid size of 192,570,048, I encounter scaling problem when my cores > 600. At 600 cores, the code took 10min for 100 time steps. At 960, 1440 and 2880 cores, it still takes around 10min. At 360 cores, it took 15min. >> >> So how can I find the bottleneck? Any recommended steps? >> >> For any performance question, we need to see the output of -log_view for all test cases. > Hi, > > To be more specific, I use PETSc KSPBCGS and HYPRE geometric multigrid (entirely based on HYPRE, no PETSc) for the momentum and Poisson eqns in my code. > > So can log_view be used in this case to give a meaningful? Since part of the code uses HYPRE? Yes, just send the logs. > > I also program another subroutine in the past which uses PETSc to solve the Poisson eqn. It uses either HYPRE's boomeramg, KSPBCGS or KSPGMRES. > > If I use boomeramg, can log_view be used in this case? 
> > Or do I have to use KSPBCGS or KSPGMRES, which is directly from PETSc? However, I ran KSPGMRES yesterday with the Poisson eqn and my ans didn't converge. > > Thanks. >> >> I must also mention that I partition my grid only in the x and y direction. There is no partitioning in the z direction due to limited code development. I wonder if there is a strong effect in this case. >> >> Maybe. Usually what happens is you fill up memory with a z-column and cannot scale further. >> >> Thanks, >> >> Matt >> >> >> -- >> Thank you very much >> >> Yours sincerely, >> >> ================================================ >> TAY Wee-Beng ??? (Zheng Weiming) >> Personal research webpage: http://tayweebeng.wixsite.com/website >> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA >> linkedin: www.linkedin.com/in/tay-weebeng >> ================================================ >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener >> >> https://www.cse.buffalo.edu/~knepley/ > From zonexo at gmail.com Wed Feb 28 22:59:02 2018 From: zonexo at gmail.com (TAY wee-beng) Date: Thu, 1 Mar 2018 12:59:02 +0800 Subject: [petsc-users] Scaling problem when cores > 600 In-Reply-To: References: <26ed65df-90d0-1f27-5d66-b6488bef3b90@gmail.com> <5a76371a-0eb1-f204-4974-549865ce726d@gmail.com> Message-ID: <0b7a2713-5548-70bc-9768-9f76dc344bac@gmail.com> On 1/3/2018 10:07 AM, Matthew Knepley wrote: > On Wed, Feb 28, 2018 at 9:01 PM, TAY wee-beng > wrote: > > > On 1/3/2018 12:10 AM, Matthew Knepley wrote: >> On Wed, Feb 28, 2018 at 10:45 AM, TAY wee-beng > > wrote: >> >> Hi, >> >> I have a CFD code which uses PETSc and HYPRE. I found that >> for a certain case with grid size of 192,570,048, I encounter >> scaling problem when my cores > 600. At 600 cores, the code >> took 10min for 100 time steps. At 960, 1440 and 2880 cores, >> it still takes around 10min. At 360 cores, it took 15min. >> >> So how can I find the bottleneck? Any recommended steps? >> >> >> For any performance question, we need to see the output of >> -log_view for all test cases. > Hi, > > To be more specific, I use PETSc KSPBCGS and HYPRE geometric > multigrid (entirely based on HYPRE, no PETSc) for the momentum and > Poisson eqns in my code. > > So can log_view be used in this case to give a meaningful? Since > part of the code uses HYPRE? > > > Make an event to time the HYPRE solve. It only takes a few lines of code. Hi, I check PETSc and found some routines which can be used to time the HYPRE solve, like PetscGetTime and PetscGetCPUTime. And then using: PetscLogDouble t1, t2; ??? ierr = PetscGetCPUTime(&t1);CHKERRQ(ierr); ??? ... code to time ... ??? ierr = PetscGetCPUTime(&t2);CHKERRQ(ierr); ??? printf("Code took %f CPU seconds\n", t2-t1); Are these 2 routines suitable? Which one should I use? > > I also program another subroutine in the past which uses PETSc to > solve the Poisson eqn. It uses either HYPRE's boomeramg, KSPBCGS > or KSPGMRES. > > If I use boomeramg, can log_view be used in this case? > > > Yes, its automatic. > > Or do I have to use KSPBCGS or KSPGMRES, which is directly from > PETSc? However, I ran KSPGMRES yesterday with the Poisson eqn and > my ans didn't converge. > > > Plain GMRES is not good for Poisson. You would be better off with > GMRES/GAMG. > > ? Thanks, > > ? ? ?Matt > > Thanks. 
>> 
>> I must also mention that I partition my grid only in the x and y direction. There is no partitioning in the z direction due to limited code development. I wonder if there is a strong effect in this case.
>> 
>> Maybe. Usually what happens is you fill up memory with a z-column and cannot scale further.
>> 
>>   Thanks,
>> 
>>      Matt
>> 
>> 
>> --
>> Thank you very much
>> 
>> Yours sincerely,
>> 
>> ================================================
>> TAY Wee-Beng ??? (Zheng Weiming)
>> Personal research webpage: http://tayweebeng.wixsite.com/website
>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA
>> linkedin: www.linkedin.com/in/tay-weebeng
>> ================================================
>> 
>> 
>> --
>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>> -- Norbert Wiener
>> 
>> https://www.cse.buffalo.edu/~knepley/
>> 
> 
> 
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bsmith at mcs.anl.gov  Wed Feb 28 23:04:19 2018
From: bsmith at mcs.anl.gov (Smith, Barry F.)
Date: Thu, 1 Mar 2018 05:04:19 +0000
Subject: [petsc-users] Scaling problem when cores > 600
In-Reply-To: <0b7a2713-5548-70bc-9768-9f76dc344bac@gmail.com>
References: <26ed65df-90d0-1f27-5d66-b6488bef3b90@gmail.com>
	<5a76371a-0eb1-f204-4974-549865ce726d@gmail.com>
	<0b7a2713-5548-70bc-9768-9f76dc344bac@gmail.com>
Message-ID: 

> On Feb 28, 2018, at 10:59 PM, TAY wee-beng wrote:
> 
> 
> On 1/3/2018 10:07 AM, Matthew Knepley wrote:
>> On Wed, Feb 28, 2018 at 9:01 PM, TAY wee-beng wrote:
>> 
>> On 1/3/2018 12:10 AM, Matthew Knepley wrote:
>>> On Wed, Feb 28, 2018 at 10:45 AM, TAY wee-beng wrote:
>>> Hi,
>>> 
>>> I have a CFD code which uses PETSc and HYPRE. I found that for a certain case with grid size of 192,570,048, I encounter a scaling problem when my cores > 600. At 600 cores, the code took 10min for 100 time steps. At 960, 1440 and 2880 cores, it still takes around 10min. At 360 cores, it took 15min.
>>> 
>>> So how can I find the bottleneck? Any recommended steps?
>>> 
>>> For any performance question, we need to see the output of -log_view for all test cases.
>> Hi,
>> 
>> To be more specific, I use PETSc KSPBCGS and HYPRE geometric multigrid (entirely based on HYPRE, no PETSc) for the momentum and Poisson eqns in my code.
>> 
>> So can log_view be used in this case to give a meaningful picture, since part of the code uses HYPRE?
>> 
>> Make an event to time the HYPRE solve. It only takes a few lines of code.
> Hi,
> 
> I checked PETSc and found some routines which can be used to time the HYPRE solve, like PetscGetTime and PetscGetCPUTime.
> 
> And then using:
> 
>     PetscLogDouble t1, t2;
> 
>     ierr = PetscGetCPUTime(&t1);CHKERRQ(ierr);
>     ... code to time ...
>     ierr = PetscGetCPUTime(&t2);CHKERRQ(ierr);
>     printf("Code took %f CPU seconds\n", t2-t1);
> 
> Are these 2 routines suitable? Which one should I use?

   Absolutely not, please don't use those things. Matt specifically mentioned PetscLogEventRegister(). Please look at the manual page for that and use it together with PetscLogEventBegin() and PetscLogEventEnd().

>> 
>> I also programmed another subroutine in the past which uses PETSc to solve the Poisson eqn.
>> It uses either HYPRE's boomeramg, KSPBCGS or KSPGMRES.
>> 
>> If I use boomeramg, can log_view be used in this case?
>> 
>> Yes, its automatic.
>> 
>> Or do I have to use KSPBCGS or KSPGMRES, which is directly from PETSc? However, I ran KSPGMRES yesterday with the Poisson eqn and my ans didn't converge.
>> 
>> Plain GMRES is not good for Poisson. You would be better off with GMRES/GAMG.
>> 
>> Thanks,
>> 
>>    Matt
>> 
>> Thanks.
>>> 
>>> I must also mention that I partition my grid only in the x and y direction. There is no partitioning in the z direction due to limited code development. I wonder if there is a strong effect in this case.
>>> 
>>> Maybe. Usually what happens is you fill up memory with a z-column and cannot scale further.
>>> 
>>> Thanks,
>>> 
>>>    Matt
>>> 
>>> 
>>> --
>>> Thank you very much
>>> 
>>> Yours sincerely,
>>> 
>>> ================================================
>>> TAY Wee-Beng ??? (Zheng Weiming)
>>> Personal research webpage: http://tayweebeng.wixsite.com/website
>>> Youtube research showcase: https://www.youtube.com/channel/UC72ZHtvQNMpNs2uRTSToiLA
>>> linkedin: www.linkedin.com/in/tay-weebeng
>>> ================================================
>>> 
>>> 
>>> --
>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>> -- Norbert Wiener
>>> 
>>> https://www.cse.buffalo.edu/~knepley/
>> 
>> 
>> --
>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>> -- Norbert Wiener
>> 
>> https://www.cse.buffalo.edu/~knepley/
> 
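
Pulling the advice in this thread together: if the Poisson solve goes through a PETSc KSP and the code calls KSPSetFromOptions(), the Krylov method, the preconditioner and the logging can all be selected at run time without code changes. A possible pair of runs is sketched below; the executable name ./my_cfd and the core count are placeholders, and -ksp_type cg assumes the discretized Poisson operator really is symmetric (otherwise stay with bcgs).

    # CG with PETSc's algebraic multigrid (GAMG)
    mpiexec -n 600 ./my_cfd -ksp_type cg -pc_type gamg \
        -ksp_converged_reason -log_view

    # CG with hypre BoomerAMG as the preconditioner
    mpiexec -n 600 ./my_cfd -ksp_type cg -pc_type hypre -pc_hypre_type boomeramg \
        -ksp_converged_reason -log_view

-ksp_converged_reason reports whether each solve actually converged, which, as Mark suggests above, is worth settling before any scaling study; -log_view then provides the per-event timings to compare across core counts.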