From vyan2000 at gmail.com Fri May 1 17:04:24 2009 From: vyan2000 at gmail.com (Ryan Yan) Date: Fri, 1 May 2009 18:04:24 -0400 Subject: MPIAIJ and MatSetValuesBlocked Message-ID: Hi, all, I am using MPIAIJ for my matrix A, but when I call the function MatSetValuesBlocked, I got the error: PetscPrintf(PETSC_COMM_WORLD, "breakpoint 1\n"); ierr = MatSetValuesBlocked(A,1,&irow,1,(col_ind+icol),temp_vector,INSERT_VALUES); PetscPrintf(PETSC_COMM_WORLD, "breakpoint 2\n"); breakpoint 1 [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: No support for this operation for this object type! [0]PETSC ERROR: Mat type mpiaij! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 10:02:49 CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: /home/vyan2000/local/PPETSc/petsc-2.3.3-p15/src/ksp/ksp/examples/tutorials/ttt2/kspex1reader_binmpiaij on a linux-gnu named vyan2000-linux by vyan2000 Fri May 1 17:58:48 2009 [0]PETSC ERROR: Libraries linked from /home/vyan2000/local/PPETSc/petsc-2.3.3-p15/lib/linux-gnu-c-debug [0]PETSC ERROR: Configure run at Thu Feb 5 21:10:10 2009 [0]PETSC ERROR: Configure options --with-mpi-dir=/usr/lib/ --with-debugger=gdb --with-shared=0 --download-hypre=1 --download-parmetis=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatSetValuesBlocked() line 1289 in src/mat/interface/matrix.c breakpoint 2 Is it the reason that MatSetValuesBlocked only works for MPIBAIJ or SeqBAIJ? Thanks, Yan -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri May 1 17:12:04 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 1 May 2009 17:12:04 -0500 Subject: MPIAIJ and MatSetValuesBlocked In-Reply-To: References: Message-ID: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov> Support for MatSetValuesBlocked() for AIJ matrices was added in PETSc 3.0.0 Note that if your matrix is truly blocked you should use BAIJ matrices, if your matrix is not truly blocked then there is no benefit to using MatSetValuesBlocked() it was added so people could easily switch between AIJ and BAIJ for testing. Barry On May 1, 2009, at 5:04 PM, Ryan Yan wrote: > Hi, all, > I am using MPIAIJ for my matrix A, but when I call the function > MatSetValuesBlocked, I got the error: > > PetscPrintf(PETSC_COMM_WORLD, "breakpoint 1\n"); > ierr = MatSetValuesBlocked(A,1,&irow,1,(col_ind > +icol),temp_vector,INSERT_VALUES); > PetscPrintf(PETSC_COMM_WORLD, "breakpoint 2\n"); > > > breakpoint 1 > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: No support for this operation for this object type! > [0]PETSC ERROR: Mat type mpiaij! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 > 10:02:49 CDT 2008 HG revision: > 31306062cd1a6f6a2496fccb4878f485c9b91760 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. 
> [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: /home/vyan2000/local/PPETSc/petsc-2.3.3-p15/src/ksp/ > ksp/examples/tutorials/ttt2/kspex1reader_binmpiaij on a linux-gnu > named vyan2000-linux by vyan2000 Fri May 1 17:58:48 2009 > [0]PETSC ERROR: Libraries linked from /home/vyan2000/local/PPETSc/ > petsc-2.3.3-p15/lib/linux-gnu-c-debug > [0]PETSC ERROR: Configure run at Thu Feb 5 21:10:10 2009 > [0]PETSC ERROR: Configure options --with-mpi-dir=/usr/lib/ --with- > debugger=gdb --with-shared=0 --download-hypre=1 --download-parmetis=1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: MatSetValuesBlocked() line 1289 in src/mat/interface/ > matrix.c > breakpoint 2 > > Is it the reason that MatSetValuesBlocked only works for MPIBAIJ or > SeqBAIJ? > > Thanks, > > Yan From liuchangjohn at gmail.com Sun May 3 04:42:08 2009 From: liuchangjohn at gmail.com (liu chang) Date: Sun, 3 May 2009 17:42:08 +0800 Subject: Cast matrix as a vector? Message-ID: <94e43e390905030242g7b90eb84t56202b6707f29391@mail.gmail.com> I'm using TAO's LMVM method for optimize a dense matrix, but TAO expects its input as a Vec. Do I have to copy the content back and forth between a Mat and a Vec? Can I somehow cast the Mat into a Vec? Both the Mat and the Vec are distributed evenly across the processes so ideally there doesn't need to be any copying at all. Thanks, Liu Chang From liuchangjohn at gmail.com Sun May 3 04:51:50 2009 From: liuchangjohn at gmail.com (liu chang) Date: Sun, 3 May 2009 17:51:50 +0800 Subject: Convex optimization with linear constraint? In-Reply-To: References: <94e43e390904291213q8845391vf1e37d06174a0c4@mail.gmail.com> Message-ID: <94e43e390905030251i141cb38es51483dee95ce8482@mail.gmail.com> Thanks. I made LMVM optimize over a reduced number of variables and solve the rest from the linear equations. On Thu, Apr 30, 2009 at 3:37 AM, David Fuentes wrote: > > Liu, > > Unless I'm missing something, I don't think you will directly find what > you are looking for. ?You will prob have to solve > > ? A * vec_x = vec_b > > directly in your FormObjective function then use an adjoint method or > something to compute your gradient directly > in your FormGradient routine. > > > > > On Thu, 30 Apr 2009, liu chang wrote: > >> I'm using PETSc + TAO's LMVM method for a convex optimization problem. >> As the project progresses, it's clear that some linear constraints are >> also needed, the problem now looks like: >> >> minimize f(vec_x) (f is neither linear or quadratic, but is convex) >> subject to >> A * vec_x = vec_b >> >> As LMVM does not support linear constraints, I'm looking for another >> solver. TAO lists several functions dealing with constraints, but >> they're all in the developer section, and in the samples linked from >> the manual I haven't found one that's linearly constrained. Is there a >> suitable one in TAO? >> >> Liu Chang >> > From knepley at gmail.com Sun May 3 12:16:12 2009 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 3 May 2009 12:16:12 -0500 Subject: Cast matrix as a vector? In-Reply-To: <94e43e390905030242g7b90eb84t56202b6707f29391@mail.gmail.com> References: <94e43e390905030242g7b90eb84t56202b6707f29391@mail.gmail.com> Message-ID: 1) I guarantee you the copy takes no time. Measure it. 2) If you are worried about memory, PETSc Vec and dense matrix can have the same layout and share a pointer. 
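A minimal sketch of the pointer-sharing mentioned in point 2), based on the VecCreateMPIWithArray() and MatCreateMPIDense() manual pages linked later in this thread. The function name, sizes, and error handling here are illustrative assumptions, and the calling sequences follow the PETSc 3.0-era interfaces (several of them changed in later releases):

#include "petscmat.h"

/* One locally owned array backs both a parallel dense Mat and a Vec.
   m = local rows, M = global rows, N = global columns; the MPIDense
   local block holds m*N scalars in column-major order, and the Vec
   simply views those same m*N scalars, so nothing is copied. */
PetscErrorCode ShareStorage(MPI_Comm comm,PetscInt m,PetscInt M,PetscInt N)
{
  PetscErrorCode ierr;
  PetscScalar    *data;
  Mat            A;
  Vec            x;

  PetscFunctionBegin;
  ierr = PetscMalloc(m*N*sizeof(PetscScalar),&data);CHKERRQ(ierr);
  ierr = MatCreateMPIDense(comm,m,PETSC_DECIDE,M,N,data,&A);CHKERRQ(ierr);
  ierr = VecCreateMPIWithArray(comm,m*N,PETSC_DECIDE,data,&x);CHKERRQ(ierr);
  /* ... hand x to TAO, use A wherever a Mat is required ... */
  ierr = VecDestroy(x);CHKERRQ(ierr);
  ierr = MatDestroy(A);CHKERRQ(ierr);
  ierr = PetscFree(data);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Because both objects were created with a user-provided array, neither destroy call releases the storage; it is freed once with PetscFree().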
Matt On Sun, May 3, 2009 at 4:42 AM, liu chang wrote: > I'm using TAO's LMVM method for optimize a dense matrix, but TAO > expects its input as a Vec. Do I have to copy the content back and > forth between a Mat and a Vec? Can I somehow cast the Mat into a Vec? > > Both the Mat and the Vec are distributed evenly across the processes > so ideally there doesn't need to be any copying at all. > > Thanks, > Liu Chang > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuchangjohn at gmail.com Sun May 3 12:59:40 2009 From: liuchangjohn at gmail.com (liu chang) Date: Mon, 4 May 2009 01:59:40 +0800 Subject: Cast matrix as a vector? In-Reply-To: References: <94e43e390905030242g7b90eb84t56202b6707f29391@mail.gmail.com> Message-ID: <94e43e390905031059h10a7fc3cicd75d6f242a1e7bc@mail.gmail.com> Thanks Matt. > 2) If you are worried about memory, PETSc Vec and dense > ???? matrix can have the same layout and share a pointer. That's the ideal solution but how can I do that? I can't find a way to create a Mat or Vec from an existing pointer. Regards, Liu Chang > On Sun, May 3, 2009 at 4:42 AM, liu chang wrote: >> >> I'm using TAO's LMVM method for optimize a dense matrix, but TAO >> expects its input as a Vec. Do I have to copy the content back and >> forth between a Mat and a Vec? Can I somehow cast the Mat into a Vec? >> >> Both the Mat and the Vec are distributed evenly across the processes >> so ideally there doesn't need to be any copying at all. >> >> Thanks, >> Liu Chang > > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. > -- Norbert Wiener > From knepley at gmail.com Sun May 3 13:35:00 2009 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 3 May 2009 13:35:00 -0500 Subject: Cast matrix as a vector? In-Reply-To: <94e43e390905031059h10a7fc3cicd75d6f242a1e7bc@mail.gmail.com> References: <94e43e390905030242g7b90eb84t56202b6707f29391@mail.gmail.com> <94e43e390905031059h10a7fc3cicd75d6f242a1e7bc@mail.gmail.com> Message-ID: On Sun, May 3, 2009 at 12:59 PM, liu chang wrote: > Thanks Matt. > > > 2) If you are worried about memory, PETSc Vec and dense > > matrix can have the same layout and share a pointer. > > That's the ideal solution but how can I do that? I can't find a way to > create a Mat or Vec from an existing pointer. http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Vec/VecCreateMPIWithArray.html http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatCreateMPIDense.html Matt > > Regards, > Liu Chang > > > On Sun, May 3, 2009 at 4:42 AM, liu chang > wrote: > >> > >> I'm using TAO's LMVM method for optimize a dense matrix, but TAO > >> expects its input as a Vec. Do I have to copy the content back and > >> forth between a Mat and a Vec? Can I somehow cast the Mat into a Vec? > >> > >> Both the Mat and the Vec are distributed evenly across the processes > >> so ideally there doesn't need to be any copying at all. > >> > >> Thanks, > >> Liu Chang > > > > > > > > -- > > What most experimenters take for granted before they begin their > experiments > > is infinitely more interesting than any results to which their > experiments > > lead. 
> > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From vyan2000 at gmail.com Mon May 4 13:37:44 2009 From: vyan2000 at gmail.com (Ryan Yan) Date: Mon, 4 May 2009 14:37:44 -0400 Subject: MPIAIJ and MatSetValuesBlocked In-Reply-To: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov> References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov> Message-ID: Hi Barry, My matrix is read from files stroing a matrix in the Block CRS format. So a natural way to create the matrix is MPIBAIJ. However, if I want to use MatSetValuesBlocked() for the newly created MPIBAIJ, I still need to load a tempary arrary(from Block CRS file) with the length of blocksize^2 and pass it into the Matrix via MatSetValuesBlocked(). This process is similar to the MatSetValues. Could you make a little bit more clarification on why the MatSetValuesBlocked() have some advantage on blocked structure? Thanks, Yan On Fri, May 1, 2009 at 6:12 PM, Barry Smith wrote: > > Support for MatSetValuesBlocked() for AIJ matrices was added in PETSc > 3.0.0 > > Note that if your matrix is truly blocked you should use BAIJ matrices, if > your matrix is not truly blocked then there is no benefit to using > MatSetValuesBlocked() it was added so people could easily switch between AIJ > and BAIJ for testing. > > Barry > > > On May 1, 2009, at 5:04 PM, Ryan Yan wrote: > > Hi, all, >> I am using MPIAIJ for my matrix A, but when I call the function >> MatSetValuesBlocked, I got the error: >> >> PetscPrintf(PETSC_COMM_WORLD, "breakpoint 1\n"); >> ierr = >> MatSetValuesBlocked(A,1,&irow,1,(col_ind+icol),temp_vector,INSERT_VALUES); >> PetscPrintf(PETSC_COMM_WORLD, "breakpoint 2\n"); >> >> >> breakpoint 1 >> [0]PETSC ERROR: --------------------- Error Message >> ------------------------------------ >> [0]PETSC ERROR: No support for this operation for this object type! >> [0]PETSC ERROR: Mat type mpiaij! >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 10:02:49 >> CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >> [0]PETSC ERROR: See docs/index.html for manual pages. >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: >> /home/vyan2000/local/PPETSc/petsc-2.3.3-p15/src/ksp/ksp/examples/tutorials/ttt2/kspex1reader_binmpiaij >> on a linux-gnu named vyan2000-linux by vyan2000 Fri May 1 17:58:48 2009 >> [0]PETSC ERROR: Libraries linked from >> /home/vyan2000/local/PPETSc/petsc-2.3.3-p15/lib/linux-gnu-c-debug >> [0]PETSC ERROR: Configure run at Thu Feb 5 21:10:10 2009 >> [0]PETSC ERROR: Configure options --with-mpi-dir=/usr/lib/ >> --with-debugger=gdb --with-shared=0 --download-hypre=1 --download-parmetis=1 >> [0]PETSC ERROR: >> ------------------------------------------------------------------------ >> [0]PETSC ERROR: MatSetValuesBlocked() line 1289 in >> src/mat/interface/matrix.c >> breakpoint 2 >> >> Is it the reason that MatSetValuesBlocked only works for MPIBAIJ or >> SeqBAIJ? >> >> Thanks, >> >> Yan >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Mon May 4 13:45:04 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 4 May 2009 13:45:04 -0500 (CDT) Subject: MPIAIJ and MatSetValuesBlocked In-Reply-To: References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov> Message-ID: If you have a 3x3 block [i.e 9 values]. And compare MatSetValues() vs MatSetValuesBlocked() - then the differences are: 1. 9 row,col indices provided for MatSetValues() vs 1-row,col index for the block 2. The internal code for MatSetValues might have to loop over all 9 indices and do 9 searches/checks [for the correct location with the matrix]. In the MatSetValuesBlocked() case - its just 1 search/check - and all the 9 values copied into the internal structure.. 3. there could potentiall be 9 function calls with MatSetValues() vs 1 for MatSetValuesBlocked() Even if you have to format the data a bit [perhaps copy into array[9]] that overhead might be less than the loop-search/insert overhead of MatSetValues(). Satish On Mon, 4 May 2009, Ryan Yan wrote: > Hi Barry, > My matrix is read from files stroing a matrix in the Block CRS format. So a > natural way to create the matrix is MPIBAIJ. However, if I want to > use MatSetValuesBlocked() for the newly created MPIBAIJ, I still need > to load a tempary arrary(from Block CRS file) with the length of blocksize^2 > and pass it into the Matrix via MatSetValuesBlocked(). This process is > similar to the MatSetValues. Could you make a little bit more clarification > on why the MatSetValuesBlocked() have some advantage on blocked structure? > > Thanks, > > Yan > > > > On Fri, May 1, 2009 at 6:12 PM, Barry Smith wrote: > > > > > Support for MatSetValuesBlocked() for AIJ matrices was added in PETSc > > 3.0.0 > > > > Note that if your matrix is truly blocked you should use BAIJ matrices, if > > your matrix is not truly blocked then there is no benefit to using > > MatSetValuesBlocked() it was added so people could easily switch between AIJ > > and BAIJ for testing. > > > > Barry > > > > > > On May 1, 2009, at 5:04 PM, Ryan Yan wrote: > > > > Hi, all, > >> I am using MPIAIJ for my matrix A, but when I call the function > >> MatSetValuesBlocked, I got the error: > >> > >> PetscPrintf(PETSC_COMM_WORLD, "breakpoint 1\n"); > >> ierr = > >> MatSetValuesBlocked(A,1,&irow,1,(col_ind+icol),temp_vector,INSERT_VALUES); > >> PetscPrintf(PETSC_COMM_WORLD, "breakpoint 2\n"); > >> > >> > >> breakpoint 1 > >> [0]PETSC ERROR: --------------------- Error Message > >> ------------------------------------ > >> [0]PETSC ERROR: No support for this operation for this object type! > >> [0]PETSC ERROR: Mat type mpiaij! > >> [0]PETSC ERROR: > >> ------------------------------------------------------------------------ > >> [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 10:02:49 > >> CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 > >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. > >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > >> [0]PETSC ERROR: See docs/index.html for manual pages. 
> >> [0]PETSC ERROR: > >> ------------------------------------------------------------------------ > >> [0]PETSC ERROR: > >> /home/vyan2000/local/PPETSc/petsc-2.3.3-p15/src/ksp/ksp/examples/tutorials/ttt2/kspex1reader_binmpiaij > >> on a linux-gnu named vyan2000-linux by vyan2000 Fri May 1 17:58:48 2009 > >> [0]PETSC ERROR: Libraries linked from > >> /home/vyan2000/local/PPETSc/petsc-2.3.3-p15/lib/linux-gnu-c-debug > >> [0]PETSC ERROR: Configure run at Thu Feb 5 21:10:10 2009 > >> [0]PETSC ERROR: Configure options --with-mpi-dir=/usr/lib/ > >> --with-debugger=gdb --with-shared=0 --download-hypre=1 --download-parmetis=1 > >> [0]PETSC ERROR: > >> ------------------------------------------------------------------------ > >> [0]PETSC ERROR: MatSetValuesBlocked() line 1289 in > >> src/mat/interface/matrix.c > >> breakpoint 2 > >> > >> Is it the reason that MatSetValuesBlocked only works for MPIBAIJ or > >> SeqBAIJ? > >> > >> Thanks, > >> > >> Yan > >> > > > > > From jed at 59A2.org Mon May 4 14:40:46 2009 From: jed at 59A2.org (Jed Brown) Date: Mon, 04 May 2009 21:40:46 +0200 Subject: MPIAIJ and MatSetValuesBlocked In-Reply-To: References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov> Message-ID: <49FF44BE.6040904@59A2.org> Ryan Yan wrote: > Could you make a little bit more clarification > on why the MatSetValuesBlocked() have some advantage on blocked structure? In addition to the assembly advantages that Satish pointed out, BAIJ requires less storage for the column indices, effectively improving the arithmetic intensity of many kernels, and speeding up matrix factorization (e.g. symbolic factorization only needs to compute fill in terms of blocks instead of individual elements). The use of inodes with AIJ (default when applicable) reduces the memory bandwidth requirements of the column indices, turns point relaxation smoothers (SOR) into stronger block relaxation, and allows a certain amount of unrolling. BAIJ requires even less metadata, provides more regular memory access, and does more unrolling. If your matrix is truly blocked, BAIJ should provide better performance with all preconditioners that support it. Many third-party preconditioners will not work with BAIJ, so it is useful to give your matrix a prefix (or check the options database if you are getting your matrix from a DA or similar) so that you can set it's type with -foo_mat_type when using a preconditioner that requires it. Jed -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 260 bytes Desc: OpenPGP digital signature URL: From vyan2000 at gmail.com Mon May 4 14:52:19 2009 From: vyan2000 at gmail.com (Ryan Yan) Date: Mon, 4 May 2009 15:52:19 -0400 Subject: MPIAIJ and MatSetValuesBlocked In-Reply-To: References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov> Message-ID: Hi, Satish, It is very illustrating! Thank you very much! Yan On Mon, May 4, 2009 at 2:45 PM, Satish Balay wrote: > If you have a 3x3 block [i.e 9 values]. And compare MatSetValues() vs > MatSetValuesBlocked() - then the differences are: > > 1. 9 row,col indices provided for MatSetValues() vs 1-row,col index > for the block > > 2. The internal code for MatSetValues might have to loop over all 9 > indices and do 9 searches/checks [for the correct location with the > matrix]. In the MatSetValuesBlocked() case - its just 1 search/check - > and all the 9 values copied into the internal structure.. > > 3. 
there could potentiall be 9 function calls with MatSetValues() vs 1 > for MatSetValuesBlocked() > > Even if you have to format the data a bit [perhaps copy into array[9]] > that overhead might be less than the loop-search/insert overhead of > MatSetValues(). > > Satish > > On Mon, 4 May 2009, Ryan Yan wrote: > > > Hi Barry, > > My matrix is read from files stroing a matrix in the Block CRS format. So > a > > natural way to create the matrix is MPIBAIJ. However, if I want to > > use MatSetValuesBlocked() for the newly created MPIBAIJ, I still need > > to load a tempary arrary(from Block CRS file) with the length of > blocksize^2 > > and pass it into the Matrix via MatSetValuesBlocked(). This process is > > similar to the MatSetValues. Could you make a little bit more > clarification > > on why the MatSetValuesBlocked() have some advantage on blocked > structure? > > > > Thanks, > > > > Yan > > > > > > > > On Fri, May 1, 2009 at 6:12 PM, Barry Smith wrote: > > > > > > > > Support for MatSetValuesBlocked() for AIJ matrices was added in PETSc > > > 3.0.0 > > > > > > Note that if your matrix is truly blocked you should use BAIJ > matrices, if > > > your matrix is not truly blocked then there is no benefit to using > > > MatSetValuesBlocked() it was added so people could easily switch > between AIJ > > > and BAIJ for testing. > > > > > > Barry > > > > > > > > > On May 1, 2009, at 5:04 PM, Ryan Yan wrote: > > > > > > Hi, all, > > >> I am using MPIAIJ for my matrix A, but when I call the function > > >> MatSetValuesBlocked, I got the error: > > >> > > >> PetscPrintf(PETSC_COMM_WORLD, "breakpoint 1\n"); > > >> ierr = > > >> > MatSetValuesBlocked(A,1,&irow,1,(col_ind+icol),temp_vector,INSERT_VALUES); > > >> PetscPrintf(PETSC_COMM_WORLD, "breakpoint 2\n"); > > >> > > >> > > >> breakpoint 1 > > >> [0]PETSC ERROR: --------------------- Error Message > > >> ------------------------------------ > > >> [0]PETSC ERROR: No support for this operation for this object type! > > >> [0]PETSC ERROR: Mat type mpiaij! > > >> [0]PETSC ERROR: > > >> > ------------------------------------------------------------------------ > > >> [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 > 10:02:49 > > >> CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 > > >> [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > >> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > >> [0]PETSC ERROR: See docs/index.html for manual pages. > > >> [0]PETSC ERROR: > > >> > ------------------------------------------------------------------------ > > >> [0]PETSC ERROR: > > >> > /home/vyan2000/local/PPETSc/petsc-2.3.3-p15/src/ksp/ksp/examples/tutorials/ttt2/kspex1reader_binmpiaij > > >> on a linux-gnu named vyan2000-linux by vyan2000 Fri May 1 17:58:48 > 2009 > > >> [0]PETSC ERROR: Libraries linked from > > >> /home/vyan2000/local/PPETSc/petsc-2.3.3-p15/lib/linux-gnu-c-debug > > >> [0]PETSC ERROR: Configure run at Thu Feb 5 21:10:10 2009 > > >> [0]PETSC ERROR: Configure options --with-mpi-dir=/usr/lib/ > > >> --with-debugger=gdb --with-shared=0 --download-hypre=1 > --download-parmetis=1 > > >> [0]PETSC ERROR: > > >> > ------------------------------------------------------------------------ > > >> [0]PETSC ERROR: MatSetValuesBlocked() line 1289 in > > >> src/mat/interface/matrix.c > > >> breakpoint 2 > > >> > > >> Is it the reason that MatSetValuesBlocked only works for MPIBAIJ or > > >> SeqBAIJ? 
> > >> > > >> Thanks, > > >> > > >> Yan > > >> > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlmackie862 at gmail.com Mon May 4 14:56:18 2009 From: rlmackie862 at gmail.com (Randall Mackie) Date: Mon, 04 May 2009 12:56:18 -0700 Subject: MPIAIJ and MatSetValuesBlocked In-Reply-To: <49FF44BE.6040904@59A2.org> References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov> <49FF44BE.6040904@59A2.org> Message-ID: <49FF4862.5020607@gmail.com> Jed, Can you explain how to tell if the matrix is truly blocked? What's the difference between a blocked matrix and one with several degrees of freedom at each node, or are they the same thing? I'm solving Maxwell's equations in 3D, so I have three vector field components at each node, is that what you mean by blocked? Thanks, Randy Jed Brown wrote: > Ryan Yan wrote: >> Could you make a little bit more clarification >> on why the MatSetValuesBlocked() have some advantage on blocked structure? > > In addition to the assembly advantages that Satish pointed out, BAIJ > requires less storage for the column indices, effectively improving the > arithmetic intensity of many kernels, and speeding up matrix > factorization (e.g. symbolic factorization only needs to compute fill in > terms of blocks instead of individual elements). The use of inodes with > AIJ (default when applicable) reduces the memory bandwidth requirements > of the column indices, turns point relaxation smoothers (SOR) into > stronger block relaxation, and allows a certain amount of unrolling. > BAIJ requires even less metadata, provides more regular memory access, > and does more unrolling. > > If your matrix is truly blocked, BAIJ should provide better performance > with all preconditioners that support it. Many third-party > preconditioners will not work with BAIJ, so it is useful to give your > matrix a prefix (or check the options database if you are getting your > matrix from a DA or similar) so that you can set it's type with > -foo_mat_type when using a preconditioner that requires it. > > Jed > From knepley at gmail.com Mon May 4 14:59:37 2009 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 4 May 2009 14:59:37 -0500 Subject: MPIAIJ and MatSetValuesBlocked In-Reply-To: <49FF4862.5020607@gmail.com> References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov> <49FF44BE.6040904@59A2.org> <49FF4862.5020607@gmail.com> Message-ID: On Mon, May 4, 2009 at 2:56 PM, Randall Mackie wrote: > Jed, > > Can you explain how to tell if the matrix is truly blocked? What's the > difference between a blocked matrix and one with several degrees of freedom > at each node, or are they the same thing? I'm solving Maxwell's equations > in 3D, so I have three vector field components at each node, is that what > you mean by blocked? Yes, that is blocked. Matt > > Thanks, Randy > > > Jed Brown wrote: > >> Ryan Yan wrote: >> >>> Could you make a little bit more clarification >>> on why the MatSetValuesBlocked() have some advantage on blocked >>> structure? >>> >> >> In addition to the assembly advantages that Satish pointed out, BAIJ >> requires less storage for the column indices, effectively improving the >> arithmetic intensity of many kernels, and speeding up matrix >> factorization (e.g. symbolic factorization only needs to compute fill in >> terms of blocks instead of individual elements). 
The use of inodes with >> AIJ (default when applicable) reduces the memory bandwidth requirements >> of the column indices, turns point relaxation smoothers (SOR) into >> stronger block relaxation, and allows a certain amount of unrolling. >> BAIJ requires even less metadata, provides more regular memory access, >> and does more unrolling. >> >> If your matrix is truly blocked, BAIJ should provide better performance >> with all preconditioners that support it. Many third-party >> preconditioners will not work with BAIJ, so it is useful to give your >> matrix a prefix (or check the options database if you are getting your >> matrix from a DA or similar) so that you can set it's type with >> -foo_mat_type when using a preconditioner that requires it. >> >> Jed >> >> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon May 4 15:05:08 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 4 May 2009 15:05:08 -0500 Subject: MPIAIJ and MatSetValuesBlocked In-Reply-To: References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov> <49FF44BE.6040904@59A2.org> <49FF4862.5020607@gmail.com> Message-ID: <725C9B3B-DA29-4CFD-AD9A-624FDE96EA39@mcs.anl.gov> On May 4, 2009, at 2:59 PM, Matthew Knepley wrote: > On Mon, May 4, 2009 at 2:56 PM, Randall Mackie > wrote: > Jed, > > Can you explain how to tell if the matrix is truly blocked? What's the > difference between a blocked matrix and one with several degrees of > freedom > at each node, or are they the same thing? I'm solving Maxwell's > equations > in 3D, so I have three vector field components at each node, is that > what > you mean by blocked? > > Yes, that is blocked. The 3 components at each node create a little tiny blocks representing the coupling between the various components at the node. If each component is coupled to all other components at that node then the little blocks are dense. If there is only partial coupling then the little blocks are sparse. BAIJ treats the little blocks as dense, so if they are truly dense you get the best performance. If they are "mostly filled in" then you get better performance with BAIJ (over AIJ) but with the cost of doing some extra computations on the zeros in the blocks. Barry > > > Matt > > > Thanks, Randy > > > Jed Brown wrote: > Ryan Yan wrote: > Could you make a little bit more clarification > on why the MatSetValuesBlocked() have some advantage on blocked > structure? > > In addition to the assembly advantages that Satish pointed out, BAIJ > requires less storage for the column indices, effectively improving > the > arithmetic intensity of many kernels, and speeding up matrix > factorization (e.g. symbolic factorization only needs to compute > fill in > terms of blocks instead of individual elements). The use of inodes > with > AIJ (default when applicable) reduces the memory bandwidth > requirements > of the column indices, turns point relaxation smoothers (SOR) into > stronger block relaxation, and allows a certain amount of unrolling. > BAIJ requires even less metadata, provides more regular memory access, > and does more unrolling. > > If your matrix is truly blocked, BAIJ should provide better > performance > with all preconditioners that support it. 
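To make Satish's counting argument and Barry's picture of the small dense blocks concrete, here is a minimal sketch that creates a BAIJ matrix with block size 3 (say, three field components per node) and inserts one dense 3x3 block with a single MatSetValuesBlocked() call instead of nine scalar insertions. The sizes, preallocation counts, and values are placeholders only:

#include "petscmat.h"

/* Mb x Nb block structure with bs = 3, so the row/column sizes passed to
   MatCreateMPIBAIJ() are 3*Mb and 3*Nb. Block (0,0) is filled with dummy
   numbers; in a real code the 9 values come from the node-to-node coupling. */
PetscErrorCode AssembleOneBlock(MPI_Comm comm,PetscInt Mb,PetscInt Nb)
{
  PetscErrorCode ierr;
  Mat            A;
  PetscInt       i = 0,j = 0,k;   /* one block-row and one block-column index */
  PetscScalar    v[9];            /* the dense 3x3 block, row-major by default */

  PetscFunctionBegin;
  ierr = MatCreateMPIBAIJ(comm,3,PETSC_DECIDE,PETSC_DECIDE,3*Mb,3*Nb,
                          5,PETSC_NULL,2,PETSC_NULL,&A);CHKERRQ(ierr);
  for (k=0; k<9; k++) v[k] = (PetscScalar)(k+1);      /* placeholder entries */
  ierr = MatSetValuesBlocked(A,1,&i,1,&j,v,INSERT_VALUES);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatDestroy(A);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The single MatSetValuesBlocked() call replaces a 3x3 loop over MatSetValues(), which is exactly the search/insert overhead Satish describes.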
Many third-party > preconditioners will not work with BAIJ, so it is useful to give your > matrix a prefix (or check the options database if you are getting your > matrix from a DA or similar) so that you can set it's type with > -foo_mat_type when using a preconditioner that requires it. > > Jed > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener From Chun.SUN at 3ds.com Tue May 5 09:42:30 2009 From: Chun.SUN at 3ds.com (SUN Chun) Date: Tue, 5 May 2009 10:42:30 -0400 Subject: AIJ and BAIJ convertion In-Reply-To: <725C9B3B-DA29-4CFD-AD9A-624FDE96EA39@mcs.anl.gov> References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov><49FF44BE.6040904@59A2.org> <49FF4862.5020607@gmail.com> <725C9B3B-DA29-4CFD-AD9A-624FDE96EA39@mcs.anl.gov> Message-ID: <2545DC7A42DF804AAAB2ADA5043D57DA28E3B4@CORP-CLT-EXB01.ds> Hi, I created a matrix with AIJ format. I knew the structure of this matrix should also fit BAIJ type with some certain block size (say 6). Now instead of creating another BAIJ matrix then fill in the values again, do I have an easier way to convert this AIJ matrix to BAIJ matrix with given block size? I'm willing to allocate another memory for the new matrix. Thanks, Chun From hzhang at mcs.anl.gov Tue May 5 10:15:10 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 5 May 2009 10:15:10 -0500 (CDT) Subject: AIJ and BAIJ convertion In-Reply-To: <2545DC7A42DF804AAAB2ADA5043D57DA28E3B4@CORP-CLT-EXB01.ds> References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov><49FF44BE.6040904@59A2.org> <49FF4862.5020607@gmail.com> <725C9B3B-DA29-4CFD-AD9A-624FDE96EA39@mcs.anl.gov> <2545DC7A42DF804AAAB2ADA5043D57DA28E3B4@CORP-CLT-EXB01.ds> Message-ID: Run your code with the option '-mat_type baij -mat_block_size 6' You can use '-mat_view_info' to varify the matrix type that is actually used. Hong On Tue, 5 May 2009, SUN Chun wrote: > Hi, > > I created a matrix with AIJ format. I knew the structure of this matrix should also fit BAIJ type with some certain block size (say 6). Now instead of creating another BAIJ matrix then fill in the values again, do I have an easier way to convert this AIJ matrix to BAIJ matrix with given block size? I'm willing to allocate another memory for the new matrix. > > Thanks, > Chun > From balay at mcs.anl.gov Tue May 5 10:18:29 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 5 May 2009 10:18:29 -0500 (CDT) Subject: AIJ and BAIJ convertion In-Reply-To: References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov><49FF44BE.6040904@59A2.org> <49FF4862.5020607@gmail.com> <725C9B3B-DA29-4CFD-AD9A-624FDE96EA39@mcs.anl.gov> <2545DC7A42DF804AAAB2ADA5043D57DA28E3B4@CORP-CLT-EXB01.ds> Message-ID: This is assuming MatCreate() is used - not MatCreateMPIAIJ().. Satish On Tue, 5 May 2009, Hong Zhang wrote: > > Run your code with the option > '-mat_type baij -mat_block_size 6' > You can use '-mat_view_info' to varify the matrix type > that is actually used. > > Hong > > On Tue, 5 May 2009, SUN Chun wrote: > > > Hi, > > > > I created a matrix with AIJ format. I knew the structure of this matrix > > should also fit BAIJ type with some certain block size (say 6). Now instead > > of creating another BAIJ matrix then fill in the values again, do I have an > > easier way to convert this AIJ matrix to BAIJ matrix with given block size? > > I'm willing to allocate another memory for the new matrix. 
> > > > Thanks, > > Chun > > > From fredrik.bengzon at math.umu.se Tue May 5 09:48:36 2009 From: fredrik.bengzon at math.umu.se (Fredrik Bengzon) Date: Tue, 05 May 2009 16:48:36 +0200 Subject: installation Petsc 3.0.0 and parmetis Message-ID: <4A0051C4.50609@math.umu.se> Hi Petsc team I've installed petsc 3.0.0 with parmetis for use with superlu_dist. Everything compiles well, but when I link with my application code I get undefined reference to METIS_mCPartGraphRecursive2. Do I need to set any LD paths or something pointing to the 'externalpackages' directory or should this be taken care of by Petsc? Regards, Fredrik Bengzon From Chun.SUN at 3ds.com Tue May 5 10:31:29 2009 From: Chun.SUN at 3ds.com (SUN Chun) Date: Tue, 5 May 2009 11:31:29 -0400 Subject: AIJ and BAIJ convertion In-Reply-To: References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov><49FF44BE.6040904@59A2.org><49FF4862.5020607@gmail.com><725C9B3B-DA29-4CFD-AD9A-624FDE96EA39@mcs.anl.gov><2545DC7A42DF804AAAB2ADA5043D57DA28E3B4@CORP-CLT-EXB01.ds> Message-ID: <2545DC7A42DF804AAAB2ADA5043D57DA28E3B5@CORP-CLT-EXB01.ds> Thanks Hong and Satish, Unfortunately I did use MatCreateMPIAIJ. It's difficult to change that part of my code. Plus I have matrices dumped out with AIJ format and I want to read it as BAIJ. It seems that I have no option other than MatCreateMPIBAIJ then MatSetBlockSize then add entries one by one...? I was reading MatCopy and it says A and B should have same nnz pattern, which won't happen between AIJ and BAIJ. Also I was reading MatConvert, but it allocates new matrix for you without asking blocksize. I'm not sure either of them can do this. Thanks, Chun -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Satish Balay Sent: Tuesday, May 05, 2009 11:18 AM To: PETSc users list Subject: Re: AIJ and BAIJ convertion This is assuming MatCreate() is used - not MatCreateMPIAIJ().. Satish On Tue, 5 May 2009, Hong Zhang wrote: > > Run your code with the option > '-mat_type baij -mat_block_size 6' > You can use '-mat_view_info' to varify the matrix type > that is actually used. > > Hong > > On Tue, 5 May 2009, SUN Chun wrote: > > > Hi, > > > > I created a matrix with AIJ format. I knew the structure of this matrix > > should also fit BAIJ type with some certain block size (say 6). Now instead > > of creating another BAIJ matrix then fill in the values again, do I have an > > easier way to convert this AIJ matrix to BAIJ matrix with given block size? > > I'm willing to allocate another memory for the new matrix. > > > > Thanks, > > Chun > > > From hzhang at mcs.anl.gov Tue May 5 10:50:41 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 5 May 2009 10:50:41 -0500 (CDT) Subject: AIJ and BAIJ convertion In-Reply-To: <2545DC7A42DF804AAAB2ADA5043D57DA28E3B5@CORP-CLT-EXB01.ds> References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov><49FF44BE.6040904@59A2.org><49FF4862.5020607@gmail.com><725C9B3B-DA29-4CFD-AD9A-624FDE96EA39@mcs.anl.gov><2545DC7A42DF804AAAB2ADA5043D57DA28E3B4@CORP-CLT-EXB01.ds> <2545DC7A42DF804AAAB2ADA5043D57DA28E3B5@CORP-CLT-EXB01.ds> Message-ID: On Tue, 5 May 2009, SUN Chun wrote: > Thanks Hong and Satish, > > Unfortunately I did use MatCreateMPIAIJ. It's difficult to change that part of my code. >Plus I have matrices dumped out with AIJ format and I want to read it as >BAIJ. It seems that I have no option other than MatCreateMPIBAIJ then >MatSetBlockSize then add entries one by one...? 
You can call ierr = MatLoad(fd,MATBAIJ,&newbaijmat);CHKERRQ(ierr); and run your code with '-matload_block_size 6'. In this way, a new baij matrix is created with bs=6. > > I was reading MatCopy and it says A and B should have same nnz pattern, which won't happen between AIJ and BAIJ. Also I was reading MatConvert, but it allocates new matrix for you without asking blocksize. I'm not sure either of them can do this. MatCopy() cannot be used because aij and baij have very differnt data structure. MatConvert() gives a baij matrix with bs=1, not the one you want. Hong > > Thanks, > Chun > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Satish Balay > Sent: Tuesday, May 05, 2009 11:18 AM > To: PETSc users list > Subject: Re: AIJ and BAIJ convertion > > This is assuming MatCreate() is used - not MatCreateMPIAIJ().. > > Satish > > On Tue, 5 May 2009, Hong Zhang wrote: > >> >> Run your code with the option >> '-mat_type baij -mat_block_size 6' >> You can use '-mat_view_info' to varify the matrix type >> that is actually used. >> >> Hong >> >> On Tue, 5 May 2009, SUN Chun wrote: >> >>> Hi, >>> >>> I created a matrix with AIJ format. I knew the structure of this matrix >>> should also fit BAIJ type with some certain block size (say 6). Now instead >>> of creating another BAIJ matrix then fill in the values again, do I have an >>> easier way to convert this AIJ matrix to BAIJ matrix with given block size? >>> I'm willing to allocate another memory for the new matrix. >>> >>> Thanks, >>> Chun >>> >> > > From balay at mcs.anl.gov Tue May 5 11:09:02 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 5 May 2009 11:09:02 -0500 (CDT) Subject: AIJ and BAIJ convertion In-Reply-To: References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov><49FF44BE.6040904@59A2.org><49FF4862.5020607@gmail.com><725C9B3B-DA29-4CFD-AD9A-624FDE96EA39@mcs.anl.gov><2545DC7A42DF804AAAB2ADA5043D57DA28E3B4@CORP-CLT-EXB01.ds> <2545DC7A42DF804AAAB2ADA5043D57DA28E3B5@CORP-CLT-EXB01.ds> Message-ID: On Tue, 5 May 2009, Hong Zhang wrote: > > > On Tue, 5 May 2009, SUN Chun wrote: > > > Thanks Hong and Satish, > > > > Unfortunately I did use MatCreateMPIAIJ. It's difficult to change that part > > of my code. > > Plus I have matrices dumped out with AIJ format and I want to read it as > > BAIJ. It seems that I have no option other than MatCreateMPIBAIJ then > > MatSetBlockSize then add entries one by one...? > You can call > ierr = MatLoad(fd,MATBAIJ,&newbaijmat);CHKERRQ(ierr); > and run your code with '-matload_block_size 6'. > In this way, a new baij matrix is created with bs=6. Also the call to MatCreateMPIAIJ() can be substituted with calls to MatCreate(), MatSetType(MATMPIAIJ),MatSetSizes() etc..] without changing the rest of the code.. Satish From Chun.SUN at 3ds.com Tue May 5 11:58:03 2009 From: Chun.SUN at 3ds.com (SUN Chun) Date: Tue, 5 May 2009 12:58:03 -0400 Subject: AIJ and BAIJ convertion In-Reply-To: References: <84E5CDF6-4AC7-4425-BC9C-A111D8CE0A89@mcs.anl.gov><49FF44BE.6040904@59A2.org><49FF4862.5020607@gmail.com><725C9B3B-DA29-4CFD-AD9A-624FDE96EA39@mcs.anl.gov><2545DC7A42DF804AAAB2ADA5043D57DA28E3B4@CORP-CLT-EXB01.ds><2545DC7A42DF804AAAB2ADA5043D57DA28E3B5@CORP-CLT-EXB01.ds> Message-ID: <2545DC7A42DF804AAAB2ADA5043D57DA28E3B6@CORP-CLT-EXB01.ds> Thank you so much! I'm going with -matload_block_size for the short term solution. It works! 
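Pulling the two suggestions in this thread together, here is a sketch of (a) the MatCreate()/MatSetSizes() route Satish describes, with MatSetFromOptions() instead of a hard-coded MatSetType() so that -mat_type baij -mat_block_size 6 can take effect at run time, and (b) Hong's MatLoad() route for a dumped matrix read back with -matload_block_size 6. The file name and sizes are made up, and MatLoad() is shown with the 3.0-era calling sequence from Hong's snippet (later releases changed it):

#include "petscmat.h"
#include "petscviewer.h"

/* Route 1: build the matrix so its type and block size come from the options
   database. Route 2: read a previously dumped matrix straight into BAIJ. */
PetscErrorCode GetBaijMatrices(MPI_Comm comm,PetscInt M,PetscInt N,Mat *A,Mat *B)
{
  PetscErrorCode ierr;
  PetscViewer    fd;

  PetscFunctionBegin;
  /* Route 1: run with -mat_type baij -mat_block_size 6 */
  ierr = MatCreate(comm,A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A,PETSC_DECIDE,PETSC_DECIDE,M,N);CHKERRQ(ierr);
  ierr = MatSetFromOptions(*A);CHKERRQ(ierr);
  /* ... preallocate, then MatSetValues()/MatSetValuesBlocked() as before ... */

  /* Route 2: run with -matload_block_size 6 */
  ierr = PetscViewerBinaryOpen(comm,"matrix.dat",FILE_MODE_READ,&fd);CHKERRQ(ierr);
  ierr = MatLoad(fd,MATBAIJ,B);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(fd);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Either way, -mat_view_info can be used to confirm the matrix type and block size that were actually picked up.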
Chun -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Satish Balay Sent: Tuesday, May 05, 2009 12:09 PM To: PETSc users list Subject: RE: AIJ and BAIJ convertion On Tue, 5 May 2009, Hong Zhang wrote: > > > On Tue, 5 May 2009, SUN Chun wrote: > > > Thanks Hong and Satish, > > > > Unfortunately I did use MatCreateMPIAIJ. It's difficult to change that part > > of my code. > > Plus I have matrices dumped out with AIJ format and I want to read it as > > BAIJ. It seems that I have no option other than MatCreateMPIBAIJ then > > MatSetBlockSize then add entries one by one...? > You can call > ierr = MatLoad(fd,MATBAIJ,&newbaijmat);CHKERRQ(ierr); > and run your code with '-matload_block_size 6'. > In this way, a new baij matrix is created with bs=6. Also the call to MatCreateMPIAIJ() can be substituted with calls to MatCreate(), MatSetType(MATMPIAIJ),MatSetSizes() etc..] without changing the rest of the code.. Satish From keita at cray.com Tue May 5 13:20:41 2009 From: keita at cray.com (Keita Teranishi) Date: Tue, 5 May 2009 13:20:41 -0500 Subject: Duplicating MATSeqAIJ matrix to other PEs Message-ID: <925346A443D4E340BEB20248BAFCDBDF0ACB2D10@CFEVS1-IP.americas.cray.com> Hi, I have been trying to copy a MatSeqAIJ matrix on PE0 to the rest of the PEs so that every PE has the exactly the same matrix. What is the best way to do that? Thanks in advance, ================================ Keita Teranishi Scientific Library Group Cray, Inc. keita at cray.com ================================ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Hung.V.Nguyen at usace.army.mil Tue May 5 13:52:12 2009 From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS) Date: Tue, 5 May 2009 13:52:12 -0500 Subject: Additive multilevel Schwarz preconditioner Message-ID: Hello, Does PETSc supports additive multilevel Schwarz preconditioner? If yes, how to set/run it? Thanks, -hung From bsmith at mcs.anl.gov Tue May 5 15:37:24 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 5 May 2009 15:37:24 -0500 Subject: installation Petsc 3.0.0 and parmetis In-Reply-To: <4A0051C4.50609@math.umu.se> References: <4A0051C4.50609@math.umu.se> Message-ID: <443095EC-55E5-4FEE-A93F-10F5888599BD@mcs.anl.gov> This function is defined in libmetis.a but used by functions in libparmetis.a Likely the applications make file doesn't properly list -lparmetis -lmetis like it needs too? If this is not the problem then do (in the PETSc directory) nm -o $PETSC_ARCH/lib/lib*.a | grep METIS_mCPartGraphRecursive2 and send us the results. Barry On May 5, 2009, at 9:48 AM, Fredrik Bengzon wrote: > Hi Petsc team > I've installed petsc 3.0.0 with parmetis for use with superlu_dist. > Everything compiles well, but when I link with my application code I > get undefined reference to METIS_mCPartGraphRecursive2. Do I need to > set any LD paths or something pointing to the 'externalpackages' > directory or should this be taken care of by Petsc? 
> > Regards, > > Fredrik Bengzon From bsmith at mcs.anl.gov Tue May 5 15:59:12 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 5 May 2009 15:59:12 -0500 Subject: Additive multilevel Schwarz preconditioner In-Reply-To: References: Message-ID: These preconditioners require a definition of the interpolation between levels, if that is available then additive multilevel Schwarz preconditioner is available and can be thought of as a variant of multigrid where the levels are visited additively instead of recursively (multiplicatively). The PCMG preconditioner is the tool in PETSc for handling multigrid/multilevel preconditioners. Once the PCMG is setup then you can choose the additive version with -pc_mg_type additive By default PETSc uses ILU as the smoother on each level. For a pure additive form of the algorithm you will want -mg_levels_pc_type jacobi You will also want to turn off the GMRES accelerator on each level by - mg_levels_ksp_type none Running with -ksp_view will show you exactly what options are being used. -help will give you the various other options. If you are lucky enough to be running on a structured grid and want piecewise linear or constant interpolation then you can use the PETSc DMMG solver to handle almost everything for you. See its manual page. Other wise you will need to set the interpolation operators yourself see the manual page for PCMGSetInterpolation() Barry Note: in my experience using a classical multigrid algorithm will always beat the use of the additive Schwarz multilevel algorithm because newer information is used as the calculation proceeds. The multilevel additive Schwarz method was an important step in understanding Schwarz and multilevel/multigrid algorithms but is not a particularly useful algorithm in practice. On May 5, 2009, at 1:52 PM, Nguyen, Hung V ERDC-ITL-MS wrote: > Hello, > > Does PETSc supports additive multilevel Schwarz preconditioner? If > yes, how > to set/run it? > > Thanks, > > -hung From bsmith at mcs.anl.gov Tue May 5 16:06:30 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 5 May 2009 16:06:30 -0500 Subject: Duplicating MATSeqAIJ matrix to other PEs In-Reply-To: <925346A443D4E340BEB20248BAFCDBDF0ACB2D10@CFEVS1-IP.americas.cray.com> References: <925346A443D4E340BEB20248BAFCDBDF0ACB2D10@CFEVS1-IP.americas.cray.com> Message-ID: <2A940F4D-53DD-4A50-A604-A8824E9469FE@mcs.anl.gov> We don't have such a beasty. The simpliest thing is to have process 0 broadcast the i,j, and a arrays of Mat_SeqAIJ to all processes then use MatCreateMPIAIJWithArrays() on all processes except zero. Finally change the free_a and free_ij fields of the newly created SeqAIJ matrices to PETSC_TRUE so that the matrix will free the space when it is destroyed. Note you will need to include src/mat/impls/aij/ seq/aij.h into the source file to access the entries in the Mat_SeqAIJ data structure. Barry On May 5, 2009, at 1:20 PM, Keita Teranishi wrote: > Hi, > > I have been trying to copy a MatSeqAIJ matrix on PE0 to the rest of > the PEs so that every PE has the exactly the same matrix. What is > the best way to do that? > > Thanks in advance, > > ================================ > Keita Teranishi > Scientific Library Group > Cray, Inc. 
> keita at cray.com > ================================ > From Hung.V.Nguyen at usace.army.mil Tue May 5 16:27:06 2009 From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS) Date: Tue, 5 May 2009 16:27:06 -0500 Subject: Additive multilevel Schwarz preconditioner In-Reply-To: References: Message-ID: Thank a lot for your info. I will give it a try. -Hung -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith Sent: Tuesday, May 05, 2009 3:59 PM To: PETSc users list Subject: Re: Additive multilevel Schwarz preconditioner These preconditioners require a definition of the interpolation between levels, if that is available then additive multilevel Schwarz preconditioner is available and can be thought of as a variant of multigrid where the levels are visited additively instead of recursively (multiplicatively). The PCMG preconditioner is the tool in PETSc for handling multigrid/multilevel preconditioners. Once the PCMG is setup then you can choose the additive version with -pc_mg_type additive By default PETSc uses ILU as the smoother on each level. For a pure additive form of the algorithm you will want -mg_levels_pc_type jacobi You will also want to turn off the GMRES accelerator on each level by - mg_levels_ksp_type none Running with -ksp_view will show you exactly what options are being used. -help will give you the various other options. If you are lucky enough to be running on a structured grid and want piecewise linear or constant interpolation then you can use the PETSc DMMG solver to handle almost everything for you. See its manual page. Other wise you will need to set the interpolation operators yourself see the manual page for PCMGSetInterpolation() Barry Note: in my experience using a classical multigrid algorithm will always beat the use of the additive Schwarz multilevel algorithm because newer information is used as the calculation proceeds. The multilevel additive Schwarz method was an important step in understanding Schwarz and multilevel/multigrid algorithms but is not a particularly useful algorithm in practice. On May 5, 2009, at 1:52 PM, Nguyen, Hung V ERDC-ITL-MS wrote: > Hello, > > Does PETSc supports additive multilevel Schwarz preconditioner? If > yes, how to set/run it? > > Thanks, > > -hung From xy2102 at columbia.edu Tue May 5 16:46:41 2009 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Tue, 05 May 2009 17:46:41 -0400 Subject: memory check: error message from valgrind Message-ID: <20090505174641.z0dyh1qscgkkgc0w@cubmail.cc.columbia.edu> Hi, I am using valgrind to check the memory leaking and it turned out that I got 3 errors and they were happening at the the PetscInitialize(), and DACreate2d() line 79: PetscInitialize(&argc, &argv, (char*)0, help); line 84: ierr = DACreate2d(comm,DA_XPERIODIC,DA_STENCIL_BOX, -5, -5, PETSC_DECIDE, PETSC_DECIDE, 4, 2, 0, 0, &da);CHKERRQ(ierr); What could be wrong here? Thanks! The following is the message from valgrind ----------------------------------------------------------------- valgrind --leak-check=full --show-reachable=yes --tool=memcheck ./vdthpffxmhd -options_file option_ffxmhd>output ==26665== Memcheck, a memory error detector. ==26665== Copyright (C) 2002-2007, and GNU GPL'd, by Julian Seward et al. ==26665== Using LibVEX rev 1804, a library for dynamic binary translation. ==26665== Copyright (C) 2004-2007, and GNU GPL'd, by OpenWorks LLP. ==26665== Using valgrind-3.3.0-Debian, a dynamic binary instrumentation framework. 
==26665== Copyright (C) 2000-2007, and GNU GPL'd, by Julian Seward et al. ==26665== For more details, rerun with: -v ==26665== ==26665== Invalid read of size 4 ==26665== at 0x40151F9: (within /lib/ld-2.7.so) ==26665== by 0x4005C59: (within /lib/ld-2.7.so) ==26665== by 0x4007A87: (within /lib/ld-2.7.so) ==26665== by 0x4011533: (within /lib/ld-2.7.so) ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) ==26665== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x429A29F: (within /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x429BEC5: __nss_hosts_lookup (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== Address 0x432af8c is 36 bytes inside a block of size 37 alloc'd ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) ==26665== by 0x4008021: (within /lib/ld-2.7.so) ==26665== by 0x4011533: (within /lib/ld-2.7.so) ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) ==26665== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x429A29F: (within /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x429BEC5: __nss_hosts_lookup (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x42A0782: gethostbyname_r (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== ==26665== Invalid read of size 4 ==26665== at 0x40151E3: (within /lib/ld-2.7.so) ==26665== by 0x4005C59: (within /lib/ld-2.7.so) ==26665== by 0x4007A87: (within /lib/ld-2.7.so) ==26665== by 0x4011533: (within /lib/ld-2.7.so) ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) ==26665== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x4731FFB: (within /lib/tls/i686/cmov/libnss_compat-2.7.so) ==26665== by 0x473313C: _nss_compat_getpwuid_r (in /lib/tls/i686/cmov/libnss_compat-2.7.so) ==26665== Address 0x432c1f8 is 32 bytes inside a block of size 35 alloc'd ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) ==26665== by 0x4008021: (within /lib/ld-2.7.so) ==26665== by 0x4011533: (within /lib/ld-2.7.so) ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) ==26665== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x4731FFB: (within /lib/tls/i686/cmov/libnss_compat-2.7.so) ==26665== by 0x473313C: _nss_compat_getpwuid_r (in /lib/tls/i686/cmov/libnss_compat-2.7.so) ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== ==26665== ERROR SUMMARY: 3 errors from 2 contexts (suppressed: 41 from 1) ==26665== malloc/free: in use at exit: 64,352 bytes in 87 blocks. ==26665== malloc/free: 2,472 allocs, 2,385 frees, 3,015,115 bytes allocated. 
==26665== For counts of detected errors, rerun with: -v ==26665== searching for pointers to 87 not-freed blocks. ==26665== checked 1,120,136 bytes. ==26665== ==26665== ==26665== 32 bytes in 2 blocks are still reachable in loss record 1 of 5 ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) ==26665== by 0x871B2A8: MPID_VCRT_Create (mpid_vc.c:62) ==26665== by 0x8718C6A: MPID_Init (mpid_init.c:116) ==26665== by 0x86F1C3B: MPIR_Init_thread (initthread.c:288) ==26665== by 0x86F175D: PMPI_Init (init.c:106) ==26665== by 0x86355A1: PetscInitialize (pinit.c:503) ==26665== by 0x804B627: main (vdthpffxmhd.c:79) ==26665== ==26665== ==26665== 156 (36 direct, 120 indirect) bytes in 1 blocks are definitely lost in loss record 2 of 5 ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) ==26665== by 0x429A3E2: (within /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x429AC2D: __nss_database_lookup (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x4731FDB: ??? ==26665== by 0x473313C: ??? ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x8644D41: PetscGetUserName (fuser.c:68) ==26665== by 0x8602528: PetscErrorPrintfInitialize (errtrace.c:68) ==26665== by 0x863565B: PetscInitialize (pinit.c:518) ==26665== by 0x804B627: main (vdthpffxmhd.c:79) ==26665== ==26665== ==26665== 40 bytes in 5 blocks are indirectly lost in loss record 3 of 5 ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) ==26665== by 0x4299FBB: __nss_lookup_function (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x4731FFB: ??? ==26665== by 0x473313C: ??? ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x8644D41: PetscGetUserName (fuser.c:68) ==26665== by 0x8602528: PetscErrorPrintfInitialize (errtrace.c:68) ==26665== by 0x863565B: PetscInitialize (pinit.c:518) ==26665== by 0x804B627: main (vdthpffxmhd.c:79) ==26665== ==26665== ==26665== 80 bytes in 5 blocks are indirectly lost in loss record 4 of 5 ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) ==26665== by 0x428739B: tsearch (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x4299F7D: __nss_lookup_function (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x4731FFB: ??? ==26665== by 0x473313C: ??? ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) ==26665== by 0x8644D41: PetscGetUserName (fuser.c:68) ==26665== by 0x8602528: PetscErrorPrintfInitialize (errtrace.c:68) ==26665== by 0x863565B: PetscInitialize (pinit.c:518) ==26665== by 0x804B627: main (vdthpffxmhd.c:79) ==26665== ==26665== ==26665== 64,164 bytes in 74 blocks are still reachable in loss record 5 of 5 ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) ==26665== by 0x8610BE4: PetscMallocAlign (mal.c:40) ==26665== by 0x8611CD3: PetscTrMallocDefault (mtr.c:194) ==26665== by 0x81DE577: DACreate2d (da2.c:364) ==26665== by 0x804B701: main (vdthpffxmhd.c:84) ==26665== ==26665== LEAK SUMMARY: ==26665== definitely lost: 36 bytes in 1 blocks. ==26665== indirectly lost: 120 bytes in 10 blocks. ==26665== possibly lost: 0 bytes in 0 blocks. ==26665== still reachable: 64,196 bytes in 76 blocks. ==26665== suppressed: 0 bytes in 0 blocks. 
------------------------------------------------------------------ -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From xy2102 at columbia.edu Tue May 5 17:04:49 2009 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Tue, 05 May 2009 18:04:49 -0400 Subject: More about memory check: error message from valgrind In-Reply-To: <20090505174641.z0dyh1qscgkkgc0w@cubmail.cc.columbia.edu> References: <20090505174641.z0dyh1qscgkkgc0w@cubmail.cc.columbia.edu> Message-ID: <20090505180449.3atscffm0ow080kk@cubmail.cc.columbia.edu> Hi, I ran the exe5 (/petsc-3.0.0-p1/src/snes/examples/tutorials/ex5.c) and it turned out the errors for PetscInitialize() also exists. Anything wrong? valgrind --leak-check=full --show-reachable=yes --tool=memcheck ./test ==27852== Memcheck, a memory error detector. ==27852== Copyright (C) 2002-2007, and GNU GPL'd, by Julian Seward et al. ==27852== Using LibVEX rev 1804, a library for dynamic binary translation. ==27852== Copyright (C) 2004-2007, and GNU GPL'd, by OpenWorks LLP. ==27852== Using valgrind-3.3.0-Debian, a dynamic binary instrumentation framework. ==27852== Copyright (C) 2000-2007, and GNU GPL'd, by Julian Seward et al. ==27852== For more details, rerun with: -v ==27852== ==27852== Invalid read of size 4 ==27852== at 0x40151F9: (within /lib/ld-2.7.so) ==27852== by 0x4005C59: (within /lib/ld-2.7.so) ==27852== by 0x4007A87: (within /lib/ld-2.7.so) ==27852== by 0x4011533: (within /lib/ld-2.7.so) ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) ==27852== by 0x4010F4D: (within /lib/ld-2.7.so) ==27852== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) ==27852== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x429A29F: (within /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x429BEC5: __nss_hosts_lookup (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== Address 0x432af8c is 36 bytes inside a block of size 37 alloc'd ==27852== at 0x4022AB8: malloc (vg_replace_malloc.c:207) ==27852== by 0x4008021: (within /lib/ld-2.7.so) ==27852== by 0x4011533: (within /lib/ld-2.7.so) ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) ==27852== by 0x4010F4D: (within /lib/ld-2.7.so) ==27852== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) ==27852== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x429A29F: (within /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x429BEC5: __nss_hosts_lookup (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x42A0782: gethostbyname_r (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== ==27852== Invalid read of size 4 ==27852== at 0x40151E3: (within /lib/ld-2.7.so) ==27852== by 0x4005C59: (within /lib/ld-2.7.so) ==27852== by 0x4007A87: (within /lib/ld-2.7.so) ==27852== by 0x4011533: (within /lib/ld-2.7.so) ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) ==27852== by 0x4010F4D: (within /lib/ld-2.7.so) ==27852== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) ==27852== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x4731FFB: (within /lib/tls/i686/cmov/libnss_compat-2.7.so) 
==27852== by 0x473313C: _nss_compat_getpwuid_r (in /lib/tls/i686/cmov/libnss_compat-2.7.so) ==27852== Address 0x432c1f8 is 32 bytes inside a block of size 35 alloc'd ==27852== at 0x4022AB8: malloc (vg_replace_malloc.c:207) ==27852== by 0x4008021: (within /lib/ld-2.7.so) ==27852== by 0x4011533: (within /lib/ld-2.7.so) ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) ==27852== by 0x4010F4D: (within /lib/ld-2.7.so) ==27852== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) ==27852== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x4731FFB: (within /lib/tls/i686/cmov/libnss_compat-2.7.so) ==27852== by 0x473313C: _nss_compat_getpwuid_r (in /lib/tls/i686/cmov/libnss_compat-2.7.so) ==27852== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) Number of Newton iterations = 4 ==27852== ==27852== ERROR SUMMARY: 3 errors from 2 contexts (suppressed: 41 from 1) ==27852== malloc/free: in use at exit: 156 bytes in 11 blocks. ==27852== malloc/free: 1,345 allocs, 1,334 frees, 714,197 bytes allocated. ==27852== For counts of detected errors, rerun with: -v ==27852== searching for pointers to 11 not-freed blocks. ==27852== checked 1,072,220 bytes. ==27852== ==27852== ==27852== 156 (36 direct, 120 indirect) bytes in 1 blocks are definitely lost in loss record 1 of 3 ==27852== at 0x4022AB8: malloc (vg_replace_malloc.c:207) ==27852== by 0x429A3E2: (within /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x429AC2D: __nss_database_lookup (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x4731FDB: ??? ==27852== by 0x473313C: ??? ==27852== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x861D299: PetscGetUserName (fuser.c:68) ==27852== by 0x85DAA80: PetscErrorPrintfInitialize (errtrace.c:68) ==27852== by 0x860DBB3: PetscInitialize (pinit.c:518) ==27852== by 0x804B62E: main (test.c:90) ==27852== ==27852== ==27852== 40 bytes in 5 blocks are indirectly lost in loss record 2 of 3 ==27852== at 0x4022AB8: malloc (vg_replace_malloc.c:207) ==27852== by 0x4299FBB: __nss_lookup_function (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x4731FFB: ??? ==27852== by 0x473313C: ??? ==27852== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x861D299: PetscGetUserName (fuser.c:68) ==27852== by 0x85DAA80: PetscErrorPrintfInitialize (errtrace.c:68) ==27852== by 0x860DBB3: PetscInitialize (pinit.c:518) ==27852== by 0x804B62E: main (test.c:90) ==27852== ==27852== ==27852== 80 bytes in 5 blocks are indirectly lost in loss record 3 of 3 ==27852== at 0x4022AB8: malloc (vg_replace_malloc.c:207) ==27852== by 0x428739B: tsearch (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x4299F7D: __nss_lookup_function (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x4731FFB: ??? ==27852== by 0x473313C: ??? ==27852== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) ==27852== by 0x861D299: PetscGetUserName (fuser.c:68) ==27852== by 0x85DAA80: PetscErrorPrintfInitialize (errtrace.c:68) ==27852== by 0x860DBB3: PetscInitialize (pinit.c:518) ==27852== by 0x804B62E: main (test.c:90) ==27852== ==27852== LEAK SUMMARY: ==27852== definitely lost: 36 bytes in 1 blocks. ==27852== indirectly lost: 120 bytes in 10 blocks. 
==27852== possibly lost: 0 bytes in 0 blocks. ==27852== still reachable: 0 bytes in 0 blocks. ==27852== suppressed: 0 bytes in 0 blocks. Quoting "(Rebecca) Xuefei YUAN" : > Hi, > > I am using valgrind to check the memory leaking and it turned out that > I got 3 errors and they were happening at the the PetscInitialize(), > and DACreate2d() > > line 79: PetscInitialize(&argc, &argv, (char*)0, help); > line 84: ierr = DACreate2d(comm,DA_XPERIODIC,DA_STENCIL_BOX, -5, -5, > PETSC_DECIDE, PETSC_DECIDE, 4, 2, 0, 0, &da);CHKERRQ(ierr); > > > What could be wrong here? Thanks! > > The following is the message from valgrind > > ----------------------------------------------------------------- > valgrind --leak-check=full --show-reachable=yes --tool=memcheck > ./vdthpffxmhd -options_file option_ffxmhd>output > ==26665== Memcheck, a memory error detector. > ==26665== Copyright (C) 2002-2007, and GNU GPL'd, by Julian Seward et al. > ==26665== Using LibVEX rev 1804, a library for dynamic binary translation. > ==26665== Copyright (C) 2004-2007, and GNU GPL'd, by OpenWorks LLP. > ==26665== Using valgrind-3.3.0-Debian, a dynamic binary instrumentation > framework. > ==26665== Copyright (C) 2000-2007, and GNU GPL'd, by Julian Seward et al. > ==26665== For more details, rerun with: -v > ==26665== > ==26665== Invalid read of size 4 > ==26665== at 0x40151F9: (within /lib/ld-2.7.so) > ==26665== by 0x4005C59: (within /lib/ld-2.7.so) > ==26665== by 0x4007A87: (within /lib/ld-2.7.so) > ==26665== by 0x4011533: (within /lib/ld-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) > ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x42C0454: __libc_dlopen_mode (in > /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x429A186: __nss_lookup_function (in > /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x429A29F: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x429BEC5: __nss_hosts_lookup (in > /lib/tls/i686/cmov/libc-2.7.so) > ==26665== Address 0x432af8c is 36 bytes inside a block of size 37 alloc'd > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x4008021: (within /lib/ld-2.7.so) > ==26665== by 0x4011533: (within /lib/ld-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) > ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x42C0454: __libc_dlopen_mode (in > /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x429A186: __nss_lookup_function (in > /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x429A29F: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x429BEC5: __nss_hosts_lookup (in > /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x42A0782: gethostbyname_r (in > /lib/tls/i686/cmov/libc-2.7.so) > ==26665== > ==26665== Invalid read of size 4 > ==26665== at 0x40151E3: (within /lib/ld-2.7.so) > ==26665== by 0x4005C59: (within /lib/ld-2.7.so) > ==26665== by 0x4007A87: (within /lib/ld-2.7.so) > ==26665== by 0x4011533: (within /lib/ld-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) > ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x42C0454: __libc_dlopen_mode (in > /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x429A186: __nss_lookup_function (in > /lib/tls/i686/cmov/libc-2.7.so) > 
==26665== by 0x4731FFB: (within /lib/tls/i686/cmov/libnss_compat-2.7.so) > ==26665== by 0x473313C: _nss_compat_getpwuid_r (in > /lib/tls/i686/cmov/libnss_compat-2.7.so) > ==26665== Address 0x432c1f8 is 32 bytes inside a block of size 35 alloc'd > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x4008021: (within /lib/ld-2.7.so) > ==26665== by 0x4011533: (within /lib/ld-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) > ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x42C0454: __libc_dlopen_mode (in > /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x429A186: __nss_lookup_function (in > /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x4731FFB: (within /lib/tls/i686/cmov/libnss_compat-2.7.so) > ==26665== by 0x473313C: _nss_compat_getpwuid_r (in > /lib/tls/i686/cmov/libnss_compat-2.7.so) > ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== > ==26665== ERROR SUMMARY: 3 errors from 2 contexts (suppressed: 41 from 1) > ==26665== malloc/free: in use at exit: 64,352 bytes in 87 blocks. > ==26665== malloc/free: 2,472 allocs, 2,385 frees, 3,015,115 bytes allocated. > ==26665== For counts of detected errors, rerun with: -v > ==26665== searching for pointers to 87 not-freed blocks. > ==26665== checked 1,120,136 bytes. > ==26665== > ==26665== > ==26665== 32 bytes in 2 blocks are still reachable in loss record 1 of 5 > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x871B2A8: MPID_VCRT_Create (mpid_vc.c:62) > ==26665== by 0x8718C6A: MPID_Init (mpid_init.c:116) > ==26665== by 0x86F1C3B: MPIR_Init_thread (initthread.c:288) > ==26665== by 0x86F175D: PMPI_Init (init.c:106) > ==26665== by 0x86355A1: PetscInitialize (pinit.c:503) > ==26665== by 0x804B627: main (vdthpffxmhd.c:79) > ==26665== > ==26665== > ==26665== 156 (36 direct, 120 indirect) bytes in 1 blocks are > definitely lost in loss record 2 of 5 > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x429A3E2: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x429AC2D: __nss_database_lookup (in > /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x4731FDB: ??? > ==26665== by 0x473313C: ??? > ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x8644D41: PetscGetUserName (fuser.c:68) > ==26665== by 0x8602528: PetscErrorPrintfInitialize (errtrace.c:68) > ==26665== by 0x863565B: PetscInitialize (pinit.c:518) > ==26665== by 0x804B627: main (vdthpffxmhd.c:79) > ==26665== > ==26665== > ==26665== 40 bytes in 5 blocks are indirectly lost in loss record 3 of 5 > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x4299FBB: __nss_lookup_function (in > /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x4731FFB: ??? > ==26665== by 0x473313C: ??? 
> ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x8644D41: PetscGetUserName (fuser.c:68) > ==26665== by 0x8602528: PetscErrorPrintfInitialize (errtrace.c:68) > ==26665== by 0x863565B: PetscInitialize (pinit.c:518) > ==26665== by 0x804B627: main (vdthpffxmhd.c:79) > ==26665== > ==26665== > ==26665== 80 bytes in 5 blocks are indirectly lost in loss record 4 of 5 > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x428739B: tsearch (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x4299F7D: __nss_lookup_function (in > /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x4731FFB: ??? > ==26665== by 0x473313C: ??? > ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x8644D41: PetscGetUserName (fuser.c:68) > ==26665== by 0x8602528: PetscErrorPrintfInitialize (errtrace.c:68) > ==26665== by 0x863565B: PetscInitialize (pinit.c:518) > ==26665== by 0x804B627: main (vdthpffxmhd.c:79) > ==26665== > ==26665== > ==26665== 64,164 bytes in 74 blocks are still reachable in loss record 5 of 5 > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x8610BE4: PetscMallocAlign (mal.c:40) > ==26665== by 0x8611CD3: PetscTrMallocDefault (mtr.c:194) > ==26665== by 0x81DE577: DACreate2d (da2.c:364) > ==26665== by 0x804B701: main (vdthpffxmhd.c:84) > ==26665== > ==26665== LEAK SUMMARY: > ==26665== definitely lost: 36 bytes in 1 blocks. > ==26665== indirectly lost: 120 bytes in 10 blocks. > ==26665== possibly lost: 0 bytes in 0 blocks. > ==26665== still reachable: 64,196 bytes in 76 blocks. > ==26665== suppressed: 0 bytes in 0 blocks. > > > ------------------------------------------------------------------ > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From bsmith at mcs.anl.gov Tue May 5 17:07:54 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 5 May 2009 17:07:54 -0500 Subject: More about memory check: error message from valgrind In-Reply-To: <20090505180449.3atscffm0ow080kk@cubmail.cc.columbia.edu> References: <20090505174641.z0dyh1qscgkkgc0w@cubmail.cc.columbia.edu> <20090505180449.3atscffm0ow080kk@cubmail.cc.columbia.edu> Message-ID: These are all errors at the OS level and can be ignored. The other error in the DA you got in your code I cannot understand. Barry On May 5, 2009, at 5:04 PM, (Rebecca) Xuefei YUAN wrote: > Hi, > > I ran the exe5 (/petsc-3.0.0-p1/src/snes/examples/tutorials/ex5.c) > and it turned out the errors for PetscInitialize() also exists. > Anything wrong? > > valgrind --leak-check=full --show-reachable=yes --tool=memcheck ./test > ==27852== Memcheck, a memory error detector. > ==27852== Copyright (C) 2002-2007, and GNU GPL'd, by Julian Seward > et al. > ==27852== Using LibVEX rev 1804, a library for dynamic binary > translation. > ==27852== Copyright (C) 2004-2007, and GNU GPL'd, by OpenWorks LLP. > ==27852== Using valgrind-3.3.0-Debian, a dynamic binary > instrumentation framework. > ==27852== Copyright (C) 2000-2007, and GNU GPL'd, by Julian Seward > et al. 
> ==27852== For more details, rerun with: -v > ==27852== > ==27852== Invalid read of size 4 > ==27852== at 0x40151F9: (within /lib/ld-2.7.so) > ==27852== by 0x4005C59: (within /lib/ld-2.7.so) > ==27852== by 0x4007A87: (within /lib/ld-2.7.so) > ==27852== by 0x4011533: (within /lib/ld-2.7.so) > ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) > ==27852== by 0x4010F4D: (within /lib/ld-2.7.so) > ==27852== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) > ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) > ==27852== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==27852== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/ > cmov/libc-2.7.so) > ==27852== by 0x429A29F: (within /lib/tls/i686/cmov/libc-2.7.so) > ==27852== by 0x429BEC5: __nss_hosts_lookup (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==27852== Address 0x432af8c is 36 bytes inside a block of size 37 > alloc'd > ==27852== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==27852== by 0x4008021: (within /lib/ld-2.7.so) > ==27852== by 0x4011533: (within /lib/ld-2.7.so) > ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) > ==27852== by 0x4010F4D: (within /lib/ld-2.7.so) > ==27852== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) > ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) > ==27852== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==27852== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/ > cmov/libc-2.7.so) > ==27852== by 0x429A29F: (within /lib/tls/i686/cmov/libc-2.7.so) > ==27852== by 0x429BEC5: __nss_hosts_lookup (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==27852== by 0x42A0782: gethostbyname_r (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==27852== > ==27852== Invalid read of size 4 > ==27852== at 0x40151E3: (within /lib/ld-2.7.so) > ==27852== by 0x4005C59: (within /lib/ld-2.7.so) > ==27852== by 0x4007A87: (within /lib/ld-2.7.so) > ==27852== by 0x4011533: (within /lib/ld-2.7.so) > ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) > ==27852== by 0x4010F4D: (within /lib/ld-2.7.so) > ==27852== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) > ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) > ==27852== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==27852== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/ > cmov/libc-2.7.so) > ==27852== by 0x4731FFB: (within /lib/tls/i686/cmov/ > libnss_compat-2.7.so) > ==27852== by 0x473313C: _nss_compat_getpwuid_r (in /lib/tls/i686/ > cmov/libnss_compat-2.7.so) > ==27852== Address 0x432c1f8 is 32 bytes inside a block of size 35 > alloc'd > ==27852== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==27852== by 0x4008021: (within /lib/ld-2.7.so) > ==27852== by 0x4011533: (within /lib/ld-2.7.so) > ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) > ==27852== by 0x4010F4D: (within /lib/ld-2.7.so) > ==27852== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) > ==27852== by 0x400D5C5: (within /lib/ld-2.7.so) > ==27852== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==27852== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/ > cmov/libc-2.7.so) > ==27852== by 0x4731FFB: (within /lib/tls/i686/cmov/ > libnss_compat-2.7.so) > ==27852== by 0x473313C: _nss_compat_getpwuid_r (in /lib/tls/i686/ > cmov/libnss_compat-2.7.so) > ==27852== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/ > libc-2.7.so) > Number of Newton iterations = 4 > ==27852== > ==27852== ERROR SUMMARY: 3 errors from 2 contexts (suppressed: 41 > from 1) > ==27852== malloc/free: in use at exit: 156 bytes in 11 
blocks. > ==27852== malloc/free: 1,345 allocs, 1,334 frees, 714,197 bytes > allocated. > ==27852== For counts of detected errors, rerun with: -v > ==27852== searching for pointers to 11 not-freed blocks. > ==27852== checked 1,072,220 bytes. > ==27852== > ==27852== > ==27852== 156 (36 direct, 120 indirect) bytes in 1 blocks are > definitely lost in loss record 1 of 3 > ==27852== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==27852== by 0x429A3E2: (within /lib/tls/i686/cmov/libc-2.7.so) > ==27852== by 0x429AC2D: __nss_database_lookup (in /lib/tls/i686/ > cmov/libc-2.7.so) > ==27852== by 0x4731FDB: ??? > ==27852== by 0x473313C: ??? > ==27852== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==27852== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==27852== by 0x861D299: PetscGetUserName (fuser.c:68) > ==27852== by 0x85DAA80: PetscErrorPrintfInitialize (errtrace.c:68) > ==27852== by 0x860DBB3: PetscInitialize (pinit.c:518) > ==27852== by 0x804B62E: main (test.c:90) > ==27852== > ==27852== > ==27852== 40 bytes in 5 blocks are indirectly lost in loss record 2 > of 3 > ==27852== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==27852== by 0x4299FBB: __nss_lookup_function (in /lib/tls/i686/ > cmov/libc-2.7.so) > ==27852== by 0x4731FFB: ??? > ==27852== by 0x473313C: ??? > ==27852== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==27852== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==27852== by 0x861D299: PetscGetUserName (fuser.c:68) > ==27852== by 0x85DAA80: PetscErrorPrintfInitialize (errtrace.c:68) > ==27852== by 0x860DBB3: PetscInitialize (pinit.c:518) > ==27852== by 0x804B62E: main (test.c:90) > ==27852== > ==27852== > ==27852== 80 bytes in 5 blocks are indirectly lost in loss record 3 > of 3 > ==27852== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==27852== by 0x428739B: tsearch (in /lib/tls/i686/cmov/libc-2.7.so) > ==27852== by 0x4299F7D: __nss_lookup_function (in /lib/tls/i686/ > cmov/libc-2.7.so) > ==27852== by 0x4731FFB: ??? > ==27852== by 0x473313C: ??? > ==27852== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==27852== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==27852== by 0x861D299: PetscGetUserName (fuser.c:68) > ==27852== by 0x85DAA80: PetscErrorPrintfInitialize (errtrace.c:68) > ==27852== by 0x860DBB3: PetscInitialize (pinit.c:518) > ==27852== by 0x804B62E: main (test.c:90) > ==27852== > ==27852== LEAK SUMMARY: > ==27852== definitely lost: 36 bytes in 1 blocks. > ==27852== indirectly lost: 120 bytes in 10 blocks. > ==27852== possibly lost: 0 bytes in 0 blocks. > ==27852== still reachable: 0 bytes in 0 blocks. > ==27852== suppressed: 0 bytes in 0 blocks. > > > > Quoting "(Rebecca) Xuefei YUAN" : > >> Hi, >> >> I am using valgrind to check the memory leaking and it turned out >> that >> I got 3 errors and they were happening at the the PetscInitialize(), >> and DACreate2d() >> >> line 79: PetscInitialize(&argc, &argv, (char*)0, help); >> line 84: ierr = DACreate2d(comm,DA_XPERIODIC,DA_STENCIL_BOX, -5, -5, >> PETSC_DECIDE, PETSC_DECIDE, 4, 2, 0, 0, &da);CHKERRQ(ierr); >> >> >> What could be wrong here? Thanks! >> >> The following is the message from valgrind >> >> ----------------------------------------------------------------- >> valgrind --leak-check=full --show-reachable=yes --tool=memcheck >> ./vdthpffxmhd -options_file option_ffxmhd>output >> ==26665== Memcheck, a memory error detector. 
>> ==26665== Copyright (C) 2002-2007, and GNU GPL'd, by Julian Seward >> et al. >> ==26665== Using LibVEX rev 1804, a library for dynamic binary >> translation. >> ==26665== Copyright (C) 2004-2007, and GNU GPL'd, by OpenWorks LLP. >> ==26665== Using valgrind-3.3.0-Debian, a dynamic binary >> instrumentation >> framework. >> ==26665== Copyright (C) 2000-2007, and GNU GPL'd, by Julian Seward >> et al. >> ==26665== For more details, rerun with: -v >> ==26665== >> ==26665== Invalid read of size 4 >> ==26665== at 0x40151F9: (within /lib/ld-2.7.so) >> ==26665== by 0x4005C59: (within /lib/ld-2.7.so) >> ==26665== by 0x4007A87: (within /lib/ld-2.7.so) >> ==26665== by 0x4011533: (within /lib/ld-2.7.so) >> ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) >> ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) >> ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) >> ==26665== by 0x42C0454: __libc_dlopen_mode (in >> /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x429A186: __nss_lookup_function (in >> /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x429A29F: (within /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x429BEC5: __nss_hosts_lookup (in >> /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== Address 0x432af8c is 36 bytes inside a block of size 37 >> alloc'd >> ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) >> ==26665== by 0x4008021: (within /lib/ld-2.7.so) >> ==26665== by 0x4011533: (within /lib/ld-2.7.so) >> ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) >> ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) >> ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) >> ==26665== by 0x42C0454: __libc_dlopen_mode (in >> /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x429A186: __nss_lookup_function (in >> /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x429A29F: (within /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x429BEC5: __nss_hosts_lookup (in >> /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x42A0782: gethostbyname_r (in /lib/tls/i686/cmov/ >> libc-2.7.so) >> ==26665== >> ==26665== Invalid read of size 4 >> ==26665== at 0x40151E3: (within /lib/ld-2.7.so) >> ==26665== by 0x4005C59: (within /lib/ld-2.7.so) >> ==26665== by 0x4007A87: (within /lib/ld-2.7.so) >> ==26665== by 0x4011533: (within /lib/ld-2.7.so) >> ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) >> ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) >> ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) >> ==26665== by 0x42C0454: __libc_dlopen_mode (in >> /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x429A186: __nss_lookup_function (in >> /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x4731FFB: (within /lib/tls/i686/cmov/ >> libnss_compat-2.7.so) >> ==26665== by 0x473313C: _nss_compat_getpwuid_r (in >> /lib/tls/i686/cmov/libnss_compat-2.7.so) >> ==26665== Address 0x432c1f8 is 32 bytes inside a block of size 35 >> alloc'd >> ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) >> ==26665== by 0x4008021: (within /lib/ld-2.7.so) >> ==26665== by 0x4011533: (within /lib/ld-2.7.so) >> ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) >> ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) >> ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) >> ==26665== by 0x42C0454: __libc_dlopen_mode (in >> /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x429A186: __nss_lookup_function (in >> 
/lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x4731FFB: (within /lib/tls/i686/cmov/ >> libnss_compat-2.7.so) >> ==26665== by 0x473313C: _nss_compat_getpwuid_r (in >> /lib/tls/i686/cmov/libnss_compat-2.7.so) >> ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/ >> libc-2.7.so) >> ==26665== >> ==26665== ERROR SUMMARY: 3 errors from 2 contexts (suppressed: 41 >> from 1) >> ==26665== malloc/free: in use at exit: 64,352 bytes in 87 blocks. >> ==26665== malloc/free: 2,472 allocs, 2,385 frees, 3,015,115 bytes >> allocated. >> ==26665== For counts of detected errors, rerun with: -v >> ==26665== searching for pointers to 87 not-freed blocks. >> ==26665== checked 1,120,136 bytes. >> ==26665== >> ==26665== >> ==26665== 32 bytes in 2 blocks are still reachable in loss record 1 >> of 5 >> ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) >> ==26665== by 0x871B2A8: MPID_VCRT_Create (mpid_vc.c:62) >> ==26665== by 0x8718C6A: MPID_Init (mpid_init.c:116) >> ==26665== by 0x86F1C3B: MPIR_Init_thread (initthread.c:288) >> ==26665== by 0x86F175D: PMPI_Init (init.c:106) >> ==26665== by 0x86355A1: PetscInitialize (pinit.c:503) >> ==26665== by 0x804B627: main (vdthpffxmhd.c:79) >> ==26665== >> ==26665== >> ==26665== 156 (36 direct, 120 indirect) bytes in 1 blocks are >> definitely lost in loss record 2 of 5 >> ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) >> ==26665== by 0x429A3E2: (within /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x429AC2D: __nss_database_lookup (in >> /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x4731FDB: ??? >> ==26665== by 0x473313C: ??? >> ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/ >> libc-2.7.so) >> ==26665== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/ >> libc-2.7.so) >> ==26665== by 0x8644D41: PetscGetUserName (fuser.c:68) >> ==26665== by 0x8602528: PetscErrorPrintfInitialize (errtrace.c:68) >> ==26665== by 0x863565B: PetscInitialize (pinit.c:518) >> ==26665== by 0x804B627: main (vdthpffxmhd.c:79) >> ==26665== >> ==26665== >> ==26665== 40 bytes in 5 blocks are indirectly lost in loss record 3 >> of 5 >> ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) >> ==26665== by 0x4299FBB: __nss_lookup_function (in >> /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x4731FFB: ??? >> ==26665== by 0x473313C: ??? >> ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/ >> libc-2.7.so) >> ==26665== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/ >> libc-2.7.so) >> ==26665== by 0x8644D41: PetscGetUserName (fuser.c:68) >> ==26665== by 0x8602528: PetscErrorPrintfInitialize (errtrace.c:68) >> ==26665== by 0x863565B: PetscInitialize (pinit.c:518) >> ==26665== by 0x804B627: main (vdthpffxmhd.c:79) >> ==26665== >> ==26665== >> ==26665== 80 bytes in 5 blocks are indirectly lost in loss record 4 >> of 5 >> ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) >> ==26665== by 0x428739B: tsearch (in /lib/tls/i686/cmov/ >> libc-2.7.so) >> ==26665== by 0x4299F7D: __nss_lookup_function (in >> /lib/tls/i686/cmov/libc-2.7.so) >> ==26665== by 0x4731FFB: ??? >> ==26665== by 0x473313C: ??? 
>> ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/ >> libc-2.7.so) >> ==26665== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/ >> libc-2.7.so) >> ==26665== by 0x8644D41: PetscGetUserName (fuser.c:68) >> ==26665== by 0x8602528: PetscErrorPrintfInitialize (errtrace.c:68) >> ==26665== by 0x863565B: PetscInitialize (pinit.c:518) >> ==26665== by 0x804B627: main (vdthpffxmhd.c:79) >> ==26665== >> ==26665== >> ==26665== 64,164 bytes in 74 blocks are still reachable in loss >> record 5 of 5 >> ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) >> ==26665== by 0x8610BE4: PetscMallocAlign (mal.c:40) >> ==26665== by 0x8611CD3: PetscTrMallocDefault (mtr.c:194) >> ==26665== by 0x81DE577: DACreate2d (da2.c:364) >> ==26665== by 0x804B701: main (vdthpffxmhd.c:84) >> ==26665== >> ==26665== LEAK SUMMARY: >> ==26665== definitely lost: 36 bytes in 1 blocks. >> ==26665== indirectly lost: 120 bytes in 10 blocks. >> ==26665== possibly lost: 0 bytes in 0 blocks. >> ==26665== still reachable: 64,196 bytes in 76 blocks. >> ==26665== suppressed: 0 bytes in 0 blocks. >> >> >> ------------------------------------------------------------------ >> -- >> (Rebecca) Xuefei YUAN >> Department of Applied Physics and Applied Mathematics >> Columbia University >> Tel:917-399-8032 >> www.columbia.edu/~xy2102 > > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > From xy2102 at columbia.edu Tue May 5 17:20:26 2009 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Tue, 05 May 2009 18:20:26 -0400 Subject: More about memory check---Some strange results running in PETSc. In-Reply-To: <20090429231806.8il8cv8248wwc0sc@cubmail.cc.columbia.edu> References: <20090429231806.8il8cv8248wwc0sc@cubmail.cc.columbia.edu> Message-ID: <20090505182026.dsun9f4fkso4sw4o@cubmail.cc.columbia.edu> Hi,Barry, This is the old email I sent out about the missing solution. Cheers, R Quoting "(Rebecca) Xuefei YUAN" : > Hi, > > I am running some codes and stores the solution in the text file. > However, I found that some results are wired in the sense that some > processors are "eating" my (i,j) index and the corresponding solution. > > For example, the solution at time step =235 on processor 6 is right, > but at time step = 236 on processor 6, one grid solution is missing > and thus the order of the index is wrong. 
> > In the attached two files: > hp.solution.dt0.16700.n90.t235.p6 (right one) > hp.solution.dt0.16700.n90.t236.p6 (wrong one) > for example, > > in hp.solution.dt0.16700.n90.t235.p6 (right one) > i j > --------------------------------------------------------------------------------------------------------------------------------------------------------------------- > 50 14 3.3212491928636803e-02 7.2992225179014901e-03 > 2.9841295384404947e+00 2.2004855368148415e-02 > 51 14 4.0287965701667774e-02 2.5401878231124070e-03 > 2.9201873761746322e+00 2.6251864239477816e-02 > 52 14 4.7084950235070790e-02 -1.5367647745423544e-03 > 2.8460826176461214e+00 3.1550405800570377e-02 > 53 14 5.3394938807608198e-02 -3.8189091837479271e-03 > 2.7618414550374171e+00 3.4078755072804334e-02 > -------------------------------------------------------------------------------------------------------------------------------------------------------------------- > however, in hp.solution.dt0.16700.n90.t236.p6 (wrong one) > i j > --------------------------------------------------------------------------------------------------------------------------------------------------------------------- > 50 14 3.1239406700376452e-02 8.5179559039992043e-03 > 2.9840003096148520e+00 2.1760859158622522e-02 > 51 14 3.8032143218986063e-02 3.6341965920997721e-03 > 2.9198035731818854e+00 2.6200771510346648e-02 > 53 14 5.0661309132451274e-02 -2.9274557377189617e-03 > 2.7606822480069755e+00 3.4021016413777964e-02 > 54 14 5.6049570141121191e-02 -2.8111244430837979e-03 > 2.6669503598276267e+00 3.8855104759650566e-02 > -------------------------------------------------------------------------------------------------------------------------------------------------------------------- > the (i,j) = (52,14) is missing and as a result, one grid point > solution is missing. > > I did not understand how this happens and why this happens? > Any ideas? > > Thanks very much! > > Cheers, > > Rebecca > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From damian.kaleta at gmail.com Tue May 5 21:32:27 2009 From: damian.kaleta at gmail.com (Damian Kaleta) Date: Tue, 5 May 2009 21:32:27 -0500 Subject: implementing my own mat vec multiplayer Message-ID: <9B55E934-BDE4-4915-9D87-3A7F27316E67@mail.utexas.edu> Hi, I was wondering if PETSc will allow me to implement my own matrix- vector multiplayer when I use Linear solvers (KSP). If it is possible, how can I achieve that? Thank you, Damian Kaleta From bsmith at mcs.anl.gov Tue May 5 21:51:18 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 5 May 2009 21:51:18 -0500 Subject: implementing my own mat vec multiplayer In-Reply-To: <9B55E934-BDE4-4915-9D87-3A7F27316E67@mail.utexas.edu> References: <9B55E934-BDE4-4915-9D87-3A7F27316E67@mail.utexas.edu> Message-ID: See the manual page for MatCreateShell http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatCreateShell.html#MatCreateShell On May 5, 2009, at 9:32 PM, Damian Kaleta wrote: > Hi, > > I was wondering if PETSc will allow me to implement my own matrix- > vector multiplayer when I use Linear solvers (KSP). If it is > possible, how can I achieve that? 
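To flesh out the MatCreateShell pointer above, here is a rough, self-contained sketch of the shell-matrix route. The 2*x operator, the MyCtx struct and the vector sizes are invented purely for illustration (this is not Damian's operator), and the calls follow the PETSc 3.0-era API used elsewhere in this thread (MatDestroy/KSPDestroy take the object itself, KSPSetOperators still takes a MatStructure flag).

static char help[] = "Shell matrix sketch: solve A x = b with A = 2*I.\n";

#include "petscksp.h"

typedef struct { PetscScalar alpha; } MyCtx;     /* whatever your operator needs */

PetscErrorCode MyMatMult(Mat A,Vec x,Vec y)
{
  MyCtx          *ctx;
  PetscErrorCode ierr;
  ierr = MatShellGetContext(A,(void**)&ctx);CHKERRQ(ierr);
  ierr = VecCopy(x,y);CHKERRQ(ierr);             /* y = x        */
  ierr = VecScale(y,ctx->alpha);CHKERRQ(ierr);   /* y = alpha*x  */
  return 0;
}

int main(int argc,char **argv)
{
  Mat            A;
  Vec            x,b;
  KSP            ksp;
  PC             pc;
  MyCtx          ctx;
  PetscInt       n = 10;                         /* local size, illustrative only */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr);
  ctx.alpha = 2.0;

  ierr = VecCreateMPI(PETSC_COMM_WORLD,n,PETSC_DETERMINE,&b);CHKERRQ(ierr);
  ierr = VecDuplicate(b,&x);CHKERRQ(ierr);
  ierr = VecSet(b,1.0);CHKERRQ(ierr);

  /* the shell matrix carries only a context pointer and the user's MatMult */
  ierr = MatCreateShell(PETSC_COMM_WORLD,n,n,PETSC_DETERMINE,PETSC_DETERMINE,&ctx,&A);CHKERRQ(ierr);
  ierr = MatShellSetOperation(A,MATOP_MULT,(void(*)(void))MyMatMult);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCNONE);CHKERRQ(ierr);     /* a shell matrix has no entries for ILU to factor */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);        /* expect x = 0.5 everywhere */

  ierr = KSPDestroy(ksp);CHKERRQ(ierr);
  ierr = MatDestroy(A);CHKERRQ(ierr);
  ierr = VecDestroy(x);CHKERRQ(ierr);
  ierr = VecDestroy(b);CHKERRQ(ierr);
  ierr = PetscFinalize();CHKERRQ(ierr);
  return 0;
}

The PCNONE is the one non-obvious piece: with a matrix-free operator the default preconditioner has nothing to factor, so either turn preconditioning off as above or supply a PCSHELL of your own.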
> > Thank you, > Damian Kaleta From knepley at gmail.com Wed May 6 07:42:02 2009 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 6 May 2009 07:42:02 -0500 Subject: memory check: error message from valgrind In-Reply-To: <20090505174641.z0dyh1qscgkkgc0w@cubmail.cc.columbia.edu> References: <20090505174641.z0dyh1qscgkkgc0w@cubmail.cc.columbia.edu> Message-ID: valgrind is unhappy with nss_lookup, but you can ignore these. They are not your error. Matt On Tue, May 5, 2009 at 4:46 PM, (Rebecca) Xuefei YUAN wrote: > Hi, > > I am using valgrind to check the memory leaking and it turned out that I > got 3 errors and they were happening at the the PetscInitialize(), and > DACreate2d() > > line 79: PetscInitialize(&argc, &argv, (char*)0, help); > line 84: ierr = DACreate2d(comm,DA_XPERIODIC,DA_STENCIL_BOX, -5, -5, > PETSC_DECIDE, PETSC_DECIDE, 4, 2, 0, 0, &da);CHKERRQ(ierr); > > > What could be wrong here? Thanks! > > The following is the message from valgrind > > ----------------------------------------------------------------- > valgrind --leak-check=full --show-reachable=yes --tool=memcheck > ./vdthpffxmhd -options_file option_ffxmhd>output > ==26665== Memcheck, a memory error detector. > ==26665== Copyright (C) 2002-2007, and GNU GPL'd, by Julian Seward et al. > ==26665== Using LibVEX rev 1804, a library for dynamic binary translation. > ==26665== Copyright (C) 2004-2007, and GNU GPL'd, by OpenWorks LLP. > ==26665== Using valgrind-3.3.0-Debian, a dynamic binary instrumentation > framework. > ==26665== Copyright (C) 2000-2007, and GNU GPL'd, by Julian Seward et al. > ==26665== For more details, rerun with: -v > ==26665== > ==26665== Invalid read of size 4 > ==26665== at 0x40151F9: (within /lib/ld-2.7.so) > ==26665== by 0x4005C59: (within /lib/ld-2.7.so) > ==26665== by 0x4007A87: (within /lib/ld-2.7.so) > ==26665== by 0x4011533: (within /lib/ld-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) > ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== by 0x429A29F: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x429BEC5: __nss_hosts_lookup (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== Address 0x432af8c is 36 bytes inside a block of size 37 alloc'd > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x4008021: (within /lib/ld-2.7.so) > ==26665== by 0x4011533: (within /lib/ld-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) > ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== by 0x429A29F: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x429BEC5: __nss_hosts_lookup (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== by 0x42A0782: gethostbyname_r (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== > ==26665== Invalid read of size 4 > ==26665== at 0x40151E3: (within /lib/ld-2.7.so) > ==26665== by 0x4005C59: (within /lib/ld-2.7.so) > ==26665== by 0x4007A87: (within /lib/ld-2.7.so) > ==26665== by 0x4011533: (within /lib/ld-2.7.so) > ==26665== by 0x400D5C5: (within 
/lib/ld-2.7.so) > ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) > ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== by 0x4731FFB: (within /lib/tls/i686/cmov/libnss_compat-2.7.so > ) > ==26665== by 0x473313C: _nss_compat_getpwuid_r (in /lib/tls/i686/cmov/ > libnss_compat-2.7.so) > ==26665== Address 0x432c1f8 is 32 bytes inside a block of size 35 alloc'd > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x4008021: (within /lib/ld-2.7.so) > ==26665== by 0x4011533: (within /lib/ld-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x4010F4D: (within /lib/ld-2.7.so) > ==26665== by 0x42C0291: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x400D5C5: (within /lib/ld-2.7.so) > ==26665== by 0x42C0454: __libc_dlopen_mode (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== by 0x429A186: __nss_lookup_function (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== by 0x4731FFB: (within /lib/tls/i686/cmov/libnss_compat-2.7.so > ) > ==26665== by 0x473313C: _nss_compat_getpwuid_r (in /lib/tls/i686/cmov/ > libnss_compat-2.7.so) > ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== > ==26665== ERROR SUMMARY: 3 errors from 2 contexts (suppressed: 41 from 1) > ==26665== malloc/free: in use at exit: 64,352 bytes in 87 blocks. > ==26665== malloc/free: 2,472 allocs, 2,385 frees, 3,015,115 bytes > allocated. > ==26665== For counts of detected errors, rerun with: -v > ==26665== searching for pointers to 87 not-freed blocks. > ==26665== checked 1,120,136 bytes. > ==26665== > ==26665== > ==26665== 32 bytes in 2 blocks are still reachable in loss record 1 of 5 > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x871B2A8: MPID_VCRT_Create (mpid_vc.c:62) > ==26665== by 0x8718C6A: MPID_Init (mpid_init.c:116) > ==26665== by 0x86F1C3B: MPIR_Init_thread (initthread.c:288) > ==26665== by 0x86F175D: PMPI_Init (init.c:106) > ==26665== by 0x86355A1: PetscInitialize (pinit.c:503) > ==26665== by 0x804B627: main (vdthpffxmhd.c:79) > ==26665== > ==26665== > ==26665== 156 (36 direct, 120 indirect) bytes in 1 blocks are definitely > lost in loss record 2 of 5 > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x429A3E2: (within /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x429AC2D: __nss_database_lookup (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== by 0x4731FDB: ??? > ==26665== by 0x473313C: ??? > ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x8644D41: PetscGetUserName (fuser.c:68) > ==26665== by 0x8602528: PetscErrorPrintfInitialize (errtrace.c:68) > ==26665== by 0x863565B: PetscInitialize (pinit.c:518) > ==26665== by 0x804B627: main (vdthpffxmhd.c:79) > ==26665== > ==26665== > ==26665== 40 bytes in 5 blocks are indirectly lost in loss record 3 of 5 > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x4299FBB: __nss_lookup_function (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== by 0x4731FFB: ??? > ==26665== by 0x473313C: ??? 
> ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x8644D41: PetscGetUserName (fuser.c:68) > ==26665== by 0x8602528: PetscErrorPrintfInitialize (errtrace.c:68) > ==26665== by 0x863565B: PetscInitialize (pinit.c:518) > ==26665== by 0x804B627: main (vdthpffxmhd.c:79) > ==26665== > ==26665== > ==26665== 80 bytes in 5 blocks are indirectly lost in loss record 4 of 5 > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x428739B: tsearch (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x4299F7D: __nss_lookup_function (in /lib/tls/i686/cmov/ > libc-2.7.so) > ==26665== by 0x4731FFB: ??? > ==26665== by 0x473313C: ??? > ==26665== by 0x4246D15: getpwuid_r (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x424665D: getpwuid (in /lib/tls/i686/cmov/libc-2.7.so) > ==26665== by 0x8644D41: PetscGetUserName (fuser.c:68) > ==26665== by 0x8602528: PetscErrorPrintfInitialize (errtrace.c:68) > ==26665== by 0x863565B: PetscInitialize (pinit.c:518) > ==26665== by 0x804B627: main (vdthpffxmhd.c:79) > ==26665== > ==26665== > ==26665== 64,164 bytes in 74 blocks are still reachable in loss record 5 of > 5 > ==26665== at 0x4022AB8: malloc (vg_replace_malloc.c:207) > ==26665== by 0x8610BE4: PetscMallocAlign (mal.c:40) > ==26665== by 0x8611CD3: PetscTrMallocDefault (mtr.c:194) > ==26665== by 0x81DE577: DACreate2d (da2.c:364) > ==26665== by 0x804B701: main (vdthpffxmhd.c:84) > ==26665== > ==26665== LEAK SUMMARY: > ==26665== definitely lost: 36 bytes in 1 blocks. > ==26665== indirectly lost: 120 bytes in 10 blocks. > ==26665== possibly lost: 0 bytes in 0 blocks. > ==26665== still reachable: 64,196 bytes in 76 blocks. > ==26665== suppressed: 0 bytes in 0 blocks. > > > ------------------------------------------------------------------ > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredrik.bengzon at math.umu.se Wed May 6 11:03:57 2009 From: fredrik.bengzon at math.umu.se (Fredrik Bengzon) Date: Wed, 06 May 2009 18:03:57 +0200 Subject: example of call to superlu_dist Message-ID: <4A01B4ED.50303@math.umu.se> Hi Is there an example of how to call superlu_dist somewhere. I'm not looking for command line options, but how to call KSP in my code when using superlu_dist. I've set the KSPPREONLY, and PCLU options, and also made a call to PCFactorSetMatSolverPackage(), but this does not seem to be the right way to do it. Petsc aborts with error message 'mpiaij matrix does not have a build-in LU solver'. Do I need to specify any particular matrix format to use with superlu_dist? Any input is appreciated. Regards Fredrik Bengzon From knepley at gmail.com Wed May 6 11:19:24 2009 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 6 May 2009 11:19:24 -0500 Subject: example of call to superlu_dist In-Reply-To: <4A01B4ED.50303@math.umu.se> References: <4A01B4ED.50303@math.umu.se> Message-ID: Here is a test from src/ksp/ksp/examples/tutorials/makefile: ./ex10 -f0 ${DATAFILESPATH}/matrices/small -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist -num_numfac 2 -num_rhs 2 You should be able to run that. 
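The original question was about doing this from source code rather than the command line, so for completeness a minimal sketch of the programmatic equivalent follows (Satish's ex2.c diff further down in the thread shows the same calls in context). It assumes ksp, b and x already exist with the operators set, uses the PETSc 3.0-era constant MAT_SOLVER_SUPERLU_DIST, and sets the PC type to LU before selecting the package -- the call order Fredrik later reports as the fix.

  PC pc;

  ierr = KSPSetType(ksp,KSPPREONLY);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);                                       /* type first ...       */
  ierr = PCFactorSetMatSolverPackage(pc,MAT_SOLVER_SUPERLU_DIST);CHKERRQ(ierr);  /* ... then the package */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);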
Matt On Wed, May 6, 2009 at 11:03 AM, Fredrik Bengzon < fredrik.bengzon at math.umu.se> wrote: > Hi > Is there an example of how to call superlu_dist somewhere. I'm not looking > for command line options, but how to call KSP in my code when using > superlu_dist. I've set the KSPPREONLY, and PCLU options, and also made a > call to PCFactorSetMatSolverPackage(), but this does not seem to be the > right way to do it. Petsc aborts with error message 'mpiaij matrix does not > have a build-in LU solver'. Do I need to specify any particular matrix > format to use with superlu_dist? Any input is appreciated. > Regards > Fredrik Bengzon > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Wed May 6 11:20:45 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Wed, 6 May 2009 11:20:45 -0500 (CDT) Subject: example of call to superlu_dist In-Reply-To: <4A01B4ED.50303@math.umu.se> References: <4A01B4ED.50303@math.umu.se> Message-ID: '-ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist' See src/ksp/ksp/examples/tutorials/makefile Hong On Wed, 6 May 2009, Fredrik Bengzon wrote: > Hi > Is there an example of how to call superlu_dist somewhere. I'm not looking > for command line options, but how to call KSP in my code when using > superlu_dist. I've set the KSPPREONLY, and PCLU options, and also made a call > to PCFactorSetMatSolverPackage(), but this does not seem to be the right way > to do it. Petsc aborts with error message 'mpiaij matrix does not have a > build-in LU solver'. Do I need to specify any particular matrix format to use > with superlu_dist? Any input is appreciated. > Regards > Fredrik Bengzon > From socrates.wei at gmail.com Wed May 6 11:56:45 2009 From: socrates.wei at gmail.com (Zi-Hao Wei) Date: Thu, 7 May 2009 00:56:45 +0800 Subject: Memory usage in log summary Message-ID: Hello For example, if I used four processors, the Mat memory usage is 10 megabytes. Does the total Mat memory usage should be 40 megabytes? Thanks. -- Zi-Hao Wei Department of Mathematics National Central University, Taiwan Emo Philips - "I got some new underwear the other day. Well, new to me." - http://www.brainyquote.com/quotes/authors/e/emo_philips.html From balay at mcs.anl.gov Wed May 6 12:14:10 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 6 May 2009 12:14:10 -0500 (CDT) Subject: example of call to superlu_dist In-Reply-To: <4A01B4ED.50303@math.umu.se> References: <4A01B4ED.50303@math.umu.se> Message-ID: PCFactorSetMatSolverPackage() works for me with a test example. Satish [petsc:ksp/examples/tutorials] petsc> hg diff ex2.c diff -r fca389f2db83 src/ksp/ksp/examples/tutorials/ex2.c --- a/src/ksp/ksp/examples/tutorials/ex2.c Wed May 06 10:52:26 2009 -0500 +++ b/src/ksp/ksp/examples/tutorials/ex2.c Wed May 06 12:13:10 2009 -0500 @@ -38,6 +38,7 @@ PetscErrorCode ierr; PetscTruth flg = PETSC_FALSE; PetscScalar v,one = 1.0,neg_one = -1.0; + PC pc; #if defined(PETSC_USE_LOG) PetscLogStage stage; #endif @@ -187,6 +188,9 @@ KSPSetFromOptions() is called _after_ any other customization routines. 
*/ + ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr); + ierr = PCSetType(pc,PCLU);CHKERRQ(ierr); + ierr = PCFactorSetMatSolverPackage(pc,MAT_SOLVER_SUPERLU_DIST); ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); /* - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - [petsc:ksp/examples/tutorials] petsc> mpiexec -n 2 ./ex2 -ksp_view KSP Object: type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=0.000138889, absolute=1e-50, divergence=10000 left preconditioning PC Object: type: lu LU: out-of-place factorization matrix ordering: natural LU: tolerance for zero pivot 1e-12 LU: factor fill ratio needed 0 Factored matrix follows Matrix Object: type=mpiaij, rows=56, cols=56 package used to perform factorization: superlu_dist total: nonzeros=0, allocated nonzeros=112 SuperLU_DIST run parameters: Process grid nprow 2 x npcol 1 Equilibrate matrix TRUE Matrix input mode 1 Replace tiny pivots TRUE Use iterative refinement FALSE Processors in row 2 col partition 1 Row permutation LargeDiag Column permutation MMD_AT_PLUS_A Parallel symbolic factorization FALSE Repeated factorization SamePattern linear system matrix = precond matrix: Matrix Object: type=mpiaij, rows=56, cols=56 total: nonzeros=250, allocated nonzeros=560 not using I-node (on process 0) routines Norm of error < 1.e-12 iterations 1 [petsc:ksp/examples/tutorials] petsc> On Wed, 6 May 2009, Fredrik Bengzon wrote: > Hi > Is there an example of how to call superlu_dist somewhere. I'm not looking > for command line options, but how to call KSP in my code when using > superlu_dist. I've set the KSPPREONLY, and PCLU options, and also made a call > to PCFactorSetMatSolverPackage(), but this does not seem to be the right way > to do it. Petsc aborts with error message 'mpiaij matrix does not have a > build-in LU solver'. Do I need to specify any particular matrix format to use > with superlu_dist? Any input is appreciated. > Regards > Fredrik Bengzon > From balay at mcs.anl.gov Wed May 6 12:16:56 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 6 May 2009 12:16:56 -0500 (CDT) Subject: example of call to superlu_dist In-Reply-To: References: <4A01B4ED.50303@math.umu.se> Message-ID: On Wed, 6 May 2009, Satish Balay wrote: > PCFactorSetMatSolverPackage() works for me with a test example. > On Wed, 6 May 2009, Fredrik Bengzon wrote: > > > Hi > > Is there an example of how to call superlu_dist somewhere. I'm not looking > > for command line options, but how to call KSP in my code when using > > superlu_dist. I've set the KSPPREONLY, and PCLU options, and also made a call > > to PCFactorSetMatSolverPackage(), but this does not seem to be the right way > > to do it. Petsc aborts with error message 'mpiaij matrix does not have a > > build-in LU solver'. Do I need to specify any particular matrix format to use > > with superlu_dist? Any input is appreciated. BTW: Did you build PETSc with superlu_dist? 
Do the examples work with superlu_dist - with the command line options: '-ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist' Satish From knepley at gmail.com Wed May 6 12:28:11 2009 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 6 May 2009 12:28:11 -0500 Subject: Memory usage in log summary In-Reply-To: References: Message-ID: On Wed, May 6, 2009 at 11:56 AM, Zi-Hao Wei wrote: > Hello > > For example, if I used four processors, the Mat memory usage is 10 > megabytes. > Does the total Mat memory usage should be 40 megabytes? I do not understand your question. However, the memory reported by log_summary is the total over all processes. Matt > > Thanks. > > -- > Zi-Hao Wei > Department of Mathematics > National Central University, Taiwan > > Emo Philips - "I got some new underwear the other day. Well, new to > me." - http://www.brainyquote.com/quotes/authors/e/emo_philips.html > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From fredrik.bengzon at math.umu.se Wed May 6 13:16:15 2009 From: fredrik.bengzon at math.umu.se (Fredrik Bengzon) Date: Wed, 06 May 2009 20:16:15 +0200 Subject: example of call to superlu_dist In-Reply-To: References: <4A01B4ED.50303@math.umu.se> Message-ID: <4A01D3EF.4060000@math.umu.se> Thank you all, After interchanging the calls to PCSetType and PCFactorSetMatSolver SuperLU_dist works. /Fredrik PCFactorSetMatSolverPackage Now Fredrik Bengzon Satish Balay wrote: > On Wed, 6 May 2009, Satish Balay wrote: > > >> PCFactorSetMatSolverPackage() works for me with a test example. >> > > >> On Wed, 6 May 2009, Fredrik Bengzon wrote: >> >> >>> Hi >>> Is there an example of how to call superlu_dist somewhere. I'm not looking >>> for command line options, but how to call KSP in my code when using >>> superlu_dist. I've set the KSPPREONLY, and PCLU options, and also made a call >>> to PCFactorSetMatSolverPackage(), but this does not seem to be the right way >>> to do it. Petsc aborts with error message 'mpiaij matrix does not have a >>> build-in LU solver'. Do I need to specify any particular matrix format to use >>> with superlu_dist? Any input is appreciated. >>> > > BTW: Did you build PETSc with superlu_dist? Do the examples work with > superlu_dist - with the command line options: > > '-ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist' > > Satish > > From schuang at ats.ucla.edu Thu May 7 16:09:44 2009 From: schuang at ats.ucla.edu (Shao-Ching Huang) Date: Thu, 07 May 2009 14:09:44 -0700 Subject: user's own preconditioner for PCG Message-ID: <4A034E18.5010505@ats.ucla.edu> Hi, In a parallel PCG solve, is there a way to instruct PETSc to call my own (serial) solver to invert the per-process, local precondition matrix (as in the block-Jacobi preconditioner)? Is there an example/documentation that I can follow? Which part of code should I start looking at? We are trying to determine if there is any value to use the aforementioned customized preconditioner as compared to the general ones in PETSc and to DMMG, for this particular matrix at hand. This is a structured mesh problem. Thank you. 
Shao-Ching From bsmith at mcs.anl.gov Thu May 7 16:11:44 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 7 May 2009 16:11:44 -0500 Subject: user's own preconditioner for PCG In-Reply-To: <4A034E18.5010505@ats.ucla.edu> References: <4A034E18.5010505@ats.ucla.edu> Message-ID: <37AC4DC5-911E-42AC-8204-D5A6C2759CB5@mcs.anl.gov> See the manual page for PCSHELL On May 7, 2009, at 4:09 PM, Shao-Ching Huang wrote: > Hi, > > In a parallel PCG solve, is there a way to instruct PETSc to call my > own (serial) solver to invert the per-process, local precondition > matrix (as in the block-Jacobi preconditioner)? > > Is there an example/documentation that I can follow? Which part of > code should I start looking at? > > We are trying to determine if there is any value to use the > aforementioned customized preconditioner as compared to the general > ones in PETSc and to DMMG, for this particular matrix at hand. This > is a structured mesh problem. > > Thank you. > > Shao-Ching From knepley at gmail.com Thu May 7 16:12:31 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 7 May 2009 16:12:31 -0500 Subject: user's own preconditioner for PCG In-Reply-To: <4A034E18.5010505@ats.ucla.edu> References: <4A034E18.5010505@ats.ucla.edu> Message-ID: On Thu, May 7, 2009 at 4:09 PM, Shao-Ching Huang wrote: > Hi, > > In a parallel PCG solve, is there a way to instruct PETSc to call my own > (serial) solver to invert the per-process, local precondition matrix (as in > the block-Jacobi preconditioner)? It sounds like the proper way to do this is to use -pc_type bjacobi -sub_ksp_type preonly and then pull out then replace the block PC with a PCSHELL with wraps up your preconditioner. Matt > > Is there an example/documentation that I can follow? Which part of code > should I start looking at? > > We are trying to determine if there is any value to use the aforementioned > customized preconditioner as compared to the general ones in PETSc and to > DMMG, for this particular matrix at hand. This is a structured mesh problem. > > Thank you. > > Shao-Ching > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From schuang at ats.ucla.edu Thu May 7 16:31:41 2009 From: schuang at ats.ucla.edu (Shao-Ching Huang) Date: Thu, 07 May 2009 14:31:41 -0700 Subject: user's own preconditioner for PCG In-Reply-To: References: <4A034E18.5010505@ats.ucla.edu> Message-ID: <4A03533D.8070006@ats.ucla.edu> Barry and Matt: Thank you. I will look into PCSHELL. Shao-Ching Matthew Knepley wrote: > On Thu, May 7, 2009 at 4:09 PM, Shao-Ching Huang > wrote: > > Hi, > > In a parallel PCG solve, is there a way to instruct PETSc to call my > own (serial) solver to invert the per-process, local precondition > matrix (as in the block-Jacobi preconditioner)? > > > It sounds like the proper way to do this is to use -pc_type bjacobi > -sub_ksp_type preonly and then pull out > then replace the block PC with a PCSHELL with wraps up your preconditioner. > > Matt > > > > Is there an example/documentation that I can follow? Which part of > code should I start looking at? > > We are trying to determine if there is any value to use the > aforementioned customized preconditioner as compared to the general > ones in PETSc and to DMMG, for this particular matrix at hand. This > is a structured mesh problem. > > Thank you. 
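Putting Barry's PCSHELL pointer and Matt's bjacobi/preonly suggestion together, a rough sketch might look like the following. MyLocalPCApply is a placeholder for the user's own serial solver, the code assumes one block per process, the names follow petsc-3.0, and the first argument of the shell apply callback differs between releases (a PC here, a void* user context in older versions) -- check the PCShellSetApply manual page for your version.

/* y = M^{-1} x on the block owned by this process */
PetscErrorCode MyLocalPCApply(PC shellpc,Vec x,Vec y)
{
  PetscErrorCode ierr;
  ierr = VecCopy(x,y);CHKERRQ(ierr);   /* replace with a call into your own serial solver */
  return 0;
}

/* ... later, once the outer KSP has its operators: */
PC       pc,subpc;
KSP      *subksp;
PetscInt nlocal,first;

ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
ierr = PCSetType(pc,PCBJACOBI);CHKERRQ(ierr);
ierr = KSPSetUp(ksp);CHKERRQ(ierr);                     /* the sub-KSPs exist only after setup */
ierr = PCBJacobiGetSubKSP(pc,&nlocal,&first,&subksp);CHKERRQ(ierr);
ierr = KSPSetType(subksp[0],KSPPREONLY);CHKERRQ(ierr);  /* apply the local PC once per outer iteration */
ierr = KSPGetPC(subksp[0],&subpc);CHKERRQ(ierr);
ierr = PCSetType(subpc,PCSHELL);CHKERRQ(ierr);
ierr = PCShellSetApply(subpc,MyLocalPCApply);CHKERRQ(ierr);
ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);

Whether this beats the built-in sub-preconditioners is exactly the comparison Shao-Ching is after, so it is worth timing both variants, for instance with -log_summary.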
> > Shao-Ching > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener From ruiwang2 at illinois.edu Thu May 7 20:19:33 2009 From: ruiwang2 at illinois.edu (Rui Wang) Date: Thu, 7 May 2009 20:19:33 -0500 (CDT) Subject: ILUDropTolerance is not compatible with Mat re-ordering?? Message-ID: <20090507201933.BNW22896@expms3.cites.uiuc.edu> Dear All, I am trying very hard to use PCILUSetUseDropTolerance() together with PCILUSetMatOrdering(), but it does not work, no matter what kind of MatOrderingType (RCM, ND...) i choose. The message I got: ------------------------------------------------------------- Note: The EXACT line numbers in the stack are not available, INSTEAD the line number of the start of the function is given. [0] MatILUDTFactor_SeqAIJ line 66 src/mat/impls/aij/seq/aijfact.c [0] MatILUDTFactor line 1362 src/mat/interface/matrix.c [0] PCSetUp_ILU line 568 src/sles/pc/impls/ilu/ilu.c [0] PCSetUp line 756 src/sles/pc/interface/precon.c [0] SLESSolve line 466 src/sles/interface/sles.c -------------------------------------------- [0]PETSC ERROR: unknownfunction() line 0 in unknown file [0] MPI Abort by user Aborting program ! [0] Aborting program! p0_32609: p4_error: : 59 ---------------------------------------------------------- I spent a lot of time on this and still cannot figure it out. Actually if I just use PCILUSetUseDropTolerance() itself, it works fine. Also, if I choose other ILU methods such as level-based ILU, PCILUSetMatOrdering() works perfectly with them. I wonder how to use ILUDropTolerance together with the reordering technique (such as RCM)? or they are not compatible? BTW, my PETSc version is 2.1.0. Is this because i am using this old version? Thanks a lot. Sincerely, Rui Wang -------------------------------------------------- Rui Wang Ph.D. Candidate Research Assistant and Predoctoral Fellow Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign From bsmith at mcs.anl.gov Thu May 7 20:25:25 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 7 May 2009 20:25:25 -0500 Subject: ILUDropTolerance is not compatible with Mat re-ordering?? In-Reply-To: <20090507201933.BNW22896@expms3.cites.uiuc.edu> References: <20090507201933.BNW22896@expms3.cites.uiuc.edu> Message-ID: <8DCAE1CD-6BC8-4601-B659-D508D23FDE1A@mcs.anl.gov> PETSc has currently very little support for drop tolerance ILU. ILU is the bane of any decent mathematicians existence. There isn't support for a reordering. Hong is actually working on much better, more general and more complete ILUdt that should be ready in a couple of months. (Yes, Hong has gone over to the dark side of mathematics :-)). Barry On May 7, 2009, at 8:19 PM, Rui Wang wrote: > Dear All, > > I am trying very hard to use PCILUSetUseDropTolerance() > together with PCILUSetMatOrdering(), but it does not work, no matter > what kind of MatOrderingType (RCM, ND...) i choose. The message I got: > > ------------------------------------------------------------- > Note: The EXACT line numbers in the stack are not available, > INSTEAD the line number of the start of the function > is given. 
> [0] MatILUDTFactor_SeqAIJ line 66 src/mat/impls/aij/seq/aijfact.c > [0] MatILUDTFactor line 1362 src/mat/interface/matrix.c > [0] PCSetUp_ILU line 568 src/sles/pc/impls/ilu/ilu.c > [0] PCSetUp line 756 src/sles/pc/interface/precon.c > [0] SLESSolve line 466 src/sles/interface/sles.c > -------------------------------------------- > [0]PETSC ERROR: unknownfunction() line 0 in unknown file > [0] MPI Abort by user Aborting program ! > [0] Aborting program! > p0_32609: p4_error: : 59 > ---------------------------------------------------------- > > I spent a lot of time on this and still cannot figure it out. > > Actually if I just use PCILUSetUseDropTolerance() itself, it works > fine. Also, if I choose other ILU methods such as level-based ILU, > PCILUSetMatOrdering() works perfectly with them. > > I wonder how to use ILUDropTolerance together with the reordering > technique (such as RCM)? or they are not compatible? > BTW, my PETSc version is 2.1.0. Is this because i am using this old > version? > > Thanks a lot. > > Sincerely, > Rui Wang > > > > > -------------------------------------------------- > Rui Wang > Ph.D. Candidate > Research Assistant and Predoctoral Fellow > Department of Electrical and Computer Engineering > University of Illinois at Urbana-Champaign From ruiwang2 at illinois.edu Thu May 7 21:26:54 2009 From: ruiwang2 at illinois.edu (Rui Wang) Date: Thu, 7 May 2009 21:26:54 -0500 (CDT) Subject: ILUDropTolerance is not compatible with Mat re-ordering?? In-Reply-To: <8DCAE1CD-6BC8-4601-B659-D508D23FDE1A@mcs.anl.gov> References: <20090507201933.BNW22896@expms3.cites.uiuc.edu> <8DCAE1CD-6BC8-4601-B659-D508D23FDE1A@mcs.anl.gov> Message-ID: <20090507212654.BNW28776@expms3.cites.uiuc.edu> Thanks a lot. I appreciate your help. Have a nice evening. Rui ---- Original message ---- >Date: Thu, 7 May 2009 20:25:25 -0500 >From: Barry Smith >Subject: Re: ILUDropTolerance is not compatible with Mat re-ordering?? >To: ruiwang2 at illinois.edu, PETSc users list > > > > PETSc has currently very little support for drop tolerance ILU. >ILU is the bane of any decent mathematicians existence. > > There isn't support for a reordering. > > Hong is actually working on much better, more general and more >complete ILUdt that should be ready in a couple of >months. (Yes, Hong has gone over to the dark side of mathematics :-)). > > Barry > >On May 7, 2009, at 8:19 PM, Rui Wang wrote: > >> Dear All, >> >> I am trying very hard to use PCILUSetUseDropTolerance() >> together with PCILUSetMatOrdering(), but it does not work, no matter >> what kind of MatOrderingType (RCM, ND...) i choose. The message I got: >> >> ------------------------------------------------------------- >> Note: The EXACT line numbers in the stack are not available, >> INSTEAD the line number of the start of the function >> is given. >> [0] MatILUDTFactor_SeqAIJ line 66 src/mat/impls/aij/seq/aijfact.c >> [0] MatILUDTFactor line 1362 src/mat/interface/matrix.c >> [0] PCSetUp_ILU line 568 src/sles/pc/impls/ilu/ilu.c >> [0] PCSetUp line 756 src/sles/pc/interface/precon.c >> [0] SLESSolve line 466 src/sles/interface/sles.c >> -------------------------------------------- >> [0]PETSC ERROR: unknownfunction() line 0 in unknown file >> [0] MPI Abort by user Aborting program ! >> [0] Aborting program! >> p0_32609: p4_error: : 59 >> ---------------------------------------------------------- >> >> I spent a lot of time on this and still cannot figure it out. >> >> Actually if I just use PCILUSetUseDropTolerance() itself, it works >> fine. 
Also, if I choose other ILU methods such as level-based ILU, >> PCILUSetMatOrdering() works perfectly with them. >> >> I wonder how to use ILUDropTolerance together with the reordering >> technique (such as RCM)? or they are not compatible? >> BTW, my PETSc version is 2.1.0. Is this because i am using this old >> version? >> >> Thanks a lot. >> >> Sincerely, >> Rui Wang >> >> >> >> >> -------------------------------------------------- >> Rui Wang >> Ph.D. Candidate >> Research Assistant and Predoctoral Fellow >> Department of Electrical and Computer Engineering >> University of Illinois at Urbana-Champaign > From hzhang at mcs.anl.gov Thu May 7 21:34:19 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Thu, 7 May 2009 21:34:19 -0500 (CDT) Subject: ILUDropTolerance is not compatible with Mat re-ordering?? In-Reply-To: <20090507212654.BNW28776@expms3.cites.uiuc.edu> References: <20090507201933.BNW22896@expms3.cites.uiuc.edu> <8DCAE1CD-6BC8-4601-B659-D508D23FDE1A@mcs.anl.gov> <20090507212654.BNW28776@expms3.cites.uiuc.edu> Message-ID: Rui, I'm currently working on iludt. You can try it from petsc-dev. See http://www.mcs.anl.gov/petsc/petsc-as/developers/index.html on how to get petsc-dev. Example: petsc-dev/src/ksp/ksp/examples/tutorials/ex2.c: ./ex2 -pc_type ilu -pc_factor_use_drop_tolerance 0.01,0.0,2 i.e., run ilu with the option -pc_factor_use_drop_tolerance . Currently, it only supports sequential aij format. dtcol is not implemented. Send us bug report and let us know your need. Hong On Thu, 7 May 2009, Rui Wang wrote: > Thanks a lot. I appreciate your help. Have a nice evening. > > Rui > > ---- Original message ---- >> Date: Thu, 7 May 2009 20:25:25 -0500 >> From: Barry Smith >> Subject: Re: ILUDropTolerance is not compatible with Mat re-ordering?? >> To: ruiwang2 at illinois.edu, PETSc users list >> >> >> >> PETSc has currently very little support for drop tolerance ILU. >> ILU is the bane of any decent mathematicians existence. >> >> There isn't support for a reordering. >> >> Hong is actually working on much better, more general and more >> complete ILUdt that should be ready in a couple of >> months. (Yes, Hong has gone over to the dark side of mathematics :-)). >> >> Barry >> >> On May 7, 2009, at 8:19 PM, Rui Wang wrote: >> >>> Dear All, >>> >>> I am trying very hard to use PCILUSetUseDropTolerance() >>> together with PCILUSetMatOrdering(), but it does not work, no matter >>> what kind of MatOrderingType (RCM, ND...) i choose. The message I got: >>> >>> ------------------------------------------------------------- >>> Note: The EXACT line numbers in the stack are not available, >>> INSTEAD the line number of the start of the function >>> is given. >>> [0] MatILUDTFactor_SeqAIJ line 66 src/mat/impls/aij/seq/aijfact.c >>> [0] MatILUDTFactor line 1362 src/mat/interface/matrix.c >>> [0] PCSetUp_ILU line 568 src/sles/pc/impls/ilu/ilu.c >>> [0] PCSetUp line 756 src/sles/pc/interface/precon.c >>> [0] SLESSolve line 466 src/sles/interface/sles.c >>> -------------------------------------------- >>> [0]PETSC ERROR: unknownfunction() line 0 in unknown file >>> [0] MPI Abort by user Aborting program ! >>> [0] Aborting program! >>> p0_32609: p4_error: : 59 >>> ---------------------------------------------------------- >>> >>> I spent a lot of time on this and still cannot figure it out. >>> >>> Actually if I just use PCILUSetUseDropTolerance() itself, it works >>> fine. Also, if I choose other ILU methods such as level-based ILU, >>> PCILUSetMatOrdering() works perfectly with them. 
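[For reference, the drop-tolerance parameters Hong shows on the command line can also be set from code. A sketch against the petsc-dev/3.0-style names; the routine has been renamed more than once (it was PCILUSetUseDropTolerance() in the 2.1.x release mentioned above), so check the man page for your version.]

    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCILU);
    /* dt = 0.01, dtcol = 0.0 (column pivot tolerance, not implemented yet),
       row-count limit 2 -- matching the 0.01,0.0,2 triple in Hong's example */
    PCFactorSetDropTolerance(pc, 0.01, 0.0, 2);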
>>> >>> I wonder how to use ILUDropTolerance together with the reordering >>> technique (such as RCM)? or they are not compatible? >>> BTW, my PETSc version is 2.1.0. Is this because i am using this old >>> version? >>> >>> Thanks a lot. >>> >>> Sincerely, >>> Rui Wang >>> >>> >>> >>> >>> -------------------------------------------------- >>> Rui Wang >>> Ph.D. Candidate >>> Research Assistant and Predoctoral Fellow >>> Department of Electrical and Computer Engineering >>> University of Illinois at Urbana-Champaign >> > From ruiwang2 at illinois.edu Thu May 7 21:45:38 2009 From: ruiwang2 at illinois.edu (Rui Wang) Date: Thu, 7 May 2009 21:45:38 -0500 (CDT) Subject: ILUDropTolerance is not compatible with Mat re-ordering?? In-Reply-To: References: <20090507201933.BNW22896@expms3.cites.uiuc.edu> <8DCAE1CD-6BC8-4601-B659-D508D23FDE1A@mcs.anl.gov> <20090507212654.BNW28776@expms3.cites.uiuc.edu> Message-ID: <20090507214538.BNW30127@expms3.cites.uiuc.edu> Dr. Zhang, Thanks. I definitely will try. Best regards, Rui ---- Original message ---- >Date: Thu, 7 May 2009 21:34:19 -0500 (CDT) >From: Hong Zhang >Subject: Re: ILUDropTolerance is not compatible with Mat re-ordering?? >To: ruiwang2 at illinois.edu, PETSc users list > > >Rui, > >I'm currently working on iludt. >You can try it from petsc-dev. >See http://www.mcs.anl.gov/petsc/petsc-as/developers/index.html >on how to get petsc-dev. > >Example: petsc-dev/src/ksp/ksp/examples/tutorials/ex2.c: > >./ex2 -pc_type ilu -pc_factor_use_drop_tolerance 0.01,0.0,2 > >i.e., run ilu with the option >-pc_factor_use_drop_tolerance . > >Currently, it only supports sequential aij format. >dtcol is not implemented. > >Send us bug report and >let us know your need. > >Hong > >On Thu, 7 May 2009, Rui Wang wrote: > >> Thanks a lot. I appreciate your help. Have a nice evening. >> >> Rui >> >> ---- Original message ---- >>> Date: Thu, 7 May 2009 20:25:25 -0500 >>> From: Barry Smith >>> Subject: Re: ILUDropTolerance is not compatible with Mat re-ordering?? >>> To: ruiwang2 at illinois.edu, PETSc users list >>> >>> >>> >>> PETSc has currently very little support for drop tolerance ILU. >>> ILU is the bane of any decent mathematicians existence. >>> >>> There isn't support for a reordering. >>> >>> Hong is actually working on much better, more general and more >>> complete ILUdt that should be ready in a couple of >>> months. (Yes, Hong has gone over to the dark side of mathematics :-)). >>> >>> Barry >>> >>> On May 7, 2009, at 8:19 PM, Rui Wang wrote: >>> >>>> Dear All, >>>> >>>> I am trying very hard to use PCILUSetUseDropTolerance() >>>> together with PCILUSetMatOrdering(), but it does not work, no matter >>>> what kind of MatOrderingType (RCM, ND...) i choose. The message I got: >>>> >>>> ------------------------------------------------------------- >>>> Note: The EXACT line numbers in the stack are not available, >>>> INSTEAD the line number of the start of the function >>>> is given. >>>> [0] MatILUDTFactor_SeqAIJ line 66 src/mat/impls/aij/seq/aijfact.c >>>> [0] MatILUDTFactor line 1362 src/mat/interface/matrix.c >>>> [0] PCSetUp_ILU line 568 src/sles/pc/impls/ilu/ilu.c >>>> [0] PCSetUp line 756 src/sles/pc/interface/precon.c >>>> [0] SLESSolve line 466 src/sles/interface/sles.c >>>> -------------------------------------------- >>>> [0]PETSC ERROR: unknownfunction() line 0 in unknown file >>>> [0] MPI Abort by user Aborting program ! >>>> [0] Aborting program! 
>>>> p0_32609: p4_error: : 59 >>>> ---------------------------------------------------------- >>>> >>>> I spent a lot of time on this and still cannot figure it out. >>>> >>>> Actually if I just use PCILUSetUseDropTolerance() itself, it works >>>> fine. Also, if I choose other ILU methods such as level-based ILU, >>>> PCILUSetMatOrdering() works perfectly with them. >>>> >>>> I wonder how to use ILUDropTolerance together with the reordering >>>> technique (such as RCM)? or they are not compatible? >>>> BTW, my PETSc version is 2.1.0. Is this because i am using this old >>>> version? >>>> >>>> Thanks a lot. >>>> >>>> Sincerely, >>>> Rui Wang >>>> >>>> >>>> >>>> >>>> -------------------------------------------------- >>>> Rui Wang >>>> Ph.D. Candidate >>>> Research Assistant and Predoctoral Fellow >>>> Department of Electrical and Computer Engineering >>>> University of Illinois at Urbana-Champaign >>> >> -------------------------------------------------- Rui Wang Ph.D. Candidate Research Assistant and Predoctoral Fellow Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign From fredrik.bengzon at math.umu.se Fri May 8 09:53:55 2009 From: fredrik.bengzon at math.umu.se (Fredrik Bengzon) Date: Fri, 08 May 2009 16:53:55 +0200 Subject: superlu_dist options Message-ID: <4A044783.9090702@math.umu.se> Hi Petsc team, Sorry for posting questions not really concerning the petsc core, but when I run superlu_dist from within slepc I notice that the load balance is poor. It is just fine during assembly (I use Metis to partition my finite element mesh) but when calling the slepc solver it dramatically changes. I use superlu_dist as solver for the eigenvalue iteration. My question is: can this have something to do with the fact that the option 'Parallel symbolic factorization' is set to false? If so, can I change the options to superlu_dist using MatSetOption for instance? Also, does this mean that superlu_dist is not using parmetis to reorder the matrix? Best Regards, Fredrik Bengzon From hzhang at mcs.anl.gov Fri May 8 10:14:27 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Fri, 8 May 2009 10:14:27 -0500 (CDT) Subject: superlu_dist options In-Reply-To: <4A044783.9090702@math.umu.se> References: <4A044783.9090702@math.umu.se> Message-ID: Run your code with '-eps_view -ksp_view' for checking which methods are used and '-log_summary' to see which operations dominate the computation. You can turn on parallel symbolic factorization with '-mat_superlu_dist_parsymbfact'. Unless you use large num of processors, symbolic factorization takes ignorable execution time. The numeric factorization usually dominates. Hong On Fri, 8 May 2009, Fredrik Bengzon wrote: > Hi Petsc team, > Sorry for posting questions not really concerning the petsc core, but when I > run superlu_dist from within slepc I notice that the load balance is poor. It > is just fine during assembly (I use Metis to partition my finite element > mesh) but when calling the slepc solver it dramatically changes. I use > superlu_dist as solver for the eigenvalue iteration. My question is: can this > have something to do with the fact that the option 'Parallel symbolic > factorization' is set to false? If so, can I change the options to > superlu_dist using MatSetOption for instance? Also, does this mean that > superlu_dist is not using parmetis to reorder the matrix? 
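[Putting Hong's suggestions together on the command line: the executable name below is just a placeholder, and the st_ prefix is what SLEPc attaches to the spectral-transform KSP, as the -ksp_view output further down in this thread shows.]

    mpirun -np 4 ./app -st_ksp_type preonly -st_pc_type lu \
        -st_pc_factor_mat_solver_package superlu_dist \
        -mat_superlu_dist_parsymbfact \
        -eps_view -ksp_view -log_summary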
> Best Regards, > Fredrik Bengzon > > From fredrik.bengzon at math.umu.se Fri May 8 10:39:07 2009 From: fredrik.bengzon at math.umu.se (Fredrik Bengzon) Date: Fri, 08 May 2009 17:39:07 +0200 Subject: superlu_dist options In-Reply-To: References: <4A044783.9090702@math.umu.se> Message-ID: <4A04521B.5000905@math.umu.se> Hong, Thank you for the suggestions, but I have looked at the EPS and KSP objects and I can not find anything wrong. The problem is that it takes longer to solve with 4 cpus than with 2 so the scalability seems to be absent when using superlu_dist. I have stored my mass and stiffness matrix in the mpiaij format and just passed them on to slepc. When using the petsc iterative krylov solvers i see 100% workload on all processors but when i switch to superlu_dist only two cpus seem to do the whole work of LU factoring. I don't want to use the krylov solver though since it might cause slepc not to converge. Regards, Fredrik Hong Zhang wrote: > > Run your code with '-eps_view -ksp_view' for checking > which methods are used > and '-log_summary' to see which operations dominate > the computation. > > You can turn on parallel symbolic factorization > with '-mat_superlu_dist_parsymbfact'. > > Unless you use large num of processors, symbolic factorization > takes ignorable execution time. The numeric > factorization usually dominates. > > Hong > > On Fri, 8 May 2009, Fredrik Bengzon wrote: > >> Hi Petsc team, >> Sorry for posting questions not really concerning the petsc core, but >> when I run superlu_dist from within slepc I notice that the load >> balance is poor. It is just fine during assembly (I use Metis to >> partition my finite element mesh) but when calling the slepc solver >> it dramatically changes. I use superlu_dist as solver for the >> eigenvalue iteration. My question is: can this have something to do >> with the fact that the option 'Parallel symbolic factorization' is >> set to false? If so, can I change the options to superlu_dist using >> MatSetOption for instance? Also, does this mean that superlu_dist is >> not using parmetis to reorder the matrix? >> Best Regards, >> Fredrik Bengzon >> >> > From knepley at gmail.com Fri May 8 10:41:56 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 8 May 2009 10:41:56 -0500 Subject: superlu_dist options In-Reply-To: <4A04521B.5000905@math.umu.se> References: <4A044783.9090702@math.umu.se> <4A04521B.5000905@math.umu.se> Message-ID: Send all the output of view and -log_summary. Matt On Fri, May 8, 2009 at 10:39 AM, Fredrik Bengzon < fredrik.bengzon at math.umu.se> wrote: > Hong, > Thank you for the suggestions, but I have looked at the EPS and KSP objects > and I can not find anything wrong. The problem is that it takes longer to > solve with 4 cpus than with 2 so the scalability seems to be absent when > using superlu_dist. I have stored my mass and stiffness matrix in the mpiaij > format and just passed them on to slepc. When using the petsc iterative > krylov solvers i see 100% workload on all processors but when i switch to > superlu_dist only two cpus seem to do the whole work of LU factoring. I > don't want to use the krylov solver though since it might cause slepc not to > converge. > Regards, > Fredrik > > Hong Zhang wrote: > >> >> Run your code with '-eps_view -ksp_view' for checking >> which methods are used >> and '-log_summary' to see which operations dominate >> the computation. >> >> You can turn on parallel symbolic factorization >> with '-mat_superlu_dist_parsymbfact'. 
>> >> Unless you use large num of processors, symbolic factorization >> takes ignorable execution time. The numeric >> factorization usually dominates. >> >> Hong >> >> On Fri, 8 May 2009, Fredrik Bengzon wrote: >> >> Hi Petsc team, >>> Sorry for posting questions not really concerning the petsc core, but >>> when I run superlu_dist from within slepc I notice that the load balance is >>> poor. It is just fine during assembly (I use Metis to partition my finite >>> element mesh) but when calling the slepc solver it dramatically changes. I >>> use superlu_dist as solver for the eigenvalue iteration. My question is: can >>> this have something to do with the fact that the option 'Parallel symbolic >>> factorization' is set to false? If so, can I change the options to >>> superlu_dist using MatSetOption for instance? Also, does this mean that >>> superlu_dist is not using parmetis to reorder the matrix? >>> Best Regards, >>> Fredrik Bengzon >>> >>> >>> >> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri May 8 10:44:17 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 8 May 2009 10:44:17 -0500 (CDT) Subject: superlu_dist options In-Reply-To: <4A04521B.5000905@math.umu.se> References: <4A044783.9090702@math.umu.se> <4A04521B.5000905@math.umu.se> Message-ID: Just a note about scalability: its a function of the hardware as well.. For proper scalability studies - you'll need a true distributed system with fast network [not SMP nodes..] Satish On Fri, 8 May 2009, Fredrik Bengzon wrote: > Hong, > Thank you for the suggestions, but I have looked at the EPS and KSP objects > and I can not find anything wrong. The problem is that it takes longer to > solve with 4 cpus than with 2 so the scalability seems to be absent when using > superlu_dist. I have stored my mass and stiffness matrix in the mpiaij format > and just passed them on to slepc. When using the petsc iterative krylov > solvers i see 100% workload on all processors but when i switch to > superlu_dist only two cpus seem to do the whole work of LU factoring. I don't > want to use the krylov solver though since it might cause slepc not to > converge. > Regards, > Fredrik > > Hong Zhang wrote: > > > > Run your code with '-eps_view -ksp_view' for checking > > which methods are used > > and '-log_summary' to see which operations dominate > > the computation. > > > > You can turn on parallel symbolic factorization > > with '-mat_superlu_dist_parsymbfact'. > > > > Unless you use large num of processors, symbolic factorization > > takes ignorable execution time. The numeric > > factorization usually dominates. > > > > Hong > > > > On Fri, 8 May 2009, Fredrik Bengzon wrote: > > > > > Hi Petsc team, > > > Sorry for posting questions not really concerning the petsc core, but when > > > I run superlu_dist from within slepc I notice that the load balance is > > > poor. It is just fine during assembly (I use Metis to partition my finite > > > element mesh) but when calling the slepc solver it dramatically changes. I > > > use superlu_dist as solver for the eigenvalue iteration. My question is: > > > can this have something to do with the fact that the option 'Parallel > > > symbolic factorization' is set to false? If so, can I change the options > > > to superlu_dist using MatSetOption for instance? 
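[On the MatSetOption() part of the question: as far as I know the SuperLU_DIST parameters are exposed only through the options database, not through MatSetOption(). From code they can be pushed into the options database before the solver is set up, e.g. (a sketch):]

    /* equivalent to passing -mat_superlu_dist_parsymbfact on the command line */
    PetscOptionsSetValue("-mat_superlu_dist_parsymbfact", PETSC_NULL);
    /* other -mat_superlu_dist_* switches listed by -help can be set the same way */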
Also, does this mean that > > > superlu_dist is not using parmetis to reorder the matrix? > > > Best Regards, > > > Fredrik Bengzon > > > > > > > > > > From fredrik.bengzon at math.umu.se Fri May 8 10:59:55 2009 From: fredrik.bengzon at math.umu.se (Fredrik Bengzon) Date: Fri, 08 May 2009 17:59:55 +0200 Subject: superlu_dist options In-Reply-To: References: <4A044783.9090702@math.umu.se> <4A04521B.5000905@math.umu.se> Message-ID: <4A0456FB.3070705@math.umu.se> Hi, Here is the output from the KSP and EPS objects, and the log summary. / Fredrik Reading Triangle/Tetgen mesh #nodes=19345 #elements=81895 #nodes per element=4 Partitioning mesh with METIS 4.0 Element distribution (rank | #elements) 0 | 19771 1 | 20954 2 | 20611 3 | 20559 rank 1 has 257 ghost nodes rank 0 has 127 ghost nodes rank 2 has 143 ghost nodes rank 3 has 270 ghost nodes Calling 3D Navier-Lame Eigenvalue Solver Assembling stiffness and mass matrix Solving eigensystem with SLEPc KSP Object:(st_) type: preonly maximum iterations=100000, initial guess is zero tolerances: relative=1e-08, absolute=1e-50, divergence=10000 left preconditioning PC Object:(st_) type: lu LU: out-of-place factorization matrix ordering: natural LU: tolerance for zero pivot 1e-12 EPS Object: problem type: generalized symmetric eigenvalue problem method: krylovschur extraction type: Rayleigh-Ritz selected portion of the spectrum: largest eigenvalues in magnitude number of eigenvalues (nev): 4 number of column vectors (ncv): 19 maximum dimension of projected problem (mpd): 19 maximum number of iterations: 6108 tolerance: 1e-05 dimension of user-provided deflation space: 0 IP Object: orthogonalization method: classical Gram-Schmidt orthogonalization refinement: if needed (eta: 0.707100) ST Object: type: sinvert shift: 0 Matrices A and B have same nonzero pattern Associated KSP object ------------------------------ KSP Object:(st_) type: preonly maximum iterations=100000, initial guess is zero tolerances: relative=1e-08, absolute=1e-50, divergence=10000 left preconditioning PC Object:(st_) type: lu LU: out-of-place factorization matrix ordering: natural LU: tolerance for zero pivot 1e-12 LU: factor fill ratio needed 0 Factored matrix follows Matrix Object: type=mpiaij, rows=58035, cols=58035 package used to perform factorization: superlu_dist total: nonzeros=0, allocated nonzeros=116070 SuperLU_DIST run parameters: Process grid nprow 2 x npcol 2 Equilibrate matrix TRUE Matrix input mode 1 Replace tiny pivots TRUE Use iterative refinement FALSE Processors in row 2 col partition 2 Row permutation LargeDiag Column permutation PARMETIS Parallel symbolic factorization TRUE Repeated factorization SamePattern linear system matrix = precond matrix: Matrix Object: type=mpiaij, rows=58035, cols=58035 total: nonzeros=2223621, allocated nonzeros=2233584 using I-node (on process 0) routines: found 4695 nodes, limit used is 5 ------------------------------ Number of iterations in the eigensolver: 1 Number of requested eigenvalues: 4 Stopping condition: tol=1e-05, maxit=6108 Number of converged eigenpairs: 8 Writing binary .vtu file /scratch/fredrik/output/mode-0.vtu Writing binary .vtu file /scratch/fredrik/output/mode-1.vtu Writing binary .vtu file /scratch/fredrik/output/mode-2.vtu Writing binary .vtu file /scratch/fredrik/output/mode-3.vtu Writing binary .vtu file /scratch/fredrik/output/mode-4.vtu Writing binary .vtu file /scratch/fredrik/output/mode-5.vtu Writing binary .vtu file /scratch/fredrik/output/mode-6.vtu Writing binary .vtu file 
/scratch/fredrik/output/mode-7.vtu ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- /home/fredrik/Hakan/cmlfet/a.out on a linux-gnu named medusa1 with 4 processors, by fredrik Fri May 8 17:57:28 2009 Using Petsc Release Version 3.0.0, Patch 5, Mon Apr 13 09:15:37 CDT 2009 Max Max/Min Avg Total Time (sec): 5.429e+02 1.00001 5.429e+02 Objects: 1.380e+02 1.00000 1.380e+02 Flops: 1.053e+08 1.05695 1.028e+08 4.114e+08 Flops/sec: 1.939e+05 1.05696 1.894e+05 7.577e+05 Memory: 5.927e+07 1.03224 2.339e+08 MPI Messages: 2.880e+02 1.51579 2.535e+02 1.014e+03 MPI Message Lengths: 4.868e+07 1.08170 1.827e+05 1.853e+08 MPI Reductions: 1.122e+02 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 5.4292e+02 100.0% 4.1136e+08 100.0% 1.014e+03 100.0% 1.827e+05 100.0% 3.600e+02 80.2% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). %T - percent time in this phase %F - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ ########################################################## # # # WARNING!!! # # # # This code was compiled with a debugging option, # # To get timing results run config/configure.py # # using --with-debugging=no, the performance will # # be generally two or three times faster. 
# # # ########################################################## Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage STSetUp 1 1.0 1.0467e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 2 0 0 0 2 2 0 0 0 2 0 STApply 28 1.0 5.1775e+02 1.0 3.15e+07 1.1 1.7e+02 4.2e+03 2.8e+01 95 30 17 0 6 95 30 17 0 8 0 EPSSetUp 1 1.0 1.0482e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.6e+01 2 0 0 0 10 2 0 0 0 13 0 EPSSolve 1 1.0 3.7193e+02 1.0 9.59e+07 1.1 3.5e+02 4.2e+03 9.7e+01 69 91 35 1 22 69 91 35 1 27 1 IPOrthogonalize 19 1.0 3.4406e-01 1.1 6.75e+07 1.1 2.3e+02 4.2e+03 7.6e+01 0 64 22 1 17 0 64 22 1 21 767 IPInnerProduct 153 1.0 3.1410e-01 1.0 5.63e+07 1.1 2.3e+02 4.2e+03 3.9e+01 0 53 23 1 9 0 53 23 1 11 700 IPApplyMatrix 39 1.0 2.4903e-01 1.1 4.38e+07 1.1 2.3e+02 4.2e+03 0.0e+00 0 42 23 1 0 0 42 23 1 0 687 UpdateVectors 1 1.0 4.2958e-03 1.2 4.51e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 4 0 0 0 0 4 0 0 0 4107 VecDot 1 1.0 5.6815e-04 4.7 2.97e+04 1.1 0.0e+00 0.0e+00 1.0e+00 0 0 0 0 0 0 0 0 0 0 204 VecNorm 8 1.0 2.5260e-03 3.2 2.38e+05 1.1 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 2 0 0 0 0 2 368 VecScale 27 1.0 5.9605e-04 1.1 4.01e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2629 VecCopy 53 1.0 4.0610e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 77 1.0 6.2165e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 38 1.0 2.7709e-03 1.7 1.13e+06 1.1 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 1592 VecMAXPY 38 1.0 2.5925e-02 1.1 1.13e+07 1.1 0.0e+00 0.0e+00 0.0e+00 0 11 0 0 0 0 11 0 0 0 1701 VecAssemblyBegin 5 1.0 9.0070e-03 2.3 0.00e+00 0.0 3.6e+01 2.1e+04 1.5e+01 0 0 4 0 3 0 0 4 0 4 0 VecAssemblyEnd 5 1.0 3.4809e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecScatterBegin 73 1.0 8.5931e-03 1.5 0.00e+00 0.0 4.6e+02 8.9e+03 0.0e+00 0 0 45 2 0 0 0 45 2 0 0 VecScatterEnd 73 1.0 2.2542e-02 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecReduceArith 76 1.0 3.0838e-02 1.1 1.24e+07 1.1 0.0e+00 0.0e+00 0.0e+00 0 12 0 0 0 0 12 0 0 0 1573 VecReduceComm 38 1.0 4.8040e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.8e+01 0 0 0 0 8 0 0 0 0 11 0 VecNormalize 8 1.0 2.7280e-03 2.8 3.56e+05 1.1 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 2 0 0 0 0 2 511 MatMult 67 1.0 4.1397e-01 1.1 7.53e+07 1.1 4.0e+02 4.2e+03 0.0e+00 0 71 40 1 0 0 71 40 1 0 710 MatSolve 28 1.0 5.1757e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 95 0 0 0 0 95 0 0 0 0 0 MatLUFactorSym 1 1.0 3.6097e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatLUFactorNum 1 1.0 1.0464e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 MatAssemblyBegin 9 1.0 3.3842e-0146.7 0.00e+00 0.0 5.4e+01 6.0e+04 8.0e+00 0 0 5 2 2 0 0 5 2 2 0 MatAssemblyEnd 9 1.0 2.3042e-01 1.0 0.00e+00 0.0 3.6e+01 9.4e+02 3.1e+01 0 0 4 0 7 0 0 4 0 9 0 MatGetRow 5206 1.1 3.1164e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetSubMatrice 5 1.0 8.7580e-01 1.2 0.00e+00 0.0 1.5e+02 1.1e+06 2.5e+01 0 0 15 88 6 0 0 15 88 7 0 MatZeroEntries 2 1.0 1.0233e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatView 2 1.0 1.0149e-03 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 1 0 KSPSetup 1 1.0 2.8610e-06 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 28 1.0 5.1758e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 
2.8e+01 95 0 0 0 6 95 0 0 0 8 0 PCSetUp 1 1.0 1.0467e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 2 0 0 0 2 2 0 0 0 2 0 PCApply 28 1.0 5.1757e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 95 0 0 0 0 95 0 0 0 0 0 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. --- Event Stage 0: Main Stage Spectral Transform 1 1 536 0 Eigenproblem Solver 1 1 824 0 Inner product 1 1 428 0 Index Set 38 38 1796776 0 IS L to G Mapping 1 1 58700 0 Vec 65 65 5458584 0 Vec Scatter 9 9 7092 0 Application Order 1 1 155232 0 Matrix 17 16 17715680 0 Krylov Solver 1 1 832 0 Preconditioner 1 1 744 0 Viewer 2 2 1088 0 ======================================================================================================================== Average time to get PetscTime(): 1.90735e-07 Average time for MPI_Barrier(): 5.9557e-05 Average time for zero size MPI_Send(): 2.97427e-05 #PETSc Option Table entries: -log_summary -mat_superlu_dist_parsymbfact #End o PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 Configure run at: Wed May 6 15:14:39 2009 Configure options: --download-superlu_dist=1 --download-parmetis=1 --with-mpi-dir=/usr/lib/mpich --with-shared=0 ----------------------------------------- Libraries compiled on Wed May 6 15:14:49 CEST 2009 on medusa1 Machine characteristics: Linux medusa1 2.6.18-6-amd64 #1 SMP Fri Dec 12 05:49:32 UTC 2008 x86_64 GNU/Linux Using PETSc directory: /home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5 Using PETSc arch: linux-gnu-c-debug ----------------------------------------- Using C compiler: /usr/lib/mpich/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -g3 Using Fortran compiler: /usr/lib/mpich/bin/mpif77 -Wall -Wno-unused-variable -g ----------------------------------------- Using include paths: -I/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/include -I/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/include -I/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/include -I/usr/lib/mpich/include ------------------------------------------ Using C linker: /usr/lib/mpich/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -g3 Using Fortran linker: /usr/lib/mpich/bin/mpif77 -Wall -Wno-unused-variable -g Using libraries: -Wl,-rpath,/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib -L/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc -lX11 -Wl,-rpath,/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib -L/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib -lsuperlu_dist_2.3 -llapack -lblas -lparmetis -lmetis -lm -L/usr/lib/mpich/lib -L/usr/lib/gcc/x86_64-linux-gnu/4.1.2 -L/usr/lib64 -L/lib64 -ldl -lmpich -lpthread -lrt -lgcc_s -lg2c -lm -L/usr/lib/gcc/x86_64-linux-gnu/3.4.6 -L/lib -lm -ldl -lmpich -lpthread -lrt -lgcc_s -ldl ------------------------------------------ real 9m10.616s user 0m23.921s sys 0m6.944s Satish Balay wrote: > Just a note about scalability: its a function of the hardware as > well.. For proper scalability studies - you'll need a true distributed > system with fast network [not SMP nodes..] 
> > Satish > > On Fri, 8 May 2009, Fredrik Bengzon wrote: > > >> Hong, >> Thank you for the suggestions, but I have looked at the EPS and KSP objects >> and I can not find anything wrong. The problem is that it takes longer to >> solve with 4 cpus than with 2 so the scalability seems to be absent when using >> superlu_dist. I have stored my mass and stiffness matrix in the mpiaij format >> and just passed them on to slepc. When using the petsc iterative krylov >> solvers i see 100% workload on all processors but when i switch to >> superlu_dist only two cpus seem to do the whole work of LU factoring. I don't >> want to use the krylov solver though since it might cause slepc not to >> converge. >> Regards, >> Fredrik >> >> Hong Zhang wrote: >> >>> Run your code with '-eps_view -ksp_view' for checking >>> which methods are used >>> and '-log_summary' to see which operations dominate >>> the computation. >>> >>> You can turn on parallel symbolic factorization >>> with '-mat_superlu_dist_parsymbfact'. >>> >>> Unless you use large num of processors, symbolic factorization >>> takes ignorable execution time. The numeric >>> factorization usually dominates. >>> >>> Hong >>> >>> On Fri, 8 May 2009, Fredrik Bengzon wrote: >>> >>> >>>> Hi Petsc team, >>>> Sorry for posting questions not really concerning the petsc core, but when >>>> I run superlu_dist from within slepc I notice that the load balance is >>>> poor. It is just fine during assembly (I use Metis to partition my finite >>>> element mesh) but when calling the slepc solver it dramatically changes. I >>>> use superlu_dist as solver for the eigenvalue iteration. My question is: >>>> can this have something to do with the fact that the option 'Parallel >>>> symbolic factorization' is set to false? If so, can I change the options >>>> to superlu_dist using MatSetOption for instance? Also, does this mean that >>>> superlu_dist is not using parmetis to reorder the matrix? >>>> Best Regards, >>>> Fredrik Bengzon >>>> >>>> >>>> >> > > > From knepley at gmail.com Fri May 8 11:03:53 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 8 May 2009 11:03:53 -0500 Subject: superlu_dist options In-Reply-To: <4A0456FB.3070705@math.umu.se> References: <4A044783.9090702@math.umu.se> <4A04521B.5000905@math.umu.se> <4A0456FB.3070705@math.umu.se> Message-ID: Look at the timing. The symbolic factorization takes 1e-4 seconds and the numeric takes only 10s, out of 542s. MatSolve is taking 517s. If you have a problem, it is likely there. However, the MatSolve looks balanced. Matt On Fri, May 8, 2009 at 10:59 AM, Fredrik Bengzon < fredrik.bengzon at math.umu.se> wrote: > Hi, > Here is the output from the KSP and EPS objects, and the log summary. 
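[For scale, from the log quoted below: MatSolve takes 5.1757e+02 s over 28 calls, i.e. roughly 18.5 s per triangular solve, while the single numeric factorization (MatLUFactorNum) takes 1.0464e+01 s. Each solve is therefore costing more than the whole factorization, which is why the solve time stands out in Matt's reading of the log.]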
> / Fredrik > > > Reading Triangle/Tetgen mesh > #nodes=19345 > #elements=81895 > #nodes per element=4 > Partitioning mesh with METIS 4.0 > Element distribution (rank | #elements) > 0 | 19771 > 1 | 20954 > 2 | 20611 > 3 | 20559 > rank 1 has 257 ghost nodes > rank 0 has 127 ghost nodes > rank 2 has 143 ghost nodes > rank 3 has 270 ghost nodes > Calling 3D Navier-Lame Eigenvalue Solver > Assembling stiffness and mass matrix > Solving eigensystem with SLEPc > KSP Object:(st_) > type: preonly > maximum iterations=100000, initial guess is zero > tolerances: relative=1e-08, absolute=1e-50, divergence=10000 > left preconditioning > PC Object:(st_) > type: lu > LU: out-of-place factorization > matrix ordering: natural > LU: tolerance for zero pivot 1e-12 > EPS Object: > problem type: generalized symmetric eigenvalue problem > method: krylovschur > extraction type: Rayleigh-Ritz > selected portion of the spectrum: largest eigenvalues in magnitude > number of eigenvalues (nev): 4 > number of column vectors (ncv): 19 > maximum dimension of projected problem (mpd): 19 > maximum number of iterations: 6108 > tolerance: 1e-05 > dimension of user-provided deflation space: 0 > IP Object: > orthogonalization method: classical Gram-Schmidt > orthogonalization refinement: if needed (eta: 0.707100) > ST Object: > type: sinvert > shift: 0 > Matrices A and B have same nonzero pattern > Associated KSP object > ------------------------------ > KSP Object:(st_) > type: preonly > maximum iterations=100000, initial guess is zero > tolerances: relative=1e-08, absolute=1e-50, divergence=10000 > left preconditioning > PC Object:(st_) > type: lu > LU: out-of-place factorization > matrix ordering: natural > LU: tolerance for zero pivot 1e-12 > LU: factor fill ratio needed 0 > Factored matrix follows > Matrix Object: > type=mpiaij, rows=58035, cols=58035 > package used to perform factorization: superlu_dist > total: nonzeros=0, allocated nonzeros=116070 > SuperLU_DIST run parameters: > Process grid nprow 2 x npcol 2 > Equilibrate matrix TRUE > Matrix input mode 1 > Replace tiny pivots TRUE > Use iterative refinement FALSE > Processors in row 2 col partition 2 > Row permutation LargeDiag > Column permutation PARMETIS > Parallel symbolic factorization TRUE > Repeated factorization SamePattern > linear system matrix = precond matrix: > Matrix Object: > type=mpiaij, rows=58035, cols=58035 > total: nonzeros=2223621, allocated nonzeros=2233584 > using I-node (on process 0) routines: found 4695 nodes, limit > used is 5 > ------------------------------ > Number of iterations in the eigensolver: 1 > Number of requested eigenvalues: 4 > Stopping condition: tol=1e-05, maxit=6108 > Number of converged eigenpairs: 8 > > Writing binary .vtu file /scratch/fredrik/output/mode-0.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-1.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-2.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-3.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-4.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-5.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-6.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-7.vtu > > ************************************************************************************************************************ > *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript -r > -fCourier9' to print this document *** > > ************************************************************************************************************************ > > ---------------------------------------------- PETSc Performance Summary: > ---------------------------------------------- > > /home/fredrik/Hakan/cmlfet/a.out on a linux-gnu named medusa1 with 4 > processors, by fredrik Fri May 8 17:57:28 2009 > Using Petsc Release Version 3.0.0, Patch 5, Mon Apr 13 09:15:37 CDT 2009 > > Max Max/Min Avg Total > Time (sec): 5.429e+02 1.00001 5.429e+02 > Objects: 1.380e+02 1.00000 1.380e+02 > Flops: 1.053e+08 1.05695 1.028e+08 4.114e+08 > Flops/sec: 1.939e+05 1.05696 1.894e+05 7.577e+05 > Memory: 5.927e+07 1.03224 2.339e+08 > MPI Messages: 2.880e+02 1.51579 2.535e+02 1.014e+03 > MPI Message Lengths: 4.868e+07 1.08170 1.827e+05 1.853e+08 > MPI Reductions: 1.122e+02 1.00000 > > Flop counting convention: 1 flop = 1 real number operation of type > (multiply/divide/add/subtract) > e.g., VecAXPY() for real vectors of length N --> > 2N flops > and VecAXPY() for complex vectors of length N --> > 8N flops > > Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- > -- Message Lengths -- -- Reductions -- > Avg %Total Avg %Total counts %Total > Avg %Total counts %Total > 0: Main Stage: 5.4292e+02 100.0% 4.1136e+08 100.0% 1.014e+03 100.0% > 1.827e+05 100.0% 3.600e+02 80.2% > > > ------------------------------------------------------------------------------------------------------------------------ > See the 'Profiling' chapter of the users' manual for details on > interpreting output. > Phase summary info: > Count: number of times phase was executed > Time and Flops: Max - maximum over all processors > Ratio - ratio of maximum to minimum over all processors > Mess: number of messages sent > Avg. len: average message length > Reduct: number of global reductions > Global: entire computation > Stage: stages of a computation. Set stages with PetscLogStagePush() and > PetscLogStagePop(). > %T - percent time in this phase %F - percent flops in this > phase > %M - percent messages in this phase %L - percent message lengths in > this phase > %R - percent reductions in this phase > Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over > all processors) > > ------------------------------------------------------------------------------------------------------------------------ > > > ########################################################## > # # > # WARNING!!! # > # # > # This code was compiled with a debugging option, # > # To get timing results run config/configure.py # > # using --with-debugging=no, the performance will # > # be generally two or three times faster. 
# > # # > ########################################################## > > > Event Count Time (sec) Flops > --- Global --- --- Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg len > Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > > ------------------------------------------------------------------------------------------------------------------------ > > --- Event Stage 0: Main Stage > > STSetUp 1 1.0 1.0467e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 8.0e+00 2 0 0 0 2 2 0 0 0 2 0 > STApply 28 1.0 5.1775e+02 1.0 3.15e+07 1.1 1.7e+02 4.2e+03 > 2.8e+01 95 30 17 0 6 95 30 17 0 8 0 > EPSSetUp 1 1.0 1.0482e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 4.6e+01 2 0 0 0 10 2 0 0 0 13 0 > EPSSolve 1 1.0 3.7193e+02 1.0 9.59e+07 1.1 3.5e+02 4.2e+03 > 9.7e+01 69 91 35 1 22 69 91 35 1 27 1 > IPOrthogonalize 19 1.0 3.4406e-01 1.1 6.75e+07 1.1 2.3e+02 4.2e+03 > 7.6e+01 0 64 22 1 17 0 64 22 1 21 767 > IPInnerProduct 153 1.0 3.1410e-01 1.0 5.63e+07 1.1 2.3e+02 4.2e+03 > 3.9e+01 0 53 23 1 9 0 53 23 1 11 700 > IPApplyMatrix 39 1.0 2.4903e-01 1.1 4.38e+07 1.1 2.3e+02 4.2e+03 > 0.0e+00 0 42 23 1 0 0 42 23 1 0 687 > UpdateVectors 1 1.0 4.2958e-03 1.2 4.51e+06 1.1 0.0e+00 0.0e+00 > 0.0e+00 0 4 0 0 0 0 4 0 0 0 4107 > VecDot 1 1.0 5.6815e-04 4.7 2.97e+04 1.1 0.0e+00 0.0e+00 > 1.0e+00 0 0 0 0 0 0 0 0 0 0 204 > VecNorm 8 1.0 2.5260e-03 3.2 2.38e+05 1.1 0.0e+00 0.0e+00 > 8.0e+00 0 0 0 0 2 0 0 0 0 2 368 > VecScale 27 1.0 5.9605e-04 1.1 4.01e+05 1.1 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 2629 > VecCopy 53 1.0 4.0610e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecSet 77 1.0 6.2165e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecAXPY 38 1.0 2.7709e-03 1.7 1.13e+06 1.1 0.0e+00 0.0e+00 > 0.0e+00 0 1 0 0 0 0 1 0 0 0 1592 > VecMAXPY 38 1.0 2.5925e-02 1.1 1.13e+07 1.1 0.0e+00 0.0e+00 > 0.0e+00 0 11 0 0 0 0 11 0 0 0 1701 > VecAssemblyBegin 5 1.0 9.0070e-03 2.3 0.00e+00 0.0 3.6e+01 2.1e+04 > 1.5e+01 0 0 4 0 3 0 0 4 0 4 0 > VecAssemblyEnd 5 1.0 3.4809e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecScatterBegin 73 1.0 8.5931e-03 1.5 0.00e+00 0.0 4.6e+02 8.9e+03 > 0.0e+00 0 0 45 2 0 0 0 45 2 0 0 > VecScatterEnd 73 1.0 2.2542e-02 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecReduceArith 76 1.0 3.0838e-02 1.1 1.24e+07 1.1 0.0e+00 0.0e+00 > 0.0e+00 0 12 0 0 0 0 12 0 0 0 1573 > VecReduceComm 38 1.0 4.8040e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 3.8e+01 0 0 0 0 8 0 0 0 0 11 0 > VecNormalize 8 1.0 2.7280e-03 2.8 3.56e+05 1.1 0.0e+00 0.0e+00 > 8.0e+00 0 0 0 0 2 0 0 0 0 2 511 > MatMult 67 1.0 4.1397e-01 1.1 7.53e+07 1.1 4.0e+02 4.2e+03 > 0.0e+00 0 71 40 1 0 0 71 40 1 0 710 > MatSolve 28 1.0 5.1757e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 95 0 0 0 0 95 0 0 0 0 0 > MatLUFactorSym 1 1.0 3.6097e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatLUFactorNum 1 1.0 1.0464e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 > MatAssemblyBegin 9 1.0 3.3842e-0146.7 0.00e+00 0.0 5.4e+01 6.0e+04 > 8.0e+00 0 0 5 2 2 0 0 5 2 2 0 > MatAssemblyEnd 9 1.0 2.3042e-01 1.0 0.00e+00 0.0 3.6e+01 9.4e+02 > 3.1e+01 0 0 4 0 7 0 0 4 0 9 0 > MatGetRow 5206 1.1 3.1164e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatGetSubMatrice 5 1.0 8.7580e-01 1.2 0.00e+00 0.0 1.5e+02 1.1e+06 > 2.5e+01 0 0 15 88 6 0 0 15 88 7 0 > MatZeroEntries 2 1.0 1.0233e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatView 2 1.0 1.0149e-03 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 2.0e+00 0 0 0 0 0 0 0 0 
0 1 0 > KSPSetup 1 1.0 2.8610e-06 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > KSPSolve 28 1.0 5.1758e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 2.8e+01 95 0 0 0 6 95 0 0 0 8 0 > PCSetUp 1 1.0 1.0467e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 8.0e+00 2 0 0 0 2 2 0 0 0 2 0 > PCApply 28 1.0 5.1757e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 > 0.0e+00 95 0 0 0 0 95 0 0 0 0 0 > > ------------------------------------------------------------------------------------------------------------------------ > > Memory usage is given in bytes: > > Object Type Creations Destructions Memory Descendants' Mem. > > --- Event Stage 0: Main Stage > > Spectral Transform 1 1 536 0 > Eigenproblem Solver 1 1 824 0 > Inner product 1 1 428 0 > Index Set 38 38 1796776 0 > IS L to G Mapping 1 1 58700 0 > Vec 65 65 5458584 0 > Vec Scatter 9 9 7092 0 > Application Order 1 1 155232 0 > Matrix 17 16 17715680 0 > Krylov Solver 1 1 832 0 > Preconditioner 1 1 744 0 > Viewer 2 2 1088 0 > > ======================================================================================================================== > Average time to get PetscTime(): 1.90735e-07 > Average time for MPI_Barrier(): 5.9557e-05 > Average time for zero size MPI_Send(): 2.97427e-05 > #PETSc Option Table entries: > -log_summary > -mat_superlu_dist_parsymbfact > #End o PETSc Option Table entries > Compiled without FORTRAN kernels > Compiled with full precision matrices (default) > sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 > sizeof(PetscScalar) 8 > Configure run at: Wed May 6 15:14:39 2009 > Configure options: --download-superlu_dist=1 --download-parmetis=1 > --with-mpi-dir=/usr/lib/mpich --with-shared=0 > ----------------------------------------- > Libraries compiled on Wed May 6 15:14:49 CEST 2009 on medusa1 > Machine characteristics: Linux medusa1 2.6.18-6-amd64 #1 SMP Fri Dec 12 > 05:49:32 UTC 2008 x86_64 GNU/Linux > Using PETSc directory: /home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5 > Using PETSc arch: linux-gnu-c-debug > ----------------------------------------- > Using C compiler: /usr/lib/mpich/bin/mpicc -Wall -Wwrite-strings > -Wno-strict-aliasing -g3 Using Fortran compiler: /usr/lib/mpich/bin/mpif77 > -Wall -Wno-unused-variable -g ----------------------------------------- > Using include paths: > -I/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/include > -I/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/include > -I/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/include > -I/usr/lib/mpich/include ------------------------------------------ > Using C linker: /usr/lib/mpich/bin/mpicc -Wall -Wwrite-strings > -Wno-strict-aliasing -g3 > Using Fortran linker: /usr/lib/mpich/bin/mpif77 -Wall -Wno-unused-variable > -g Using libraries: > -Wl,-rpath,/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib > -L/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib > -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc > -lX11 > -Wl,-rpath,/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib > -L/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib > -lsuperlu_dist_2.3 -llapack -lblas -lparmetis -lmetis -lm > -L/usr/lib/mpich/lib -L/usr/lib/gcc/x86_64-linux-gnu/4.1.2 -L/usr/lib64 > -L/lib64 -ldl -lmpich -lpthread -lrt -lgcc_s -lg2c -lm > -L/usr/lib/gcc/x86_64-linux-gnu/3.4.6 -L/lib -lm -ldl -lmpich -lpthread -lrt > -lgcc_s -ldl > ------------------------------------------ > > real 9m10.616s > 
user 0m23.921s > sys 0m6.944s > > > > > > > > > > > > > > > > > > > > Satish Balay wrote: > >> Just a note about scalability: its a function of the hardware as >> well.. For proper scalability studies - you'll need a true distributed >> system with fast network [not SMP nodes..] >> >> Satish >> >> On Fri, 8 May 2009, Fredrik Bengzon wrote: >> >> >> >>> Hong, >>> Thank you for the suggestions, but I have looked at the EPS and KSP >>> objects >>> and I can not find anything wrong. The problem is that it takes longer to >>> solve with 4 cpus than with 2 so the scalability seems to be absent when >>> using >>> superlu_dist. I have stored my mass and stiffness matrix in the mpiaij >>> format >>> and just passed them on to slepc. When using the petsc iterative krylov >>> solvers i see 100% workload on all processors but when i switch to >>> superlu_dist only two cpus seem to do the whole work of LU factoring. I >>> don't >>> want to use the krylov solver though since it might cause slepc not to >>> converge. >>> Regards, >>> Fredrik >>> >>> Hong Zhang wrote: >>> >>> >>>> Run your code with '-eps_view -ksp_view' for checking >>>> which methods are used >>>> and '-log_summary' to see which operations dominate >>>> the computation. >>>> >>>> You can turn on parallel symbolic factorization >>>> with '-mat_superlu_dist_parsymbfact'. >>>> >>>> Unless you use large num of processors, symbolic factorization >>>> takes ignorable execution time. The numeric >>>> factorization usually dominates. >>>> >>>> Hong >>>> >>>> On Fri, 8 May 2009, Fredrik Bengzon wrote: >>>> >>>> >>>> >>>>> Hi Petsc team, >>>>> Sorry for posting questions not really concerning the petsc core, but >>>>> when >>>>> I run superlu_dist from within slepc I notice that the load balance is >>>>> poor. It is just fine during assembly (I use Metis to partition my >>>>> finite >>>>> element mesh) but when calling the slepc solver it dramatically >>>>> changes. I >>>>> use superlu_dist as solver for the eigenvalue iteration. My question >>>>> is: >>>>> can this have something to do with the fact that the option 'Parallel >>>>> symbolic factorization' is set to false? If so, can I change the >>>>> options >>>>> to superlu_dist using MatSetOption for instance? Also, does this mean >>>>> that >>>>> superlu_dist is not using parmetis to reorder the matrix? >>>>> Best Regards, >>>>> Fredrik Bengzon >>>>> >>>>> >>>>> >>>>> >>>> >>> >> >> >> >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri May 8 11:15:43 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 8 May 2009 11:15:43 -0500 Subject: superlu_dist options In-Reply-To: References: <4A044783.9090702@math.umu.se> <4A04521B.5000905@math.umu.se> <4A0456FB.3070705@math.umu.se> Message-ID: <070EE08B-E1A8-4567-9779-B7C7617EF94F@mcs.anl.gov> On May 8, 2009, at 11:03 AM, Matthew Knepley wrote: > Look at the timing. The symbolic factorization takes 1e-4 seconds > and the numeric takes > only 10s, out of 542s. MatSolve is taking 517s. If you have a > problem, it is likely there. > However, the MatSolve looks balanced. Something is funky with this. The 28 solves should not be so much more than the numeric factorization. Perhaps it is worth saving the matrix and reporting this as a performance bug to Sherrie. 
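[One way to save the assembled matrix for such a report is PETSc's binary viewer; a minimal sketch, where the file name is arbitrary and A is the assembled MPIAIJ matrix. Note that in the 3.0.x series PetscViewerDestroy() still takes the viewer by value.]

    PetscViewer viewer;
    PetscViewerBinaryOpen(PETSC_COMM_WORLD, "stiffness.bin", FILE_MODE_WRITE, &viewer);
    MatView(A, viewer);          /* writes the matrix in PETSc binary format */
    PetscViewerDestroy(viewer);

[The file can then be reloaded with MatLoad() in a small standalone test, which makes it easy to attach to a bug report.]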
Barry > > > Matt > > On Fri, May 8, 2009 at 10:59 AM, Fredrik Bengzon > wrote: > Hi, > Here is the output from the KSP and EPS objects, and the log summary. > / Fredrik > > > Reading Triangle/Tetgen mesh > #nodes=19345 > #elements=81895 > #nodes per element=4 > Partitioning mesh with METIS 4.0 > Element distribution (rank | #elements) > 0 | 19771 > 1 | 20954 > 2 | 20611 > 3 | 20559 > rank 1 has 257 ghost nodes > rank 0 has 127 ghost nodes > rank 2 has 143 ghost nodes > rank 3 has 270 ghost nodes > Calling 3D Navier-Lame Eigenvalue Solver > Assembling stiffness and mass matrix > Solving eigensystem with SLEPc > KSP Object:(st_) > type: preonly > maximum iterations=100000, initial guess is zero > tolerances: relative=1e-08, absolute=1e-50, divergence=10000 > left preconditioning > PC Object:(st_) > type: lu > LU: out-of-place factorization > matrix ordering: natural > LU: tolerance for zero pivot 1e-12 > EPS Object: > problem type: generalized symmetric eigenvalue problem > method: krylovschur > extraction type: Rayleigh-Ritz > selected portion of the spectrum: largest eigenvalues in magnitude > number of eigenvalues (nev): 4 > number of column vectors (ncv): 19 > maximum dimension of projected problem (mpd): 19 > maximum number of iterations: 6108 > tolerance: 1e-05 > dimension of user-provided deflation space: 0 > IP Object: > orthogonalization method: classical Gram-Schmidt > orthogonalization refinement: if needed (eta: 0.707100) > ST Object: > type: sinvert > shift: 0 > Matrices A and B have same nonzero pattern > Associated KSP object > ------------------------------ > KSP Object:(st_) > type: preonly > maximum iterations=100000, initial guess is zero > tolerances: relative=1e-08, absolute=1e-50, divergence=10000 > left preconditioning > PC Object:(st_) > type: lu > LU: out-of-place factorization > matrix ordering: natural > LU: tolerance for zero pivot 1e-12 > LU: factor fill ratio needed 0 > Factored matrix follows > Matrix Object: > type=mpiaij, rows=58035, cols=58035 > package used to perform factorization: superlu_dist > total: nonzeros=0, allocated nonzeros=116070 > SuperLU_DIST run parameters: > Process grid nprow 2 x npcol 2 > Equilibrate matrix TRUE > Matrix input mode 1 > Replace tiny pivots TRUE > Use iterative refinement FALSE > Processors in row 2 col partition 2 > Row permutation LargeDiag > Column permutation PARMETIS > Parallel symbolic factorization TRUE > Repeated factorization SamePattern > linear system matrix = precond matrix: > Matrix Object: > type=mpiaij, rows=58035, cols=58035 > total: nonzeros=2223621, allocated nonzeros=2233584 > using I-node (on process 0) routines: found 4695 nodes, > limit used is 5 > ------------------------------ > Number of iterations in the eigensolver: 1 > Number of requested eigenvalues: 4 > Stopping condition: tol=1e-05, maxit=6108 > Number of converged eigenpairs: 8 > > Writing binary .vtu file /scratch/fredrik/output/mode-0.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-1.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-2.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-3.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-4.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-5.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-6.vtu > Writing binary .vtu file /scratch/fredrik/output/mode-7.vtu > ************************************************************************************************************************ > *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript - > r -fCourier9' to print this document *** > ************************************************************************************************************************ > > ---------------------------------------------- PETSc Performance > Summary: ---------------------------------------------- > > /home/fredrik/Hakan/cmlfet/a.out on a linux-gnu named medusa1 with 4 > processors, by fredrik Fri May 8 17:57:28 2009 > Using Petsc Release Version 3.0.0, Patch 5, Mon Apr 13 09:15:37 CDT > 2009 > > Max Max/Min Avg Total > Time (sec): 5.429e+02 1.00001 5.429e+02 > Objects: 1.380e+02 1.00000 1.380e+02 > Flops: 1.053e+08 1.05695 1.028e+08 4.114e+08 > Flops/sec: 1.939e+05 1.05696 1.894e+05 7.577e+05 > Memory: 5.927e+07 1.03224 2.339e+08 > MPI Messages: 2.880e+02 1.51579 2.535e+02 1.014e+03 > MPI Message Lengths: 4.868e+07 1.08170 1.827e+05 1.853e+08 > MPI Reductions: 1.122e+02 1.00000 > > Flop counting convention: 1 flop = 1 real number operation of type > (multiply/divide/add/subtract) > e.g., VecAXPY() for real vectors of length > N --> 2N flops > and VecAXPY() for complex vectors of > length N --> 8N flops > > Summary of Stages: ----- Time ------ ----- Flops ----- --- > Messages --- -- Message Lengths -- -- Reductions -- > Avg %Total Avg %Total counts > %Total Avg %Total counts %Total > 0: Main Stage: 5.4292e+02 100.0% 4.1136e+08 100.0% 1.014e+03 > 100.0% 1.827e+05 100.0% 3.600e+02 80.2% > > ------------------------------------------------------------------------------------------------------------------------ > See the 'Profiling' chapter of the users' manual for details on > interpreting output. > Phase summary info: > Count: number of times phase was executed > Time and Flops: Max - maximum over all processors > Ratio - ratio of maximum to minimum over all > processors > Mess: number of messages sent > Avg. len: average message length > Reduct: number of global reductions > Global: entire computation > Stage: stages of a computation. Set stages with PetscLogStagePush() > and PetscLogStagePop(). > %T - percent time in this phase %F - percent flops in > this phase > %M - percent messages in this phase %L - percent message > lengths in this phase > %R - percent reductions in this phase > Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time > over all processors) > ------------------------------------------------------------------------------------------------------------------------ > > > ########################################################## > # # > # WARNING!!! # > # # > # This code was compiled with a debugging option, # > # To get timing results run config/configure.py # > # using --with-debugging=no, the performance will # > # be generally two or three times faster. 
# > # # > ########################################################## > > > Event Count Time (sec) > Flops --- Global --- --- Stage --- > Total > Max Ratio Max Ratio Max Ratio Mess Avg > len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > ------------------------------------------------------------------------------------------------------------------------ > > --- Event Stage 0: Main Stage > > STSetUp 1 1.0 1.0467e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e > +00 8.0e+00 2 0 0 0 2 2 0 0 0 2 0 > STApply 28 1.0 5.1775e+02 1.0 3.15e+07 1.1 1.7e+02 4.2e > +03 2.8e+01 95 30 17 0 6 95 30 17 0 8 0 > EPSSetUp 1 1.0 1.0482e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e > +00 4.6e+01 2 0 0 0 10 2 0 0 0 13 0 > EPSSolve 1 1.0 3.7193e+02 1.0 9.59e+07 1.1 3.5e+02 4.2e > +03 9.7e+01 69 91 35 1 22 69 91 35 1 27 1 > IPOrthogonalize 19 1.0 3.4406e-01 1.1 6.75e+07 1.1 2.3e+02 4.2e > +03 7.6e+01 0 64 22 1 17 0 64 22 1 21 767 > IPInnerProduct 153 1.0 3.1410e-01 1.0 5.63e+07 1.1 2.3e+02 4.2e > +03 3.9e+01 0 53 23 1 9 0 53 23 1 11 700 > IPApplyMatrix 39 1.0 2.4903e-01 1.1 4.38e+07 1.1 2.3e+02 4.2e > +03 0.0e+00 0 42 23 1 0 0 42 23 1 0 687 > UpdateVectors 1 1.0 4.2958e-03 1.2 4.51e+06 1.1 0.0e+00 0.0e > +00 0.0e+00 0 4 0 0 0 0 4 0 0 0 4107 > VecDot 1 1.0 5.6815e-04 4.7 2.97e+04 1.1 0.0e+00 0.0e > +00 1.0e+00 0 0 0 0 0 0 0 0 0 0 204 > VecNorm 8 1.0 2.5260e-03 3.2 2.38e+05 1.1 0.0e+00 0.0e > +00 8.0e+00 0 0 0 0 2 0 0 0 0 2 368 > VecScale 27 1.0 5.9605e-04 1.1 4.01e+05 1.1 0.0e+00 0.0e > +00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2629 > VecCopy 53 1.0 4.0610e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e > +00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecSet 77 1.0 6.2165e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e > +00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecAXPY 38 1.0 2.7709e-03 1.7 1.13e+06 1.1 0.0e+00 0.0e > +00 0.0e+00 0 1 0 0 0 0 1 0 0 0 1592 > VecMAXPY 38 1.0 2.5925e-02 1.1 1.13e+07 1.1 0.0e+00 0.0e > +00 0.0e+00 0 11 0 0 0 0 11 0 0 0 1701 > VecAssemblyBegin 5 1.0 9.0070e-03 2.3 0.00e+00 0.0 3.6e+01 2.1e > +04 1.5e+01 0 0 4 0 3 0 0 4 0 4 0 > VecAssemblyEnd 5 1.0 3.4809e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e > +00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecScatterBegin 73 1.0 8.5931e-03 1.5 0.00e+00 0.0 4.6e+02 8.9e > +03 0.0e+00 0 0 45 2 0 0 0 45 2 0 0 > VecScatterEnd 73 1.0 2.2542e-02 2.2 0.00e+00 0.0 0.0e+00 0.0e > +00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecReduceArith 76 1.0 3.0838e-02 1.1 1.24e+07 1.1 0.0e+00 0.0e > +00 0.0e+00 0 12 0 0 0 0 12 0 0 0 1573 > VecReduceComm 38 1.0 4.8040e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e > +00 3.8e+01 0 0 0 0 8 0 0 0 0 11 0 > VecNormalize 8 1.0 2.7280e-03 2.8 3.56e+05 1.1 0.0e+00 0.0e > +00 8.0e+00 0 0 0 0 2 0 0 0 0 2 511 > MatMult 67 1.0 4.1397e-01 1.1 7.53e+07 1.1 4.0e+02 4.2e > +03 0.0e+00 0 71 40 1 0 0 71 40 1 0 710 > MatSolve 28 1.0 5.1757e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e > +00 0.0e+00 95 0 0 0 0 95 0 0 0 0 0 > MatLUFactorSym 1 1.0 3.6097e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e > +00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatLUFactorNum 1 1.0 1.0464e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e > +00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 > MatAssemblyBegin 9 1.0 3.3842e-0146.7 0.00e+00 0.0 5.4e+01 6.0e > +04 8.0e+00 0 0 5 2 2 0 0 5 2 2 0 > MatAssemblyEnd 9 1.0 2.3042e-01 1.0 0.00e+00 0.0 3.6e+01 9.4e > +02 3.1e+01 0 0 4 0 7 0 0 4 0 9 0 > MatGetRow 5206 1.1 3.1164e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e > +00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatGetSubMatrice 5 1.0 8.7580e-01 1.2 0.00e+00 0.0 1.5e+02 1.1e > +06 2.5e+01 0 0 15 88 6 0 0 15 88 7 0 > MatZeroEntries 2 1.0 1.0233e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e > +00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatView 2 1.0 1.0149e-03 2.0 0.00e+00 0.0 0.0e+00 
0.0e > +00 2.0e+00 0 0 0 0 0 0 0 0 0 1 0 > KSPSetup 1 1.0 2.8610e-06 1.5 0.00e+00 0.0 0.0e+00 0.0e > +00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > KSPSolve 28 1.0 5.1758e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e > +00 2.8e+01 95 0 0 0 6 95 0 0 0 8 0 > PCSetUp 1 1.0 1.0467e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e > +00 8.0e+00 2 0 0 0 2 2 0 0 0 2 0 > PCApply 28 1.0 5.1757e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e > +00 0.0e+00 95 0 0 0 0 95 0 0 0 0 0 > ------------------------------------------------------------------------------------------------------------------------ > > Memory usage is given in bytes: > > Object Type Creations Destructions Memory Descendants' > Mem. > > --- Event Stage 0: Main Stage > > Spectral Transform 1 1 536 0 > Eigenproblem Solver 1 1 824 0 > Inner product 1 1 428 0 > Index Set 38 38 1796776 0 > IS L to G Mapping 1 1 58700 0 > Vec 65 65 5458584 0 > Vec Scatter 9 9 7092 0 > Application Order 1 1 155232 0 > Matrix 17 16 17715680 0 > Krylov Solver 1 1 832 0 > Preconditioner 1 1 744 0 > Viewer 2 2 1088 0 > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > ====================================================================== > Average time to get PetscTime(): 1.90735e-07 > Average time for MPI_Barrier(): 5.9557e-05 > Average time for zero size MPI_Send(): 2.97427e-05 > #PETSc Option Table entries: > -log_summary > -mat_superlu_dist_parsymbfact > #End o PETSc Option Table entries > Compiled without FORTRAN kernels > Compiled with full precision matrices (default) > sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 > sizeof(PetscScalar) 8 > Configure run at: Wed May 6 15:14:39 2009 > Configure options: --download-superlu_dist=1 --download-parmetis=1 -- > with-mpi-dir=/usr/lib/mpich --with-shared=0 > ----------------------------------------- > Libraries compiled on Wed May 6 15:14:49 CEST 2009 on medusa1 > Machine characteristics: Linux medusa1 2.6.18-6-amd64 #1 SMP Fri Dec > 12 05:49:32 UTC 2008 x86_64 GNU/Linux > Using PETSc directory: /home/fredrik/Hakan/cmlfet/external/ > petsc-3.0.0-p5 > Using PETSc arch: linux-gnu-c-debug > ----------------------------------------- > Using C compiler: /usr/lib/mpich/bin/mpicc -Wall -Wwrite-strings - > Wno-strict-aliasing -g3 Using Fortran compiler: /usr/lib/mpich/bin/ > mpif77 -Wall -Wno-unused-variable -g > ----------------------------------------- > Using include paths: -I/home/fredrik/Hakan/cmlfet/external/ > petsc-3.0.0-p5/linux-gnu-c-debug/include -I/home/fredrik/Hakan/ > cmlfet/external/petsc-3.0.0-p5/include -I/home/fredrik/Hakan/cmlfet/ > external/petsc-3.0.0-p5/linux-gnu-c-debug/include -I/usr/lib/mpich/ > include ------------------------------------------ > Using C linker: /usr/lib/mpich/bin/mpicc -Wall -Wwrite-strings -Wno- > strict-aliasing -g3 > Using Fortran linker: /usr/lib/mpich/bin/mpif77 -Wall -Wno-unused- > variable -g Using libraries: -Wl,-rpath,/home/fredrik/Hakan/cmlfet/ > external/petsc-3.0.0-p5/linux-gnu-c-debug/lib -L/home/fredrik/Hakan/ > cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib -lpetscts - > lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc > -lX11 -Wl,-rpath,/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/ > linux-gnu-c-debug/lib -L/home/fredrik/Hakan/cmlfet/external/ > petsc-3.0.0-p5/linux-gnu-c-debug/lib -lsuperlu_dist_2.3 -llapack - > lblas -lparmetis -lmetis -lm -L/usr/lib/mpich/lib -L/usr/lib/gcc/ > x86_64-linux-gnu/4.1.2 -L/usr/lib64 -L/lib64 
-ldl -lmpich -lpthread - > lrt -lgcc_s -lg2c -lm -L/usr/lib/gcc/x86_64-linux-gnu/3.4.6 -L/lib - > lm -ldl -lmpich -lpthread -lrt -lgcc_s -ldl > ------------------------------------------ > > real 9m10.616s > user 0m23.921s > sys 0m6.944s > > > > > > > > > > > > > > > > > > > > Satish Balay wrote: > Just a note about scalability: its a function of the hardware as > well.. For proper scalability studies - you'll need a true distributed > system with fast network [not SMP nodes..] > > Satish > > On Fri, 8 May 2009, Fredrik Bengzon wrote: > > > Hong, > Thank you for the suggestions, but I have looked at the EPS and KSP > objects > and I can not find anything wrong. The problem is that it takes > longer to > solve with 4 cpus than with 2 so the scalability seems to be absent > when using > superlu_dist. I have stored my mass and stiffness matrix in the > mpiaij format > and just passed them on to slepc. When using the petsc iterative > krylov > solvers i see 100% workload on all processors but when i switch to > superlu_dist only two cpus seem to do the whole work of LU > factoring. I don't > want to use the krylov solver though since it might cause slepc not to > converge. > Regards, > Fredrik > > Hong Zhang wrote: > > Run your code with '-eps_view -ksp_view' for checking > which methods are used > and '-log_summary' to see which operations dominate > the computation. > > You can turn on parallel symbolic factorization > with '-mat_superlu_dist_parsymbfact'. > > Unless you use large num of processors, symbolic factorization > takes ignorable execution time. The numeric > factorization usually dominates. > > Hong > > On Fri, 8 May 2009, Fredrik Bengzon wrote: > > > Hi Petsc team, > Sorry for posting questions not really concerning the petsc core, > but when > I run superlu_dist from within slepc I notice that the load balance is > poor. It is just fine during assembly (I use Metis to partition my > finite > element mesh) but when calling the slepc solver it dramatically > changes. I > use superlu_dist as solver for the eigenvalue iteration. My question > is: > can this have something to do with the fact that the option 'Parallel > symbolic factorization' is set to false? If so, can I change the > options > to superlu_dist using MatSetOption for instance? Also, does this > mean that > superlu_dist is not using parmetis to reorder the matrix? > Best Regards, > Fredrik Bengzon > > > > > > > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener From fredrik.bengzon at math.umu.se Fri May 8 17:26:28 2009 From: fredrik.bengzon at math.umu.se (Fredrik Bengzon) Date: Sat, 09 May 2009 00:26:28 +0200 Subject: superlu_dist options In-Reply-To: <070EE08B-E1A8-4567-9779-B7C7617EF94F@mcs.anl.gov> References: <4A044783.9090702@math.umu.se> <4A04521B.5000905@math.umu.se> <4A0456FB.3070705@math.umu.se> <070EE08B-E1A8-4567-9779-B7C7617EF94F@mcs.anl.gov> Message-ID: <4A04B194.8020806@math.umu.se> Hi again, I resorted to using Mumps, which seems to scale very well, in Slepc. However I have another question: how do you sort an MPI vector in Petsc, and can you get the permutation also? /Fredrik Barry Smith wrote: > > On May 8, 2009, at 11:03 AM, Matthew Knepley wrote: > >> Look at the timing. The symbolic factorization takes 1e-4 seconds and >> the numeric takes >> only 10s, out of 542s. MatSolve is taking 517s. If you have a >> problem, it is likely there. 
>> However, the MatSolve looks balanced. > > Something is funky with this. The 28 solves should not be so much > more than the numeric factorization. > Perhaps it is worth saving the matrix and reporting this as a > performance bug to Sherrie. > > Barry > >> >> >> Matt >> >> On Fri, May 8, 2009 at 10:59 AM, Fredrik Bengzon >> wrote: >> Hi, >> Here is the output from the KSP and EPS objects, and the log summary. >> / Fredrik >> >> >> Reading Triangle/Tetgen mesh >> #nodes=19345 >> #elements=81895 >> #nodes per element=4 >> Partitioning mesh with METIS 4.0 >> Element distribution (rank | #elements) >> 0 | 19771 >> 1 | 20954 >> 2 | 20611 >> 3 | 20559 >> rank 1 has 257 ghost nodes >> rank 0 has 127 ghost nodes >> rank 2 has 143 ghost nodes >> rank 3 has 270 ghost nodes >> Calling 3D Navier-Lame Eigenvalue Solver >> Assembling stiffness and mass matrix >> Solving eigensystem with SLEPc >> KSP Object:(st_) >> type: preonly >> maximum iterations=100000, initial guess is zero >> tolerances: relative=1e-08, absolute=1e-50, divergence=10000 >> left preconditioning >> PC Object:(st_) >> type: lu >> LU: out-of-place factorization >> matrix ordering: natural >> LU: tolerance for zero pivot 1e-12 >> EPS Object: >> problem type: generalized symmetric eigenvalue problem >> method: krylovschur >> extraction type: Rayleigh-Ritz >> selected portion of the spectrum: largest eigenvalues in magnitude >> number of eigenvalues (nev): 4 >> number of column vectors (ncv): 19 >> maximum dimension of projected problem (mpd): 19 >> maximum number of iterations: 6108 >> tolerance: 1e-05 >> dimension of user-provided deflation space: 0 >> IP Object: >> orthogonalization method: classical Gram-Schmidt >> orthogonalization refinement: if needed (eta: 0.707100) >> ST Object: >> type: sinvert >> shift: 0 >> Matrices A and B have same nonzero pattern >> Associated KSP object >> ------------------------------ >> KSP Object:(st_) >> type: preonly >> maximum iterations=100000, initial guess is zero >> tolerances: relative=1e-08, absolute=1e-50, divergence=10000 >> left preconditioning >> PC Object:(st_) >> type: lu >> LU: out-of-place factorization >> matrix ordering: natural >> LU: tolerance for zero pivot 1e-12 >> LU: factor fill ratio needed 0 >> Factored matrix follows >> Matrix Object: >> type=mpiaij, rows=58035, cols=58035 >> package used to perform factorization: superlu_dist >> total: nonzeros=0, allocated nonzeros=116070 >> SuperLU_DIST run parameters: >> Process grid nprow 2 x npcol 2 >> Equilibrate matrix TRUE >> Matrix input mode 1 >> Replace tiny pivots TRUE >> Use iterative refinement FALSE >> Processors in row 2 col partition 2 >> Row permutation LargeDiag >> Column permutation PARMETIS >> Parallel symbolic factorization TRUE >> Repeated factorization SamePattern >> linear system matrix = precond matrix: >> Matrix Object: >> type=mpiaij, rows=58035, cols=58035 >> total: nonzeros=2223621, allocated nonzeros=2233584 >> using I-node (on process 0) routines: found 4695 nodes, >> limit used is 5 >> ------------------------------ >> Number of iterations in the eigensolver: 1 >> Number of requested eigenvalues: 4 >> Stopping condition: tol=1e-05, maxit=6108 >> Number of converged eigenpairs: 8 >> >> Writing binary .vtu file /scratch/fredrik/output/mode-0.vtu >> Writing binary .vtu file /scratch/fredrik/output/mode-1.vtu >> Writing binary .vtu file /scratch/fredrik/output/mode-2.vtu >> Writing binary .vtu file /scratch/fredrik/output/mode-3.vtu >> Writing binary .vtu file /scratch/fredrik/output/mode-4.vtu >> Writing 
binary .vtu file /scratch/fredrik/output/mode-5.vtu >> Writing binary .vtu file /scratch/fredrik/output/mode-6.vtu >> Writing binary .vtu file /scratch/fredrik/output/mode-7.vtu >> ************************************************************************************************************************ >> >> *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript >> -r -fCourier9' to print this document *** >> ************************************************************************************************************************ >> >> >> ---------------------------------------------- PETSc Performance >> Summary: ---------------------------------------------- >> >> /home/fredrik/Hakan/cmlfet/a.out on a linux-gnu named medusa1 with 4 >> processors, by fredrik Fri May 8 17:57:28 2009 >> Using Petsc Release Version 3.0.0, Patch 5, Mon Apr 13 09:15:37 CDT 2009 >> >> Max Max/Min Avg Total >> Time (sec): 5.429e+02 1.00001 5.429e+02 >> Objects: 1.380e+02 1.00000 1.380e+02 >> Flops: 1.053e+08 1.05695 1.028e+08 4.114e+08 >> Flops/sec: 1.939e+05 1.05696 1.894e+05 7.577e+05 >> Memory: 5.927e+07 1.03224 2.339e+08 >> MPI Messages: 2.880e+02 1.51579 2.535e+02 1.014e+03 >> MPI Message Lengths: 4.868e+07 1.08170 1.827e+05 1.853e+08 >> MPI Reductions: 1.122e+02 1.00000 >> >> Flop counting convention: 1 flop = 1 real number operation of type >> (multiply/divide/add/subtract) >> e.g., VecAXPY() for real vectors of length >> N --> 2N flops >> and VecAXPY() for complex vectors of length >> N --> 8N flops >> >> Summary of Stages: ----- Time ------ ----- Flops ----- --- >> Messages --- -- Message Lengths -- -- Reductions -- >> Avg %Total Avg %Total counts >> %Total Avg %Total counts %Total >> 0: Main Stage: 5.4292e+02 100.0% 4.1136e+08 100.0% 1.014e+03 >> 100.0% 1.827e+05 100.0% 3.600e+02 80.2% >> >> ------------------------------------------------------------------------------------------------------------------------ >> >> See the 'Profiling' chapter of the users' manual for details on >> interpreting output. >> Phase summary info: >> Count: number of times phase was executed >> Time and Flops: Max - maximum over all processors >> Ratio - ratio of maximum to minimum over all processors >> Mess: number of messages sent >> Avg. len: average message length >> Reduct: number of global reductions >> Global: entire computation >> Stage: stages of a computation. Set stages with PetscLogStagePush() >> and PetscLogStagePop(). >> %T - percent time in this phase %F - percent flops in >> this phase >> %M - percent messages in this phase %L - percent message >> lengths in this phase >> %R - percent reductions in this phase >> Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time >> over all processors) >> ------------------------------------------------------------------------------------------------------------------------ >> >> >> >> ########################################################## >> # # >> # WARNING!!! # >> # # >> # This code was compiled with a debugging option, # >> # To get timing results run config/configure.py # >> # using --with-debugging=no, the performance will # >> # be generally two or three times faster. 
# >> # # >> ########################################################## >> >> >> Event Count Time (sec) >> Flops --- Global --- --- Stage --- Total >> Max Ratio Max Ratio Max Ratio Mess Avg >> len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s >> ------------------------------------------------------------------------------------------------------------------------ >> >> >> --- Event Stage 0: Main Stage >> >> STSetUp 1 1.0 1.0467e+01 1.0 0.00e+00 0.0 0.0e+00 >> 0.0e+00 8.0e+00 2 0 0 0 2 2 0 0 0 2 0 >> STApply 28 1.0 5.1775e+02 1.0 3.15e+07 1.1 1.7e+02 >> 4.2e+03 2.8e+01 95 30 17 0 6 95 30 17 0 8 0 >> EPSSetUp 1 1.0 1.0482e+01 1.0 0.00e+00 0.0 0.0e+00 >> 0.0e+00 4.6e+01 2 0 0 0 10 2 0 0 0 13 0 >> EPSSolve 1 1.0 3.7193e+02 1.0 9.59e+07 1.1 3.5e+02 >> 4.2e+03 9.7e+01 69 91 35 1 22 69 91 35 1 27 1 >> IPOrthogonalize 19 1.0 3.4406e-01 1.1 6.75e+07 1.1 2.3e+02 >> 4.2e+03 7.6e+01 0 64 22 1 17 0 64 22 1 21 767 >> IPInnerProduct 153 1.0 3.1410e-01 1.0 5.63e+07 1.1 2.3e+02 >> 4.2e+03 3.9e+01 0 53 23 1 9 0 53 23 1 11 700 >> IPApplyMatrix 39 1.0 2.4903e-01 1.1 4.38e+07 1.1 2.3e+02 >> 4.2e+03 0.0e+00 0 42 23 1 0 0 42 23 1 0 687 >> UpdateVectors 1 1.0 4.2958e-03 1.2 4.51e+06 1.1 0.0e+00 >> 0.0e+00 0.0e+00 0 4 0 0 0 0 4 0 0 0 4107 >> VecDot 1 1.0 5.6815e-04 4.7 2.97e+04 1.1 0.0e+00 >> 0.0e+00 1.0e+00 0 0 0 0 0 0 0 0 0 0 204 >> VecNorm 8 1.0 2.5260e-03 3.2 2.38e+05 1.1 0.0e+00 >> 0.0e+00 8.0e+00 0 0 0 0 2 0 0 0 0 2 368 >> VecScale 27 1.0 5.9605e-04 1.1 4.01e+05 1.1 0.0e+00 >> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2629 >> VecCopy 53 1.0 4.0610e-03 1.4 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> VecSet 77 1.0 6.2165e-03 1.1 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> VecAXPY 38 1.0 2.7709e-03 1.7 1.13e+06 1.1 0.0e+00 >> 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 1592 >> VecMAXPY 38 1.0 2.5925e-02 1.1 1.13e+07 1.1 0.0e+00 >> 0.0e+00 0.0e+00 0 11 0 0 0 0 11 0 0 0 1701 >> VecAssemblyBegin 5 1.0 9.0070e-03 2.3 0.00e+00 0.0 3.6e+01 >> 2.1e+04 1.5e+01 0 0 4 0 3 0 0 4 0 4 0 >> VecAssemblyEnd 5 1.0 3.4809e-04 1.1 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> VecScatterBegin 73 1.0 8.5931e-03 1.5 0.00e+00 0.0 4.6e+02 >> 8.9e+03 0.0e+00 0 0 45 2 0 0 0 45 2 0 0 >> VecScatterEnd 73 1.0 2.2542e-02 2.2 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> VecReduceArith 76 1.0 3.0838e-02 1.1 1.24e+07 1.1 0.0e+00 >> 0.0e+00 0.0e+00 0 12 0 0 0 0 12 0 0 0 1573 >> VecReduceComm 38 1.0 4.8040e-02 2.0 0.00e+00 0.0 0.0e+00 >> 0.0e+00 3.8e+01 0 0 0 0 8 0 0 0 0 11 0 >> VecNormalize 8 1.0 2.7280e-03 2.8 3.56e+05 1.1 0.0e+00 >> 0.0e+00 8.0e+00 0 0 0 0 2 0 0 0 0 2 511 >> MatMult 67 1.0 4.1397e-01 1.1 7.53e+07 1.1 4.0e+02 >> 4.2e+03 0.0e+00 0 71 40 1 0 0 71 40 1 0 710 >> MatSolve 28 1.0 5.1757e+02 1.0 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 95 0 0 0 0 95 0 0 0 0 0 >> MatLUFactorSym 1 1.0 3.6097e-04 1.1 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatLUFactorNum 1 1.0 1.0464e+01 1.0 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 >> MatAssemblyBegin 9 1.0 3.3842e-0146.7 0.00e+00 0.0 5.4e+01 >> 6.0e+04 8.0e+00 0 0 5 2 2 0 0 5 2 2 0 >> MatAssemblyEnd 9 1.0 2.3042e-01 1.0 0.00e+00 0.0 3.6e+01 >> 9.4e+02 3.1e+01 0 0 4 0 7 0 0 4 0 9 0 >> MatGetRow 5206 1.1 3.1164e-03 1.1 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatGetSubMatrice 5 1.0 8.7580e-01 1.2 0.00e+00 0.0 1.5e+02 >> 1.1e+06 2.5e+01 0 0 15 88 6 0 0 15 88 7 0 >> MatZeroEntries 2 1.0 1.0233e-02 1.1 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> MatView 
2 1.0 1.0149e-03 2.0 0.00e+00 0.0 0.0e+00 >> 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 1 0 >> KSPSetup 1 1.0 2.8610e-06 1.5 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >> KSPSolve 28 1.0 5.1758e+02 1.0 0.00e+00 0.0 0.0e+00 >> 0.0e+00 2.8e+01 95 0 0 0 6 95 0 0 0 8 0 >> PCSetUp 1 1.0 1.0467e+01 1.0 0.00e+00 0.0 0.0e+00 >> 0.0e+00 8.0e+00 2 0 0 0 2 2 0 0 0 2 0 >> PCApply 28 1.0 5.1757e+02 1.0 0.00e+00 0.0 0.0e+00 >> 0.0e+00 0.0e+00 95 0 0 0 0 95 0 0 0 0 0 >> ------------------------------------------------------------------------------------------------------------------------ >> >> >> Memory usage is given in bytes: >> >> Object Type Creations Destructions Memory Descendants' >> Mem. >> >> --- Event Stage 0: Main Stage >> >> Spectral Transform 1 1 536 0 >> Eigenproblem Solver 1 1 824 0 >> Inner product 1 1 428 0 >> Index Set 38 38 1796776 0 >> IS L to G Mapping 1 1 58700 0 >> Vec 65 65 5458584 0 >> Vec Scatter 9 9 7092 0 >> Application Order 1 1 155232 0 >> Matrix 17 16 17715680 0 >> Krylov Solver 1 1 832 0 >> Preconditioner 1 1 744 0 >> Viewer 2 2 1088 0 >> ======================================================================================================================== >> >> Average time to get PetscTime(): 1.90735e-07 >> Average time for MPI_Barrier(): 5.9557e-05 >> Average time for zero size MPI_Send(): 2.97427e-05 >> #PETSc Option Table entries: >> -log_summary >> -mat_superlu_dist_parsymbfact >> #End o PETSc Option Table entries >> Compiled without FORTRAN kernels >> Compiled with full precision matrices (default) >> sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 >> sizeof(PetscScalar) 8 >> Configure run at: Wed May 6 15:14:39 2009 >> Configure options: --download-superlu_dist=1 --download-parmetis=1 >> --with-mpi-dir=/usr/lib/mpich --with-shared=0 >> ----------------------------------------- >> Libraries compiled on Wed May 6 15:14:49 CEST 2009 on medusa1 >> Machine characteristics: Linux medusa1 2.6.18-6-amd64 #1 SMP Fri Dec >> 12 05:49:32 UTC 2008 x86_64 GNU/Linux >> Using PETSc directory: >> /home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5 >> Using PETSc arch: linux-gnu-c-debug >> ----------------------------------------- >> Using C compiler: /usr/lib/mpich/bin/mpicc -Wall -Wwrite-strings >> -Wno-strict-aliasing -g3 Using Fortran compiler: >> /usr/lib/mpich/bin/mpif77 -Wall -Wno-unused-variable -g >> ----------------------------------------- >> Using include paths: >> -I/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/include >> -I/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/include >> -I/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/include >> -I/usr/lib/mpich/include ------------------------------------------ >> Using C linker: /usr/lib/mpich/bin/mpicc -Wall -Wwrite-strings >> -Wno-strict-aliasing -g3 >> Using Fortran linker: /usr/lib/mpich/bin/mpif77 -Wall >> -Wno-unused-variable -g Using libraries: >> -Wl,-rpath,/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib >> -L/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib >> -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec >> -lpetsc -lX11 >> -Wl,-rpath,/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib >> -L/home/fredrik/Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib >> -lsuperlu_dist_2.3 -llapack -lblas -lparmetis -lmetis -lm >> -L/usr/lib/mpich/lib -L/usr/lib/gcc/x86_64-linux-gnu/4.1.2 >> -L/usr/lib64 -L/lib64 -ldl -lmpich -lpthread -lrt -lgcc_s -lg2c -lm >> 
-L/usr/lib/gcc/x86_64-linux-gnu/3.4.6 -L/lib -lm -ldl -lmpich >> -lpthread -lrt -lgcc_s -ldl >> ------------------------------------------ >> >> real 9m10.616s >> user 0m23.921s >> sys 0m6.944s >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> Satish Balay wrote: >> Just a note about scalability: its a function of the hardware as >> well.. For proper scalability studies - you'll need a true distributed >> system with fast network [not SMP nodes..] >> >> Satish >> >> On Fri, 8 May 2009, Fredrik Bengzon wrote: >> >> >> Hong, >> Thank you for the suggestions, but I have looked at the EPS and KSP >> objects >> and I can not find anything wrong. The problem is that it takes >> longer to >> solve with 4 cpus than with 2 so the scalability seems to be absent >> when using >> superlu_dist. I have stored my mass and stiffness matrix in the >> mpiaij format >> and just passed them on to slepc. When using the petsc iterative krylov >> solvers i see 100% workload on all processors but when i switch to >> superlu_dist only two cpus seem to do the whole work of LU factoring. >> I don't >> want to use the krylov solver though since it might cause slepc not to >> converge. >> Regards, >> Fredrik >> >> Hong Zhang wrote: >> >> Run your code with '-eps_view -ksp_view' for checking >> which methods are used >> and '-log_summary' to see which operations dominate >> the computation. >> >> You can turn on parallel symbolic factorization >> with '-mat_superlu_dist_parsymbfact'. >> >> Unless you use large num of processors, symbolic factorization >> takes ignorable execution time. The numeric >> factorization usually dominates. >> >> Hong >> >> On Fri, 8 May 2009, Fredrik Bengzon wrote: >> >> >> Hi Petsc team, >> Sorry for posting questions not really concerning the petsc core, but >> when >> I run superlu_dist from within slepc I notice that the load balance is >> poor. It is just fine during assembly (I use Metis to partition my >> finite >> element mesh) but when calling the slepc solver it dramatically >> changes. I >> use superlu_dist as solver for the eigenvalue iteration. My question is: >> can this have something to do with the fact that the option 'Parallel >> symbolic factorization' is set to false? If so, can I change the options >> to superlu_dist using MatSetOption for instance? Also, does this mean >> that >> superlu_dist is not using parmetis to reorder the matrix? >> Best Regards, >> Fredrik Bengzon >> >> >> >> >> >> >> >> >> >> >> >> --What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which >> their experiments lead. >> -- Norbert Wiener > > From bsmith at mcs.anl.gov Fri May 8 17:28:16 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 8 May 2009 17:28:16 -0500 Subject: superlu_dist options In-Reply-To: <4A04B194.8020806@math.umu.se> References: <4A044783.9090702@math.umu.se> <4A04521B.5000905@math.umu.se> <4A0456FB.3070705@math.umu.se> <070EE08B-E1A8-4567-9779-B7C7617EF94F@mcs.anl.gov> <4A04B194.8020806@math.umu.se> Message-ID: I don't think we have any parallel sorts in PETSc. Barry On May 8, 2009, at 5:26 PM, Fredrik Bengzon wrote: > Hi again, > I resorted to using Mumps, which seems to scale very well, in Slepc. > However I have another question: how do you sort an MPI vector in > Petsc, and can you get the permutation also? > /Fredrik > > > Barry Smith wrote: >> >> On May 8, 2009, at 11:03 AM, Matthew Knepley wrote: >> >>> Look at the timing. 
The symbolic factorization takes 1e-4 seconds >>> and the numeric takes >>> only 10s, out of 542s. MatSolve is taking 517s. If you have a >>> problem, it is likely there. >>> However, the MatSolve looks balanced. >> >> Something is funky with this. The 28 solves should not be so much >> more than the numeric factorization. >> Perhaps it is worth saving the matrix and reporting this as a >> performance bug to Sherrie. >> >> Barry >> >>> >>> >>> Matt >>> >>> On Fri, May 8, 2009 at 10:59 AM, Fredrik Bengzon >> > wrote: >>> Hi, >>> Here is the output from the KSP and EPS objects, and the log >>> summary. >>> / Fredrik >>> >>> >>> Reading Triangle/Tetgen mesh >>> #nodes=19345 >>> #elements=81895 >>> #nodes per element=4 >>> Partitioning mesh with METIS 4.0 >>> Element distribution (rank | #elements) >>> 0 | 19771 >>> 1 | 20954 >>> 2 | 20611 >>> 3 | 20559 >>> rank 1 has 257 ghost nodes >>> rank 0 has 127 ghost nodes >>> rank 2 has 143 ghost nodes >>> rank 3 has 270 ghost nodes >>> Calling 3D Navier-Lame Eigenvalue Solver >>> Assembling stiffness and mass matrix >>> Solving eigensystem with SLEPc >>> KSP Object:(st_) >>> type: preonly >>> maximum iterations=100000, initial guess is zero >>> tolerances: relative=1e-08, absolute=1e-50, divergence=10000 >>> left preconditioning >>> PC Object:(st_) >>> type: lu >>> LU: out-of-place factorization >>> matrix ordering: natural >>> LU: tolerance for zero pivot 1e-12 >>> EPS Object: >>> problem type: generalized symmetric eigenvalue problem >>> method: krylovschur >>> extraction type: Rayleigh-Ritz >>> selected portion of the spectrum: largest eigenvalues in magnitude >>> number of eigenvalues (nev): 4 >>> number of column vectors (ncv): 19 >>> maximum dimension of projected problem (mpd): 19 >>> maximum number of iterations: 6108 >>> tolerance: 1e-05 >>> dimension of user-provided deflation space: 0 >>> IP Object: >>> orthogonalization method: classical Gram-Schmidt >>> orthogonalization refinement: if needed (eta: 0.707100) >>> ST Object: >>> type: sinvert >>> shift: 0 >>> Matrices A and B have same nonzero pattern >>> Associated KSP object >>> ------------------------------ >>> KSP Object:(st_) >>> type: preonly >>> maximum iterations=100000, initial guess is zero >>> tolerances: relative=1e-08, absolute=1e-50, divergence=10000 >>> left preconditioning >>> PC Object:(st_) >>> type: lu >>> LU: out-of-place factorization >>> matrix ordering: natural >>> LU: tolerance for zero pivot 1e-12 >>> LU: factor fill ratio needed 0 >>> Factored matrix follows >>> Matrix Object: >>> type=mpiaij, rows=58035, cols=58035 >>> package used to perform factorization: superlu_dist >>> total: nonzeros=0, allocated nonzeros=116070 >>> SuperLU_DIST run parameters: >>> Process grid nprow 2 x npcol 2 >>> Equilibrate matrix TRUE >>> Matrix input mode 1 >>> Replace tiny pivots TRUE >>> Use iterative refinement FALSE >>> Processors in row 2 col partition 2 >>> Row permutation LargeDiag >>> Column permutation PARMETIS >>> Parallel symbolic factorization TRUE >>> Repeated factorization SamePattern >>> linear system matrix = precond matrix: >>> Matrix Object: >>> type=mpiaij, rows=58035, cols=58035 >>> total: nonzeros=2223621, allocated nonzeros=2233584 >>> using I-node (on process 0) routines: found 4695 nodes, >>> limit used is 5 >>> ------------------------------ >>> Number of iterations in the eigensolver: 1 >>> Number of requested eigenvalues: 4 >>> Stopping condition: tol=1e-05, maxit=6108 >>> Number of converged eigenpairs: 8 >>> >>> Writing binary .vtu file 
/scratch/fredrik/output/mode-0.vtu >>> Writing binary .vtu file /scratch/fredrik/output/mode-1.vtu >>> Writing binary .vtu file /scratch/fredrik/output/mode-2.vtu >>> Writing binary .vtu file /scratch/fredrik/output/mode-3.vtu >>> Writing binary .vtu file /scratch/fredrik/output/mode-4.vtu >>> Writing binary .vtu file /scratch/fredrik/output/mode-5.vtu >>> Writing binary .vtu file /scratch/fredrik/output/mode-6.vtu >>> Writing binary .vtu file /scratch/fredrik/output/mode-7.vtu >>> ************************************************************************************************************************ >>> *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use >>> 'enscript -r -fCourier9' to print this document *** >>> ************************************************************************************************************************ >>> >>> ---------------------------------------------- PETSc Performance >>> Summary: ---------------------------------------------- >>> >>> /home/fredrik/Hakan/cmlfet/a.out on a linux-gnu named medusa1 with >>> 4 processors, by fredrik Fri May 8 17:57:28 2009 >>> Using Petsc Release Version 3.0.0, Patch 5, Mon Apr 13 09:15:37 >>> CDT 2009 >>> >>> Max Max/Min Avg Total >>> Time (sec): 5.429e+02 1.00001 5.429e+02 >>> Objects: 1.380e+02 1.00000 1.380e+02 >>> Flops: 1.053e+08 1.05695 1.028e+08 4.114e+08 >>> Flops/sec: 1.939e+05 1.05696 1.894e+05 7.577e+05 >>> Memory: 5.927e+07 1.03224 2.339e+08 >>> MPI Messages: 2.880e+02 1.51579 2.535e+02 1.014e+03 >>> MPI Message Lengths: 4.868e+07 1.08170 1.827e+05 1.853e+08 >>> MPI Reductions: 1.122e+02 1.00000 >>> >>> Flop counting convention: 1 flop = 1 real number operation of type >>> (multiply/divide/add/subtract) >>> e.g., VecAXPY() for real vectors of >>> length N --> 2N flops >>> and VecAXPY() for complex vectors of >>> length N --> 8N flops >>> >>> Summary of Stages: ----- Time ------ ----- Flops ----- --- >>> Messages --- -- Message Lengths -- -- Reductions -- >>> Avg %Total Avg %Total counts >>> %Total Avg %Total counts %Total >>> 0: Main Stage: 5.4292e+02 100.0% 4.1136e+08 100.0% 1.014e >>> +03 100.0% 1.827e+05 100.0% 3.600e+02 80.2% >>> >>> ------------------------------------------------------------------------------------------------------------------------ >>> See the 'Profiling' chapter of the users' manual for details on >>> interpreting output. >>> Phase summary info: >>> Count: number of times phase was executed >>> Time and Flops: Max - maximum over all processors >>> Ratio - ratio of maximum to minimum over all >>> processors >>> Mess: number of messages sent >>> Avg. len: average message length >>> Reduct: number of global reductions >>> Global: entire computation >>> Stage: stages of a computation. Set stages with >>> PetscLogStagePush() and PetscLogStagePop(). >>> %T - percent time in this phase %F - percent flops in >>> this phase >>> %M - percent messages in this phase %L - percent message >>> lengths in this phase >>> %R - percent reductions in this phase >>> Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max >>> time over all processors) >>> ------------------------------------------------------------------------------------------------------------------------ >>> >>> >>> ########################################################## >>> # # >>> # WARNING!!! # >>> # # >>> # This code was compiled with a debugging option, # >>> # To get timing results run config/configure.py # >>> # using --with-debugging=no, the performance will # >>> # be generally two or three times faster. 
# >>> # # >>> ########################################################## >>> >>> >>> Event Count Time (sec) >>> Flops --- Global --- --- Stage --- >>> Total >>> Max Ratio Max Ratio Max Ratio Mess Avg >>> len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s >>> ------------------------------------------------------------------------------------------------------------------------ >>> >>> --- Event Stage 0: Main Stage >>> >>> STSetUp 1 1.0 1.0467e+01 1.0 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 8.0e+00 2 0 0 0 2 2 0 0 0 2 0 >>> STApply 28 1.0 5.1775e+02 1.0 3.15e+07 1.1 1.7e+02 >>> 4.2e+03 2.8e+01 95 30 17 0 6 95 30 17 0 8 0 >>> EPSSetUp 1 1.0 1.0482e+01 1.0 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 4.6e+01 2 0 0 0 10 2 0 0 0 13 0 >>> EPSSolve 1 1.0 3.7193e+02 1.0 9.59e+07 1.1 3.5e+02 >>> 4.2e+03 9.7e+01 69 91 35 1 22 69 91 35 1 27 1 >>> IPOrthogonalize 19 1.0 3.4406e-01 1.1 6.75e+07 1.1 2.3e+02 >>> 4.2e+03 7.6e+01 0 64 22 1 17 0 64 22 1 21 767 >>> IPInnerProduct 153 1.0 3.1410e-01 1.0 5.63e+07 1.1 2.3e+02 >>> 4.2e+03 3.9e+01 0 53 23 1 9 0 53 23 1 11 700 >>> IPApplyMatrix 39 1.0 2.4903e-01 1.1 4.38e+07 1.1 2.3e+02 >>> 4.2e+03 0.0e+00 0 42 23 1 0 0 42 23 1 0 687 >>> UpdateVectors 1 1.0 4.2958e-03 1.2 4.51e+06 1.1 0.0e+00 >>> 0.0e+00 0.0e+00 0 4 0 0 0 0 4 0 0 0 4107 >>> VecDot 1 1.0 5.6815e-04 4.7 2.97e+04 1.1 0.0e+00 >>> 0.0e+00 1.0e+00 0 0 0 0 0 0 0 0 0 0 204 >>> VecNorm 8 1.0 2.5260e-03 3.2 2.38e+05 1.1 0.0e+00 >>> 0.0e+00 8.0e+00 0 0 0 0 2 0 0 0 0 2 368 >>> VecScale 27 1.0 5.9605e-04 1.1 4.01e+05 1.1 0.0e+00 >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2629 >>> VecCopy 53 1.0 4.0610e-03 1.4 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> VecSet 77 1.0 6.2165e-03 1.1 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> VecAXPY 38 1.0 2.7709e-03 1.7 1.13e+06 1.1 0.0e+00 >>> 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 1592 >>> VecMAXPY 38 1.0 2.5925e-02 1.1 1.13e+07 1.1 0.0e+00 >>> 0.0e+00 0.0e+00 0 11 0 0 0 0 11 0 0 0 1701 >>> VecAssemblyBegin 5 1.0 9.0070e-03 2.3 0.00e+00 0.0 3.6e+01 >>> 2.1e+04 1.5e+01 0 0 4 0 3 0 0 4 0 4 0 >>> VecAssemblyEnd 5 1.0 3.4809e-04 1.1 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> VecScatterBegin 73 1.0 8.5931e-03 1.5 0.00e+00 0.0 4.6e+02 >>> 8.9e+03 0.0e+00 0 0 45 2 0 0 0 45 2 0 0 >>> VecScatterEnd 73 1.0 2.2542e-02 2.2 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> VecReduceArith 76 1.0 3.0838e-02 1.1 1.24e+07 1.1 0.0e+00 >>> 0.0e+00 0.0e+00 0 12 0 0 0 0 12 0 0 0 1573 >>> VecReduceComm 38 1.0 4.8040e-02 2.0 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 3.8e+01 0 0 0 0 8 0 0 0 0 11 0 >>> VecNormalize 8 1.0 2.7280e-03 2.8 3.56e+05 1.1 0.0e+00 >>> 0.0e+00 8.0e+00 0 0 0 0 2 0 0 0 0 2 511 >>> MatMult 67 1.0 4.1397e-01 1.1 7.53e+07 1.1 4.0e+02 >>> 4.2e+03 0.0e+00 0 71 40 1 0 0 71 40 1 0 710 >>> MatSolve 28 1.0 5.1757e+02 1.0 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 95 0 0 0 0 95 0 0 0 0 0 >>> MatLUFactorSym 1 1.0 3.6097e-04 1.1 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> MatLUFactorNum 1 1.0 1.0464e+01 1.0 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0 >>> MatAssemblyBegin 9 1.0 3.3842e-0146.7 0.00e+00 0.0 5.4e+01 >>> 6.0e+04 8.0e+00 0 0 5 2 2 0 0 5 2 2 0 >>> MatAssemblyEnd 9 1.0 2.3042e-01 1.0 0.00e+00 0.0 3.6e+01 >>> 9.4e+02 3.1e+01 0 0 4 0 7 0 0 4 0 9 0 >>> MatGetRow 5206 1.1 3.1164e-03 1.1 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> MatGetSubMatrice 5 1.0 8.7580e-01 1.2 0.00e+00 0.0 1.5e+02 >>> 1.1e+06 2.5e+01 0 0 15 88 6 0 0 15 88 7 0 >>> MatZeroEntries 2 1.0 1.0233e-02 1.1 
0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> MatView 2 1.0 1.0149e-03 2.0 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 1 0 >>> KSPSetup 1 1.0 2.8610e-06 1.5 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 >>> KSPSolve 28 1.0 5.1758e+02 1.0 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 2.8e+01 95 0 0 0 6 95 0 0 0 8 0 >>> PCSetUp 1 1.0 1.0467e+01 1.0 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 8.0e+00 2 0 0 0 2 2 0 0 0 2 0 >>> PCApply 28 1.0 5.1757e+02 1.0 0.00e+00 0.0 0.0e+00 >>> 0.0e+00 0.0e+00 95 0 0 0 0 95 0 0 0 0 0 >>> ------------------------------------------------------------------------------------------------------------------------ >>> >>> Memory usage is given in bytes: >>> >>> Object Type Creations Destructions Memory >>> Descendants' Mem. >>> >>> --- Event Stage 0: Main Stage >>> >>> Spectral Transform 1 1 536 0 >>> Eigenproblem Solver 1 1 824 0 >>> Inner product 1 1 428 0 >>> Index Set 38 38 1796776 0 >>> IS L to G Mapping 1 1 58700 0 >>> Vec 65 65 5458584 0 >>> Vec Scatter 9 9 7092 0 >>> Application Order 1 1 155232 0 >>> Matrix 17 16 17715680 0 >>> Krylov Solver 1 1 832 0 >>> Preconditioner 1 1 744 0 >>> Viewer 2 2 1088 0 >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> = >>> ==================================================================== >>> Average time to get PetscTime(): 1.90735e-07 >>> Average time for MPI_Barrier(): 5.9557e-05 >>> Average time for zero size MPI_Send(): 2.97427e-05 >>> #PETSc Option Table entries: >>> -log_summary >>> -mat_superlu_dist_parsymbfact >>> #End o PETSc Option Table entries >>> Compiled without FORTRAN kernels >>> Compiled with full precision matrices (default) >>> sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 >>> sizeof(PetscScalar) 8 >>> Configure run at: Wed May 6 15:14:39 2009 >>> Configure options: --download-superlu_dist=1 --download-parmetis=1 >>> --with-mpi-dir=/usr/lib/mpich --with-shared=0 >>> ----------------------------------------- >>> Libraries compiled on Wed May 6 15:14:49 CEST 2009 on medusa1 >>> Machine characteristics: Linux medusa1 2.6.18-6-amd64 #1 SMP Fri >>> Dec 12 05:49:32 UTC 2008 x86_64 GNU/Linux >>> Using PETSc directory: /home/fredrik/Hakan/cmlfet/external/ >>> petsc-3.0.0-p5 >>> Using PETSc arch: linux-gnu-c-debug >>> ----------------------------------------- >>> Using C compiler: /usr/lib/mpich/bin/mpicc -Wall -Wwrite-strings - >>> Wno-strict-aliasing -g3 Using Fortran compiler: /usr/lib/mpich/ >>> bin/mpif77 -Wall -Wno-unused-variable -g >>> ----------------------------------------- >>> Using include paths: -I/home/fredrik/Hakan/cmlfet/external/ >>> petsc-3.0.0-p5/linux-gnu-c-debug/include -I/home/fredrik/Hakan/ >>> cmlfet/external/petsc-3.0.0-p5/include -I/home/fredrik/Hakan/ >>> cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/include -I/usr/ >>> lib/mpich/include ------------------------------------------ >>> Using C linker: /usr/lib/mpich/bin/mpicc -Wall -Wwrite-strings - >>> Wno-strict-aliasing -g3 >>> Using Fortran linker: /usr/lib/mpich/bin/mpif77 -Wall -Wno-unused- >>> variable -g Using libraries: -Wl,-rpath,/home/fredrik/Hakan/cmlfet/ >>> external/petsc-3.0.0-p5/linux-gnu-c-debug/lib -L/home/fredrik/ >>> Hakan/cmlfet/external/petsc-3.0.0-p5/linux-gnu-c-debug/lib - >>> lpetscts -lpetscsnes -lpetscksp 
-lpetscdm -lpetscmat -lpetscvec - >>> lpetsc -lX11 -Wl,-rpath,/home/fredrik/Hakan/cmlfet/external/ >>> petsc-3.0.0-p5/linux-gnu-c-debug/lib -L/home/fredrik/Hakan/cmlfet/ >>> external/petsc-3.0.0-p5/linux-gnu-c-debug/lib -lsuperlu_dist_2.3 - >>> llapack -lblas -lparmetis -lmetis -lm -L/usr/lib/mpich/lib -L/usr/ >>> lib/gcc/x86_64-linux-gnu/4.1.2 -L/usr/lib64 -L/lib64 -ldl -lmpich - >>> lpthread -lrt -lgcc_s -lg2c -lm -L/usr/lib/gcc/x86_64-linux-gnu/ >>> 3.4.6 -L/lib -lm -ldl -lmpich -lpthread -lrt -lgcc_s -ldl >>> ------------------------------------------ >>> >>> real 9m10.616s >>> user 0m23.921s >>> sys 0m6.944s >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> Satish Balay wrote: >>> Just a note about scalability: its a function of the hardware as >>> well.. For proper scalability studies - you'll need a true >>> distributed >>> system with fast network [not SMP nodes..] >>> >>> Satish >>> >>> On Fri, 8 May 2009, Fredrik Bengzon wrote: >>> >>> >>> Hong, >>> Thank you for the suggestions, but I have looked at the EPS and >>> KSP objects >>> and I can not find anything wrong. The problem is that it takes >>> longer to >>> solve with 4 cpus than with 2 so the scalability seems to be >>> absent when using >>> superlu_dist. I have stored my mass and stiffness matrix in the >>> mpiaij format >>> and just passed them on to slepc. When using the petsc iterative >>> krylov >>> solvers i see 100% workload on all processors but when i switch to >>> superlu_dist only two cpus seem to do the whole work of LU >>> factoring. I don't >>> want to use the krylov solver though since it might cause slepc >>> not to >>> converge. >>> Regards, >>> Fredrik >>> >>> Hong Zhang wrote: >>> >>> Run your code with '-eps_view -ksp_view' for checking >>> which methods are used >>> and '-log_summary' to see which operations dominate >>> the computation. >>> >>> You can turn on parallel symbolic factorization >>> with '-mat_superlu_dist_parsymbfact'. >>> >>> Unless you use large num of processors, symbolic factorization >>> takes ignorable execution time. The numeric >>> factorization usually dominates. >>> >>> Hong >>> >>> On Fri, 8 May 2009, Fredrik Bengzon wrote: >>> >>> >>> Hi Petsc team, >>> Sorry for posting questions not really concerning the petsc core, >>> but when >>> I run superlu_dist from within slepc I notice that the load >>> balance is >>> poor. It is just fine during assembly (I use Metis to partition my >>> finite >>> element mesh) but when calling the slepc solver it dramatically >>> changes. I >>> use superlu_dist as solver for the eigenvalue iteration. My >>> question is: >>> can this have something to do with the fact that the option >>> 'Parallel >>> symbolic factorization' is set to false? If so, can I change the >>> options >>> to superlu_dist using MatSetOption for instance? Also, does this >>> mean that >>> superlu_dist is not using parmetis to reorder the matrix? >>> Best Regards, >>> Fredrik Bengzon >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> --What most experimenters take for granted before they begin their >>> experiments is infinitely more interesting than any results to >>> which their experiments lead. 
>>> -- Norbert Wiener >> >> > From Hung.V.Nguyen at usace.army.mil Tue May 12 15:02:06 2009 From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS) Date: Tue, 12 May 2009 15:02:06 -0500 Subject: Change value of amgStrongThreshold Message-ID: Hello All, Is there an option to change a value of amgStrongThreshold at running time when run hypre via petsc? Thanks, -hung aprun -n 16 ./test_matrix_read -ksp_type cg -ksp_rtol 1.0e-9 -pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_tol 1.0e-9 -pc_hypre_boomeramg_max_iter 10 -ksp_monitor -ksp_view -amgStrongThreshold 0.7 0 KSP Residual norm 5.426061506200e+04 1 KSP Residual norm 1.365341976191e+02 2 KSP Residual norm 2.126401900010e+01 3 KSP Residual norm 1.100477412796e+01 4 KSP Residual norm 4.812547276514e+00 5 KSP Residual norm 2.358447263575e+00 6 KSP Residual norm 2.173357413396e+00 7 KSP Residual norm 1.503960597666e+00 8 KSP Residual norm 1.010547645731e+00 9 KSP Residual norm 8.426472409077e-01 10 KSP Residual norm 6.521097422133e-01 11 KSP Residual norm 4.759600459962e-01 12 KSP Residual norm 3.354705674276e-01 13 KSP Residual norm 2.928909822875e-01 14 KSP Residual norm 2.131100261605e-01 15 KSP Residual norm 1.434520361965e-01 16 KSP Residual norm 1.239990589407e-01 17 KSP Residual norm 9.133339702949e-02 18 KSP Residual norm 7.304670860369e-02 19 KSP Residual norm 5.128919929550e-02 20 KSP Residual norm 4.683930171651e-02 21 KSP Residual norm 3.312636103461e-02 22 KSP Residual norm 2.726223533933e-02 23 KSP Residual norm 1.630121490736e-02 24 KSP Residual norm 1.439580288349e-02 25 KSP Residual norm 9.386850326614e-03 26 KSP Residual norm 7.258934432207e-03 27 KSP Residual norm 5.391044121226e-03 28 KSP Residual norm 4.096261122185e-03 29 KSP Residual norm 3.821143506943e-03 30 KSP Residual norm 2.369891585552e-03 31 KSP Residual norm 1.726252735068e-03 32 KSP Residual norm 1.330257887371e-03 33 KSP Residual norm 9.565460328854e-04 34 KSP Residual norm 8.134595787555e-04 35 KSP Residual norm 5.318612397027e-04 36 KSP Residual norm 4.258345241608e-04 37 KSP Residual norm 3.061218892187e-04 38 KSP Residual norm 2.242561068076e-04 39 KSP Residual norm 1.751430116154e-04 40 KSP Residual norm 1.409093607762e-04 41 KSP Residual norm 1.035510364928e-04 42 KSP Residual norm 8.161852163909e-05 43 KSP Residual norm 4.877043330106e-05 KSP Object: type: cg maximum iterations=10000 tolerances: relative=1e-09, absolute=1e-50, divergence=10000 left preconditioning PC Object: type: hypre HYPRE BoomerAMG preconditioning HYPRE BoomerAMG: Cycle type V HYPRE BoomerAMG: Maximum number of levels 25 HYPRE BoomerAMG: Maximum number of iterations PER hypre call 10 HYPRE BoomerAMG: Convergence tolerance PER hypre call 1e-09 HYPRE BoomerAMG: Threshold for strong coupling 0.25 HYPRE BoomerAMG: Interpolation truncation factor 0 HYPRE BoomerAMG: Interpolation: max elements per row 0 HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 HYPRE BoomerAMG: Maximum row sums 0.9 HYPRE BoomerAMG: Sweeps down 1 HYPRE BoomerAMG: Sweeps up 1 HYPRE BoomerAMG: Sweeps on coarse 1 HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi HYPRE BoomerAMG: Relax on coarse Gaussian-elimination HYPRE BoomerAMG: Relax weight (all) 1 HYPRE BoomerAMG: Outer relax weight (all) 1 HYPRE BoomerAMG: Using CF-relaxation HYPRE BoomerAMG: Measure type local HYPRE BoomerAMG: Coarsen type Falgout HYPRE BoomerAMG: Interpolation type classical linear system matrix = 
precond matrix: Matrix Object: type=mpiaij, rows=717486, cols=717486 total: nonzeros=14085842, allocated nonzeros=38744244 not using I-node (on process 0) routines Time in PETSc solver: 163.976773 seconds The number of iteration = 43 The solution residual error = 4.877043e-05 infinity norm A*x -b 2.831224e-01 2 norm 5.680606e-04 infinity norm 2.154956e-05 1 norm 1.409420e-01 From bsmith at mcs.anl.gov Tue May 12 15:08:31 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 12 May 2009 15:08:31 -0500 Subject: Change value of amgStrongThreshold In-Reply-To: References: Message-ID: -pc_hypre_boomeramg_strong_threshold If you run with -help you will see all the options Barry On May 12, 2009, at 3:02 PM, Nguyen, Hung V ERDC-ITL-MS wrote: > Hello All, > > Is there an option to change a value of amgStrongThreshold at > running time > when run hypre via petsc? > > Thanks, > > -hung > > aprun -n 16 ./test_matrix_read -ksp_type cg -ksp_rtol 1.0e-9 - > pc_type > hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_tol 1.0e-9 > -pc_hypre_boomeramg_max_iter 10 -ksp_monitor -ksp_view - > amgStrongThreshold > 0.7 > 0 KSP Residual norm 5.426061506200e+04 > 1 KSP Residual norm 1.365341976191e+02 > 2 KSP Residual norm 2.126401900010e+01 > 3 KSP Residual norm 1.100477412796e+01 > 4 KSP Residual norm 4.812547276514e+00 > 5 KSP Residual norm 2.358447263575e+00 > 6 KSP Residual norm 2.173357413396e+00 > 7 KSP Residual norm 1.503960597666e+00 > 8 KSP Residual norm 1.010547645731e+00 > 9 KSP Residual norm 8.426472409077e-01 > 10 KSP Residual norm 6.521097422133e-01 > 11 KSP Residual norm 4.759600459962e-01 > 12 KSP Residual norm 3.354705674276e-01 > 13 KSP Residual norm 2.928909822875e-01 > 14 KSP Residual norm 2.131100261605e-01 > 15 KSP Residual norm 1.434520361965e-01 > 16 KSP Residual norm 1.239990589407e-01 > 17 KSP Residual norm 9.133339702949e-02 > 18 KSP Residual norm 7.304670860369e-02 > 19 KSP Residual norm 5.128919929550e-02 > 20 KSP Residual norm 4.683930171651e-02 > 21 KSP Residual norm 3.312636103461e-02 > 22 KSP Residual norm 2.726223533933e-02 > 23 KSP Residual norm 1.630121490736e-02 > 24 KSP Residual norm 1.439580288349e-02 > 25 KSP Residual norm 9.386850326614e-03 > 26 KSP Residual norm 7.258934432207e-03 > 27 KSP Residual norm 5.391044121226e-03 > 28 KSP Residual norm 4.096261122185e-03 > 29 KSP Residual norm 3.821143506943e-03 > 30 KSP Residual norm 2.369891585552e-03 > 31 KSP Residual norm 1.726252735068e-03 > 32 KSP Residual norm 1.330257887371e-03 > 33 KSP Residual norm 9.565460328854e-04 > 34 KSP Residual norm 8.134595787555e-04 > 35 KSP Residual norm 5.318612397027e-04 > 36 KSP Residual norm 4.258345241608e-04 > 37 KSP Residual norm 3.061218892187e-04 > 38 KSP Residual norm 2.242561068076e-04 > 39 KSP Residual norm 1.751430116154e-04 > 40 KSP Residual norm 1.409093607762e-04 > 41 KSP Residual norm 1.035510364928e-04 > 42 KSP Residual norm 8.161852163909e-05 > 43 KSP Residual norm 4.877043330106e-05 > KSP Object: > type: cg > maximum iterations=10000 > tolerances: relative=1e-09, absolute=1e-50, divergence=10000 > left preconditioning > PC Object: > type: hypre > HYPRE BoomerAMG preconditioning > HYPRE BoomerAMG: Cycle type V > HYPRE BoomerAMG: Maximum number of levels 25 > HYPRE BoomerAMG: Maximum number of iterations PER hypre call 10 > HYPRE BoomerAMG: Convergence tolerance PER hypre call 1e-09 > HYPRE BoomerAMG: Threshold for strong coupling 0.25 > HYPRE BoomerAMG: Interpolation truncation factor 0 > HYPRE BoomerAMG: Interpolation: max elements per row 0 > HYPRE BoomerAMG: Number 
of levels of aggressive coarsening 0 > HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 > HYPRE BoomerAMG: Maximum row sums 0.9 > HYPRE BoomerAMG: Sweeps down 1 > HYPRE BoomerAMG: Sweeps up 1 > HYPRE BoomerAMG: Sweeps on coarse 1 > HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi > HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi > HYPRE BoomerAMG: Relax on coarse Gaussian-elimination > HYPRE BoomerAMG: Relax weight (all) 1 > HYPRE BoomerAMG: Outer relax weight (all) 1 > HYPRE BoomerAMG: Using CF-relaxation > HYPRE BoomerAMG: Measure type local > HYPRE BoomerAMG: Coarsen type Falgout > HYPRE BoomerAMG: Interpolation type classical > linear system matrix = precond matrix: > Matrix Object: > type=mpiaij, rows=717486, cols=717486 > total: nonzeros=14085842, allocated nonzeros=38744244 > not using I-node (on process 0) routines > Time in PETSc solver: 163.976773 seconds > The number of iteration = 43 > The solution residual error = 4.877043e-05 > infinity norm A*x -b 2.831224e-01 > 2 norm 5.680606e-04 > infinity norm 2.154956e-05 > 1 norm 1.409420e-01 > From jed at 59A2.org Tue May 12 15:08:59 2009 From: jed at 59A2.org (Jed Brown) Date: Tue, 12 May 2009 22:08:59 +0200 Subject: Change value of amgStrongThreshold In-Reply-To: References: Message-ID: <4A09D75B.3090808@59A2.org> Nguyen, Hung V ERDC-ITL-MS wrote: > Hello All, > > Is there an option to change a value of amgStrongThreshold at running time > when run hypre via petsc? $ cd petsc/src/ksp/ksp/examples/tutorials/ && make ex2 && ./ex2 -pc_type hypre -help |grep strong -pc_hypre_boomeramg_strong_threshold <0.25>: Threshold for being strongly connected (None) Jed -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 260 bytes Desc: OpenPGP digital signature URL: From Hung.V.Nguyen at usace.army.mil Tue May 12 15:13:17 2009 From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS) Date: Tue, 12 May 2009 15:13:17 -0500 Subject: Change value of amgStrongThreshold In-Reply-To: References: Message-ID: Barry, Thank you! -hung -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith Sent: Tuesday, May 12, 2009 3:09 PM To: PETSc users list Subject: Re: Change value of amgStrongThreshold -pc_hypre_boomeramg_strong_threshold If you run with -help you will see all the options Barry On May 12, 2009, at 3:02 PM, Nguyen, Hung V ERDC-ITL-MS wrote: > Hello All, > > Is there an option to change a value of amgStrongThreshold at running > time when run hypre via petsc? 
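
For reference: -amgStrongThreshold in the command line above is not an option PETSc recognizes, which is why -ksp_view still reports "HYPRE BoomerAMG: Threshold for strong coupling 0.25" (the hypre default). The option Barry names is passed the same way as the other -pc_hypre_boomeramg_* options already on that command line, for example

   -pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_strong_threshold 0.7

It can also be set from code before the options are applied. A minimal sketch, assuming a KSP object named ksp that is configured with KSPSetFromOptions() as usual (the variable name is illustrative, not from this thread):

   /* same effect as passing -pc_hypre_boomeramg_strong_threshold 0.7 on the command line */
   ierr = PetscOptionsSetValue("-pc_hypre_boomeramg_strong_threshold","0.7");CHKERRQ(ierr);
   ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);

A subsequent run with -ksp_view should then show the new value on the "Threshold for strong coupling" line.
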
> > Thanks, > > -hung > > aprun -n 16 ./test_matrix_read -ksp_type cg -ksp_rtol 1.0e-9 - > pc_type hypre -pc_hypre_type boomeramg -pc_hypre_boomeramg_tol 1.0e-9 > -pc_hypre_boomeramg_max_iter 10 -ksp_monitor -ksp_view - > amgStrongThreshold > 0.7 > 0 KSP Residual norm 5.426061506200e+04 > 1 KSP Residual norm 1.365341976191e+02 > 2 KSP Residual norm 2.126401900010e+01 > 3 KSP Residual norm 1.100477412796e+01 > 4 KSP Residual norm 4.812547276514e+00 > 5 KSP Residual norm 2.358447263575e+00 > 6 KSP Residual norm 2.173357413396e+00 > 7 KSP Residual norm 1.503960597666e+00 > 8 KSP Residual norm 1.010547645731e+00 > 9 KSP Residual norm 8.426472409077e-01 10 KSP Residual norm > 6.521097422133e-01 > 11 KSP Residual norm 4.759600459962e-01 > 12 KSP Residual norm 3.354705674276e-01 > 13 KSP Residual norm 2.928909822875e-01 > 14 KSP Residual norm 2.131100261605e-01 > 15 KSP Residual norm 1.434520361965e-01 > 16 KSP Residual norm 1.239990589407e-01 > 17 KSP Residual norm 9.133339702949e-02 > 18 KSP Residual norm 7.304670860369e-02 > 19 KSP Residual norm 5.128919929550e-02 20 KSP Residual norm > 4.683930171651e-02 > 21 KSP Residual norm 3.312636103461e-02 > 22 KSP Residual norm 2.726223533933e-02 > 23 KSP Residual norm 1.630121490736e-02 > 24 KSP Residual norm 1.439580288349e-02 > 25 KSP Residual norm 9.386850326614e-03 > 26 KSP Residual norm 7.258934432207e-03 > 27 KSP Residual norm 5.391044121226e-03 > 28 KSP Residual norm 4.096261122185e-03 > 29 KSP Residual norm 3.821143506943e-03 30 KSP Residual norm > 2.369891585552e-03 > 31 KSP Residual norm 1.726252735068e-03 > 32 KSP Residual norm 1.330257887371e-03 > 33 KSP Residual norm 9.565460328854e-04 > 34 KSP Residual norm 8.134595787555e-04 > 35 KSP Residual norm 5.318612397027e-04 > 36 KSP Residual norm 4.258345241608e-04 > 37 KSP Residual norm 3.061218892187e-04 > 38 KSP Residual norm 2.242561068076e-04 > 39 KSP Residual norm 1.751430116154e-04 40 KSP Residual norm > 1.409093607762e-04 > 41 KSP Residual norm 1.035510364928e-04 > 42 KSP Residual norm 8.161852163909e-05 > 43 KSP Residual norm 4.877043330106e-05 KSP Object: > type: cg > maximum iterations=10000 > tolerances: relative=1e-09, absolute=1e-50, divergence=10000 left > preconditioning PC Object: > type: hypre > HYPRE BoomerAMG preconditioning > HYPRE BoomerAMG: Cycle type V > HYPRE BoomerAMG: Maximum number of levels 25 > HYPRE BoomerAMG: Maximum number of iterations PER hypre call 10 > HYPRE BoomerAMG: Convergence tolerance PER hypre call 1e-09 > HYPRE BoomerAMG: Threshold for strong coupling 0.25 > HYPRE BoomerAMG: Interpolation truncation factor 0 > HYPRE BoomerAMG: Interpolation: max elements per row 0 > HYPRE BoomerAMG: Number of levels of aggressive coarsening 0 > HYPRE BoomerAMG: Number of paths for aggressive coarsening 1 > HYPRE BoomerAMG: Maximum row sums 0.9 > HYPRE BoomerAMG: Sweeps down 1 > HYPRE BoomerAMG: Sweeps up 1 > HYPRE BoomerAMG: Sweeps on coarse 1 > HYPRE BoomerAMG: Relax down symmetric-SOR/Jacobi > HYPRE BoomerAMG: Relax up symmetric-SOR/Jacobi > HYPRE BoomerAMG: Relax on coarse Gaussian-elimination > HYPRE BoomerAMG: Relax weight (all) 1 > HYPRE BoomerAMG: Outer relax weight (all) 1 > HYPRE BoomerAMG: Using CF-relaxation > HYPRE BoomerAMG: Measure type local > HYPRE BoomerAMG: Coarsen type Falgout > HYPRE BoomerAMG: Interpolation type classical linear system > matrix = precond matrix: > Matrix Object: > type=mpiaij, rows=717486, cols=717486 > total: nonzeros=14085842, allocated nonzeros=38744244 > not using I-node (on process 0) routines Time in PETSc solver: > 
163.976773 seconds > The number of iteration = 43 > The solution residual error = 4.877043e-05 infinity norm A*x -b > 2.831224e-01 > 2 norm 5.680606e-04 > infinity norm 2.154956e-05 > 1 norm 1.409420e-01 > From baraip at ornl.gov Thu May 14 15:58:39 2009 From: baraip at ornl.gov (Barai, Pallab) Date: Thu, 14 May 2009 16:58:39 -0400 Subject: Conjugate Gradient technique Message-ID: <537C6C0940C6C143AA46A88946B854170DA4679F@ORNLEXCHANGE.ornl.gov> Hello, I am using PETSc to solve a set of linear equations "Sx=b". Here "b" is known and "x" is the trial solution. The complete (assembled) form of S is not known. That is why I am not able to use something like "KSPSolve". But given a trial solution "x", I can calculate "S*x" using a "MatVec" routine. Is it possible to use the Conjugate Gradient (CG) technique to find a solution in this case? In place of "S", I can give "S*x" as the input. It will be great if someone can show me some direction. If this thing has already been discussed before, the link to that thread will be sufficient. Thanking you. Pallab Barai From knepley at gmail.com Thu May 14 16:03:13 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 14 May 2009 16:03:13 -0500 Subject: Conjugate Gradient technique In-Reply-To: <537C6C0940C6C143AA46A88946B854170DA4679F@ORNLEXCHANGE.ornl.gov> References: <537C6C0940C6C143AA46A88946B854170DA4679F@ORNLEXCHANGE.ornl.gov> Message-ID: On Thu, May 14, 2009 at 3:58 PM, Barai, Pallab wrote: > Hello, > > I am using PETSc to solve a set of linear equations "Sx=b". > > Here "b" is known and "x" is the trial solution. > > The complete (assembled) form of S is not known. That is why I am not able > to use something like "KSPSolve". > > But given a trial solution "x", I can calculate "S*x" using a "MatVec" > routine. > > Is it possible to use the Conjugate Gradient (CG) technique to find a > solution in this case? In place of "S", I can give "S*x" as the input. You can use CG if S is symmetric. If not, try GMRES. > > It will be great if someone can show me some direction. If this thing has > already been discussed before, the link to that thread will be sufficient. You can wrap your function in a MatShell: http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatCreateShell.html Matt > > Thanking you. > > Pallab Barai > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu May 14 16:03:41 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 14 May 2009 16:03:41 -0500 Subject: Conjugate Gradient technique In-Reply-To: <537C6C0940C6C143AA46A88946B854170DA4679F@ORNLEXCHANGE.ornl.gov> References: <537C6C0940C6C143AA46A88946B854170DA4679F@ORNLEXCHANGE.ornl.gov> Message-ID: <2EBAD9E1-8771-47F4-AC63-132E502C43EE@mcs.anl.gov> Please take a look at the manual pages for MATSHELL and MatShellCreate(). You will need to create a matrix with MatCreateShell() and then supply your matrix vector operation with MatShellSetOperation() A fortran example can be found at http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/ksp/ksp/examples/tutorials/ex14f.F.html Barry On May 14, 2009, at 3:58 PM, Barai, Pallab wrote: > Hello, > > I am using PETSc to solve a set of linear equations "Sx=b". > > Here "b" is known and "x" is the trial solution. 
> > The complete (assembled) form of S is not known. That is why I am > not able to use something like "KSPSolve". > > But given a trial solution "x", I can calculate "S*x" using a > "MatVec" routine. > > Is it possible to use the Conjugate Gradient (CG) technique to find > a solution in this case? In place of "S", I can give "S*x" as the > input. > > It will be great if someone can show me some direction. If this > thing has already been discussed before, the link to that thread > will be sufficient. > > Thanking you. > > Pallab Barai From baraip at ornl.gov Thu May 14 16:26:19 2009 From: baraip at ornl.gov (Barai, Pallab) Date: Thu, 14 May 2009 17:26:19 -0400 Subject: Conjugate Gradient technique References: <537C6C0940C6C143AA46A88946B854170DA4679F@ORNLEXCHANGE.ornl.gov> Message-ID: <537C6C0940C6C143AA46A88946B854170DA467A0@ORNLEXCHANGE.ornl.gov> Hello Matt, Thank you very much for your reply. Here "S" is a symmetric matrix, so I have to use the CG. But I can not figure out how to use the Conjugate Gradient technique to find the solution. In the "MatShell" I can wrap my "MatVec" routine which calculates the "S*x" vector. But how will I be able to do the comparison between the known "b" vector and the calculated "S*x" using a CG (and come up with a new trial solution "x" for the next iteration). What is the function that I should call for this purpose? Thanking you. Pallab -----Original Message----- From: petsc-users-bounces at mcs.anl.gov on behalf of Matthew Knepley Sent: Thu 5/14/2009 5:03 PM To: PETSc users list Subject: Re: Conjugate Gradient technique On Thu, May 14, 2009 at 3:58 PM, Barai, Pallab wrote: > Hello, > > I am using PETSc to solve a set of linear equations "Sx=b". > > Here "b" is known and "x" is the trial solution. > > The complete (assembled) form of S is not known. That is why I am not able > to use something like "KSPSolve". > > But given a trial solution "x", I can calculate "S*x" using a "MatVec" > routine. > > Is it possible to use the Conjugate Gradient (CG) technique to find a > solution in this case? In place of "S", I can give "S*x" as the input. You can use CG if S is symmetric. If not, try GMRES. > > It will be great if someone can show me some direction. If this thing has > already been discussed before, the link to that thread will be sufficient. You can wrap your function in a MatShell: http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatCreateShell.html Matt > > Thanking you. > > Pallab Barai > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From bsmith at mcs.anl.gov Thu May 14 16:29:42 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 14 May 2009 16:29:42 -0500 Subject: Conjugate Gradient technique In-Reply-To: <537C6C0940C6C143AA46A88946B854170DA467A0@ORNLEXCHANGE.ornl.gov> References: <537C6C0940C6C143AA46A88946B854170DA4679F@ORNLEXCHANGE.ornl.gov> <537C6C0940C6C143AA46A88946B854170DA467A0@ORNLEXCHANGE.ornl.gov> Message-ID: <42A8CF5C-2D87-4BC9-A82C-6191E3EABC9D@mcs.anl.gov> You just pass the shell matrix into KSPSetOperators() and KSPSolve() will do the rest. Note: you will need to use -pc_type none as the preconditioner since you are not providing a matrix from which to build the preconditioner. Barry On May 14, 2009, at 4:26 PM, Barai, Pallab wrote: > Hello Matt, > > Thank you very much for your reply. > > Here "S" is a symmetric matrix, so I have to use the CG. 
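
A minimal C sketch of the shell-matrix setup described above (the Fortran analogue is the ex14f example Barry links to). The routine name MyMatMult, the context pointer appctx, and the size variables are placeholders, not names from this thread:

   /* applies S to x and stores the result in y; called by the KSP at each iteration */
   PetscErrorCode MyMatMult(Mat S, Vec x, Vec y)
   {
     void           *ctx;
     PetscErrorCode ierr;
     ierr = MatShellGetContext(S,&ctx);CHKERRQ(ierr);
     /* ... apply the application's operator to x using the data in ctx, writing into y ... */
     return 0;
   }

   /* in the code that sets up the solve: */
   Mat            S;
   PetscErrorCode ierr;
   /* m,n = local sizes, M,N = global sizes, appctx = pointer to application data */
   ierr = MatCreateShell(PETSC_COMM_WORLD,m,n,M,N,(void*)appctx,&S);CHKERRQ(ierr);
   ierr = MatShellSetOperation(S,MATOP_MULT,(void(*)(void))MyMatMult);CHKERRQ(ierr);
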
> > But I can not figure out how to use the Conjugate Gradient technique > to find the solution. In the "MatShell" I can wrap my "MatVec" > routine which calculates the "S*x" vector. > > But how will I be able to do the comparison between the known "b" > vector and the calculated "S*x" using a CG (and come up with a new > trial solution "x" for the next iteration). What is the function > that I should call for this purpose? > > Thanking you. > > Pallab > > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov on behalf of Matthew Knepley > Sent: Thu 5/14/2009 5:03 PM > To: PETSc users list > Subject: Re: Conjugate Gradient technique > > On Thu, May 14, 2009 at 3:58 PM, Barai, Pallab > wrote: > >> Hello, >> >> I am using PETSc to solve a set of linear equations "Sx=b". >> >> Here "b" is known and "x" is the trial solution. >> >> The complete (assembled) form of S is not known. That is why I am >> not able >> to use something like "KSPSolve". >> >> But given a trial solution "x", I can calculate "S*x" using a >> "MatVec" >> routine. >> >> Is it possible to use the Conjugate Gradient (CG) technique to find a >> solution in this case? In place of "S", I can give "S*x" as the >> input. > > > You can use CG if S is symmetric. If not, try GMRES. > > >> >> It will be great if someone can show me some direction. If this >> thing has >> already been discussed before, the link to that thread will be >> sufficient. > > > You can wrap your function in a MatShell: > > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatCreateShell.html > > Matt > > >> >> Thanking you. >> >> Pallab Barai >> > -- > What most experimenters take for granted before they begin their > experiments > is infinitely more interesting than any results to which their > experiments > lead. > -- Norbert Wiener > From baraip at ornl.gov Thu May 14 16:36:43 2009 From: baraip at ornl.gov (Barai, Pallab) Date: Thu, 14 May 2009 17:36:43 -0400 Subject: Conjugate Gradient technique References: <537C6C0940C6C143AA46A88946B854170DA4679F@ORNLEXCHANGE.ornl.gov> <537C6C0940C6C143AA46A88946B854170DA467A0@ORNLEXCHANGE.ornl.gov> <42A8CF5C-2D87-4BC9-A82C-6191E3EABC9D@mcs.anl.gov> Message-ID: <537C6C0940C6C143AA46A88946B854170DA467A1@ORNLEXCHANGE.ornl.gov> Thank you very much Barry for the solution. I will give it a try. Pallab -----Original Message----- From: petsc-users-bounces at mcs.anl.gov on behalf of Barry Smith Sent: Thu 5/14/2009 5:29 PM To: PETSc users list Subject: Re: Conjugate Gradient technique You just pass the shell matrix into KSPSetOperators() and KSPSolve() will do the rest. Note: you will need to use -pc_type none as the preconditioner since you are not providing a matrix from which to build the preconditioner. Barry On May 14, 2009, at 4:26 PM, Barai, Pallab wrote: > Hello Matt, > > Thank you very much for your reply. > > Here "S" is a symmetric matrix, so I have to use the CG. > > But I can not figure out how to use the Conjugate Gradient technique > to find the solution. In the "MatShell" I can wrap my "MatVec" > routine which calculates the "S*x" vector. > > But how will I be able to do the comparison between the known "b" > vector and the calculated "S*x" using a CG (and come up with a new > trial solution "x" for the next iteration). What is the function > that I should call for this purpose? > > Thanking you. 
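
Continuing the sketch above: the comparison of S*x against b is done inside KSPSolve() itself, so nothing beyond the matrix-vector product needs to be supplied by the user. An outline, using the shell matrix S from the previous sketch and user vectors b and x (the MatStructure argument reflects the KSPSetOperators() calling sequence of the PETSc releases discussed in this thread):

   KSP ksp;
   PC  pc;
   ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
   ierr = KSPSetOperators(ksp,S,S,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
   ierr = KSPSetType(ksp,KSPCG);CHKERRQ(ierr);        /* S is symmetric; otherwise use KSPGMRES */
   ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
   ierr = PCSetType(pc,PCNONE);CHKERRQ(ierr);         /* same effect as -pc_type none */
   ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
   ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);            /* CG forms b - S*x and updates x internally */
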
> > Pallab > > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov on behalf of Matthew Knepley > Sent: Thu 5/14/2009 5:03 PM > To: PETSc users list > Subject: Re: Conjugate Gradient technique > > On Thu, May 14, 2009 at 3:58 PM, Barai, Pallab > wrote: > >> Hello, >> >> I am using PETSc to solve a set of linear equations "Sx=b". >> >> Here "b" is known and "x" is the trial solution. >> >> The complete (assembled) form of S is not known. That is why I am >> not able >> to use something like "KSPSolve". >> >> But given a trial solution "x", I can calculate "S*x" using a >> "MatVec" >> routine. >> >> Is it possible to use the Conjugate Gradient (CG) technique to find a >> solution in this case? In place of "S", I can give "S*x" as the >> input. > > > You can use CG if S is symmetric. If not, try GMRES. > > >> >> It will be great if someone can show me some direction. If this >> thing has >> already been discussed before, the link to that thread will be >> sufficient. > > > You can wrap your function in a MatShell: > > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/Mat/MatCreateShell.html > > Matt > > >> >> Thanking you. >> >> Pallab Barai >> > -- > What most experimenters take for granted before they begin their > experiments > is infinitely more interesting than any results to which their > experiments > lead. > -- Norbert Wiener > From amari at cpht.polytechnique.fr Fri May 15 03:41:01 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 10:41:01 +0200 Subject: petsc on mac os x Message-ID: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> Hello , I am a new petsc user. I am trying to built petsc on mac os 10 (10.4 and 10.5), but always have errors. Has anyone already built petsc for MAC OS X (any version of petsc or mac os x) . If yes I would be grateful to get the configure options and the minimum requirements. If it is not possible to built it I would also appreciate to know it. Thank you very much Tahar -------------------------------------------- T. Amari Centre de Physique Theorique Ecole Polytechnique 91128 Palaiseau Cedex France tel : 33 1 69 33 42 52 fax: 33 1 69 33 30 08 email: URL : http://www.cpht.polytechnique.fr/cpht/amari -------------- next part -------------- An HTML attachment was scrubbed... URL: From Chun.SUN at 3ds.com Fri May 15 08:58:02 2009 From: Chun.SUN at 3ds.com (SUN Chun) Date: Fri, 15 May 2009 09:58:02 -0400 Subject: MatLoad a large matrix into SEQAIJ In-Reply-To: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> Message-ID: <2545DC7A42DF804AAAB2ADA5043D57DA28E3C9@CORP-CLT-EXB01.ds> Hello PETSc developers, I was trying to MatLoad a large matrix into a serial machine. The matrix is about 6G on disk (binary). I'm seeing signal 11 on MatLoad phase. However, I was able to load other smaller matrices (less than 1G) in serial (although I haven't binary-searched to find out what is the critical size of the crash). Also, I was able to load the 6G matrix when I ran with 4 cores, meaning SEQAIJ type has become MPIAIJ type. BTW I built and ran PETSc on both 64-bit lnx86_64 machines. I'm not sure if you have seen this. I'm not sure if this is a bug because I'm still using 2.3.3 instead of 3.0, and I also suspect I might have missed something on build or run. Have you seen this? Thanks, Chun -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sdettrick at gmail.com Fri May 15 09:39:31 2009 From: sdettrick at gmail.com (Sean Dettrick) Date: Fri, 15 May 2009 07:39:31 -0700 Subject: petsc on mac os x In-Reply-To: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> Message-ID: <51322510-77E8-4DFD-B8C1-5AB382FA495C@gmail.com> I configured petsc-2.3.3-p10 on OSX 10.5 using an openmpi built with gcc 4.0.1 gcc compiler. It works. The petsc config flags were: --with-matlab=0 --with-shared=0 --with-dynamic=0 --with-fortran=0 I believe that --with-dynamic=0 is important on OSX. Also if you happen to have intel MKL: --with-blas-lapack-dir=/Library/Frameworks/Intel_MKL.framework/ Libraries/32 I hope that helps. Sean On May 15, 2009, at 1:41 AM, Tahar Amari wrote: > Hello , > > I am a new petsc user. I am trying to built petsc on mac os 10 (10.4 > and 10.5), but > always have errors. > Has anyone already built petsc for MAC OS X (any version of petsc or > mac os x) . > If yes I would be grateful to get the configure options and the > minimum requirements. > If it is not possible to built it I would also appreciate to know it. > > Thank you very much > > Tahar > > > -------------------------------------------- > T. Amari > Centre de Physique Theorique > Ecole Polytechnique > 91128 Palaiseau Cedex France > tel : 33 1 69 33 42 52 > fax: 33 1 69 33 30 08 > email: > URL : http://www.cpht.polytechnique.fr/cpht/amari > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amari at cpht.polytechnique.fr Fri May 15 09:45:21 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 16:45:21 +0200 Subject: petsc on mac os x In-Reply-To: <51322510-77E8-4DFD-B8C1-5AB382FA495C@gmail.com> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <51322510-77E8-4DFD-B8C1-5AB382FA495C@gmail.com> Message-ID: <64D56F8C-67E8-4FCD-A87E-C90351AB2B2A@cpht.polytechnique.fr> Hello Sean Thank you very much for this help. I cannot see in the configure anything about MPI. Do you mean that you do need to link with MPI , and this will be only at the link of your application ? Many thanks again Tahar Le 15 mai 09 ? 16:39, Sean Dettrick a ?crit : > > I configured petsc-2.3.3-p10 on OSX 10.5 using an openmpi built with > gcc 4.0.1 gcc compiler. It works. The petsc config flags were: > --with-matlab=0 --with-shared=0 --with-dynamic=0 --with-fortran=0 > I believe that --with-dynamic=0 is important on OSX. > > Also if you happen to have intel MKL: > --with-blas-lapack-dir=/Library/Frameworks/Intel_MKL.framework/ > Libraries/32 > > I hope that helps. > > Sean > > > On May 15, 2009, at 1:41 AM, Tahar Amari wrote: > >> Hello , >> >> I am a new petsc user. I am trying to built petsc on mac os 10 >> (10.4 and 10.5), but >> always have errors. >> Has anyone already built petsc for MAC OS X (any version of petsc >> or mac os x) . >> If yes I would be grateful to get the configure options and the >> minimum requirements. >> If it is not possible to built it I would also appreciate to know >> it. >> >> Thank you very much >> >> Tahar >> >> >> -------------------------------------------- >> T. Amari >> Centre de Physique Theorique >> Ecole Polytechnique >> 91128 Palaiseau Cedex France >> tel : 33 1 69 33 42 52 >> fax: 33 1 69 33 30 08 >> email: >> URL : http://www.cpht.polytechnique.fr/cpht/amari >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hakan.jakobsson at math.umu.se Fri May 15 11:07:16 2009 From: hakan.jakobsson at math.umu.se (=?ISO-8859-1?Q?H=E5kan_Jakobsson?=) Date: Fri, 15 May 2009 18:07:16 +0200 Subject: petsc on mac os x In-Reply-To: <64D56F8C-67E8-4FCD-A87E-C90351AB2B2A@cpht.polytechnique.fr> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <51322510-77E8-4DFD-B8C1-5AB382FA495C@gmail.com> <64D56F8C-67E8-4FCD-A87E-C90351AB2B2A@cpht.polytechnique.fr> Message-ID: <687bb3e80905150907j7e78f819j6fed966023fe9597@mail.gmail.com> Hi, I can confirm petsc 3.0.0 on 10.5 with openmpi 1.3.2. Configuration options were --with-fc=0 with-mpi-dir=/usr/local/openmpi H?kan On Fri, May 15, 2009 at 4:45 PM, Tahar Amari wrote: > Hello Sean > > Thank you very much for this help. > I cannot see in the configure anything about MPI. Do you mean > that you do need to link with MPI , and this will be only at the link of > your application ? > > Many thanks again > > Tahar > > > Le 15 mai 09 ? 16:39, Sean Dettrick a ?crit : > > > I configured petsc-2.3.3-p10 on OSX 10.5 using an openmpi built with gcc > 4.0.1 gcc compiler. It works. The petsc config flags were:--with-matlab=0 > --with-shared=0 --with-dynamic=0 --with-fortran=0I believe that > --with-dynamic=0 is important on OSX. > > Also if you happen to have intel MKL: > > --with-blas-lapack-dir=/Library/Frameworks/Intel_MKL.framework/Libraries/32 > > I hope that helps. > > Sean > > > On May 15, 2009, at 1:41 AM, Tahar Amari wrote: > > Hello , > I am a new petsc user. I am trying to built petsc on mac os 10 (10.4 and > 10.5), but > always have errors. > Has anyone already built petsc for MAC OS X (any version of petsc or mac os > x) . > If yes I would be grateful to get the configure options and the minimum > requirements. > If it is not possible to built it I would also appreciate to know it. > > Thank you very much > > Tahar > > > -------------------------------------------- > T. Amari > Centre de Physique Theorique > Ecole Polytechnique > 91128 Palaiseau Cedex France > tel : 33 1 69 33 42 52 > fax: 33 1 69 33 30 08 > email: > > URL : http://www.cpht.polytechnique.fr/cpht/amari > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amari at cpht.polytechnique.fr Fri May 15 11:11:13 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 18:11:13 +0200 Subject: petsc on mac os x In-Reply-To: <687bb3e80905150907j7e78f819j6fed966023fe9597@mail.gmail.com> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <51322510-77E8-4DFD-B8C1-5AB382FA495C@gmail.com> <64D56F8C-67E8-4FCD-A87E-C90351AB2B2A@cpht.polytechnique.fr> <687bb3e80905150907j7e78f819j6fed966023fe9597@mail.gmail.com> Message-ID: <900650F2-8105-4C15-82E7-B6CBDB888799@cpht.polytechnique.fr> Hello, Many thanks. I noticed that none of you to built it with fortran, but I guess putting --with-fc=1 should make it ? Tahar Le 15 mai 09 ? 18:07, H?kan Jakobsson a ?crit : > Hi, I can confirm petsc 3.0.0 on 10.5 with openmpi 1.3.2. > Configuration options were > > --with-fc=0 with-mpi-dir=/usr/local/openmpi > > H?kan > > On Fri, May 15, 2009 at 4:45 PM, Tahar Amari > wrote: > Hello Sean > > Thank you very much for this help. > I cannot see in the configure anything about MPI. Do you mean > that you do need to link with MPI , and this will be only at the > link of your application ? > > Many thanks again > > Tahar > > > Le 15 mai 09 ? 
16:39, Sean Dettrick a ?crit : > >> >> I configured petsc-2.3.3-p10 on OSX 10.5 using an openmpi built >> with gcc 4.0.1 gcc compiler. It works. The petsc config flags were: >> --with-matlab=0 --with-shared=0 --with-dynamic=0 --with-fortran=0 >> I believe that --with-dynamic=0 is important on OSX. >> >> Also if you happen to have intel MKL: >> --with-blas-lapack-dir=/Library/Frameworks/Intel_MKL.framework/ >> Libraries/32 >> >> I hope that helps. >> >> Sean >> >> >> On May 15, 2009, at 1:41 AM, Tahar Amari wrote: >> >>> Hello , >>> >>> I am a new petsc user. I am trying to built petsc on mac os 10 >>> (10.4 and 10.5), but >>> always have errors. >>> Has anyone already built petsc for MAC OS X (any version of petsc >>> or mac os x) . >>> If yes I would be grateful to get the configure options and the >>> minimum requirements. >>> If it is not possible to built it I would also appreciate to know >>> it. >>> >>> Thank you very much >>> >>> Tahar >>> >>> >>> -------------------------------------------- >>> T. Amari >>> Centre de Physique Theorique >>> Ecole Polytechnique >>> 91128 Palaiseau Cedex France >>> tel : 33 1 69 33 42 52 >>> fax: 33 1 69 33 30 08 >>> email: >>> URL : http://www.cpht.polytechnique.fr/cpht/amari >>> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri May 15 11:17:49 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 15 May 2009 11:17:49 -0500 (CDT) Subject: petsc on mac os x In-Reply-To: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> Message-ID: It appears you want to build PETSc sequentially with fortran - so the following should work: ./config/configure.py --with-cc=gcc --with-fc=gfortran --with-mpi=0 [if it doesn't work - send us configure.log at petsc-maint at mcs.anl.gov] Satish On Fri, 15 May 2009, Tahar Amari wrote: > Hello , > > I am a new petsc user. I am trying to built petsc on mac os 10 (10.4 and > 10.5), but > always have errors. > Has anyone already built petsc for MAC OS X (any version of petsc or mac os x) > . > If yes I would be grateful to get the configure options and the minimum > requirements. > If it is not possible to built it I would also appreciate to know it. > > Thank you very much > > Tahar > > > -------------------------------------------- > T. Amari > Centre de Physique Theorique > Ecole Polytechnique > 91128 Palaiseau Cedex France > tel : 33 1 69 33 42 52 > fax: 33 1 69 33 30 08 > email: > URL : http://www.cpht.polytechnique.fr/cpht/amari > From amari at cpht.polytechnique.fr Fri May 15 11:19:38 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 18:19:38 +0200 Subject: petsc on mac os x In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> Message-ID: <850A6737-B8C4-4993-9676-541F03137C81@cpht.polytechnique.fr> Hello, Excuse me if was confusing. I want mpi and fortran on mac os x. Best regards, Tahar Le 15 mai 09 ? 18:17, Satish Balay a ?crit : > It appears you want to build PETSc sequentially with fortran - so the > following should work: > > ./config/configure.py --with-cc=gcc --with-fc=gfortran --with-mpi=0 > > [if it doesn't work - send us configure.log at petsc- > maint at mcs.anl.gov] > > Satish > > On Fri, 15 May 2009, Tahar Amari wrote: > >> Hello , >> >> I am a new petsc user. I am trying to built petsc on mac os 10 >> (10.4 and >> 10.5), but >> always have errors. 
>> Has anyone already built petsc for MAC OS X (any version of petsc >> or mac os x) >> . >> If yes I would be grateful to get the configure options and the >> minimum >> requirements. >> If it is not possible to built it I would also appreciate to know >> it. >> >> Thank you very much >> >> Tahar >> >> >> -------------------------------------------- >> T. Amari >> Centre de Physique Theorique >> Ecole Polytechnique >> 91128 Palaiseau Cedex France >> tel : 33 1 69 33 42 52 >> fax: 33 1 69 33 30 08 >> email: >> URL : http://www.cpht.polytechnique.fr/cpht/amari >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri May 15 11:23:38 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 15 May 2009 11:23:38 -0500 (CDT) Subject: petsc on mac os x In-Reply-To: <850A6737-B8C4-4993-9676-541F03137C81@cpht.polytechnique.fr> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <850A6737-B8C4-4993-9676-541F03137C81@cpht.polytechnique.fr> Message-ID: Then use: ./config/configure.py --with-cc=gcc --with-fc=gfortran --download-mpich=1 On OSX 10.4 - you might have to remove the incomplete OpenMPI Apple shipps. [as it isn't built with Fortran - and might conflict with any other MPI - you might build with fortran] Satish On Fri, 15 May 2009, Tahar Amari wrote: > > Hello, > > Excuse me if was confusing. I want mpi and fortran on mac os x. > > Best regards, > > Tahar > > > > > Le 15 mai 09 ? 18:17, Satish Balay a ?crit : > > > It appears you want to build PETSc sequentially with fortran - so the > > following should work: > > > > ./config/configure.py --with-cc=gcc --with-fc=gfortran --with-mpi=0 > > > > [if it doesn't work - send us configure.log at petsc-maint at mcs.anl.gov] > > > > Satish > > > > On Fri, 15 May 2009, Tahar Amari wrote: > > > > > Hello , > > > > > > I am a new petsc user. I am trying to built petsc on mac os 10 (10.4 and > > > 10.5), but > > > always have errors. > > > Has anyone already built petsc for MAC OS X (any version of petsc or mac > > > os x) > > > . > > > If yes I would be grateful to get the configure options and the minimum > > > requirements. > > > If it is not possible to built it I would also appreciate to know it. > > > > > > Thank you very much > > > > > > Tahar > > > > > > > > > -------------------------------------------- > > > T. Amari > > > Centre de Physique Theorique > > > Ecole Polytechnique > > > 91128 Palaiseau Cedex France > > > tel : 33 1 69 33 42 52 > > > fax: 33 1 69 33 30 08 > > > email: > > > URL : http://www.cpht.polytechnique.fr/cpht/amari > > > > From hzhang at mcs.anl.gov Fri May 15 11:30:46 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Fri, 15 May 2009 11:30:46 -0500 (CDT) Subject: petsc on mac os x In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> Message-ID: Here is my config options on mac os x: --with-cc=gcc --download-mpich --with-shared=0 --download-f-blas-lapack --with-clanguage=cxx --with-cxx=g++ --with-fc=g95 It works well for me. Hong On Fri, 15 May 2009, Satish Balay wrote: > It appears you want to build PETSc sequentially with fortran - so the > following should work: > > ./config/configure.py --with-cc=gcc --with-fc=gfortran --with-mpi=0 > > [if it doesn't work - send us configure.log at petsc-maint at mcs.anl.gov] > > Satish > > On Fri, 15 May 2009, Tahar Amari wrote: > >> Hello , >> >> I am a new petsc user. I am trying to built petsc on mac os 10 (10.4 and >> 10.5), but >> always have errors. 
>> Has anyone already built petsc for MAC OS X (any version of petsc or mac os x) >> . >> If yes I would be grateful to get the configure options and the minimum >> requirements. >> If it is not possible to built it I would also appreciate to know it. >> >> Thank you very much >> >> Tahar >> >> >> -------------------------------------------- >> T. Amari >> Centre de Physique Theorique >> Ecole Polytechnique >> 91128 Palaiseau Cedex France >> tel : 33 1 69 33 42 52 >> fax: 33 1 69 33 30 08 >> email: >> URL : http://www.cpht.polytechnique.fr/cpht/amari >> > > From amari at cpht.polytechnique.fr Fri May 15 11:42:13 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 18:42:13 +0200 Subject: petsc on mac os x In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> Message-ID: Thanks to all of you. I will try all those. By the way Hong, which version of Mac OS and of petsc was it ? Tahar Le 15 mai 09 ? 18:30, Hong Zhang a ?crit : > > Here is my config options on mac os x: > > --with-cc=gcc --download-mpich --with-shared=0 --download-f-blas- > lapack --with-clanguage=cxx --with-cxx=g++ --with-fc=g95 > > It works well for me. > > Hong > > On Fri, 15 May 2009, Satish Balay wrote: > >> It appears you want to build PETSc sequentially with fortran - so the >> following should work: >> >> ./config/configure.py --with-cc=gcc --with-fc=gfortran --with-mpi=0 >> >> [if it doesn't work - send us configure.log at petsc-maint at mcs.anl.gov >> ] >> >> Satish >> >> On Fri, 15 May 2009, Tahar Amari wrote: >> >>> Hello , >>> >>> I am a new petsc user. I am trying to built petsc on mac os 10 >>> (10.4 and >>> 10.5), but >>> always have errors. >>> Has anyone already built petsc for MAC OS X (any version of petsc >>> or mac os x) >>> . >>> If yes I would be grateful to get the configure options and the >>> minimum >>> requirements. >>> If it is not possible to built it I would also appreciate to know >>> it. >>> >>> Thank you very much >>> >>> Tahar >>> >>> >>> -------------------------------------------- >>> T. Amari >>> Centre de Physique Theorique >>> Ecole Polytechnique >>> 91128 Palaiseau Cedex France >>> tel : 33 1 69 33 42 52 >>> fax: 33 1 69 33 30 08 >>> email: >>> URL : http://www.cpht.polytechnique.fr/cpht/amari >>> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Fri May 15 11:49:28 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Fri, 15 May 2009 11:49:28 -0500 (CDT) Subject: petsc on mac os x In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> Message-ID: On Fri, 15 May 2009, Tahar Amari wrote: > > Thanks to all of you. > I will try all those. > By the way Hong, which version of Mac OS and of petsc was it ? Leopard 10.5 petsc-3.0.0 Hong > > Tahar > > > > > Le 15 mai 09 ? 18:30, Hong Zhang a ?crit : > >> >> Here is my config options on mac os x: >> >> --with-cc=gcc --download-mpich --with-shared=0 --download-f-blas-lapack >> --with-clanguage=cxx --with-cxx=g++ --with-fc=g95 >> >> It works well for me. >> >> Hong >> >> On Fri, 15 May 2009, Satish Balay wrote: >> >>> It appears you want to build PETSc sequentially with fortran - so the >>> following should work: >>> >>> ./config/configure.py --with-cc=gcc --with-fc=gfortran --with-mpi=0 >>> >>> [if it doesn't work - send us configure.log at petsc-maint at mcs.anl.gov] >>> >>> Satish >>> >>> On Fri, 15 May 2009, Tahar Amari wrote: >>> >>>> Hello , >>>> >>>> I am a new petsc user. 
I am trying to built petsc on mac os 10 (10.4 and >>>> 10.5), but >>>> always have errors. >>>> Has anyone already built petsc for MAC OS X (any version of petsc or mac >>>> os x) >>>> . >>>> If yes I would be grateful to get the configure options and the minimum >>>> requirements. >>>> If it is not possible to built it I would also appreciate to know it. >>>> >>>> Thank you very much >>>> >>>> Tahar >>>> >>>> >>>> -------------------------------------------- >>>> T. Amari >>>> Centre de Physique Theorique >>>> Ecole Polytechnique >>>> 91128 Palaiseau Cedex France >>>> tel : 33 1 69 33 42 52 >>>> fax: 33 1 69 33 30 08 >>>> email: >>>> URL : http://www.cpht.polytechnique.fr/cpht/amari >>>> >>> >>> > From amari at cpht.polytechnique.fr Fri May 15 11:52:59 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 18:52:59 +0200 Subject: petsc on mac os x In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> Message-ID: <3807341E-7C9B-4206-83D6-AC1FE6CF0334@cpht.polytechnique.fr> Great. Tahar Le 15 mai 09 ? 18:49, Hong Zhang a ?crit : > > > On Fri, 15 May 2009, Tahar Amari wrote: > >> >> Thanks to all of you. >> I will try all those. >> By the way Hong, which version of Mac OS and of petsc was it ? > > Leopard 10.5 > petsc-3.0.0 > > Hong >> >> Tahar >> >> >> >> >> Le 15 mai 09 ? 18:30, Hong Zhang a ?crit : >> >>> Here is my config options on mac os x: >>> --with-cc=gcc --download-mpich --with-shared=0 --download-f-blas- >>> lapack --with-clanguage=cxx --with-cxx=g++ --with-fc=g95 >>> It works well for me. >>> Hong >>> On Fri, 15 May 2009, Satish Balay wrote: >>>> It appears you want to build PETSc sequentially with fortran - so >>>> the >>>> following should work: >>>> ./config/configure.py --with-cc=gcc --with-fc=gfortran --with-mpi=0 >>>> [if it doesn't work - send us configure.log at petsc-maint at mcs.anl.gov >>>> ] >>>> Satish >>>> On Fri, 15 May 2009, Tahar Amari wrote: >>>>> Hello , >>>>> I am a new petsc user. I am trying to built petsc on mac os 10 >>>>> (10.4 and >>>>> 10.5), but >>>>> always have errors. >>>>> Has anyone already built petsc for MAC OS X (any version of >>>>> petsc or mac os x) >>>>> . >>>>> If yes I would be grateful to get the configure options and the >>>>> minimum >>>>> requirements. >>>>> If it is not possible to built it I would also appreciate to >>>>> know it. >>>>> Thank you very much >>>>> Tahar >>>>> -------------------------------------------- >>>>> T. Amari >>>>> Centre de Physique Theorique >>>>> Ecole Polytechnique >>>>> 91128 Palaiseau Cedex France >>>>> tel : 33 1 69 33 42 52 >>>>> fax: 33 1 69 33 30 08 >>>>> email: >>>>> URL : http://www.cpht.polytechnique.fr/cpht/amari >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From vyan2000 at gmail.com Fri May 15 12:20:49 2009 From: vyan2000 at gmail.com (Ryan Yan) Date: Fri, 15 May 2009 13:20:49 -0400 Subject: About the -pc_type tfs Message-ID: Hi all, I am tring to use the tfs preconditioner to solve a large sparse mpiaij matrix. 
11111111111111111111111111111111111111111 It works very well with a small matrix 45*45(Actually a 9*9 block matrix with blocksize 5) on 2 processors; Out put is as follows: 0 KSP preconditioned resid norm 3.014544557924e+04 true resid norm 2.219812091849e+04 ||Ae||/||Ax|| 1.000000000000e+00 1 KSP preconditioned resid norm 3.679021546908e-03 true resid norm 1.502747104104e-03 ||Ae||/||Ax|| 6.769704109737e-08 2 KSP preconditioned resid norm 2.331909907779e-09 true resid norm 8.737892755044e-10 ||Ae||/||Ax|| 3.936320910733e-14 KSP Object: type: gmres GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement GMRES: happy breakdown tolerance 1e-30 maximum iterations=10000, initial guess is zero tolerances: relative=1e-10, absolute=1e-50, divergence=10000 left preconditioning PC Object: type: tfs linear system matrix = precond matrix: Matrix Object: type=mpiaij, rows=45, cols=45 total: nonzeros=825, allocated nonzeros=1350 using I-node (on process 0) routines: found 5 nodes, limit used is 5 Norm of error 2.33234e-09, Iterations 2 2222222222222222222222222222222222222222 However, when I use the same code for a larger sparse matrix, a 18656 * 18656 block matrix with blocksize 5); it encounters the followins error.(Same error message for using 1 and 2 processors, seperately) [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or try http://valgrind.org on linux or man libgmalloc on Apple to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: [0] PCSetUp_TFS line 116 src/ksp/pc/impls/tfs/tfs.c [0]PETSC ERROR: [0] PCSetUp line 764 src/ksp/pc/interface/precon.c [0]PETSC ERROR: [0] KSPSetUp line 183 src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: [0] KSPSolve line 305 src/ksp/ksp/interface/itfunc.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 10:02:49 CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./kspex1reader_binmpiaij on a linux-gnu named vyan2000-linux by vyan2000 Fri May 15 01:06:12 2009 [0]PETSC ERROR: Libraries linked from /home/vyan2000/local/PPETSc/petsc-2.3.3-p15//lib/linux-gnu-c-debug [0]PETSC ERROR: Configure run at Mon May 4 00:59:41 2009 [0]PETSC ERROR: Configure options --with-mpi-dir=/home/vyan2000/local/mpich2-1.0.8p1/ --with-debugger=gdb --with-shared=0 --download-hypre=1 --download-parmetis=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0[cli_0]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 3333333333333333333333333333333333333333333333 I have the exact solution x in hands, so before I push the matrix into the ksp solver, I did check the PETSC loaded matrix A and rhs vector b, by verifying Ax-b=0, in both cases of 1 processor and 2 processors. Any sugeestions? Thank you very much, Yan -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri May 15 12:34:15 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 15 May 2009 12:34:15 -0500 Subject: About the -pc_type tfs In-Reply-To: References: Message-ID: If you send the matrix in PETSc binary format we can check this. Matt On Fri, May 15, 2009 at 12:20 PM, Ryan Yan wrote: > Hi all, > I am tring to use the tfs preconditioner to solve a large sparse mpiaij > matrix. > > 11111111111111111111111111111111111111111 > It works very well with a small matrix 45*45(Actually a 9*9 block matrix > with blocksize 5) on 2 processors; Out put is as follows: > > 0 KSP preconditioned resid norm 3.014544557924e+04 true resid norm > 2.219812091849e+04 ||Ae||/||Ax|| 1.000000000000e+00 > 1 KSP preconditioned resid norm 3.679021546908e-03 true resid norm > 1.502747104104e-03 ||Ae||/||Ax|| 6.769704109737e-08 > 2 KSP preconditioned resid norm 2.331909907779e-09 true resid norm > 8.737892755044e-10 ||Ae||/||Ax|| 3.936320910733e-14 > KSP Object: > type: gmres > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt > Orthogonalization with no iterative refinement > GMRES: happy breakdown tolerance 1e-30 > maximum iterations=10000, initial guess is zero > tolerances: relative=1e-10, absolute=1e-50, divergence=10000 > left preconditioning > PC Object: > type: tfs > linear system matrix = precond matrix: > Matrix Object: > type=mpiaij, rows=45, cols=45 > total: nonzeros=825, allocated nonzeros=1350 > using I-node (on process 0) routines: found 5 nodes, limit used is 5 > Norm of error 2.33234e-09, Iterations 2 > > 2222222222222222222222222222222222222222 > > However, when I use the same code for a larger sparse matrix, a 18656 * > 18656 block matrix with blocksize 5); it encounters the followins > error.(Same error message for using 1 and 2 processors, seperately) > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, > probably memory access out of range > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or try > http://valgrind.org on linux or man libgmalloc on Apple to find memory > corruption errors > 
[0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: [0] PCSetUp_TFS line 116 src/ksp/pc/impls/tfs/tfs.c > [0]PETSC ERROR: [0] PCSetUp line 764 src/ksp/pc/interface/precon.c > [0]PETSC ERROR: [0] KSPSetUp line 183 src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: [0] KSPSolve line 305 src/ksp/ksp/interface/itfunc.c > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 10:02:49 > CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: ./kspex1reader_binmpiaij on a linux-gnu named > vyan2000-linux by vyan2000 Fri May 15 01:06:12 2009 > [0]PETSC ERROR: Libraries linked from > /home/vyan2000/local/PPETSc/petsc-2.3.3-p15//lib/linux-gnu-c-debug > [0]PETSC ERROR: Configure run at Mon May 4 00:59:41 2009 > [0]PETSC ERROR: Configure options > --with-mpi-dir=/home/vyan2000/local/mpich2-1.0.8p1/ --with-debugger=gdb > --with-shared=0 --download-hypre=1 --download-parmetis=1 > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0[cli_0]: > aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > > 3333333333333333333333333333333333333333333333 > > I have the exact solution x in hands, so before I push the matrix into the > ksp solver, I did check the PETSC loaded matrix A and rhs vector b, by > verifying Ax-b=0, in both cases of 1 processor and 2 processors. > > Any sugeestions? > > Thank you very much, > > Yan > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri May 15 12:51:22 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 15 May 2009 12:51:22 -0500 Subject: MatLoad a large matrix into SEQAIJ In-Reply-To: <2545DC7A42DF804AAAB2ADA5043D57DA28E3C9@CORP-CLT-EXB01.ds> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2545DC7A42DF804AAAB2ADA5043D57DA28E3C9@CORP-CLT-EXB01.ds> Message-ID: I have not seen this. I would recommend upgrading. Matt On Fri, May 15, 2009 at 8:58 AM, SUN Chun wrote: > Hello PETSc developers, > > > > I was trying to MatLoad a large matrix into a serial machine. The matrix is > about 6G on disk (binary). I'm seeing signal 11 on MatLoad phase. However, I > was able to load other smaller matrices (less than 1G) in serial (although I > haven't binary-searched to find out what is the critical size of the crash). 
> Also, I was able to load the 6G matrix when I ran with 4 cores, meaning > SEQAIJ type has become MPIAIJ type. BTW I built and ran PETSc on both > 64-bit lnx86_64 machines. > > > > I'm not sure if you have seen this. I'm not sure if this is a bug because > I'm still using 2.3.3 instead of 3.0, and I also suspect I might have missed > something on build or run. Have you seen this? > > > > Thanks, > > Chun > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri May 15 13:02:07 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 15 May 2009 13:02:07 -0500 (CDT) Subject: MatLoad a large matrix into SEQAIJ In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2545DC7A42DF804AAAB2ADA5043D57DA28E3C9@CORP-CLT-EXB01.ds> Message-ID: If you still have issues - you an upload the matrix to ftp://ftp.mcs.anl.gov/pub/incoming and then we can take a look. Satish On Fri, 15 May 2009, Matthew Knepley wrote: > I have not seen this. I would recommend upgrading. > > Matt > > On Fri, May 15, 2009 at 8:58 AM, SUN Chun wrote: > > > Hello PETSc developers, > > > > > > > > I was trying to MatLoad a large matrix into a serial machine. The matrix is > > about 6G on disk (binary). I'm seeing signal 11 on MatLoad phase. However, I > > was able to load other smaller matrices (less than 1G) in serial (although I > > haven't binary-searched to find out what is the critical size of the crash). > > Also, I was able to load the 6G matrix when I ran with 4 cores, meaning > > SEQAIJ type has become MPIAIJ type. BTW I built and ran PETSc on both > > 64-bit lnx86_64 machines. > > > > > > > > I'm not sure if you have seen this. I'm not sure if this is a bug because > > I'm still using 2.3.3 instead of 3.0, and I also suspect I might have missed > > something on build or run. Have you seen this? > > > > > > > > Thanks, > > > > Chun > > > > > > From Chun.SUN at 3ds.com Fri May 15 14:08:04 2009 From: Chun.SUN at 3ds.com (SUN Chun) Date: Fri, 15 May 2009 15:08:04 -0400 Subject: MatLoad a large matrix into SEQAIJ In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr><2545DC7A42DF804AAAB2ADA5043D57DA28E3C9@CORP-CLT-EXB01.ds> Message-ID: <2545DC7A42DF804AAAB2ADA5043D57DA28E3CB@CORP-CLT-EXB01.ds> Thank you so much for the help. I'll first try upgrading and making sure to separate the issue before uploading. Thanks, Chun -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Satish Balay Sent: Friday, May 15, 2009 2:02 PM To: PETSc users list Subject: Re: MatLoad a large matrix into SEQAIJ If you still have issues - you an upload the matrix to ftp://ftp.mcs.anl.gov/pub/incoming and then we can take a look. Satish On Fri, 15 May 2009, Matthew Knepley wrote: > I have not seen this. I would recommend upgrading. > > Matt > > On Fri, May 15, 2009 at 8:58 AM, SUN Chun wrote: > > > Hello PETSc developers, > > > > > > > > I was trying to MatLoad a large matrix into a serial machine. The matrix is > > about 6G on disk (binary). I'm seeing signal 11 on MatLoad phase. However, I > > was able to load other smaller matrices (less than 1G) in serial (although I > > haven't binary-searched to find out what is the critical size of the crash). 
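
Both requests above, sending the tfs test matrix in PETSc binary format (Matt, earlier) and uploading the 6G case to the ftp site (Satish), assume the matrix is available as a PETSc binary file. For a matrix that already exists as an assembled Mat in a running code, a minimal write sketch (the file name is illustrative; the calling sequences are those of the petsc-2.3.3/3.0.0 releases discussed here):

   PetscViewer fd;
   ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"matrix.dat",FILE_MODE_WRITE,&fd);CHKERRQ(ierr);
   ierr = MatView(A,fd);CHKERRQ(ierr);            /* A is the assembled Mat to be shared */
   ierr = VecView(b,fd);CHKERRQ(ierr);            /* optional: append the right-hand side */
   ierr = PetscViewerDestroy(fd);CHKERRQ(ierr);

The resulting file is exactly what MatLoad() (and VecLoad()) read back, in serial or in parallel.
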
> > Also, I was able to load the 6G matrix when I ran with 4 cores, meaning > > SEQAIJ type has become MPIAIJ type. BTW I built and ran PETSc on both > > 64-bit lnx86_64 machines. > > > > > > > > I'm not sure if you have seen this. I'm not sure if this is a bug because > > I'm still using 2.3.3 instead of 3.0, and I also suspect I might have missed > > something on build or run. Have you seen this? > > > > > > > > Thanks, > > > > Chun > > > > > > From cmay at phys.ethz.ch Fri May 15 14:29:09 2009 From: cmay at phys.ethz.ch (Christian May) Date: Fri, 15 May 2009 21:29:09 +0200 (CEST) Subject: solving a particular linear equation system Message-ID: Hi everybody, I have trouble solving a linear equation system with PETSc 3.0.0. I tried the following combinations of solvers and preconditioners and list the results: cg+sor: Linear solve did not converge due to DIVERGED_NAN cg+bjacobi: Linear solve did not converge due to DIVERGED_INDEFINITE_PC cg+ilu: Linear solve did not converge due to DIVERGED_INDEFINITE_MAT gmres+sor: Linear solve did not converge due to DIVERGED_NAN gmres+bjacobi:Linear solve did not converge due to DIVERGED_ITS gmres+ilu: Linear solve did not converge due to DIVERGED_ITS It might be that the matrix is not positive definite, which explains some of the problems. In the last two cases it kind of works, but convergence is horribly slow and it aborts after 10000 iterations. If somebody could please have a look at the matrix and the right hand side vector, I've put them here: http://www.mayarea.de/download/small_matrix.tgz The archive contains a file named A.txt, consisting of three columns i j Aij and a file b.txt with two columns i and bi. Any hint on how to solve this linear equation system (in reasonable time) is appreciated. Thanks a lot in advance! Christian From amari at cpht.polytechnique.fr Fri May 15 15:06:18 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 22:06:18 +0200 Subject: petsc on mac Ox In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> Message-ID: <83DA17FC-278E-4DAE-BD30-4E9EA303CDC0@cpht.polytechnique.fr> Just to tell that petsc-3.0.0-p5 succesiively built on mac ox 10.5 thanks to you The examples were OK. Here is my final configuration = configure -with-cc=gcc --download-mpich=1 --download-parmetis=1 --with- shared=0 --download-f-blas-lapack --with-clanguage=cxx --with-cxx=g++ --with-fc=ifort --with-dynamic=0 sudo cp -R petsc-3.0.0-p5 /usr/local/ set version=-3.0.0-p5 sudo ln -s petsc${version} petsc Tahar From knepley at gmail.com Fri May 15 15:10:14 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 15 May 2009 15:10:14 -0500 Subject: solving a particular linear equation system In-Reply-To: References: Message-ID: On Fri, May 15, 2009 at 2:29 PM, Christian May wrote: > Hi everybody, > > I have trouble solving a linear equation system with PETSc 3.0.0. 
> I tried the following combinations of solvers and preconditioners and list > the results: > > cg+sor: Linear solve did not converge due to DIVERGED_NAN > cg+bjacobi: Linear solve did not converge due to DIVERGED_INDEFINITE_PC > cg+ilu: Linear solve did not converge due to DIVERGED_INDEFINITE_MAT > 1) This matrix is not SPD > gmres+sor: Linear solve did not converge due to DIVERGED_NAN > gmres+bjacobi:Linear solve did not converge due to DIVERGED_ITS > gmres+ilu: Linear solve did not converge due to DIVERGED_ITS 2) From DIVERGED_NAN, there is either a bug in the code, or you had some sort of floating point badness from excessive iteration. I think the former. 3) Run with -pc_type lu -ksp_type preonly first to test. 4) If you need parallelism with such an ill-conditioned matrix, you can try MUMPS Matt > It might be that the matrix is not positive definite, which explains some > of the problems. In the last two cases it kind of works, but convergence is > horribly slow and it aborts after 10000 iterations. > > If somebody could please have a look at the matrix and the right hand side > vector, I've put them here: > http://www.mayarea.de/download/small_matrix.tgz > The archive contains a file named A.txt, consisting of three columns > i j Aij and a file b.txt with two columns i and bi. > > Any hint on how to solve this linear equation system (in reasonable time) > is appreciated. > > Thanks a lot in advance! > > Christian > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri May 15 15:11:49 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 15 May 2009 15:11:49 -0500 Subject: solving a particular linear equation system In-Reply-To: References: Message-ID: Could you please save this matrix and right hand side using - ksp_view_binary (just run solver like you usually do but use this command line option also) then post the file binaryoutput that is generated. This way people can trivially load the matrix and right hand side with MatLoad() and not monkey with parsing ASCII files. Thanks Barry On May 15, 2009, at 2:29 PM, Christian May wrote: > Hi everybody, > > I have trouble solving a linear equation system with PETSc 3.0.0. > I tried the following combinations of solvers and preconditioners > and list the results: > > cg+sor: Linear solve did not converge due to DIVERGED_NAN > cg+bjacobi: Linear solve did not converge due to > DIVERGED_INDEFINITE_PC > cg+ilu: Linear solve did not converge due to > DIVERGED_INDEFINITE_MAT > gmres+sor: Linear solve did not converge due to DIVERGED_NAN > gmres+bjacobi:Linear solve did not converge due to DIVERGED_ITS > gmres+ilu: Linear solve did not converge due to DIVERGED_ITS > > > It might be that the matrix is not positive definite, which explains > some of the problems. In the last two cases it kind of works, but > convergence is horribly slow and it aborts after 10000 iterations. > > If somebody could please have a look at the matrix and the right > hand side vector, I've put them here: > http://www.mayarea.de/download/small_matrix.tgz > The archive contains a file named A.txt, consisting of three columns > i j Aij and a file b.txt with two columns i and bi. > > Any hint on how to solve this linear equation system (in reasonable > time) is appreciated. > > Thanks a lot in advance! 
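A sketch of the options-driven solve these suggestions assume, so that combinations such as -ksp_type gmres -pc_type ilu, the direct test -pc_type lu -ksp_type preonly, or -ksp_view_binary can be selected at run time without recompiling. The matrix A and vectors b, x are taken as already assembled; the KSPSetOperators/KSPDestroy forms below are the 3.0-era ones.

    #include "petscksp.h"

    /* Solve A x = b with the Krylov method and preconditioner chosen on the command line. */
    PetscErrorCode SolveWithOptions(Mat A, Vec b, Vec x)
    {
      KSP            ksp;
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* picks up -ksp_type, -pc_type, -ksp_view_binary, ... */
      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
      ierr = KSPDestroy(ksp);CHKERRQ(ierr);         /* pre-3.2: destroy takes the object itself */
      PetscFunctionReturn(0);
    }

With this in place, the whole table of solver/preconditioner combinations above reduces to different command lines, e.g. ./app -ksp_type gmres -pc_type ilu -ksp_converged_reason.
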
> > Christian > From bsmith at mcs.anl.gov Fri May 15 15:15:20 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 15 May 2009 15:15:20 -0500 Subject: petsc on mac Ox In-Reply-To: <83DA17FC-278E-4DAE-BD30-4E9EA303CDC0@cpht.polytechnique.fr> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <83DA17FC-278E-4DAE-BD30-4E9EA303CDC0@cpht.polytechnique.fr> Message-ID: <14B1BA89-FC3C-4ABF-8677-71E67A0BBC60@mcs.anl.gov> You should not need -download-f-blas-lapack. The Apple comes with all the blas/lapack that is needed and it is found automatically, so you do not need to mention the blas/lapack in the arguments at all. Barry On May 15, 2009, at 3:06 PM, Tahar Amari wrote: > Just to tell that petsc-3.0.0-p5 succesiively built on mac ox 10.5 > thanks to you > The examples were OK. > > Here is my final configuration = > > > configure -with-cc=gcc --download-mpich=1 --download-parmetis=1 -- > with-shared=0 --download-f-blas-lapack --with-clanguage=cxx --with- > cxx=g++ --with-fc=ifort --with-dynamic=0 > > sudo cp -R petsc-3.0.0-p5 /usr/local/ > set version=-3.0.0-p5 > sudo ln -s petsc${version} petsc > > > Tahar From amari at cpht.polytechnique.fr Fri May 15 15:19:25 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 22:19:25 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> Message-ID: Hello Here is what I have from a FORTRAN code "toto.fpp" which was compiled with petsc2..xxxx c----------------------------------------------------------------------- #include "include/finclude/petsc.h" #include "include/finclude/petscvec.h" #include "include/finclude/petscmat.h" #include "include/finclude/petscao.h" I compiled it with ifort -assume byterecl -g -I/usr/local/petsc/ -I/usr/local/petsc// macx/include -I/usr/X11R6/include/X11 -DPETSC_HAVE_PARMETIS - DPETSC_USE_DEBUG -DPETSC_USE_LOG -DPETSC_USE_BOPT_g -DPETSC_USE_STACK - DPETSC_AVOID_MPIF_H -c toto.fpp I have the following kind of errors petsc.h(6): #error: can't find include file: petscversion.h petsc.h(7): #error: can't find include file: finclude/petscdef.h petscvec.h(5): #error: can't find include file: finclude/petscvecdef.h I looked at my petsc tree and II have the "include/finclude/" directory. I have a petsc.h file inside which does #include "petscconf.h" #include "petscversion.h" #include "finclude/petscdef.h" Does anyone knows why it does not find those paths or what is wrong with those paths ? Tahar From amari at cpht.polytechnique.fr Fri May 15 15:19:52 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 22:19:52 +0200 Subject: petsc on mac Ox In-Reply-To: <14B1BA89-FC3C-4ABF-8677-71E67A0BBC60@mcs.anl.gov> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <83DA17FC-278E-4DAE-BD30-4E9EA303CDC0@cpht.polytechnique.fr> <14B1BA89-FC3C-4ABF-8677-71E67A0BBC60@mcs.anl.gov> Message-ID: Thank you very much Tahar Le 15 mai 09 ? 22:15, Barry Smith a ?crit : > > You should not need -download-f-blas-lapack. The Apple comes with > all the blas/lapack that is needed and it is found automatically, so > you do not need to mention the blas/lapack in the arguments at all. > > Barry > > On May 15, 2009, at 3:06 PM, Tahar Amari wrote: > >> Just to tell that petsc-3.0.0-p5 succesiively built on mac ox >> 10.5 thanks to you >> The examples were OK. 
>> >> Here is my final configuration = >> >> >> configure -with-cc=gcc --download-mpich=1 --download-parmetis=1 -- >> with-shared=0 --download-f-blas-lapack --with-clanguage=cxx --with- >> cxx=g++ --with-fc=ifort --with-dynamic=0 >> >> sudo cp -R petsc-3.0.0-p5 /usr/local/ >> set version=-3.0.0-p5 >> sudo ln -s petsc${version} petsc >> >> >> Tahar From balay at mcs.anl.gov Fri May 15 15:21:32 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 15 May 2009 15:21:32 -0500 (CDT) Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> Message-ID: You'll have to modify to: #include "finclude/petsc.h" #include "finclude/petscvec.h" #include "finclude/petscmat.h" #include "finclude/petscao.h" Satish On Fri, 15 May 2009, Tahar Amari wrote: > Hello > > Here is what I have from a FORTRAN code "toto.fpp" which was compiled with > petsc2..xxxx > > c----------------------------------------------------------------------- > #include "include/finclude/petsc.h" > #include "include/finclude/petscvec.h" > #include "include/finclude/petscmat.h" > #include "include/finclude/petscao.h" > > > I compiled it with > > ifort -assume byterecl -g -I/usr/local/petsc/ > -I/usr/local/petsc//macx/include -I/usr/X11R6/include/X11 > -DPETSC_HAVE_PARMETIS -DPETSC_USE_DEBUG -DPETSC_USE_LOG -DPETSC_USE_BOPT_g > -DPETSC_USE_STACK -DPETSC_AVOID_MPIF_H -c toto.fpp > > > I have the following kind of errors > > petsc.h(6): #error: can't find include file: petscversion.h > petsc.h(7): #error: can't find include file: finclude/petscdef.h > petscvec.h(5): #error: can't find include file: finclude/petscvecdef.h > > > I looked at my petsc tree and II have the "include/finclude/" directory. > I have a petsc.h file inside > > which does > > #include "petscconf.h" > #include "petscversion.h" > #include "finclude/petscdef.h" > > > Does anyone knows why it does not find those paths or what is wrong with > those paths ? > > Tahar From amari at cpht.polytechnique.fr Fri May 15 15:25:50 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 22:25:50 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> Message-ID: <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> Thank you very much , excuse me, I might not have well understood, ctually there are petsc.h , petscvec.h .... files in petsc/include/finclude/ Le 15 mai 09 ? 
22:21, Satish Balay a ?crit : > You'll have to modify to: > > #include "finclude/petsc.h" > #include "finclude/petscvec.h" > #include "finclude/petscmat.h" > #include "finclude/petscao.h" > > Satish > > > On Fri, 15 May 2009, Tahar Amari wrote: > >> Hello >> >> Here is what I have from a FORTRAN code "toto.fpp" which was >> compiled with >> petsc2..xxxx >> >> c >> ----------------------------------------------------------------------- >> #include "include/finclude/petsc.h" >> #include "include/finclude/petscvec.h" >> #include "include/finclude/petscmat.h" >> #include "include/finclude/petscao.h" >> >> >> I compiled it with >> >> ifort -assume byterecl -g -I/usr/local/petsc/ >> -I/usr/local/petsc//macx/include -I/usr/X11R6/include/X11 >> -DPETSC_HAVE_PARMETIS -DPETSC_USE_DEBUG -DPETSC_USE_LOG - >> DPETSC_USE_BOPT_g >> -DPETSC_USE_STACK -DPETSC_AVOID_MPIF_H -c toto.fpp >> >> >> I have the following kind of errors >> >> petsc.h(6): #error: can't find include file: petscversion.h >> petsc.h(7): #error: can't find include file: finclude/petscdef.h >> petscvec.h(5): #error: can't find include file: finclude/ >> petscvecdef.h >> >> >> I looked at my petsc tree and II have the "include/finclude/" >> directory. >> I have a petsc.h file inside >> >> which does >> >> #include "petscconf.h" >> #include "petscversion.h" >> #include "finclude/petscdef.h" >> >> >> Does anyone knows why it does not find those paths or what is >> wrong with >> those paths ? >> >> Tahar From balay at mcs.anl.gov Fri May 15 15:34:54 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 15 May 2009 15:34:54 -0500 (CDT) Subject: include file fortran In-Reply-To: <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> Message-ID: Ah. - the problem is your makfile. Its best to use PETSc makefiles. > > > ifort -assume byterecl -g -I/usr/local/petsc/ ^^^^^^^^^^^^^^^^^^ It should be: -I/usr/local/petsc/include Satish On Fri, 15 May 2009, Tahar Amari wrote: > Thank you very much , > > excuse me, I might not have well understood, > ctually there are petsc.h , petscvec.h .... files > > in > > petsc/include/finclude/ > > > > > Le 15 mai 09 ? 
22:21, Satish Balay a ?crit : > > > You'll have to modify to: > > > > #include "finclude/petsc.h" > > #include "finclude/petscvec.h" > > #include "finclude/petscmat.h" > > #include "finclude/petscao.h" > > > > Satish > > > > > > On Fri, 15 May 2009, Tahar Amari wrote: > > > > > Hello > > > > > > Here is what I have from a FORTRAN code "toto.fpp" which was compiled with > > > petsc2..xxxx > > > > > > c----------------------------------------------------------------------- > > > #include "include/finclude/petsc.h" > > > #include "include/finclude/petscvec.h" > > > #include "include/finclude/petscmat.h" > > > #include "include/finclude/petscao.h" > > > > > > > > > I compiled it with > > > > > > ifort -assume byterecl -g -I/usr/local/petsc/ > > > -I/usr/local/petsc//macx/include -I/usr/X11R6/include/X11 > > > -DPETSC_HAVE_PARMETIS -DPETSC_USE_DEBUG -DPETSC_USE_LOG -DPETSC_USE_BOPT_g > > > -DPETSC_USE_STACK -DPETSC_AVOID_MPIF_H -c toto.fpp > > > > > > > > > I have the following kind of errors > > > > > > petsc.h(6): #error: can't find include file: petscversion.h > > > petsc.h(7): #error: can't find include file: finclude/petscdef.h > > > petscvec.h(5): #error: can't find include file: finclude/petscvecdef.h > > > > > > > > > I looked at my petsc tree and II have the "include/finclude/" directory. > > > I have a petsc.h file inside > > > > > > which does > > > > > > #include "petscconf.h" > > > #include "petscversion.h" > > > #include "finclude/petscdef.h" > > > > > > > > > Does anyone knows why it does not find those paths or what is wrong with > > > those paths ? > > > > > > Tahar > From amari at cpht.polytechnique.fr Fri May 15 16:04:00 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 23:04:00 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> Message-ID: <25277988-3AE8-4C59-8480-F6BA46D386F4@cpht.polytechnique.fr> Excuse me I did not know there are petsc makefiles ? This is certainly better. Do you suggest me to look in the example or tutorials folders ? Tahar Le 15 mai 09 ? 22:34, Satish Balay a ?crit : > Ah. - the problem is your makfile. Its best to use PETSc makefiles. > >>>> ifort -assume byterecl -g -I/usr/local/petsc/ > ^^^^^^^^^^^^^^^^^^ > It should be: -I/usr/local/petsc/include > > Satish > > > On Fri, 15 May 2009, Tahar Amari wrote: > >> Thank you very much , >> >> excuse me, I might not have well understood, >> ctually there are petsc.h , petscvec.h .... files >> >> in >> >> petsc/include/finclude/ >> >> >> >> >> Le 15 mai 09 ? 
22:21, Satish Balay a ?crit : >> >>> You'll have to modify to: >>> >>> #include "finclude/petsc.h" >>> #include "finclude/petscvec.h" >>> #include "finclude/petscmat.h" >>> #include "finclude/petscao.h" >>> >>> Satish >>> >>> >>> On Fri, 15 May 2009, Tahar Amari wrote: >>> >>>> Hello >>>> >>>> Here is what I have from a FORTRAN code "toto.fpp" which was >>>> compiled with >>>> petsc2..xxxx >>>> >>>> c >>>> ----------------------------------------------------------------------- >>>> #include "include/finclude/petsc.h" >>>> #include "include/finclude/petscvec.h" >>>> #include "include/finclude/petscmat.h" >>>> #include "include/finclude/petscao.h" >>>> >>>> >>>> I compiled it with >>>> >>>> ifort -assume byterecl -g -I/usr/local/petsc/ >>>> -I/usr/local/petsc//macx/include -I/usr/X11R6/include/X11 >>>> -DPETSC_HAVE_PARMETIS -DPETSC_USE_DEBUG -DPETSC_USE_LOG - >>>> DPETSC_USE_BOPT_g >>>> -DPETSC_USE_STACK -DPETSC_AVOID_MPIF_H -c toto.fpp >>>> >>>> >>>> I have the following kind of errors >>>> >>>> petsc.h(6): #error: can't find include file: petscversion.h >>>> petsc.h(7): #error: can't find include file: finclude/petscdef.h >>>> petscvec.h(5): #error: can't find include file: finclude/ >>>> petscvecdef.h >>>> >>>> >>>> I looked at my petsc tree and II have the "include/finclude/" >>>> directory. >>>> I have a petsc.h file inside >>>> >>>> which does >>>> >>>> #include "petscconf.h" >>>> #include "petscversion.h" >>>> #include "finclude/petscdef.h" >>>> >>>> >>>> Does anyone knows why it does not find those paths or what is >>>> wrong with >>>> those paths ? >>>> >>>> Tahar >> From amari at cpht.polytechnique.fr Fri May 15 16:22:57 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 23:22:57 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> Message-ID: <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> Hello again, I am sorry , I followed your first suggestion and changed the path I ended up with following error at link, do you have any guess please ? 
ifort -assume byterecl -o mh4d mh4d.o petsc.o comm.o setbc.o local.o gridutil.o mympi.o terminator.o operator.o shellsort.o edge.o side.o vertex.o tetrahedron.o rotation.o tetrahedralgrid.o field.o -I/usr/ local/hdf/HDF4.2r1/include -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf - lsz -ljpeg -lz -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec - lpetscmat -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc - lmpich -lmpichcxx -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis - lfblas -lflapack -L/usr/X11R6/lib -lX11 -lXt -lXext -lX11 -L/usr/ local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat -lpetsccontrib - lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx -lpmpich - lfmpich -lmpichf90 -lparmetis -lmetis -lfblas -lflapack Undefined symbols: "__ZNSt19basic_ostringstreamIcSt11char_traitsIcESaIcEEC1ESt13_Ios_Openmode ", referenced from: __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) __ZN5PETSc9ExceptionC2ERKSs in libpetsc.a(err.o) "__ZTVSt9exception", referenced from: __ZTVSt9exception$non_lazy_ptr in libpetsc.a(err.o) "__ZNKSt9exception4whatEv", referenced from: __ZTVN5PETSc9ExceptionE in libpetsc.a(err.o) "__ZNSolsEPFRSoS_E", referenced from: __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) 
__Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) "___cxa_allocate_exception", referenced from: __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) "__ZNSolsEd", referenced from: __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) "__ZNSolsEi", referenced from: __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) "___gxx_personality_v0", referenced from: ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matrixf.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itfuncf.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(aof.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(zitcreatef.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(rvectorf.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vmpicrf.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(zaobasicf.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(zvectorf.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vectorf.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itcreatef.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(axpyf.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itclf.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(zmpiaijf.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(zstart.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cgtypef.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vector.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(axpy.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itcreate.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(zutils.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matrix.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mprint.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mal.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itfunc.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(rvector.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(init.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpiaij.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(str.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(pinit.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(ao.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(drawv.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itcl.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(errtrace.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(aobasic.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(reg.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(options.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(memc.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(vcreatea.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(fhost.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(zstartf.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vmpicr.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(binv.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cgtype.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(verboseinfo.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(err.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(plog.o) 
___gxx_personality_v0$non_lazy_ptr in libpetsc.a(send.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(fdate.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(classLog.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(gcreate.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matstash.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(index.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(pname.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(precon.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(pythonsys.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vecreg.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mmaij.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpiptap.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vscat.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itregis.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(stageLog.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dlregispetsc.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(xmon.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(iterativ.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpidense.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mpiuopen.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(psplit.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(inherit.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(adebug.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(aoptions.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(view.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dscatter.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mtr.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mpimesg.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dclear.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(viewa.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(ctable.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(stack.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(isltog.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(pcset.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(eventLog.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(aij.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(destroy.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(prefix.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(pbvec.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(sorti.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(general.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(iguess.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(gcomm.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dupl.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(pdisplay.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vecstash.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(filev.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(convert.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(ghome.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(viewreg.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(arch.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(signal.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(ptap.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(pmap.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(dlregisksp.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(fwd.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(lg.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mem.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mpinit.o) 
___gxx_personality_v0$non_lazy_ptr in libpetsc.a(drawreg.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(iscoloring.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(tagm.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(axis.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mpiu.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(random.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(sysio.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matnull.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(draw.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpimatmatmult.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(flush.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dline.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dl.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(dlregisvec.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(freespace.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(eige.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(errstop.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(strgen.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpiov.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(errabort.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(block.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(veccreate.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matreg.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(psleep.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(pgname.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(stride.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(iscomp.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(fp.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(state.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vseqcr.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(inode.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(fuser.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(ptype.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dlregismat.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpicsrperm.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vecregall.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(fdmpiaij.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(fretrieve.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dense.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dlregisdm.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dsflush.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mpitr.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mcrl.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(randreg.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dflush.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(symmlq.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(aijfact.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(bcgsl.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(pcregis.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dadestroy.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(dvec2.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(crl.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vinv.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cheby.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(pvec2.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matmatmult.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(drect.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(rich.o) 
___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(inode2.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(ftest.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(preonly.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(olist.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vpscat.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sregis.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(pdvec.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(pbarrier.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(spartition.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dgcoor.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa2.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(bvec1.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matstashspace.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mmdense.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa3.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(zerodiag.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dlimpl.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(drawregall.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa4.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(stringv.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matptap.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa5.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vecio.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dtext.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dpause.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(compressedrow.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(lgmres.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(ibcgs.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(gmres.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(viewregall.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mffd.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(shvec.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cgs.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(csrperm.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(lcd.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(scolor.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(qcg.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dcoor.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(hists.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(comb.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(bvec2.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(bcgs.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cgne.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dtextv.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(symtranspose.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(gltr.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(bicg.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(aijsbaij.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(minres.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(partition.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(nash.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dtextgs.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(lsqr.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(ij.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(fgmres.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(stcg.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dtri.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(sortip.o) 
___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cg.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cr.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(aijbaij.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(fdaij.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(gcookie.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(tcqmr.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(tfqmr.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dlregisrand.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(zoom.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matregis.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dsclear.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(modpcf.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpibaij.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sorder.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(viewers.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(color.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpisbaij.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(maij.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dmouse.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(pbjacobi.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cgeig.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact2.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(aijtype.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(petscvu.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(richscale.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(borthog.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(gmpre.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(rand48.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(fieldsplit.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(pmetis.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(bjacobi.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(pops.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cp.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sp1wd.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(mg.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(lu.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(nn.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mscatter.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(shell.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itres.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sprcm.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(spqmd.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(bfbt.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(gmreig.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpiadj.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(none.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(borthog2.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(blockmat.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mcomposite.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(shellpc.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baij.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(xops.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(asa.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(asm.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(icc.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(ilu.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(pcksp.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(pcmat.o) 
___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(tfs.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(sor.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaij.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mfregis.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(composite.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(galerkin.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(rand.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(spnd.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matis.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(eisen.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(jacobi.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(wb.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(gmres2.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(openmp.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cholesky.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(redundant.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dgpause.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaij2.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(tone.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact3.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(factimpl.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(pcis.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baij2.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(schurm.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(smg.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(xxt.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(factor.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(mgfunc.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(genrcm.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(subcomm.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(xinit.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact11.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact3.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mmbaij.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(xyt.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(sortd.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dm.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(shellcnv.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact12.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijov.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact9.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(gen1wd.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mmsbaij.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijov.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(text.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact10.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sro.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(fmg.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(hue.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mffddef.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(wp.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(drawopenx.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(gennd.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dbuff.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact8.o) ___gxx_personality_v0$non_lazy_ptr in 
libpetscmat.a(sbaijfact7.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact5.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(daint.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(genqmd.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(ido.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact4.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(gtype.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vecmpitoseq.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact6.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mhas.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(ivec.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa6.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dacorn.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa7.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact2.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(rcm.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(daghost.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(fnroot.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact6.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact11.o) ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(isblock.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact8.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact10.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(qmdupd.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(comm.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact5.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(wmap.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(gs.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(xcolor.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact13.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(fn1wd.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact12.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(fndsep.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact14.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact7.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(daview.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact9.o) ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact4.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dagtol.o) ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(bit_mask.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(da1.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(da3.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(da2.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dadist.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dalocal.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dainterp.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(daindex.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(fdda.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dagetarray.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(gr2.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(daltog.o) ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(gr1.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dviewp.o) ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dpoint.o) "__ZNKSs5c_strEv", referenced from: __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) "__ZSt4endlIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_", referenced from: 
__ZSt4endlIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_ $non_lazy_ptr in libpetsc.a(errtrace.o) "__ZdlPv", referenced from: __ZN5PETSc9ExceptionD2Ev in libpetsc.a(err.o) __ZN5PETSc9ExceptionD0Ev in libpetsc.a(err.o) "___cxa_free_exception", referenced from: __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) "__ZNSt19basic_ostringstreamIcSt11char_traitsIcESaIcEED1Ev", referenced from: __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) __ZN5PETSc9ExceptionC2ERKSs in libpetsc.a(err.o) __ZN5PETSc9ExceptionD2Ev in libpetsc.a(err.o) __ZN5PETSc9ExceptionD0Ev in libpetsc.a(err.o) "__ZNSsC1EPKcRKSaIcE", referenced from: __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) "___cxa_call_unexpected", referenced from: __ZN5PETSc9ExceptionD2Ev in libpetsc.a(err.o) __ZN5PETSc9ExceptionD0Ev in libpetsc.a(err.o) "__ZNSaIcEC1Ev", referenced from: __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) "__ZNSaIcED1Ev", referenced from: __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) "__ZTVN10__cxxabiv120__si_class_type_infoE", referenced from: __ZTIN5PETSc9ExceptionE in libpetsc.a(err.o) "__ZSt9terminatev", referenced from: __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) "__ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc", referenced from: __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in 
libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in libpetsc.a(errtrace.o) "__ZStlsIcSt11char_traitsIcESaIcEERSt13basic_ostreamIT_T0_ES7_RKSbIS4_S5_T1_E ", referenced from: __ZN5PETSc9ExceptionC2ERKSs in libpetsc.a(err.o) "__ZNSt9exceptionD2Ev", referenced from: __ZN5PETSc9ExceptionC2ERKSs in libpetsc.a(err.o) __ZN5PETSc9ExceptionD2Ev in libpetsc.a(err.o) __ZN5PETSc9ExceptionD2Ev in libpetsc.a(err.o) __ZN5PETSc9ExceptionD0Ev in libpetsc.a(err.o) __ZN5PETSc9ExceptionD0Ev in libpetsc.a(err.o) "__Unwind_Resume", referenced from: __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) __ZN5PETSc9ExceptionC2ERKSs in libpetsc.a(err.o) __ZN5PETSc9ExceptionD2Ev in libpetsc.a(err.o) __ZN5PETSc9ExceptionD0Ev in libpetsc.a(err.o) "__ZNKSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE3strEv", referenced from: __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) "___cxa_throw", referenced from: __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) "__ZTISt9exception", referenced from: __ZTIN5PETSc9ExceptionE in 
libpetsc.a(err.o) "__ZNSsD1Ev", referenced from: __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) ld: symbol(s) not found Le 15 mai 09 ? 22:34, Satish Balay a ?crit : > Ah. - the problem is your makfile. Its best to use PETSc makefiles. > >>>> ifort -assume byterecl -g -I/usr/local/petsc/ > ^^^^^^^^^^^^^^^^^^ > It should be: -I/usr/local/petsc/include > > Satish > > > On Fri, 15 May 2009, Tahar Amari wrote: > >> Thank you very much , >> >> excuse me, I might not have well understood, >> ctually there are petsc.h , petscvec.h .... files >> >> in >> >> petsc/include/finclude/ >> >> >> >> >> Le 15 mai 09 ? 22:21, Satish Balay a ?crit : >> >>> You'll have to modify to: >>> >>> #include "finclude/petsc.h" >>> #include "finclude/petscvec.h" >>> #include "finclude/petscmat.h" >>> #include "finclude/petscao.h" >>> >>> Satish >>> >>> >>> On Fri, 15 May 2009, Tahar Amari wrote: >>> >>>> Hello >>>> >>>> Here is what I have from a FORTRAN code "toto.fpp" which was >>>> compiled with >>>> petsc2..xxxx >>>> >>>> c >>>> ----------------------------------------------------------------------- >>>> #include "include/finclude/petsc.h" >>>> #include "include/finclude/petscvec.h" >>>> #include "include/finclude/petscmat.h" >>>> #include "include/finclude/petscao.h" >>>> >>>> >>>> I compiled it with >>>> >>>> ifort -assume byterecl -g -I/usr/local/petsc/ >>>> -I/usr/local/petsc//macx/include -I/usr/X11R6/include/X11 >>>> -DPETSC_HAVE_PARMETIS -DPETSC_USE_DEBUG -DPETSC_USE_LOG - >>>> DPETSC_USE_BOPT_g >>>> -DPETSC_USE_STACK -DPETSC_AVOID_MPIF_H -c toto.fpp >>>> >>>> >>>> I have the following kind of errors >>>> >>>> petsc.h(6): #error: can't find include file: petscversion.h >>>> petsc.h(7): #error: can't find include file: finclude/petscdef.h >>>> petscvec.h(5): #error: can't find include file: finclude/ >>>> petscvecdef.h >>>> >>>> >>>> I looked at my petsc tree and II have the "include/finclude/" >>>> directory. >>>> I have a petsc.h file inside >>>> >>>> which does >>>> >>>> #include "petscconf.h" >>>> #include "petscversion.h" >>>> #include "finclude/petscdef.h" >>>> >>>> >>>> Does anyone knows why it does not find those paths or what is >>>> wrong with >>>> those paths ? >>>> >>>> Tahar >> From cmay at phys.ethz.ch Fri May 15 16:23:19 2009 From: cmay at phys.ethz.ch (Christian May) Date: Fri, 15 May 2009 23:23:19 +0200 (CEST) Subject: solving a particular linear equation system In-Reply-To: References: Message-ID: On Fri, 15 May 2009, Matthew Knepley wrote: > 3) Run with -pc_type lu -ksp_type preonly first to test. Thanks for your hints. This works fine: Linear solve converged due to CONVERGED_ITS iterations 1 On Fri, 15 May 2009, Barry Smith wrote: > Could you please save this matrix and right hand side using > -ksp_view_binary (just run solver like you usually do but use this command > line option also) then post the file binaryoutput that is generated. This way > people can trivially load the matrix and right hand side with MatLoad() and > not monkey with parsing ASCII files. Sure, I wasn't aware of this option. 
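For anyone picking up that file, a sketch of reading it back: -ksp_view_binary writes the matrix and (if memory serves) the right-hand side after it into binaryoutput, so a single binary viewer can feed MatLoad() and then VecLoad(). The 3.0-era signatures (viewer, type, &object) are assumed, and the type names below are just reasonable defaults.

    /* Sketch: reload the system saved by -ksp_view_binary (PETSc 3.0-era API assumed). */
    #include "petscksp.h"

    int main(int argc, char **argv)
    {
      Mat            A;
      Vec            b;
      PetscViewer    viewer;
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);CHKERRQ(ierr);
      ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "binaryoutput", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
      ierr = MatLoad(viewer, MATMPIAIJ, &A);CHKERRQ(ierr);   /* MATSEQAIJ would do on one process */
      ierr = VecLoad(viewer, VECMPI, &b);CHKERRQ(ierr);      /* right-hand side stored after the matrix */
      ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr);
      /* ... hand A and b to a KSP as discussed earlier in the thread ... */
      ierr = MatDestroy(A);CHKERRQ(ierr);
      ierr = VecDestroy(b);CHKERRQ(ierr);
      ierr = PetscFinalize();CHKERRQ(ierr);
      return 0;
    }
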
The file is now available at http://www.mayarea.de/download/binaryoutput Thanks a lot Christian From knepley at gmail.com Fri May 15 16:29:22 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 15 May 2009 16:29:22 -0500 Subject: include file fortran In-Reply-To: <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> Message-ID: On Fri, May 15, 2009 at 4:22 PM, Tahar Amari wrote: > Hello again, > > I am sorry , I followed your first suggestion and changed the path > I ended up with following error at link, do you have any guess please You are missing the C++ symbols since you are using the Fortran linker. We always use the C linker. program: program.o crap.o ${CLINKER} -o $@ $< ${PETSC_TS_LIB} Matt > > ifort -assume byterecl -o mh4d mh4d.o petsc.o comm.o setbc.o local.o > gridutil.o mympi.o terminator.o operator.o shellsort.o edge.o side.o > vertex.o tetrahedron.o rotation.o tetrahedralgrid.o field.o > -I/usr/local/hdf/HDF4.2r1/include -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf > -lsz -ljpeg -lz -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec > -lpetscmat -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich > -lmpichcxx -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis -lfblas -lflapack > -L/usr/X11R6/lib -lX11 -lXt -lXext -lX11 -L/usr/local/petsc/macx/lib > -lpetscsnes -lpetscvec -lpetscmat -lpetsccontrib -lpetscts -lpetscdm > -lpetscksp -lpetsc -lmpich -lmpichcxx -lpmpich -lfmpich -lmpichf90 > -lparmetis -lmetis -lfblas -lflapack > Undefined symbols: > "__ZNSt19basic_ostringstreamIcSt11char_traitsIcESaIcEEC1ESt13_Ios_Openmode", > referenced from: > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > __ZN5PETSc9ExceptionC2ERKSs in libpetsc.a(err.o) > "__ZTVSt9exception", referenced from: > __ZTVSt9exception$non_lazy_ptr in libpetsc.a(err.o) > "__ZNKSt9exception4whatEv", referenced from: > __ZTVN5PETSc9ExceptionE in libpetsc.a(err.o) > "__ZNSolsEPFRSoS_E", referenced from: > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > 
__Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > "___cxa_allocate_exception", referenced from: > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > "__ZNSolsEd", referenced from: > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > "__ZNSolsEi", referenced from: > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > "___gxx_personality_v0", referenced from: > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matrixf.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itfuncf.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(aof.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(zitcreatef.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(rvectorf.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vmpicrf.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(zaobasicf.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(zvectorf.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vectorf.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itcreatef.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(axpyf.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itclf.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(zmpiaijf.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(zstart.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cgtypef.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vector.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(axpy.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itcreate.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(zutils.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matrix.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mprint.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mal.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itfunc.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(rvector.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(init.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpiaij.o) > 
___gxx_personality_v0$non_lazy_ptr in libpetsc.a(str.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(pinit.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(ao.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(drawv.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itcl.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(errtrace.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(aobasic.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(reg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(options.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(memc.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(vcreatea.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(fhost.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(zstartf.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vmpicr.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(binv.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cgtype.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(verboseinfo.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(err.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(plog.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(send.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(fdate.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(classLog.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(gcreate.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matstash.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(index.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(pname.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(precon.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(pythonsys.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vecreg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mmaij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpiptap.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vscat.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itregis.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(stageLog.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dlregispetsc.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(xmon.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(iterativ.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpidense.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mpiuopen.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(psplit.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(inherit.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(adebug.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(aoptions.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(view.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dscatter.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mtr.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mpimesg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dclear.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(viewa.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(ctable.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(stack.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(isltog.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(pcset.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(eventLog.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(aij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(destroy.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(prefix.o) > 
___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(pbvec.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(sorti.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(general.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(iguess.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(gcomm.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dupl.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(pdisplay.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vecstash.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(filev.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(convert.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(ghome.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(viewreg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(arch.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(signal.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(ptap.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(pmap.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(dlregisksp.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(fwd.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(lg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mem.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mpinit.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(drawreg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(iscoloring.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(tagm.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(axis.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mpiu.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(random.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(sysio.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matnull.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(draw.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpimatmatmult.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(flush.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dline.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dl.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(dlregisvec.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(freespace.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(eige.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(errstop.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(strgen.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpiov.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(errabort.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(block.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(veccreate.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matreg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(psleep.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(pgname.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(stride.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(iscomp.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(fp.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(state.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vseqcr.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(inode.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(fuser.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(ptype.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dlregismat.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpicsrperm.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vecregall.o) > 
___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(fdmpiaij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(fretrieve.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dense.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dlregisdm.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dsflush.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(mpitr.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mcrl.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(randreg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dflush.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(symmlq.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(aijfact.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(bcgsl.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(pcregis.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dadestroy.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(dvec2.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(crl.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vinv.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cheby.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(pvec2.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matmatmult.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(drect.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(rich.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(inode2.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(ftest.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(preonly.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(olist.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vpscat.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sregis.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(pdvec.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(pbarrier.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(spartition.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dgcoor.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa2.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(bvec1.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matstashspace.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mmdense.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa3.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(zerodiag.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dlimpl.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(drawregall.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa4.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(stringv.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matptap.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa5.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vecio.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dtext.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dpause.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(compressedrow.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(lgmres.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(ibcgs.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(gmres.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(viewregall.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mffd.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(shvec.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cgs.o) > ___gxx_personality_v0$non_lazy_ptr in 
libpetscmat.a(csrperm.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(lcd.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(scolor.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(qcg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dcoor.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(hists.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(comb.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(bvec2.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(bcgs.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cgne.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dtextv.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(symtranspose.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(gltr.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(bicg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(aijsbaij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(minres.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(partition.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(nash.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dtextgs.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(lsqr.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(ij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(fgmres.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(stcg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dtri.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(sortip.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cr.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(aijbaij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(fdaij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(gcookie.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(tcqmr.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(tfqmr.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dlregisrand.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(zoom.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matregis.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dsclear.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(modpcf.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpibaij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sorder.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(viewers.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(color.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpisbaij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(maij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dmouse.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(pbjacobi.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cgeig.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact2.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(aijtype.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(petscvu.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(richscale.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(borthog.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(gmpre.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(rand48.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(fieldsplit.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(pmetis.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(bjacobi.o) > ___gxx_personality_v0$non_lazy_ptr in 
libpetsc.a(pops.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cp.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sp1wd.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(mg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(lu.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(nn.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mscatter.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(shell.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(itres.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sprcm.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(spqmd.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(bfbt.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(gmreig.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mpiadj.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(none.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(borthog2.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(blockmat.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mcomposite.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(shellpc.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(xops.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(asa.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(asm.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(icc.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(ilu.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(pcksp.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(pcmat.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(tfs.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(sor.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mfregis.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(composite.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(galerkin.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(rand.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(spnd.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(matis.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(eisen.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(jacobi.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(wb.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(gmres2.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(openmp.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(cholesky.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(redundant.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dgpause.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaij2.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(tone.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact3.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(factimpl.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(pcis.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baij2.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(schurm.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(smg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(xxt.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(factor.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(mgfunc.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(genrcm.o) > 
___gxx_personality_v0$non_lazy_ptr in libpetsc.a(subcomm.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(xinit.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact11.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact3.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mmbaij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(xyt.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(sortd.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dm.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(shellcnv.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact12.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijov.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact9.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(gen1wd.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mmsbaij.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijov.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(text.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact10.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sro.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(fmg.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(hue.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mffddef.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(wp.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(drawopenx.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(gennd.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dbuff.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact8.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact7.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact5.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(daint.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(genqmd.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(ido.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact4.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(gtype.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(vecmpitoseq.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(sbaijfact6.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(mhas.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(ivec.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa6.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dacorn.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa7.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact2.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(rcm.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(daghost.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(dgefa.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(fnroot.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact6.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact11.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscvec.a(isblock.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact8.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact10.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(qmdupd.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(comm.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact5.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(wmap.o) > 
___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(gs.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(xcolor.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact13.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(fn1wd.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact12.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(fndsep.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact14.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact7.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(daview.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact9.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscmat.a(baijfact4.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dagtol.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscksp.a(bit_mask.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(da1.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(da3.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(da2.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dadist.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dalocal.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dainterp.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(daindex.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(fdda.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(dagetarray.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(gr2.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(daltog.o) > ___gxx_personality_v0$non_lazy_ptr in libpetscdm.a(gr1.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dviewp.o) > ___gxx_personality_v0$non_lazy_ptr in libpetsc.a(dpoint.o) > "__ZNKSs5c_strEv", referenced from: > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > "__ZSt4endlIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_", referenced > from: > > __ZSt4endlIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_$non_lazy_ptr in > libpetsc.a(errtrace.o) > "__ZdlPv", referenced from: > __ZN5PETSc9ExceptionD2Ev in libpetsc.a(err.o) > __ZN5PETSc9ExceptionD0Ev in libpetsc.a(err.o) > "___cxa_free_exception", referenced from: > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > "__ZNSt19basic_ostringstreamIcSt11char_traitsIcESaIcEED1Ev", referenced > from: > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > __ZN5PETSc9ExceptionC2ERKSs in libpetsc.a(err.o) > __ZN5PETSc9ExceptionD2Ev in libpetsc.a(err.o) > __ZN5PETSc9ExceptionD0Ev in libpetsc.a(err.o) > "__ZNSsC1EPKcRKSaIcE", referenced from: > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > "___cxa_call_unexpected", referenced from: > __ZN5PETSc9ExceptionD2Ev in libpetsc.a(err.o) > __ZN5PETSc9ExceptionD0Ev in libpetsc.a(err.o) > "__ZNSaIcEC1Ev", referenced from: > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > "__ZNSaIcED1Ev", referenced from: > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > "__ZTVN10__cxxabiv120__si_class_type_infoE", referenced from: > __ZTIN5PETSc9ExceptionE in libpetsc.a(err.o) > "__ZSt9terminatev", referenced from: > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > "__ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc", referenced > from: > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > 
__Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > 
__Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > > __Z29PetscTraceBackErrorHandlerCxxiPKcS0_S0_iiRSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE in > libpetsc.a(errtrace.o) > "__ZStlsIcSt11char_traitsIcESaIcEERSt13basic_ostreamIT_T0_ES7_RKSbIS4_S5_T1_E", > referenced from: > __ZN5PETSc9ExceptionC2ERKSs in libpetsc.a(err.o) > "__ZNSt9exceptionD2Ev", referenced from: > __ZN5PETSc9ExceptionC2ERKSs in libpetsc.a(err.o) > __ZN5PETSc9ExceptionD2Ev in libpetsc.a(err.o) > __ZN5PETSc9ExceptionD2Ev in libpetsc.a(err.o) > __ZN5PETSc9ExceptionD0Ev in libpetsc.a(err.o) > __ZN5PETSc9ExceptionD0Ev in libpetsc.a(err.o) > "__Unwind_Resume", referenced from: > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > __ZN5PETSc9ExceptionC2ERKSs in libpetsc.a(err.o) > __ZN5PETSc9ExceptionD2Ev in libpetsc.a(err.o) > __ZN5PETSc9ExceptionD0Ev in libpetsc.a(err.o) > "__ZNKSt19basic_ostringstreamIcSt11char_traitsIcESaIcEE3strEv", referenced > from: > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > "___cxa_throw", referenced from: > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > "__ZTISt9exception", referenced from: > __ZTIN5PETSc9ExceptionE in libpetsc.a(err.o) > "__ZNSsD1Ev", referenced from: > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > __Z13PetscErrorCxxiPKcS0_S0_ii in libpetsc.a(err.o) > ld: symbol(s) not found > > > Le 15 mai 09 ? 22:34, Satish Balay a ?crit : > > Ah. - the problem is your makfile. Its best to use PETSc makefiles. >> >> ifort -assume byterecl -g -I/usr/local/petsc/ >>>>> >>>> ^^^^^^^^^^^^^^^^^^ >> It should be: -I/usr/local/petsc/include >> >> Satish >> >> >> On Fri, 15 May 2009, Tahar Amari wrote: >> >> Thank you very much , >>> >>> excuse me, I might not have well understood, >>> ctually there are petsc.h , petscvec.h .... files >>> >>> in >>> >>> petsc/include/finclude/ >>> >>> >>> >>> >>> Le 15 mai 09 ? 
22:21, Satish Balay a ?crit : >>> >>> You'll have to modify to: >>>> >>>> #include "finclude/petsc.h" >>>> #include "finclude/petscvec.h" >>>> #include "finclude/petscmat.h" >>>> #include "finclude/petscao.h" >>>> >>>> Satish >>>> >>>> >>>> On Fri, 15 May 2009, Tahar Amari wrote: >>>> >>>> Hello >>>>> >>>>> Here is what I have from a FORTRAN code "toto.fpp" which was compiled >>>>> with >>>>> petsc2..xxxx >>>>> >>>>> >>>>> c----------------------------------------------------------------------- >>>>> #include "include/finclude/petsc.h" >>>>> #include "include/finclude/petscvec.h" >>>>> #include "include/finclude/petscmat.h" >>>>> #include "include/finclude/petscao.h" >>>>> >>>>> >>>>> I compiled it with >>>>> >>>>> ifort -assume byterecl -g -I/usr/local/petsc/ >>>>> -I/usr/local/petsc//macx/include -I/usr/X11R6/include/X11 >>>>> -DPETSC_HAVE_PARMETIS -DPETSC_USE_DEBUG -DPETSC_USE_LOG >>>>> -DPETSC_USE_BOPT_g >>>>> -DPETSC_USE_STACK -DPETSC_AVOID_MPIF_H -c toto.fpp >>>>> >>>>> >>>>> I have the following kind of errors >>>>> >>>>> petsc.h(6): #error: can't find include file: petscversion.h >>>>> petsc.h(7): #error: can't find include file: finclude/petscdef.h >>>>> petscvec.h(5): #error: can't find include file: finclude/petscvecdef.h >>>>> >>>>> >>>>> I looked at my petsc tree and II have the "include/finclude/" >>>>> directory. >>>>> I have a petsc.h file inside >>>>> >>>>> which does >>>>> >>>>> #include "petscconf.h" >>>>> #include "petscversion.h" >>>>> #include "finclude/petscdef.h" >>>>> >>>>> >>>>> Does anyone knows why it does not find those paths or what is wrong >>>>> with >>>>> those paths ? >>>>> >>>>> Tahar >>>>> >>>> >>> > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amari at cpht.polytechnique.fr Fri May 15 16:35:11 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 23:35:11 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> Message-ID: <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Thanks , Now this a different kind of errors (just a piece of those below :) gcc -o mh4d mh4d.o petsc.o comm.o setbc.o local.o gridutil.o mympi.o terminator.o operator.o shellsort.o edge.o side.o vertex.o tetrahedron.o rotation.o tetrahedralgrid.o field.o -I/usr/local/hdf/ HDF4.2r1/include -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf -lsz - ljpeg -lz -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec - lpetscmat -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc - lmpich -lmpichcxx -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis - lfblas -lflapack -L/usr/X11R6/lib -lX11 -lXt -lXext -lX11 -L/usr/ local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat -lpetsccontrib - lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx -lpmpich - lfmpich -lmpichf90 -lparmetis -lmetis -lfblas -lflapack Undefined symbols: "std::basic_ostringstream, std::allocator >::basic_ostringstream(std::_Ios_Openmode)", referenced from: PetscErrorCxx(int, char const*, char const*, char const*, int, int)in libpetsc.a(err.o) PETSc::Exception::Exception(std::basic_string, std::allocator > const&)in libpetsc.a(err.o) "_for_stop_core", referenced from: _advmom_cv_ in mh4d.o _advmom_cv_ in mh4d.o _advmom_cv_ in mh4d.o _terminators_mp_terminator_ in terminator.o _terminators_mp_terminator_all_ in terminator.o _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o Le 15 mai 09 ? 23:29, Matthew Knepley a ?crit : > CLINKER From knepley at gmail.com Fri May 15 16:40:57 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 15 May 2009 16:40:57 -0500 Subject: solving a particular linear equation system In-Reply-To: References: Message-ID: On Fri, May 15, 2009 at 4:23 PM, Christian May wrote: > On Fri, 15 May 2009, Matthew Knepley wrote: > >> 3) Run with -pc_type lu -ksp_type preonly first to test. >> > > Thanks for your hints. > This works fine: Linear solve converged due to CONVERGED_ITS iterations 1 I do not think its fine. I ran with KSP ex10: knepley at khan:/PETSc3/petsc/petsc-dev/src/ksp/ksp/examples/tutorials$ ./ex10 -f0 ~/Desktop/binaryoutput -ksp_monitor -pc_type lu -ksp_type preonly Number of iterations = 1 Residual norm 1.12564e+06 The huge residual says that LU bit the dust on roundoff with this matrix, which must have an astronomical condition number. Do you have a good physics reason that it should be almost singular? If not, I would reformulate the problem. Matt > > On Fri, 15 May 2009, Barry Smith wrote: > >> Could you please save this matrix and right hand side using >> -ksp_view_binary (just run solver like you usually do but use this command >> line option also) then post the file binaryoutput that is generated. This >> way people can trivially load the matrix and right hand side with MatLoad() >> and not monkey with parsing ASCII files. >> > > Sure, I wasn't aware of this option. The file is now available at > http://www.mayarea.de/download/binaryoutput > > Thanks a lot > Christian > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Fri May 15 16:49:32 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 15 May 2009 16:49:32 -0500 Subject: include file fortran In-Reply-To: <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: On Fri, May 15, 2009 at 4:35 PM, Tahar Amari wrote: > Thanks , > > Now this a different kind of errors (just a piece of those below :) If you want to use C++, you should configure using --with-clanguage=cxx. Then you will get the C++ linker. Matt > > gcc -o mh4d mh4d.o petsc.o comm.o setbc.o local.o gridutil.o mympi.o > terminator.o operator.o shellsort.o edge.o side.o vertex.o tetrahedron.o > rotation.o tetrahedralgrid.o field.o -I/usr/local/hdf/HDF4.2r1/include > -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf -lsz -ljpeg -lz > -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat -lpetsccontrib > -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx -lpmpich -lfmpich > -lmpichf90 -lparmetis -lmetis -lfblas -lflapack -L/usr/X11R6/lib -lX11 -lXt > -lXext -lX11 -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat > -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx > -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis -lfblas -lflapack > Undefined symbols: > "std::basic_ostringstream, > std::allocator >::basic_ostringstream(std::_Ios_Openmode)", referenced > from: > PetscErrorCxx(int, char const*, char const*, char const*, int, int)in > libpetsc.a(err.o) > PETSc::Exception::Exception(std::basic_string std::char_traits, std::allocator > const&)in libpetsc.a(err.o) > "_for_stop_core", referenced from: > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _terminators_mp_terminator_ in terminator.o > _terminators_mp_terminator_all_ in terminator.o > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > > > > > Le 15 mai 09 ? 23:29, Matthew Knepley a ?crit : > > CLINKER >> > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From amari at cpht.polytechnique.fr Fri May 15 16:56:02 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Fri, 15 May 2009 23:56:02 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: I am sorry (I might have missed something) > If you want to use C++, you should configure using --with- > clanguage=cxx. Then you will get the C++ linker. I do not really want to use C++ linker I did it with the C Linker and got an error. 
I do not see where the C+ + is now used Tahar cc -o mh4d mh4d.o petsc.o comm.o setbc.o local.o gridutil.o mympi.o terminator.o operator.o shellsort.o edge.o side.o vertex.o tetrahedron.o rotation.o tetrahedralgrid.o field.o -I/usr/local/hdf/ HDF4.2r1/include -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf -lsz - ljpeg -lz -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec - lpetscmat -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc - lmpich -lmpichcxx -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis - lfblas -lflapack -L/usr/X11R6/lib -lX11 -lXt -lXext -lX11 -L/usr/ local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat -lpetsccontrib - lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx -lpmpich - lfmpich -lmpichf90 -lparmetis -lmetis -lfblas -lflapack Undefined symbols: "std::basic_ostringstream, std::allocator >::basic_ostringstream(std::_Ios_Openmode)", referenced from: PetscErrorCxx(int, char const*, char const*, char const*, int, int)in libpetsc.a(err.o) PETSc::Exception::Exception(std::basic_string, std::allocator > const&)in libpetsc.a(err.o) "_for_stop_core", referenced from: _advmom_cv_ in mh4d.o _advmom_cv_ in mh4d.o _advmom_cv_ in mh4d.o _terminators_mp_terminator_ in terminator.o _terminators_mp_terminator_all_ in terminator.o _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o _xerbla_ in libfblas.a(xerbla.o) "_for_exit", referenced from: _tetrahedralgrid_mod_mp_partition_tetragrid_ in tetrahedralgrid.o From knepley at gmail.com Fri May 15 17:00:27 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 15 May 2009 17:00:27 -0500 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: On Fri, May 15, 2009 at 4:56 PM, Tahar Amari wrote: > I am sorry (I might have missed something) > > > If you want to use C++, you should configure using --with-clanguage=cxx. >> Then you will get the C++ linker. >> > > I do not really want to use C++ linker > > I did it with the C Linker and got an error. I do not see where the C++ is > now used You have C++ code in there somewhere. It is hard to see what is going on since we do not have the source. 
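To make the makefile advice from earlier in the thread concrete: below is a minimal sketch of a PETSc-style makefile for linking a Fortran application like this one. It is not taken from the thread; the object list and HDF4 variables are placeholders, and the include line depends on the PETSc version (2.3.x shipped it as bmake/common/base, later releases moved it under conf/ and then lib/petsc/conf/). The point is that ${CLINKER} from a --with-clanguage=cxx build is the C++ compiler, so it supplies the libstdc++ symbols left undefined by a plain cc or ifort link, and the included PETSc rules add the -I${PETSC_DIR}/include paths when compiling the Fortran sources.

OBJS     = mh4d.o petsc.o comm.o terminator.o tetrahedralgrid.o   # ... and the rest
HDF_LIBS = -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf -lsz -ljpeg -lz
FFLAGS   = -assume byterecl

include ${PETSC_DIR}/bmake/common/base   # petsc-2.3.x layout; adjust for newer releases

mh4d: ${OBJS}
	${CLINKER} -o $@ ${OBJS} ${PETSC_LIB} ${HDF_LIBS}

The recipe line must start with a tab, and PETSC_DIR/PETSC_ARCH must be set in the environment or at the top of the makefile.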
Matt > > Tahar > > cc -o mh4d mh4d.o petsc.o comm.o setbc.o local.o gridutil.o mympi.o > terminator.o operator.o shellsort.o edge.o side.o vertex.o tetrahedron.o > rotation.o tetrahedralgrid.o field.o -I/usr/local/hdf/HDF4.2r1/include > -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf -lsz -ljpeg -lz > -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat -lpetsccontrib > -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx -lpmpich -lfmpich > -lmpichf90 -lparmetis -lmetis -lfblas -lflapack -L/usr/X11R6/lib -lX11 -lXt > -lXext -lX11 -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat > -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx > -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis -lfblas -lflapack > Undefined symbols: > "std::basic_ostringstream, > std::allocator >::basic_ostringstream(std::_Ios_Openmode)", referenced > from: > PetscErrorCxx(int, char const*, char const*, char const*, int, int)in > libpetsc.a(err.o) > PETSc::Exception::Exception(std::basic_string std::char_traits, std::allocator > const&)in libpetsc.a(err.o) > "_for_stop_core", referenced from: > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _terminators_mp_terminator_ in terminator.o > _terminators_mp_terminator_all_ in terminator.o > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o > _xerbla_ in libfblas.a(xerbla.o) > "_for_exit", referenced from: > _tetrahedralgrid_mod_mp_partition_tetragrid_ in tetrahedralgrid.o > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From amari at cpht.polytechnique.fr Fri May 15 17:02:10 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Sat, 16 May 2009 00:02:10 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: Just to let you know is that I built petsc with c++ anyway since I did configure -with-cc=gcc --download-mpich=1 --download-parmetis=1 --with- #shared=0 --download-f-blas-lapack --with-clanguage=cxx --with-cxx=g++ --with-fc=ifort --with-dynamic=0 Tahar Le 15 mai 09 ? 
23:49, Matthew Knepley a ?crit : > On Fri, May 15, 2009 at 4:35 PM, Tahar Amari > wrote: > Thanks , > > Now this a different kind of errors (just a piece of those below :) > > If you want to use C++, you should configure using --with- > clanguage=cxx. Then you will get the C++ linker. > > Matt > > > gcc -o mh4d mh4d.o petsc.o comm.o setbc.o local.o gridutil.o > mympi.o terminator.o operator.o shellsort.o edge.o side.o vertex.o > tetrahedron.o rotation.o tetrahedralgrid.o field.o -I/usr/local/ > hdf/HDF4.2r1/include -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf -lsz > -ljpeg -lz -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec - > lpetscmat -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc - > lmpich -lmpichcxx -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis - > lfblas -lflapack -L/usr/X11R6/lib -lX11 -lXt -lXext -lX11 -L/usr/ > local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat - > lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich - > lmpichcxx -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis -lfblas - > lflapack > Undefined symbols: > "std::basic_ostringstream, > std::allocator >::basic_ostringstream(std::_Ios_Openmode)", > referenced from: > PetscErrorCxx(int, char const*, char const*, char const*, int, > int)in libpetsc.a(err.o) > PETSc::Exception::Exception(std::basic_string std::char_traits, std::allocator > const&)in > libpetsc.a(err.o) > "_for_stop_core", referenced from: > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _terminators_mp_terminator_ in terminator.o > _terminators_mp_terminator_all_ in terminator.o > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > > > > > Le 15 mai 09 ? 23:29, Matthew Knepley a ?crit : > > CLINKER > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From amari at cpht.polytechnique.fr Fri May 15 17:05:44 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Sat, 16 May 2009 00:05:44 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: <50272C8F-CD5B-417C-BE03-E5AF2B3A7D48@cpht.polytechnique.fr> I am compiling a fortran code in principle not even one line of C++ and link with libraries. Do you mean that the line > Undefined symbols: > "std::basic_ostringstream, > std::allocator >::basic_ostringstream(std::_Ios_Openmode)", > referenced from: > PetscErrorCxx(int, char const*, char const*, char const*, int, > int)in libpetsc.a(err.o) is not telling that the c++ missing symbol is not in libpetsc.a(err.o) ? which is from what I understand petsc library and not my source ? Tahar Le 16 mai 09 ? 00:00, Matthew Knepley a ?crit : > On Fri, May 15, 2009 at 4:56 PM, Tahar Amari > wrote: > I am sorry (I might have missed something) > > > > If you want to use C++, you should configure using --with- > clanguage=cxx. Then you will get the C++ linker. > > I do not really want to use C++ linker > > I did it with the C Linker and got an error. I do not see where the C > ++ is now used > > You have C++ code in there somewhere. It is hard to see what is > going on since we do not have the source. 
> > Matt > > > Tahar > > > cc -o mh4d mh4d.o petsc.o comm.o setbc.o local.o gridutil.o > mympi.o terminator.o operator.o shellsort.o edge.o side.o vertex.o > tetrahedron.o rotation.o tetrahedralgrid.o field.o -I/usr/local/ > hdf/HDF4.2r1/include -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf -lsz > -ljpeg -lz -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec - > lpetscmat -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc - > lmpich -lmpichcxx -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis - > lfblas -lflapack -L/usr/X11R6/lib -lX11 -lXt -lXext -lX11 -L/usr/ > local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat - > lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich - > lmpichcxx -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis -lfblas - > lflapack > Undefined symbols: > "std::basic_ostringstream, > std::allocator >::basic_ostringstream(std::_Ios_Openmode)", > referenced from: > PetscErrorCxx(int, char const*, char const*, char const*, int, > int)in libpetsc.a(err.o) > PETSc::Exception::Exception(std::basic_string std::char_traits, std::allocator > const&)in > libpetsc.a(err.o) > "_for_stop_core", referenced from: > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _terminators_mp_terminator_ in terminator.o > _terminators_mp_terminator_all_ in terminator.o > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o > _xerbla_ in libfblas.a(xerbla.o) > "_for_exit", referenced from: > _tetrahedralgrid_mod_mp_partition_tetragrid_ in tetrahedralgrid.o > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From amari at cpht.polytechnique.fr Fri May 15 17:10:01 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Sat, 16 May 2009 00:10:01 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: I changed CLINKER to g++ and the symbols where found. Now I have other remaining errors which seams to be link with some petsc fortran ? Do you have any idea please where (which petsc library) those symbols are supposed to be in ? 
Many thanks Tahar Undefined symbols: "_for_stop_core", referenced from: _advmom_cv_ in mh4d.o _advmom_cv_ in mh4d.o _advmom_cv_ in mh4d.o _terminators_mp_terminator_ in terminator.o _terminators_mp_terminator_all_ in terminator.o _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o _xerbla_ in libfblas.a(xerbla.o) "_for_exit", referenced from: _tetrahedralgrid_mod_mp_partition_tetragrid_ in tetrahedralgrid.o "_for_write_seq", referenced from: _wrrsfile_ in mh4d.o _wrrsfile_ in mh4d.o _wrrsfile_ in mh4d.o _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o "_for_check_mult_overflow", referenced from: Le 16 mai 09 ? 
00:00, Matthew Knepley a ?crit : > On Fri, May 15, 2009 at 4:56 PM, Tahar Amari > wrote: > I am sorry (I might have missed something) > > > > If you want to use C++, you should configure using --with- > clanguage=cxx. Then you will get the C++ linker. > > I do not really want to use C++ linker > > I did it with the C Linker and got an error. I do not see where the C > ++ is now used > > You have C++ code in there somewhere. It is hard to see what is > going on since we do not have the source. > > Matt > > > Tahar > > > cc -o mh4d mh4d.o petsc.o comm.o setbc.o local.o gridutil.o > mympi.o terminator.o operator.o shellsort.o edge.o side.o vertex.o > tetrahedron.o rotation.o tetrahedralgrid.o field.o -I/usr/local/ > hdf/HDF4.2r1/include -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf -lsz > -ljpeg -lz -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec - > lpetscmat -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc - > lmpich -lmpichcxx -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis - > lfblas -lflapack -L/usr/X11R6/lib -lX11 -lXt -lXext -lX11 -L/usr/ > local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat - > lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich - > lmpichcxx -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis -lfblas - > lflapack > Undefined symbols: > "std::basic_ostringstream, > std::allocator >::basic_ostringstream(std::_Ios_Openmode)", > referenced from: > PetscErrorCxx(int, char const*, char const*, char const*, int, > int)in libpetsc.a(err.o) > PETSc::Exception::Exception(std::basic_string std::char_traits, std::allocator > const&)in > libpetsc.a(err.o) > "_for_stop_core", referenced from: > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _terminators_mp_terminator_ in terminator.o > _terminators_mp_terminator_all_ in terminator.o > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o > _xerbla_ in libfblas.a(xerbla.o) > "_for_exit", referenced from: > _tetrahedralgrid_mod_mp_partition_tetragrid_ in tetrahedralgrid.o > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Fri May 15 17:10:39 2009 From: jed at 59A2.org (Jed Brown) Date: Sat, 16 May 2009 00:10:39 +0200 Subject: solving a particular linear equation system In-Reply-To: References: Message-ID: <4A0DE85F.70901@59A2.org> Matthew Knepley wrote: > I do not think its fine. 
I ran with KSP ex10: > > knepley at khan:/PETSc3/petsc/petsc-dev/src/ksp/ksp/examples/tutorials$ ./ex10 > -f0 ~/Desktop/binaryoutput -ksp_monitor -pc_type lu -ksp_type preonly > Number of iterations = 1 > Residual norm 1.12564e+06 Interesting, this instability is highly dependent on the ordering $ ./ex10 -f ~/dl/binaryoutput -ksp_converged_reason -ksp_monitor_singular_value -pc_type lu -pc_factor_mat_ordering_type rcm 0 KSP Residual norm 4.424612243135e+02 % max 1 min 1 max/min 1 1 KSP Residual norm 1.211880345965e-09 % max 1 min 1 max/min 1 Linear solve converged due to CONVERGED_RTOL iterations 1 Number of iterations = 1 Residual norm 0.00398084 also, $ ./ex10 -f ~/dl/binaryoutput -ksp_converged_reason -pc_type lu -pc_factor_mat_solver_package umfpack Linear solve converged due to CONVERGED_RTOL iterations 2 Number of iterations = 2 Residual norm 0.0797212 $ ./ex10 -f ~/dl/binaryoutput -ksp_converged_reason -pc_type lu -pc_factor_mat_solver_package mumps Linear solve converged due to CONVERGED_RTOL iterations 1 Number of iterations = 1 Residual norm 0.00322144 I agree with Matt about trying to reformulate the problem, at least if you need a scalable preconditioner. Jed -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 260 bytes Desc: OpenPGP digital signature URL: From bsmith at mcs.anl.gov Fri May 15 17:13:20 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 15 May 2009 17:13:20 -0500 Subject: solving a particular linear equation system In-Reply-To: <4A0DE85F.70901@59A2.org> References: <4A0DE85F.70901@59A2.org> Message-ID: <6BD52FB0-CEF8-42A3-B481-5DB222E172E1@mcs.anl.gov> Note that even in these cases the resulting residual norm is very large, relative to 1.e-14. You might try superlu (not superlu_dist) it tries hard to do enough pivoting to generate an accurate answer. Barry On May 15, 2009, at 5:10 PM, Jed Brown wrote: > Matthew Knepley wrote: >> I do not think its fine. I ran with KSP ex10: >> >> knepley at khan:/PETSc3/petsc/petsc-dev/src/ksp/ksp/examples/tutorials >> $ ./ex10 >> -f0 ~/Desktop/binaryoutput -ksp_monitor -pc_type lu -ksp_type preonly >> Number of iterations = 1 >> Residual norm 1.12564e+06 > > Interesting, this instability is highly dependent on the ordering > > $ ./ex10 -f ~/dl/binaryoutput -ksp_converged_reason - > ksp_monitor_singular_value -pc_type lu -pc_factor_mat_ordering_type > rcm > 0 KSP Residual norm 4.424612243135e+02 % max 1 min 1 max/min 1 > 1 KSP Residual norm 1.211880345965e-09 % max 1 min 1 max/min 1 > Linear solve converged due to CONVERGED_RTOL iterations 1 > Number of iterations = 1 > Residual norm 0.00398084 > > > also, > > $ ./ex10 -f ~/dl/binaryoutput -ksp_converged_reason -pc_type lu - > pc_factor_mat_solver_package umfpack > Linear solve converged due to CONVERGED_RTOL iterations 2 > Number of iterations = 2 > Residual norm 0.0797212 > > $ ./ex10 -f ~/dl/binaryoutput -ksp_converged_reason -pc_type lu - > pc_factor_mat_solver_package mumps > Linear solve converged due to CONVERGED_RTOL iterations 1 > Number of iterations = 1 > Residual norm 0.00322144 > > > I agree with Matt about trying to reformulate the problem, at least if > you need a scalable preconditioner. 
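[Barry's superlu suggestion, for the same ex10 test as the runs above, would look something like the following; this assumes PETSc was configured with SuperLU available (e.g. via --download-superlu), and the option name is the same -pc_factor_mat_solver_package already used for umfpack and mumps:

    ./ex10 -f binaryoutput -ksp_converged_reason -pc_type lu -pc_factor_mat_solver_package superlu

Whether SuperLU's pivoting actually recovers more accuracy than the reordered PETSc factorization depends on the matrix, so this is one more data point to try rather than a fix for the conditioning itself.]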
> > Jed > From knepley at gmail.com Fri May 15 17:15:02 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 15 May 2009 17:15:02 -0500 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: On Fri, May 15, 2009 at 5:10 PM, Tahar Amari wrote: > I changed CLINKER to g++ and the symbols where found. > This does not make sense. What version are you using? The latest? > Now I have other remaining errors which seams to be link with some petsc > fortran ? > Do you have any idea please where (which petsc library) those symbols are > supposed to be in ? > Those symbols are not in PETSc. They look like Fortran symbols, and so should be included in PETSC_TS_LIB, if you configured with the same Fortran compiler that you used to compile those files. If you want to talk about it more, more the discussion to petsc-maint at mcs.anl.gov and send the configure.log. Matt > Many thanks > > Tahar > > Undefined symbols: > "_for_stop_core", referenced from: > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _terminators_mp_terminator_ in terminator.o > _terminators_mp_terminator_all_ in terminator.o > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o > _xerbla_ in libfblas.a(xerbla.o) > "_for_exit", referenced from: > _tetrahedralgrid_mod_mp_partition_tetragrid_ in tetrahedralgrid.o > "_for_write_seq", referenced from: > _wrrsfile_ in mh4d.o > _wrrsfile_ in mh4d.o > _wrrsfile_ in mh4d.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > 
_tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_write_tetragrid_data_ in tetrahedralgrid.o > "_for_check_mult_overflow", referenced from: > > > > > Le 16 mai 09 ? 00:00, Matthew Knepley a ?crit : > > On Fri, May 15, 2009 at 4:56 PM, Tahar Amari wrote: > >> I am sorry (I might have missed something) >> >> >> If you want to use C++, you should configure using --with-clanguage=cxx. >>> Then you will get the C++ linker. >>> >> >> I do not really want to use C++ linker >> >> I did it with the C Linker and got an error. I do not see where the C++ is >> now used > > > You have C++ code in there somewhere. It is hard to see what is going on > since we do not have the source. > > Matt > > >> >> Tahar >> >> cc -o mh4d mh4d.o petsc.o comm.o setbc.o local.o gridutil.o mympi.o >> terminator.o operator.o shellsort.o edge.o side.o vertex.o tetrahedron.o >> rotation.o tetrahedralgrid.o field.o -I/usr/local/hdf/HDF4.2r1/include >> -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf -lsz -ljpeg -lz >> -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat -lpetsccontrib >> -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx -lpmpich -lfmpich >> -lmpichf90 -lparmetis -lmetis -lfblas -lflapack -L/usr/X11R6/lib -lX11 -lXt >> -lXext -lX11 -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat >> -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx >> -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis -lfblas -lflapack >> Undefined symbols: >> "std::basic_ostringstream, >> std::allocator >::basic_ostringstream(std::_Ios_Openmode)", referenced >> from: >> PetscErrorCxx(int, char const*, char const*, char const*, int, int)in >> libpetsc.a(err.o) >> PETSc::Exception::Exception(std::basic_string> std::char_traits, std::allocator > const&)in libpetsc.a(err.o) >> "_for_stop_core", referenced from: >> _advmom_cv_ in mh4d.o >> _advmom_cv_ in mh4d.o >> _advmom_cv_ in mh4d.o >> _terminators_mp_terminator_ in terminator.o >> _terminators_mp_terminator_all_ in terminator.o >> _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_tvvaxpy_ in 
tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o >> _xerbla_ in libfblas.a(xerbla.o) >> "_for_exit", referenced from: >> _tetrahedralgrid_mod_mp_partition_tetragrid_ in tetrahedralgrid.o >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri May 15 17:21:29 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 15 May 2009 17:21:29 -0500 (CDT) Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: If using PETSc from fortran - why use --with-clanguage=cxx ? Also - verify if PETSc examples work - before attempting your code. And then use PETSc makefiles. [i.e copy the makefile from PETSc example dir - and modify the target to build your code] Satish On Sat, 16 May 2009, Tahar Amari wrote: > Just to let you know is that I built petsc with c++ anyway since I did > > configure -with-cc=gcc --download-mpich=1 --download-parmetis=1 > --with-#shared=0 --download-f-blas-lapack --with-clanguage=cxx --with-cxx=g++ > --with-fc=ifort --with-dynamic=0 > > > Tahar > > > Le 15 mai 09 ? 23:49, Matthew Knepley a ?crit : > > > On Fri, May 15, 2009 at 4:35 PM, Tahar Amari > > wrote: > > Thanks , > > > > Now this a different kind of errors (just a piece of those below :) > > > > If you want to use C++, you should configure using --with-clanguage=cxx. > > Then you will get the C++ linker. 
> > > > Matt > > > > > > gcc -o mh4d mh4d.o petsc.o comm.o setbc.o local.o gridutil.o mympi.o > > terminator.o operator.o shellsort.o edge.o side.o vertex.o tetrahedron.o > > rotation.o tetrahedralgrid.o field.o -I/usr/local/hdf/HDF4.2r1/include > > -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf -lsz -ljpeg -lz > > -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat -lpetsccontrib > > -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx -lpmpich -lfmpich > > -lmpichf90 -lparmetis -lmetis -lfblas -lflapack -L/usr/X11R6/lib -lX11 -lXt > > -lXext -lX11 -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat > > -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx > > -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis -lfblas -lflapack > > Undefined symbols: > > "std::basic_ostringstream, std::allocator > > >::basic_ostringstream(std::_Ios_Openmode)", referenced from: > > PetscErrorCxx(int, char const*, char const*, char const*, int, int)in > > libpetsc.a(err.o) > > PETSc::Exception::Exception(std::basic_string > std::char_traits, std::allocator > const&)in libpetsc.a(err.o) > > "_for_stop_core", referenced from: > > _advmom_cv_ in mh4d.o > > _advmom_cv_ in mh4d.o > > _advmom_cv_ in mh4d.o > > _terminators_mp_terminator_ in terminator.o > > _terminators_mp_terminator_all_ in terminator.o > > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > > > > > > > > > > Le 15 mai 09 ? 23:29, Matthew Knepley a ?crit : > > > > CLINKER > > > > > > > > > > -- > > What most experimenters take for granted before they begin their experiments > > is infinitely more interesting than any results to which their experiments > > lead. > > -- Norbert Wiener > From amari at cpht.polytechnique.fr Fri May 15 17:21:52 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Sat, 16 May 2009 00:21:52 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: My configure was (as you can see for petsc-3.0.0-p5 ) configure -with-cc=gcc --download-mpich=1 --download-parmetis=1 --with- #shared=0 --download-f-blas-lapack --with-clanguage=cxx --with-cxx=g++ --with-fc=ifort --with-dynamic=0 sudo cp -R petsc-3.0.0-p5 /usr/local/ set version=-3.0.0-p5 sudo ln -s petsc${version} petsc You are right , once I did g++ , found after that those were symbols of the intel compiler. So I am a little embarrassed because if I use as linker ifort , It complains about g++ symbols and if I use g++ it complains about ifort symbols ? Do you think there is a solution to this please ? Tahar Le 16 mai 09 ? 00:15, Matthew Knepley a ?crit : > On Fri, May 15, 2009 at 5:10 PM, Tahar Amari > wrote: > I changed CLINKER to g++ and the symbols where found. > > This does not make sense. What version are you using? The latest? > > Now I have other remaining errors which seams to be link with some > petsc fortran ? > Do you have any idea please where (which petsc library) those > symbols are supposed to be in ? > > Those symbols are not in PETSc. They look like Fortran symbols, and > so should be included in > PETSC_TS_LIB, if you configured with the same Fortran compiler that > you used to compile > those files. > > If you want to talk about it more, more the discussion to petsc-maint at mcs.anl.gov > and send the > configure.log. 
> > Matt > > Many thanks > > Tahar > > Undefined symbols: > "_for_stop_core", referenced from: > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _advmom_cv_ in mh4d.o > _terminators_mp_terminator_ in terminator.o > _terminators_mp_terminator_all_ in terminator.o > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o > _xerbla_ in libfblas.a(xerbla.o) > "_for_exit", referenced from: > _tetrahedralgrid_mod_mp_partition_tetragrid_ in > tetrahedralgrid.o > "_for_write_seq", referenced from: > _wrrsfile_ in mh4d.o > _wrrsfile_ in mh4d.o > _wrrsfile_ in mh4d.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvs_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tcv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_save_tvv_ in tetrahedralgrid.o > _tetrahedralgrid_mod_mp_write_tetragrid_data_ in > tetrahedralgrid.o > _tetrahedralgrid_mod_mp_write_tetragrid_data_ in > tetrahedralgrid.o > _tetrahedralgrid_mod_mp_write_tetragrid_data_ in > tetrahedralgrid.o > _tetrahedralgrid_mod_mp_write_tetragrid_data_ in > tetrahedralgrid.o > _tetrahedralgrid_mod_mp_write_tetragrid_data_ in > tetrahedralgrid.o > _tetrahedralgrid_mod_mp_write_tetragrid_data_ in > tetrahedralgrid.o > 
_tetrahedralgrid_mod_mp_write_tetragrid_data_ in > tetrahedralgrid.o > "_for_check_mult_overflow", referenced from: > > > > > Le 16 mai 09 ? 00:00, Matthew Knepley a ?crit : > >> On Fri, May 15, 2009 at 4:56 PM, Tahar Amari > > wrote: >> I am sorry (I might have missed something) >> >> >> >> If you want to use C++, you should configure using --with- >> clanguage=cxx. Then you will get the C++ linker. >> >> I do not really want to use C++ linker >> >> I did it with the C Linker and got an error. I do not see where the >> C++ is now used >> >> You have C++ code in there somewhere. It is hard to see what is >> going on since we do not have the source. >> >> Matt >> >> >> Tahar >> >> >> cc -o mh4d mh4d.o petsc.o comm.o setbc.o local.o gridutil.o >> mympi.o terminator.o operator.o shellsort.o edge.o side.o vertex.o >> tetrahedron.o rotation.o tetrahedralgrid.o field.o -I/usr/local/ >> hdf/HDF4.2r1/include -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf - >> lsz -ljpeg -lz -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec - >> lpetscmat -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc - >> lmpich -lmpichcxx -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis - >> lfblas -lflapack -L/usr/X11R6/lib -lX11 -lXt -lXext -lX11 -L/usr/ >> local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat - >> lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich - >> lmpichcxx -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis -lfblas - >> lflapack >> Undefined symbols: >> "std::basic_ostringstream, >> std::allocator >::basic_ostringstream(std::_Ios_Openmode)", >> referenced from: >> PetscErrorCxx(int, char const*, char const*, char const*, int, >> int)in libpetsc.a(err.o) >> PETSc::Exception::Exception(std::basic_string> std::char_traits, std::allocator > const&)in >> libpetsc.a(err.o) >> "_for_stop_core", referenced from: >> _advmom_cv_ in mh4d.o >> _advmom_cv_ in mh4d.o >> _advmom_cv_ in mh4d.o >> _terminators_mp_terminator_ in terminator.o >> _terminators_mp_terminator_all_ in terminator.o >> _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_idnt_bndr_tvv_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_operator_bc_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_tvvaxpy_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_scalar_bc_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_zero_bndr_tvs_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_operator_bc0_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o >> _tetrahedralgrid_mod_mp_v2v_operator_bct_ in tetrahedralgrid.o >> _xerbla_ in libfblas.a(xerbla.o) >> "_for_exit", referenced from: >> _tetrahedralgrid_mod_mp_partition_tetragrid_ in >> tetrahedralgrid.o >> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to >> which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. 
> -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri May 15 17:26:17 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 15 May 2009 17:26:17 -0500 (CDT) Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: On Sat, 16 May 2009, Tahar Amari wrote: > My configure was (as you can see for petsc-3.0.0-p5 ) > > configure -with-cc=gcc --download-mpich=1 --download-parmetis=1 > --with-#shared=0 --download-f-blas-lapack --with-clanguage=cxx --with-cxx=g++ > --with-fc=ifort --with-dynamic=0 > > sudo cp -R petsc-3.0.0-p5 /usr/local/ > set version=-3.0.0-p5 > sudo ln -s petsc${version} petsc > > > You are right , once I did g++ , found after that those were symbols of the > intel compiler. > > > > So I am a little embarrassed because if I use as linker ifort , It complains > about g++ symbols > and if I use g++ it complains about ifort symbols ? > > Do you think there is a solution to this please ? Verify if PETSc examples work [both C and fortran] - and if so - use PETSc makefiles.. Attaching one in the simplest form [after removing the unnecessary stuff from - src/ksp/ksp/examples/tutorials/makefile] Satish -------------- next part -------------- CFLAGS = FFLAGS = CPPFLAGS = FPPFLAGS = CLEANFILES = include ${PETSC_DIR}/conf/base ex1: ex1.o chkopts -${CLINKER} -o ex1 ex1.o ${PETSC_KSP_LIB} ${RM} ex1.o From amari at cpht.polytechnique.fr Fri May 15 17:26:37 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Sat, 16 May 2009 00:26:37 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: By the way I forgot to say that , I build mpich with C++ and fortran and both works. So I assumed that petsc would work too. Tahar Le 16 mai 09 ? 00:21, Satish Balay a ?crit : > If using PETSc from fortran - why use --with-clanguage=cxx ? > > Also - verify if PETSc examples work - before attempting your code. > And then use PETSc makefiles. > > [i.e copy the makefile from PETSc example dir - and modify the target > to build your code] > > Satish > > On Sat, 16 May 2009, Tahar Amari wrote: > >> Just to let you know is that I built petsc with c++ anyway since I >> did >> >> configure -with-cc=gcc --download-mpich=1 --download-parmetis=1 >> --with-#shared=0 --download-f-blas-lapack --with-clanguage=cxx -- >> with-cxx=g++ >> --with-fc=ifort --with-dynamic=0 >> >> >> Tahar >> >> >> Le 15 mai 09 ? 23:49, Matthew Knepley a ?crit : >> >>> On Fri, May 15, 2009 at 4:35 PM, Tahar Amari >> > >>> wrote: >>> Thanks , >>> >>> Now this a different kind of errors (just a piece of those below :) >>> >>> If you want to use C++, you should configure using --with- >>> clanguage=cxx. >>> Then you will get the C++ linker. 
>>> >>> Matt >>> >>> >>> gcc -o mh4d mh4d.o petsc.o comm.o setbc.o local.o gridutil.o >>> mympi.o >>> terminator.o operator.o shellsort.o edge.o side.o vertex.o >>> tetrahedron.o >>> rotation.o tetrahedralgrid.o field.o -I/usr/local/hdf/HDF4.2r1/ >>> include >>> -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf -lsz -ljpeg -lz >>> -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat - >>> lpetsccontrib >>> -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx -lpmpich >>> -lfmpich >>> -lmpichf90 -lparmetis -lmetis -lfblas -lflapack -L/usr/X11R6/lib - >>> lX11 -lXt >>> -lXext -lX11 -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec - >>> lpetscmat >>> -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich - >>> lmpichcxx >>> -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis -lfblas -lflapack >>> Undefined symbols: >>> "std::basic_ostringstream, >>> std::allocator >>>> ::basic_ostringstream(std::_Ios_Openmode)", referenced from: >>> PetscErrorCxx(int, char const*, char const*, char const*, int, >>> int)in >>> libpetsc.a(err.o) >>> PETSc::Exception::Exception(std::basic_string>> std::char_traits, std::allocator > const&)in >>> libpetsc.a(err.o) >>> "_for_stop_core", referenced from: >>> _advmom_cv_ in mh4d.o >>> _advmom_cv_ in mh4d.o >>> _advmom_cv_ in mh4d.o >>> _terminators_mp_terminator_ in terminator.o >>> _terminators_mp_terminator_all_ in terminator.o >>> _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o >>> >>> >>> >>> >>> Le 15 mai 09 ? 23:29, Matthew Knepley a ?crit : >>> >>> CLINKER >>> >>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments >>> is infinitely more interesting than any results to which their >>> experiments >>> lead. >>> -- Norbert Wiener >> From balay at mcs.anl.gov Fri May 15 17:28:23 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 15 May 2009 17:28:23 -0500 (CDT) Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: PETSc works from C,C++,Fortran. If you are using PETSc from fortran - then you don't need --with-clanguage=cxx [irrespective of how MPICH is built] Satish On Sat, 16 May 2009, Tahar Amari wrote: > By the way I forgot to say that , > > I build mpich with C++ and fortran and both works. So I assumed that petsc > would work too. > > > Tahar > > > Le 16 mai 09 ? 00:21, Satish Balay a ?crit : > > > If using PETSc from fortran - why use --with-clanguage=cxx ? > > > > Also - verify if PETSc examples work - before attempting your code. > > And then use PETSc makefiles. > > > > [i.e copy the makefile from PETSc example dir - and modify the target > > to build your code] > > > > Satish > > > > On Sat, 16 May 2009, Tahar Amari wrote: > > > > > Just to let you know is that I built petsc with c++ anyway since I did > > > > > > configure -with-cc=gcc --download-mpich=1 --download-parmetis=1 > > > --with-#shared=0 --download-f-blas-lapack --with-clanguage=cxx > > > --with-cxx=g++ > > > --with-fc=ifort --with-dynamic=0 > > > > > > > > > Tahar > > > > > > > > > Le 15 mai 09 ? 
23:49, Matthew Knepley a ?crit : > > > > > > > On Fri, May 15, 2009 at 4:35 PM, Tahar Amari > > > > > > > > wrote: > > > > Thanks , > > > > > > > > Now this a different kind of errors (just a piece of those below :) > > > > > > > > If you want to use C++, you should configure using --with-clanguage=cxx. > > > > Then you will get the C++ linker. > > > > > > > > Matt > > > > > > > > > > > > gcc -o mh4d mh4d.o petsc.o comm.o setbc.o local.o gridutil.o mympi.o > > > > terminator.o operator.o shellsort.o edge.o side.o vertex.o tetrahedron.o > > > > rotation.o tetrahedralgrid.o field.o -I/usr/local/hdf/HDF4.2r1/include > > > > -L/usr/local/hdf/HDF4.2r1/lib -lmfhdf -ldf -lsz -ljpeg -lz > > > > -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec -lpetscmat > > > > -lpetsccontrib > > > > -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx -lpmpich > > > > -lfmpich > > > > -lmpichf90 -lparmetis -lmetis -lfblas -lflapack -L/usr/X11R6/lib -lX11 > > > > -lXt > > > > -lXext -lX11 -L/usr/local/petsc/macx/lib -lpetscsnes -lpetscvec > > > > -lpetscmat > > > > -lpetsccontrib -lpetscts -lpetscdm -lpetscksp -lpetsc -lmpich -lmpichcxx > > > > -lpmpich -lfmpich -lmpichf90 -lparmetis -lmetis -lfblas -lflapack > > > > Undefined symbols: > > > > "std::basic_ostringstream, > > > > std::allocator > > > > > ::basic_ostringstream(std::_Ios_Openmode)", referenced from: > > > > PetscErrorCxx(int, char const*, char const*, char const*, int, int)in > > > > libpetsc.a(err.o) > > > > PETSc::Exception::Exception(std::basic_string > > > std::char_traits, std::allocator > const&)in > > > > libpetsc.a(err.o) > > > > "_for_stop_core", referenced from: > > > > _advmom_cv_ in mh4d.o > > > > _advmom_cv_ in mh4d.o > > > > _advmom_cv_ in mh4d.o > > > > _terminators_mp_terminator_ in terminator.o > > > > _terminators_mp_terminator_all_ in terminator.o > > > > _tetrahedralgrid_mod_mp_zero_bndr_tvv_ in tetrahedralgrid.o > > > > > > > > > > > > > > > > > > > > Le 15 mai 09 ? 23:29, Matthew Knepley a ?crit : > > > > > > > > CLINKER > > > > > > > > > > > > > > > > > > > > -- > > > > What most experimenters take for granted before they begin their > > > > experiments > > > > is infinitely more interesting than any results to which their > > > > experiments > > > > lead. > > > > -- Norbert Wiener > > > > From amari at cpht.polytechnique.fr Fri May 15 17:29:55 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Sat, 16 May 2009 00:29:55 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: <1560469B-ABA8-4C20-AC06-4A8C42C4BD51@cpht.polytechnique.fr> The reason is that : I am developing different codes . Some are in fortran, others in C and others in C++. Why should I built only the fortran please ? I did make tests and those were OK. Do you mean it is not sufficient ? Tahar Le 16 mai 09 ? 00:21, Satish Balay a ?crit : > If using PETSc from fortran - why use --with-clanguage=cxx ? > > Also - verify if PETSc examples work - before attempting your code. > And then use PETSc makefiles. 
> > [i.e copy the makefile from PETSc example dir - and modify the target > to build your code] > > Satish > > On Sat, 16 May 2009, Tahar Amari wrote: > >> Just to let you know is that I built petsc with c++ anyway since I >> did >> >> configure -with-cc=gcc --download-mpich=1 --download-parmetis=1 >> --with-#shared=0 --download-f-blas-lapack --with-clanguage=cxx -- >> with-cxx=g++ >> --with-fc=ifort --with-dynamic=0 >> >> >> Tahar >> >> >> From balay at mcs.anl.gov Fri May 15 17:29:59 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 15 May 2009 17:29:59 -0500 (CDT) Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: On Fri, 15 May 2009, Satish Balay wrote: > Verify if PETSc examples work [both C and fortran] - and if so - use > PETSc makefiles.. > > Attaching one in the simplest form [after removing the unnecessary > stuff from - src/ksp/ksp/examples/tutorials/makefile] You are using PETSc from fortran - so use the attached makefile and replace ex1f[.F] with the name of your source file[s]. Satish -------------- next part -------------- CFLAGS = FFLAGS = CPPFLAGS = FPPFLAGS = LOCDIR = CLEANFILES = include ${PETSC_DIR}/conf/base ex1f: ex1f.o chkopts -${FLINKER} -o ex1f ex1f.o ${PETSC_KSP_LIB} ${RM} ex1f.o From balay at mcs.anl.gov Fri May 15 17:37:11 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 15 May 2009 17:37:11 -0500 (CDT) Subject: include file fortran In-Reply-To: <1560469B-ABA8-4C20-AC06-4A8C42C4BD51@cpht.polytechnique.fr> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> <1560469B-ABA8-4C20-AC06-4A8C42C4BD51@cpht.polytechnique.fr> Message-ID: On Sat, 16 May 2009, Tahar Amari wrote: > The reason is that : > I am developing different codes . Some are > in fortran, others in C and others in C++. > Why should I built only the fortran please ? You either build C/fortran or C++/fortran. > I did make tests and those were OK. Do you mean it is not sufficient ? Yes this is sufficient. You can have a single C++/fortran build of PETSc and use from all the 3 cases above [usually - by compiling your c application as c++]. Satish From amari at cpht.polytechnique.fr Fri May 15 17:37:52 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Sat, 16 May 2009 00:37:52 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> Message-ID: <9D354721-3A7F-49DE-A2A9-48161CB220B3@cpht.polytechnique.fr> It seems the line include ${PETSC_DIR}/conf/base is not taken into account the make command on my Mac OS. Is it ony a gnu make directive please ? Tahar Le 16 mai 09 ? 00:29, Satish Balay a ?crit : > On Fri, 15 May 2009, Satish Balay wrote: > >> Verify if PETSc examples work [both C and fortran] - and if so - use >> PETSc makefiles.. 
>> >> Attaching one in the simplest form [after removing the unnecessary >> stuff from - src/ksp/ksp/examples/tutorials/makefile] > > You are using PETSc from fortran - so use the attached makefile > and replace ex1f[.F] with the name of your source file[s]. > > Satish From amari at cpht.polytechnique.fr Fri May 15 17:39:03 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Sat, 16 May 2009 00:39:03 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> <1560469B-ABA8-4C20-AC06-4A8C42C4BD51@cpht.polytechnique.fr> Message-ID: <4F4FA418-C71E-486A-B8AC-D864338F6539@cpht.polytechnique.fr> Thanks a lot for this information. Tahar Le 16 mai 09 ? 00:37, Satish Balay a ?crit : > On Sat, 16 May 2009, Tahar Amari wrote: > >> The reason is that : >> I am developing different codes . Some are >> in fortran, others in C and others in C++. > >> Why should I built only the fortran please ? > > You either build C/fortran or C++/fortran. > >> I did make tests and those were OK. Do you mean it is not >> sufficient ? > > Yes this is sufficient. > > You can have a single C++/fortran build of PETSc and use from all the > 3 cases above [usually - by compiling your c application as c++]. > > Satish From balay at mcs.anl.gov Fri May 15 17:41:01 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 15 May 2009 17:41:01 -0500 (CDT) Subject: include file fortran In-Reply-To: <9D354721-3A7F-49DE-A2A9-48161CB220B3@cpht.polytechnique.fr> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> <9D354721-3A7F-49DE-A2A9-48161CB220B3@cpht.polytechnique.fr> Message-ID: On Sat, 16 May 2009, Tahar Amari wrote: > It seems the line > > include ${PETSC_DIR}/conf/base > > is not taken into account the make command > on my Mac OS. > Is it ony a gnu make directive please ? It should work with all makes. And you have gnumake on the Mac. Did you say PETSc examples worked fine with the example makefile? What errors do you get now? Satish From amari at cpht.polytechnique.fr Fri May 15 17:49:46 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Sat, 16 May 2009 00:49:46 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <2AFB9704-A83A-4B7F-B996-563618937D60@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> <9D354721-3A7F-49DE-A2A9-48161CB220B3@cpht.polytechnique.fr> Message-ID: <0C04D3D0-70F2-48AF-9E2D-CAAC6A9DCE88@cpht.polytechnique.fr> I do not have any "reply". Following your advice I will rebuild PETSC with only C and fortran (even when I do C++ I do not intend to use the C++ wrappers, but the C ones) at the level of linear algebra. Would you aggree with this ? I followed the advice of not loading -- download-f-blas-lapack since one of your colleague explained that it should look for the APPLE one ? I changed with-cc=gcc by with-cc=cc, is is OK ? configure -with-cc=cc --download-mpich=1 --download-parmetis=1 --with- shared=0 --with-fc=ifort --with-dynamic=0 Best regards, Tahar Le 16 mai 09 ? 
00:41, Satish Balay a ?crit : > On Sat, 16 May 2009, Tahar Amari wrote: > >> It seems the line >> >> include ${PETSC_DIR}/conf/base >> >> is not taken into account the make command >> on my Mac OS. >> Is it ony a gnu make directive please ? > > It should work with all makes. And you have gnumake on the Mac. > > Did you say PETSc examples worked fine with the example makefile? > > What errors do you get now? > > Satish From balay at mcs.anl.gov Fri May 15 18:54:35 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 15 May 2009 18:54:35 -0500 (CDT) Subject: include file fortran In-Reply-To: <0C04D3D0-70F2-48AF-9E2D-CAAC6A9DCE88@cpht.polytechnique.fr> References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> <9D354721-3A7F-49DE-A2A9-48161CB220B3@cpht.polytechnique.fr> <0C04D3D0-70F2-48AF-9E2D-CAAC6A9DCE88@cpht.polytechnique.fr> Message-ID: On Sat, 16 May 2009, Tahar Amari wrote: > I do not have any "reply". > > Following your advice I will rebuild PETSC with > > only C and fortran (even when I do C++ I do not intend to use the C++ > wrappers, but the C ones) at the level > of linear algebra. If you have PETSc calls from C++ code - its best to build c++/fortran version as indicated before. [but using c-petsc from c++ code is also possible - it just means the makefile will get more complex] > Would you aggree with this ? I followed the advice of not loading > --download-f-blas-lapack > since one of your colleague explained that it should look for the APPLE one ? > > I changed with-cc=gcc by with-cc=cc, is is OK ? cc is same as gcc > > configure -with-cc=cc --download-mpich=1 --download-parmetis=1 --with-shared=0 > --with-fc=ifort --with-dynamic=0 If you still have problems compiling your fortran code with a PETSc makefile - let us know.. Satish From amari at cpht.polytechnique.fr Fri May 15 19:03:27 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Sat, 16 May 2009 02:03:27 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> <9D354721-3A7F-49DE-A2A9-48161CB220B3@cpht.polytechnique.fr> <0C04D3D0-70F2-48AF-9E2D-CAAC6A9DCE88@cpht.polytechnique.fr> Message-ID: <82CB2900-639B-470F-8C37-8A56D0A5663C@cpht.polytechnique.fr> Hello, Thanks a lot, I will let you know. Tahar Le 16 mai 09 ? 01:54, Satish Balay a ?crit : > On Sat, 16 May 2009, Tahar Amari wrote: > >> I do not have any "reply". >> >> Following your advice I will rebuild PETSC with >> >> only C and fortran (even when I do C++ I do not intend to use the C++ >> wrappers, but the C ones) at the level >> of linear algebra. > > If you have PETSc calls from C++ code - its best to build c++/fortran > version as indicated before. [but using c-petsc from c++ code is also > possible - it just means the makefile will get more complex] > >> Would you aggree with this ? I followed the advice of not loading >> --download-f-blas-lapack >> since one of your colleague explained that it should look for the >> APPLE one ? >> >> I changed with-cc=gcc by with-cc=cc, is is OK ? > > cc is same as gcc > >> >> configure -with-cc=cc --download-mpich=1 --download-parmetis=1 -- >> with-shared=0 >> --with-fc=ifort --with-dynamic=0 > > If you still have problems compiling your fortran code with a PETSc > makefile - let us know.. 
> > Satish From amari at cpht.polytechnique.fr Mon May 18 07:00:57 2009 From: amari at cpht.polytechnique.fr (Tahar Amari) Date: Mon, 18 May 2009 14:00:57 +0200 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> <9D354721-3A7F-49DE-A2A9-48161CB220B3@cpht.polytechnique.fr> <0C04D3D0-70F2-48AF-9E2D-CAAC6A9DCE88@cpht.polytechnique.fr> Message-ID: Hello, I finally succeed building Petsc and the fortran code thanks to your advice . However when I run the code (piece of it is below) if(pe==0) write(*,*) "before MatCopy" CALL MatCopy(dvoeta_petsc,lhs_petsc,SAME_NONZERO_PATTERN, & ierror) if(pe==0) write(*,*) "after MatCopy" I got the following message (in between my printing for debugging) . before MatCopy [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Corrupt argument: see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Corrupt ! [0]PETSC ERROR: Invalid Pointer to Object: Parameter # 1! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 5, Mon Apr 13 09:15:37 CDT 2009 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: mh4d on a macx named imac-de-tahar-amari.local by amari Mon May 18 13:47:42 2009 [0]PETSC ERROR: Libraries linked from /Data/Poub1/petsc-3.0.0-p5/macx/ lib [0]PETSC ERROR: Configure run at Sat May 16 13:49:21 2009 [0]PETSC ERROR: Configure options -with-cc=gcc --download-mpich=1 -- download-f-bl after MatCopy I looked at the piece of code which define the first argument "dvoeta_petsc" which is Mat, SAVE :: dvoeta_petsc CALL init_petsc_mat_vector(dvoeta_petsc,diagonal=dvoeta) This code was working with Petsc version 2.xxx I wonder if some changes to version 3 might be the reason ? May it be the call to CALL MatSetOption(matrix,MAT_NO_NEW_NONZERO_LOCATIONS,ierr) in this function please ? c subprogram 8. init_petsc_mat_vector c Make a petsc matrix out of an operator c----------------------------------------------------------------------- SUBROUTINE init_petsc_mat_vector(matrix,v2v_operator_vv,diagonal, &bctype) c----------------------------------------------------------------------- USE operator_mod #include "include/finclude/petsc.h" #include "include/finclude/petscvec.h" #include "include/finclude/petscmat.h" #include "include/finclude/petscao.h" c----------------------------------------------------------------------- Mat, INTENT(INOUT) :: matrix INTERFACE SUBROUTINE v2v_operator_vv(x) USE operator_mod TYPE (v2v_operators), INTENT(INOUT) :: x(:) END SUBROUTINE v2v_operator_vv END INTERFACE OPTIONAL :: v2v_operator_vv TYPE (vertex_scalar), INTENT(IN), OPTIONAL :: diagonal INTEGER(i4), INTENT(IN), OPTIONAL :: bctype INTEGER(i4) :: m,n,ierr,i,n1,m1,nv_local_inside, & iv_local,iv_couple,d_nz,o_nz TYPE(v2v_operators), ALLOCATABLE :: v2voperator(:) REAL(r8) :: v(3,3) LOGICAL, SAVE :: first=.true. 
c----------------------------------------------------------------------- CALL get_grid(thegrid,nv_local_inside=nv_local_inside) CALL get_nonzero(thegrid,d_nz,o_nz) CALL MatCreateMPIAIJ(comm_grid,3*nv_local_inside, & 3*nv_local_inside,PETSC_DECIDE,PETSC_DECIDE,3*d_nz, & PETSC_NULL_INTEGER,3*o_nz,PETSC_NULL_INTEGER,matrix,ierr) ALLOCATE(v2voperator(nv_local_inside)) c----------------------------------------------------------------------- c See if it is a diagonal operator or not. c----------------------------------------------------------------------- IF (PRESENT(diagonal)) THEN IF(first) CALL terminator("Error in init_petsc_mat_vector: "// & "You shouldn't call it the first time with a diagonal operator") IF(PRESENT(bctype)) THEN CALL v2v_diagonal_vv(v2voperator,diagonal,bctype) ELSE CALL v2v_diagonal_vv(v2voperator,diagonal,1) ENDIF ELSE CALL v2v_operator_vv(v2voperator) ENDIF DO iv_local=1,nv_local_inside CALL index_vv2petsc(m,iv_local,1) DO i=1,SIZE(v2voperator(iv_local)%neighbors) iv_couple=v2voperator(iv_local)%neighbors(i) CALL index_vv2petsc(n,iv_couple,1) v=v2voperator(iv_local)%coupling_with(i)%a DO m1=0,2 DO n1=0,2 CALL MatSetValues(matrix,1,m+m1,1,n+n1,v(1+m1,1+n1), & INSERT_VALUES,ierr) ENDDO ENDDO ENDDO ENDDO CALL MatAssemblyBegin(matrix,MAT_FINAL_ASSEMBLY,ierr) CALL MatAssemblyEnd(matrix,MAT_FINAL_ASSEMBLY,ierr) IF(first) THEN CALL MatSetOption(matrix,MAT_NO_NEW_NONZERO_LOCATIONS,ierr) first=.false. ENDIF CALL destroy(v2voperator) DEALLOCATE(v2voperator) RETURN From bsmith at mcs.anl.gov Mon May 18 08:23:59 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 18 May 2009 08:23:59 -0500 Subject: include file fortran In-Reply-To: References: <457B5D2A-2FD3-4FC5-B63E-6B8D9FCB086A@cpht.polytechnique.fr> <7D7DF9E1-B84B-4DBF-9BF2-285BA1180C3F@cpht.polytechnique.fr> <9FAD541C-70CD-4438-A14A-1FC2D5C2400D@cpht.polytechnique.fr> <9D354721-3A7F-49DE-A2A9-48161CB220B3@cpht.polytechnique.fr> <0C04D3D0-70F2-48AF-9E2D-CAAC6A9DCE88@cpht.polytechnique.fr> Message-ID: <9878ADDD-4238-4D55-B4AC-1DFB6BEFB41D@mcs.anl.gov> The calling sequence was changed for VecSetOption and MatSetOption see http://www.mcs.anl.gov/petsc/petsc-as/documentation/changes/300.html Barry On May 18, 2009, at 7:00 AM, Tahar Amari wrote: > Hello, > > I finally succeed building Petsc and the fortran code thanks to > your advice . > > However when I run the code (piece of it is below) > > if(pe==0) write(*,*) "before MatCopy" > CALL MatCopy(dvoeta_petsc,lhs_petsc,SAME_NONZERO_PATTERN, > & ierror) > if(pe==0) write(*,*) "after MatCopy" > > > I got the following message (in between my printing for debugging) . > > before MatCopy > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Corrupt argument: see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Corrupt > ! > [0]PETSC ERROR: Invalid Pointer to Object: Parameter # 1! > [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.0.0, Patch 5, Mon Apr 13 > 09:15:37 CDT 2009 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. 
> [0]PETSC ERROR: > ------------------------------------------------------------------------ > [0]PETSC ERROR: mh4d on a macx named imac-de-tahar-amari.local by > amari Mon May 18 13:47:42 2009 > [0]PETSC ERROR: Libraries linked from /Data/Poub1/petsc-3.0.0-p5/ > macx/lib > [0]PETSC ERROR: Configure run at Sat May 16 13:49:21 2009 > [0]PETSC ERROR: Configure options -with-cc=gcc --download-mpich=1 -- > download-f-bl > > after MatCopy > > > I looked at the piece of code which define the first argument > "dvoeta_petsc" > > which is > > Mat, SAVE :: dvoeta_petsc > CALL init_petsc_mat_vector(dvoeta_petsc,diagonal=dvoeta) > > > This code was working with Petsc version 2.xxx > I wonder if some changes to version 3 might be the reason ? > > > May it be the call to > > CALL MatSetOption(matrix,MAT_NO_NEW_NONZERO_LOCATIONS,ierr) > > in this function please ? > > c subprogram 8. init_petsc_mat_vector > c Make a petsc matrix out of an operator > c > ----------------------------------------------------------------------- > SUBROUTINE init_petsc_mat_vector(matrix,v2v_operator_vv,diagonal, > &bctype) > c > ----------------------------------------------------------------------- > USE operator_mod > #include "include/finclude/petsc.h" > #include "include/finclude/petscvec.h" > #include "include/finclude/petscmat.h" > #include "include/finclude/petscao.h" > > c > ----------------------------------------------------------------------- > Mat, INTENT(INOUT) :: matrix > INTERFACE > SUBROUTINE v2v_operator_vv(x) > USE operator_mod > TYPE (v2v_operators), INTENT(INOUT) :: x(:) > END SUBROUTINE v2v_operator_vv > END INTERFACE > OPTIONAL :: v2v_operator_vv > TYPE (vertex_scalar), INTENT(IN), OPTIONAL :: diagonal > INTEGER(i4), INTENT(IN), OPTIONAL :: bctype > INTEGER(i4) :: m,n,ierr,i,n1,m1,nv_local_inside, > & iv_local,iv_couple,d_nz,o_nz > TYPE(v2v_operators), ALLOCATABLE :: v2voperator(:) > REAL(r8) :: v(3,3) > LOGICAL, SAVE :: first=.true. > c > ----------------------------------------------------------------------- > > CALL get_grid(thegrid,nv_local_inside=nv_local_inside) > CALL get_nonzero(thegrid,d_nz,o_nz) > > CALL MatCreateMPIAIJ(comm_grid,3*nv_local_inside, > & 3*nv_local_inside,PETSC_DECIDE,PETSC_DECIDE,3*d_nz, > & PETSC_NULL_INTEGER,3*o_nz,PETSC_NULL_INTEGER,matrix,ierr) > > ALLOCATE(v2voperator(nv_local_inside)) > c > ----------------------------------------------------------------------- > c See if it is a diagonal operator or not. > c > ----------------------------------------------------------------------- > IF (PRESENT(diagonal)) THEN > IF(first) CALL terminator("Error in init_petsc_mat_vector: "// > & "You shouldn't call it the first time with a diagonal > operator") > IF(PRESENT(bctype)) THEN > CALL v2v_diagonal_vv(v2voperator,diagonal,bctype) > ELSE > CALL v2v_diagonal_vv(v2voperator,diagonal,1) > ENDIF > ELSE > CALL v2v_operator_vv(v2voperator) > ENDIF > > DO iv_local=1,nv_local_inside > CALL index_vv2petsc(m,iv_local,1) > DO i=1,SIZE(v2voperator(iv_local)%neighbors) > iv_couple=v2voperator(iv_local)%neighbors(i) > CALL index_vv2petsc(n,iv_couple,1) > v=v2voperator(iv_local)%coupling_with(i)%a > DO m1=0,2 > DO n1=0,2 > CALL MatSetValues(matrix,1,m+m1,1,n+n1,v(1+m1,1+n1), > & INSERT_VALUES,ierr) > ENDDO > ENDDO > ENDDO > ENDDO > > CALL MatAssemblyBegin(matrix,MAT_FINAL_ASSEMBLY,ierr) > CALL MatAssemblyEnd(matrix,MAT_FINAL_ASSEMBLY,ierr) > > IF(first) THEN > CALL MatSetOption(matrix,MAT_NO_NEW_NONZERO_LOCATIONS,ierr) > first=.false. 
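> c A sketch (not in the original post) of how the call above would look
> c under the 3.0.0 calling sequence Barry points to below: MatSetOption()
> c now takes an extra PetscTruth flag, and the negated option names are
> c assumed to become the positive name plus PETSC_FALSE, e.g.
> c CALL MatSetOption(matrix,MAT_NEW_NONZERO_LOCATIONS,PETSC_FALSE,ierr)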
> ENDIF > > CALL destroy(v2voperator) > DEALLOCATE(v2voperator) > > RETURN > > > > From tribur at vision.ee.ethz.ch Tue May 19 11:02:54 2009 From: tribur at vision.ee.ethz.ch (tribur at vision.ee.ethz.ch) Date: Tue, 19 May 2009 18:02:54 +0200 Subject: Time for MatAssembly Message-ID: <20090519180254.1414352rn2ez3226@email.ee.ethz.ch> Distinguished PETSc experts, Assuming processor k has defined N entries of a parallel matrix using MatSetValues. The half of the entries are in matrix rows belonging to this processor, but the other half are situated in rows of other processors. My question: When does MatAssemblyBegin+MatAssemblyEnd take longer, if the rows where the second half of the entries are situated belong all to one single other processor, e.g. processor k+1, or if these rows are distributed across several, let's say 4, other processors? Is there a significant difference? Thanks in advance for your answer, Trini From knepley at gmail.com Tue May 19 11:28:52 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 19 May 2009 11:28:52 -0500 Subject: Time for MatAssembly In-Reply-To: <20090519180254.1414352rn2ez3226@email.ee.ethz.ch> References: <20090519180254.1414352rn2ez3226@email.ee.ethz.ch> Message-ID: On Tue, May 19, 2009 at 11:02 AM, wrote: > Distinguished PETSc experts, > > Assuming processor k has defined N entries of a parallel matrix using > MatSetValues. The half of the entries are in matrix rows belonging to this > processor, but the other half are situated in rows of other processors. > > My question: > > When does MatAssemblyBegin+MatAssemblyEnd take longer, if the rows where > the second half of the entries are situated belong all to one single other > processor, e.g. processor k+1, or if these rows are distributed across > several, let's say 4, other processors? Is there a significant difference? Since we aggregate the rows and send a single message per proc, this is probably dominated by bandwidth, not latency. It takes the same bandwidth to send the messages, but if only one guys is sending, it is probably better to split the message. If everyone is doing the same thing, it will not matter at all. Generally, optimizing these things is WAY down the list of important things for runtime. Matt > > Thanks in advance for your answer, > Trini > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue May 19 11:47:39 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 19 May 2009 11:47:39 -0500 (CDT) Subject: Time for MatAssembly In-Reply-To: <20090519180254.1414352rn2ez3226@email.ee.ethz.ch> References: <20090519180254.1414352rn2ez3226@email.ee.ethz.ch> Message-ID: On Tue, 19 May 2009, tribur at vision.ee.ethz.ch wrote: > Distinguished PETSc experts, > > Assuming processor k has defined N entries of a parallel matrix using > MatSetValues. The half of the entries are in matrix rows belonging to this > processor, but the other half are situated in rows of other processors. > > My question: > > When does MatAssemblyBegin+MatAssemblyEnd take longer, if the rows where the > second half of the entries are situated belong all to one single other > processor, e.g. processor k+1, or if these rows are distributed across > several, let's say 4, other processors? Is there a significant difference? Obviously there will be a difference. 
But it will depend upon the network/MPI behavior. A single large one-to-one message vs multiple small all-to-all messages. Wrt PETSc part - you might have to make sure enough memory is allocated for these buffers. If the default is small - then there could be multiple malloc/copies that could slow things down. Run with '-info' and look for "stash". The number of mallocs here should be 0 for efficient matrix assembly [The stash size can be changed with a command line option -matstash_initial_size] Satish From balay at mcs.anl.gov Tue May 19 11:51:13 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 19 May 2009 11:51:13 -0500 (CDT) Subject: Time for MatAssembly In-Reply-To: References: <20090519180254.1414352rn2ez3226@email.ee.ethz.ch> Message-ID: On Tue, 19 May 2009, Satish Balay wrote: > On Tue, 19 May 2009, tribur at vision.ee.ethz.ch wrote: > > > Distinguished PETSc experts, > > > > Assuming processor k has defined N entries of a parallel matrix using > > MatSetValues. The half of the entries are in matrix rows belonging to this > > processor, but the other half are situated in rows of other processors. > > > > My question: > > > > When does MatAssemblyBegin+MatAssemblyEnd take longer, if the rows where the > > second half of the entries are situated belong all to one single other > > processor, e.g. processor k+1, or if these rows are distributed across > > several, let's say 4, other processors? Is there a significant difference? > > Obviously there will be a difference. But it will depend upon the > network/MPI behavior. > > A single large one-to-one message vs multiple small all-to-all messages. > > Wrt PETSc part - you might have to make sure enough memory is > allocated for these buffers. If the default is small - then there > could be multiple malloc/copies that could slow things down. > > Run with '-info' and look for "stash". The number of mallocs here > should be 0 for efficient matrix assembly [The stash size can be > changed with a command line option -matstash_initial_size] Another note: If you have lot of data movement during matassembly - you can do a MatAssemblyBegin/End(MAT_FLUSH_ASSEMBLY) - to flush out the currently accumulated off-proc-data - and continue with more MatSetValues(). It might help on some network/mpi types [we don't know for sure..].. Satish From kuiper at mpia-hd.mpg.de Wed May 20 10:28:27 2009 From: kuiper at mpia-hd.mpg.de (Rolf Kuiper) Date: Wed, 20 May 2009 17:28:27 +0200 Subject: Parallelization of KSPSolve() in multidimensions Message-ID: <668E765D-86D7-47BC-A60E-7B3275775FF1@mpia.de> Hi PETSc-Users, when solving an implicit equation with KSPSolve() in 3D (communication with 7-point-stencil) I experienced the following: Parallelization of the e.g. 64 x 64 x 64 domain on n cpus in the last direction (every cpu has a 64 x 64 x 64/n subdomain) leads to a parallel efficiency of approximately 90%, which is fine for us. Parallelization of the e.g. 64 x 64 x 64 domain on n cpus in more than one direction (every cpu has e.g. a 64 x 64/sqrt(n) x 64/sqrt(n) subdomain) leads to a parallel efficiency of approximately 10%, which is absolutely unusable. Is this behavior generally true for this kind of solver? If so, why? If not: What did I do wrong most presumably? Has anybody made the same experience and/or could help me? 
Thanks in advance, Rolf From bsmith at mcs.anl.gov Wed May 20 12:39:12 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 20 May 2009 12:39:12 -0500 Subject: Parallelization of KSPSolve() in multidimensions In-Reply-To: <668E765D-86D7-47BC-A60E-7B3275775FF1@mpia.de> References: <668E765D-86D7-47BC-A60E-7B3275775FF1@mpia.de> Message-ID: This is not universal behavior. We generally divide in all three dimensions. It is possibly a function of your machine and its network. You need a good network for sparse iterative solvers, no type of ethernet (no matter how good it claims to be) is suitable. You can send to petsc-maint at mcs.anl.gov the -log_summary output from a one process run, a slice in the last direction run and a 3d decomposition run and we'll take a look at it. Barry On May 20, 2009, at 10:28 AM, Rolf Kuiper wrote: > Hi PETSc-Users, > > when solving an implicit equation with KSPSolve() in 3D > (communication with 7-point-stencil) I experienced the following: > Parallelization of the e.g. 64 x 64 x 64 domain on n cpus in the > last direction (every cpu has a 64 x 64 x 64/n subdomain) leads to a > parallel efficiency of approximately 90%, which is fine for us. > Parallelization of the e.g. 64 x 64 x 64 domain on n cpus in more > than one direction (every cpu has e.g. a 64 x 64/sqrt(n) x 64/ > sqrt(n) subdomain) leads to a parallel efficiency of approximately > 10%, which is absolutely unusable. > > Is this behavior generally true for this kind of solver? If so, why? > If not: What did I do wrong most presumably? > Has anybody made the same experience and/or could help me? > > Thanks in advance, > Rolf From vyan2000 at gmail.com Thu May 21 13:36:26 2009 From: vyan2000 at gmail.com (Ryan Yan) Date: Thu, 21 May 2009 14:36:26 -0400 Subject: About the -pc_type tfs In-Reply-To: References: Message-ID: Hi Matt, May I make an inquiry about the file that I send out last week. If there is any thing missing, please let me know. I have other choices on the -pc_type, but it seems like that the 'tfs' is very promising, given it's excellent convergence trajectory when it was applied onto the smaller matrix. Thank you very much, Yan On Fri, May 15, 2009 at 10:46 PM, Ryan Yan wrote: > Hi Matt, > Since the bin file is quit big, I did not post this email to the group. You > can reply the next response to the group. > > The matrix bin file is in the attachment. > petsc_matrix_coef.bin is the bin file to generate the matrix, which have > the format aij. > > petsc_vec_knownsolu.bin is the bin file to generate the exact solution x. > > > petsc_vec_rhs.bin is the bin file to generate the right hand side b. > > A c script named "tfs_binarymatrix_verify.c" is also attached for your > convenience to check Ax-b =0. > > If you need any more information, please let me know. > > Thank you very much, > > Yan > > > > > > On Fri, May 15, 2009 at 1:34 PM, Matthew Knepley wrote: > >> If you send the matrix in PETSc binary format we can check this. >> >> Matt >> >> >> On Fri, May 15, 2009 at 12:20 PM, Ryan Yan wrote: >> >>> Hi all, >>> I am tring to use the tfs preconditioner to solve a large sparse mpiaij >>> matrix. 
>>> >>> 11111111111111111111111111111111111111111 >>> It works very well with a small matrix 45*45(Actually a 9*9 block matrix >>> with blocksize 5) on 2 processors; Out put is as follows: >>> >>> 0 KSP preconditioned resid norm 3.014544557924e+04 true resid norm >>> 2.219812091849e+04 ||Ae||/||Ax|| 1.000000000000e+00 >>> 1 KSP preconditioned resid norm 3.679021546908e-03 true resid norm >>> 1.502747104104e-03 ||Ae||/||Ax|| 6.769704109737e-08 >>> 2 KSP preconditioned resid norm 2.331909907779e-09 true resid norm >>> 8.737892755044e-10 ||Ae||/||Ax|| 3.936320910733e-14 >>> KSP Object: >>> type: gmres >>> GMRES: restart=30, using Classical (unmodified) Gram-Schmidt >>> Orthogonalization with no iterative refinement >>> GMRES: happy breakdown tolerance 1e-30 >>> maximum iterations=10000, initial guess is zero >>> tolerances: relative=1e-10, absolute=1e-50, divergence=10000 >>> left preconditioning >>> PC Object: >>> type: tfs >>> linear system matrix = precond matrix: >>> Matrix Object: >>> type=mpiaij, rows=45, cols=45 >>> total: nonzeros=825, allocated nonzeros=1350 >>> using I-node (on process 0) routines: found 5 nodes, limit used is >>> 5 >>> Norm of error 2.33234e-09, Iterations 2 >>> >>> 2222222222222222222222222222222222222222 >>> >>> However, when I use the same code for a larger sparse matrix, a 18656 * >>> 18656 block matrix with blocksize 5); it encounters the followins >>> error.(Same error message for using 1 and 2 processors, seperately) >>> >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, >>> probably memory access out of range >>> [0]PETSC ERROR: Try option -start_in_debugger or >>> -on_error_attach_debugger >>> [0]PETSC ERROR: or see >>> http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or try >>> http://valgrind.org on linux or man libgmalloc on Apple to find memory >>> corruption errors >>> [0]PETSC ERROR: likely location of problem given in stack below >>> [0]PETSC ERROR: --------------------- Stack Frames >>> ------------------------------------ >>> [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not >>> available, >>> [0]PETSC ERROR: INSTEAD the line number of the start of the >>> function >>> [0]PETSC ERROR: is given. >>> [0]PETSC ERROR: [0] PCSetUp_TFS line 116 src/ksp/pc/impls/tfs/tfs.c >>> [0]PETSC ERROR: [0] PCSetUp line 764 src/ksp/pc/interface/precon.c >>> [0]PETSC ERROR: [0] KSPSetUp line 183 src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: [0] KSPSolve line 305 src/ksp/ksp/interface/itfunc.c >>> [0]PETSC ERROR: --------------------- Error Message >>> ------------------------------------ >>> [0]PETSC ERROR: Signal received! >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: Petsc Release Version 2.3.3, Patch 15, Tue Sep 23 >>> 10:02:49 CDT 2008 HG revision: 31306062cd1a6f6a2496fccb4878f485c9b91760 >>> [0]PETSC ERROR: See docs/changes/index.html for recent updates. >>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. >>> [0]PETSC ERROR: See docs/index.html for manual pages. 
>>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: ./kspex1reader_binmpiaij on a linux-gnu named >>> vyan2000-linux by vyan2000 Fri May 15 01:06:12 2009 >>> [0]PETSC ERROR: Libraries linked from >>> /home/vyan2000/local/PPETSc/petsc-2.3.3-p15//lib/linux-gnu-c-debug >>> [0]PETSC ERROR: Configure run at Mon May 4 00:59:41 2009 >>> [0]PETSC ERROR: Configure options >>> --with-mpi-dir=/home/vyan2000/local/mpich2-1.0.8p1/ --with-debugger=gdb >>> --with-shared=0 --download-hypre=1 --download-parmetis=1 >>> [0]PETSC ERROR: >>> ------------------------------------------------------------------------ >>> [0]PETSC ERROR: User provided function() line 0 in unknown directory >>> unknown file >>> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0[cli_0]: >>> aborting job: >>> application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 >>> >>> >>> 3333333333333333333333333333333333333333333333 >>> >>> I have the exact solution x in hands, so before I push the matrix into >>> the ksp solver, I did check the PETSC loaded matrix A and rhs vector b, by >>> verifying Ax-b=0, in both cases of 1 processor and 2 processors. >>> >>> Any sugeestions? >>> >>> Thank you very much, >>> >>> Yan >>> >> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments is infinitely more interesting than any results to which their >> experiments lead. >> -- Norbert Wiener >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vyan2000 at gmail.com Thu May 21 22:45:24 2009 From: vyan2000 at gmail.com (Ryan Yan) Date: Thu, 21 May 2009 23:45:24 -0400 Subject: prometheus related question, Message-ID: Hi all, I did a numerical experiment of ksp solve on a large sparse matrix. For the first step, I load the matrix from petsc readable files. If I load the matrix as a MPIAIJ matrix and then apply the prometheus preconditioner with GMRES, it will not converge in 10000 iterations. However, if I load the matrix as a MPIBAIJ matrix and then apply the prometheus preconditioner with GEMRES, it will converge with in 800 iterations Can anyone please provide some hints on how to explain this numerical result. Thanks a lot, Yan -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu May 21 22:49:03 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 21 May 2009 22:49:03 -0500 Subject: prometheus related question, In-Reply-To: References: Message-ID: Block matrices can use block smoothers, which invert whole blocks exactly. This can be much more effective than point smoothers if you actually have block structure in the matrix. Matt On Thu, May 21, 2009 at 10:45 PM, Ryan Yan wrote: > Hi all, > I did a numerical experiment of ksp solve on a large sparse matrix. > > For the first step, I load the matrix from petsc readable files. If I load > the matrix as a MPIAIJ matrix and then apply the prometheus preconditioner > with GMRES, it will not converge in 10000 iterations. > > However, if I load the matrix as a MPIBAIJ matrix and then apply the > prometheus preconditioner with GEMRES, it will converge with in 800 > iterations > > Can anyone please provide some hints on how to explain this numerical > result. > > Thanks a lot, > > Yan > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
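As a concrete illustration of the block-smoother point above (not part of the original exchange): the same PETSc binary file can be loaded either as plain MPIAIJ or, when the unknowns really come in blocks (block size 5 in this thread), as MPIBAIJ, so that Prometheus can work with the block structure. A minimal sketch, assuming the viewer-based MatLoad() of the PETSc 2.3.3/3.0.0 era used in this thread, the file name from the earlier tfs message, and -matload_block_size 5 on the command line to set the block size at load time:

  #include "petscmat.h"

  int main(int argc,char **argv)
  {
    Mat            A;
    PetscViewer    viewer;
    PetscErrorCode ierr;

    PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);
    /* open the binary file that was written earlier with MatView() */
    ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"petsc_matrix_coef.bin",
                                 FILE_MODE_READ,&viewer);CHKERRQ(ierr);
    /* load as a blocked (MPIBAIJ) matrix instead of plain MPIAIJ */
    ierr = MatLoad(viewer,MATMPIBAIJ,&A);CHKERRQ(ierr);
    ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr);
    /* ... hand A to the same KSP/Prometheus setup as before ... */
    ierr = MatDestroy(A);CHKERRQ(ierr);
    PetscFinalize();
    return 0;
  }

The numerical entries are identical either way; loading as MPIBAIJ only records the block layout, which is what lets block smoothers be applied.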
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.kramer at imperial.ac.uk Sat May 23 09:00:15 2009 From: s.kramer at imperial.ac.uk (Stephan Kramer) Date: Sat, 23 May 2009 15:00:15 +0100 Subject: Mismatch in explicit fortran interface for MatGetInfo Message-ID: <4A18016F.6030805@imperial.ac.uk> Hi all, First of all thanks of a lot for providing explicit fortran interfaces for most functions in Petsc 3. This is of great help. I do however run into a problem using MatGetInfo. The calling sequence for fortran (according to the manual) is: double precision info(MAT_INFO_SIZE) Mat A integer ierr call MatGetInfo (A,MAT_LOCAL,info,ierr) The interface however seems to indicate the info argument has to be a single double precision (i.e. a scalar not an array). I guess with implicit interfaces this sort of thing would work, but with the provided explicit interface, at least gfortran won't let me have it. Cheers Stephan From bsmith at mcs.anl.gov Sat May 23 18:59:30 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 23 May 2009 18:59:30 -0500 Subject: Mismatch in explicit fortran interface for MatGetInfo In-Reply-To: <4A18016F.6030805@imperial.ac.uk> References: <4A18016F.6030805@imperial.ac.uk> Message-ID: <745A9FCE-66A6-4EEA-B506-8179C26BB859@mcs.anl.gov> Stephan, Thanks for reporting the problem. Our tool generates either both the Fortran stub and Fortran interface definition or neither. For this function it generates the stub correctly but the wrong interface. The "quick" fix is to turn off the auto generation for this function and provide both manually. I'll see about making a patch. Unless someone has a better solution. Barry On May 23, 2009, at 9:00 AM, Stephan Kramer wrote: > Hi all, > > First of all thanks of a lot for providing explicit fortran > interfaces for most functions in Petsc 3. This is of great help. I > do however run into a problem using MatGetInfo. The calling sequence > for fortran (according to the manual) is: > > double precision info(MAT_INFO_SIZE) > Mat A > integer ierr > > call MatGetInfo > (A,MAT_LOCAL,info,ierr) > > The interface however seems to indicate the info argument has to be > a single double precision (i.e. a scalar not an array). I guess with > implicit interfaces this sort of thing would work, but with the > provided explicit interface, at least gfortran won't let me have it. > > Cheers > Stephan > From tomjan at jay.au.poznan.pl Wed May 27 04:35:03 2009 From: tomjan at jay.au.poznan.pl (Tomasz Jankowski) Date: Wed, 27 May 2009 11:35:03 +0200 (CEST) Subject: looking for opportunity to take training in HPC Message-ID: I'm looking for opportunity to take training in HPC in one of European Union country. I'm thinking about fifth scheme - "Support for training and career development of researchers" . I don't know, maybe it could be in framework of ITN - Initial Training Networks. But maybe someone know about another possibilities (for example IAPP - Industry-Academia Partnerships and Pathways). I decided to disturb members of this group because I didn't found yet anything interesting in this area. Any help greatly appreciated. thanks, tomasz jankowski skills/capabilities: - c/c++/perl (multithrading with pthread,boost::thread,MPI) - I have hands-on computing experience with building sparse systems of equations with C/C++ code from raw data and solving them with PETSC on PC's cluster set up by me. - & much,much more ;-) About me: After I finished my Ph.D. 
studies in quantitative genetics field in 2007 (defended in April 2008) I have switched from science to IT. (It was possible because I did a lot of programming to obtain the results in my Ph.D.). Actually I'm system programmer in supercomputing and networking center which serves for the needs (especially HPC) of the European scientific community and provides advanced services for the benefit of regional administration. Because of this two field career I'm fluent in quantitative genetics, molecular genetics, genomics, biotechnology, c/c++ programming, linux os administration and also practiced in statistics. ######################################################## # tomjan at jay.au.poznan.pl # # jay.au.poznan.pl/~tomjan/ # ######################################################## From bsmith at mcs.anl.gov Wed May 27 17:12:42 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 27 May 2009 17:12:42 -0500 Subject: Mismatch in explicit fortran interface for MatGetInfo In-Reply-To: <4A18016F.6030805@imperial.ac.uk> References: <4A18016F.6030805@imperial.ac.uk> Message-ID: <0A67546F-4327-4265-B94D-B889B94644E5@mcs.anl.gov> Stephan, Satish is working on the patch for this and will get it to you shortly. Sorry for the delay, we were debating how to handle it. Barry On May 23, 2009, at 9:00 AM, Stephan Kramer wrote: > Hi all, > > First of all thanks of a lot for providing explicit fortran > interfaces for most functions in Petsc 3. This is of great help. I > do however run into a problem using MatGetInfo. The calling sequence > for fortran (according to the manual) is: > > double precision info(MAT_INFO_SIZE) > Mat A > integer ierr > > call MatGetInfo > (A,MAT_LOCAL,info,ierr) > > The interface however seems to indicate the info argument has to be > a single double precision (i.e. a scalar not an array). I guess with > implicit interfaces this sort of thing would work, but with the > provided explicit interface, at least gfortran won't let me have it. > > Cheers > Stephan > From dosterpf at gmail.com Wed May 27 18:20:25 2009 From: dosterpf at gmail.com (Paul Dostert) Date: Wed, 27 May 2009 17:20:25 -0600 Subject: MatLUFactor for Complex Matrices - Returns inverse of the diagonal in LU?? Message-ID: <8b7e18c00905271620v594aca28h220d3ae9aa7b1cc8@mail.gmail.com> I'm a beginner with PETSc, so please forgive me if this is obvious, but I couldn't seem to find any help in the archives. I'm trying to just the hang of the software, so I've been messing around with routines. I'm going to need complex matrices (Maxwell's with PML) so everything is configured for this. I'm messing around with some very simple test cases, and have a symmetric (but not Hermitian) complex matrix with 2-2i on the diagonal and -1+i on the upper and lower diagonals. I am reading this in from a petsc binary file (again, for testing purposes, eventually I'm going to be just reading in my matrix and RHS). I view the matrix, and it has been read in correctly. 
I perform LU factorization by doing the following (where ISCreateStride(PETSC_COMM_WORLD,m,0,1,&perm); has been called earlier): MatConvert(A,MATSAME,MAT_INITIAL_MATRIX,&LU); MatFactorInfoInitialize(&luinfo); MatLUFactor(LU,perm,perm,&luinfo); I get that LU is: (1,1) 0.2500 + 0.2500i (2,1) -0.5000 (1,2) -1.0000 + 1.0000i (2,2) 0.3333 + 0.3333i (3,2) -0.6667 (2,3) -1.0000 + 1.0000i (3,3) 0.3750 + 0.3750i (4,3) -0.7500 (3,4) -1.0000 + 1.0000i (4,4) 0.4000 + 0.4000i (5,4) -0.8000 (4,5) -1.0000 + 1.0000i (5,5) 0.4167 + 0.4167i (6,5) -0.8333 (5,6) -1.0000 + 1.0000i (6,6) 0.4286 + 0.4286i Now, I am interpreting this as L being unit on the diagonal and the lower diagonal portion of this "LU" matrix, while U being the diagona + upper of this "LU" matrix. I can interpret this the other way around as well, and it doesn't matter. However, knowing the LU factorization, it is VERY clear the the proper LU decomposition would have the inverse of the diagonal elements presented here. So I believe I should have LU as: (1,1) 2.0000 - 2.0000i (2,1) -0.5000 (1,2) -1.0000 + 1.0000i (2,2) 1.5000 - 1.5000i (3,2) -0.6667 (2,3) -1.0000 + 1.0000i (3,3) 1.3333 - 1.3333i (4,3) -0.7500 (3,4) -1.0000 + 1.0000i (4,4) 1.2500 - 1.2500i (5,4) -0.8000 (4,5) -1.0000 + 1.0000i (5,5) 1.2000 - 1.2000i (6,5) -0.8333 (5,6) -1.0000 + 1.0000i (6,6) 1.1667 - 1.1667i Is there some reason this returns the inverse of the diagonal entries, or am I completely missing something? Is returning the inverse something standard?? Since I'm new, I'm not quite sure where to look for actual source code. Is there a location where the LU factorization code is written and well commented? Thank you very much! From bsmith at mcs.anl.gov Wed May 27 19:59:14 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 27 May 2009 19:59:14 -0500 Subject: MatLUFactor for Complex Matrices - Returns inverse of the diagonal in LU?? In-Reply-To: <8b7e18c00905271620v594aca28h220d3ae9aa7b1cc8@mail.gmail.com> References: <8b7e18c00905271620v594aca28h220d3ae9aa7b1cc8@mail.gmail.com> Message-ID: <8A081AD8-1C8E-4201-B8FC-52B1FA756CA6@mcs.anl.gov> Paul, In PETSc for the AIJ sparse matrix storage format we store the inverse of the diagonal entries of U. This is because then when it comes to the triangular solves they only involve floating point multiplies and adds (no time consuming divisions). I think this is pretty common for sparse matrix solvers. You could argue that the MatView() should invert those values so what gets printed out is clearer; I view the MatView() for factored matrices as only a useful tool for debugging for developers so it just prints out the stored values. Yes it is a bit confusing. Barry For BAIJ matrices it gets even more confusing. With BAIJ the factorizations use the little blocks of the matrix as the basic computational unit of the factorization. Here we actually compute the exact inverse of the diagonal block and store that (a block version of what we do for AIJ). Again this is to make the solves as efficient as possible. On May 27, 2009, at 6:20 PM, Paul Dostert wrote: > I'm a beginner with PETSc, so please forgive me if this is obvious, > but I > couldn't seem to find any help in the archives. > > I'm trying to just the hang of the software, so I've been messing > around > with routines. I'm going to need complex matrices (Maxwell's with > PML) so > everything is configured for this. 
I'm messing around with some very > simple > test cases, and have a symmetric (but not Hermitian) complex matrix > with > 2-2i on the diagonal and -1+i on the upper and lower diagonals. I am > reading > this in from a petsc binary file (again, for testing purposes, > eventually > I'm going to be just reading in my matrix and RHS). I view the > matrix, and > it has been read in correctly. I perform LU factorization by doing the > following (where ISCreateStride(PETSC_COMM_WORLD,m,0,1,&perm); has > been > called earlier): > > MatConvert(A,MATSAME,MAT_INITIAL_MATRIX,&LU); > MatFactorInfoInitialize(&luinfo); > MatLUFactor(LU,perm,perm,&luinfo); > > I get that LU is: > > (1,1) 0.2500 + 0.2500i > (2,1) -0.5000 > (1,2) -1.0000 + 1.0000i > (2,2) 0.3333 + 0.3333i > (3,2) -0.6667 > (2,3) -1.0000 + 1.0000i > (3,3) 0.3750 + 0.3750i > (4,3) -0.7500 > (3,4) -1.0000 + 1.0000i > (4,4) 0.4000 + 0.4000i > (5,4) -0.8000 > (4,5) -1.0000 + 1.0000i > (5,5) 0.4167 + 0.4167i > (6,5) -0.8333 > (5,6) -1.0000 + 1.0000i > (6,6) 0.4286 + 0.4286i > > Now, I am interpreting this as L being unit on the diagonal and the > lower diagonal portion of this "LU" matrix, while U being the diagona > + upper of this "LU" matrix. I can interpret this the other way around > as well, and it doesn't matter. > > However, knowing the LU factorization, it is VERY clear the the proper > LU decomposition would have the inverse of the diagonal elements > presented here. So I believe I should have LU as: > > (1,1) 2.0000 - 2.0000i > (2,1) -0.5000 > (1,2) -1.0000 + 1.0000i > (2,2) 1.5000 - 1.5000i > (3,2) -0.6667 > (2,3) -1.0000 + 1.0000i > (3,3) 1.3333 - 1.3333i > (4,3) -0.7500 > (3,4) -1.0000 + 1.0000i > (4,4) 1.2500 - 1.2500i > (5,4) -0.8000 > (4,5) -1.0000 + 1.0000i > (5,5) 1.2000 - 1.2000i > (6,5) -0.8333 > (5,6) -1.0000 + 1.0000i > (6,6) 1.1667 - 1.1667i > > Is there some reason this returns the inverse of the diagonal entries, > or am I completely missing something? Is returning the inverse > something standard?? > > Since I'm new, I'm not quite sure where to look for actual source > code. Is there a location where the LU factorization code is written > and well commented? > > Thank you very much! From dosterpf at gmail.com Wed May 27 20:06:15 2009 From: dosterpf at gmail.com (Paul Dostert) Date: Wed, 27 May 2009 18:06:15 -0700 Subject: MatLUFactor for Complex Matrices - Returns inverse of the diagonal in LU?? In-Reply-To: <8A081AD8-1C8E-4201-B8FC-52B1FA756CA6@mcs.anl.gov> References: <8b7e18c00905271620v594aca28h220d3ae9aa7b1cc8@mail.gmail.com> <8A081AD8-1C8E-4201-B8FC-52B1FA756CA6@mcs.anl.gov> Message-ID: <8b7e18c00905271806o4b524892uc2079349a50e55cc@mail.gmail.com> Barry, Thank you so much for getting back to me so quickly. Just out of curiosity, is there a slightly more in depth version of the documentation that I would be able to look at if I have some other "simple" questions like this in the future? On Wed, May 27, 2009 at 5:59 PM, Barry Smith wrote: > > Paul, > > In PETSc for the AIJ sparse matrix storage format we store the inverse > of the diagonal entries of U. This is because then when it comes to the > triangular solves they only involve floating point multiplies and adds (no > time consuming divisions). I think this is pretty common for sparse matrix > solvers. You could argue that the MatView() should invert those values so > what gets printed out is clearer; I view the MatView() for factored matrices > as only a useful tool for debugging for developers so it just prints out the > stored values. 
Yes it is a bit confusing. > > Barry > > For BAIJ matrices it gets even more confusing. With BAIJ the > factorizations use the little blocks of the matrix as the basic > computational unit of the factorization. Here we actually compute the exact > inverse of the diagonal block and store that (a block version of what we do > for AIJ). Again this is to make the solves as efficient as possible. > > > > > On May 27, 2009, at 6:20 PM, Paul Dostert wrote: > > I'm a beginner with PETSc, so please forgive me if this is obvious, but I >> couldn't seem to find any help in the archives. >> >> I'm trying to just the hang of the software, so I've been messing around >> with routines. I'm going to need complex matrices (Maxwell's with PML) so >> everything is configured for this. I'm messing around with some very >> simple >> test cases, and have a symmetric (but not Hermitian) complex matrix with >> 2-2i on the diagonal and -1+i on the upper and lower diagonals. I am >> reading >> this in from a petsc binary file (again, for testing purposes, eventually >> I'm going to be just reading in my matrix and RHS). I view the matrix, and >> it has been read in correctly. I perform LU factorization by doing the >> following (where ISCreateStride(PETSC_COMM_WORLD,m,0,1,&perm); has been >> called earlier): >> >> MatConvert(A,MATSAME,MAT_INITIAL_MATRIX,&LU); >> MatFactorInfoInitialize(&luinfo); >> MatLUFactor(LU,perm,perm,&luinfo); >> >> I get that LU is: >> >> (1,1) 0.2500 + 0.2500i >> (2,1) -0.5000 >> (1,2) -1.0000 + 1.0000i >> (2,2) 0.3333 + 0.3333i >> (3,2) -0.6667 >> (2,3) -1.0000 + 1.0000i >> (3,3) 0.3750 + 0.3750i >> (4,3) -0.7500 >> (3,4) -1.0000 + 1.0000i >> (4,4) 0.4000 + 0.4000i >> (5,4) -0.8000 >> (4,5) -1.0000 + 1.0000i >> (5,5) 0.4167 + 0.4167i >> (6,5) -0.8333 >> (5,6) -1.0000 + 1.0000i >> (6,6) 0.4286 + 0.4286i >> >> Now, I am interpreting this as L being unit on the diagonal and the >> lower diagonal portion of this "LU" matrix, while U being the diagona >> + upper of this "LU" matrix. I can interpret this the other way around >> as well, and it doesn't matter. >> >> However, knowing the LU factorization, it is VERY clear the the proper >> LU decomposition would have the inverse of the diagonal elements >> presented here. So I believe I should have LU as: >> >> (1,1) 2.0000 - 2.0000i >> (2,1) -0.5000 >> (1,2) -1.0000 + 1.0000i >> (2,2) 1.5000 - 1.5000i >> (3,2) -0.6667 >> (2,3) -1.0000 + 1.0000i >> (3,3) 1.3333 - 1.3333i >> (4,3) -0.7500 >> (3,4) -1.0000 + 1.0000i >> (4,4) 1.2500 - 1.2500i >> (5,4) -0.8000 >> (4,5) -1.0000 + 1.0000i >> (5,5) 1.2000 - 1.2000i >> (6,5) -0.8333 >> (5,6) -1.0000 + 1.0000i >> (6,6) 1.1667 - 1.1667i >> >> Is there some reason this returns the inverse of the diagonal entries, >> or am I completely missing something? Is returning the inverse >> something standard?? >> >> Since I'm new, I'm not quite sure where to look for actual source >> code. Is there a location where the LU factorization code is written >> and well commented? >> >> Thank you very much! >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed May 27 20:09:48 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 27 May 2009 20:09:48 -0500 Subject: MatLUFactor for Complex Matrices - Returns inverse of the diagonal in LU?? 
In-Reply-To: <8b7e18c00905271806o4b524892uc2079349a50e55cc@mail.gmail.com> References: <8b7e18c00905271620v594aca28h220d3ae9aa7b1cc8@mail.gmail.com> <8A081AD8-1C8E-4201-B8FC-52B1FA756CA6@mcs.anl.gov> <8b7e18c00905271806o4b524892uc2079349a50e55cc@mail.gmail.com> Message-ID: On May 27, 2009, at 8:06 PM, Paul Dostert wrote: > Barry, > > Thank you so much for getting back to me so quickly. Just out of > curiosity, is there a slightly more in depth version of the > documentation that I would be able to look at if I have some other > "simple" questions like this in the future? No, all the docs that exist are included with the software. The details are in the source code. For the basic AIJ matrices the source code is src/mat/impls/aij/seq Barry > > > On Wed, May 27, 2009 at 5:59 PM, Barry Smith > wrote: > > Paul, > > In PETSc for the AIJ sparse matrix storage format we store the > inverse of the diagonal entries of U. This is because then when it > comes to the triangular solves they only involve floating point > multiplies and adds (no time consuming divisions). I think this is > pretty common for sparse matrix solvers. You could argue that the > MatView() should invert those values so what gets printed out is > clearer; I view the MatView() for factored matrices as only a useful > tool for debugging for developers so it just prints out the stored > values. Yes it is a bit confusing. > > Barry > > For BAIJ matrices it gets even more confusing. With BAIJ the > factorizations use the little blocks of the matrix as the basic > computational unit of the factorization. Here we actually compute > the exact inverse of the diagonal block and store that (a block > version of what we do for AIJ). Again this is to make the solves as > efficient as possible. > > > > > On May 27, 2009, at 6:20 PM, Paul Dostert wrote: > > I'm a beginner with PETSc, so please forgive me if this is obvious, > but I > couldn't seem to find any help in the archives. > > I'm trying to just the hang of the software, so I've been messing > around > with routines. I'm going to need complex matrices (Maxwell's with > PML) so > everything is configured for this. I'm messing around with some very > simple > test cases, and have a symmetric (but not Hermitian) complex matrix > with > 2-2i on the diagonal and -1+i on the upper and lower diagonals. I am > reading > this in from a petsc binary file (again, for testing purposes, > eventually > I'm going to be just reading in my matrix and RHS). I view the > matrix, and > it has been read in correctly. I perform LU factorization by doing the > following (where ISCreateStride(PETSC_COMM_WORLD,m,0,1,&perm); has > been > called earlier): > > MatConvert(A,MATSAME,MAT_INITIAL_MATRIX,&LU); > MatFactorInfoInitialize(&luinfo); > MatLUFactor(LU,perm,perm,&luinfo); > > I get that LU is: > > (1,1) 0.2500 + 0.2500i > (2,1) -0.5000 > (1,2) -1.0000 + 1.0000i > (2,2) 0.3333 + 0.3333i > (3,2) -0.6667 > (2,3) -1.0000 + 1.0000i > (3,3) 0.3750 + 0.3750i > (4,3) -0.7500 > (3,4) -1.0000 + 1.0000i > (4,4) 0.4000 + 0.4000i > (5,4) -0.8000 > (4,5) -1.0000 + 1.0000i > (5,5) 0.4167 + 0.4167i > (6,5) -0.8333 > (5,6) -1.0000 + 1.0000i > (6,6) 0.4286 + 0.4286i > > Now, I am interpreting this as L being unit on the diagonal and the > lower diagonal portion of this "LU" matrix, while U being the diagona > + upper of this "LU" matrix. I can interpret this the other way around > as well, and it doesn't matter. 
> > However, knowing the LU factorization, it is VERY clear the the proper > LU decomposition would have the inverse of the diagonal elements > presented here. So I believe I should have LU as: > > (1,1) 2.0000 - 2.0000i > (2,1) -0.5000 > (1,2) -1.0000 + 1.0000i > (2,2) 1.5000 - 1.5000i > (3,2) -0.6667 > (2,3) -1.0000 + 1.0000i > (3,3) 1.3333 - 1.3333i > (4,3) -0.7500 > (3,4) -1.0000 + 1.0000i > (4,4) 1.2500 - 1.2500i > (5,4) -0.8000 > (4,5) -1.0000 + 1.0000i > (5,5) 1.2000 - 1.2000i > (6,5) -0.8333 > (5,6) -1.0000 + 1.0000i > (6,6) 1.1667 - 1.1667i > > Is there some reason this returns the inverse of the diagonal entries, > or am I completely missing something? Is returning the inverse > something standard?? > > Since I'm new, I'm not quite sure where to look for actual source > code. Is there a location where the LU factorization code is written > and well commented? > > Thank you very much! > > From Andreas.Grassl at student.uibk.ac.at Fri May 29 04:34:42 2009 From: Andreas.Grassl at student.uibk.ac.at (Andreas Grassl) Date: Fri, 29 May 2009 11:34:42 +0200 Subject: VecView behaviour Message-ID: <4A1FAC32.4010507@student.uibk.ac.at> Hello, I'm working with the PCNN preconditioner and hence with ISLocalToGlobalMapping. After solving I want to write the solution to an ASCII-file where only the values belonging to the "external" global numbering are given and not followed by the zeros. Currently I'm giving this commands: ierr = PetscViewerSetFormat(viewer,PETSC_VIEWER_ASCII_SYMMODU);CHKERRQ(ierr); ierr = VecView(X,viewer);CHKERRQ(ierr); Does anybody have an idea, which option or function could help me? cheers, ando -- /"\ Grassl Andreas \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik X against HTML email Technikerstr. 13 Zi 709 / \ +43 (0)512 507 6091 From balay at mcs.anl.gov Fri May 29 11:44:30 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 29 May 2009 11:44:30 -0500 (CDT) Subject: Mismatch in explicit fortran interface for MatGetInfo In-Reply-To: <0A67546F-4327-4265-B94D-B889B94644E5@mcs.anl.gov> References: <4A18016F.6030805@imperial.ac.uk> <0A67546F-4327-4265-B94D-B889B94644E5@mcs.anl.gov> Message-ID: I have the following fixed files [untested yet]. But could you tell me how you've configured PETSc? - and what patchlevel? [It appears that there are quiet a few breakages with --with-fortran-interfaces=1 - I might fix this in petsc-dev - not 3.0.0 - as it depends upon some f90 interface changes that are only in petsc-dev] Attaching the modified files that go with my untested fix. include/finclude/ftn-auto/petscmat.h90 include/finclude/ftn-custom/petscmat.h90 src/mat/interface/ftn-auto/matrixf.c src/mat/interface/ftn-custom/zmatrixf.c src/mat/interface/matrix.c Satish On Wed, 27 May 2009, Barry Smith wrote: > > Stephan, > > Satish is working on the patch for this and will get it to you shortly. > > Sorry for the delay, we were debating how to handle it. > > Barry > > On May 23, 2009, at 9:00 AM, Stephan Kramer wrote: > > > Hi all, > > > > First of all thanks of a lot for providing explicit fortran interfaces for > > most functions in Petsc 3. This is of great help. I do however run into a > > problem using MatGetInfo. The calling sequence for fortran (according to the > > manual) is: > > > > double precision info(MAT_INFO_SIZE) > > Mat A > > integer ierr > > > > call MatGetInfo > > (A,MAT_LOCAL,info,ierr) > > > > The interface however seems to indicate the info argument has to be a single > > double precision (i.e. a scalar not an array). 
I guess with implicit > > interfaces this sort of thing would work, but with the provided explicit > > interface, at least gfortran won't let me have it. > > > > Cheers > > Stephan > > > -------------- next part -------------- subroutine MatNullSpaceCreate(comm, has_cnst, n, vecs, SP ,ierr)& & integer comm ! MPI_Comm PetscTruth has_cnst ! PetscTruth PetscInt n ! PetscInt Vec vecs (*) ! Vec MatNullSpace SP ! MatNullSpace integer ierr end subroutine subroutine MatNullSpaceDestroy(sp ,ierr) MatNullSpace sp ! MatNullSpace integer ierr end subroutine subroutine MatNullSpaceTest(sp, mat, isNull ,ierr) MatNullSpace sp ! MatNullSpace Mat mat ! Mat PetscTruth isNull ! PetscTruth integer ierr end subroutine subroutine MatGetDiagonalBlock(aupper, iscopy, reuse, a ,ierr) Mat aupper ! Mat PetscTruth iscopy ! PetscTruth MatReuse reuse ! MatReuse Mat a ! Mat integer ierr end subroutine subroutine MatRealPart(mat ,ierr) Mat mat ! Mat integer ierr end subroutine subroutine MatImaginaryPart(mat ,ierr) Mat mat ! Mat integer ierr end subroutine subroutine MatMissingDiagonal(mat, missing, dd ,ierr) Mat mat ! Mat PetscTruth missing ! PetscTruth PetscInt dd ! PetscInt integer ierr end subroutine subroutine MatConjugate(mat ,ierr) Mat mat ! Mat integer ierr end subroutine subroutine MatGetRowUpperTriangular(mat ,ierr) Mat mat ! Mat integer ierr end subroutine subroutine MatRestoreRowUpperTriangular(mat ,ierr) Mat mat ! Mat integer ierr end subroutine subroutine MatSetUp(A ,ierr) Mat A ! Mat integer ierr end subroutine subroutine MatScaleSystem(mat, b, x ,ierr) Mat mat ! Mat Vec b ! Vec Vec x ! Vec integer ierr end subroutine subroutine MatUnScaleSystem(mat, b, x ,ierr) Mat mat ! Mat Vec b ! Vec Vec x ! Vec integer ierr end subroutine subroutine MatUseScaledForm(mat, scaled ,ierr) Mat mat ! Mat PetscTruth scaled ! PetscTruth integer ierr end subroutine subroutine MatDestroy(A ,ierr) Mat A ! Mat integer ierr end subroutine subroutine MatValid(m, flg ,ierr) Mat m ! Mat PetscTruth flg ! PetscTruth integer ierr end subroutine subroutine MatSetValues(mat, m, idxm, n, idxn, v, addv ,ierr) Mat mat ! Mat PetscInt m ! PetscInt PetscInt idxm (*) ! PetscInt PetscInt n ! PetscInt PetscInt idxn (*) ! PetscInt PetscScalar v (*) ! PetscScalar InsertMode addv ! InsertMode integer ierr end subroutine subroutine MatSetValuesRowLocal(mat, row, v ,ierr) Mat mat ! Mat PetscInt row ! PetscInt PetscScalar v (*) ! PetscScalar integer ierr end subroutine subroutine MatSetValuesRow(mat, row, v ,ierr) Mat mat ! Mat PetscInt row ! PetscInt PetscScalar v (*) ! PetscScalar integer ierr end subroutine subroutine MatSetValuesStencil(mat, m, idxm, n, idxn, v, addv , & &ierr) Mat mat ! Mat PetscInt m ! PetscInt MatStencil idxm (*) ! MatStencil PetscInt n ! PetscInt MatStencil idxn (*) ! MatStencil PetscScalar v (*) ! PetscScalar InsertMode addv ! InsertMode integer ierr end subroutine subroutine MatSetStencil(mat, dim, dims, starts, dof ,ierr) Mat mat ! Mat PetscInt dim ! PetscInt PetscInt dims (*) ! PetscInt PetscInt starts (*) ! PetscInt PetscInt dof ! PetscInt integer ierr end subroutine subroutine MatSetValuesBlocked(mat, m, idxm, n, idxn, v, addv , & &ierr) Mat mat ! Mat PetscInt m ! PetscInt PetscInt idxm (*) ! PetscInt PetscInt n ! PetscInt PetscInt idxn (*) ! PetscInt PetscScalar v (*) ! PetscScalar InsertMode addv ! InsertMode integer ierr end subroutine subroutine MatGetValues(mat, m, idxm, n, idxn, v ,ierr) Mat mat ! Mat PetscInt m ! PetscInt PetscInt idxm (*) ! PetscInt PetscInt n ! PetscInt PetscInt idxn (*) ! 
PetscInt PetscScalar v (*) ! PetscScalar integer ierr end subroutine subroutine MatSetLocalToGlobalMapping(x, mapping ,ierr) Mat x ! Mat ISLocalToGlobalMapping mapping ! ISLocalToGlobalMapping integer ierr end subroutine subroutine MatSetLocalToGlobalMappingBlock(x, mapping ,ierr) Mat x ! Mat ISLocalToGlobalMapping mapping ! ISLocalToGlobalMapping integer ierr end subroutine subroutine MatSetValuesLocal(mat, nrow, irow, ncol, icol, y, & &addv ,ierr) Mat mat ! Mat PetscInt nrow ! PetscInt PetscInt irow (*) ! PetscInt PetscInt ncol ! PetscInt PetscInt icol (*) ! PetscInt PetscScalar y (*) ! PetscScalar InsertMode addv ! InsertMode integer ierr end subroutine subroutine MatSetValuesBlockedLocal(mat, nrow, irow, ncol, icol & &, y, addv ,ierr) Mat mat ! Mat PetscInt nrow ! PetscInt PetscInt irow (*) ! PetscInt PetscInt ncol ! PetscInt PetscInt icol (*) ! PetscInt PetscScalar y (*) ! PetscScalar InsertMode addv ! InsertMode integer ierr end subroutine subroutine MatMult(mat, x, y ,ierr) Mat mat ! Mat Vec x ! Vec Vec y ! Vec integer ierr end subroutine subroutine MatMultTranspose(mat, x, y ,ierr) Mat mat ! Mat Vec x ! Vec Vec y ! Vec integer ierr end subroutine subroutine MatMultAdd(mat, v1, v2, v3 ,ierr) Mat mat ! Mat Vec v1 ! Vec Vec v2 ! Vec Vec v3 ! Vec integer ierr end subroutine subroutine MatMultTransposeAdd(mat, v1, v2, v3 ,ierr) Mat mat ! Mat Vec v1 ! Vec Vec v2 ! Vec Vec v3 ! Vec integer ierr end subroutine subroutine MatMultConstrained(mat, x, y ,ierr) Mat mat ! Mat Vec x ! Vec Vec y ! Vec integer ierr end subroutine subroutine MatMultTransposeConstrained(mat, x, y ,ierr) Mat mat ! Mat Vec x ! Vec Vec y ! Vec integer ierr end subroutine subroutine MatLUFactor(mat, row, col, info ,ierr) Mat mat ! Mat IS row ! IS IS col ! IS MatFactorInfo info ! MatFactorInfo integer ierr end subroutine subroutine MatILUFactor(mat, row, col, info ,ierr) Mat mat ! Mat IS row ! IS IS col ! IS MatFactorInfo info ! MatFactorInfo integer ierr end subroutine subroutine MatLUFactorSymbolic(fact, mat, row, col, info ,ierr) Mat fact ! Mat Mat mat ! Mat IS row ! IS IS col ! IS MatFactorInfo info ! MatFactorInfo integer ierr end subroutine subroutine MatLUFactorNumeric(fact, mat, info ,ierr) Mat fact ! Mat Mat mat ! Mat MatFactorInfo info ! MatFactorInfo integer ierr end subroutine subroutine MatCholeskyFactor(mat, perm, info ,ierr) Mat mat ! Mat IS perm ! IS MatFactorInfo info ! MatFactorInfo integer ierr end subroutine subroutine MatCholeskyFactorSymbolic(fact, mat, perm, info ,ierr& &) Mat fact ! Mat Mat mat ! Mat IS perm ! IS MatFactorInfo info ! MatFactorInfo integer ierr end subroutine subroutine MatCholeskyFactorNumeric(fact, mat, info ,ierr) Mat fact ! Mat Mat mat ! Mat MatFactorInfo info ! MatFactorInfo integer ierr end subroutine subroutine MatSolve(mat, b, x ,ierr) Mat mat ! Mat Vec b ! Vec Vec x ! Vec integer ierr end subroutine subroutine MatMatSolve(A, B, X ,ierr) Mat A ! Mat Mat B ! Mat Mat X ! Mat integer ierr end subroutine subroutine MatSolveAdd(mat, b, y, x ,ierr) Mat mat ! Mat Vec b ! Vec Vec y ! Vec Vec x ! Vec integer ierr end subroutine subroutine MatSolveTranspose(mat, b, x ,ierr) Mat mat ! Mat Vec b ! Vec Vec x ! Vec integer ierr end subroutine subroutine MatSolveTransposeAdd(mat, b, y, x ,ierr) Mat mat ! Mat Vec b ! Vec Vec y ! Vec Vec x ! Vec integer ierr end subroutine subroutine MatRelax(mat, b, omega, flag, shift, its, lits, x , & &ierr) Mat mat ! Mat Vec b ! Vec PetscReal omega ! PetscReal MatSORType flag ! MatSORType PetscReal shift ! PetscReal PetscInt its ! PetscInt PetscInt lits ! 
PetscInt Vec x ! Vec integer ierr end subroutine subroutine MatPBRelax(mat, b, omega, flag, shift, its, lits, x ,& &ierr) Mat mat ! Mat Vec b ! Vec PetscReal omega ! PetscReal MatSORType flag ! MatSORType PetscReal shift ! PetscReal PetscInt its ! PetscInt PetscInt lits ! PetscInt Vec x ! Vec integer ierr end subroutine subroutine MatCopy(A, B, str ,ierr) Mat A ! Mat Mat B ! Mat MatStructure str ! MatStructure integer ierr end subroutine subroutine MatDuplicate(mat, op, M ,ierr) Mat mat ! Mat MatDuplicateOption op ! MatDuplicateOption Mat M ! Mat integer ierr end subroutine subroutine MatGetDiagonal(mat, v ,ierr) Mat mat ! Mat Vec v ! Vec integer ierr end subroutine subroutine MatGetRowMin(mat, v, idx ,ierr) Mat mat ! Mat Vec v ! Vec PetscInt idx (*) ! PetscInt integer ierr end subroutine subroutine MatGetRowMinAbs(mat, v, idx ,ierr) Mat mat ! Mat Vec v ! Vec PetscInt idx (*) ! PetscInt integer ierr end subroutine subroutine MatGetRowMax(mat, v, idx ,ierr) Mat mat ! Mat Vec v ! Vec PetscInt idx (*) ! PetscInt integer ierr end subroutine subroutine MatGetRowMaxAbs(mat, v, idx ,ierr) Mat mat ! Mat Vec v ! Vec PetscInt idx (*) ! PetscInt integer ierr end subroutine subroutine MatGetRowSum(mat, v ,ierr) Mat mat ! Mat Vec v ! Vec integer ierr end subroutine subroutine MatTranspose(mat, reuse, B ,ierr) Mat mat ! Mat MatReuse reuse ! MatReuse Mat B ! Mat integer ierr end subroutine subroutine MatIsTranspose(A, B, tol, flg ,ierr) Mat A ! Mat Mat B ! Mat PetscReal tol ! PetscReal PetscTruth flg ! PetscTruth integer ierr end subroutine subroutine MatIsHermitianTranspose(A, B, tol, flg ,ierr) Mat A ! Mat Mat B ! Mat PetscReal tol ! PetscReal PetscTruth flg ! PetscTruth integer ierr end subroutine subroutine MatPermute(mat, row, col, B ,ierr) Mat mat ! Mat IS row ! IS IS col ! IS Mat B ! Mat integer ierr end subroutine subroutine MatPermuteSparsify(A, band, frac, tol, rowp, colp, B & &,ierr) Mat A ! Mat PetscInt band ! PetscInt PetscReal frac ! PetscReal PetscReal tol ! PetscReal IS rowp ! IS IS colp ! IS Mat B ! Mat integer ierr end subroutine subroutine MatEqual(A, B, flg ,ierr) Mat A ! Mat Mat B ! Mat PetscTruth flg ! PetscTruth integer ierr end subroutine subroutine MatDiagonalScale(mat, l, r ,ierr) Mat mat ! Mat Vec l ! Vec Vec r ! Vec integer ierr end subroutine subroutine MatScale(mat, a ,ierr) Mat mat ! Mat PetscScalar a ! PetscScalar integer ierr end subroutine subroutine MatNorm(mat, type, nrm ,ierr) Mat mat ! Mat NormType type ! NormType PetscReal nrm ! PetscReal integer ierr end subroutine subroutine MatAssemblyBegin(mat, type ,ierr) Mat mat ! Mat MatAssemblyType type ! MatAssemblyType integer ierr end subroutine subroutine MatAssembled(mat, assembled ,ierr) Mat mat ! Mat PetscTruth assembled ! PetscTruth integer ierr end subroutine subroutine MatAssemblyEnd(mat, type ,ierr) Mat mat ! Mat MatAssemblyType type ! MatAssemblyType integer ierr end subroutine subroutine MatCompress(mat ,ierr) Mat mat ! Mat integer ierr end subroutine subroutine MatSetOption(mat, op, flg ,ierr) Mat mat ! Mat MatOption op ! MatOption PetscTruth flg ! PetscTruth integer ierr end subroutine subroutine MatZeroEntries(mat ,ierr) Mat mat ! Mat integer ierr end subroutine subroutine MatGetSize(mat, m, n ,ierr) Mat mat ! Mat PetscInt m ! PetscInt PetscInt n ! PetscInt integer ierr end subroutine subroutine MatGetLocalSize(mat, m, n ,ierr) Mat mat ! Mat PetscInt m ! PetscInt PetscInt n ! PetscInt integer ierr end subroutine subroutine MatGetOwnershipRangeColumn(mat, m, n ,ierr) Mat mat ! Mat PetscInt m ! 
PetscInt PetscInt n ! PetscInt integer ierr end subroutine subroutine MatGetOwnershipRange(mat, m, n ,ierr) Mat mat ! Mat PetscInt m ! PetscInt PetscInt n ! PetscInt integer ierr end subroutine subroutine MatILUFactorSymbolic(fact, mat, row, col, info ,ierr)& & Mat fact ! Mat Mat mat ! Mat IS row ! IS IS col ! IS MatFactorInfo info ! MatFactorInfo integer ierr end subroutine subroutine MatICCFactorSymbolic(fact, mat, perm, info ,ierr) Mat fact ! Mat Mat mat ! Mat IS perm ! IS MatFactorInfo info ! MatFactorInfo integer ierr end subroutine subroutine MatIncreaseOverlap(mat, n, is, ov ,ierr) Mat mat ! Mat PetscInt n ! PetscInt IS is (*) ! IS PetscInt ov ! PetscInt integer ierr end subroutine subroutine MatGetBlockSize(mat, bs ,ierr) Mat mat ! Mat PetscInt bs ! PetscInt integer ierr end subroutine subroutine MatSetBlockSize(mat, bs ,ierr) Mat mat ! Mat PetscInt bs ! PetscInt integer ierr end subroutine subroutine MatSetUnfactored(mat ,ierr) Mat mat ! Mat integer ierr end subroutine subroutine MatGetSubMatrix(mat, isrow, iscol, csize, cll, newmat& & ,ierr) Mat mat ! Mat IS isrow ! IS IS iscol ! IS PetscInt csize ! PetscInt MatReuse cll ! MatReuse Mat newmat ! Mat integer ierr end subroutine subroutine MatGetSubMatrixRaw(mat, nrows, rows, ncols, cols, & &csize, cll, newmat ,ierr) Mat mat ! Mat PetscInt nrows ! PetscInt PetscInt rows (*) ! PetscInt PetscInt ncols ! PetscInt PetscInt cols (*) ! PetscInt PetscInt csize ! PetscInt MatReuse cll ! MatReuse Mat newmat ! Mat integer ierr end subroutine subroutine MatStashSetInitialSize(mat, size, bsize ,ierr) Mat mat ! Mat PetscInt size ! PetscInt PetscInt bsize ! PetscInt integer ierr end subroutine subroutine MatInterpolateAdd(A, x, y, w ,ierr) Mat A ! Mat Vec x ! Vec Vec y ! Vec Vec w ! Vec integer ierr end subroutine subroutine MatInterpolate(A, x, y ,ierr) Mat A ! Mat Vec x ! Vec Vec y ! Vec integer ierr end subroutine subroutine MatRestrict(A, x, y ,ierr) Mat A ! Mat Vec x ! Vec Vec y ! Vec integer ierr end subroutine subroutine MatNullSpaceAttach(mat, nullsp ,ierr) Mat mat ! Mat MatNullSpace nullsp ! MatNullSpace integer ierr end subroutine subroutine MatICCFactor(mat, row, info ,ierr) Mat mat ! Mat IS row ! IS MatFactorInfo info ! MatFactorInfo integer ierr end subroutine subroutine MatSetValuesAdic(mat, v ,ierr) Mat mat ! Mat PetscVoid v ! void integer ierr end subroutine subroutine MatSetColoring(mat, coloring ,ierr) Mat mat ! Mat ISColoring coloring ! ISColoring integer ierr end subroutine subroutine MatSetValuesAdifor(mat, nl, v ,ierr) Mat mat ! Mat PetscInt nl ! PetscInt PetscVoid v ! void integer ierr end subroutine subroutine MatDiagonalScaleLocal(mat, diag ,ierr) Mat mat ! Mat Vec diag ! Vec integer ierr end subroutine subroutine MatGetInertia(mat, nneg, nzero, npos ,ierr) Mat mat ! Mat PetscInt nneg ! PetscInt PetscInt nzero ! PetscInt PetscInt npos ! PetscInt integer ierr end subroutine subroutine MatIsSymmetric(A, tol, flg ,ierr) Mat A ! Mat PetscReal tol ! PetscReal PetscTruth flg ! PetscTruth integer ierr end subroutine subroutine MatIsHermitian(A, tol, flg ,ierr) Mat A ! Mat PetscReal tol ! PetscReal PetscTruth flg ! PetscTruth integer ierr end subroutine subroutine MatIsSymmetricKnown(A, set, flg ,ierr) Mat A ! Mat PetscTruth set ! PetscTruth PetscTruth flg ! PetscTruth integer ierr end subroutine subroutine MatIsHermitianKnown(A, set, flg ,ierr) Mat A ! Mat PetscTruth set ! PetscTruth PetscTruth flg ! PetscTruth integer ierr end subroutine subroutine MatIsStructurallySymmetric(A, flg ,ierr) Mat A ! Mat PetscTruth flg ! 
PetscTruth integer ierr end subroutine subroutine MatStashGetInfo(mat, nstash, reallocs, bnstash, & &breallocs ,ierr) Mat mat ! Mat PetscInt nstash ! PetscInt PetscInt reallocs ! PetscInt PetscInt bnstash ! PetscInt PetscInt breallocs ! PetscInt integer ierr end subroutine subroutine MatFactorInfoInitialize(info ,ierr) MatFactorInfo info ! MatFactorInfo integer ierr end subroutine subroutine MatPtAP(A, P, scall, fill, C ,ierr) Mat A ! Mat Mat P ! Mat MatReuse scall ! MatReuse PetscReal fill ! PetscReal Mat C ! Mat integer ierr end subroutine subroutine MatPtAPNumeric(A, P, C ,ierr) Mat A ! Mat Mat P ! Mat Mat C ! Mat integer ierr end subroutine subroutine MatPtAPSymbolic(A, P, fill, C ,ierr) Mat A ! Mat Mat P ! Mat PetscReal fill ! PetscReal Mat C ! Mat integer ierr end subroutine subroutine MatMatMult(A, B, scall, fill, C ,ierr) Mat A ! Mat Mat B ! Mat MatReuse scall ! MatReuse PetscReal fill ! PetscReal Mat C ! Mat integer ierr end subroutine subroutine MatMatMultSymbolic(A, B, fill, C ,ierr) Mat A ! Mat Mat B ! Mat PetscReal fill ! PetscReal Mat C ! Mat integer ierr end subroutine subroutine MatMatMultNumeric(A, B, C ,ierr) Mat A ! Mat Mat B ! Mat Mat C ! Mat integer ierr end subroutine subroutine MatMatMultTranspose(A, B, scall, fill, C ,ierr) Mat A ! Mat Mat B ! Mat MatReuse scall ! MatReuse PetscReal fill ! PetscReal Mat C ! Mat integer ierr end subroutine subroutine MatHasOperation(mat, op, has ,ierr) Mat mat ! Mat MatOperation op ! MatOperation PetscTruth has ! PetscTruth integer ierr end subroutine subroutine MatReorderForNonzeroDiagonal(mat, abstol, ris, cis , & &ierr) Mat mat ! Mat PetscReal abstol ! PetscReal IS ris ! IS IS cis ! IS integer ierr end subroutine subroutine MatMultEqual(A, B, n, flg ,ierr) Mat A ! Mat Mat B ! Mat PetscInt n ! PetscInt PetscTruth flg ! PetscTruth integer ierr end subroutine subroutine MatMultAddEqual(A, B, n, flg ,ierr) Mat A ! Mat Mat B ! Mat PetscInt n ! PetscInt PetscTruth flg ! PetscTruth integer ierr end subroutine subroutine MatMultTransposeEqual(A, B, n, flg ,ierr) Mat A ! Mat Mat B ! Mat PetscInt n ! PetscInt PetscTruth flg ! PetscTruth integer ierr end subroutine subroutine MatMultTransposeAddEqual(A, B, n, flg ,ierr) Mat A ! Mat Mat B ! Mat PetscInt n ! PetscInt PetscTruth flg ! PetscTruth integer ierr end subroutine subroutine MatAXPY(Y, a, X, str ,ierr) Mat Y ! Mat PetscScalar a ! PetscScalar Mat X ! Mat MatStructure str ! MatStructure integer ierr end subroutine subroutine MatShift(Y, a ,ierr) Mat Y ! Mat PetscScalar a ! PetscScalar integer ierr end subroutine subroutine MatDiagonalSet(Y, D, is ,ierr) Mat Y ! Mat Vec D ! Vec InsertMode is ! InsertMode integer ierr end subroutine subroutine MatAYPX(Y, a, X, str ,ierr) Mat Y ! Mat PetscScalar a ! PetscScalar Mat X ! Mat MatStructure str ! MatStructure integer ierr end subroutine subroutine MatComputeExplicitOperator(inmat, mat ,ierr) Mat inmat ! Mat Mat mat ! Mat integer ierr end subroutine subroutine MatCreate(comm, A ,ierr) integer comm ! MPI_Comm Mat A ! Mat integer ierr end subroutine subroutine MatSetSizes(A, m, n, mupper, nupper ,ierr) Mat A ! Mat PetscInt m ! PetscInt PetscInt n ! PetscInt PetscInt mupper ! PetscInt PetscInt nupper ! PetscInt integer ierr end subroutine subroutine MatSetFromOptions(B ,ierr) Mat B ! Mat integer ierr end subroutine subroutine MatSetUpPreallocation(B ,ierr) Mat B ! Mat integer ierr end subroutine subroutine MatGetColumnVector(A, yy, col ,ierr) Mat A ! Mat Vec yy ! Vec PetscInt col ! 
PetscInt integer ierr end subroutine subroutine MatShellSetContext(mat, ctx ,ierr) Mat mat ! Mat PetscVoid ctx ! void integer ierr end subroutine subroutine MatSeqBAIJInvertBlockDiagonal(mat ,ierr) Mat mat ! Mat integer ierr end subroutine subroutine MatSeqBAIJSetColumnIndices(mat, indices ,ierr) Mat mat ! Mat PetscInt indices ! PetscInt integer ierr end subroutine subroutine MatCreateSeqBAIJWithArrays(comm, bs, m, n, i, j, a, & &mat ,ierr) integer comm ! MPI_Comm PetscInt bs ! PetscInt PetscInt m ! PetscInt PetscInt n ! PetscInt PetscInt i ! PetscInt PetscInt j ! PetscInt PetscScalar a ! PetscScalar Mat mat ! Mat integer ierr end subroutine subroutine MatMPIBAIJSetHashTableFactor(mat, fact ,ierr) Mat mat ! Mat PetscReal fact ! PetscReal integer ierr end subroutine subroutine MatSeqAIJSetColumnIndices(mat, indices ,ierr) Mat mat ! Mat PetscInt indices ! PetscInt integer ierr end subroutine subroutine MatStoreValues(mat ,ierr) Mat mat ! Mat integer ierr end subroutine subroutine MatRetrieveValues(mat ,ierr) Mat mat ! Mat integer ierr end subroutine subroutine MatSeqAIJSetPreallocationCSR(B, i, j, v ,ierr) Mat B ! Mat PetscInt i (*) ! PetscInt PetscInt j (*) ! PetscInt PetscScalar v (*) ! PetscScalar integer ierr end subroutine subroutine MatCreateSeqAIJWithArrays(comm, m, n, i, j, a, mat , & &ierr) integer comm ! MPI_Comm PetscInt m ! PetscInt PetscInt n ! PetscInt PetscInt i ! PetscInt PetscInt j ! PetscInt PetscScalar a ! PetscScalar Mat mat ! Mat integer ierr end subroutine subroutine MatMPIAIJSetPreallocationCSR(B, i, j, v ,ierr) Mat B ! Mat PetscInt i (*) ! PetscInt PetscInt j (*) ! PetscInt PetscScalar v (*) ! PetscScalar integer ierr end subroutine subroutine MatCreateMPIAIJWithArrays(comm, m, n, mupper, nupper & &, i, j, a, mat ,ierr) integer comm ! MPI_Comm PetscInt m ! PetscInt PetscInt n ! PetscInt PetscInt mupper ! PetscInt PetscInt nupper ! PetscInt PetscInt i (*) ! PetscInt PetscInt j (*) ! PetscInt PetscScalar a (*) ! PetscScalar Mat mat ! Mat integer ierr end subroutine subroutine MatMerge(comm, inmat, n, scall, outmat ,ierr) integer comm ! MPI_Comm Mat inmat ! Mat PetscInt n ! PetscInt MatReuse scall ! MatReuse Mat outmat ! Mat integer ierr end subroutine subroutine MatGetLocalMat(A, scall, A_loc ,ierr) Mat A ! Mat MatReuse scall ! MatReuse Mat A_loc ! Mat integer ierr end subroutine subroutine MatCreateMPIAIJWithSplitArrays(comm, m, n, mupper, & &nupper, i, j, a, oi, oj, oa, mat ,ierr) integer comm ! MPI_Comm PetscInt m ! PetscInt PetscInt n ! PetscInt PetscInt mupper ! PetscInt PetscInt nupper ! PetscInt PetscInt i (*) ! PetscInt PetscInt j (*) ! PetscInt PetscScalar a (*) ! PetscScalar PetscInt oi (*) ! PetscInt PetscInt oj (*) ! PetscInt PetscScalar oa (*) ! PetscScalar Mat mat ! Mat integer ierr end subroutine subroutine MatMFFDDSSetUmin(A, umin ,ierr) Mat A ! Mat PetscReal umin ! PetscReal integer ierr end subroutine subroutine MatMFFDWPSetComputeNormU(A, flag ,ierr) Mat A ! Mat PetscTruth flag ! PetscTruth integer ierr end subroutine subroutine MatMFFDSetFromOptions(mat ,ierr) Mat mat ! Mat integer ierr end subroutine subroutine MatCreateMFFD(comm, m, n, mupper, nupper, J ,ierr) integer comm ! MPI_Comm PetscInt m ! PetscInt PetscInt n ! PetscInt PetscInt mupper ! PetscInt PetscInt nupper ! PetscInt Mat J ! Mat integer ierr end subroutine subroutine MatMFFDGetH(mat, h ,ierr) Mat mat ! Mat PetscScalar h ! PetscScalar integer ierr end subroutine subroutine MatMFFDSetPeriod(mat, period ,ierr) Mat mat ! Mat PetscInt period ! 
PetscInt integer ierr end subroutine subroutine MatMFFDSetFunctionError(mat, error ,ierr) Mat mat ! Mat PetscReal error ! PetscReal integer ierr end subroutine subroutine MatMFFDAddNullSpace(J, nullsp ,ierr) Mat J ! Mat MatNullSpace nullsp ! MatNullSpace integer ierr end subroutine subroutine MatMFFDSetHHistory(J, history, nhistory ,ierr) Mat J ! Mat PetscScalar history (*) ! PetscScalar PetscInt nhistory ! PetscInt integer ierr end subroutine subroutine MatMFFDResetHHistory(J ,ierr) Mat J ! Mat integer ierr end subroutine subroutine MatMFFDSetBase(J, U, F ,ierr) Mat J ! Mat Vec U ! Vec Vec F ! Vec integer ierr end subroutine subroutine MatMFFDCheckPositivity(dummy, U, a, h ,ierr) PetscVoid dummy ! void Vec U ! Vec Vec a ! Vec PetscScalar h ! PetscScalar integer ierr end subroutine subroutine MatSeqSBAIJSetColumnIndices(mat, indices ,ierr) Mat mat ! Mat PetscInt indices ! PetscInt integer ierr end subroutine subroutine MatCreateSeqSBAIJWithArrays(comm, bs, m, n, i, j, a, & &mat ,ierr) integer comm ! MPI_Comm PetscInt bs ! PetscInt PetscInt m ! PetscInt PetscInt n ! PetscInt PetscInt i ! PetscInt PetscInt j ! PetscInt PetscScalar a ! PetscScalar Mat mat ! Mat integer ierr end subroutine subroutine MatISGetLocalMat(mat, local ,ierr) Mat mat ! Mat Mat local ! Mat integer ierr end subroutine subroutine MatCreateIS(comm, m, n, mupper, nupper, map, A ,ierr)& & integer comm ! MPI_Comm PetscInt m ! PetscInt PetscInt n ! PetscInt PetscInt mupper ! PetscInt PetscInt nupper ! PetscInt ISLocalToGlobalMapping map ! ISLocalToGlobalMapping Mat A ! Mat integer ierr end subroutine subroutine MatDenseGetLocalMatrix(A, B ,ierr) Mat A ! Mat Mat B ! Mat integer ierr end subroutine subroutine MatCreateNormal(A, N ,ierr) Mat A ! Mat Mat N ! Mat integer ierr end subroutine subroutine MatCreateLRC(A, U, V, N ,ierr) Mat A ! Mat Mat U ! Mat Mat V ! Mat Mat N ! Mat integer ierr end subroutine subroutine MatScatterGetVecScatter(mat, scatter ,ierr) Mat mat ! Mat VecScatter scatter ! VecScatter integer ierr end subroutine subroutine MatScatterSetVecScatter(mat, scatter ,ierr) Mat mat ! Mat VecScatter scatter ! VecScatter integer ierr end subroutine subroutine MatCreateSeqFFTW(comm, ndim, dim, A ,ierr) integer comm ! MPI_Comm PetscInt ndim ! PetscInt PetscInt dim (*) ! PetscInt Mat A ! Mat integer ierr end subroutine subroutine MatCompositeAddMat(mat, smat ,ierr) Mat mat ! Mat Mat smat ! Mat integer ierr end subroutine subroutine MatCreateTranspose(A, N ,ierr) Mat A ! Mat Mat N ! 
Mat
      integer ierr
      end subroutine
      subroutine MatRegisterDAAD(ierr)
      integer ierr
      end subroutine
-------------- next part --------------
#if !defined(PETSC_USE_FORTRAN_MODULES)
#include "finclude/ftn-custom/petscmatdef.h90"
#endif
#if defined(PETSC_USE_FORTRAN_DATATYPES) && !defined(MAT_HIDE)
#define MAT_HIDE type(Mat)
#define MATFDCOLORING_HIDE type(MatFDColoring)
#define MATNULLSPACE_HIDE type(MatNullSpace)
#define USE_MAT_HIDE use petscmatdef
#elif !defined(MAT_HIDE)
#define MAT_HIDE Mat
#define MATFDCOLORING_HIDE MatFDColoring
#define MATNULLSPACE_HIDE MatNullSpace
#define USE_MAT_HIDE
#endif

      Interface
        Subroutine MatGetArrayF90(mat,array,ierr)
          USE_MAT_HIDE
          PetscScalar, pointer :: array(:,:)
          PetscErrorCode ierr
          MAT_HIDE mat
        End Subroutine
      End Interface

      Interface
        Subroutine MatRestoreArrayF90(mat,array,ierr)
          USE_MAT_HIDE
          PetscScalar, pointer :: array(:,:)
          PetscErrorCode ierr
          MAT_HIDE mat
        End Subroutine
      End Interface

      Interface
        Subroutine MatGetInfo(mat, flag, info ,ierr)
          USE_MAT_HIDE
          MAT_HIDE mat
          MatInfoType flag
          MatInfo info(:)
          PetscErrorCode ierr
        End Subroutine
      End Interface
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: matrixf.c
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: zmatrixf.c
URL:
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: matrix.c
URL:

From bsmith at mcs.anl.gov Fri May 29 15:32:22 2009
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Fri, 29 May 2009 15:32:22 -0500
Subject: VecView behaviour
In-Reply-To: <4A1FAC32.4010507@student.uibk.ac.at>
References: <4A1FAC32.4010507@student.uibk.ac.at>
Message-ID: <2B57ADF8-D4AB-4938-BCA5-291C96063C8E@mcs.anl.gov>

On May 29, 2009, at 4:34 AM, Andreas Grassl wrote:

> Hello,
>
> I'm working with the PCNN preconditioner and hence with
> ISLocalToGlobalMapping.
> After solving I want to write the solution to an ASCII file where
> only the
> values belonging to the "external" global numbering are given and not followed
                                                          ^^^^^^^^^^^^^^^^^^^^^^^
>
> by the zeros.

    What do you mean? What parts of the vector do you want?

>
> Currently I'm giving these commands:
>
> ierr =
> PetscViewerSetFormat(viewer,PETSC_VIEWER_ASCII_SYMMODU);CHKERRQ(ierr);
> ierr = VecView(X,viewer);CHKERRQ(ierr);
>
> Does anybody have an idea which option or function could help me?
>
> cheers,
>
> ando
>
> --
>  /"\   Grassl Andreas
>  \ /   ASCII Ribbon Campaign   Uni Innsbruck Institut f. Mathematik
>   X    against HTML email      Technikerstr. 13 Zi 709
>  / \                           +43 (0)512 507 6091

From s.kramer at imperial.ac.uk Sat May 30 07:44:09 2009
From: s.kramer at imperial.ac.uk (Stephan Kramer)
Date: Sat, 30 May 2009 13:44:09 +0100
Subject: Mismatch in explicit fortran interface for MatGetInfo
In-Reply-To:
References: <4A18016F.6030805@imperial.ac.uk> <0A67546F-4327-4265-B94D-B889B94644E5@mcs.anl.gov>
Message-ID: <4A212A19.3090404@imperial.ac.uk>

Thanks a lot for looking into this. The explicit fortran interfaces are in
general very useful. The problem occurred for me with petsc-3.0.0-p1. I'm
happy to try it out with a more recent patch level or with petsc-dev.

My current workaround is to simply wrap the call to MatGetInfo inside an
external subroutine that doesn't use the fortran interfaces, so we can still
apply the interfaces in the rest of the code.
This is part of our CFD code that has to build on a number of different
platforms, so I will probably have to maintain this workaround so that it
builds with whatever petsc 3 version is available on a given platform.

The attached files were for current petsc-dev?

Cheers
Stephan

Satish Balay wrote:
> I have the following fixed files [untested yet]. But could you tell me
> how you've configured PETSc? - and what patchlevel?
>
> [It appears that there are quite a few breakages with
> --with-fortran-interfaces=1 - I might fix this in petsc-dev - not
> 3.0.0 - as it depends upon some f90 interface changes that are only in
> petsc-dev]
>
> Attaching the modified files that go with my untested fix.
>
> include/finclude/ftn-auto/petscmat.h90
> include/finclude/ftn-custom/petscmat.h90
> src/mat/interface/ftn-auto/matrixf.c
> src/mat/interface/ftn-custom/zmatrixf.c
> src/mat/interface/matrix.c
>
> Satish
>
> On Wed, 27 May 2009, Barry Smith wrote:
>
>> Stephan,
>>
>> Satish is working on the patch for this and will get it to you shortly.
>>
>> Sorry for the delay, we were debating how to handle it.
>>
>> Barry
>>
>> On May 23, 2009, at 9:00 AM, Stephan Kramer wrote:
>>
>>> Hi all,
>>>
>>> First of all, thanks a lot for providing explicit fortran interfaces for
>>> most functions in Petsc 3. This is of great help. I do however run into a
>>> problem using MatGetInfo. The calling sequence for fortran (according to the
>>> manual) is:
>>>
>>> double precision info(MAT_INFO_SIZE)
>>> Mat A
>>> integer ierr
>>>
>>> call MatGetInfo(A,MAT_LOCAL,info,ierr)
>>>
>>> The interface however seems to indicate that the info argument has to be a
>>> single double precision (i.e. a scalar, not an array). I guess with implicit
>>> interfaces this sort of thing would work, but with the provided explicit
>>> interface, at least gfortran won't let me have it.
>>>
>>> Cheers
>>> Stephan
>>>

From balay at mcs.anl.gov Sat May 30 11:02:56 2009
From: balay at mcs.anl.gov (Satish Balay)
Date: Sat, 30 May 2009 11:02:56 -0500 (CDT)
Subject: Mismatch in explicit fortran interface for MatGetInfo
In-Reply-To: <4A212A19.3090404@imperial.ac.uk>
References: <4A18016F.6030805@imperial.ac.uk> <0A67546F-4327-4265-B94D-B889B94644E5@mcs.anl.gov> <4A212A19.3090404@imperial.ac.uk>
Message-ID:

On Sat, 30 May 2009, Stephan Kramer wrote:

> Thanks a lot for looking into this. The explicit fortran interfaces are in
> general very useful. The problem occurred for me with petsc-3.0.0-p1. I'm
> happy to try it out with a more recent patch level or with petsc-dev.

Did you configure with '--with-fortran-interfaces=1' or are you
directly using '#include "finclude/ftn-auto/petscmat.h90"'?

With my builds --with-fortran-interfaces=1 is broken with p0 [so p1
might also be broken]. There was some reorganizing of the f90 interface in
a newer patch level [p4 or p5] - and that's also broken [so petsc-dev
is also broken].

> My current workaround is to simply wrap the call to MatGetInfo inside an
> external subroutine that doesn't use the fortran interfaces, so we can still
> apply the interfaces in the rest of the code. This is part of our CFD code
> that has to build on a number of different platforms, so I will probably have
> to maintain this workaround, so it will build with whatever petsc 3 version is
> available on a given platform.
>
> The attached files were for current petsc-dev?

These are from petsc-3.0.0 - but it should also work with petsc-dev.
It's tested by building PETSc normally and changing the example ex12f.F as
follows:

asterix:/home/balay/tmp/petsc-dist-test>hg diff src/ksp/ksp/examples/tests/ex12f.F
diff -r 981c76f817e6 src/ksp/ksp/examples/tests/ex12f.F
--- a/src/ksp/ksp/examples/tests/ex12f.F  Tue May 26 22:13:06 2009 -0500
+++ b/src/ksp/ksp/examples/tests/ex12f.F  Sat May 30 11:02:03 2009 -0500
@@ -8,6 +8,10 @@
 #include "finclude/petscpc.h"
 #include "finclude/petscksp.h"
 #include "finclude/petscviewer.h"
+#define PETSC_USE_FORTRAN_INTERFACES
+#include "finclude/petscmat.h90"
+#undef PETSC_USE_FORTRAN_INTERFACES
+
 !
 !  This example is the Fortran version of ex6.c. The program reads a PETSc matrix
 !  and vector from a file and solves a linear system. Input arguments are:
asterix:/home/balay/tmp/petsc-dist-test>

Satish

> Cheers
> Stephan
>
> Satish Balay wrote:
> > I have the following fixed files [untested yet]. But could you tell me
> > how you've configured PETSc? - and what patchlevel?
> >
> > [It appears that there are quite a few breakages with
> > --with-fortran-interfaces=1 - I might fix this in petsc-dev - not
> > 3.0.0 - as it depends upon some f90 interface changes that are only in
> > petsc-dev]
> >
> > Attaching the modified files that go with my untested fix.
> >
> > include/finclude/ftn-auto/petscmat.h90
> > include/finclude/ftn-custom/petscmat.h90
> > src/mat/interface/ftn-auto/matrixf.c
> > src/mat/interface/ftn-custom/zmatrixf.c
> > src/mat/interface/matrix.c
> >
> > Satish
> >
> > On Wed, 27 May 2009, Barry Smith wrote:
> >
> > > Stephan,
> > >
> > > Satish is working on the patch for this and will get it to you shortly.
> > >
> > > Sorry for the delay, we were debating how to handle it.
> > >
> > > Barry
> > >
> > > On May 23, 2009, at 9:00 AM, Stephan Kramer wrote:
> > >
> > > > Hi all,
> > > >
> > > > First of all, thanks a lot for providing explicit fortran interfaces for
> > > > most functions in Petsc 3. This is of great help. I do however run into a
> > > > problem using MatGetInfo. The calling sequence for fortran (according to
> > > > the manual) is:
> > > >
> > > > double precision info(MAT_INFO_SIZE)
> > > > Mat A
> > > > integer ierr
> > > >
> > > > call MatGetInfo(A,MAT_LOCAL,info,ierr)
> > > >
> > > > The interface however seems to indicate that the info argument has to be a
> > > > single double precision (i.e. a scalar, not an array). I guess with implicit
> > > > interfaces this sort of thing would work, but with the provided explicit
> > > > interface, at least gfortran won't let me have it.
> > > >
> > > > Cheers
> > > > Stephan
> > > >

From s.kramer at imperial.ac.uk Sat May 30 11:11:30 2009
From: s.kramer at imperial.ac.uk (Stephan Kramer)
Date: Sat, 30 May 2009 17:11:30 +0100
Subject: Mismatch in explicit fortran interface for MatGetInfo
In-Reply-To:
References: <4A18016F.6030805@imperial.ac.uk> <0A67546F-4327-4265-B94D-B889B94644E5@mcs.anl.gov> <4A212A19.3090404@imperial.ac.uk>
Message-ID: <4A215AB2.2010900@imperial.ac.uk>

Satish Balay wrote:
> On Sat, 30 May 2009, Stephan Kramer wrote:
>
>> Thanks a lot for looking into this. The explicit fortran interfaces are in
>> general very useful. The problem occurred for me with petsc-3.0.0-p1. I'm
>> happy to try it out with a more recent patch level or with petsc-dev.
>
> Did you configure with '--with-fortran-interfaces=1' or are you
> directly using '#include "finclude/ftn-auto/petscmat.h90"'?
>

Configured with '--with-fortran-interfaces=1', yes, and then using them via the
fortran modules: "use petscksp", "use petscmat", etc.

> With my builds --with-fortran-interfaces=1 is broken with p0 [so p1
> might also be broken]. There was some reorganizing of the f90 interface in
> a newer patch level [p4 or p5] - and that's also broken [so petsc-dev
> is also broken].
>
>> My current workaround is to simply wrap the call to MatGetInfo inside an
>> external subroutine that doesn't use the fortran interfaces, so we can still
>> apply the interfaces in the rest of the code. This is part of our CFD code
>> that has to build on a number of different platforms, so I will probably have
>> to maintain this workaround, so it will build with whatever petsc 3 version is
>> available on a given platform.
>>
>> The attached files were for current petsc-dev?
>
> These are from petsc-3.0.0 - but it should also work with
> petsc-dev. It's tested by building PETSc normally and changing the
> example ex12f.F as follows:
>
> asterix:/home/balay/tmp/petsc-dist-test>hg diff src/ksp/ksp/examples/tests/ex12f.F
> diff -r 981c76f817e6 src/ksp/ksp/examples/tests/ex12f.F
> --- a/src/ksp/ksp/examples/tests/ex12f.F  Tue May 26 22:13:06 2009 -0500
> +++ b/src/ksp/ksp/examples/tests/ex12f.F  Sat May 30 11:02:03 2009 -0500
> @@ -8,6 +8,10 @@
>  #include "finclude/petscpc.h"
>  #include "finclude/petscksp.h"
>  #include "finclude/petscviewer.h"
> +#define PETSC_USE_FORTRAN_INTERFACES
> +#include "finclude/petscmat.h90"
> +#undef PETSC_USE_FORTRAN_INTERFACES
> +
>  !
>  !  This example is the Fortran version of ex6.c. The program reads a PETSc matrix
>  !  and vector from a file and solves a linear system. Input arguments are:
> asterix:/home/balay/tmp/petsc-dist-test>
>
> Satish
>

I'll have a go at it

>> Cheers
>> Stephan
>>
>> Satish Balay wrote:
>>> I have the following fixed files [untested yet]. But could you tell me
>>> how you've configured PETSc? - and what patchlevel?
>>>
>>> [It appears that there are quite a few breakages with
>>> --with-fortran-interfaces=1 - I might fix this in petsc-dev - not
>>> 3.0.0 - as it depends upon some f90 interface changes that are only in
>>> petsc-dev]
>>>
>>> Attaching the modified files that go with my untested fix.
>>>
>>> include/finclude/ftn-auto/petscmat.h90
>>> include/finclude/ftn-custom/petscmat.h90
>>> src/mat/interface/ftn-auto/matrixf.c
>>> src/mat/interface/ftn-custom/zmatrixf.c
>>> src/mat/interface/matrix.c
>>>
>>> Satish
>>>
>>> On Wed, 27 May 2009, Barry Smith wrote:
>>>
>>>> Stephan,
>>>>
>>>> Satish is working on the patch for this and will get it to you shortly.
>>>>
>>>> Sorry for the delay, we were debating how to handle it.
>>>>
>>>> Barry
>>>>
>>>> On May 23, 2009, at 9:00 AM, Stephan Kramer wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> First of all, thanks a lot for providing explicit fortran interfaces for
>>>>> most functions in Petsc 3. This is of great help. I do however run into a
>>>>> problem using MatGetInfo. The calling sequence for fortran (according to
>>>>> the manual) is:
>>>>>
>>>>> double precision info(MAT_INFO_SIZE)
>>>>> Mat A
>>>>> integer ierr
>>>>>
>>>>> call MatGetInfo(A,MAT_LOCAL,info,ierr)
>>>>>
>>>>> The interface however seems to indicate that the info argument has to be a
>>>>> single double precision (i.e. a scalar, not an array). I guess with implicit
>>>>> interfaces this sort of thing would work, but with the provided explicit
>>>>> interface, at least gfortran won't let me have it.
>>>>>
>>>>> Cheers
>>>>> Stephan
>>>>>
>>
>
>
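
For reference, the calling sequence under discussion can be exercised with a
small wrapper along the following lines. This is only an illustrative sketch
consistent with the manual excerpt quoted in the thread, not code taken from
any of the messages above: the subroutine name print_local_mat_info is made up,
the matrix A is assumed to be assembled, and the exact way the explicit
interfaces are made visible (the petscmat module from
--with-fortran-interfaces=1 versus only the plain finclude headers) depends on
the PETSc version and configuration in use.

!  Sketch (editor's illustration) of a MatGetInfo call written so that it
!  also satisfies an explicit interface that declares info as an array,
!  as in the patched petscmat.h90 above.  Assumes a petsc-3.0.0-style
!  build; the subroutine name is hypothetical.
      subroutine print_local_mat_info(A, ierr)
      implicit none
#include "finclude/petsc.h"
#include "finclude/petscmat.h"
      Mat A
      PetscErrorCode ierr
      double precision info(MAT_INFO_SIZE)

!     MAT_LOCAL requests the information for the local part of the matrix
      call MatGetInfo(A, MAT_LOCAL, info, ierr)

!     the MAT_INFO_* constants are indices into the info array
      write(*,*) 'nonzeros allocated:', info(MAT_INFO_NZ_ALLOCATED)
      write(*,*) 'nonzeros used:     ', info(MAT_INFO_NZ_USED)
      end subroutine

With the unpatched 3.0.0 explicit interface, which declares info as a scalar,
gfortran rejects exactly this kind of call - the mismatch reported at the start
of the thread.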