From knepley at gmail.com Wed Apr 1 11:57:56 2009 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 1 Apr 2009 11:57:56 -0500 Subject: Parallel partitioning of the matrix In-Reply-To: References: Message-ID: On Tue, Mar 31, 2009 at 3:58 PM, Nguyen, Hung V ERDC-ITL-MS < Hung.V.Nguyen at usace.army.mil> wrote: > All, > > I have a test case that each processor reads its owned part of matrix in > csr > format dumped out by CFD application. > Note: the partitions of matrix were done by ParMetis. > > Code below shows how to insert data into PETSc matrix (gmap is globalmap). > The solution from PETSc is very closed to CFD solution so I think it is > correct. > > My question is whether the parallel partitioning of the matrix is > determined > by PETSc at runtime or is the same as ParMetis? > > Thank you, > > -hung > --- > /* create a matrix object */ > MatCreateMPIAIJ(PETSC_COMM_WORLD, my_own, my_own,M,M, mnnz, ^^^^^^^^^^ You have determined the partitioning right here. Matt > > PETSC_NULL, mnnz, PETSC_NULL, &A); > > for(i =0; i < my_own; i++) { > int row = gmap[i]; > for (j = ia[i]; j < ia[i+1]; j++) { > int col = ja[j]; > jj = gmap[col]; > MatSetValues(A,1,&row,1,&jj,&val[j], INSERT_VALUES); > } > } > /* free temporary arrays */ > free(val); free(ja); free(ia); > > /* assemble the matrix and vectors*/ > MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); > MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From Hung.V.Nguyen at usace.army.mil Wed Apr 1 13:34:18 2009 From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS) Date: Wed, 1 Apr 2009 13:34:18 -0500 Subject: Parallel partitioning of the matrix In-Reply-To: References: Message-ID: Matt, Thank you for the info. -Hung -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Matthew Knepley Sent: Wednesday, April 01, 2009 11:58 AM To: PETSc users list Subject: Re: Parallel partitioning of the matrix On Tue, Mar 31, 2009 at 3:58 PM, Nguyen, Hung V ERDC-ITL-MS wrote: All, I have a test case that each processor reads its owned part of matrix in csr format dumped out by CFD application. Note: the partitions of matrix were done by ParMetis. Code below shows how to insert data into PETSc matrix (gmap is globalmap). The solution from PETSc is very closed to CFD solution so I think it is correct. My question is whether the parallel partitioning of the matrix is determined by PETSc at runtime or is the same as ParMetis? Thank you, -hung --- /* create a matrix object */ MatCreateMPIAIJ(PETSC_COMM_WORLD, my_own, my_own,M,M, mnnz, ^^^^^^^^^^ You have determined the partitioning right here. Matt PETSC_NULL, mnnz, PETSC_NULL, &A); for(i =0; i < my_own; i++) { int row = gmap[i]; for (j = ia[i]; j < ia[i+1]; j++) { int col = ja[j]; jj = gmap[col]; MatSetValues(A,1,&row,1,&jj,&val[j], INSERT_VALUES); } } /* free temporary arrays */ free(val); free(ja); free(ia); /* assemble the matrix and vectors*/ MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
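A minimal sketch of the point above, reusing the poster's names (my_own, gmap, ia, ja, val, mnnz are the variables from the message; the wrapper function itself is invented for illustration). The local sizes passed to MatCreateMPIAIJ are what fix the parallel layout: each process is given a contiguous block of my_own global rows, so the PETSc partition coincides with the ParMetis one only if gmap numbers each subdomain's rows contiguously; otherwise the entries are simply shipped to their owners during assembly.

#include "petscmat.h"

/* Assemble a parallel AIJ matrix from a locally owned CSR chunk.
   my_own = number of rows this process owns (from the ParMetis partition),
   gmap   = local-to-global index map, ia/ja/val = local CSR arrays,
   M      = global size, mnnz = per-row nonzero estimate.                  */
PetscErrorCode AssembleFromLocalCSR(PetscInt my_own,PetscInt M,PetscInt mnnz,
                                    const PetscInt gmap[],const PetscInt ia[],
                                    const PetscInt ja[],const PetscScalar val[],Mat *A)
{
  PetscErrorCode ierr;
  PetscInt       i,j,row,col;

  /* the local sizes (my_own x my_own) decide which rows this process owns */
  ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD,my_own,my_own,M,M,
                         mnnz,PETSC_NULL,mnnz,PETSC_NULL,A);CHKERRQ(ierr);
  for (i = 0; i < my_own; i++) {
    row = gmap[i];
    for (j = ia[i]; j < ia[i+1]; j++) {
      col  = gmap[ja[j]];
      ierr = MatSetValues(*A,1,&row,1,&col,&val[j],INSERT_VALUES);CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(*A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(*A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  return 0;
}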
-- Norbert Wiener From sperif at gmail.com Thu Apr 2 05:28:49 2009 From: sperif at gmail.com (Pierre-Yves Aquilanti) Date: Thu, 2 Apr 2009 12:28:49 +0200 Subject: ksp parallel excecution, question on ex5 ksp tutorials Message-ID: <2b9153980904020328o66166842kbcbacda44542784f@mail.gmail.com> Hello, i had a few questions on the ex5 tutorial of ksp. It is described as a parallel execution of two ksp. Does it mean that each ksp solve is running on a single processor or the both are running in parallel on both processors (on the cas mpirun -np 2) ? Because i don't really see the distinction with ex2 of the same tutorial. If it's not running two ksp in parallel, each on a single processor (in ex5), is it possible to do that anyway ? Would i need to directly use MPI or is there is any Petsc way to do that ? Thank you very much for your time. Best Regards PYA -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Thu Apr 2 09:27:00 2009 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Thu, 2 Apr 2009 09:27:00 -0500 (CDT) Subject: ksp parallel excecution, question on ex5 ksp tutorials In-Reply-To: <2b9153980904020328o66166842kbcbacda44542784f@mail.gmail.com> References: <2b9153980904020328o66166842kbcbacda44542784f@mail.gmail.com> Message-ID: PYA, > i had a few questions on the ex5 tutorial of ksp. It is described as a > parallel execution of two ksp. Does it mean that each ksp solve is running > on a single processor or the both are running in parallel on both processors > (on the cas mpirun -np 2) ? Because i don't really see the distinction with > ex2 of the same tutorial. > > If it's not running two ksp in parallel, each on a single processor (in > ex5), is it possible to do that anyway ? Would i need to directly use MPI or > is there is any Petsc way to do that ? KSP is an abstract PETSc object that manages all linear methods. It requires a small overhead to create. In ex5, we solve two linear systems with same number of processors: C_1 x = b and C_2 x = b Thus, we only create the object ksp once and use it for both systems. You can run ex2 with any number of processors, e.g. mpiexec -n ./ex2 You do not need call MPI message passing routines when using petsc. PETSc wrapps the MPI communication and enable users focus on the high level math modeling and computation. See http://www.mcs.anl.gov/petsc/petsc-as/documentation/tutorials/index.html. Hong From irfan.khan at gatech.edu Thu Apr 2 11:54:34 2009 From: irfan.khan at gatech.edu (Khan, Irfan) Date: Thu, 2 Apr 2009 12:54:34 -0400 (EDT) Subject: setting values in parallel vectors In-Reply-To: <2129348064.1067761238690041564.JavaMail.root@mail4.gatech.edu> Message-ID: <993247267.1074641238691274946.JavaMail.root@mail4.gatech.edu> Hello I have a question about setting values in parallel vectors. Which of the following two options is more efficient or does it matter at all. Using VecGhostGetLocalForm: - Obtain the local array form of global vector using VeGhostGetLocalForm() and VecGetArray() - Fill in the values - Use VecGhostRestoreLocalForm() and VecRestoreArray() - Use VecGhostUpdateBegin() and VecGhostUpdateEnd() Using VecSetValues: - Fill in the values of the values in the global parallel vector using VecSetValues() - Use VecAssemblyBegin() and VecAssemblyEnd() Please note that in both the cases the values being filled are local values to the rank. 
Thanks Irfan Graduate Research Assistant Woodruff School of Mechanical Engineering Georgia Institute of Technology Atlanta, GA From knepley at gmail.com Thu Apr 2 12:02:44 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 2 Apr 2009 12:02:44 -0500 Subject: setting values in parallel vectors In-Reply-To: <993247267.1074641238691274946.JavaMail.root@mail4.gatech.edu> References: <2129348064.1067761238690041564.JavaMail.root@mail4.gatech.edu> <993247267.1074641238691274946.JavaMail.root@mail4.gatech.edu> Message-ID: They are the same. This just pull out a pointer to the storage. Matt On Thu, Apr 2, 2009 at 11:54 AM, Khan, Irfan wrote: > Hello > I have a question about setting values in parallel vectors. Which of the > following two options is more efficient or does it matter at all. > > Using VecGhostGetLocalForm: > > - Obtain the local array form of global vector using VeGhostGetLocalForm() > and VecGetArray() > - Fill in the values > - Use VecGhostRestoreLocalForm() and VecRestoreArray() > - Use VecGhostUpdateBegin() and VecGhostUpdateEnd() > > Using VecSetValues: > > - Fill in the values of the values in the global parallel vector using > VecSetValues() > - Use VecAssemblyBegin() and VecAssemblyEnd() > > > Please note that in both the cases the values being filled are local values > to the rank. > > Thanks > Irfan > Graduate Research Assistant > Woodruff School of Mechanical Engineering > Georgia Institute of Technology > Atlanta, GA > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Thu Apr 2 13:16:09 2009 From: jed at 59A2.org (Jed Brown) Date: Thu, 2 Apr 2009 20:16:09 +0200 Subject: setting values in parallel vectors In-Reply-To: <993247267.1074641238691274946.JavaMail.root@mail4.gatech.edu> References: <2129348064.1067761238690041564.JavaMail.root@mail4.gatech.edu> <993247267.1074641238691274946.JavaMail.root@mail4.gatech.edu> Message-ID: <20090402181609.GA13759@brakk.ethz.ch> On Thu 2009-04-02 12:54, Khan, Irfan wrote: > Hello > I have a question about setting values in parallel vectors. Which of the following two options is more efficient or does it matter at all. > > Using VecGhostGetLocalForm: > > - Obtain the local array form of global vector using VeGhostGetLocalForm() and VecGetArray() > - Fill in the values > - Use VecGhostRestoreLocalForm() and VecRestoreArray() > - Use VecGhostUpdateBegin() and VecGhostUpdateEnd() > > Using VecSetValues: > > - Fill in the values of the values in the global parallel vector using VecSetValues() > - Use VecAssemblyBegin() and VecAssemblyEnd() > > > Please note that in both the cases the values being filled are local values to the rank. These choices are not equivalent. Assuming you use VecGhostUpdateBegin(x,INSERT_VALUES,SCATTER_FORWARD); VecGhostUpdateEnd(x,INSERT_VALUES,SCATTER_FORWARD); the ghosted values will be updated on every process. In contrast VecAssemblyBegin/End only updates the owner's copy, it knows nothing about the ghost values. If you are only setting owned values, VecAssembly* does almost nothing, you will still have to update the ghost values. Note that if you only need owned values, you can call VecGetArray on the global form instead of working with the local form. 
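Concretely, the two options from the question look roughly like the sketch below, assuming a vector created with VecCreateGhost(); the function names, myvalues and globalidx are placeholders. Both variants end with the same ghost update.

#include "petscvec.h"

/* Option A: write the owned entries through the raw array, then scatter to the ghosts. */
PetscErrorCode FillViaArray(Vec x,const PetscScalar myvalues[])
{
  PetscErrorCode ierr;
  PetscScalar    *a;
  PetscInt       i,nlocal;

  ierr = VecGetLocalSize(x,&nlocal);CHKERRQ(ierr);
  ierr = VecGetArray(x,&a);CHKERRQ(ierr);      /* global form is enough for owned values */
  for (i = 0; i < nlocal; i++) a[i] = myvalues[i];
  ierr = VecRestoreArray(x,&a);CHKERRQ(ierr);
  ierr = VecGhostUpdateBegin(x,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecGhostUpdateEnd(x,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  return 0;
}

/* Option B: VecSetValues with global indices; the assembly moves entries to their
   owners (almost a no-op here, since only owned entries are set), and the ghosts
   still need the same scatter afterwards. */
PetscErrorCode FillViaSetValues(Vec x,PetscInt n,const PetscInt globalidx[],
                                const PetscScalar myvalues[])
{
  PetscErrorCode ierr;

  ierr = VecSetValues(x,n,globalidx,myvalues,INSERT_VALUES);CHKERRQ(ierr);
  ierr = VecAssemblyBegin(x);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(x);CHKERRQ(ierr);
  ierr = VecGhostUpdateBegin(x,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecGhostUpdateEnd(x,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  return 0;
}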
Setting local values directly after VecGetArray (with or without the local form) is faster, but it's irrelevant (VecSetValues is plenty fast). Jed -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From irfan.khan at gatech.edu Thu Apr 2 14:09:51 2009 From: irfan.khan at gatech.edu (Khan, Irfan) Date: Thu, 2 Apr 2009 15:09:51 -0400 (EDT) Subject: setting values in parallel vectors In-Reply-To: <1518491891.1135751238699151851.JavaMail.root@mail4.gatech.edu> Message-ID: <853651851.1137531238699391807.JavaMail.root@mail4.gatech.edu> Thank you, that was very helpful. Please do let me know if I understood this right. Generally VecSetValues()+VecGhostUpdateBegin/End() is faster than VecGetArray()+assign_array()+VecRestoreArray()+VecGhostUpdateBegin/End(). Also both these operation would be equivalent. Thank you Irfan ----- Original Message ----- From: "Jed Brown" To: petsc-users at mcs.anl.gov Sent: Thursday, April 2, 2009 2:16:09 PM GMT -05:00 US/Canada Eastern Subject: Re: setting values in parallel vectors On Thu 2009-04-02 12:54, Khan, Irfan wrote: > Hello > I have a question about setting values in parallel vectors. Which of the following two options is more efficient or does it matter at all. > > Using VecGhostGetLocalForm: > > - Obtain the local array form of global vector using VeGhostGetLocalForm() and VecGetArray() > - Fill in the values > - Use VecGhostRestoreLocalForm() and VecRestoreArray() > - Use VecGhostUpdateBegin() and VecGhostUpdateEnd() > > Using VecSetValues: > > - Fill in the values of the values in the global parallel vector using VecSetValues() > - Use VecAssemblyBegin() and VecAssemblyEnd() > > > Please note that in both the cases the values being filled are local values to the rank. These choices are not equivalent. Assuming you use VecGhostUpdateBegin(x,INSERT_VALUES,SCATTER_FORWARD); VecGhostUpdateEnd(x,INSERT_VALUES,SCATTER_FORWARD); the ghosted values will be updated on every process. In contrast VecAssemblyBegin/End only updates the owner's copy, it knows nothing about the ghost values. If you are only setting owned values, VecAssembly* does almost nothing, you will still have to update the ghost values. Note that if you only need owned values, you can call VecGetArray on the global form instead of working with the local form. Setting local values directly after VecGetArray (with or without the local form) is faster, but it's irrelevant (VecSetValues is plenty fast). Jed From jed at 59A2.org Thu Apr 2 14:18:06 2009 From: jed at 59A2.org (Jed Brown) Date: Thu, 2 Apr 2009 21:18:06 +0200 Subject: setting values in parallel vectors In-Reply-To: <853651851.1137531238699391807.JavaMail.root@mail4.gatech.edu> References: <1518491891.1135751238699151851.JavaMail.root@mail4.gatech.edu> <853651851.1137531238699391807.JavaMail.root@mail4.gatech.edu> Message-ID: <20090402191806.GC13759@brakk.ethz.ch> On Thu 2009-04-02 15:09, Khan, Irfan wrote: > Thank you, that was very helpful. Please do let me know if I understood this right. Generally VecSetValues()+VecGhostUpdateBegin/End() is faster than VecGetArray()+assign_array()+VecRestoreArray()+VecGhostUpdateBegin/End(). Also both these operation would be equivalent. These operations would be equivalent. The second option would be faster, but only because it doesn't do a function call and range-checking in an inner loop. 
It would be nearly impossible to tell the difference in a real code so don't worry about it, just use whichever is more natural. Jed -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From knepley at gmail.com Thu Apr 2 14:18:56 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 2 Apr 2009 14:18:56 -0500 Subject: setting values in parallel vectors In-Reply-To: <853651851.1137531238699391807.JavaMail.root@mail4.gatech.edu> References: <1518491891.1135751238699151851.JavaMail.root@mail4.gatech.edu> <853651851.1137531238699391807.JavaMail.root@mail4.gatech.edu> Message-ID: On Thu, Apr 2, 2009 at 2:09 PM, Khan, Irfan wrote: > Thank you, that was very helpful. Please do let me know if I understood > this right. Generally VecSetValues()+VecGhostUpdateBegin/End() is faster > than > VecGetArray()+assign_array()+VecRestoreArray()+VecGhostUpdateBegin/End(). > Also both these operation would be equivalent. 1) The ghost update is independent of the method used to set values 2) Getting the array is general faster than a function call Matt > > Thank you > Irfan > > ----- Original Message ----- > From: "Jed Brown" > To: petsc-users at mcs.anl.gov > Sent: Thursday, April 2, 2009 2:16:09 PM GMT -05:00 US/Canada Eastern > Subject: Re: setting values in parallel vectors > > On Thu 2009-04-02 12:54, Khan, Irfan wrote: > > Hello > > I have a question about setting values in parallel vectors. Which of the > following two options is more efficient or does it matter at all. > > > > Using VecGhostGetLocalForm: > > > > - Obtain the local array form of global vector using > VeGhostGetLocalForm() and VecGetArray() > > - Fill in the values > > - Use VecGhostRestoreLocalForm() and VecRestoreArray() > > - Use VecGhostUpdateBegin() and VecGhostUpdateEnd() > > > > Using VecSetValues: > > > > - Fill in the values of the values in the global parallel vector using > VecSetValues() > > - Use VecAssemblyBegin() and VecAssemblyEnd() > > > > > > Please note that in both the cases the values being filled are local > values to the rank. > > These choices are not equivalent. Assuming you use > > VecGhostUpdateBegin(x,INSERT_VALUES,SCATTER_FORWARD); > VecGhostUpdateEnd(x,INSERT_VALUES,SCATTER_FORWARD); > > the ghosted values will be updated on every process. In contrast > VecAssemblyBegin/End only updates the owner's copy, it knows nothing > about the ghost values. If you are only setting owned values, > VecAssembly* does almost nothing, you will still have to update the > ghost values. Note that if you only need owned values, you can call > VecGetArray on the global form instead of working with the local form. > > Setting local values directly after VecGetArray (with or without the > local form) is faster, but it's irrelevant (VecSetValues is plenty > fast). > > Jed > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From irfan.khan at gatech.edu Thu Apr 2 14:22:59 2009 From: irfan.khan at gatech.edu (Khan, Irfan) Date: Thu, 2 Apr 2009 15:22:59 -0400 (EDT) Subject: setting values in parallel vectors In-Reply-To: <20090402191806.GC13759@brakk.ethz.ch> Message-ID: <438343194.1144411238700179358.JavaMail.root@mail4.gatech.edu> Excellent! 
Thank you Irfan ----- Original Message ----- From: "Jed Brown" To: petsc-users at mcs.anl.gov Sent: Thursday, April 2, 2009 3:18:06 PM GMT -05:00 US/Canada Eastern Subject: Re: setting values in parallel vectors On Thu 2009-04-02 15:09, Khan, Irfan wrote: > Thank you, that was very helpful. Please do let me know if I understood this right. Generally VecSetValues()+VecGhostUpdateBegin/End() is faster than VecGetArray()+assign_array()+VecRestoreArray()+VecGhostUpdateBegin/End(). Also both these operation would be equivalent. These operations would be equivalent. The second option would be faster, but only because it doesn't do a function call and range-checking in an inner loop. It would be nearly impossible to tell the difference in a real code so don't worry about it, just use whichever is more natural. Jed From irfan.khan at gatech.edu Thu Apr 2 14:46:32 2009 From: irfan.khan at gatech.edu (Khan, Irfan) Date: Thu, 2 Apr 2009 15:46:32 -0400 (EDT) Subject: VecSwap() Message-ID: <1041473720.1155901238701591986.JavaMail.root@mail4.gatech.edu> Does VecSwap() copy ghost entries too. i.e. if I have two similar parallel ghosted vectors, then swaping them using VecSwap(), would it also swap the local ghost entries of the two vectors? It doesn't seem to do so, please let me know if I am wrong. Thank you Irfan From knepley at gmail.com Thu Apr 2 14:59:45 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 2 Apr 2009 14:59:45 -0500 Subject: VecSwap() In-Reply-To: <1041473720.1155901238701591986.JavaMail.root@mail4.gatech.edu> References: <1041473720.1155901238701591986.JavaMail.root@mail4.gatech.edu> Message-ID: On Thu, Apr 2, 2009 at 2:46 PM, Khan, Irfan wrote: > Does VecSwap() copy ghost entries too. i.e. if I have two similar parallel > ghosted vectors, then swaping them using VecSwap(), would it also swap the > local ghost entries of the two vectors? > It doesn't seem to do so, please let me know if I am wrong. No, VecSwap() does not swap ghost values as it is intended to be purely local. You can call GhostUpdate() afterwards to get this behavior. Matt > > Thank you > Irfan > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Apr 2 17:01:04 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 2 Apr 2009 17:01:04 -0500 Subject: VecSwap() In-Reply-To: References: <1041473720.1155901238701591986.JavaMail.root@mail4.gatech.edu> Message-ID: On Apr 2, 2009, at 2:59 PM, Matthew Knepley wrote: > On Thu, Apr 2, 2009 at 2:46 PM, Khan, Irfan > wrote: > Does VecSwap() copy ghost entries too. i.e. if I have two similar > parallel ghosted vectors, then swaping them using VecSwap(), would > it also swap the local ghost entries of the two vectors? > It doesn't seem to do so, please let me know if I am wrong. > > No, VecSwap() does not swap ghost values as it is intended to be > purely local. You can call GhostUpdate() > afterwards to get this behavior. Or you can use VecGetLocalRepresentation() and do the swap on that. > > > Matt > > > Thank you > Irfan > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. 
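A short sketch of the pattern suggested above (swap, then refresh the ghost copies), assuming two ghosted vectors with the same layout; the wrapper name is invented:

#include "petscvec.h"

/* Swap two ghosted vectors and bring their ghost copies back in sync;
   VecSwap() itself only exchanges the owned entries. */
PetscErrorCode SwapWithGhosts(Vec x,Vec y)
{
  PetscErrorCode ierr;

  ierr = VecSwap(x,y);CHKERRQ(ierr);
  ierr = VecGhostUpdateBegin(x,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecGhostUpdateEnd(x,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecGhostUpdateBegin(y,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  ierr = VecGhostUpdateEnd(y,INSERT_VALUES,SCATTER_FORWARD);CHKERRQ(ierr);
  return 0;
}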
> -- Norbert Wiener From recrusader at gmail.com Thu Apr 2 21:19:18 2009 From: recrusader at gmail.com (Yujie) Date: Thu, 2 Apr 2009 19:19:18 -0700 Subject: MatMatMult_MPIDense_MPIDense() works currently? Message-ID: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> Hi, PETSc Developers I am wondering whether MatMatMult_MPIDense_MPIDense() works currently based on PLAPACK? Thanks a lot. Regards, Yujie -------------- next part -------------- An HTML attachment was scrubbed... URL: From irfan.khan at gatech.edu Thu Apr 2 21:19:19 2009 From: irfan.khan at gatech.edu (Khan, Irfan) Date: Thu, 2 Apr 2009 22:19:19 -0400 (EDT) Subject: creating parallel vectors In-Reply-To: <30296014.1262531238721678792.JavaMail.root@mail4.gatech.edu> Message-ID: <1595747583.1271951238725159944.JavaMail.root@mail4.gatech.edu> Thank you for the prompt help. It have benefited me a lot. I have another question about creating parallel vectors. Is it possible to create parallel vectors by specifying the global indices of the entries? For instance in VecCreateGhost(MPI_Comm comm,PetscInt n,PetscInt N,PetscInt nghost,const PetscInt global_ordering[],Vec *vv); is it possible to give the global ordering of all the local+ghost entries of the vector through "global_ordering". Thus the vector will not create a new petsc ordering. Thank you Irfan From bsmith at mcs.anl.gov Thu Apr 2 22:02:58 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 2 Apr 2009 22:02:58 -0500 Subject: MatMatMult_MPIDense_MPIDense() works currently? In-Reply-To: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> Message-ID: On Apr 2, 2009, at 9:19 PM, Yujie wrote: > Hi, PETSc Developers > > I am wondering whether MatMatMult_MPIDense_MPIDense() works > currently based on PLAPACK? Thanks a lot. > No, if you run it you will see it print an error message. I tried to debug PLAPACK to determine the problem but it was awfully complicated and had to give up. Certainly someone else could try to debug PLAPACK to determine the problem. PLAPACK is not supported so unfortunately there is no one to complain to about it and you'd have to fix it yourself. Barry > Regards, > > Yujie > From bsmith at mcs.anl.gov Thu Apr 2 22:04:19 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 2 Apr 2009 22:04:19 -0500 Subject: creating parallel vectors In-Reply-To: <1595747583.1271951238725159944.JavaMail.root@mail4.gatech.edu> References: <1595747583.1271951238725159944.JavaMail.root@mail4.gatech.edu> Message-ID: <216A084B-916B-40B4-8B92-5F9B33C44B9B@mcs.anl.gov> On Apr 2, 2009, at 9:19 PM, Khan, Irfan wrote: > Thank you for the prompt help. It have benefited me a lot. > > I have another question about creating parallel vectors. Is it > possible to create parallel vectors by specifying the global indices > of the entries? > For instance in > VecCreateGhost(MPI_Comm comm,PetscInt n,PetscInt N,PetscInt > nghost,const PetscInt global_ordering[],Vec *vv); > > is it possible to give the global ordering of all the local+ghost > entries of the vector through "global_ordering". Thus the vector > will not create a new petsc ordering. No, indexing for PETSc parallel vectors is ALWAYS with the numbering where the first process has 0 to n1-1 second process n1 to n2_1 etc. Barry > > > > Thank you > Irfan From recrusader at gmail.com Thu Apr 2 22:52:45 2009 From: recrusader at gmail.com (Yujie) Date: Thu, 2 Apr 2009 19:52:45 -0800 Subject: MatMatMult_MPIDense_MPIDense() works currently? 
In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> Message-ID: <7ff0ee010904022052g4b0e3b2bg70785389532230ea@mail.gmail.com> thanks for your reply, Barry. According to your judgement, where is the problem? thanks. Regards, Yujie On Thu, Apr 2, 2009 at 7:02 PM, Barry Smith wrote: > > On Apr 2, 2009, at 9:19 PM, Yujie wrote: > > Hi, PETSc Developers >> >> I am wondering whether MatMatMult_MPIDense_MPIDense() works currently >> based on PLAPACK? Thanks a lot. >> >> > No, if you run it you will see it print an error message. > > I tried to debug PLAPACK to determine the problem but it was awfully > complicated and had to give up. Certainly someone else > could try to debug PLAPACK to determine the problem. PLAPACK is not > supported so unfortunately there is no one to complain to about it and you'd > have to fix it yourself. > > Barry > > > > Regards, >> >> Yujie >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Andreas.Grassl at student.uibk.ac.at Fri Apr 3 04:29:31 2009 From: Andreas.Grassl at student.uibk.ac.at (Andreas Grassl) Date: Fri, 03 Apr 2009 11:29:31 +0200 Subject: PCNN preconditioner and setting the interface In-Reply-To: References: <49C8E3F7.9000900@student.uibk.ac.at> <2309C09C-D6F2-4163-B8D6-78EF4A2CEA93@mcs.anl.gov> <49C90B8F.5020007@student.uibk.ac.at> Message-ID: <49D5D6FB.4050602@student.uibk.ac.at> Barry Smith schrieb: > > On Mar 24, 2009, at 11:34 AM, Andreas Grassl wrote: > >> Barry Smith schrieb: >>> >>> On Mar 24, 2009, at 8:45 AM, Andreas Grassl wrote: >>> >>>> Hello, >>>> >>>> I'm working with a FE-Software where I get out the element stiffness >>>> matrices and the element-node correspondency to setup the stiffness >>>> matrix for solving with PETSc. >>>> >>>> I'm currently fighting with the interface definition. My >>>> LocalToGlobalMapping for test-purposes was the "identity"-IS, but I >>>> guess this is far from the optimum, because nowhere is defined a node >>>> set of interface nodes. >>>> >>>> How do I declare the interface? Is it simply a reordering of the nodes, >>>> the inner nodes are numbered first and the interface nodes last? >>> >>> Here's the deal. Over all the processors you have to have a single >>> GLOBAL numbering of the >>> nodes. The first process starts with 0 and each process starts off with >>> one more than then previous process had. >> >> I am confused now, because after you said to use MatSetValuesLocal() to >> put the values in the matrix, i thought local means the unique >> (sequential) numbering independent of the processors in use and global a >> processor-specific (parallel) numbering. > > No, each process has its own independent local numbering from 0 to > nlocal-1 > the islocaltoglobalmapping you create gives the global number for each > local number. >> >> >> So the single GLOBAL numbering is the numbering obtained from the >> FE-Software represented by {0,...,23} >> >> 0 o o O o 5 >> | >> 6 o o O o o >> | >> O--O--O--O--O--O >> | >> o o o O o 23 >> >> And I set the 4 different local numberings {0,...,11}, {0,...,8}, >> {0,...7}, {0,...,5} with the call of ISLocalToGlobalMappingCreate? >> >> How do I set the different indices? >> {0,1,2,3,6,7,8,9,12,13,14,15} would be the index vector for the upper >> left subdomain and {3,9,12,13,14,15} the index vector for the interface >> f it. > > I don't understand your figure, but I don't think it matters. It is a 2D grid arising from a FE-discretization with 4 node-elements. 
The small o-nodes are inner nodes, the big O-nodes are interface nodes, numbered row-wise from upper left to lower right. Let's assume this node numbers correspond to the DOF-number in the system of equation and we don't regard the boundary for now, so I receive a 24x24 Matrix which has to be partitioned into 4 subdomains. >> >> >> The struct PC_IS defined in src/ksp/pc/impls/is/pcis.h contains IS >> holding such an information (I suppose at least), but I have no idea how >> to use them efficiently. >> >> Do I have to manage a PC_IS object for every subdomain? > > In the way it is implemented EACH process has ONE subdomain. Thus > each process has > ONE local to global mapping. Is there a possibility to set this mentioned mapping from my global view? Or do I have to assemble the matrix locally? > > You are getting yourself confused thinking things are more > complicated than they really > are. I'll try to change my point of view to understand the things easier ;-) cheers ando -- /"\ Grassl Andreas \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik X against HTML email Technikerstr. 13 Zi 709 / \ +43 (0)512 507 6091 From recrusader at gmail.com Fri Apr 3 13:02:33 2009 From: recrusader at gmail.com (Yujie) Date: Fri, 3 Apr 2009 11:02:33 -0700 Subject: MatMatMult_MPIDense_MPIDense() works currently? In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> Message-ID: <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> Dear Barry: I am trying to debug the codes you have written with ex123.c. After commenting the error output (SETERRQ(PETSC_ERR_LIB,"Due to aparent bugs in PLAPACK,this is not currently supported");) in MatMatMultSymbolic_MPIDense_MPIDense(). The errors I got is "Caught signal number 11 SEGV: Segmentation Violation". It takes place in "PLA_Obj_set_to_zero(lu->A);" in MatMPIDenseCopyToPlapack(). To my understanding, if you want to set "lu->A" to zero, you first assign memory to "lu->A". However, I can't find which function you do this in? Could you give me some advice? thanks a lot. Regards, Yujie On Thu, Apr 2, 2009 at 8:02 PM, Barry Smith wrote: > > On Apr 2, 2009, at 9:19 PM, Yujie wrote: > > Hi, PETSc Developers >> >> I am wondering whether MatMatMult_MPIDense_MPIDense() works currently >> based on PLAPACK? Thanks a lot. >> >> > No, if you run it you will see it print an error message. > > I tried to debug PLAPACK to determine the problem but it was awfully > complicated and had to give up. Certainly someone else > could try to debug PLAPACK to determine the problem. PLAPACK is not > supported so unfortunately there is no one to complain to about it and you'd > have to fix it yourself. > > Barry > > > > Regards, >> >> Yujie >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Apr 3 15:14:24 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 3 Apr 2009 15:14:24 -0500 Subject: MatMatMult_MPIDense_MPIDense() works currently? In-Reply-To: <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> Message-ID: Yujie You are are on your own on this. I spent (wasted) many hours trying to debug the PLAPACK problem, I'm not going to deal with it again. You'll have to work through the code yourself. Barry On Apr 3, 2009, at 1:02 PM, Yujie wrote: > Dear Barry: > > I am trying to debug the codes you have written with ex123.c. 
After > commenting the error output (SETERRQ(PETSC_ERR_LIB,"Due to aparent > bugs in PLAPACK,this is not currently supported");) in > MatMatMultSymbolic_MPIDense_MPIDense(). > The errors I got is "Caught signal number 11 SEGV: Segmentation > Violation". It takes place in "PLA_Obj_set_to_zero(lu->A);" in > > MatMPIDenseCopyToPlapack(). > > To my understanding, if you want to set "lu->A" to zero, you first > assign memory to "lu->A". However, I can't find which function you > do this in? Could you give me some advice? thanks a lot. > > Regards, > > Yujie > > > On Thu, Apr 2, 2009 at 8:02 PM, Barry Smith > wrote: > > On Apr 2, 2009, at 9:19 PM, Yujie wrote: > > Hi, PETSc Developers > > I am wondering whether MatMatMult_MPIDense_MPIDense() works > currently based on PLAPACK? Thanks a lot. > > > No, if you run it you will see it print an error message. > > I tried to debug PLAPACK to determine the problem but it was > awfully complicated and had to give up. Certainly someone else > could try to debug PLAPACK to determine the problem. PLAPACK is not > supported so unfortunately there is no one to complain to about it > and you'd have to fix it yourself. > > Barry > > > > Regards, > > Yujie > > > From bsmith at mcs.anl.gov Fri Apr 3 15:20:16 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 3 Apr 2009 15:20:16 -0500 Subject: PCNN preconditioner and setting the interface In-Reply-To: <49D5D6FB.4050602@student.uibk.ac.at> References: <49C8E3F7.9000900@student.uibk.ac.at> <2309C09C-D6F2-4163-B8D6-78EF4A2CEA93@mcs.anl.gov> <49C90B8F.5020007@student.uibk.ac.at> <49D5D6FB.4050602@student.uibk.ac.at> Message-ID: <1EF9205C-ACDE-48EE-9B63-B9F1F81EFD6D@mcs.anl.gov> On Apr 3, 2009, at 4:29 AM, Andreas Grassl wrote: > Barry Smith schrieb: >> >> On Mar 24, 2009, at 11:34 AM, Andreas Grassl wrote: >> >>> Barry Smith schrieb: >>>> >>>> On Mar 24, 2009, at 8:45 AM, Andreas Grassl wrote: >>>> >>>>> Hello, >>>>> >>>>> I'm working with a FE-Software where I get out the element >>>>> stiffness >>>>> matrices and the element-node correspondency to setup the >>>>> stiffness >>>>> matrix for solving with PETSc. >>>>> >>>>> I'm currently fighting with the interface definition. My >>>>> LocalToGlobalMapping for test-purposes was the "identity"-IS, >>>>> but I >>>>> guess this is far from the optimum, because nowhere is defined a >>>>> node >>>>> set of interface nodes. >>>>> >>>>> How do I declare the interface? Is it simply a reordering of the >>>>> nodes, >>>>> the inner nodes are numbered first and the interface nodes last? >>>> >>>> Here's the deal. Over all the processors you have to have a single >>>> GLOBAL numbering of the >>>> nodes. The first process starts with 0 and each process starts >>>> off with >>>> one more than then previous process had. >>> >>> I am confused now, because after you said to use >>> MatSetValuesLocal() to >>> put the values in the matrix, i thought local means the unique >>> (sequential) numbering independent of the processors in use and >>> global a >>> processor-specific (parallel) numbering. >> >> No, each process has its own independent local numbering from 0 to >> nlocal-1 >> the islocaltoglobalmapping you create gives the global number for >> each >> local number. 
>>> >>> >>> So the single GLOBAL numbering is the numbering obtained from the >>> FE-Software represented by {0,...,23} >>> >>> 0 o o O o 5 >>> | >>> 6 o o O o o >>> | >>> O--O--O--O--O--O >>> | >>> o o o O o 23 >>> >>> And I set the 4 different local numberings {0,...,11}, {0,...,8}, >>> {0,...7}, {0,...,5} with the call of ISLocalToGlobalMappingCreate? >>> >>> How do I set the different indices? >>> {0,1,2,3,6,7,8,9,12,13,14,15} would be the index vector for the >>> upper >>> left subdomain and {3,9,12,13,14,15} the index vector for the >>> interface >>> f it. >> >> I don't understand your figure, but I don't think it matters. > > It is a 2D grid arising from a FE-discretization with 4 node- > elements. The small > o-nodes are inner nodes, the big O-nodes are interface nodes, > numbered row-wise > from upper left to lower right. Let's assume this node numbers > correspond to the > DOF-number in the system of equation and we don't regard the > boundary for now, > so I receive a 24x24 Matrix which has to be partitioned into 4 > subdomains. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Once you have the global matrix you CANNOT partition it into the pieces needed for Neumann-Neumann type methods. That is the whole idea of NN methods. Each subdomain matrix is the contribution from certain ELEMENTS only. > > >>> >>> >>> The struct PC_IS defined in src/ksp/pc/impls/is/pcis.h contains IS >>> holding such an information (I suppose at least), but I have no >>> idea how >>> to use them efficiently. >>> >>> Do I have to manage a PC_IS object for every subdomain? >> >> In the way it is implemented EACH process has ONE subdomain. Thus >> each process has >> ONE local to global mapping. > > Is there a possibility to set this mentioned mapping from my global > view? Or do > I have to assemble the matrix locally? You have to assemble locally. Barry > > >> >> You are getting yourself confused thinking things are more >> complicated than they really >> are. > > I'll try to change my point of view to understand the things > easier ;-) > > cheers > > ando > > -- > /"\ Grassl Andreas > \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik > X against HTML email Technikerstr. 13 Zi 709 > / \ +43 (0)512 507 6091 From fuentesdt at gmail.com Sun Apr 5 15:48:54 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Sun, 5 Apr 2009 15:48:54 -0500 (CDT) Subject: MatMatMult_MPIDense_MPIDense() works currently? In-Reply-To: <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> Message-ID: Hi Yujie, as a work around have you tried converting your dense matrices to aij format and using MatMatMult_MPIAIJ_MPIAIJ()?? df On Fri, 3 Apr 2009, Yujie wrote: > > Dear Barry: > > I am trying to debug the codes you have written with ex123.c. After commenting the error output > (SETERRQ(PETSC_ERR_LIB,"Due to aparent bugs in PLAPACK,this is not currently supported");) in > MatMatMultSymbolic_MPIDense_MPIDense().? > > The errors I got is "Caught signal number 11 SEGV: Segmentation Violation". It takes place in > ?"PLA_Obj_set_to_zero(lu->A);" in? > > MatMPIDenseCopyToPlapack().? > > To my understanding, if you want to set "lu->A" to zero, you first assign memory to "lu->A". However, I can't find > which function you do this in? Could you give me some advice? thanks a lot. 
> > Regards, > > Yujie > > > On Thu, Apr 2, 2009 at 8:02 PM, Barry Smith wrote: > > On Apr 2, 2009, at 9:19 PM, Yujie wrote: > > Hi, PETSc Developers > > I am wondering whether MatMatMult_MPIDense_MPIDense() works currently based on PLAPACK? > Thanks a lot. > > > ?No, if you run it you will see it print an error message. > > ?I tried to debug PLAPACK to determine the problem but it was awfully complicated and had to give up. > Certainly someone else > could try to debug PLAPACK to determine the problem. PLAPACK is not supported so unfortunately there is no one > to complain to about it and you'd have to fix it yourself. > > ?Barry > > > > Regards, > > Yujie > > > > > From fuentesdt at gmail.com Sun Apr 5 15:53:23 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Sun, 5 Apr 2009 15:53:23 -0500 (CDT) Subject: MatMatMult_MPIDense_MPIDense() works currently? In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> Message-ID: also, MatMatMult_MPIAIJ_MPIDense() and MatMatMult_MPIDense_MPIAIJ() seem to work if you don't want/need to convert both matrices. df On Sun, 5 Apr 2009, David Fuentes wrote: > Hi Yujie, > > > as a work around have you tried converting your dense > matrices to aij format and using MatMatMult_MPIAIJ_MPIAIJ()?? > > > > > df > > > > > > > > On Fri, 3 Apr 2009, Yujie wrote: > >> >> Dear Barry: >> >> I am trying to debug the codes you have written with ex123.c. After >> commenting the error output >> (SETERRQ(PETSC_ERR_LIB,"Due to aparent bugs in PLAPACK,this is not >> currently supported");) in >> MatMatMultSymbolic_MPIDense_MPIDense().? >> >> The errors I got is "Caught signal number 11 SEGV: Segmentation Violation". >> It takes place in >> ?"PLA_Obj_set_to_zero(lu->A);" in? >> >> MatMPIDenseCopyToPlapack().? >> >> To my understanding, if you want to set "lu->A" to zero, you first assign >> memory to "lu->A". However, I can't find >> which function you do this in? Could you give me some advice? thanks a lot. >> >> Regards, >> >> Yujie >> >> >> On Thu, Apr 2, 2009 at 8:02 PM, Barry Smith wrote: >> >> On Apr 2, 2009, at 9:19 PM, Yujie wrote: >> >> Hi, PETSc Developers >> >> I am wondering whether MatMatMult_MPIDense_MPIDense() works >> currently based on PLAPACK? >> Thanks a lot. >> >> >> ?No, if you run it you will see it print an error message. >> >> ?I tried to debug PLAPACK to determine the problem but it was awfully >> complicated and had to give up. >> Certainly someone else >> could try to debug PLAPACK to determine the problem. PLAPACK is not >> supported so unfortunately there is no one >> to complain to about it and you'd have to fix it yourself. >> >> ?Barry >> >> >> >> Regards, >> >> Yujie >> >> >> >> > From fuentesdt at gmail.com Sun Apr 5 16:21:45 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Sun, 5 Apr 2009 16:21:45 -0500 (CDT) Subject: MatMatMult_MPIDense_MPIDense() works currently? 
In-Reply-To: <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> Message-ID: i've been preallocating the full matrix const PetscScalar zerotol = 1.e-6; PetscInt M,N,m,n; ierr = MatGetSize(MatDense,&M,&N);CHKERRQ(ierr); ierr = MatGetLocalSize(MatDense,&m,&n);CHKERRQ(ierr); ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD,m,n,M,N,n,PETSC_NULL, N-n,PETSC_NULL,&SparseMat);CHKERRQ(ierr); and when I convert I try to get only the non-zero entries const PetscScalar *vwork; const PetscInt *cwork; PetscInt Istart,Iend, nz; ierr=PetscPrintf(PETSC_COMM_WORLD, " Converting...\n");CHKERRQ(ierr); ierr = MatGetOwnershipRange(MatDense,&Istart,&Iend);CHKERRQ(ierr); for (PetscInt Ii=Istart; Ii zerotol ) { MatSetValue(SparseMat ,Ii,cwork[Jj],vwork[Jj],INSERT_VALUES); } } ierr = MatRestoreRow(MatDense,Ii,&nz,&cwork,&vwork);CHKERRQ(ierr); } ierr=PetscPrintf(PETSC_COMM_WORLD, " Assembling...\n");CHKERRQ(ierr); ierr = MatAssemblyBegin(SparseMat ,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); ierr = MatAssemblyEnd( SparseMat ,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); I haven't gotten a chance to look further into it, but I'm not sure if the if( std::abs(vwork[Jj]) > zerotol ) { MatSetValue(SparseMat ,Ii,cwork[Jj],vwork[Jj],INSERT_VALUES); } helps performance w/ future calls to matmatmult_mpiaij_mpiaij as I would suspect... df On Sun, 5 Apr 2009, Yujie wrote: > > Dear David: > > Thank you very much for your help. I tried to do this. However, there is not MatConvert_MPIDense() to convert dense > matrix to aij format. Maybe, the possible method is to create new aij matrix and copy the data into new matrix. > Thanks. > > Regards, > > Yujie > > > On Sun, Apr 5, 2009 at 1:48 PM, David Fuentes wrote: > Hi Yujie, > > > as a work around have you tried converting your dense > matrices to aij format and using MatMatMult_MPIAIJ_MPIAIJ()?? > > > > > df > > > > > > > > On Fri, 3 Apr 2009, Yujie wrote: > > > Dear Barry: > > I am trying to debug the codes you have written with ex123.c. After commenting the error > output > (SETERRQ(PETSC_ERR_LIB,"Due to aparent bugs in PLAPACK,this is not currently supported");) > in > MatMatMultSymbolic_MPIDense_MPIDense().? > > The errors I got is "Caught signal number 11 SEGV: Segmentation Violation". It takes place > in > ?"PLA_Obj_set_to_zero(lu->A);" in? > > MatMPIDenseCopyToPlapack().? > > To my understanding, if you want to set "lu->A" to zero, you first assign memory to "lu->A". > However, I can't find > which function you do this in? Could you give me some advice? thanks a lot. > > Regards, > > Yujie > > > On Thu, Apr 2, 2009 at 8:02 PM, Barry Smith wrote: > > ? ? ?On Apr 2, 2009, at 9:19 PM, Yujie wrote: > > ? ? ? ? ? ?Hi, PETSc Developers > > ? ? ? ? ? ?I am wondering whether MatMatMult_MPIDense_MPIDense() works currently based on > PLAPACK? > ? ? ? ? ? ?Thanks a lot. > > > ?No, if you run it you will see it print an error message. > > ?I tried to debug PLAPACK to determine the problem but it was awfully complicated and had to > give up. > Certainly someone else > could try to debug PLAPACK to determine the problem. PLAPACK is not supported so > unfortunately there is no one > to complain to about it and you'd have to fix it yourself. > > ?Barry > > > > ? ? ?Regards, > > ? ? 
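For reference, a self-contained version of the dense-to-AIJ copy sketched in the message above, with the loop bounds written out; MatDense, SparseMat and the drop tolerance follow the original, the wrapper function is invented, and the preallocation is the same worst-case choice (all n local columns in the diagonal block, the remaining N-n off-diagonal).

#include "petscmat.h"

/* Copy a parallel dense matrix into a new MPIAIJ matrix, dropping entries below
   a tolerance, so that MatMatMult_MPIAIJ_MPIAIJ can be used instead. */
PetscErrorCode DenseToAIJ(Mat MatDense,PetscReal zerotol,Mat *SparseMat)
{
  PetscErrorCode    ierr;
  PetscInt          M,N,m,n,Istart,Iend,Ii,j,nz;
  const PetscInt    *cwork;
  const PetscScalar *vwork;

  ierr = MatGetSize(MatDense,&M,&N);CHKERRQ(ierr);
  ierr = MatGetLocalSize(MatDense,&m,&n);CHKERRQ(ierr);
  ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD,m,n,M,N,n,PETSC_NULL,
                         N-n,PETSC_NULL,SparseMat);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(MatDense,&Istart,&Iend);CHKERRQ(ierr);
  for (Ii = Istart; Ii < Iend; Ii++) {
    ierr = MatGetRow(MatDense,Ii,&nz,&cwork,&vwork);CHKERRQ(ierr);
    for (j = 0; j < nz; j++) {
      if (PetscAbsScalar(vwork[j]) > zerotol) {
        ierr = MatSetValues(*SparseMat,1,&Ii,1,&cwork[j],&vwork[j],INSERT_VALUES);CHKERRQ(ierr);
      }
    }
    ierr = MatRestoreRow(MatDense,Ii,&nz,&cwork,&vwork);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(*SparseMat,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(*SparseMat,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  return 0;
}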
?Yujie > > > > > > > From Andreas.Grassl at student.uibk.ac.at Wed Apr 8 11:12:15 2009 From: Andreas.Grassl at student.uibk.ac.at (Andreas Grassl) Date: Wed, 08 Apr 2009 18:12:15 +0200 Subject: problems with MatLoad Message-ID: <49DCCCDF.4090104@student.uibk.ac.at> Hello, I got some success on the localtoglobalmapping, but now I'm stuck with writing to/reading from files. In a sequential code I write out some matrices with PetscViewerBinaryOpen(comms,matrixname,FILE_MODE_WRITE,&viewer); for (k=0;k References: <49DCCCDF.4090104@student.uibk.ac.at> Message-ID: On Wed, Apr 8, 2009 at 11:12 AM, Andreas Grassl < Andreas.Grassl at student.uibk.ac.at> wrote: > Hello, > > I got some success on the localtoglobalmapping, but now I'm stuck with > writing > to/reading from files. In a sequential code I write out some matrices with > > PetscViewerBinaryOpen(comms,matrixname,FILE_MODE_WRITE,&viewer); > for (k=0;k MatView(AS[k],viewer);} > PetscViewerDestroy(viewer); > > and want to read them in in a parallel program, where each processor should > own > one matrix: > > ierr = > > PetscViewerBinaryOpen(PETSC_COMM_WORLD,matrixname,FILE_MODE_READ,&viewer);CHKERRQ(ierr); The Viewer has COMM_WORLD, but you are reading a matrix with COMM_SELF. You need to create a separate viewer for each process to do what you want. Matt > > ierr = MatLoad(viewer,MATSEQAIJ,&AS[rank]);CHKERRQ(ierr); > ierr = MatAssemblyBegin(AS[rank], MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > ierr = MatAssemblyEnd(AS[rank], MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr); > > The program is hanging in the line with MatLoad and giving following output > on > every node: > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Argument out of range! > [0]PETSC ERROR: Comm must be of size 1! > > I tried to sequentialize with PetscSequentialPhaseBegin(PETSC_COMM_WORLD,1) > and > performing the file read with a loop. > > Any suggestions what could go wrong? > > thank you > > ando > > -- > /"\ Grassl Andreas > \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik > X against HTML email Technikerstr. 13 Zi 709 > / \ +43 (0)512 507 6091 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From enjoywm at cs.wm.edu Wed Apr 8 17:08:29 2009 From: enjoywm at cs.wm.edu (Yixun Liu) Date: Wed, 08 Apr 2009 18:08:29 -0400 Subject: The solver doesn't converge Message-ID: <49DD205D.8090007@cs.wm.edu> Hi, I build a finite element system and the related PETSc codes are following, KSPSetOperators(ksp,sparseMechanicalStiffnessMatrix,sparseMechanicalStiffnessMatrix,DIFFERENT_NONZERO_PATTERN); KSPSetTolerances(ksp,0.001,1.e-50,PETSC_DEFAULT, PETSC_DEFAULT); I output the solution and find its magnitude is about 1.0e+10. It's definitely wrong. The correct solution should be around 1 or 2. It seems the solver cannot converge. How do deal with this issue? Thanks. 
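The reply below suggests checking the assembled system with a direct solve first; a minimal sketch of that kind of check (the wrapper name is invented, and ksp is assumed to be set up with the operators as in the question):

#include "petscksp.h"

/* Solve and report why the iteration stopped.  Run with
     -ksp_type preonly -pc_type lu          (direct solve, checks the assembled system)
   or with
     -ksp_monitor -ksp_converged_reason     (to watch an iterative solver)            */
PetscErrorCode SolveAndCheck(KSP ksp,Vec b,Vec x)
{
  PetscErrorCode     ierr;
  KSPConvergedReason reason;
  PetscInt           its;

  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = KSPGetConvergedReason(ksp,&reason);CHKERRQ(ierr);
  ierr = KSPGetIterationNumber(ksp,&its);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"converged reason %d after %d iterations\n",
                     (int)reason,(int)its);CHKERRQ(ierr);
  return 0;
}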
Yixun From knepley at gmail.com Wed Apr 8 17:16:38 2009 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 8 Apr 2009 17:16:38 -0500 Subject: The solver doesn't converge In-Reply-To: <49DD205D.8090007@cs.wm.edu> References: <49DD205D.8090007@cs.wm.edu> Message-ID: On Wed, Apr 8, 2009 at 5:08 PM, Yixun Liu wrote: > Hi, > I build a finite element system and the related PETSc codes are following, > > > KSPSetOperators(ksp,sparseMechanicalStiffnessMatrix,sparseMechanicalStiffnessMatrix,DIFFERENT_NONZERO_PATTERN); > KSPSetTolerances(ksp,0.001,1.e-50,PETSC_DEFAULT, PETSC_DEFAULT); > > I output the solution and find its magnitude is about 1.0e+10. It's > definitely wrong. The correct solution should be around 1 or 2. It > seems the solver cannot converge. How do deal with this issue? Always start with LU to make sure your system is constructed correctly: -ksp_type preonly -pc_type lu Matt > > Thanks. > > Yixun > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Thu Apr 9 02:51:13 2009 From: zonexo at gmail.com (Wee-Beng TAY) Date: Thu, 09 Apr 2009 15:51:13 +0800 Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> Message-ID: <49DDA8F1.8090401@gmail.com> Hi, I just built petsc-3.0.0-p4 with mpich and after that, I reinstalled my windows xp and installed mpich in the same directory. I'm using CVF Now, I found that when I'm trying to compile my code, I got the error: :\cygwin\codes\MPICH\SDK\include\mpif.h(105) : Error: The attributes of this name conflict with those made accessible by a USE statement. [MPI_STATUS_SIZE] INTEGER MPI_STATUS_IGNORE(MPI_STATUS_SIZE) --------------------------------^ E:\cygwin\codes\MPICH\SDK\include\mpif.h(106) : Error: The attributes of this name conflict with those made accessible by a USE statement. [MPI_STATUS_SIZE] INTEGER MPI_STATUSES_IGNORE(MPI_STATUS_SIZE) ----------------------------------^ E:\cygwin\codes\petsc-3.0.0-p4\include/finclude/petsc.h(154) : Error: The attributes of this name conflict with those made accessible by a USE statement. [MPI_DOUBLE_PRECISION] parameter(MPIU_SCALAR = MPI_DOUBLE_PRECISION) ------------------------------^ Error executing df.exe. flux_area.obj - 3 error(s), 0 warning(s) My include option is : Debug/;$(PETSC_DIR);$(PETSC_DIR)\$(PETSC_ARCH)\;$(PETSC_DIR)\$(PETSC_ARCH)\include;$(PETSC_DIR)\include;E:\cygwin\codes\MPICH\SDK\include Interestingly, when I change my PETSC_DIR to petsc-dev, which correspond to an old build of petsc-2.3.3-p13, there is no problem. May I know what's wrong? Btw, I've converted my mpif.h from using "C" as comments to "!". Thank you very much and have a nice day! Yours sincerely, Wee-Beng Tay >> From katsura at yamaguchi-u.ac.jp Thu Apr 9 07:17:01 2009 From: katsura at yamaguchi-u.ac.jp (Hiroshi Katsurayama) Date: Thu, 9 Apr 2009 21:17:01 +0900 Subject: DeprecationWarning in ./config/configure.py Message-ID: <20090409211701.f50f7769.katsura@yamaguchi-u.ac.jp> Hello, I try to use PETSC-3.0.0-p4 on Ubuntu 9.04 linux (present beta version) whose python version is 2.6.1. When ./config/configure.py, the following there "DeprecationWarning"s appear before "TESTING:". 
================================================================================= Configuring PETSc to compile on your system ================================================================================= /home/hiroshi/local/petsc/petsc-3.0.0-p4/config/BuildSystem/config/compilers.py:7: DeprecationWarning: the sets module is deprecated import sets /home/hiroshi/local/petsc/petsc-3.0.0-p4/config/PETSc/package.py:7: DeprecationWarning: the md5 module is deprecated; use hashlib instead import md5 /home/hiroshi/local/petsc/petsc-3.0.0-p4/config/BuildSystem/script.py:101: DeprecationWarning: The popen2 module is deprecated. Use the subprocess module. import popen2 TESTING: The configure process after TESTING seems to finish successfully, and next "make all" and "make test" also seem to finish normally. Are these warnings ignorable? I attache the detailed log file of my configure (my_linux_gnu_intel.log). But these warnings also appear in simply "./config/configure.py" irrespective of compiler or blas. When I tried the configure on Ubuntu 8.10 whose python version is 2.5.2, these warning did not appear. Hence the latest python (version 2.6.1) on Ubuntu 9.04 BETA maybe cause these warnings. Thank you. Sincerely, Hiroshi Katsurayama -------------- next part -------------- A non-text attachment was scrubbed... Name: my_linux_gnu_intel.log Type: application/octet-stream Size: 45776 bytes Desc: not available URL: From balay at mcs.anl.gov Thu Apr 9 09:45:13 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 9 Apr 2009 09:45:13 -0500 (CDT) Subject: DeprecationWarning in ./config/configure.py In-Reply-To: <20090409211701.f50f7769.katsura@yamaguchi-u.ac.jp> References: <20090409211701.f50f7769.katsura@yamaguchi-u.ac.jp> Message-ID: > Are these warnings ignorable? Yes - you can saftely ignore these 'DeprecationWarning:' messages. It means : "the functionality works now - but it won't with newer versions". Python changed considerably between python-2 and the new python-3. Python-2.6 is supporsed to be a bridge that enables easier transition to python-3. We need to eventually get configure working with both python3 and python2 Satish On Thu, 9 Apr 2009, Hiroshi Katsurayama wrote: > Hello, > > I try to use PETSC-3.0.0-p4 on Ubuntu 9.04 linux (present beta version) > whose python version is 2.6.1. > > When ./config/configure.py, the following there "DeprecationWarning"s appear before "TESTING:". > > ================================================================================= > Configuring PETSc to compile on your system > ================================================================================= > /home/hiroshi/local/petsc/petsc-3.0.0-p4/config/BuildSystem/config/compilers.py:7: DeprecationWarning: the sets module is deprecated > import sets > /home/hiroshi/local/petsc/petsc-3.0.0-p4/config/PETSc/package.py:7: DeprecationWarning: the md5 module is deprecated; use hashlib instead > import md5 > /home/hiroshi/local/petsc/petsc-3.0.0-p4/config/BuildSystem/script.py:101: DeprecationWarning: The popen2 module is deprecated. Use the subprocess module. > import popen2 > > TESTING: > > The configure process after TESTING seems to finish successfully, > and next "make all" and "make test" also seem to finish normally. > > Are these warnings ignorable? > > I attache the detailed log file of my configure (my_linux_gnu_intel.log). > But these warnings also appear in > simply > > "./config/configure.py" > > irrespective of compiler or blas. 
> > When I tried the configure on Ubuntu 8.10 whose python version is 2.5.2, > these warning did not appear. > > Hence the latest python (version 2.6.1) on Ubuntu 9.04 BETA maybe cause these warnings. > > Thank you. > > Sincerely, > Hiroshi Katsurayama > From balay at mcs.anl.gov Thu Apr 9 10:07:16 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 9 Apr 2009 10:07:16 -0500 (CDT) Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: <49DDA8F1.8090401@gmail.com> References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> Message-ID: Do you get these errors with PETSc f90 examples? what 'USE statement' do you have in your code? I guess you'll have to check your code to see how you are using f90 modules/includes. If you can get a minimal compileable code that can reproduce this error - send us the code so that we can reproduce the issue Satish On Thu, 9 Apr 2009, Wee-Beng TAY wrote: > Hi, > > I just built petsc-3.0.0-p4 with mpich and after that, I reinstalled my > windows xp and installed mpich in the same directory. I'm using CVF > > Now, I found that when I'm trying to compile my code, I got the error: > > :\cygwin\codes\MPICH\SDK\include\mpif.h(105) : Error: The attributes of this > name conflict with those made accessible by a USE statement. > [MPI_STATUS_SIZE] > INTEGER MPI_STATUS_IGNORE(MPI_STATUS_SIZE) > --------------------------------^ > E:\cygwin\codes\MPICH\SDK\include\mpif.h(106) : Error: The attributes of this > name conflict with those made accessible by a USE statement. > [MPI_STATUS_SIZE] > INTEGER MPI_STATUSES_IGNORE(MPI_STATUS_SIZE) > ----------------------------------^ > E:\cygwin\codes\petsc-3.0.0-p4\include/finclude/petsc.h(154) : Error: The > attributes of this name conflict with those made accessible by a USE > statement. [MPI_DOUBLE_PRECISION] > parameter(MPIU_SCALAR = MPI_DOUBLE_PRECISION) > ------------------------------^ > Error executing df.exe. > > flux_area.obj - 3 error(s), 0 warning(s) > > My include option is : > > Debug/;$(PETSC_DIR);$(PETSC_DIR)\$(PETSC_ARCH)\;$(PETSC_DIR)\$(PETSC_ARCH)\include;$(PETSC_DIR)\include;E:\cygwin\codes\MPICH\SDK\include > > > Interestingly, when I change my PETSC_DIR to petsc-dev, which correspond to an > old build of petsc-2.3.3-p13, there is no problem. > > May I know what's wrong? Btw, I've converted my mpif.h from using "C" as > comments to "!". > > Thank you very much and have a nice day! > > Yours sincerely, > > Wee-Beng Tay > > > > > > From katsura at yamaguchi-u.ac.jp Thu Apr 9 19:57:57 2009 From: katsura at yamaguchi-u.ac.jp (Hiroshi Katsurayama) Date: Fri, 10 Apr 2009 09:57:57 +0900 Subject: DeprecationWarning in ./config/configure.py In-Reply-To: References: <20090409211701.f50f7769.katsura@yamaguchi-u.ac.jp> Message-ID: <20090410095757.8c57256d.katsura@yamaguchi-u.ac.jp> Dear Satish > Yes - you can saftely ignore these 'DeprecationWarning:' messages. It > means : "the functionality works now - but it won't with newer > versions". Thank you very much for the detailed information about Python status. I was relieved to hear the cause of the warnings. Sincerely, Hiroshi Katsurayama On Thu, 9 Apr 2009 09:45:13 -0500 (CDT) Satish Balay wrote: > > Are these warnings ignorable? > > Yes - you can saftely ignore these 'DeprecationWarning:' messages. It > means : "the functionality works now - but it won't with newer > versions". 
> 
> Python changed considerably between python-2 and the new
> python-3. Python-2.6 is supposed to be a bridge that enables easier
> transition to python-3.
> 
> We need to eventually get configure working with both python3 and python2
> 
> Satish

-- 
Hiroshi Katsurayama
TEL : 0836-85-9108  FAX : 0836-85-9101
E-mail: katsura at yamaguchi-u.ac.jp

From zonexo at gmail.com  Sun Apr 12 06:14:58 2009
From: zonexo at gmail.com (Wee-Beng TAY)
Date: Sun, 12 Apr 2009 19:14:58 +0800
Subject: MPICH error when using petsc-3.0.0-p4
In-Reply-To: 
References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com>
Message-ID: <49E1CD32.5090602@gmail.com>

Hi Satish,

I am now using the PETSc ex2f example. I tried "make ex2f" and managed to build 
and run the file. Then I used the options as a reference for my Visual Fortran 
project, and it worked. 
The options are: /compile_only /debug:full /include:"Debug/" /include:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include" /include:"d:\cygwin\codes\petsc-3.0.0-p4\include" /include:"E:\cygwin\codes\MPICH\SDK\include" /nologo /threads /warn:nofileopt /module:"Debug/" /object:"Debug/" /pdbfile:"Debug/DF60.PDB" /fpp:"/m" and ws2_32.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib libpetscts.lib libpetscsnes.lib libpetscksp.lib libpetscdm.lib libpetscmat.lib libpetscvec.lib libpetsc.lib mpich.lib libfblas.lib libflapack.lib /nologo /subsystem:console /incremental:yes /pdb:"Debug/ex2f.pdb" /debug /machine:I386 /out:"Debug/ex2f.exe" /pdbtype:sept /libpath:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\lib" /libpath:"E:\cygwin\codes\MPICH\SDK\lib" Now I add my own file called global.F and tried to compile, using the same options.But now it failed. The error is: --------------------Configuration: ex2f - Win32 Debug-------------------- Compiling Fortran... ------------------------------------------------------------------------ D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F: 7: #include "include/finclude/petsc.h" ^ ** error on line 7 in D:\cygwin\codes\pets: cannot find file specified in include directive. 8: #include "include/finclude/petscvec.h" ^ ** error on line 8 in D:\cygwin\codes\pets: cannot find file specified in include directive. 9: #include "include/finclude/petscmat.h" ^ ** error on line 9 in D:\cygwin\codes\pets: cannot find file specified in include directive. 10: #include "include/finclude/petscksp.h" ^ ** error on line 10 in D:\cygwin\codes\pets: cannot find file specified in include directive. 11: #include "include/finclude/petscpc.h" ^ ** error on line 11 in D:\cygwin\codes\pets: cannot find file specified in include directive. 12: #include "include/finclude/petscsys.h" ^ ** error on line 12 in D:\cygwin\codes\pets: cannot find file specified in include directive. 97: #include "include/finclude/petsc.h" ^ ** error on line 97 in D:\cygwin\codes\pets: cannot find file specified in include directive. 98: #include "include/finclude/petscvec.h" ^ ** error on line 98 in D:\cygwin\codes\pets: cannot find file specified in include directive. 99: #include "include/finclude/petscmat.h" ^ ** error on line 99 in D:\cygwin\codes\pets: cannot find file specified in include directive. 100: #include "include/finclude/petscksp.h" ^ ** error on line 100 in D:\cygwin\codes\pets: cannot find file specified in include directive. 101: #include "include/finclude/petscpc.h" ^ ** error on line 101 in D:\cygwin\codes\pets: cannot find file specified in include directive. 102: #include "include/finclude/petscsys.h" ^ ** error on line 102 in D:\cygwin\codes\pets: cannot find file specified in include directive. global.i D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(65) : Error: Syntax error, found ',' when expecting one of: ( : % . = => Vec xx,b_rhs,xx_uv,b_rhs_uv -----------------^ D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(67) : Error: Syntax error, found ',' when expecting one of: ( : % . = => Mat A_mat,A_mat_uv ! /* sparse matrix */ --------------------^ D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(69) : Error: Syntax error, found ',' when expecting one of: ( : % . 
= => KSP ksp,ksp_uv !/* linear solver context */ -----------------^ D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(71) : Error: Syntax error, found ',' when expecting one of: ( : % . = => PC pc,pc_uv ------------------^ D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(73) : Error: Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . = => PCType ptype -------------------------^ D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(75) : Error: Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . = => KSPType ksptype.... I can get it to compile if I use : Debug/;d:\cygwin\codes\petsc-3.0.0-p4;d:\cygwin\codes\petsc-3.0.0-p4\include;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;E:\cygwin\codes\MPICH\SDK\include Compared to the original one above which is: Debug/;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;d:\cygwin\codes\petsc-3.0.0-p4\include;E:\cygwin\codes\MPICH\SDK\include Hence, there is an additional "d:\cygwin\codes\petsc-3.0.0-p4" I have attached my global.F. I wonder if this is the cause of the MPICH error. Currently, I have removed all other f90 files, except for global.F and flux_area.f90. It's when I 'm compiling flux_area.f90 that I got the MPI error stated below. I got the same error if I compile under cygwin using the same parameters. Hope you can help. Thank you very much and have a nice day! Yours sincerely, Wee-Beng Tay Satish Balay wrote: > Do you get these errors with PETSc f90 examples? > > what 'USE statement' do you have in your code? > > I guess you'll have to check your code to see how you are using f90 > modules/includes. > > If you can get a minimal compileable code that can reproduce this > error - send us the code so that we can reproduce the issue > > Satish > > On Thu, 9 Apr 2009, Wee-Beng TAY wrote: > > >> Hi, >> >> I just built petsc-3.0.0-p4 with mpich and after that, I reinstalled my >> windows xp and installed mpich in the same directory. I'm using CVF >> >> Now, I found that when I'm trying to compile my code, I got the error: >> >> :\cygwin\codes\MPICH\SDK\include\mpif.h(105) : Error: The attributes of this >> name conflict with those made accessible by a USE statement. >> [MPI_STATUS_SIZE] >> INTEGER MPI_STATUS_IGNORE(MPI_STATUS_SIZE) >> --------------------------------^ >> E:\cygwin\codes\MPICH\SDK\include\mpif.h(106) : Error: The attributes of this >> name conflict with those made accessible by a USE statement. >> [MPI_STATUS_SIZE] >> INTEGER MPI_STATUSES_IGNORE(MPI_STATUS_SIZE) >> ----------------------------------^ >> E:\cygwin\codes\petsc-3.0.0-p4\include/finclude/petsc.h(154) : Error: The >> attributes of this name conflict with those made accessible by a USE >> statement. [MPI_DOUBLE_PRECISION] >> parameter(MPIU_SCALAR = MPI_DOUBLE_PRECISION) >> ------------------------------^ >> Error executing df.exe. >> >> flux_area.obj - 3 error(s), 0 warning(s) >> >> My include option is : >> >> Debug/;$(PETSC_DIR);$(PETSC_DIR)\$(PETSC_ARCH)\;$(PETSC_DIR)\$(PETSC_ARCH)\include;$(PETSC_DIR)\include;E:\cygwin\codes\MPICH\SDK\include >> >> >> Interestingly, when I change my PETSC_DIR to petsc-dev, which correspond to an >> old build of petsc-2.3.3-p13, there is no problem. >> >> May I know what's wrong? Btw, I've converted my mpif.h from using "C" as >> comments to "!". >> >> Thank you very much and have a nice day! >> >> Yours sincerely, >> >> Wee-Beng Tay >> >> >> > > > -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: flux_area.f90 URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: global.F URL: From balay at mcs.anl.gov Sun Apr 12 10:21:58 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Sun, 12 Apr 2009 10:21:58 -0500 (CDT) Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: <49E1CD32.5090602@gmail.com> References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> Message-ID: 2 changes you have to make for 3.0.0 1. "include/finclude.. -> "finclude..." 2. PETSC_AVOID_DECLARATIONS should be removed - and use petscdef.h equivalnet files. i.e change: #define PETSC_AVOID_DECLARATIONS #include "include/finclude/petsc.h" #include "include/finclude/petscvec.h" #include "include/finclude/petscmat.h" #include "include/finclude/petscksp.h" #include "include/finclude/petscpc.h" #undef PETSC_AVOID_DECLARATIONS to: #include "finclude/petscdef.h" #include "finclude/petscvecdef.h" #include "finclude/petscmatdef.h" #include "finclude/petsckspdef.h" #include "finclude/petscpcdef.h" Satish On Sun, 12 Apr 2009, Wee-Beng TAY wrote: > Hi Satish, > > I am now using the PETSc ex2f example. I tried "make ex2f" and manage to build > and run the file. Then I used the options as a reference for my visual fortran > and it worked. > > The options are: > > /compile_only /debug:full /include:"Debug/" > /include:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include" > /include:"d:\cygwin\codes\petsc-3.0.0-p4\include" > /include:"E:\cygwin\codes\MPICH\SDK\include" /nologo /threads /warn:nofileopt > /module:"Debug/" /object:"Debug/" /pdbfile:"Debug/DF60.PDB" /fpp:"/m" > > and > > ws2_32.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib > advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib > odbccp32.lib libpetscts.lib libpetscsnes.lib libpetscksp.lib libpetscdm.lib > libpetscmat.lib libpetscvec.lib libpetsc.lib mpich.lib libfblas.lib > libflapack.lib /nologo /subsystem:console /incremental:yes > /pdb:"Debug/ex2f.pdb" /debug /machine:I386 /out:"Debug/ex2f.exe" /pdbtype:sept > /libpath:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\lib" > /libpath:"E:\cygwin\codes\MPICH\SDK\lib" > > Now I add my own file called global.F and tried to compile, using the same > options.But now it failed. The error is: > > --------------------Configuration: ex2f - Win32 Debug-------------------- > Compiling Fortran... > ------------------------------------------------------------------------ > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F: > 7: #include "include/finclude/petsc.h" > ^ > ** error on line 7 in D:\cygwin\codes\pets: cannot find file specified > in include directive. > 8: #include "include/finclude/petscvec.h" > ^ > ** error on line 8 in D:\cygwin\codes\pets: cannot find file specified > in include directive. > 9: #include "include/finclude/petscmat.h" > ^ > ** error on line 9 in D:\cygwin\codes\pets: cannot find file specified > in include directive. > 10: #include "include/finclude/petscksp.h" > ^ > ** error on line 10 in D:\cygwin\codes\pets: cannot find file > specified in include directive. > 11: #include "include/finclude/petscpc.h" > ^ > ** error on line 11 in D:\cygwin\codes\pets: cannot find file > specified in include directive. 
> 12: #include "include/finclude/petscsys.h" > ^ > ** error on line 12 in D:\cygwin\codes\pets: cannot find file > specified in include directive. > 97: #include "include/finclude/petsc.h" > ^ > ** error on line 97 in D:\cygwin\codes\pets: cannot find file > specified in include directive. > 98: #include "include/finclude/petscvec.h" > ^ > ** error on line 98 in D:\cygwin\codes\pets: cannot find file > specified in include directive. > 99: #include "include/finclude/petscmat.h" > ^ > ** error on line 99 in D:\cygwin\codes\pets: cannot find file > specified in include directive. > 100: #include "include/finclude/petscksp.h" > ^ > ** error on line 100 in D:\cygwin\codes\pets: cannot find file > specified in include directive. > 101: #include "include/finclude/petscpc.h" > ^ > ** error on line 101 in D:\cygwin\codes\pets: cannot find file > specified in include directive. > 102: #include "include/finclude/petscsys.h" > ^ > ** error on line 102 in D:\cygwin\codes\pets: cannot find file > specified in include directive. > global.i > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(65) : Error: > Syntax error, found ',' when expecting one of: ( : % . = => > Vec xx,b_rhs,xx_uv,b_rhs_uv > -----------------^ > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(67) : Error: > Syntax error, found ',' when expecting one of: ( : % . = => > Mat A_mat,A_mat_uv ! /* sparse matrix */ > --------------------^ > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(69) : Error: > Syntax error, found ',' when expecting one of: ( : % . = => > KSP ksp,ksp_uv !/* linear solver context */ > -----------------^ > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(71) : Error: > Syntax error, found ',' when expecting one of: ( : % . = => > PC pc,pc_uv > ------------------^ > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(73) : Error: > Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . = => > PCType ptype > -------------------------^ > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(75) : Error: > Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . = => > KSPType ksptype.... > > > I can get it to compile if I use : > > Debug/;d:\cygwin\codes\petsc-3.0.0-p4;d:\cygwin\codes\petsc-3.0.0-p4\include;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;E:\cygwin\codes\MPICH\SDK\include > > Compared to the original one above which is: > > Debug/;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;d:\cygwin\codes\petsc-3.0.0-p4\include;E:\cygwin\codes\MPICH\SDK\include > > Hence, there is an additional "d:\cygwin\codes\petsc-3.0.0-p4" > > I have attached my global.F. I wonder if this is the cause of the MPICH error. > > Currently, I have removed all other f90 files, except for global.F and > flux_area.f90. It's when I 'm compiling flux_area.f90 that I got the MPI error > stated below. I got the same error if I compile under cygwin using the same > parameters. > > Hope you can help. > > Thank you very much and have a nice day! > > Yours sincerely, > > Wee-Beng Tay > > > > Satish Balay wrote: > > Do you get these errors with PETSc f90 examples? > > > > what 'USE statement' do you have in your code? > > > > I guess you'll have to check your code to see how you are using f90 > > modules/includes. 
> > > > If you can get a minimal compileable code that can reproduce this > > error - send us the code so that we can reproduce the issue > > > > Satish > > > > On Thu, 9 Apr 2009, Wee-Beng TAY wrote: > > > > > > > Hi, > > > > > > I just built petsc-3.0.0-p4 with mpich and after that, I reinstalled my > > > windows xp and installed mpich in the same directory. I'm using CVF > > > > > > Now, I found that when I'm trying to compile my code, I got the error: > > > > > > :\cygwin\codes\MPICH\SDK\include\mpif.h(105) : Error: The attributes of > > > this > > > name conflict with those made accessible by a USE statement. > > > [MPI_STATUS_SIZE] > > > INTEGER MPI_STATUS_IGNORE(MPI_STATUS_SIZE) > > > --------------------------------^ > > > E:\cygwin\codes\MPICH\SDK\include\mpif.h(106) : Error: The attributes of > > > this > > > name conflict with those made accessible by a USE statement. > > > [MPI_STATUS_SIZE] > > > INTEGER MPI_STATUSES_IGNORE(MPI_STATUS_SIZE) > > > ----------------------------------^ > > > E:\cygwin\codes\petsc-3.0.0-p4\include/finclude/petsc.h(154) : Error: The > > > attributes of this name conflict with those made accessible by a USE > > > statement. [MPI_DOUBLE_PRECISION] > > > parameter(MPIU_SCALAR = MPI_DOUBLE_PRECISION) > > > ------------------------------^ > > > Error executing df.exe. > > > > > > flux_area.obj - 3 error(s), 0 warning(s) > > > > > > My include option is : > > > > > > Debug/;$(PETSC_DIR);$(PETSC_DIR)\$(PETSC_ARCH)\;$(PETSC_DIR)\$(PETSC_ARCH)\include;$(PETSC_DIR)\include;E:\cygwin\codes\MPICH\SDK\include > > > > > > > > > Interestingly, when I change my PETSC_DIR to petsc-dev, which correspond > > > to an > > > old build of petsc-2.3.3-p13, there is no problem. > > > > > > May I know what's wrong? Btw, I've converted my mpif.h from using "C" as > > > comments to "!". > > > > > > Thank you very much and have a nice day! > > > > > > Yours sincerely, > > > > > > Wee-Beng Tay > > > > > > > > > > > > > > > From zonexo at gmail.com Sun Apr 12 10:48:27 2009 From: zonexo at gmail.com (Wee-Beng TAY) Date: Sun, 12 Apr 2009 23:48:27 +0800 Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> Message-ID: <49E20D4B.8050502@gmail.com> Hi Satish, I just changed the global.F file as you have instructed. It's now: #include "finclude/petscdef.h" #include "finclude/petscvecdef.h" #include "finclude/petscmatdef.h" #include "finclude/petsckspdef.h" #include "finclude/petscpcdef.h" #include "finclude/petscsysdef.h" Now I'm left with 3 errors: Compiling Fortran... global.i E:\Myprojects2\lid_cavity_2d\global.F(316) : Error: This name does not have a type, and must have an explicit type. [MPI_COMM_WORLD] call MatCreateMPIAIJ(MPI_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,NN,NN,11,PETSC_NULL_INTEGER,11,PETSC_NULL_INTEGER,A_mat_uv,ierr) -----------------------------^ E:\Myprojects2\lid_cavity_2d\global.F(316) : Error: This name does not have a type, and must have an explicit type. [PETSC_DECIDE] call MatCreateMPIAIJ(MPI_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,NN,NN,11,PETSC_NULL_INTEGER,11,PETSC_NULL_INTEGER,A_mat_uv,ierr) --------------------------------------------^ E:\Myprojects2\lid_cavity_2d\global.F(316) : Error: This name does not have a type, and must have an explicit type. 
[PETSC_NULL_INTEGER] call MatCreateMPIAIJ(MPI_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,NN,NN,11,PETSC_NULL_INTEGER,11,PETSC_NULL_INTEGER,A_mat_uv,ierr) -------------------------------------------------------------------------------^ Error executing df.exe. global.obj - 3 error(s), 0 warning(s) I can't test flux_area.f90 since it's dependent on global.F Thank you very much and have a nice day! Yours sincerely, Wee-Beng Tay Satish Balay wrote: > 2 changes you have to make for 3.0.0 > > 1. "include/finclude.. -> "finclude..." > > 2. PETSC_AVOID_DECLARATIONS should be removed - and use petscdef.h > equivalnet files. > i.e > > change: > #define PETSC_AVOID_DECLARATIONS > #include "include/finclude/petsc.h" > #include "include/finclude/petscvec.h" > #include "include/finclude/petscmat.h" > #include "include/finclude/petscksp.h" > #include "include/finclude/petscpc.h" > #undef PETSC_AVOID_DECLARATIONS > > to: > #include "finclude/petscdef.h" > #include "finclude/petscvecdef.h" > #include "finclude/petscmatdef.h" > #include "finclude/petsckspdef.h" > #include "finclude/petscpcdef.h" > > Satish > > On Sun, 12 Apr 2009, Wee-Beng TAY wrote: > > >> Hi Satish, >> >> I am now using the PETSc ex2f example. I tried "make ex2f" and manage to build >> and run the file. Then I used the options as a reference for my visual fortran >> and it worked. >> >> The options are: >> >> /compile_only /debug:full /include:"Debug/" >> /include:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include" >> /include:"d:\cygwin\codes\petsc-3.0.0-p4\include" >> /include:"E:\cygwin\codes\MPICH\SDK\include" /nologo /threads /warn:nofileopt >> /module:"Debug/" /object:"Debug/" /pdbfile:"Debug/DF60.PDB" /fpp:"/m" >> >> and >> >> ws2_32.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib >> advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib >> odbccp32.lib libpetscts.lib libpetscsnes.lib libpetscksp.lib libpetscdm.lib >> libpetscmat.lib libpetscvec.lib libpetsc.lib mpich.lib libfblas.lib >> libflapack.lib /nologo /subsystem:console /incremental:yes >> /pdb:"Debug/ex2f.pdb" /debug /machine:I386 /out:"Debug/ex2f.exe" /pdbtype:sept >> /libpath:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\lib" >> /libpath:"E:\cygwin\codes\MPICH\SDK\lib" >> >> Now I add my own file called global.F and tried to compile, using the same >> options.But now it failed. The error is: >> >> --------------------Configuration: ex2f - Win32 Debug-------------------- >> Compiling Fortran... >> ------------------------------------------------------------------------ >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F: >> 7: #include "include/finclude/petsc.h" >> ^ >> ** error on line 7 in D:\cygwin\codes\pets: cannot find file specified >> in include directive. >> 8: #include "include/finclude/petscvec.h" >> ^ >> ** error on line 8 in D:\cygwin\codes\pets: cannot find file specified >> in include directive. >> 9: #include "include/finclude/petscmat.h" >> ^ >> ** error on line 9 in D:\cygwin\codes\pets: cannot find file specified >> in include directive. >> 10: #include "include/finclude/petscksp.h" >> ^ >> ** error on line 10 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 11: #include "include/finclude/petscpc.h" >> ^ >> ** error on line 11 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 12: #include "include/finclude/petscsys.h" >> ^ >> ** error on line 12 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. 
>> 97: #include "include/finclude/petsc.h" >> ^ >> ** error on line 97 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 98: #include "include/finclude/petscvec.h" >> ^ >> ** error on line 98 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 99: #include "include/finclude/petscmat.h" >> ^ >> ** error on line 99 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 100: #include "include/finclude/petscksp.h" >> ^ >> ** error on line 100 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 101: #include "include/finclude/petscpc.h" >> ^ >> ** error on line 101 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 102: #include "include/finclude/petscsys.h" >> ^ >> ** error on line 102 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> global.i >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(65) : Error: >> Syntax error, found ',' when expecting one of: ( : % . = => >> Vec xx,b_rhs,xx_uv,b_rhs_uv >> -----------------^ >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(67) : Error: >> Syntax error, found ',' when expecting one of: ( : % . = => >> Mat A_mat,A_mat_uv ! /* sparse matrix */ >> --------------------^ >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(69) : Error: >> Syntax error, found ',' when expecting one of: ( : % . = => >> KSP ksp,ksp_uv !/* linear solver context */ >> -----------------^ >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(71) : Error: >> Syntax error, found ',' when expecting one of: ( : % . = => >> PC pc,pc_uv >> ------------------^ >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(73) : Error: >> Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . = => >> PCType ptype >> -------------------------^ >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(75) : Error: >> Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . = => >> KSPType ksptype.... >> >> >> I can get it to compile if I use : >> >> Debug/;d:\cygwin\codes\petsc-3.0.0-p4;d:\cygwin\codes\petsc-3.0.0-p4\include;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;E:\cygwin\codes\MPICH\SDK\include >> >> Compared to the original one above which is: >> >> Debug/;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;d:\cygwin\codes\petsc-3.0.0-p4\include;E:\cygwin\codes\MPICH\SDK\include >> >> Hence, there is an additional "d:\cygwin\codes\petsc-3.0.0-p4" >> >> I have attached my global.F. I wonder if this is the cause of the MPICH error. >> >> Currently, I have removed all other f90 files, except for global.F and >> flux_area.f90. It's when I 'm compiling flux_area.f90 that I got the MPI error >> stated below. I got the same error if I compile under cygwin using the same >> parameters. >> >> Hope you can help. >> >> Thank you very much and have a nice day! >> >> Yours sincerely, >> >> Wee-Beng Tay >> >> >> >> Satish Balay wrote: >> >>> Do you get these errors with PETSc f90 examples? >>> >>> what 'USE statement' do you have in your code? >>> >>> I guess you'll have to check your code to see how you are using f90 >>> modules/includes. 
>>> >>> If you can get a minimal compileable code that can reproduce this >>> error - send us the code so that we can reproduce the issue >>> >>> Satish >>> >>> On Thu, 9 Apr 2009, Wee-Beng TAY wrote: >>> >>> >>> >>>> Hi, >>>> >>>> I just built petsc-3.0.0-p4 with mpich and after that, I reinstalled my >>>> windows xp and installed mpich in the same directory. I'm using CVF >>>> >>>> Now, I found that when I'm trying to compile my code, I got the error: >>>> >>>> :\cygwin\codes\MPICH\SDK\include\mpif.h(105) : Error: The attributes of >>>> this >>>> name conflict with those made accessible by a USE statement. >>>> [MPI_STATUS_SIZE] >>>> INTEGER MPI_STATUS_IGNORE(MPI_STATUS_SIZE) >>>> --------------------------------^ >>>> E:\cygwin\codes\MPICH\SDK\include\mpif.h(106) : Error: The attributes of >>>> this >>>> name conflict with those made accessible by a USE statement. >>>> [MPI_STATUS_SIZE] >>>> INTEGER MPI_STATUSES_IGNORE(MPI_STATUS_SIZE) >>>> ----------------------------------^ >>>> E:\cygwin\codes\petsc-3.0.0-p4\include/finclude/petsc.h(154) : Error: The >>>> attributes of this name conflict with those made accessible by a USE >>>> statement. [MPI_DOUBLE_PRECISION] >>>> parameter(MPIU_SCALAR = MPI_DOUBLE_PRECISION) >>>> ------------------------------^ >>>> Error executing df.exe. >>>> >>>> flux_area.obj - 3 error(s), 0 warning(s) >>>> >>>> My include option is : >>>> >>>> Debug/;$(PETSC_DIR);$(PETSC_DIR)\$(PETSC_ARCH)\;$(PETSC_DIR)\$(PETSC_ARCH)\include;$(PETSC_DIR)\include;E:\cygwin\codes\MPICH\SDK\include >>>> >>>> >>>> Interestingly, when I change my PETSC_DIR to petsc-dev, which correspond >>>> to an >>>> old build of petsc-2.3.3-p13, there is no problem. >>>> >>>> May I know what's wrong? Btw, I've converted my mpif.h from using "C" as >>>> comments to "!". >>>> >>>> Thank you very much and have a nice day! >>>> >>>> Yours sincerely, >>>> >>>> Wee-Beng Tay >>>> >>>> >>>> >>>> >>> >>> > > > From zonexo at gmail.com Sun Apr 12 11:02:52 2009 From: zonexo at gmail.com (Wee-Beng TAY) Date: Mon, 13 Apr 2009 00:02:52 +0800 Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> Message-ID: <49E210AC.6000502@gmail.com> Hi Satish, I now used #include "finclude/petsc.h" #include "finclude/petscvec.h" #include "finclude/petscmat.h" #include "finclude/petscksp.h" #include "finclude/petscpc.h" #include "finclude/petscsys.h" for global.F and #include "finclude/petscdef.h" #include "finclude/petscvecdef.h" #include "finclude/petscmatdef.h" #include "finclude/petsckspdef.h" #include "finclude/petscpcdef.h" for flux_area.f90 and it's working now. Can you explain what's happening? Is this the correct way then? Thank you very much and have a nice day! Yours sincerely, Wee-Beng Tay Satish Balay wrote: > 2 changes you have to make for 3.0.0 > > 1. "include/finclude.. -> "finclude..." > > 2. PETSC_AVOID_DECLARATIONS should be removed - and use petscdef.h > equivalnet files. 
> i.e > > change: > #define PETSC_AVOID_DECLARATIONS > #include "include/finclude/petsc.h" > #include "include/finclude/petscvec.h" > #include "include/finclude/petscmat.h" > #include "include/finclude/petscksp.h" > #include "include/finclude/petscpc.h" > #undef PETSC_AVOID_DECLARATIONS > > to: > #include "finclude/petscdef.h" > #include "finclude/petscvecdef.h" > #include "finclude/petscmatdef.h" > #include "finclude/petsckspdef.h" > #include "finclude/petscpcdef.h" > > Satish > > On Sun, 12 Apr 2009, Wee-Beng TAY wrote: > > >> Hi Satish, >> >> I am now using the PETSc ex2f example. I tried "make ex2f" and manage to build >> and run the file. Then I used the options as a reference for my visual fortran >> and it worked. >> >> The options are: >> >> /compile_only /debug:full /include:"Debug/" >> /include:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include" >> /include:"d:\cygwin\codes\petsc-3.0.0-p4\include" >> /include:"E:\cygwin\codes\MPICH\SDK\include" /nologo /threads /warn:nofileopt >> /module:"Debug/" /object:"Debug/" /pdbfile:"Debug/DF60.PDB" /fpp:"/m" >> >> and >> >> ws2_32.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib >> advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib >> odbccp32.lib libpetscts.lib libpetscsnes.lib libpetscksp.lib libpetscdm.lib >> libpetscmat.lib libpetscvec.lib libpetsc.lib mpich.lib libfblas.lib >> libflapack.lib /nologo /subsystem:console /incremental:yes >> /pdb:"Debug/ex2f.pdb" /debug /machine:I386 /out:"Debug/ex2f.exe" /pdbtype:sept >> /libpath:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\lib" >> /libpath:"E:\cygwin\codes\MPICH\SDK\lib" >> >> Now I add my own file called global.F and tried to compile, using the same >> options.But now it failed. The error is: >> >> --------------------Configuration: ex2f - Win32 Debug-------------------- >> Compiling Fortran... >> ------------------------------------------------------------------------ >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F: >> 7: #include "include/finclude/petsc.h" >> ^ >> ** error on line 7 in D:\cygwin\codes\pets: cannot find file specified >> in include directive. >> 8: #include "include/finclude/petscvec.h" >> ^ >> ** error on line 8 in D:\cygwin\codes\pets: cannot find file specified >> in include directive. >> 9: #include "include/finclude/petscmat.h" >> ^ >> ** error on line 9 in D:\cygwin\codes\pets: cannot find file specified >> in include directive. >> 10: #include "include/finclude/petscksp.h" >> ^ >> ** error on line 10 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 11: #include "include/finclude/petscpc.h" >> ^ >> ** error on line 11 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 12: #include "include/finclude/petscsys.h" >> ^ >> ** error on line 12 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 97: #include "include/finclude/petsc.h" >> ^ >> ** error on line 97 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 98: #include "include/finclude/petscvec.h" >> ^ >> ** error on line 98 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 99: #include "include/finclude/petscmat.h" >> ^ >> ** error on line 99 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 100: #include "include/finclude/petscksp.h" >> ^ >> ** error on line 100 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. 
>> 101: #include "include/finclude/petscpc.h" >> ^ >> ** error on line 101 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> 102: #include "include/finclude/petscsys.h" >> ^ >> ** error on line 102 in D:\cygwin\codes\pets: cannot find file >> specified in include directive. >> global.i >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(65) : Error: >> Syntax error, found ',' when expecting one of: ( : % . = => >> Vec xx,b_rhs,xx_uv,b_rhs_uv >> -----------------^ >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(67) : Error: >> Syntax error, found ',' when expecting one of: ( : % . = => >> Mat A_mat,A_mat_uv ! /* sparse matrix */ >> --------------------^ >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(69) : Error: >> Syntax error, found ',' when expecting one of: ( : % . = => >> KSP ksp,ksp_uv !/* linear solver context */ >> -----------------^ >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(71) : Error: >> Syntax error, found ',' when expecting one of: ( : % . = => >> PC pc,pc_uv >> ------------------^ >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(73) : Error: >> Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . = => >> PCType ptype >> -------------------------^ >> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(75) : Error: >> Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . = => >> KSPType ksptype.... >> >> >> I can get it to compile if I use : >> >> Debug/;d:\cygwin\codes\petsc-3.0.0-p4;d:\cygwin\codes\petsc-3.0.0-p4\include;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;E:\cygwin\codes\MPICH\SDK\include >> >> Compared to the original one above which is: >> >> Debug/;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;d:\cygwin\codes\petsc-3.0.0-p4\include;E:\cygwin\codes\MPICH\SDK\include >> >> Hence, there is an additional "d:\cygwin\codes\petsc-3.0.0-p4" >> >> I have attached my global.F. I wonder if this is the cause of the MPICH error. >> >> Currently, I have removed all other f90 files, except for global.F and >> flux_area.f90. It's when I 'm compiling flux_area.f90 that I got the MPI error >> stated below. I got the same error if I compile under cygwin using the same >> parameters. >> >> Hope you can help. >> >> Thank you very much and have a nice day! >> >> Yours sincerely, >> >> Wee-Beng Tay >> >> >> >> Satish Balay wrote: >> >>> Do you get these errors with PETSc f90 examples? >>> >>> what 'USE statement' do you have in your code? >>> >>> I guess you'll have to check your code to see how you are using f90 >>> modules/includes. >>> >>> If you can get a minimal compileable code that can reproduce this >>> error - send us the code so that we can reproduce the issue >>> >>> Satish >>> >>> On Thu, 9 Apr 2009, Wee-Beng TAY wrote: >>> >>> >>> >>>> Hi, >>>> >>>> I just built petsc-3.0.0-p4 with mpich and after that, I reinstalled my >>>> windows xp and installed mpich in the same directory. I'm using CVF >>>> >>>> Now, I found that when I'm trying to compile my code, I got the error: >>>> >>>> :\cygwin\codes\MPICH\SDK\include\mpif.h(105) : Error: The attributes of >>>> this >>>> name conflict with those made accessible by a USE statement. 
>>>> [MPI_STATUS_SIZE]
>>>> INTEGER MPI_STATUS_IGNORE(MPI_STATUS_SIZE)
>>>> --------------------------------^
>>>> E:\cygwin\codes\MPICH\SDK\include\mpif.h(106) : Error: The attributes of
>>>> this
>>>> name conflict with those made accessible by a USE statement.
>>>> [MPI_STATUS_SIZE]
>>>> INTEGER MPI_STATUSES_IGNORE(MPI_STATUS_SIZE)
>>>> ----------------------------------^
>>>> E:\cygwin\codes\petsc-3.0.0-p4\include/finclude/petsc.h(154) : Error:
>>>> The
>>>> attributes of this name conflict with those made accessible by a USE
>>>> statement. [MPI_DOUBLE_PRECISION]
>>>> parameter(MPIU_SCALAR = MPI_DOUBLE_PRECISION)
>>>> ------------------------------^
>>>> Error executing df.exe.
>>>> 
>>>> flux_area.obj - 3 error(s), 0 warning(s)
>>>> 
>>>> My include option is :
>>>> 
>>>> Debug/;$(PETSC_DIR);$(PETSC_DIR)\$(PETSC_ARCH)\;$(PETSC_DIR)\$(PETSC_ARCH)\include;$(PETSC_DIR)\include;E:\cygwin\codes\MPICH\SDK\include
>>>> 
>>>> 
>>>> Interestingly, when I change my PETSC_DIR to petsc-dev, which correspond
>>>> to an
>>>> old build of petsc-2.3.3-p13, there is no problem.
>>>> 
>>>> May I know what's wrong? Btw, I've converted my mpif.h from using "C"
>>>> as
>>>> comments to "!".
>>>> 
>>>> Thank you very much and have a nice day!
>>>> 
>>>> Yours sincerely,
>>>> 
>>>> Wee-Beng Tay
>>>> 
>>>> 
>>> 
>>> 
> 
> 

From balay at mcs.anl.gov  Sun Apr 12 11:13:48 2009
From: balay at mcs.anl.gov (Satish Balay)
Date: Sun, 12 Apr 2009 11:13:48 -0500 (CDT)
Subject: MPICH error when using petsc-3.0.0-p4
In-Reply-To: <49E210AC.6000502@gmail.com>
References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> <49E210AC.6000502@gmail.com>
Message-ID: 

Yes - all include statements in both source files should start
with "finclude/..." [so that -Id:\cygwin\codes\petsc-3.0.0-p4 is not
needed].

And where you needed PETSC_AVOID_DECLARATIONS - you need to use the
'def.h' equivalent includes. The def.h files have only the
definitions [so the PETSC_AVOID_DECLARATIONS flag is no longer
needed/used]. You need only the definitions in the data section of
'module flux_area'. [All subroutines should use the regular includes]

Satish
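A minimal sketch of the layout Satish describes, not code from the thread: the module
name is invented, the object names are taken from global.F, and a preprocessed .F/.F90
source is assumed so that #include works. The *def.h headers supply only the cpp type
definitions (Vec, Mat, KSP, ...) and declare no parameters, so they are safe in a
module's data section; the regular headers also declare the parameters and therefore
belong inside each subroutine.

module example_globals
! data section: only the *def.h headers, which carry the type
! definitions but no parameter declarations
#include "finclude/petscdef.h"
#include "finclude/petscvecdef.h"
#include "finclude/petscmatdef.h"
#include "finclude/petsckspdef.h"
      implicit none
      Vec xx, b_rhs
      Mat A_mat
      KSP ksp
      integer NN
end module example_globals

subroutine create_matrix
      use example_globals
      implicit none
! inside a subroutine: the regular headers, which additionally declare
! parameters such as PETSC_COMM_WORLD, PETSC_DECIDE and PETSC_NULL_INTEGER
#include "finclude/petsc.h"
#include "finclude/petscmat.h"
      integer ierr
      call MatCreateMPIAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE,   &
                           NN, NN, 11, PETSC_NULL_INTEGER,                 &
                           11, PETSC_NULL_INTEGER, A_mat, ierr)
end subroutine create_matrix

Because the module exports only the object handles (plain integers after macro
expansion), a subroutine that uses the module can still include the full headers
without the "conflict with ... a USE statement" errors seen earlier in the thread.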
From zonexo at gmail.com  Mon Apr 13 11:11:10 2009
From: zonexo at gmail.com (Wee-Beng TAY)
Date: Tue, 14 Apr 2009 00:11:10 +0800
Subject: MPICH error when using petsc-3.0.0-p4
In-Reply-To: 
References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> <49E210AC.6000502@gmail.com>
Message-ID: <49E3641E.5080304@gmail.com>

Hi Satish,

Compiling and building now worked without error. However, when I run, I 
get the error:

0 - MPI_ISEND : Datatype is MPI_TYPE_NULL
[0] Aborting program !
[0] Aborting program!
Error 323, process 0, host GOTCHAMA-E73BB3:

The problem lies in this subroutine:

subroutine v_ast_row_copy

#include "finclude/petsc.h"
#include "finclude/petscvec.h"
#include "finclude/petscmat.h"
#include "finclude/petscksp.h"
#include "finclude/petscpc.h"
#include "finclude/petscsys.h"

!to copy data of jend row to others

integer :: inext,iprev,istatus(MPI_STATUS_SIZE),irecv1,ierr,isend1

inext = myid + 1
iprev = myid - 1

if (myid == num_procs - 1) inext = MPI_PROC_NULL

if (myid == 0) iprev = MPI_PROC_NULL

CALL MPI_ISEND(v_ast(1,jend),size_x,MPI_REAL8,inext,1,MPI_COMM_WORLD,isend1,ierr)
CALL MPI_IRECV(v_ast(1,jsta-1),size_x,MPI_REAL8,iprev,1,MPI_COMM_WORLD,irecv1,ierr)
CALL MPI_WAIT(isend1, istatus, ierr)
CALL MPI_WAIT(irecv1, istatus, ierr)

end subroutine v_ast_row_copy

I copied this subroutine from the RS6000 MPI manual and it used to work. I wonder 
if this is an MPI or PETSc problem? Strange, because I already specify the type 
to be MPI_REAL8. However, changing to MPI_REAL solves the problem.

If this is an MPI problem, then you can just ignore it. I'll check it in some 
MPI forum.

Thank you very much and have a nice day!

Yours sincerely,

Wee-Beng Tay
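For reference, a sketch of how the same exchange could be written. This is not the
poster's code: implicit none is added, the variables that live in the module in the
original (v_ast, size_x, jsta, jend, myid, num_procs) are passed as arguments so the
fragment is self-contained, and MPI_DOUBLE_PRECISION is used instead of MPI_REAL8.
The MPI_TYPE_NULL failure suggests this MPICH build leaves the optional MPI_REAL8
datatype undefined, whereas MPI_DOUBLE_PRECISION matches a double precision buffer
on any MPI implementation (MPI_REAL describes a default real, so it only avoids the
crash; it does not match the buffer if v_ast is double precision).

subroutine exchange_v_ast_row(v_ast, size_x, jsta, jend, myid, num_procs)
      implicit none
#include "finclude/petsc.h"
      integer, intent(in) :: size_x, jsta, jend, myid, num_procs
      double precision, intent(inout) :: v_ast(size_x, jsta-1:jend)
      integer :: inext, iprev, isend1, irecv1, ierr
      integer :: istatus(MPI_STATUS_SIZE)

      inext = myid + 1
      iprev = myid - 1
      if (myid == num_procs - 1) inext = MPI_PROC_NULL
      if (myid == 0)             iprev = MPI_PROC_NULL

      ! MPI_DOUBLE_PRECISION matches a double precision buffer portably;
      ! MPI_REAL8 is optional and appears to be a null datatype in this build
      call MPI_ISEND(v_ast(1, jend),   size_x, MPI_DOUBLE_PRECISION, inext,  &
                     1, MPI_COMM_WORLD, isend1, ierr)
      call MPI_IRECV(v_ast(1, jsta-1), size_x, MPI_DOUBLE_PRECISION, iprev,  &
                     1, MPI_COMM_WORLD, irecv1, ierr)
      call MPI_WAIT(isend1, istatus, ierr)
      call MPI_WAIT(irecv1, istatus, ierr)
end subroutine exchange_v_ast_row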
>>>>>> [MPI_STATUS_SIZE] >>>>>> INTEGER MPI_STATUSES_IGNORE(MPI_STATUS_SIZE) >>>>>> ----------------------------------^ >>>>>> E:\cygwin\codes\petsc-3.0.0-p4\include/finclude/petsc.h(154) : Error: >>>>>> The >>>>>> attributes of this name conflict with those made accessible by a USE >>>>>> statement. [MPI_DOUBLE_PRECISION] >>>>>> parameter(MPIU_SCALAR = MPI_DOUBLE_PRECISION) >>>>>> ------------------------------^ >>>>>> Error executing df.exe. >>>>>> >>>>>> flux_area.obj - 3 error(s), 0 warning(s) >>>>>> >>>>>> My include option is : >>>>>> >>>>>> Debug/;$(PETSC_DIR);$(PETSC_DIR)\$(PETSC_ARCH)\;$(PETSC_DIR)\$(PETSC_ARCH)\include;$(PETSC_DIR)\include;E:\cygwin\codes\MPICH\SDK\include >>>>>> >>>>>> >>>>>> Interestingly, when I change my PETSC_DIR to petsc-dev, which >>>>>> correspond >>>>>> to an >>>>>> old build of petsc-2.3.3-p13, there is no problem. >>>>>> >>>>>> May I know what's wrong? Btw, I've converted my mpif.h from using "C" >>>>>> as >>>>>> comments to "!". >>>>>> >>>>>> Thank you very much and have a nice day! >>>>>> >>>>>> Yours sincerely, >>>>>> >>>>>> Wee-Beng Tay >>>>>> >>>>>> >>>>>> >>>>>> >>>>> >>>>> >>> >>> > > > From bsmith at mcs.anl.gov Mon Apr 13 11:21:29 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 13 Apr 2009 11:21:29 -0500 Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: <49E3641E.5080304@gmail.com> References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> <49E210AC.6000502@gmail.com> <49E3641E.5080304@gmail.com> Message-ID: Where is your implicit none that all sane programmers begin their Fortran subroutines with? On Apr 13, 2009, at 11:11 AM, Wee-Beng TAY wrote: > Hi Satish, > > Compiling and building now worked without error. However, when I > run, I get the error: > > 0 - MPI_ISEND : Datatype is MPI_TYPE_NULL > [0] Aborting program ! > [0] Aborting program! > Error 323, process 0, host GOTCHAMA-E73BB3: > > The problem lies in this subroutine: > > subroutine v_ast_row_copy > > #include "finclude/petsc.h" > #include "finclude/petscvec.h" > #include "finclude/petscmat.h" > #include "finclude/petscksp.h" > #include "finclude/petscpc.h" > #include "finclude/petscsys.h" > > !to copy data of jend row to others > > integer :: inext,iprev,istatus(MPI_STATUS_SIZE),irecv1,ierr,isend1 > > inext = myid + 1 > iprev = myid - 1 > > if (myid == num_procs - 1) inext = MPI_PROC_NULL > > if (myid == 0) iprev = MPI_PROC_NULL > > CALL MPI_ISEND(v_ast(1,jend),size_x,MPI_REAL8,inext, > 1,MPI_COMM_WORLD,isend1,ierr) > CALL MPI_IRECV(v_ast(1,jsta-1),size_x,MPI_REAL8,iprev, > 1,MPI_COMM_WORLD,irecv1,ierr) > CALL MPI_WAIT(isend1, istatus, ierr) > CALL MPI_WAIT(irecv1, istatus, ierr) > > end subroutine v_ast_row_copy > > > I copied this subroutine from the RS6000 mpi manual and it used to > work. I wonder if this is a MPI or PETSc problem? Strange because I > already specify the type to be MPI_REAL8. However changing to > MPI_REAL solves the problem. > > If this is a MPI problem, then you can just ignore it. I'll check it > in some MPI forum. > > > Thank you very much and have a nice day! > > Yours sincerely, > > Wee-Beng Tay > > > > Satish Balay wrote: >> Yes - all includes statements in both the sourcefiles should start >> with "finclude/..." 
[so that -Id:\cygwin\codes\petsc-3.0.0-p4 is not >> needed] >> >> And where you needed PETSC_AVOID_DECLARATIONS - you need to use the >> 'def.h' equivelent includes.. The def.h files have only the >> declarations [so the PETSC_AVOID_DECLARATIONS flag is no longer >> needed/used]. You need only the definitions in the datasection of >> 'module flux_area'. [All subroutines should use the regular includes] >> >> Satish >> >> >> On Mon, 13 Apr 2009, Wee-Beng TAY wrote: >> >> >>> Hi Satish, >>> >>> I now used >>> >>> #include "finclude/petsc.h" >>> #include "finclude/petscvec.h" >>> #include "finclude/petscmat.h" >>> #include "finclude/petscksp.h" >>> #include "finclude/petscpc.h" >>> #include "finclude/petscsys.h" >>> >>> for global.F and >>> >>> #include "finclude/petscdef.h" >>> #include "finclude/petscvecdef.h" >>> #include "finclude/petscmatdef.h" >>> #include "finclude/petsckspdef.h" >>> #include "finclude/petscpcdef.h" >>> >>> for flux_area.f90 and it's working now. Can you explain what's >>> happening? Is >>> this the correct way then? >>> >>> Thank you very much and have a nice day! >>> >>> Yours sincerely, >>> >>> Wee-Beng Tay >>> >>> >>> >>> Satish Balay wrote: >>> >>>> 2 changes you have to make for 3.0.0 >>>> >>>> 1. "include/finclude.. -> "finclude..." >>>> >>>> 2. PETSC_AVOID_DECLARATIONS should be removed - and use petscdef.h >>>> equivalnet files. >>>> i.e >>>> >>>> change: >>>> #define PETSC_AVOID_DECLARATIONS >>>> #include "include/finclude/petsc.h" >>>> #include "include/finclude/petscvec.h" >>>> #include "include/finclude/petscmat.h" >>>> #include "include/finclude/petscksp.h" >>>> #include "include/finclude/petscpc.h" >>>> #undef PETSC_AVOID_DECLARATIONS >>>> >>>> to: >>>> #include "finclude/petscdef.h" >>>> #include "finclude/petscvecdef.h" >>>> #include "finclude/petscmatdef.h" >>>> #include "finclude/petsckspdef.h" >>>> #include "finclude/petscpcdef.h" >>>> >>>> Satish >>>> >>>> On Sun, 12 Apr 2009, Wee-Beng TAY wrote: >>>> >>>> >>>>> Hi Satish, >>>>> >>>>> I am now using the PETSc ex2f example. I tried "make ex2f" and >>>>> manage to >>>>> build >>>>> and run the file. Then I used the options as a reference for my >>>>> visual >>>>> fortran >>>>> and it worked. >>>>> >>>>> The options are: >>>>> >>>>> /compile_only /debug:full /include:"Debug/" >>>>> /include:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include" >>>>> /include:"d:\cygwin\codes\petsc-3.0.0-p4\include" >>>>> /include:"E:\cygwin\codes\MPICH\SDK\include" /nologo /threads >>>>> /warn:nofileopt >>>>> /module:"Debug/" /object:"Debug/" /pdbfile:"Debug/DF60.PDB" / >>>>> fpp:"/m" >>>>> >>>>> and >>>>> >>>>> ws2_32.lib kernel32.lib user32.lib gdi32.lib winspool.lib >>>>> comdlg32.lib >>>>> advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib >>>>> odbc32.lib >>>>> odbccp32.lib libpetscts.lib libpetscsnes.lib libpetscksp.lib >>>>> libpetscdm.lib >>>>> libpetscmat.lib libpetscvec.lib libpetsc.lib mpich.lib >>>>> libfblas.lib >>>>> libflapack.lib /nologo /subsystem:console /incremental:yes >>>>> /pdb:"Debug/ex2f.pdb" /debug /machine:I386 /out:"Debug/ex2f.exe" >>>>> /pdbtype:sept >>>>> /libpath:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\lib" >>>>> /libpath:"E:\cygwin\codes\MPICH\SDK\lib" >>>>> >>>>> Now I add my own file called global.F and tried to compile, >>>>> using the same >>>>> options.But now it failed. The error is: >>>>> >>>>> --------------------Configuration: ex2f - Win32 >>>>> Debug-------------------- >>>>> Compiling Fortran... 
>>>>> ------------------------------------------------------------------------ >>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F: >>>>> 7: #include "include/finclude/petsc.h" >>>>> ^ >>>>> ** error on line 7 in D:\cygwin\codes\pets: cannot find file >>>>> specified >>>>> in include directive. >>>>> 8: #include "include/finclude/petscvec.h" >>>>> ^ >>>>> ** error on line 8 in D:\cygwin\codes\pets: cannot find file >>>>> specified >>>>> in include directive. >>>>> 9: #include "include/finclude/petscmat.h" >>>>> ^ >>>>> ** error on line 9 in D:\cygwin\codes\pets: cannot find file >>>>> specified >>>>> in include directive. >>>>> 10: #include "include/finclude/petscksp.h" >>>>> ^ >>>>> ** error on line 10 in D:\cygwin\codes\pets: cannot find file >>>>> specified in include directive. >>>>> 11: #include "include/finclude/petscpc.h" >>>>> ^ >>>>> ** error on line 11 in D:\cygwin\codes\pets: cannot find file >>>>> specified in include directive. >>>>> 12: #include "include/finclude/petscsys.h" >>>>> ^ >>>>> ** error on line 12 in D:\cygwin\codes\pets: cannot find file >>>>> specified in include directive. >>>>> 97: #include "include/finclude/petsc.h" >>>>> ^ >>>>> ** error on line 97 in D:\cygwin\codes\pets: cannot find file >>>>> specified in include directive. >>>>> 98: #include "include/finclude/petscvec.h" >>>>> ^ >>>>> ** error on line 98 in D:\cygwin\codes\pets: cannot find file >>>>> specified in include directive. >>>>> 99: #include "include/finclude/petscmat.h" >>>>> ^ >>>>> ** error on line 99 in D:\cygwin\codes\pets: cannot find file >>>>> specified in include directive. >>>>> 100: #include "include/finclude/petscksp.h" >>>>> ^ >>>>> ** error on line 100 in D:\cygwin\codes\pets: cannot find file >>>>> specified in include directive. >>>>> 101: #include "include/finclude/petscpc.h" >>>>> ^ >>>>> ** error on line 101 in D:\cygwin\codes\pets: cannot find file >>>>> specified in include directive. >>>>> 102: #include "include/finclude/petscsys.h" >>>>> ^ >>>>> ** error on line 102 in D:\cygwin\codes\pets: cannot find file >>>>> specified in include directive. >>>>> global.i >>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f >>>>> \global.F(65) : >>>>> Error: >>>>> Syntax error, found ',' when expecting one of: ( : % . = => >>>>> Vec xx,b_rhs,xx_uv,b_rhs_uv >>>>> -----------------^ >>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f >>>>> \global.F(67) : >>>>> Error: >>>>> Syntax error, found ',' when expecting one of: ( : % . = => >>>>> Mat A_mat,A_mat_uv ! /* sparse matrix */ >>>>> --------------------^ >>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f >>>>> \global.F(69) : >>>>> Error: >>>>> Syntax error, found ',' when expecting one of: ( : % . = => >>>>> KSP ksp,ksp_uv !/* linear solver context */ >>>>> -----------------^ >>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f >>>>> \global.F(71) : >>>>> Error: >>>>> Syntax error, found ',' when expecting one of: ( : % . = => >>>>> PC pc,pc_uv >>>>> ------------------^ >>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f >>>>> \global.F(73) : >>>>> Error: >>>>> Syntax error, found END-OF-STATEMENT when expecting one of: ( : >>>>> % . = => >>>>> PCType ptype >>>>> -------------------------^ >>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f >>>>> \global.F(75) : >>>>> Error: >>>>> Syntax error, found END-OF-STATEMENT when expecting one of: ( : >>>>> % . = => >>>>> KSPType ksptype.... 
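[Editor's note: for readers following this thread, the include layout Satish recommends for petsc-3.0.0 can be condensed into a short sketch. The module, subroutine and variable names below are placeholders, not taken from the poster's global.F or flux_area.f90; this is only an illustration of the def-header vs full-header split, under the assumption that the full headers pull in mpif.h as in the poster's build.]

      module petsc_globals
      implicit none
! the *def.h headers contain only the type definitions, so they are safe
! in a module's data section and do not drag mpif.h symbols into the module
#include "finclude/petscdef.h"
#include "finclude/petscvecdef.h"
#include "finclude/petscmatdef.h"
#include "finclude/petsckspdef.h"
      Vec x, b
      Mat A
      KSP ksp
      end module petsc_globals

      subroutine solve_system
      use petsc_globals
      implicit none
! the full headers (definitions plus parameter declarations) belong in the
! subroutines and functions that actually call PETSc routines
#include "finclude/petsc.h"
#include "finclude/petscvec.h"
#include "finclude/petscmat.h"
#include "finclude/petscksp.h"
      PetscErrorCode ierr
      call KSPSolve(ksp, b, x, ierr)
      end subroutine solve_system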
>>>>> >>>>> >>>>> I can get it to compile if I use : >>>>> >>>>> Debug/;d:\cygwin\codes\petsc-3.0.0-p4;d:\cygwin\codes >>>>> \petsc-3.0.0-p4\include;d:\cygwin\codes\petsc-3.0.0- >>>>> p4\win32_mpi_debug\include;E:\cygwin\codes\MPICH\SDK\include >>>>> >>>>> Compared to the original one above which is: >>>>> >>>>> Debug/;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;d: >>>>> \cygwin\codes\petsc-3.0.0-p4\include;E:\cygwin\codes\MPICH\SDK >>>>> \include >>>>> >>>>> Hence, there is an additional "d:\cygwin\codes\petsc-3.0.0-p4" >>>>> >>>>> I have attached my global.F. I wonder if this is the cause of >>>>> the MPICH >>>>> error. >>>>> >>>>> Currently, I have removed all other f90 files, except for >>>>> global.F and >>>>> flux_area.f90. It's when I 'm compiling flux_area.f90 that I got >>>>> the MPI >>>>> error >>>>> stated below. I got the same error if I compile under cygwin >>>>> using the >>>>> same >>>>> parameters. >>>>> >>>>> Hope you can help. >>>>> >>>>> Thank you very much and have a nice day! >>>>> >>>>> Yours sincerely, >>>>> >>>>> Wee-Beng Tay >>>>> >>>>> >>>>> >>>>> Satish Balay wrote: >>>>> >>>>>> Do you get these errors with PETSc f90 examples? >>>>>> >>>>>> what 'USE statement' do you have in your code? >>>>>> >>>>>> I guess you'll have to check your code to see how you are using >>>>>> f90 >>>>>> modules/includes. >>>>>> >>>>>> If you can get a minimal compileable code that can reproduce this >>>>>> error - send us the code so that we can reproduce the issue >>>>>> >>>>>> Satish >>>>>> >>>>>> On Thu, 9 Apr 2009, Wee-Beng TAY wrote: >>>>>> >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I just built petsc-3.0.0-p4 with mpich and after that, I >>>>>>> reinstalled >>>>>>> my >>>>>>> windows xp and installed mpich in the same directory. I'm >>>>>>> using CVF >>>>>>> >>>>>>> Now, I found that when I'm trying to compile my code, I got >>>>>>> the error: >>>>>>> >>>>>>> :\cygwin\codes\MPICH\SDK\include\mpif.h(105) : Error: The >>>>>>> attributes >>>>>>> of >>>>>>> this >>>>>>> name conflict with those made accessible by a USE statement. >>>>>>> [MPI_STATUS_SIZE] >>>>>>> INTEGER MPI_STATUS_IGNORE(MPI_STATUS_SIZE) >>>>>>> --------------------------------^ >>>>>>> E:\cygwin\codes\MPICH\SDK\include\mpif.h(106) : Error: The >>>>>>> attributes >>>>>>> of >>>>>>> this >>>>>>> name conflict with those made accessible by a USE statement. >>>>>>> [MPI_STATUS_SIZE] >>>>>>> INTEGER MPI_STATUSES_IGNORE(MPI_STATUS_SIZE) >>>>>>> ----------------------------------^ >>>>>>> E:\cygwin\codes\petsc-3.0.0-p4\include/finclude/petsc.h(154) : >>>>>>> Error: >>>>>>> The >>>>>>> attributes of this name conflict with those made accessible by >>>>>>> a USE >>>>>>> statement. [MPI_DOUBLE_PRECISION] >>>>>>> parameter(MPIU_SCALAR = MPI_DOUBLE_PRECISION) >>>>>>> ------------------------------^ >>>>>>> Error executing df.exe. >>>>>>> >>>>>>> flux_area.obj - 3 error(s), 0 warning(s) >>>>>>> >>>>>>> My include option is : >>>>>>> >>>>>>> Debug/;$(PETSC_DIR);$(PETSC_DIR)\$(PETSC_ARCH)\;$(PETSC_DIR)\$ >>>>>>> (PETSC_ARCH)\include;$(PETSC_DIR)\include;E:\cygwin\codes\MPICH >>>>>>> \SDK\include >>>>>>> >>>>>>> >>>>>>> Interestingly, when I change my PETSC_DIR to petsc-dev, which >>>>>>> correspond >>>>>>> to an >>>>>>> old build of petsc-2.3.3-p13, there is no problem. >>>>>>> >>>>>>> May I know what's wrong? Btw, I've converted my mpif.h from >>>>>>> using "C" >>>>>>> as >>>>>>> comments to "!". >>>>>>> >>>>>>> Thank you very much and have a nice day! 
>>>>>>> >>>>>>> Yours sincerely, >>>>>>> >>>>>>> Wee-Beng Tay >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>> >> >> >> From balay at mcs.anl.gov Mon Apr 13 11:27:30 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 13 Apr 2009 11:27:30 -0500 (CDT) Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> <49E210AC.6000502@gmail.com> <49E3641E.5080304@gmail.com> Message-ID: And if v_ast() is used with PETSc - then it should be defined 'PetscScalar' and then you use MPI_ISEND(....,MPIU_SCALAR,...) Satish On Mon, 13 Apr 2009, Barry Smith wrote: > > Where is your implicit none that all sane programmers begin their Fortran > subroutines with? > > > On Apr 13, 2009, at 11:11 AM, Wee-Beng TAY wrote: > > > Hi Satish, > > > > Compiling and building now worked without error. However, when I run, I get > > the error: > > > > 0 - MPI_ISEND : Datatype is MPI_TYPE_NULL > > [0] Aborting program ! > > [0] Aborting program! > > Error 323, process 0, host GOTCHAMA-E73BB3: > > > > The problem lies in this subroutine: > > > > subroutine v_ast_row_copy > > > > #include "finclude/petsc.h" > > #include "finclude/petscvec.h" > > #include "finclude/petscmat.h" > > #include "finclude/petscksp.h" > > #include "finclude/petscpc.h" > > #include "finclude/petscsys.h" > > > > !to copy data of jend row to others > > > > integer :: inext,iprev,istatus(MPI_STATUS_SIZE),irecv1,ierr,isend1 > > > > inext = myid + 1 > > iprev = myid - 1 > > > > if (myid == num_procs - 1) inext = MPI_PROC_NULL > > > > if (myid == 0) iprev = MPI_PROC_NULL > > > > CALL > > MPI_ISEND(v_ast(1,jend),size_x,MPI_REAL8,inext,1,MPI_COMM_WORLD,isend1,ierr) > > CALL > > MPI_IRECV(v_ast(1,jsta-1),size_x,MPI_REAL8,iprev,1,MPI_COMM_WORLD,irecv1,ierr) > > CALL MPI_WAIT(isend1, istatus, ierr) > > CALL MPI_WAIT(irecv1, istatus, ierr) > > > > end subroutine v_ast_row_copy > > > > > > I copied this subroutine from the RS6000 mpi manual and it used to work. I > > wonder if this is a MPI or PETSc problem? Strange because I already specify > > the type to be MPI_REAL8. However changing to MPI_REAL solves the problem. > > > > If this is a MPI problem, then you can just ignore it. I'll check it in some > > MPI forum. > > > > > > Thank you very much and have a nice day! > > > > Yours sincerely, > > > > Wee-Beng Tay > > > > > > > > Satish Balay wrote: > > > Yes - all includes statements in both the sourcefiles should start > > > with "finclude/..." [so that -Id:\cygwin\codes\petsc-3.0.0-p4 is not > > > needed] > > > > > > And where you needed PETSC_AVOID_DECLARATIONS - you need to use the > > > 'def.h' equivelent includes.. The def.h files have only the > > > declarations [so the PETSC_AVOID_DECLARATIONS flag is no longer > > > needed/used]. You need only the definitions in the datasection of > > > 'module flux_area'. 
[All subroutines should use the regular includes] > > > > > > Satish > > > > > > > > > On Mon, 13 Apr 2009, Wee-Beng TAY wrote: > > > > > > > > > > Hi Satish, > > > > > > > > I now used > > > > > > > > #include "finclude/petsc.h" > > > > #include "finclude/petscvec.h" > > > > #include "finclude/petscmat.h" > > > > #include "finclude/petscksp.h" > > > > #include "finclude/petscpc.h" > > > > #include "finclude/petscsys.h" > > > > > > > > for global.F and > > > > > > > > #include "finclude/petscdef.h" > > > > #include "finclude/petscvecdef.h" > > > > #include "finclude/petscmatdef.h" > > > > #include "finclude/petsckspdef.h" > > > > #include "finclude/petscpcdef.h" > > > > > > > > for flux_area.f90 and it's working now. Can you explain what's > > > > happening? Is > > > > this the correct way then? > > > > > > > > Thank you very much and have a nice day! > > > > > > > > Yours sincerely, > > > > > > > > Wee-Beng Tay > > > > > > > > > > > > > > > > Satish Balay wrote: > > > > > > > > > 2 changes you have to make for 3.0.0 > > > > > > > > > > 1. "include/finclude.. -> "finclude..." > > > > > > > > > > 2. PETSC_AVOID_DECLARATIONS should be removed - and use petscdef.h > > > > > equivalnet files. > > > > > i.e > > > > > > > > > > change: > > > > > #define PETSC_AVOID_DECLARATIONS > > > > > #include "include/finclude/petsc.h" > > > > > #include "include/finclude/petscvec.h" > > > > > #include "include/finclude/petscmat.h" > > > > > #include "include/finclude/petscksp.h" > > > > > #include "include/finclude/petscpc.h" > > > > > #undef PETSC_AVOID_DECLARATIONS > > > > > > > > > > to: > > > > > #include "finclude/petscdef.h" > > > > > #include "finclude/petscvecdef.h" > > > > > #include "finclude/petscmatdef.h" > > > > > #include "finclude/petsckspdef.h" > > > > > #include "finclude/petscpcdef.h" > > > > > > > > > > Satish > > > > > > > > > > On Sun, 12 Apr 2009, Wee-Beng TAY wrote: > > > > > > > > > > > > > > > > Hi Satish, > > > > > > > > > > > > I am now using the PETSc ex2f example. I tried "make ex2f" and > > > > > > manage to > > > > > > build > > > > > > and run the file. Then I used the options as a reference for my > > > > > > visual > > > > > > fortran > > > > > > and it worked. > > > > > > > > > > > > The options are: > > > > > > > > > > > > /compile_only /debug:full /include:"Debug/" > > > > > > /include:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include" > > > > > > /include:"d:\cygwin\codes\petsc-3.0.0-p4\include" > > > > > > /include:"E:\cygwin\codes\MPICH\SDK\include" /nologo /threads > > > > > > /warn:nofileopt > > > > > > /module:"Debug/" /object:"Debug/" /pdbfile:"Debug/DF60.PDB" > > > > > > /fpp:"/m" > > > > > > > > > > > > and > > > > > > > > > > > > ws2_32.lib kernel32.lib user32.lib gdi32.lib winspool.lib > > > > > > comdlg32.lib > > > > > > advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib > > > > > > odbccp32.lib libpetscts.lib libpetscsnes.lib libpetscksp.lib > > > > > > libpetscdm.lib > > > > > > libpetscmat.lib libpetscvec.lib libpetsc.lib mpich.lib libfblas.lib > > > > > > libflapack.lib /nologo /subsystem:console /incremental:yes > > > > > > /pdb:"Debug/ex2f.pdb" /debug /machine:I386 /out:"Debug/ex2f.exe" > > > > > > /pdbtype:sept > > > > > > /libpath:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\lib" > > > > > > /libpath:"E:\cygwin\codes\MPICH\SDK\lib" > > > > > > > > > > > > Now I add my own file called global.F and tried to compile, using > > > > > > the same > > > > > > options.But now it failed. 
The error is: > > > > > > > > > > > > --------------------Configuration: ex2f - Win32 > > > > > > Debug-------------------- > > > > > > Compiling Fortran... > > > > > > ------------------------------------------------------------------------ > > > > > > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F: > > > > > > 7: #include "include/finclude/petsc.h" > > > > > > ^ > > > > > > ** error on line 7 in D:\cygwin\codes\pets: cannot find file > > > > > > specified > > > > > > in include directive. > > > > > > 8: #include "include/finclude/petscvec.h" > > > > > > ^ > > > > > > ** error on line 8 in D:\cygwin\codes\pets: cannot find file > > > > > > specified > > > > > > in include directive. > > > > > > 9: #include "include/finclude/petscmat.h" > > > > > > ^ > > > > > > ** error on line 9 in D:\cygwin\codes\pets: cannot find file > > > > > > specified > > > > > > in include directive. > > > > > > 10: #include "include/finclude/petscksp.h" > > > > > > ^ > > > > > > ** error on line 10 in D:\cygwin\codes\pets: cannot find file > > > > > > specified in include directive. > > > > > > 11: #include "include/finclude/petscpc.h" > > > > > > ^ > > > > > > ** error on line 11 in D:\cygwin\codes\pets: cannot find file > > > > > > specified in include directive. > > > > > > 12: #include "include/finclude/petscsys.h" > > > > > > ^ > > > > > > ** error on line 12 in D:\cygwin\codes\pets: cannot find file > > > > > > specified in include directive. > > > > > > 97: #include "include/finclude/petsc.h" > > > > > > ^ > > > > > > ** error on line 97 in D:\cygwin\codes\pets: cannot find file > > > > > > specified in include directive. > > > > > > 98: #include "include/finclude/petscvec.h" > > > > > > ^ > > > > > > ** error on line 98 in D:\cygwin\codes\pets: cannot find file > > > > > > specified in include directive. > > > > > > 99: #include "include/finclude/petscmat.h" > > > > > > ^ > > > > > > ** error on line 99 in D:\cygwin\codes\pets: cannot find file > > > > > > specified in include directive. > > > > > > 100: #include "include/finclude/petscksp.h" > > > > > > ^ > > > > > > ** error on line 100 in D:\cygwin\codes\pets: cannot find file > > > > > > specified in include directive. > > > > > > 101: #include "include/finclude/petscpc.h" > > > > > > ^ > > > > > > ** error on line 101 in D:\cygwin\codes\pets: cannot find file > > > > > > specified in include directive. > > > > > > 102: #include "include/finclude/petscsys.h" > > > > > > ^ > > > > > > ** error on line 102 in D:\cygwin\codes\pets: cannot find file > > > > > > specified in include directive. > > > > > > global.i > > > > > > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(65) > > > > > > : > > > > > > Error: > > > > > > Syntax error, found ',' when expecting one of: ( : % . = => > > > > > > Vec xx,b_rhs,xx_uv,b_rhs_uv > > > > > > -----------------^ > > > > > > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(67) > > > > > > : > > > > > > Error: > > > > > > Syntax error, found ',' when expecting one of: ( : % . = => > > > > > > Mat A_mat,A_mat_uv ! /* sparse matrix */ > > > > > > --------------------^ > > > > > > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(69) > > > > > > : > > > > > > Error: > > > > > > Syntax error, found ',' when expecting one of: ( : % . 
= => > > > > > > KSP ksp,ksp_uv !/* linear solver context */ > > > > > > -----------------^ > > > > > > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(71) > > > > > > : > > > > > > Error: > > > > > > Syntax error, found ',' when expecting one of: ( : % . = => > > > > > > PC pc,pc_uv > > > > > > ------------------^ > > > > > > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(73) > > > > > > : > > > > > > Error: > > > > > > Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . > > > > > > = => > > > > > > PCType ptype > > > > > > -------------------------^ > > > > > > D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(75) > > > > > > : > > > > > > Error: > > > > > > Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . > > > > > > = => > > > > > > KSPType ksptype.... > > > > > > > > > > > > > > > > > > I can get it to compile if I use : > > > > > > > > > > > > Debug/;d:\cygwin\codes\petsc-3.0.0-p4;d:\cygwin\codes\petsc-3.0.0-p4\include;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;E:\cygwin\codes\MPICH\SDK\include > > > > > > > > > > > > Compared to the original one above which is: > > > > > > > > > > > > Debug/;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;d:\cygwin\codes\petsc-3.0.0-p4\include;E:\cygwin\codes\MPICH\SDK\include > > > > > > > > > > > > Hence, there is an additional "d:\cygwin\codes\petsc-3.0.0-p4" > > > > > > > > > > > > I have attached my global.F. I wonder if this is the cause of the > > > > > > MPICH > > > > > > error. > > > > > > > > > > > > Currently, I have removed all other f90 files, except for global.F > > > > > > and > > > > > > flux_area.f90. It's when I 'm compiling flux_area.f90 that I got the > > > > > > MPI > > > > > > error > > > > > > stated below. I got the same error if I compile under cygwin using > > > > > > the > > > > > > same > > > > > > parameters. > > > > > > > > > > > > Hope you can help. > > > > > > > > > > > > Thank you very much and have a nice day! > > > > > > > > > > > > Yours sincerely, > > > > > > > > > > > > Wee-Beng Tay > > > > > > > > > > > > > > > > > > > > > > > > Satish Balay wrote: > > > > > > > > > > > > > Do you get these errors with PETSc f90 examples? > > > > > > > > > > > > > > what 'USE statement' do you have in your code? > > > > > > > > > > > > > > I guess you'll have to check your code to see how you are using > > > > > > > f90 > > > > > > > modules/includes. > > > > > > > > > > > > > > If you can get a minimal compileable code that can reproduce this > > > > > > > error - send us the code so that we can reproduce the issue > > > > > > > > > > > > > > Satish > > > > > > > > > > > > > > On Thu, 9 Apr 2009, Wee-Beng TAY wrote: > > > > > > > > > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > > > I just built petsc-3.0.0-p4 with mpich and after that, I > > > > > > > > reinstalled > > > > > > > > my > > > > > > > > windows xp and installed mpich in the same directory. I'm using > > > > > > > > CVF > > > > > > > > > > > > > > > > Now, I found that when I'm trying to compile my code, I got the > > > > > > > > error: > > > > > > > > > > > > > > > > :\cygwin\codes\MPICH\SDK\include\mpif.h(105) : Error: The > > > > > > > > attributes > > > > > > > > of > > > > > > > > this > > > > > > > > name conflict with those made accessible by a USE statement. 
> > > > > > > > [MPI_STATUS_SIZE] > > > > > > > > INTEGER MPI_STATUS_IGNORE(MPI_STATUS_SIZE) > > > > > > > > --------------------------------^ > > > > > > > > E:\cygwin\codes\MPICH\SDK\include\mpif.h(106) : Error: The > > > > > > > > attributes > > > > > > > > of > > > > > > > > this > > > > > > > > name conflict with those made accessible by a USE statement. > > > > > > > > [MPI_STATUS_SIZE] > > > > > > > > INTEGER MPI_STATUSES_IGNORE(MPI_STATUS_SIZE) > > > > > > > > ----------------------------------^ > > > > > > > > E:\cygwin\codes\petsc-3.0.0-p4\include/finclude/petsc.h(154) : > > > > > > > > Error: > > > > > > > > The > > > > > > > > attributes of this name conflict with those made accessible by a > > > > > > > > USE > > > > > > > > statement. [MPI_DOUBLE_PRECISION] > > > > > > > > parameter(MPIU_SCALAR = MPI_DOUBLE_PRECISION) > > > > > > > > ------------------------------^ > > > > > > > > Error executing df.exe. > > > > > > > > > > > > > > > > flux_area.obj - 3 error(s), 0 warning(s) > > > > > > > > > > > > > > > > My include option is : > > > > > > > > > > > > > > > > Debug/;$(PETSC_DIR);$(PETSC_DIR)\$(PETSC_ARCH)\;$(PETSC_DIR)\$(PETSC_ARCH)\include;$(PETSC_DIR)\include;E:\cygwin\codes\MPICH\SDK\include > > > > > > > > > > > > > > > > > > > > > > > > Interestingly, when I change my PETSC_DIR to petsc-dev, which > > > > > > > > correspond > > > > > > > > to an > > > > > > > > old build of petsc-2.3.3-p13, there is no problem. > > > > > > > > > > > > > > > > May I know what's wrong? Btw, I've converted my mpif.h from > > > > > > > > using "C" > > > > > > > > as > > > > > > > > comments to "!". > > > > > > > > > > > > > > > > Thank you very much and have a nice day! > > > > > > > > > > > > > > > > Yours sincerely, > > > > > > > > > > > > > > > > Wee-Beng Tay > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From zonexo at gmail.com Mon Apr 13 19:49:02 2009 From: zonexo at gmail.com (Wee-Beng TAY) Date: Tue, 14 Apr 2009 08:49:02 +0800 Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> <49E210AC.6000502@gmail.com> <49E3641E.5080304@gmail.com> Message-ID: <49E3DD7E.5060803@gmail.com> Hi Satish and Barry, It worked! Btw, this is only one of the subroutines in my module file. Hence, my "implicit none" is found at the top of the module file. Thanks for the reminder though, Barry. Thank you very much and have a nice day! Yours sincerely, Wee-Beng Tay Satish Balay wrote: > And if v_ast() is used with PETSc - then it should be defined 'PetscScalar' > > and then you use MPI_ISEND(....,MPIU_SCALAR,...) > > Satish > > > On Mon, 13 Apr 2009, Barry Smith wrote: > > >> Where is your implicit none that all sane programmers begin their Fortran >> subroutines with? >> >> >> On Apr 13, 2009, at 11:11 AM, Wee-Beng TAY wrote: >> >> >>> Hi Satish, >>> >>> Compiling and building now worked without error. However, when I run, I get >>> the error: >>> >>> 0 - MPI_ISEND : Datatype is MPI_TYPE_NULL >>> [0] Aborting program ! >>> [0] Aborting program! 
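[Editor's note: pulling together the fixes from this exchange -- implicit none (Barry), a PetscScalar buffer and MPIU_SCALAR as the message datatype (Satish) -- a minimal sketch of the halo copy follows. The array and index variables are passed as arguments here purely so the sketch stands alone; in the poster's code they live in the surrounding module, and the array bounds shown are a simplification of his actual allocation.]

      subroutine v_ast_row_copy_sketch(v_ast, size_x, jsta, jend, myid, num_procs)
      implicit none
#include "finclude/petsc.h"
      integer :: size_x, jsta, jend, myid, num_procs
      PetscScalar :: v_ast(size_x, jsta-1:jend)
      integer :: inext, iprev, isend1, irecv1, ierr
      integer :: istatus(MPI_STATUS_SIZE)

! neighbours in the 1-D row decomposition; MPI_PROC_NULL turns the
! send/receive at the ends of the domain into no-ops
      inext = myid + 1
      iprev = myid - 1
      if (myid == num_procs - 1) inext = MPI_PROC_NULL
      if (myid == 0) iprev = MPI_PROC_NULL

! MPIU_SCALAR is the MPI datatype PETSc declares for PetscScalar, so the
! message type always matches the scalar type of the PETSc build in use
      call MPI_ISEND(v_ast(1,jend), size_x, MPIU_SCALAR, inext, 1, MPI_COMM_WORLD, isend1, ierr)
      call MPI_IRECV(v_ast(1,jsta-1), size_x, MPIU_SCALAR, iprev, 1, MPI_COMM_WORLD, irecv1, ierr)
      call MPI_WAIT(isend1, istatus, ierr)
      call MPI_WAIT(irecv1, istatus, ierr)
      end subroutine v_ast_row_copy_sketch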
>>> Error 323, process 0, host GOTCHAMA-E73BB3: >>> >>> The problem lies in this subroutine: >>> >>> subroutine v_ast_row_copy >>> >>> #include "finclude/petsc.h" >>> #include "finclude/petscvec.h" >>> #include "finclude/petscmat.h" >>> #include "finclude/petscksp.h" >>> #include "finclude/petscpc.h" >>> #include "finclude/petscsys.h" >>> >>> !to copy data of jend row to others >>> >>> integer :: inext,iprev,istatus(MPI_STATUS_SIZE),irecv1,ierr,isend1 >>> >>> inext = myid + 1 >>> iprev = myid - 1 >>> >>> if (myid == num_procs - 1) inext = MPI_PROC_NULL >>> >>> if (myid == 0) iprev = MPI_PROC_NULL >>> >>> CALL >>> MPI_ISEND(v_ast(1,jend),size_x,MPI_REAL8,inext,1,MPI_COMM_WORLD,isend1,ierr) >>> CALL >>> MPI_IRECV(v_ast(1,jsta-1),size_x,MPI_REAL8,iprev,1,MPI_COMM_WORLD,irecv1,ierr) >>> CALL MPI_WAIT(isend1, istatus, ierr) >>> CALL MPI_WAIT(irecv1, istatus, ierr) >>> >>> end subroutine v_ast_row_copy >>> >>> >>> I copied this subroutine from the RS6000 mpi manual and it used to work. I >>> wonder if this is a MPI or PETSc problem? Strange because I already specify >>> the type to be MPI_REAL8. However changing to MPI_REAL solves the problem. >>> >>> If this is a MPI problem, then you can just ignore it. I'll check it in some >>> MPI forum. >>> >>> >>> Thank you very much and have a nice day! >>> >>> Yours sincerely, >>> >>> Wee-Beng Tay >>> >>> >>> >>> Satish Balay wrote: >>> >>>> Yes - all includes statements in both the sourcefiles should start >>>> with "finclude/..." [so that -Id:\cygwin\codes\petsc-3.0.0-p4 is not >>>> needed] >>>> >>>> And where you needed PETSC_AVOID_DECLARATIONS - you need to use the >>>> 'def.h' equivelent includes.. The def.h files have only the >>>> declarations [so the PETSC_AVOID_DECLARATIONS flag is no longer >>>> needed/used]. You need only the definitions in the datasection of >>>> 'module flux_area'. [All subroutines should use the regular includes] >>>> >>>> Satish >>>> >>>> >>>> On Mon, 13 Apr 2009, Wee-Beng TAY wrote: >>>> >>>> >>>> >>>>> Hi Satish, >>>>> >>>>> I now used >>>>> >>>>> #include "finclude/petsc.h" >>>>> #include "finclude/petscvec.h" >>>>> #include "finclude/petscmat.h" >>>>> #include "finclude/petscksp.h" >>>>> #include "finclude/petscpc.h" >>>>> #include "finclude/petscsys.h" >>>>> >>>>> for global.F and >>>>> >>>>> #include "finclude/petscdef.h" >>>>> #include "finclude/petscvecdef.h" >>>>> #include "finclude/petscmatdef.h" >>>>> #include "finclude/petsckspdef.h" >>>>> #include "finclude/petscpcdef.h" >>>>> >>>>> for flux_area.f90 and it's working now. Can you explain what's >>>>> happening? Is >>>>> this the correct way then? >>>>> >>>>> Thank you very much and have a nice day! >>>>> >>>>> Yours sincerely, >>>>> >>>>> Wee-Beng Tay >>>>> >>>>> >>>>> >>>>> Satish Balay wrote: >>>>> >>>>> >>>>>> 2 changes you have to make for 3.0.0 >>>>>> >>>>>> 1. "include/finclude.. -> "finclude..." >>>>>> >>>>>> 2. PETSC_AVOID_DECLARATIONS should be removed - and use petscdef.h >>>>>> equivalnet files. 
>>>>>> i.e >>>>>> >>>>>> change: >>>>>> #define PETSC_AVOID_DECLARATIONS >>>>>> #include "include/finclude/petsc.h" >>>>>> #include "include/finclude/petscvec.h" >>>>>> #include "include/finclude/petscmat.h" >>>>>> #include "include/finclude/petscksp.h" >>>>>> #include "include/finclude/petscpc.h" >>>>>> #undef PETSC_AVOID_DECLARATIONS >>>>>> >>>>>> to: >>>>>> #include "finclude/petscdef.h" >>>>>> #include "finclude/petscvecdef.h" >>>>>> #include "finclude/petscmatdef.h" >>>>>> #include "finclude/petsckspdef.h" >>>>>> #include "finclude/petscpcdef.h" >>>>>> >>>>>> Satish >>>>>> >>>>>> On Sun, 12 Apr 2009, Wee-Beng TAY wrote: >>>>>> >>>>>> >>>>>> >>>>>>> Hi Satish, >>>>>>> >>>>>>> I am now using the PETSc ex2f example. I tried "make ex2f" and >>>>>>> manage to >>>>>>> build >>>>>>> and run the file. Then I used the options as a reference for my >>>>>>> visual >>>>>>> fortran >>>>>>> and it worked. >>>>>>> >>>>>>> The options are: >>>>>>> >>>>>>> /compile_only /debug:full /include:"Debug/" >>>>>>> /include:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include" >>>>>>> /include:"d:\cygwin\codes\petsc-3.0.0-p4\include" >>>>>>> /include:"E:\cygwin\codes\MPICH\SDK\include" /nologo /threads >>>>>>> /warn:nofileopt >>>>>>> /module:"Debug/" /object:"Debug/" /pdbfile:"Debug/DF60.PDB" >>>>>>> /fpp:"/m" >>>>>>> >>>>>>> and >>>>>>> >>>>>>> ws2_32.lib kernel32.lib user32.lib gdi32.lib winspool.lib >>>>>>> comdlg32.lib >>>>>>> advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib >>>>>>> odbccp32.lib libpetscts.lib libpetscsnes.lib libpetscksp.lib >>>>>>> libpetscdm.lib >>>>>>> libpetscmat.lib libpetscvec.lib libpetsc.lib mpich.lib libfblas.lib >>>>>>> libflapack.lib /nologo /subsystem:console /incremental:yes >>>>>>> /pdb:"Debug/ex2f.pdb" /debug /machine:I386 /out:"Debug/ex2f.exe" >>>>>>> /pdbtype:sept >>>>>>> /libpath:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\lib" >>>>>>> /libpath:"E:\cygwin\codes\MPICH\SDK\lib" >>>>>>> >>>>>>> Now I add my own file called global.F and tried to compile, using >>>>>>> the same >>>>>>> options.But now it failed. The error is: >>>>>>> >>>>>>> --------------------Configuration: ex2f - Win32 >>>>>>> Debug-------------------- >>>>>>> Compiling Fortran... >>>>>>> ------------------------------------------------------------------------ >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F: >>>>>>> 7: #include "include/finclude/petsc.h" >>>>>>> ^ >>>>>>> ** error on line 7 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified >>>>>>> in include directive. >>>>>>> 8: #include "include/finclude/petscvec.h" >>>>>>> ^ >>>>>>> ** error on line 8 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified >>>>>>> in include directive. >>>>>>> 9: #include "include/finclude/petscmat.h" >>>>>>> ^ >>>>>>> ** error on line 9 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified >>>>>>> in include directive. >>>>>>> 10: #include "include/finclude/petscksp.h" >>>>>>> ^ >>>>>>> ** error on line 10 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 11: #include "include/finclude/petscpc.h" >>>>>>> ^ >>>>>>> ** error on line 11 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 12: #include "include/finclude/petscsys.h" >>>>>>> ^ >>>>>>> ** error on line 12 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. 
>>>>>>> 97: #include "include/finclude/petsc.h" >>>>>>> ^ >>>>>>> ** error on line 97 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 98: #include "include/finclude/petscvec.h" >>>>>>> ^ >>>>>>> ** error on line 98 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 99: #include "include/finclude/petscmat.h" >>>>>>> ^ >>>>>>> ** error on line 99 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 100: #include "include/finclude/petscksp.h" >>>>>>> ^ >>>>>>> ** error on line 100 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 101: #include "include/finclude/petscpc.h" >>>>>>> ^ >>>>>>> ** error on line 101 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 102: #include "include/finclude/petscsys.h" >>>>>>> ^ >>>>>>> ** error on line 102 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> global.i >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(65) >>>>>>> : >>>>>>> Error: >>>>>>> Syntax error, found ',' when expecting one of: ( : % . = => >>>>>>> Vec xx,b_rhs,xx_uv,b_rhs_uv >>>>>>> -----------------^ >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(67) >>>>>>> : >>>>>>> Error: >>>>>>> Syntax error, found ',' when expecting one of: ( : % . = => >>>>>>> Mat A_mat,A_mat_uv ! /* sparse matrix */ >>>>>>> --------------------^ >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(69) >>>>>>> : >>>>>>> Error: >>>>>>> Syntax error, found ',' when expecting one of: ( : % . = => >>>>>>> KSP ksp,ksp_uv !/* linear solver context */ >>>>>>> -----------------^ >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(71) >>>>>>> : >>>>>>> Error: >>>>>>> Syntax error, found ',' when expecting one of: ( : % . = => >>>>>>> PC pc,pc_uv >>>>>>> ------------------^ >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(73) >>>>>>> : >>>>>>> Error: >>>>>>> Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . >>>>>>> = => >>>>>>> PCType ptype >>>>>>> -------------------------^ >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(75) >>>>>>> : >>>>>>> Error: >>>>>>> Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . >>>>>>> = => >>>>>>> KSPType ksptype.... >>>>>>> >>>>>>> >>>>>>> I can get it to compile if I use : >>>>>>> >>>>>>> Debug/;d:\cygwin\codes\petsc-3.0.0-p4;d:\cygwin\codes\petsc-3.0.0-p4\include;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;E:\cygwin\codes\MPICH\SDK\include >>>>>>> >>>>>>> Compared to the original one above which is: >>>>>>> >>>>>>> Debug/;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;d:\cygwin\codes\petsc-3.0.0-p4\include;E:\cygwin\codes\MPICH\SDK\include >>>>>>> >>>>>>> Hence, there is an additional "d:\cygwin\codes\petsc-3.0.0-p4" >>>>>>> >>>>>>> I have attached my global.F. I wonder if this is the cause of the >>>>>>> MPICH >>>>>>> error. >>>>>>> >>>>>>> Currently, I have removed all other f90 files, except for global.F >>>>>>> and >>>>>>> flux_area.f90. It's when I 'm compiling flux_area.f90 that I got the >>>>>>> MPI >>>>>>> error >>>>>>> stated below. I got the same error if I compile under cygwin using >>>>>>> the >>>>>>> same >>>>>>> parameters. >>>>>>> >>>>>>> Hope you can help. >>>>>>> >>>>>>> Thank you very much and have a nice day! 
>>>>>>> >>>>>>> Yours sincerely, >>>>>>> >>>>>>> Wee-Beng Tay >>>>>>> >>>>>>> >>>>>>> >>>>>>> Satish Balay wrote: >>>>>>> >>>>>>> >>>>>>>> Do you get these errors with PETSc f90 examples? >>>>>>>> >>>>>>>> what 'USE statement' do you have in your code? >>>>>>>> >>>>>>>> I guess you'll have to check your code to see how you are using >>>>>>>> f90 >>>>>>>> modules/includes. >>>>>>>> >>>>>>>> If you can get a minimal compileable code that can reproduce this >>>>>>>> error - send us the code so that we can reproduce the issue >>>>>>>> >>>>>>>> Satish >>>>>>>> >>>>>>>> On Thu, 9 Apr 2009, Wee-Beng TAY wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I just built petsc-3.0.0-p4 with mpich and after that, I >>>>>>>>> reinstalled >>>>>>>>> my >>>>>>>>> windows xp and installed mpich in the same directory. I'm using >>>>>>>>> CVF >>>>>>>>> >>>>>>>>> Now, I found that when I'm trying to compile my code, I got the >>>>>>>>> error: >>>>>>>>> >>>>>>>>> :\cygwin\codes\MPICH\SDK\include\mpif.h(105) : Error: The >>>>>>>>> attributes >>>>>>>>> of >>>>>>>>> this >>>>>>>>> name conflict with those made accessible by a USE statement. >>>>>>>>> [MPI_STATUS_SIZE] >>>>>>>>> INTEGER MPI_STATUS_IGNORE(MPI_STATUS_SIZE) >>>>>>>>> --------------------------------^ >>>>>>>>> E:\cygwin\codes\MPICH\SDK\include\mpif.h(106) : Error: The >>>>>>>>> attributes >>>>>>>>> of >>>>>>>>> this >>>>>>>>> name conflict with those made accessible by a USE statement. >>>>>>>>> [MPI_STATUS_SIZE] >>>>>>>>> INTEGER MPI_STATUSES_IGNORE(MPI_STATUS_SIZE) >>>>>>>>> ----------------------------------^ >>>>>>>>> E:\cygwin\codes\petsc-3.0.0-p4\include/finclude/petsc.h(154) : >>>>>>>>> Error: >>>>>>>>> The >>>>>>>>> attributes of this name conflict with those made accessible by a >>>>>>>>> USE >>>>>>>>> statement. [MPI_DOUBLE_PRECISION] >>>>>>>>> parameter(MPIU_SCALAR = MPI_DOUBLE_PRECISION) >>>>>>>>> ------------------------------^ >>>>>>>>> Error executing df.exe. >>>>>>>>> >>>>>>>>> flux_area.obj - 3 error(s), 0 warning(s) >>>>>>>>> >>>>>>>>> My include option is : >>>>>>>>> >>>>>>>>> Debug/;$(PETSC_DIR);$(PETSC_DIR)\$(PETSC_ARCH)\;$(PETSC_DIR)\$(PETSC_ARCH)\include;$(PETSC_DIR)\include;E:\cygwin\codes\MPICH\SDK\include >>>>>>>>> >>>>>>>>> >>>>>>>>> Interestingly, when I change my PETSC_DIR to petsc-dev, which >>>>>>>>> correspond >>>>>>>>> to an >>>>>>>>> old build of petsc-2.3.3-p13, there is no problem. >>>>>>>>> >>>>>>>>> May I know what's wrong? Btw, I've converted my mpif.h from >>>>>>>>> using "C" >>>>>>>>> as >>>>>>>>> comments to "!". >>>>>>>>> >>>>>>>>> Thank you very much and have a nice day! >>>>>>>>> >>>>>>>>> Yours sincerely, >>>>>>>>> >>>>>>>>> Wee-Beng Tay >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>> >>>> > > > From zonexo at gmail.com Mon Apr 13 19:53:06 2009 From: zonexo at gmail.com (Wee-Beng TAY) Date: Tue, 14 Apr 2009 08:53:06 +0800 Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> <49E210AC.6000502@gmail.com> <49E3641E.5080304@gmail.com> Message-ID: <49E3DE72.8040402@gmail.com> Hi, So when must I use PetscScalar, and when can I just use real(8)? This is because most of my variables are in real(8), except for the Mat, Vec, required in the solving of the linear eqn. Is there a rule? 
I initially thought they are the same and if I remember correctly, it seems to work fine in PETSc2.3.3. My v_ast is now defined as: PetscScalar, allocatable :: u_ast(:,:), v_ast(:,:) allocate (u_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(1));allocate (v_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(2)) if (status(1)/=0 .or. status(2)/=0) STOP "Cannot allocate memory" Thank you very much and have a nice day! Yours sincerely, Wee-Beng Tay Satish Balay wrote: > And if v_ast() is used with PETSc - then it should be defined 'PetscScalar' > > and then you use MPI_ISEND(....,MPIU_SCALAR,...) > > Satish > > > On Mon, 13 Apr 2009, Barry Smith wrote: > > >> Where is your implicit none that all sane programmers begin their Fortran >> subroutines with? >> >> >> On Apr 13, 2009, at 11:11 AM, Wee-Beng TAY wrote: >> >> >>> Hi Satish, >>> >>> Compiling and building now worked without error. However, when I run, I get >>> the error: >>> >>> 0 - MPI_ISEND : Datatype is MPI_TYPE_NULL >>> [0] Aborting program ! >>> [0] Aborting program! >>> Error 323, process 0, host GOTCHAMA-E73BB3: >>> >>> The problem lies in this subroutine: >>> >>> subroutine v_ast_row_copy >>> >>> #include "finclude/petsc.h" >>> #include "finclude/petscvec.h" >>> #include "finclude/petscmat.h" >>> #include "finclude/petscksp.h" >>> #include "finclude/petscpc.h" >>> #include "finclude/petscsys.h" >>> >>> !to copy data of jend row to others >>> >>> integer :: inext,iprev,istatus(MPI_STATUS_SIZE),irecv1,ierr,isend1 >>> >>> inext = myid + 1 >>> iprev = myid - 1 >>> >>> if (myid == num_procs - 1) inext = MPI_PROC_NULL >>> >>> if (myid == 0) iprev = MPI_PROC_NULL >>> >>> CALL >>> MPI_ISEND(v_ast(1,jend),size_x,MPI_REAL8,inext,1,MPI_COMM_WORLD,isend1,ierr) >>> CALL >>> MPI_IRECV(v_ast(1,jsta-1),size_x,MPI_REAL8,iprev,1,MPI_COMM_WORLD,irecv1,ierr) >>> CALL MPI_WAIT(isend1, istatus, ierr) >>> CALL MPI_WAIT(irecv1, istatus, ierr) >>> >>> end subroutine v_ast_row_copy >>> >>> >>> I copied this subroutine from the RS6000 mpi manual and it used to work. I >>> wonder if this is a MPI or PETSc problem? Strange because I already specify >>> the type to be MPI_REAL8. However changing to MPI_REAL solves the problem. >>> >>> If this is a MPI problem, then you can just ignore it. I'll check it in some >>> MPI forum. >>> >>> >>> Thank you very much and have a nice day! >>> >>> Yours sincerely, >>> >>> Wee-Beng Tay >>> >>> >>> >>> Satish Balay wrote: >>> >>>> Yes - all includes statements in both the sourcefiles should start >>>> with "finclude/..." [so that -Id:\cygwin\codes\petsc-3.0.0-p4 is not >>>> needed] >>>> >>>> And where you needed PETSC_AVOID_DECLARATIONS - you need to use the >>>> 'def.h' equivelent includes.. The def.h files have only the >>>> declarations [so the PETSC_AVOID_DECLARATIONS flag is no longer >>>> needed/used]. You need only the definitions in the datasection of >>>> 'module flux_area'. 
[All subroutines should use the regular includes] >>>> >>>> Satish >>>> >>>> >>>> On Mon, 13 Apr 2009, Wee-Beng TAY wrote: >>>> >>>> >>>> >>>>> Hi Satish, >>>>> >>>>> I now used >>>>> >>>>> #include "finclude/petsc.h" >>>>> #include "finclude/petscvec.h" >>>>> #include "finclude/petscmat.h" >>>>> #include "finclude/petscksp.h" >>>>> #include "finclude/petscpc.h" >>>>> #include "finclude/petscsys.h" >>>>> >>>>> for global.F and >>>>> >>>>> #include "finclude/petscdef.h" >>>>> #include "finclude/petscvecdef.h" >>>>> #include "finclude/petscmatdef.h" >>>>> #include "finclude/petsckspdef.h" >>>>> #include "finclude/petscpcdef.h" >>>>> >>>>> for flux_area.f90 and it's working now. Can you explain what's >>>>> happening? Is >>>>> this the correct way then? >>>>> >>>>> Thank you very much and have a nice day! >>>>> >>>>> Yours sincerely, >>>>> >>>>> Wee-Beng Tay >>>>> >>>>> >>>>> >>>>> Satish Balay wrote: >>>>> >>>>> >>>>>> 2 changes you have to make for 3.0.0 >>>>>> >>>>>> 1. "include/finclude.. -> "finclude..." >>>>>> >>>>>> 2. PETSC_AVOID_DECLARATIONS should be removed - and use petscdef.h >>>>>> equivalnet files. >>>>>> i.e >>>>>> >>>>>> change: >>>>>> #define PETSC_AVOID_DECLARATIONS >>>>>> #include "include/finclude/petsc.h" >>>>>> #include "include/finclude/petscvec.h" >>>>>> #include "include/finclude/petscmat.h" >>>>>> #include "include/finclude/petscksp.h" >>>>>> #include "include/finclude/petscpc.h" >>>>>> #undef PETSC_AVOID_DECLARATIONS >>>>>> >>>>>> to: >>>>>> #include "finclude/petscdef.h" >>>>>> #include "finclude/petscvecdef.h" >>>>>> #include "finclude/petscmatdef.h" >>>>>> #include "finclude/petsckspdef.h" >>>>>> #include "finclude/petscpcdef.h" >>>>>> >>>>>> Satish >>>>>> >>>>>> On Sun, 12 Apr 2009, Wee-Beng TAY wrote: >>>>>> >>>>>> >>>>>> >>>>>>> Hi Satish, >>>>>>> >>>>>>> I am now using the PETSc ex2f example. I tried "make ex2f" and >>>>>>> manage to >>>>>>> build >>>>>>> and run the file. Then I used the options as a reference for my >>>>>>> visual >>>>>>> fortran >>>>>>> and it worked. >>>>>>> >>>>>>> The options are: >>>>>>> >>>>>>> /compile_only /debug:full /include:"Debug/" >>>>>>> /include:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include" >>>>>>> /include:"d:\cygwin\codes\petsc-3.0.0-p4\include" >>>>>>> /include:"E:\cygwin\codes\MPICH\SDK\include" /nologo /threads >>>>>>> /warn:nofileopt >>>>>>> /module:"Debug/" /object:"Debug/" /pdbfile:"Debug/DF60.PDB" >>>>>>> /fpp:"/m" >>>>>>> >>>>>>> and >>>>>>> >>>>>>> ws2_32.lib kernel32.lib user32.lib gdi32.lib winspool.lib >>>>>>> comdlg32.lib >>>>>>> advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib >>>>>>> odbccp32.lib libpetscts.lib libpetscsnes.lib libpetscksp.lib >>>>>>> libpetscdm.lib >>>>>>> libpetscmat.lib libpetscvec.lib libpetsc.lib mpich.lib libfblas.lib >>>>>>> libflapack.lib /nologo /subsystem:console /incremental:yes >>>>>>> /pdb:"Debug/ex2f.pdb" /debug /machine:I386 /out:"Debug/ex2f.exe" >>>>>>> /pdbtype:sept >>>>>>> /libpath:"d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\lib" >>>>>>> /libpath:"E:\cygwin\codes\MPICH\SDK\lib" >>>>>>> >>>>>>> Now I add my own file called global.F and tried to compile, using >>>>>>> the same >>>>>>> options.But now it failed. The error is: >>>>>>> >>>>>>> --------------------Configuration: ex2f - Win32 >>>>>>> Debug-------------------- >>>>>>> Compiling Fortran... 
>>>>>>> ------------------------------------------------------------------------ >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F: >>>>>>> 7: #include "include/finclude/petsc.h" >>>>>>> ^ >>>>>>> ** error on line 7 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified >>>>>>> in include directive. >>>>>>> 8: #include "include/finclude/petscvec.h" >>>>>>> ^ >>>>>>> ** error on line 8 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified >>>>>>> in include directive. >>>>>>> 9: #include "include/finclude/petscmat.h" >>>>>>> ^ >>>>>>> ** error on line 9 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified >>>>>>> in include directive. >>>>>>> 10: #include "include/finclude/petscksp.h" >>>>>>> ^ >>>>>>> ** error on line 10 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 11: #include "include/finclude/petscpc.h" >>>>>>> ^ >>>>>>> ** error on line 11 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 12: #include "include/finclude/petscsys.h" >>>>>>> ^ >>>>>>> ** error on line 12 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 97: #include "include/finclude/petsc.h" >>>>>>> ^ >>>>>>> ** error on line 97 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 98: #include "include/finclude/petscvec.h" >>>>>>> ^ >>>>>>> ** error on line 98 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 99: #include "include/finclude/petscmat.h" >>>>>>> ^ >>>>>>> ** error on line 99 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 100: #include "include/finclude/petscksp.h" >>>>>>> ^ >>>>>>> ** error on line 100 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 101: #include "include/finclude/petscpc.h" >>>>>>> ^ >>>>>>> ** error on line 101 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> 102: #include "include/finclude/petscsys.h" >>>>>>> ^ >>>>>>> ** error on line 102 in D:\cygwin\codes\pets: cannot find file >>>>>>> specified in include directive. >>>>>>> global.i >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(65) >>>>>>> : >>>>>>> Error: >>>>>>> Syntax error, found ',' when expecting one of: ( : % . = => >>>>>>> Vec xx,b_rhs,xx_uv,b_rhs_uv >>>>>>> -----------------^ >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(67) >>>>>>> : >>>>>>> Error: >>>>>>> Syntax error, found ',' when expecting one of: ( : % . = => >>>>>>> Mat A_mat,A_mat_uv ! /* sparse matrix */ >>>>>>> --------------------^ >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(69) >>>>>>> : >>>>>>> Error: >>>>>>> Syntax error, found ',' when expecting one of: ( : % . = => >>>>>>> KSP ksp,ksp_uv !/* linear solver context */ >>>>>>> -----------------^ >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(71) >>>>>>> : >>>>>>> Error: >>>>>>> Syntax error, found ',' when expecting one of: ( : % . = => >>>>>>> PC pc,pc_uv >>>>>>> ------------------^ >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(73) >>>>>>> : >>>>>>> Error: >>>>>>> Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . 
>>>>>>> = => >>>>>>> PCType ptype >>>>>>> -------------------------^ >>>>>>> D:\cygwin\codes\petsc-3.0.0-p4\projects\fortran\ksp\ex2f\global.F(75) >>>>>>> : >>>>>>> Error: >>>>>>> Syntax error, found END-OF-STATEMENT when expecting one of: ( : % . >>>>>>> = => >>>>>>> KSPType ksptype.... >>>>>>> >>>>>>> >>>>>>> I can get it to compile if I use : >>>>>>> >>>>>>> Debug/;d:\cygwin\codes\petsc-3.0.0-p4;d:\cygwin\codes\petsc-3.0.0-p4\include;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;E:\cygwin\codes\MPICH\SDK\include >>>>>>> >>>>>>> Compared to the original one above which is: >>>>>>> >>>>>>> Debug/;d:\cygwin\codes\petsc-3.0.0-p4\win32_mpi_debug\include;d:\cygwin\codes\petsc-3.0.0-p4\include;E:\cygwin\codes\MPICH\SDK\include >>>>>>> >>>>>>> Hence, there is an additional "d:\cygwin\codes\petsc-3.0.0-p4" >>>>>>> >>>>>>> I have attached my global.F. I wonder if this is the cause of the >>>>>>> MPICH >>>>>>> error. >>>>>>> >>>>>>> Currently, I have removed all other f90 files, except for global.F >>>>>>> and >>>>>>> flux_area.f90. It's when I 'm compiling flux_area.f90 that I got the >>>>>>> MPI >>>>>>> error >>>>>>> stated below. I got the same error if I compile under cygwin using >>>>>>> the >>>>>>> same >>>>>>> parameters. >>>>>>> >>>>>>> Hope you can help. >>>>>>> >>>>>>> Thank you very much and have a nice day! >>>>>>> >>>>>>> Yours sincerely, >>>>>>> >>>>>>> Wee-Beng Tay >>>>>>> >>>>>>> >>>>>>> >>>>>>> Satish Balay wrote: >>>>>>> >>>>>>> >>>>>>>> Do you get these errors with PETSc f90 examples? >>>>>>>> >>>>>>>> what 'USE statement' do you have in your code? >>>>>>>> >>>>>>>> I guess you'll have to check your code to see how you are using >>>>>>>> f90 >>>>>>>> modules/includes. >>>>>>>> >>>>>>>> If you can get a minimal compileable code that can reproduce this >>>>>>>> error - send us the code so that we can reproduce the issue >>>>>>>> >>>>>>>> Satish >>>>>>>> >>>>>>>> On Thu, 9 Apr 2009, Wee-Beng TAY wrote: >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> I just built petsc-3.0.0-p4 with mpich and after that, I >>>>>>>>> reinstalled >>>>>>>>> my >>>>>>>>> windows xp and installed mpich in the same directory. I'm using >>>>>>>>> CVF >>>>>>>>> >>>>>>>>> Now, I found that when I'm trying to compile my code, I got the >>>>>>>>> error: >>>>>>>>> >>>>>>>>> :\cygwin\codes\MPICH\SDK\include\mpif.h(105) : Error: The >>>>>>>>> attributes >>>>>>>>> of >>>>>>>>> this >>>>>>>>> name conflict with those made accessible by a USE statement. >>>>>>>>> [MPI_STATUS_SIZE] >>>>>>>>> INTEGER MPI_STATUS_IGNORE(MPI_STATUS_SIZE) >>>>>>>>> --------------------------------^ >>>>>>>>> E:\cygwin\codes\MPICH\SDK\include\mpif.h(106) : Error: The >>>>>>>>> attributes >>>>>>>>> of >>>>>>>>> this >>>>>>>>> name conflict with those made accessible by a USE statement. >>>>>>>>> [MPI_STATUS_SIZE] >>>>>>>>> INTEGER MPI_STATUSES_IGNORE(MPI_STATUS_SIZE) >>>>>>>>> ----------------------------------^ >>>>>>>>> E:\cygwin\codes\petsc-3.0.0-p4\include/finclude/petsc.h(154) : >>>>>>>>> Error: >>>>>>>>> The >>>>>>>>> attributes of this name conflict with those made accessible by a >>>>>>>>> USE >>>>>>>>> statement. [MPI_DOUBLE_PRECISION] >>>>>>>>> parameter(MPIU_SCALAR = MPI_DOUBLE_PRECISION) >>>>>>>>> ------------------------------^ >>>>>>>>> Error executing df.exe. 
>>>>>>>>> >>>>>>>>> flux_area.obj - 3 error(s), 0 warning(s) >>>>>>>>> >>>>>>>>> My include option is : >>>>>>>>> >>>>>>>>> Debug/;$(PETSC_DIR);$(PETSC_DIR)\$(PETSC_ARCH)\;$(PETSC_DIR)\$(PETSC_ARCH)\include;$(PETSC_DIR)\include;E:\cygwin\codes\MPICH\SDK\include >>>>>>>>> >>>>>>>>> >>>>>>>>> Interestingly, when I change my PETSC_DIR to petsc-dev, which >>>>>>>>> correspond >>>>>>>>> to an >>>>>>>>> old build of petsc-2.3.3-p13, there is no problem. >>>>>>>>> >>>>>>>>> May I know what's wrong? Btw, I've converted my mpif.h from >>>>>>>>> using "C" >>>>>>>>> as >>>>>>>>> comments to "!". >>>>>>>>> >>>>>>>>> Thank you very much and have a nice day! >>>>>>>>> >>>>>>>>> Yours sincerely, >>>>>>>>> >>>>>>>>> Wee-Beng Tay >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>> >>>> > > > From balay at mcs.anl.gov Mon Apr 13 20:02:03 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 13 Apr 2009 20:02:03 -0500 (CDT) Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: <49E3DE72.8040402@gmail.com> References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> <49E210AC.6000502@gmail.com> <49E3641E.5080304@gmail.com> <49E3DE72.8040402@gmail.com> Message-ID: (--with-scalar-type=[real,complex], --with-precision=[single,double]) Depending upon how petsc is configured - PetscScalar can be either real4,real8,complex8. With most common usage - [i.e. default build] PetscScalar is real8. Earlier you claimed: MPI_ISEND(v_ast) worked with MPI_REAL but not MPI_REAL8. This suggested that v_ast was declared as real4 - not real8. Satish On Tue, 14 Apr 2009, Wee-Beng TAY wrote: > Hi, > > So when must I use PetscScalar, and when can I just use real(8)? This is > because most of my variables are in real(8), except for the Mat, Vec, required > in the solving of the linear eqn. Is there a rule? I initially thought they > are the same and if I remember correctly, it seems to work fine in PETSc2.3.3. > > My v_ast is now defined as: > > PetscScalar, allocatable :: u_ast(:,:), v_ast(:,:) > > allocate (u_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(1));allocate > (v_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(2)) > > if (status(1)/=0 .or. status(2)/=0) STOP "Cannot allocate memory" > > Thank you very much and have a nice day! > > Yours sincerely, > > Wee-Beng Tay > > > > Satish Balay wrote: > > And if v_ast() is used with PETSc - then it should be defined 'PetscScalar' > > > > and then you use MPI_ISEND(....,MPIU_SCALAR,...) > > > > Satish > > > > > > On Mon, 13 Apr 2009, Barry Smith wrote: > > > > > > > Where is your implicit none that all sane programmers begin their > > > Fortran > > > subroutines with? > > > > > > > > > On Apr 13, 2009, at 11:11 AM, Wee-Beng TAY wrote: > > > > > > > > > > Hi Satish, > > > > > > > > Compiling and building now worked without error. However, when I run, I > > > > get > > > > the error: > > > > > > > > 0 - MPI_ISEND : Datatype is MPI_TYPE_NULL > > > > [0] Aborting program ! > > > > [0] Aborting program!
> > > > Error 323, process 0, host GOTCHAMA-E73BB3: > > > > > > > > The problem lies in this subroutine: > > > > > > > > subroutine v_ast_row_copy > > > > > > > > #include "finclude/petsc.h" > > > > #include "finclude/petscvec.h" > > > > #include "finclude/petscmat.h" > > > > #include "finclude/petscksp.h" > > > > #include "finclude/petscpc.h" > > > > #include "finclude/petscsys.h" > > > > > > > > !to copy data of jend row to others > > > > > > > > integer :: inext,iprev,istatus(MPI_STATUS_SIZE),irecv1,ierr,isend1 > > > > > > > > inext = myid + 1 > > > > iprev = myid - 1 > > > > > > > > if (myid == num_procs - 1) inext = MPI_PROC_NULL > > > > > > > > if (myid == 0) iprev = MPI_PROC_NULL > > > > > > > > CALL > > > > MPI_ISEND(v_ast(1,jend),size_x,MPI_REAL8,inext,1,MPI_COMM_WORLD,isend1,ierr) > > > > CALL > > > > MPI_IRECV(v_ast(1,jsta-1),size_x,MPI_REAL8,iprev,1,MPI_COMM_WORLD,irecv1,ierr) > > > > CALL MPI_WAIT(isend1, istatus, ierr) > > > > CALL MPI_WAIT(irecv1, istatus, ierr) > > > > > > > > end subroutine v_ast_row_copy > > > > > > > > > > > > I copied this subroutine from the RS6000 mpi manual and it used to work. > > > > I > > > > wonder if this is a MPI or PETSc problem? Strange because I already > > > > specify > > > > the type to be MPI_REAL8. However changing to MPI_REAL solves the > > > > problem. > > > > > > > > If this is a MPI problem, then you can just ignore it. I'll check it in > > > > some > > > > MPI forum. > > > > > > > > > > > > Thank you very much and have a nice day! > > > > > > > > Yours sincerely, > > > > > > > > Wee-Beng Tay From zonexo at gmail.com Mon Apr 13 20:19:50 2009 From: zonexo at gmail.com (Wee-Beng TAY) Date: Tue, 14 Apr 2009 09:19:50 +0800 Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> <49E210AC.6000502@gmail.com> <49E3641E.5080304@gmail.com> <49E3DE72.8040402@gmail.com> Message-ID: <49E3E4B6.2030308@gmail.com> Hi Satish, That's strange. This is because I initially declared u_ast as real(8), allocatable :: u_ast(:,:) I also have the -r8 option enabled. Any idea what is going on? Thank you very much and have a nice day! Yours sincerely, Wee-Beng Tay Satish Balay wrote: > (--with-scalar-type=[real,complex], --with-precision=[single,double]) > > Depending upon how petsc is configured - PetscScalar can be either > real4,real8,complex8. With most common usage - [i.e default build] > PetscScalar is real8. > > Earlier you claimed: MPI_ISEND(v_ast) worked with MPI_REAL but not > MPI_REAL8. This sugested that v_ast was declared as real4 - not real8. > > Satish > > On Tue, 14 Apr 2009, Wee-Beng TAY wrote: > > >> Hi, >> >> So when must I use PetscScalar, and when can I just use real(8)? This is >> because most of my variables are in real(8), except for the Mat, Vec, required >> in the solving of the linear eqn. Is there a rule? I initially thought they >> are the same and if I remember correctly, it seems to work fine in PETSc2.3.3. >> >> My v_ast is now defined as: >> >> PetscScalar, allocatable :: u_ast(:,:), v_ast(:,:) >> >> allocate (u_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(1));allocate >> (v_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(2)) >> >> if (status(1)/=0 .or. status(2)/=0) STOP "Cannot allocate memory" >> >> Thank you very much and have a nice day! 
>> >> Yours sincerely, >> >> Wee-Beng Tay >> >> >> >> Satish Balay wrote: >> >>> And if v_ast() is used with PETSc - then it should be defined 'PetscScalar' >>> >>> and then you use MPI_ISEND(....,MPIU_SCALAR,...) >>> >>> Satish >>> >>> >>> On Mon, 13 Apr 2009, Barry Smith wrote: >>> >>> >>> >>>> Where is your implicit none that all sane programmers begin their >>>> Fortran >>>> subroutines with? >>>> >>>> >>>> On Apr 13, 2009, at 11:11 AM, Wee-Beng TAY wrote: >>>> >>>> >>>> >>>>> Hi Satish, >>>>> >>>>> Compiling and building now worked without error. However, when I run, I >>>>> get >>>>> the error: >>>>> >>>>> 0 - MPI_ISEND : Datatype is MPI_TYPE_NULL >>>>> [0] Aborting program ! >>>>> [0] Aborting program! >>>>> Error 323, process 0, host GOTCHAMA-E73BB3: >>>>> >>>>> The problem lies in this subroutine: >>>>> >>>>> subroutine v_ast_row_copy >>>>> >>>>> #include "finclude/petsc.h" >>>>> #include "finclude/petscvec.h" >>>>> #include "finclude/petscmat.h" >>>>> #include "finclude/petscksp.h" >>>>> #include "finclude/petscpc.h" >>>>> #include "finclude/petscsys.h" >>>>> >>>>> !to copy data of jend row to others >>>>> >>>>> integer :: inext,iprev,istatus(MPI_STATUS_SIZE),irecv1,ierr,isend1 >>>>> >>>>> inext = myid + 1 >>>>> iprev = myid - 1 >>>>> >>>>> if (myid == num_procs - 1) inext = MPI_PROC_NULL >>>>> >>>>> if (myid == 0) iprev = MPI_PROC_NULL >>>>> >>>>> CALL >>>>> MPI_ISEND(v_ast(1,jend),size_x,MPI_REAL8,inext,1,MPI_COMM_WORLD,isend1,ierr) >>>>> CALL >>>>> MPI_IRECV(v_ast(1,jsta-1),size_x,MPI_REAL8,iprev,1,MPI_COMM_WORLD,irecv1,ierr) >>>>> CALL MPI_WAIT(isend1, istatus, ierr) >>>>> CALL MPI_WAIT(irecv1, istatus, ierr) >>>>> >>>>> end subroutine v_ast_row_copy >>>>> >>>>> >>>>> I copied this subroutine from the RS6000 mpi manual and it used to work. >>>>> I >>>>> wonder if this is a MPI or PETSc problem? Strange because I already >>>>> specify >>>>> the type to be MPI_REAL8. However changing to MPI_REAL solves the >>>>> problem. >>>>> >>>>> If this is a MPI problem, then you can just ignore it. I'll check it in >>>>> some >>>>> MPI forum. >>>>> >>>>> >>>>> Thank you very much and have a nice day! >>>>> >>>>> Yours sincerely, >>>>> >>>>> Wee-Beng Tay >>>>> > > From zonexo at gmail.com Mon Apr 13 20:24:59 2009 From: zonexo at gmail.com (Wee-Beng TAY) Date: Tue, 14 Apr 2009 09:24:59 +0800 Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> <49E210AC.6000502@gmail.com> <49E3641E.5080304@gmail.com> <49E3DE72.8040402@gmail.com> Message-ID: <49E3E5EB.7050403@gmail.com> Hi Satish, Moreover, I found that as long as I use MPIU_SCALAR instead of MPI_REAL8, there's no problem. Btw, I'm using MPICH 1.25. Thank you very much and have a nice day! Yours sincerely, Wee-Beng Tay Satish Balay wrote: > (--with-scalar-type=[real,complex], --with-precision=[single,double]) > > Depending upon how petsc is configured - PetscScalar can be either > real4,real8,complex8. With most common usage - [i.e default build] > PetscScalar is real8. > > Earlier you claimed: MPI_ISEND(v_ast) worked with MPI_REAL but not > MPI_REAL8. This sugested that v_ast was declared as real4 - not real8. > > Satish > > On Tue, 14 Apr 2009, Wee-Beng TAY wrote: > > >> Hi, >> >> So when must I use PetscScalar, and when can I just use real(8)? 
This is >> because most of my variables are in real(8), except for the Mat, Vec, required >> in the solving of the linear eqn. Is there a rule? I initially thought they >> are the same and if I remember correctly, it seems to work fine in PETSc2.3.3. >> >> My v_ast is now defined as: >> >> PetscScalar, allocatable :: u_ast(:,:), v_ast(:,:) >> >> allocate (u_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(1));allocate >> (v_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(2)) >> >> if (status(1)/=0 .or. status(2)/=0) STOP "Cannot allocate memory" >> >> Thank you very much and have a nice day! >> >> Yours sincerely, >> >> Wee-Beng Tay >> >> >> >> Satish Balay wrote: >> >>> And if v_ast() is used with PETSc - then it should be defined 'PetscScalar' >>> >>> and then you use MPI_ISEND(....,MPIU_SCALAR,...) >>> >>> Satish >>> >>> >>> On Mon, 13 Apr 2009, Barry Smith wrote: >>> >>> >>> >>>> Where is your implicit none that all sane programmers begin their >>>> Fortran >>>> subroutines with? >>>> >>>> >>>> On Apr 13, 2009, at 11:11 AM, Wee-Beng TAY wrote: >>>> >>>> >>>> >>>>> Hi Satish, >>>>> >>>>> Compiling and building now worked without error. However, when I run, I >>>>> get >>>>> the error: >>>>> >>>>> 0 - MPI_ISEND : Datatype is MPI_TYPE_NULL >>>>> [0] Aborting program ! >>>>> [0] Aborting program! >>>>> Error 323, process 0, host GOTCHAMA-E73BB3: >>>>> >>>>> The problem lies in this subroutine: >>>>> >>>>> subroutine v_ast_row_copy >>>>> >>>>> #include "finclude/petsc.h" >>>>> #include "finclude/petscvec.h" >>>>> #include "finclude/petscmat.h" >>>>> #include "finclude/petscksp.h" >>>>> #include "finclude/petscpc.h" >>>>> #include "finclude/petscsys.h" >>>>> >>>>> !to copy data of jend row to others >>>>> >>>>> integer :: inext,iprev,istatus(MPI_STATUS_SIZE),irecv1,ierr,isend1 >>>>> >>>>> inext = myid + 1 >>>>> iprev = myid - 1 >>>>> >>>>> if (myid == num_procs - 1) inext = MPI_PROC_NULL >>>>> >>>>> if (myid == 0) iprev = MPI_PROC_NULL >>>>> >>>>> CALL >>>>> MPI_ISEND(v_ast(1,jend),size_x,MPI_REAL8,inext,1,MPI_COMM_WORLD,isend1,ierr) >>>>> CALL >>>>> MPI_IRECV(v_ast(1,jsta-1),size_x,MPI_REAL8,iprev,1,MPI_COMM_WORLD,irecv1,ierr) >>>>> CALL MPI_WAIT(isend1, istatus, ierr) >>>>> CALL MPI_WAIT(irecv1, istatus, ierr) >>>>> >>>>> end subroutine v_ast_row_copy >>>>> >>>>> >>>>> I copied this subroutine from the RS6000 mpi manual and it used to work. >>>>> I >>>>> wonder if this is a MPI or PETSc problem? Strange because I already >>>>> specify >>>>> the type to be MPI_REAL8. However changing to MPI_REAL solves the >>>>> problem. >>>>> >>>>> If this is a MPI problem, then you can just ignore it. I'll check it in >>>>> some >>>>> MPI forum. >>>>> >>>>> >>>>> Thank you very much and have a nice day! 
>>>>> >>>>> Yours sincerely, >>>>> >>>>> Wee-Beng Tay >>>>> > > From bsmith at mcs.anl.gov Mon Apr 13 20:25:46 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 13 Apr 2009 20:25:46 -0500 Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: <49E3E4B6.2030308@gmail.com> References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> <49E210AC.6000502@gmail.com> <49E3641E.5080304@gmail.com> <49E3DE72.8040402@gmail.com> <49E3E4B6.2030308@gmail.com> Message-ID: <08932EC1-F8EF-4CFA-B725-C5391E19F451@mcs.anl.gov> The -r8 option is a dangerous beast, we highly recommend PETSc users (and everyone else) NOT use this option. If you use PetscScalar then you don't need the -r option. Barry BTW: are you sure that an implicit none at the top of a module applies to each subroutine inside the module? On Apr 13, 2009, at 8:19 PM, Wee-Beng TAY wrote: > Hi Satish, > > That's strange. This is because I initially declared u_ast as > > real(8), allocatable :: u_ast(:,:) > > I also have the -r8 option enabled. Any idea what is going on? > > Thank you very much and have a nice day! > > Yours sincerely, > > Wee-Beng Tay > > > > Satish Balay wrote: >> (--with-scalar-type=[real,complex], --with-precision=[single,double]) >> >> Depending upon how petsc is configured - PetscScalar can be either >> real4,real8,complex8. With most common usage - [i.e default build] >> PetscScalar is real8. >> >> Earlier you claimed: MPI_ISEND(v_ast) worked with MPI_REAL but not >> MPI_REAL8. This sugested that v_ast was declared as real4 - not >> real8. >> >> Satish >> >> On Tue, 14 Apr 2009, Wee-Beng TAY wrote: >> >> >>> Hi, >>> >>> So when must I use PetscScalar, and when can I just use real(8)? >>> This is >>> because most of my variables are in real(8), except for the Mat, >>> Vec, required >>> in the solving of the linear eqn. Is there a rule? I initially >>> thought they >>> are the same and if I remember correctly, it seems to work fine in >>> PETSc2.3.3. >>> >>> My v_ast is now defined as: >>> >>> PetscScalar, allocatable :: u_ast(:,:), v_ast(:,:) >>> >>> allocate (u_ast(0:size_x+1,jsta_ext:jend_ext), >>> STAT=status(1));allocate >>> (v_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(2)) >>> >>> if (status(1)/=0 .or. status(2)/=0) STOP "Cannot allocate memory" >>> >>> Thank you very much and have a nice day! >>> >>> Yours sincerely, >>> >>> Wee-Beng Tay >>> >>> >>> >>> Satish Balay wrote: >>> >>>> And if v_ast() is used with PETSc - then it should be defined >>>> 'PetscScalar' >>>> >>>> and then you use MPI_ISEND(....,MPIU_SCALAR,...) >>>> >>>> Satish >>>> >>>> >>>> On Mon, 13 Apr 2009, Barry Smith wrote: >>>> >>>> >>>>> Where is your implicit none that all sane programmers begin their >>>>> Fortran >>>>> subroutines with? >>>>> >>>>> >>>>> On Apr 13, 2009, at 11:11 AM, Wee-Beng TAY wrote: >>>>> >>>>> >>>>>> Hi Satish, >>>>>> >>>>>> Compiling and building now worked without error. However, when >>>>>> I run, I >>>>>> get >>>>>> the error: >>>>>> >>>>>> 0 - MPI_ISEND : Datatype is MPI_TYPE_NULL >>>>>> [0] Aborting program ! >>>>>> [0] Aborting program! 
>>>>>> Error 323, process 0, host GOTCHAMA-E73BB3: >>>>>> >>>>>> The problem lies in this subroutine: >>>>>> >>>>>> subroutine v_ast_row_copy >>>>>> >>>>>> #include "finclude/petsc.h" >>>>>> #include "finclude/petscvec.h" >>>>>> #include "finclude/petscmat.h" >>>>>> #include "finclude/petscksp.h" >>>>>> #include "finclude/petscpc.h" >>>>>> #include "finclude/petscsys.h" >>>>>> >>>>>> !to copy data of jend row to others >>>>>> >>>>>> integer :: >>>>>> inext,iprev,istatus(MPI_STATUS_SIZE),irecv1,ierr,isend1 >>>>>> >>>>>> inext = myid + 1 >>>>>> iprev = myid - 1 >>>>>> >>>>>> if (myid == num_procs - 1) inext = MPI_PROC_NULL >>>>>> >>>>>> if (myid == 0) iprev = MPI_PROC_NULL >>>>>> >>>>>> CALL >>>>>> MPI_ISEND(v_ast(1,jend),size_x,MPI_REAL8,inext, >>>>>> 1,MPI_COMM_WORLD,isend1,ierr) >>>>>> CALL >>>>>> MPI_IRECV(v_ast(1,jsta-1),size_x,MPI_REAL8,iprev, >>>>>> 1,MPI_COMM_WORLD,irecv1,ierr) >>>>>> CALL MPI_WAIT(isend1, istatus, ierr) >>>>>> CALL MPI_WAIT(irecv1, istatus, ierr) >>>>>> >>>>>> end subroutine v_ast_row_copy >>>>>> >>>>>> >>>>>> I copied this subroutine from the RS6000 mpi manual and it used >>>>>> to work. >>>>>> I >>>>>> wonder if this is a MPI or PETSc problem? Strange because I >>>>>> already >>>>>> specify >>>>>> the type to be MPI_REAL8. However changing to MPI_REAL solves the >>>>>> problem. >>>>>> >>>>>> If this is a MPI problem, then you can just ignore it. I'll >>>>>> check it in >>>>>> some >>>>>> MPI forum. >>>>>> >>>>>> >>>>>> Thank you very much and have a nice day! >>>>>> >>>>>> Yours sincerely, >>>>>> >>>>>> Wee-Beng Tay >>>>>> >> >> From balay at mcs.anl.gov Mon Apr 13 20:29:54 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 13 Apr 2009 20:29:54 -0500 (CDT) Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: <49E3E5EB.7050403@gmail.com> References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> <49E210AC.6000502@gmail.com> <49E3641E.5080304@gmail.com> <49E3DE72.8040402@gmail.com> <49E3E5EB.7050403@gmail.com> Message-ID: MPIU_SCALAR is defined to be MPI_DOUBLE_PRECISION I guess MPI_REAL/MPI_REAL8 behavior is affected by -r8. Satish On Tue, 14 Apr 2009, Wee-Beng TAY wrote: > Hi Satish, > > Moreover, I found that as long as I use MPIU_SCALAR instead of MPI_REAL8, > there's no problem. Btw, I'm using MPICH 1.25. > > Thank you very much and have a nice day! > > Yours sincerely, > > Wee-Beng Tay > > > > Satish Balay wrote: > > (--with-scalar-type=[real,complex], --with-precision=[single,double]) > > > > Depending upon how petsc is configured - PetscScalar can be either > > real4,real8,complex8. With most common usage - [i.e default build] > > PetscScalar is real8. > > > > Earlier you claimed: MPI_ISEND(v_ast) worked with MPI_REAL but not > > MPI_REAL8. This sugested that v_ast was declared as real4 - not real8. > > > > Satish > > > > On Tue, 14 Apr 2009, Wee-Beng TAY wrote: > > > > > > > Hi, > > > > > > So when must I use PetscScalar, and when can I just use real(8)? This is > > > because most of my variables are in real(8), except for the Mat, Vec, > > > required > > > in the solving of the linear eqn. Is there a rule? I initially thought > > > they > > > are the same and if I remember correctly, it seems to work fine in > > > PETSc2.3.3. 
> > > > > > My v_ast is now defined as: > > > > > > PetscScalar, allocatable :: u_ast(:,:), v_ast(:,:) > > > > > > allocate (u_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(1));allocate > > > (v_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(2)) > > > > > > if (status(1)/=0 .or. status(2)/=0) STOP "Cannot allocate memory" > > > > > > Thank you very much and have a nice day! > > > > > > Yours sincerely, > > > > > > Wee-Beng Tay > > > > > > > > > > > > Satish Balay wrote: > > > > > > > And if v_ast() is used with PETSc - then it should be defined > > > > 'PetscScalar' > > > > > > > > and then you use MPI_ISEND(....,MPIU_SCALAR,...) > > > > > > > > Satish > > > > > > > > > > > > On Mon, 13 Apr 2009, Barry Smith wrote: > > > > > > > > > > > > > Where is your implicit none that all sane programmers begin their > > > > > Fortran > > > > > subroutines with? > > > > > > > > > > > > > > > On Apr 13, 2009, at 11:11 AM, Wee-Beng TAY wrote: > > > > > > > > > > > > > > > > Hi Satish, > > > > > > > > > > > > Compiling and building now worked without error. However, when I > > > > > > run, I > > > > > > get > > > > > > the error: > > > > > > > > > > > > 0 - MPI_ISEND : Datatype is MPI_TYPE_NULL > > > > > > [0] Aborting program ! > > > > > > [0] Aborting program! > > > > > > Error 323, process 0, host GOTCHAMA-E73BB3: > > > > > > > > > > > > The problem lies in this subroutine: > > > > > > > > > > > > subroutine v_ast_row_copy > > > > > > > > > > > > #include "finclude/petsc.h" > > > > > > #include "finclude/petscvec.h" > > > > > > #include "finclude/petscmat.h" > > > > > > #include "finclude/petscksp.h" > > > > > > #include "finclude/petscpc.h" > > > > > > #include "finclude/petscsys.h" > > > > > > > > > > > > !to copy data of jend row to others > > > > > > > > > > > > integer :: inext,iprev,istatus(MPI_STATUS_SIZE),irecv1,ierr,isend1 > > > > > > > > > > > > inext = myid + 1 > > > > > > iprev = myid - 1 > > > > > > > > > > > > if (myid == num_procs - 1) inext = MPI_PROC_NULL > > > > > > > > > > > > if (myid == 0) iprev = MPI_PROC_NULL > > > > > > > > > > > > CALL > > > > > > MPI_ISEND(v_ast(1,jend),size_x,MPI_REAL8,inext,1,MPI_COMM_WORLD,isend1,ierr) > > > > > > CALL > > > > > > MPI_IRECV(v_ast(1,jsta-1),size_x,MPI_REAL8,iprev,1,MPI_COMM_WORLD,irecv1,ierr) > > > > > > CALL MPI_WAIT(isend1, istatus, ierr) > > > > > > CALL MPI_WAIT(irecv1, istatus, ierr) > > > > > > > > > > > > end subroutine v_ast_row_copy > > > > > > > > > > > > > > > > > > I copied this subroutine from the RS6000 mpi manual and it used to > > > > > > work. > > > > > > I > > > > > > wonder if this is a MPI or PETSc problem? Strange because I already > > > > > > specify > > > > > > the type to be MPI_REAL8. However changing to MPI_REAL solves the > > > > > > problem. > > > > > > > > > > > > If this is a MPI problem, then you can just ignore it. I'll check it > > > > > > in > > > > > > some > > > > > > MPI forum. > > > > > > > > > > > > > > > > > > Thank you very much and have a nice day! 
> > > > > > > > > > > > Yours sincerely, > > > > > > > > > > > > Wee-Beng Tay > > > > > > > > > > From zonexo at gmail.com Mon Apr 13 20:49:59 2009 From: zonexo at gmail.com (Wee-Beng TAY) Date: Tue, 14 Apr 2009 09:49:59 +0800 Subject: MPICH error when using petsc-3.0.0-p4 In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> <49E210AC.6000502@gmail.com> <49E3641E.5080304@gmail.com> <49E3DE72.8040402@gmail.com> <49E3E5EB.7050403@gmail.com> Message-ID: <49E3EBC7.2030305@gmail.com> Hi Satish, I now changed to MPI_DOUBLE_PRECISION and it worked. Removing -r8 option does not seem to have any effect. Btw, Barry, I tried leaving some variables undefined and the CVF does not allow the file to be compiled, saying that the variables are not explicitly defined. So I believe having an "implicit none" on the top should be enough. Thank you very much and have a nice day! Yours sincerely, Wee-Beng Tay Satish Balay wrote: > MPIU_SCALAR is defined to be MPI_DOUBLE_PRECISION > > I guess MPI_REAL/MPI_REAL8 behavior is affected by -r8. > > Satish > > On Tue, 14 Apr 2009, Wee-Beng TAY wrote: > > >> Hi Satish, >> >> Moreover, I found that as long as I use MPIU_SCALAR instead of MPI_REAL8, >> there's no problem. Btw, I'm using MPICH 1.25. >> >> Thank you very much and have a nice day! >> >> Yours sincerely, >> >> Wee-Beng Tay >> >> >> >> Satish Balay wrote: >> >>> (--with-scalar-type=[real,complex], --with-precision=[single,double]) >>> >>> Depending upon how petsc is configured - PetscScalar can be either >>> real4,real8,complex8. With most common usage - [i.e default build] >>> PetscScalar is real8. >>> >>> Earlier you claimed: MPI_ISEND(v_ast) worked with MPI_REAL but not >>> MPI_REAL8. This sugested that v_ast was declared as real4 - not real8. >>> >>> Satish >>> >>> On Tue, 14 Apr 2009, Wee-Beng TAY wrote: >>> >>> >>> >>>> Hi, >>>> >>>> So when must I use PetscScalar, and when can I just use real(8)? This is >>>> because most of my variables are in real(8), except for the Mat, Vec, >>>> required >>>> in the solving of the linear eqn. Is there a rule? I initially thought >>>> they >>>> are the same and if I remember correctly, it seems to work fine in >>>> PETSc2.3.3. >>>> >>>> My v_ast is now defined as: >>>> >>>> PetscScalar, allocatable :: u_ast(:,:), v_ast(:,:) >>>> >>>> allocate (u_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(1));allocate >>>> (v_ast(0:size_x+1,jsta_ext:jend_ext), STAT=status(2)) >>>> >>>> if (status(1)/=0 .or. status(2)/=0) STOP "Cannot allocate memory" >>>> >>>> Thank you very much and have a nice day! >>>> >>>> Yours sincerely, >>>> >>>> Wee-Beng Tay >>>> >>>> >>>> >>>> Satish Balay wrote: >>>> >>>> >>>>> And if v_ast() is used with PETSc - then it should be defined >>>>> 'PetscScalar' >>>>> >>>>> and then you use MPI_ISEND(....,MPIU_SCALAR,...) >>>>> >>>>> Satish >>>>> >>>>> >>>>> On Mon, 13 Apr 2009, Barry Smith wrote: >>>>> >>>>> >>>>> >>>>>> Where is your implicit none that all sane programmers begin their >>>>>> Fortran >>>>>> subroutines with? >>>>>> >>>>>> >>>>>> On Apr 13, 2009, at 11:11 AM, Wee-Beng TAY wrote: >>>>>> >>>>>> >>>>>> >>>>>>> Hi Satish, >>>>>>> >>>>>>> Compiling and building now worked without error. However, when I >>>>>>> run, I >>>>>>> get >>>>>>> the error: >>>>>>> >>>>>>> 0 - MPI_ISEND : Datatype is MPI_TYPE_NULL >>>>>>> [0] Aborting program ! 
>>>>>>> [0] Aborting program! >>>>>>> Error 323, process 0, host GOTCHAMA-E73BB3: >>>>>>> >>>>>>> The problem lies in this subroutine: >>>>>>> >>>>>>> subroutine v_ast_row_copy >>>>>>> >>>>>>> #include "finclude/petsc.h" >>>>>>> #include "finclude/petscvec.h" >>>>>>> #include "finclude/petscmat.h" >>>>>>> #include "finclude/petscksp.h" >>>>>>> #include "finclude/petscpc.h" >>>>>>> #include "finclude/petscsys.h" >>>>>>> >>>>>>> !to copy data of jend row to others >>>>>>> >>>>>>> integer :: inext,iprev,istatus(MPI_STATUS_SIZE),irecv1,ierr,isend1 >>>>>>> >>>>>>> inext = myid + 1 >>>>>>> iprev = myid - 1 >>>>>>> >>>>>>> if (myid == num_procs - 1) inext = MPI_PROC_NULL >>>>>>> >>>>>>> if (myid == 0) iprev = MPI_PROC_NULL >>>>>>> >>>>>>> CALL >>>>>>> MPI_ISEND(v_ast(1,jend),size_x,MPI_REAL8,inext,1,MPI_COMM_WORLD,isend1,ierr) >>>>>>> CALL >>>>>>> MPI_IRECV(v_ast(1,jsta-1),size_x,MPI_REAL8,iprev,1,MPI_COMM_WORLD,irecv1,ierr) >>>>>>> CALL MPI_WAIT(isend1, istatus, ierr) >>>>>>> CALL MPI_WAIT(irecv1, istatus, ierr) >>>>>>> >>>>>>> end subroutine v_ast_row_copy >>>>>>> >>>>>>> >>>>>>> I copied this subroutine from the RS6000 mpi manual and it used to >>>>>>> work. >>>>>>> I >>>>>>> wonder if this is a MPI or PETSc problem? Strange because I already >>>>>>> specify >>>>>>> the type to be MPI_REAL8. However changing to MPI_REAL solves the >>>>>>> problem. >>>>>>> >>>>>>> If this is a MPI problem, then you can just ignore it. I'll check it >>>>>>> in >>>>>>> some >>>>>>> MPI forum. >>>>>>> >>>>>>> >>>>>>> Thank you very much and have a nice day! >>>>>>> >>>>>>> Yours sincerely, >>>>>>> >>>>>>> Wee-Beng Tay >>>>>>> >>>>>>> >>> >>> > > > From zonexo at gmail.com Mon Apr 13 22:50:47 2009 From: zonexo at gmail.com (Wee-Beng TAY) Date: Tue, 14 Apr 2009 11:50:47 +0800 Subject: Use of MatCreateMPIAIJ and VecCreateMPI when ghost cells are present In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <7ff0ee010904031102q404ade3djeaec8a74bb73fab0@mail.gmail.com> <7ff0ee010904051401q3b3ff0d5yd832688de1b097d1@mail.gmail.com> <49DDA8F1.8090401@gmail.com> <49E1CD32.5090602@gmail.com> <49E210AC.6000502@gmail.com> <49E3641E.5080304@gmail.com> <49E3DE72.8040402@gmail.com> <49E3E5EB.7050403@gmail.com> Message-ID: <49E40817.6060806@gmail.com> Hi, In the past, I did not use ghost cells. Hence, for e.g., on a grid 8x8, I can divide into 8x2 each for 4 processors i.e. divide the y direction because in my computation, usually y no. of cells > x no. of cells. this will minimize the exchange of values. Now, with ghost cells, it has changed from x,y=1 to 8 to 0 to 9, i.e., the grid is now 10x10 hence to divide to 4 processors, it will not be an integer because 10/4 is not an interger. I'm thinking of using 6x6 grid, and including ghost cells becomes 8x8. Is this the right/best way? Thank you very much and have a nice day! Yours sincerely, Wee-Beng Tay > From knepley at gmail.com Tue Apr 14 06:21:38 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 14 Apr 2009 06:21:38 -0500 Subject: Use of MatCreateMPIAIJ and VecCreateMPI when ghost cells are present In-Reply-To: <49E40817.6060806@gmail.com> References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <49E3641E.5080304@gmail.com> <49E3DE72.8040402@gmail.com> <49E3E5EB.7050403@gmail.com> <49E40817.6060806@gmail.com> Message-ID: On Mon, Apr 13, 2009 at 10:50 PM, Wee-Beng TAY wrote: > Hi, > > In the past, I did not use ghost cells. Hence, for e.g., on a grid 8x8, I > can divide into 8x2 each for 4 processors i.e. 
divide the y direction > because in my computation, usually y no. of cells > x no. of cells. this > will minimize the exchange of values. > > Now, with ghost cells, it has changed from x,y=1 to 8 to 0 to 9, i.e., the > grid is now 10x10 hence to divide to 4 processors, it will not be an integer > because 10/4 is not an interger. I'm thinking of using 6x6 grid, and > including ghost cells becomes 8x8. Is this the right/best way? > > Thank you very much and have a nice day! Usually it is not crucial to divide the grid into exactly equal parts since the number of elements is large. Matt > > Yours sincerely, > > Wee-Beng Tay > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From zonexo at gmail.com Tue Apr 14 09:17:00 2009 From: zonexo at gmail.com (Wee-Beng TAY) Date: Tue, 14 Apr 2009 22:17:00 +0800 Subject: Use of MatCreateMPIAIJ and VecCreateMPI when ghost cells are present In-Reply-To: References: <7ff0ee010904021919q3f3faa0cn9fb286c8db645421@mail.gmail.com> <49E3641E.5080304@gmail.com> <49E3DE72.8040402@gmail.com> <49E3E5EB.7050403@gmail.com> <49E40817.6060806@gmail.com> Message-ID: <49E49ADC.2050407@gmail.com> Hi Matthew, Supposed for my 8x8 grid, there's ghost cells on the edge, hence changing to 0->9 x 0->9 no. of grids. I divide them along the y direction such that for 4 processors, myid=0 y=0 to 2 myid=1 y=3 to 4 myid=2 y=5 to 6 myid=3 y=7 to 9 therefore, for myid = 0 and 3, i'll have slightly more cells. however, for VecCreateMPI, if i use: call VecCreateMPI(MPI_COMM_WORLD,PETSC_DECIDE,size_x*size_y,b_rhs,ierr) should size_x = size_y = 8? Different myid also has different no. of cells. however the PETSC_DECIDE is the same for all processors. Will that cause error? Thank you very much and have a nice day! Yours sincerely, Wee-Beng Tay Matthew Knepley wrote: > On Mon, Apr 13, 2009 at 10:50 PM, Wee-Beng TAY > wrote: > > Hi, > > In the past, I did not use ghost cells. Hence, for e.g., on a grid > 8x8, I can divide into 8x2 each for 4 processors i.e. divide the y > direction because in my computation, usually y no. of cells > x > no. of cells. this will minimize the exchange of values. > > Now, with ghost cells, it has changed from x,y=1 to 8 to 0 to 9, > i.e., the grid is now 10x10 hence to divide to 4 processors, it > will not be an integer because 10/4 is not an interger. I'm > thinking of using 6x6 grid, and including ghost cells becomes 8x8. > Is this the right/best way? > > Thank you very much and have a nice day! > > > Usually it is not crucial to divide the grid into exactly equal parts > since the number of elements is large. > > Matt > > > > Yours sincerely, > > Wee-Beng Tay > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. 
> -- Norbert Wiener From jed at 59A2.org Tue Apr 14 09:31:10 2009 From: jed at 59A2.org (Jed Brown) Date: Tue, 14 Apr 2009 16:31:10 +0200 Subject: Use of MatCreateMPIAIJ and VecCreateMPI when ghost cells are present In-Reply-To: <49E49ADC.2050407@gmail.com> References: <49E3641E.5080304@gmail.com> <49E3DE72.8040402@gmail.com> <49E3E5EB.7050403@gmail.com> <49E40817.6060806@gmail.com> <49E49ADC.2050407@gmail.com> Message-ID: <20090414143110.GA2744@brakk.ethz.ch> On Tue 2009-04-14 22:17, Wee-Beng TAY wrote: > Supposed for my 8x8 grid, there's ghost cells on the edge, hence > changing to 0->9 x 0->9 no. of grids. It is very important to distinguish between the global vector which the solver sees and local vectors which often have ghost values. The global vector should have exactly one entry per global degree of freedom---no ghost entries. It sounds like you are using a structured grid so you probably want to look at the DA object which manages the global vectors and local vectors with ghosting. If you insist on doing it yourself, you should partition the domain so that each node is owned by exactly one process. Create a global vector based on this decomposition, it is what the solver will see. Then create a (sequential) local vector on each process that includes the ghost points. Now create a global-to-local scatter to update the local vector. Jed -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From yongcheng.zhou at gmail.com Tue Apr 14 13:04:53 2009 From: yongcheng.zhou at gmail.com (Yongcheng Zhou) Date: Tue, 14 Apr 2009 12:04:53 -0600 Subject: preconditioner for matrix-free linear system Message-ID: hi there, I want to link my own package with PETSc in order to make use of its various preconditioners. I am using matrix free method, so that I can directly refer the large matrix saved in my own format. The connection works OK without preconditioners, but runs into trouble when most of the preconditioners is called. For example, I got this message when using PCICC preconditionder: Matrix format shell does not have a built-in PETSc direct solver! So my question is how to utilize PETSc's powerful preconditioners for my matrix-free application. Thanks, Rocky From knepley at gmail.com Tue Apr 14 13:26:55 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 14 Apr 2009 13:26:55 -0500 Subject: preconditioner for matrix-free linear system In-Reply-To: References: Message-ID: On Tue, Apr 14, 2009 at 1:04 PM, Yongcheng Zhou wrote: > hi there, > > I want to link my own package with PETSc in order to make use of its > various preconditioners. I am using > matrix free method, so that I can directly refer the large matrix > saved in my own format. The connection > works OK without preconditioners, but runs into trouble when most of > the preconditioners is called. > For example, I got this message when using PCICC preconditionder: > > Matrix format shell does not have a built-in PETSc direct solver! > > So my question is how to utilize PETSc's powerful preconditioners for > my matrix-free application. Most preconditioners want access directly to the underlying matrix data, rather than its action. I see two alternatives: a) Create a PETSc Mat instead. This is sure to be the easiest alternative. Is there a reason you cannot do this? b) Write a converter to AIJ. This seems like a lot of work. I think all the PCs you want rely on AIJ storage. 
People in linear algebra don't think much of generic interfaces, and thus do not write their preconditioners to respect them (or you would only have to implement GetRow()). Matt > > Thanks, > > Rocky > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Tue Apr 14 13:27:55 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 14 Apr 2009 13:27:55 -0500 (CDT) Subject: preconditioner for matrix-free linear system In-Reply-To: References: Message-ID: Any perticular reason for using your own matrix storage format? Does this work in parallel? All PETSc preconditioners are matrix based [different petsc matrix storage formats support different set of preconditioners]. We also have interface to external preconditioners [for eg: superlu_dist, mumps etc] that have their own matrix format [so there is a matrix convertion operation that happens when these types are choosen]. So you have 3 options: 1. Continue to use your matrix-format for 'AMat' via MATSHELL but construct a 'PMat' [i,e preconditioner matrix] in the PETSc matrix storage format. A preconditioner matrix can be an approximate matrix - doesn't have to be a full matrix. 2. Create/convert the current matrix in the PETSc storage [perhaps AIJ] format - and use this for both 'AMat' and 'PMat'. 3. Write your own application based preconditioner [by using PCSHELL] Satish On Tue, 14 Apr 2009, Yongcheng Zhou wrote: > hi there, > > I want to link my own package with PETSc in order to make use of its > various preconditioners. I am using > matrix free method, so that I can directly refer the large matrix > saved in my own format. The connection > works OK without preconditioners, but runs into trouble when most of > the preconditioners is called. > For example, I got this message when using PCICC preconditionder: > > Matrix format shell does not have a built-in PETSc direct solver! > > So my question is how to utilize PETSc's powerful preconditioners for > my matrix-free application. > > Thanks, > > Rocky > From bsmith at mcs.anl.gov Tue Apr 14 13:28:25 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 14 Apr 2009 13:28:25 -0500 Subject: preconditioner for matrix-free linear system In-Reply-To: References: Message-ID: "PETSc's powerful preconditioners" are constructed from knowledge about the matrix entries; if you do not provide any information about the matrix entries it cannot construct any preconditioners. You need to provide the matrix as a PETSc AIJ or BAIJ Mat in order to use those preconditioners. Barry On Apr 14, 2009, at 1:04 PM, Yongcheng Zhou wrote: > hi there, > > I want to link my own package with PETSc in order to make use of its > various preconditioners. I am using > matrix free method, so that I can directly refer the large matrix > saved in my own format. The connection > works OK without preconditioners, but runs into trouble when most of > the preconditioners is called. > For example, I got this message when using PCICC preconditionder: > > Matrix format shell does not have a built-in PETSc direct solver! > > So my question is how to utilize PETSc's powerful preconditioners for > my matrix-free application. 
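A minimal C sketch of the first of the options described above (keep the application matrix behind a MATSHELL for the operator, and give KSP a separately assembled AIJ matrix only for building the preconditioner). The names SolveWithShellOperator, MyMatMult, appctx and Paij are placeholders for application code, Paij is assumed to be an already assembled (possibly approximate) AIJ matrix, and the petsc-3.0 KSPSetOperators() calling sequence with a MatStructure argument is assumed; this is an illustration of the idea, not the only way to wire it up.

    #include "petscksp.h"

    /* placeholder: applies y = A*x using the application's own matrix storage */
    extern PetscErrorCode MyMatMult(Mat A, Vec x, Vec y);

    PetscErrorCode SolveWithShellOperator(void *appctx, PetscInt nlocal, PetscInt N,
                                          Mat Paij, Vec b, Vec x)
    {
      Mat            Ashell;
      KSP            ksp;
      PetscErrorCode ierr;

      /* operator: matrix-free, wraps the user's own data through MyMatMult */
      ierr = MatCreateShell(PETSC_COMM_WORLD, nlocal, nlocal, N, N, appctx, &Ashell);CHKERRQ(ierr);
      ierr = MatShellSetOperation(Ashell, MATOP_MULT, (void (*)(void))MyMatMult);CHKERRQ(ierr);

      /* the preconditioner is built from the assembled AIJ matrix Paij, not from the shell */
      ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp, Ashell, Paij, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

      ierr = KSPDestroy(ksp);CHKERRQ(ierr);
      ierr = MatDestroy(Ashell);CHKERRQ(ierr);
      return 0;
    }

With this split, run-time options such as -pc_type and -ksp_view see Paij as the preconditioning matrix, while every multiplication with the operator still goes through MyMatMult.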
> > Thanks, > > Rocky From balay at mcs.anl.gov Tue Apr 14 13:44:51 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 14 Apr 2009 13:44:51 -0500 (CDT) Subject: preconditioner for matrix-free linear system In-Reply-To: References: Message-ID: On Tue, 14 Apr 2009, Matthew Knepley wrote: > I think all the PCs you want rely on AIJ storage. People in linear > algebra don't think much of generic interfaces, and thus do not > write their preconditioners to respect them (or you would only have > to implement GetRow()). If a user provides a MATSHELL with MatGetRow() interface - perhaps we should be able to provide a generic MatConvert() interface as well [to MATSHELL] - that gives a AIJ matrix - useable for PMat? [with access to all PCs] [To get efficient assembly we might have to call MatGetRow() twice?] Satish From yongcheng.zhou at gmail.com Tue Apr 14 13:50:01 2009 From: yongcheng.zhou at gmail.com (Yongcheng Zhou) Date: Tue, 14 Apr 2009 12:50:01 -0600 Subject: preconditioner for matrix-free linear system In-Reply-To: References: Message-ID: Thanks Matt, Satish and Barry. I think I have to convert my matrix into a PETSc matrix in block AIJ type. I wan reluctant to do this since I might need to solve linear system many times and I had preferred to use matrix free approach. Another question. In calling PETSc for solving the linear system for the second time, I got this: You can not initialize PETSc a second time! But I did destroy and finalized last PETSc structure, via MatDestroy(A); VecDestroy(x); VecDestroy(b); KSPDestroy(ksp); PetscFinalize(); Any suggestion is greatly welcome! Rocky On Tue, Apr 14, 2009 at 12:28 PM, Barry Smith wrote: > > "PETSc's powerful preconditioners" are constructed from knowledge about the > matrix entries; if you do not provide any information about the matrix > entries it cannot > construct any preconditioners. You need to provide the matrix as a PETSc > ?AIJ or BAIJ Mat > in order to use those preconditioners. > > ?Barry > > > > On Apr 14, 2009, at 1:04 PM, Yongcheng Zhou wrote: > >> hi there, >> >> I want to link my own package with PETSc in order to make use of its >> various preconditioners. ?I am using >> matrix free method, so that I can directly refer the large matrix >> saved in my own format. The connection >> works OK without preconditioners, but runs into trouble when most of >> the preconditioners is called. >> For example, I got this message when using PCICC preconditionder: >> >> Matrix format shell does not have a built-in PETSc direct solver! >> >> So my question is how to utilize PETSc's powerful preconditioners for >> my matrix-free application. >> >> Thanks, >> >> Rocky > > From balay at mcs.anl.gov Tue Apr 14 14:02:10 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 14 Apr 2009 14:02:10 -0500 (CDT) Subject: preconditioner for matrix-free linear system In-Reply-To: References: Message-ID: On Tue, 14 Apr 2009, Yongcheng Zhou wrote: > Thanks Matt, Satish and Barry. I think I have to convert my matrix into a PETSc > matrix in block AIJ type. I wan reluctant to do this since I might need to solve > linear system many times and I had preferred to use matrix free approach. You can still do this with AIJ type.. > Another question. In calling PETSc for solving the linear system for the > second time, I got this: > > You can not initialize PETSc a second time! 
> > But I did destroy and finalized last PETSc structure, via > > MatDestroy(A); > VecDestroy(x); > VecDestroy(b); > KSPDestroy(ksp); > PetscFinalize(); > > Any suggestion is greatly welcome! As the message says - you can't call PetscInitialize()/Finalize() multiple times. The same thing with MPI_Init()/Finalize(). So you'll have to remove these 2 calls from your loop [or function that gets called multiple times] - to a location where they get called only once. Satish From Chun.SUN at 3ds.com Tue Apr 14 13:52:45 2009 From: Chun.SUN at 3ds.com (SUN Chun) Date: Tue, 14 Apr 2009 14:52:45 -0400 Subject: MatLoad/VecLoad with user-specified partitioning Message-ID: <2545DC7A42DF804AAAB2ADA5043D57DA54C3BC@CORP-CLT-EXB01.ds> Hello PETSc developers, I want to use MatLoad to load matrix binary files from external storage. Then solve it or do whatever in parallel. However, as soon as I load it with MatLoad, I found that PETSc partition it automatically. I need to specify the partition myself. However I can't figure out where to properly use MatSetSizes: if I do MatSetSizes before MatLoad, it does nothing and MatGetSizes still gives me the auto-partitioning result from PETSc; if I do MatSetSizes after MatLoad, it simply crashes with error message "cannot change/reset row sizes...". Neither can I find a proper example... Any comments would be great appreciated. Thank you very much. Chun From bsmith at mcs.anl.gov Tue Apr 14 15:50:21 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 14 Apr 2009 15:50:21 -0500 Subject: MatLoad/VecLoad with user-specified partitioning In-Reply-To: <2545DC7A42DF804AAAB2ADA5043D57DA54C3BC@CORP-CLT-EXB01.ds> References: <2545DC7A42DF804AAAB2ADA5043D57DA54C3BC@CORP-CLT-EXB01.ds> Message-ID: <9CDCEA94-F1EC-4CC1-800B-466B4D02757F@mcs.anl.gov> We don't currently have support for setting the sizes before loading. You can do a default MatLoad() and then call MatGetSubMatrix() with the desired IS for each process to get the values where you want. The MatGetSubMatrix() runs quickly. Barry On Apr 14, 2009, at 1:52 PM, SUN Chun wrote: > Hello PETSc developers, > > I want to use MatLoad to load matrix binary files from external > storage. > Then solve it or do whatever in parallel. However, as soon as I load > it > with MatLoad, I found that PETSc partition it automatically. I need to > specify the partition myself. However I can't figure out where to > properly use MatSetSizes: if I do MatSetSizes before MatLoad, it does > nothing and MatGetSizes still gives me the auto-partitioning result > from > PETSc; if I do MatSetSizes after MatLoad, it simply crashes with error > message "cannot change/reset row sizes...". > > Neither can I find a proper example... > > Any comments would be great appreciated. Thank you very much. > > Chun From Andreas.Grassl at student.uibk.ac.at Wed Apr 15 09:11:39 2009 From: Andreas.Grassl at student.uibk.ac.at (Andreas Grassl) Date: Wed, 15 Apr 2009 16:11:39 +0200 Subject: problems with MatLoad In-Reply-To: References: <49DCCCDF.4090104@student.uibk.ac.at> Message-ID: <49E5EB1B.5040301@student.uibk.ac.at> Matthew Knepley schrieb: > On Wed, Apr 8, 2009 at 11:12 AM, Andreas Grassl > > wrote: > > Hello, > > I got some success on the localtoglobalmapping, but now I'm stuck > with writing > to/reading from files. 
In a sequential code I write out some > matrices with > > PetscViewerBinaryOpen(comms,matrixname,FILE_MODE_WRITE,&viewer); > for (k=0;k MatView(AS[k],viewer);} > PetscViewerDestroy(viewer); > > and want to read them in in a parallel program, where each processor > should own > one matrix: > > ierr = > PetscViewerBinaryOpen(PETSC_COMM_WORLD,matrixname,FILE_MODE_READ,&viewer);CHKERRQ(ierr); > > > The Viewer has COMM_WORLD, but you are reading a matrix with COMM_SELF. > You need to create > a separate viewer for each process to do what you want. > Thank you for the fast answer. I resolved this issue now, but how could i gather the Matrix from COMM_SELF to COMM_WORLD. I searched for functions doing such matrix copying, but MatConvert and MatCopy act on the same communicator. Thanks in advance ando -- /"\ Grassl Andreas \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik X against HTML email Technikerstr. 13 Zi 709 / \ +43 (0)512 507 6091 From Chun.SUN at 3ds.com Wed Apr 15 10:31:26 2009 From: Chun.SUN at 3ds.com (SUN Chun) Date: Wed, 15 Apr 2009 11:31:26 -0400 Subject: MatLoad/VecLoad with user-specified partitioning In-Reply-To: <9CDCEA94-F1EC-4CC1-800B-466B4D02757F@mcs.anl.gov> References: <2545DC7A42DF804AAAB2ADA5043D57DA54C3BC@CORP-CLT-EXB01.ds> <9CDCEA94-F1EC-4CC1-800B-466B4D02757F@mcs.anl.gov> Message-ID: <2545DC7A42DF804AAAB2ADA5043D57DA54C45D@CORP-CLT-EXB01.ds> Thanks Barry, I also found this thread https://lists.mcs.anl.gov/mailman/htdig/petsc-users/2009-February/003954.html also discussing similar issue. Sorry not to have found this earlier. Thanks again, Chun -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Barry Smith Sent: Tuesday, April 14, 2009 4:50 PM To: PETSc users list Subject: Re: MatLoad/VecLoad with user-specified partitioning We don't currently have support for setting the sizes before loading. You can do a default MatLoad() and then call MatGetSubMatrix() with the desired IS for each process to get the values where you want. The MatGetSubMatrix() runs quickly. Barry On Apr 14, 2009, at 1:52 PM, SUN Chun wrote: > Hello PETSc developers, > > I want to use MatLoad to load matrix binary files from external > storage. > Then solve it or do whatever in parallel. However, as soon as I load > it > with MatLoad, I found that PETSc partition it automatically. I need to > specify the partition myself. However I can't figure out where to > properly use MatSetSizes: if I do MatSetSizes before MatLoad, it does > nothing and MatGetSizes still gives me the auto-partitioning result > from > PETSc; if I do MatSetSizes after MatLoad, it simply crashes with error > message "cannot change/reset row sizes...". > > Neither can I find a proper example... > > Any comments would be great appreciated. Thank you very much. > > Chun From Andreas.Grassl at student.uibk.ac.at Wed Apr 15 10:42:11 2009 From: Andreas.Grassl at student.uibk.ac.at (Andreas Grassl) Date: Wed, 15 Apr 2009 17:42:11 +0200 Subject: problems with MatLoad In-Reply-To: <49E5EB1B.5040301@student.uibk.ac.at> References: <49DCCCDF.4090104@student.uibk.ac.at> <49E5EB1B.5040301@student.uibk.ac.at> Message-ID: <49E60053.5070204@student.uibk.ac.at> Andreas Grassl schrieb: > Matthew Knepley schrieb: >> >> The Viewer has COMM_WORLD, but you are reading a matrix with COMM_SELF. >> You need to create >> a separate viewer for each process to do what you want. >> > > Thank you for the fast answer. 
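One way to realize the separate-viewer suggestion, sketched in C: each process opens its own file on PETSC_COMM_SELF and reads one sequential matrix from it. The one-file-per-matrix naming scheme matfile.<rank> is an assumption (the code above writes several matrices into a single file), and the MatLoad(viewer, MATSEQAIJ, &mat) calling sequence of petsc-3.0 is assumed.

    #include <stdio.h>
    #include "petscmat.h"

    PetscErrorCode LoadLocalMatrix(Mat *Aloc)
    {
      PetscViewer    viewer;
      PetscMPIInt    rank;
      char           fname[256];
      PetscErrorCode ierr;

      ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
      sprintf(fname, "matfile.%d", rank);   /* hypothetical per-rank file name */

      /* per-process viewer: PETSC_COMM_SELF, not PETSC_COMM_WORLD */
      ierr = PetscViewerBinaryOpen(PETSC_COMM_SELF, fname, FILE_MODE_READ, &viewer);CHKERRQ(ierr);
      ierr = MatLoad(viewer, MATSEQAIJ, Aloc);CHKERRQ(ierr);
      ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr);
      return 0;
    }

A viewer on PETSC_COMM_WORLD is collective and produces one parallel matrix for the whole communicator; using PETSC_COMM_SELF is what makes each process's read independent.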
I resolved this issue now, but how could i gather > the Matrix from COMM_SELF to COMM_WORLD. I searched for functions doing such > matrix copying, but MatConvert and MatCopy act on the same communicator. > I think I solved this as well, I simply invoked MatGetValues on one communicator and MatSetValues on the other. Kind Regards, ando -- /"\ Grassl Andreas \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik X against HTML email Technikerstr. 13 Zi 709 / \ +43 (0)512 507 6091 From knepley at gmail.com Wed Apr 15 11:00:42 2009 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 15 Apr 2009 11:00:42 -0500 Subject: problems with MatLoad In-Reply-To: <49E60053.5070204@student.uibk.ac.at> References: <49DCCCDF.4090104@student.uibk.ac.at> <49E5EB1B.5040301@student.uibk.ac.at> <49E60053.5070204@student.uibk.ac.at> Message-ID: To gather a Mat, you can use MatGetSubMatrix(). Matt On Wed, Apr 15, 2009 at 10:42 AM, Andreas Grassl < Andreas.Grassl at student.uibk.ac.at> wrote: > Andreas Grassl schrieb: > > Matthew Knepley schrieb: > >> > >> The Viewer has COMM_WORLD, but you are reading a matrix with COMM_SELF. > >> You need to create > >> a separate viewer for each process to do what you want. > >> > > > > Thank you for the fast answer. I resolved this issue now, but how could i > gather > > the Matrix from COMM_SELF to COMM_WORLD. I searched for functions doing > such > > matrix copying, but MatConvert and MatCopy act on the same communicator. > > > > I think I solved this as well, I simply invoked MatGetValues on one > communicator > and MatSetValues on the other. > > > Kind Regards, > > ando > > -- > /"\ Grassl Andreas > \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik > X against HTML email Technikerstr. 13 Zi 709 > / \ +43 (0)512 507 6091 > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From xy2102 at columbia.edu Wed Apr 15 17:50:30 2009 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Wed, 15 Apr 2009 18:50:30 -0400 Subject: VecDuplicate() Message-ID: <20090415185030.7pnlxp8s08skkw80@cubmail.cc.columbia.edu> Hi, I use VecDuplicate() to duplicate four vectors from x, whose da has DA_XPERIODIC, however, those four new vectors lost the properties of DA_XPERIODIC. How could I keep this XPERIODIC property? ---------------------------------------------------------------------------- ierr = DACreate2d(comm,DA_XPERIODIC,DA_STENCIL_STAR, -5, -5, PETSC_DECIDE, PETSC_DECIDE, 4, 2, 0, 0, &da);CHKERRQ(ierr); ierr = DMMGSetDM(dmmg, (DM)da);CHKERRQ(ierr); for (i = 0; i < parameters.numberOfLevels; i++){ ierr = VecDuplicate(dmmg[i]->x, &appCtx[i].uOld1Vector);CHKERRQ(ierr); ierr = VecDuplicate(dmmg[i]->x, &appCtx[i].uOld2Vector);CHKERRQ(ierr); ierr = VecDuplicate(dmmg[i]->x, &appCtx[i].uOld3Vector);CHKERRQ(ierr); ierr = VecDuplicate(dmmg[i]->x, &appCtx[i].uOld4Vector);CHKERRQ(ierr); } ------------------------------------------------------------------- I tried to use VecCopy(), but it is wrong at line 1678:vector.c. 
------------------------------------------------------------------ Breakpoint 4, VecCopy (x=0x8861070, y=0x886407c) at vector.c:1672 1672 PetscReal norms[4] = {0.0,0.0,0.0,0.0}; (gdb) n 1676 PetscFunctionBegin; (gdb) n 1677 PetscValidHeaderSpecific(x,VEC_COOKIE,1); (gdb) 1678 PetscValidHeaderSpecific(y,VEC_COOKIE,2); (gdb) [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Invalid argument! [0]PETSC ERROR: Wrong type of object: Parameter # 2! ------------------------------------------------------------------ Is there any better way to "duplicate" or "copy" the vector and make it preserve the da properties? Thanks very much! Cheers, Rebecca From bsmith at mcs.anl.gov Wed Apr 15 18:16:01 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 15 Apr 2009 18:16:01 -0500 Subject: VecDuplicate() In-Reply-To: <20090415185030.7pnlxp8s08skkw80@cubmail.cc.columbia.edu> References: <20090415185030.7pnlxp8s08skkw80@cubmail.cc.columbia.edu> Message-ID: There is something else wrong here. It is rejecting the y argument as not being a vector. Perhaps you are not passing it down correctly into the call? Perhaps passing a pointer to it instead? Don't think this has anything do with with periodic. Barry On Apr 15, 2009, at 5:50 PM, (Rebecca) Xuefei YUAN wrote: > Hi, > > I use VecDuplicate() to duplicate four vectors from x, whose da has > DA_XPERIODIC, however, those four new vectors lost the properties of > DA_XPERIODIC. How could I keep this XPERIODIC property? > ---------------------------------------------------------------------------- > ierr = DACreate2d(comm,DA_XPERIODIC,DA_STENCIL_STAR, -5, -5, > PETSC_DECIDE, PETSC_DECIDE, 4, 2, 0, 0, &da);CHKERRQ(ierr); > ierr = DMMGSetDM(dmmg, (DM)da);CHKERRQ(ierr); > > > for (i = 0; i < parameters.numberOfLevels; i++){ > ierr = VecDuplicate(dmmg[i]->x, > &appCtx[i].uOld1Vector);CHKERRQ(ierr); > ierr = VecDuplicate(dmmg[i]->x, > &appCtx[i].uOld2Vector);CHKERRQ(ierr); > ierr = VecDuplicate(dmmg[i]->x, > &appCtx[i].uOld3Vector);CHKERRQ(ierr); > ierr = VecDuplicate(dmmg[i]->x, > &appCtx[i].uOld4Vector);CHKERRQ(ierr); > } > ------------------------------------------------------------------- > > I tried to use VecCopy(), but it is wrong at line 1678:vector.c. > > ------------------------------------------------------------------ > Breakpoint 4, VecCopy (x=0x8861070, y=0x886407c) at vector.c:1672 > 1672 PetscReal norms[4] = {0.0,0.0,0.0,0.0}; > (gdb) n > 1676 PetscFunctionBegin; > (gdb) n > 1677 PetscValidHeaderSpecific(x,VEC_COOKIE,1); > (gdb) > 1678 PetscValidHeaderSpecific(y,VEC_COOKIE,2); > (gdb) > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Invalid argument! > [0]PETSC ERROR: Wrong type of object: Parameter # 2! > ------------------------------------------------------------------ > > Is there any better way to "duplicate" or "copy" the vector and make > it preserve the da properties? > > Thanks very much! > > Cheers, > > Rebecca > > > > From enjoywm at cs.wm.edu Thu Apr 16 10:22:01 2009 From: enjoywm at cs.wm.edu (Yixun Liu) Date: Thu, 16 Apr 2009 11:22:01 -0400 Subject: singular matrix Message-ID: <49E74D19.7040406@cs.wm.edu> Hi, For Ax=b, A is mxn, m>n. I use CG to resolve it and find the solution makes no sense. I guess rank(A) < min(m,n). How to resolve this singular system? Use SVD? 
Best, Yixun From knepley at gmail.com Thu Apr 16 10:34:58 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 16 Apr 2009 10:34:58 -0500 Subject: singular matrix In-Reply-To: <49E74D19.7040406@cs.wm.edu> References: <49E74D19.7040406@cs.wm.edu> Message-ID: On Thu, Apr 16, 2009 at 10:22 AM, Yixun Liu wrote: > Hi, > For Ax=b, A is mxn, m>n. I use CG to resolve it and find the solution > makes no sense. I guess rank(A) < min(m,n). How to resolve this > singular system? Use SVD? CG is a method for SPD matrices. This matrix is not symmetric. You appear to be solving a least squares problem. I would read about these a little. Ake Bjorck has a pretty good book. Matt > > Best, > > Yixun > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jroman at dsic.upv.es Thu Apr 16 11:04:03 2009 From: jroman at dsic.upv.es (Jose E. Roman) Date: Thu, 16 Apr 2009 18:04:03 +0200 Subject: singular matrix In-Reply-To: <49E74D19.7040406@cs.wm.edu> References: <49E74D19.7040406@cs.wm.edu> Message-ID: <2765D0CB-3154-47B7-A42E-FC82778867A4@dsic.upv.es> On 16/04/2009, Yixun Liu wrote: > Hi, > For Ax=b, A is mxn, m>n. I use CG to resolve it and find the solution > makes no sense. I guess rank(A) < min(m,n). How to resolve this > singular system? Use SVD? > > Best, > > Yixun Although it is probably not the most efficient way, you can use SLEPc to compute an SVD-based approximation of the pseudo-inverse, A+, then pre-multiply vector b. The following sequence should do the task. 1) Compute p References: <49E74D19.7040406@cs.wm.edu> Message-ID: > From: Yixun Liu > > Hi, > For Ax=b, A is mxn, m>n. I use CG to resolve it and find the solution > makes no sense. I guess rank(A) < min(m,n). How to resolve this > singular system? Use SVD? Only a square matrix can be singular. If reinterpreting as a least-squares problem, SVD would be slower. If rank(A) = n, see If A is dense, use LAPACK for QR, otherwise sparse QR factorization should be faster. http://www.cise.ufl.edu/research/sparse/CSparse/ If A is not full rank (rank(A) < n), it is more complicated. The pseudoinverse does not have a simple formula, although it is still computable for getting the minimum norm solution. The book by Ake Bjorck would be useful, as Matt already suggested. Chetan > Best, > > Yixun > From chetan at ices.utexas.edu Thu Apr 16 12:12:44 2009 From: chetan at ices.utexas.edu (Chetan Jhurani) Date: Thu, 16 Apr 2009 12:12:44 -0500 Subject: singular matrix In-Reply-To: <2765D0CB-3154-47B7-A42E-FC82778867A4@dsic.upv.es> References: <49E74D19.7040406@cs.wm.edu> <2765D0CB-3154-47B7-A42E-FC82778867A4@dsic.upv.es> Message-ID: > -----Original Message----- > From: Jose E. Roman > > On 16/04/2009, Yixun Liu wrote: > > > For Ax=b, A is mxn, m>n. I use CG to resolve it and find the solution > > makes no sense. I guess rank(A) < min(m,n). How to resolve this > > singular system? Use SVD? > > > > Best, > > > > Yixun > > Although it is probably not the most efficient way, you can use SLEPc > to compute an SVD-based approximation of the pseudo-inverse, A+, then > pre-multiply vector b. The following sequence should do the task. > > 1) Compute p 2) VecMDot of b with the left singular vectors. > 3) Scale the resulting values with the reciprocals of singular values. 
> 4) VecMAXPY of right singular vectors using the coefficients > obtained in 3) I am not sure of the order in which the p singular values would be determined in SLEPc, but if they are the p largest values, then the approximate pseudoinverse determined from them could be far from the "exact" pseudoinverse. In essence, the smallest singular values are those that determine the magnitude of the entries of the pseudoinverse, and those have been ignored in the approximation by truncation above. The reciprocals are used in step 3 above. Reciprocals of the ignored (n - p - z) singular values (where z is the number of zero singular values) will be large compared to the reciprocals of the p larger values and the approximation to pseudoinverse may not be good at all. I think this analysis is correct, but let me know if I've made a stupid mistake. Chetan > > Jose > > From knepley at gmail.com Thu Apr 16 14:38:54 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 16 Apr 2009 14:38:54 -0500 Subject: singular matrix In-Reply-To: References: <49E74D19.7040406@cs.wm.edu> Message-ID: On Thu, Apr 16, 2009 at 11:34 AM, Chetan Jhurani wrote: > > From: Yixun Liu > > > > Hi, > > For Ax=b, A is mxn, m>n. I use CG to resolve it and find the solution > > makes no sense. I guess rank(A) < min(m,n). How to resolve this > > singular system? Use SVD? > > Only a square matrix can be singular. No, a singular matrix has a kernel. A non-square matrix can be singular. > > If reinterpreting as a least-squares problem, SVD would be slower. > > If rank(A) = n, see > QR will work for a matrix of rank < n. In this case, a null space basis fills out U. Matt > If A is dense, use LAPACK for QR, otherwise sparse QR factorization > should be faster. http://www.cise.ufl.edu/research/sparse/CSparse/ > > If A is not full rank (rank(A) < n), it is more complicated. The > pseudoinverse does not have a simple formula, although it is still > computable for getting the minimum norm solution. The book by Ake > Bjorck would be useful, as Matt already suggested. > > Chetan > > > Best, > > > > Yixun > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From chetan at ices.utexas.edu Thu Apr 16 15:05:15 2009 From: chetan at ices.utexas.edu (Chetan Jhurani) Date: Thu, 16 Apr 2009 15:05:15 -0500 Subject: singular matrix In-Reply-To: References: <49E74D19.7040406@cs.wm.edu> Message-ID: > From: Matthew Knepley > > On Thu, Apr 16, 2009 at 11:34 AM, Chetan Jhurani wrote: > > > Only a square matrix can be singular. > > No, a singular matrix has a kernel. A non-square matrix can be singular. One can generalize the concept of singular for rank-deficient rectangular matrices, but almost all usual definitions of singular matrix use non-invertibility or determinant and thus restrict themselves to square matrices. For example, http://mathworld.wolfram.com/SingularMatrix.html. > > If rank(A) = n, see > > > > QR will work for a matrix of rank < n. In this case, a null space basis fills out U. Agreed. 
Chetan From knepley at gmail.com Thu Apr 16 15:17:13 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 16 Apr 2009 15:17:13 -0500 Subject: singular matrix In-Reply-To: References: <49E74D19.7040406@cs.wm.edu> Message-ID: On Thu, Apr 16, 2009 at 3:05 PM, Chetan Jhurani wrote: > > > From: Matthew Knepley > > > > On Thu, Apr 16, 2009 at 11:34 AM, Chetan Jhurani > wrote: > > > > > Only a square matrix can be singular. > > > > No, a singular matrix has a kernel. A non-square matrix can be singular. > > One can generalize the concept of singular for rank-deficient rectangular > matrices, but almost all usual definitions of singular matrix use > non-invertibility or determinant and thus restrict themselves to > square matrices. > > For example, http://mathworld.wolfram.com/SingularMatrix.html. > The definition that makes the most sense (and generalizes far beyond matrices) is |ker(A)| > 0. Matt > > > If rank(A) = n, see > > > < > http://en.wikipedia.org/wiki/Moore-Penrose_pseudoinverse#The_QR_method> > > > > QR will work for a matrix of rank < n. In this case, a null space basis > fills out U. > > Agreed. > > Chetan > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongcheng.zhou at gmail.com Fri Apr 17 00:58:56 2009 From: yongcheng.zhou at gmail.com (Yongcheng Zhou) Date: Thu, 16 Apr 2009 23:58:56 -0600 Subject: tolerance Message-ID: Hi there, I am wondering what is the real tolerance for stopping the Krylov iteations. In solving my nonlinear problem I have to call KSP solvers multiple time. my tolerance setting is KSPSetTolerances(ksp,1.e-5,1.e-5,PETSC_DEFAULT,200); After several calls I found that the linear solver does not do anything, as shown by the convergence history: 35 KSP preconditioned resid norm 8.570481231422e-05 true resid norm 7.256019920381e+01 ||Ae||/||Ax|| 1.008233844002e-01 36 KSP preconditioned resid norm 5.210695852340e-06 true resid norm 7.448290646301e+00 ||Ae||/||Ax|| 1.034950123066e-02 0 KSP preconditioned resid norm 5.210622961414e-06 true resid norm 7.448436220158e+00 ||Ae||/||Ax|| 1.000000000000e+00 0 KSP preconditioned resid norm 5.210622961414e-06 true resid norm 7.448436220158e+00 ||Ae||/||Ax|| 1.000000000000e+00 0 KSP preconditioned resid norm 5.210622961414e-06 true resid norm 7.448436220158e+00 ||Ae||/||Ax|| 1.000000000000e+00 where in the last three calls the KSP simply refused to do any interation. Is there any way I can force the KSP to check the convergence according to true resid or Ae/Ax rather than the preconditoned resid? Thanks! Rocky From knepley at gmail.com Fri Apr 17 13:22:53 2009 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 17 Apr 2009 13:22:53 -0500 Subject: tolerance In-Reply-To: References: Message-ID: On Fri, Apr 17, 2009 at 12:58 AM, Yongcheng Zhou wrote: > Hi there, > > I am wondering what is the real tolerance for stopping the Krylov > iteations. In solving my nonlinear problem I have to call > KSP solvers multiple time. 
my tolerance setting is > KSPSetTolerances(ksp,1.e-5,1.e-5,PETSC_DEFAULT,200); > > After several calls I found that the linear solver does not do > anything, as shown by the convergence history: > > 35 KSP preconditioned resid norm 8.570481231422e-05 true resid norm > 7.256019920381e+01 ||Ae||/||Ax|| 1.008233844002e-01 > 36 KSP preconditioned resid norm 5.210695852340e-06 true resid norm > 7.448290646301e+00 ||Ae||/||Ax|| 1.034950123066e-02 > 0 KSP preconditioned resid norm 5.210622961414e-06 true resid norm > 7.448436220158e+00 ||Ae||/||Ax|| 1.000000000000e+00 > 0 KSP preconditioned resid norm 5.210622961414e-06 true resid norm > 7.448436220158e+00 ||Ae||/||Ax|| 1.000000000000e+00 > 0 KSP preconditioned resid norm 5.210622961414e-06 true resid norm > 7.448436220158e+00 ||Ae||/||Ax|| 1.000000000000e+00 > > where in the last three calls the KSP simply refused to do any > interation. Is there any way I can force the KSP to check the > convergence according to true resid or Ae/Ax rather than the > preconditoned resid? I am not sure waht you are asking. You can write any tolerance check you want using KSPSetConvergenceTest(). Matt > > Thanks! > > Rocky > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From keita at cray.com Fri Apr 17 13:46:32 2009 From: keita at cray.com (Keita Teranishi) Date: Fri, 17 Apr 2009 13:46:32 -0500 Subject: tolerance In-Reply-To: References: Message-ID: <925346A443D4E340BEB20248BAFCDBDF0A8E5873@CFEVS1-IP.americas.cray.com> Hi, Why don't you use "-ksp_norm_type unpreconditioned" option? Thanks, Keita -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Yongcheng Zhou Sent: Friday, April 17, 2009 12:59 AM To: PETSc users list Subject: tolerance Hi there, I am wondering what is the real tolerance for stopping the Krylov iteations. In solving my nonlinear problem I have to call KSP solvers multiple time. my tolerance setting is KSPSetTolerances(ksp,1.e-5,1.e-5,PETSC_DEFAULT,200); After several calls I found that the linear solver does not do anything, as shown by the convergence history: 35 KSP preconditioned resid norm 8.570481231422e-05 true resid norm 7.256019920381e+01 ||Ae||/||Ax|| 1.008233844002e-01 36 KSP preconditioned resid norm 5.210695852340e-06 true resid norm 7.448290646301e+00 ||Ae||/||Ax|| 1.034950123066e-02 0 KSP preconditioned resid norm 5.210622961414e-06 true resid norm 7.448436220158e+00 ||Ae||/||Ax|| 1.000000000000e+00 0 KSP preconditioned resid norm 5.210622961414e-06 true resid norm 7.448436220158e+00 ||Ae||/||Ax|| 1.000000000000e+00 0 KSP preconditioned resid norm 5.210622961414e-06 true resid norm 7.448436220158e+00 ||Ae||/||Ax|| 1.000000000000e+00 where in the last three calls the KSP simply refused to do any interation. Is there any way I can force the KSP to check the convergence according to true resid or Ae/Ax rather than the preconditoned resid? Thanks! Rocky From tchouanm at msn.com Sat Apr 18 13:09:51 2009 From: tchouanm at msn.com (STEPHANE TCHOUANMO) Date: Sat, 18 Apr 2009 20:09:51 +0200 Subject: SNES convergence In-Reply-To: References: Message-ID: Hi all, Using the SNES solver, i have divergence at second Newton iteration because of Line-Search failure. However at each iteration, i had a residual around 10^{-14} ie. very small. 
Does somebody know what it means exactly? Is it possible that my resolution did not converge at all? Thanks. Stephane _________________________________________________________________ News, entertainment and everything you care about at Live.com. Get it now! http://www.live.com/getstarted.aspx -------------- next part -------------- An HTML attachment was scrubbed... URL: From xy2102 at columbia.edu Mon Apr 20 14:53:07 2009 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Mon, 20 Apr 2009 15:53:07 -0400 Subject: Debug in parallel Message-ID: <20090420155307.63irvvl5wwcssss8@cubmail.cc.columbia.edu> Hi, I am running a petsc code on my local machine with two processors, and would like to debug it in parallel. When I use the option "-start_in_debug", there is only one gdb window coming out, is there something wrong? How could I have two gdb windows each on one processor? Thanks, Rebecca -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From balay at mcs.anl.gov Mon Apr 20 16:03:51 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 20 Apr 2009 16:03:51 -0500 (CDT) Subject: Debug in parallel In-Reply-To: <20090420155307.63irvvl5wwcssss8@cubmail.cc.columbia.edu> References: <20090420155307.63irvvl5wwcssss8@cubmail.cc.columbia.edu> Message-ID: Generally X11 settings different for different senarios - so the default is not correct for some folks.. You can try -start_in_debugger -display :0.0 Satish On Mon, 20 Apr 2009, (Rebecca) Xuefei YUAN wrote: > Hi, > > I am running a petsc code on my local machine with two processors, and would > like to debug it in parallel. > > When I use the option "-start_in_debug", there is only one gdb window coming > out, is there something wrong? How could I have two gdb windows each on one > processor? > > Thanks, > > Rebecca > > > From sperif at gmail.com Tue Apr 21 10:27:12 2009 From: sperif at gmail.com (Pierre-Yves Aquilanti) Date: Tue, 21 Apr 2009 17:27:12 +0200 Subject: integrating petsc code into non petsc code Message-ID: <2b9153980904210827j1ea10a3dp81b4db21e874cc70@mail.gmail.com> Hello everyone, i'm trying to integrate petsc code into one big software. petsc would act as a solver for the program. The way that i try to integrate my petsc code is like that: ######################## program myoldnonpetsccode use mymodule implicit none call process_petsc end myoldnonpetsccode ######################## module mymodule implicit none #include "finclude/petsc.h" contains subroutine process_petsc call PetscInitialize(PETSC_NULL_CHARACTER,ierr) call PetscFinalize(ierr) end subroutine process_petsc end module mymodule ############### The compiler used for petsc, mpich and the non-petsc code is the same (PGI). I'm perfectly making my module file (.mod) for mymodule and object one (.o). This one is inserted into a library file (libtest.a). When i try to make my binary "myoldnonpetsccode" my compiler tells me during the linking process that there's two undefined reference for 'petscinitialize_' and 'petscfinalize_'. I verified that petsc libraries where included during linking process (with -L/mypathtopetsclib and -lpetsc). I don't find any answer to this on the internet and documentation. Do you have any clue on what would be the problem ? Thanks a lot Best regards PYA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From w_subber at yahoo.com Tue Apr 21 10:49:24 2009 From: w_subber at yahoo.com (Waad Subber) Date: Tue, 21 Apr 2009 08:49:24 -0700 (PDT) Subject: integrating petsc code into non petsc code Message-ID: <529901.72940.qm@web38201.mail.mud.yahoo.com> I use the following and it works for me master is a fortran code calls PETSc solver Waad --- On Tue, 4/21/09, Pierre-Yves Aquilanti wrote: From: Pierre-Yves Aquilanti Subject: integrating petsc code into non petsc code To: petsc-users at mcs.anl.gov Date: Tuesday, April 21, 2009, 11:27 AM Hello everyone, i'm trying to integrate petsc code into one big software. petsc would act as a solver for the program. The way that i try to integrate my petsc code is like that: ######################## program myoldnonpetsccode use mymodule implicit none call process_petsc end myoldnonpetsccode ######################## module mymodule implicit none #include "finclude/petsc.h" contains subroutine process_petsc call PetscInitialize(PETSC_NULL_CHARACTER,ierr) call PetscFinalize(ierr) end subroutine process_petsc end module mymodule ############### The compiler used for petsc, mpich and the non-petsc code is the same (PGI). I'm perfectly making my module file (.mod) for mymodule and object one (.o). This one is inserted into a library file (libtest.a). When i try to make my binary "myoldnonpetsccode" my compiler tells me during the linking process that there's two undefined reference for 'petscinitialize_' and 'petscfinalize_'. I verified that petsc libraries where included during linking process (with -L/mypathtopetsclib and -lpetsc). I don't find any answer to this on the internet and documentation. Do you have any clue on what would be the problem ? Thanks a lot Best regards PYA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: master.F Type: text/x-fortran Size: 810 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: msolve.F Type: text/x-fortran Size: 999 bytes Desc: not available URL: From matze999 at gmx.net Tue Apr 21 11:03:08 2009 From: matze999 at gmx.net (Matt Funk) Date: Tue, 21 Apr 2009 10:03:08 -0600 Subject: integrating petsc code into non petsc code In-Reply-To: <2b9153980904210827j1ea10a3dp81b4db21e874cc70@mail.gmail.com> References: <2b9153980904210827j1ea10a3dp81b4db21e874cc70@mail.gmail.com> Message-ID: <200904211003.08591.matze999@gmx.net> If i am missing the question please ignore this email, but i think you are using your own make files (i.e. NOT petsc make system?). If so, i am doing the same. Here is what you need during compilation: PETSC_INC_DIR= -I$(PETSC_DIR)/include -I$(PETSC_DIR)/bmake/$(PETSC_ARCHITECTURE) PETSC_LIB_DIR= -L$(PETSC_DIR)/lib/$(PETSC_ARCHITECTURE) PETSC_LIBS= -lpetsccontrib -lpetscts -lpetscsnes -lpetscdm -lpetscksp -lpetscmat -lpetscvec -lpetsc Petsc make does this automatically for you i believe. Again, if i missed the question please ignore. matt ps: i am on petsc-2.3.3-p15 On Tuesday 21 April 2009, Pierre-Yves Aquilanti wrote: > Hello everyone, > > i'm trying to integrate petsc code into one big software. petsc would act > as a solver for the program. 
> The t i try toway tha integrate my petsc code is like that: > > ######################## > program myoldnonpetsccode > > use mymodule > > implicit none > > call process_petsc > > end myoldnonpetsccode > ######################## > module mymodule > implicit none > > #include "finclude/petsc.h" > > contains > subroutine process_petsc > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > > call PetscFinalize(ierr) > > end subroutine process_petsc > end module mymodule > ############### > > > The compiler used for petsc, mpich and the non-petsc code is the same > (PGI). I'm perfectly making my module file (.mod) for mymodule and object > one (.o). This one is inserted into a library file (libtest.a). > When i try to make my binary "myoldnonpetsccode" my compiler tells me > during the linking process that there's two undefined reference for > 'petscinitialize_' and 'petscfinalize_'. I verified that petsc libraries > where included during linking process (with -L/mypathtopetsclib and > -lpetsc). > I don't find any answer to this on the internet and documentation. > > Do you have any clue on what would be the problem ? > > Thanks a lot > > Best regards > > PYA From mafunk at nmsu.edu Tue Apr 21 11:06:44 2009 From: mafunk at nmsu.edu (Matt Funk) Date: Tue, 21 Apr 2009 10:06:44 -0600 Subject: integrating petsc code into non petsc code In-Reply-To: <2b9153980904210827j1ea10a3dp81b4db21e874cc70@mail.gmail.com> References: <2b9153980904210827j1ea10a3dp81b4db21e874cc70@mail.gmail.com> Message-ID: <200904211006.44967.mafunk@nmsu.edu> If i am missing the question please ignore this email, but i think you are using your own make files (i.e. NOT petsc make system?). If so, i am doing the same. Here is what you need during compilation: ? ? PETSC_INC_DIR= -I$(PETSC_DIR)/include -I$(PETSC_DIR)/bmake/$(PETSC_ARCHITECTURE) PETSC_LIB_DIR= -L$(PETSC_DIR)/lib/$(PETSC_ARCHITECTURE) PETSC_LIBS= -lpetsccontrib -lpetscts -lpetscsnes????????-lpetscdm -lpetscksp -lpetscmat -lpetscvec -lpetsc Petsc make does this automatically for you i believe. Again, if i missed the question please ignore. matt ps: i am on petsc-2.3.3-p15 On Tuesday 21 April 2009, Pierre-Yves Aquilanti wrote: > Hello everyone, > > i'm trying to integrate petsc code into one big software. petsc would act > as a solver for the program. > The way that i try to integrate my petsc code is like that: > > ######################## > program myoldnonpetsccode > > use mymodule > > implicit none > > call process_petsc > > end myoldnonpetsccode > ######################## > module mymodule > implicit none > > #include "finclude/petsc.h" > > contains > subroutine process_petsc > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > > call PetscFinalize(ierr) > > end subroutine process_petsc > end module mymodule > ############### > > > The compiler used for petsc, mpich and the non-petsc code is the same > (PGI). I'm perfectly making my module file (.mod) for mymodule and object > one (.o). This one is inserted into a library file (libtest.a). > When i try to make my binary "myoldnonpetsccode" my compiler tells me > during the linking process that there's two undefined reference for > 'petscinitialize_' and 'petscfinalize_'. I verified that petsc libraries > where included during linking process (with -L/mypathtopetsclib and > -lpetsc). > I don't find any answer to this on the internet and documentation. > > Do you have any clue on what would be the problem ? 
> > Thanks a lot > > Best regards > > PYA From knepley at gmail.com Tue Apr 21 11:09:32 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 21 Apr 2009 11:09:32 -0500 Subject: integrating petsc code into non petsc code In-Reply-To: <2b9153980904210827j1ea10a3dp81b4db21e874cc70@mail.gmail.com> References: <2b9153980904210827j1ea10a3dp81b4db21e874cc70@mail.gmail.com> Message-ID: We can't tell anything unless you send a) the WHOLE error message b) the exact command line executed Matt On Tue, Apr 21, 2009 at 10:27 AM, Pierre-Yves Aquilanti wrote: > Hello everyone, > > i'm trying to integrate petsc code into one big software. petsc would act > as a solver for the program. > The way that i try to integrate my petsc code is like that: > > ######################## > program myoldnonpetsccode > > use mymodule > > implicit none > > call process_petsc > > end myoldnonpetsccode > ######################## > module mymodule > implicit none > > #include "finclude/petsc.h" > > contains > subroutine process_petsc > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > > call PetscFinalize(ierr) > > end subroutine process_petsc > end module mymodule > ############### > > > The compiler used for petsc, mpich and the non-petsc code is the same > (PGI). I'm perfectly making my module file (.mod) for mymodule and object > one (.o). This one is inserted into a library file (libtest.a). > When i try to make my binary "myoldnonpetsccode" my compiler tells me > during the linking process that there's two undefined reference for > 'petscinitialize_' and 'petscfinalize_'. I verified that petsc libraries > where included during linking process (with -L/mypathtopetsclib and > -lpetsc). > I don't find any answer to this on the internet and documentation. > > Do you have any clue on what would be the problem ? > > Thanks a lot > > Best regards > > PYA > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From xy2102 at columbia.edu Tue Apr 21 14:05:59 2009 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Tue, 21 Apr 2009 15:05:59 -0400 Subject: -snes_type test Message-ID: <20090421150559.iyaj0gsw8444owkg@cubmail.cc.columbia.edu> Hi, I am testing my hand coded jacobian matrix in multi processors with the option "-snes_type test -snes_test_display", and I find that 1) my hand coded jacobian is different from the finite difference jacobian running with 2 processors. But 2) my jacobian is the same as the finite difference jacobian matrix running with 1 single processor. Also 3) my hand coded jacobian with 1 processor is the same as my hand coded jacobian with 2 processors. Where could be wrong about multiprocessors' jacobian matrix? Thanks very much! -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From knepley at gmail.com Tue Apr 21 14:10:45 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 21 Apr 2009 14:10:45 -0500 Subject: -snes_type test In-Reply-To: <20090421150559.iyaj0gsw8444owkg@cubmail.cc.columbia.edu> References: <20090421150559.iyaj0gsw8444owkg@cubmail.cc.columbia.edu> Message-ID: You could calculate an index incorrectly. Pick the first value that is different. Follow the entire computation by hand. 
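A cheap cross-check before following entries by hand, and one that behaves the same on 1 or 2 processors, is to compare the action of the hand-coded Jacobian on a direction vector with a finite-difference directional derivative of the residual. The fragment below is only a sketch (snes, the assembled Jacobian J, and the current state x are assumed to exist already; the names and the step size h are illustrative, not taken from this thread):

/* Sketch: compare J*v with (F(x + h*v) - F(x))/h for a random direction v. */
Vec        v, xp, Fx, Fxp, Jv;
PetscReal  h = 1.e-7, err, nrm;

VecDuplicate(x, &v);  VecDuplicate(x, &xp);  VecDuplicate(x, &Jv);
VecDuplicate(x, &Fx); VecDuplicate(x, &Fxp);

VecSetRandom(v, PETSC_NULL);              /* any nonzero direction will do      */
SNESComputeFunction(snes, x, Fx);         /* F(x)                               */
VecWAXPY(xp, h, v, x);                    /* xp = x + h*v                       */
SNESComputeFunction(snes, xp, Fxp);       /* F(x + h*v)                         */

MatMult(J, v, Jv);                        /* hand-coded Jacobian applied to v   */
VecAXPY(Fxp, -1.0, Fx);                   /* F(x+h*v) - F(x)                    */
VecScale(Fxp, 1.0/h);                     /* finite-difference estimate of J*v  */
VecAXPY(Fxp, -1.0, Jv);                   /* difference between the two         */
VecNorm(Fxp, NORM_2, &err);
VecNorm(Jv,  NORM_2, &nrm);
PetscPrintf(PETSC_COMM_WORLD, "||FD - J v||/||J v|| = %g\n", err/nrm);

If this ratio is small (roughly of order h) on one processor but not on two, a likely suspect is how ghost-point contributions enter the hand-coded Jacobian.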
Matt On Tue, Apr 21, 2009 at 2:05 PM, (Rebecca) Xuefei YUAN wrote: > Hi, > > I am testing my hand coded jacobian matrix in multi processors with the > option "-snes_type test -snes_test_display", and I find that > 1) my hand coded jacobian is different from the finite difference jacobian > running with 2 processors. > But > 2) my jacobian is the same as the finite difference jacobian matrix running > with 1 single processor. > Also > 3) my hand coded jacobian with 1 processor is the same as my hand coded > jacobian with 2 processors. > > Where could be wrong about multiprocessors' jacobian matrix? > > Thanks very much! > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Apr 21 14:23:01 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 21 Apr 2009 14:23:01 -0500 Subject: -snes_type test In-Reply-To: <20090421150559.iyaj0gsw8444owkg@cubmail.cc.columbia.edu> References: <20090421150559.iyaj0gsw8444owkg@cubmail.cc.columbia.edu> Message-ID: <01D8B46C-4C06-4BA4-AA3D-653F9835AA09@mcs.anl.gov> Check that your function evaluation is the same with 1 and 2 processors. Check the ordering of the unknowns and how they are partitioned and that proper ghost point values are set. On Apr 21, 2009, at 2:05 PM, (Rebecca) Xuefei YUAN wrote: > Hi, > > I am testing my hand coded jacobian matrix in multi processors with > the option "-snes_type test -snes_test_display", and I find that > 1) my hand coded jacobian is different from the finite difference > jacobian running with 2 processors. > But > 2) my jacobian is the same as the finite difference jacobian matrix > running with 1 single processor. > Also > 3) my hand coded jacobian with 1 processor is the same as my hand > coded jacobian with 2 processors. > > Where could be wrong about multiprocessors' jacobian matrix? > > Thanks very much! > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > From schuang at ats.ucla.edu Wed Apr 22 19:26:41 2009 From: schuang at ats.ucla.edu (Shao-Ching Huang) Date: Wed, 22 Apr 2009 17:26:41 -0700 Subject: Scattering vectors with different DA stencil types Message-ID: <49EFB5C1.8010001@ats.ucla.edu> Hi, I am trying to use DA type "STENCIL_STAR" for matrix solve, while using DA type "STENCIL_BOX" for some auxiliary operations. My question is: is the following sequence of operations valid (especially step 3 below): 1. create two DA's, one with DA_STENCIL_STAR and the other with DA_STENCIL_BOX: DA da1, da2; DACreate3d(..., DA_STENCIL_STAR, ... , &da1); DACreate3d(..., DA_STENCIL_BOX, ..., &da2); (All other parameters of the two DA's are identical, on all processors.) 2. Create a global vector "x1" associated with "da1" (STENCIL_STAR): Vec x1; Mat A; DACreateGlobalVector(da1, &x1); ... DAGetMatrix(da1,..., &A); ... (compute RHS, requiring STENCIL_STAR) ... ... (solve matrix) ... 3. Scatter global vector "x1" onto a local vector "y2", which is assocoated with da2 (STENCIL_BOX): Vec y2; DAGetLocalVector(da2, &y2); DAGlobalToLocalBegin(da1, x1, INSERT_VALUES, y2); DAGlobalToLocalEnd (da1, x1, INSERT_VALUES, y2); ... 
(operations on y2, requiring STENCIL_BOX) ... The main reason to use "da1" (STENCIL_STAR) for matrix solve is that it introduces much less number of entries per row (9 in STENCIL_STAR vs. 27 in STENCIL_BOX). All operations on "x1" only require STENCIL_STAR. The computation on "y2" (requiring STENCIL_BOX) is only a small fraction of the entire code. Thanks, Shao-Ching From knepley at gmail.com Wed Apr 22 19:44:01 2009 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 22 Apr 2009 19:44:01 -0500 Subject: Scattering vectors with different DA stencil types In-Reply-To: <49EFB5C1.8010001@ats.ucla.edu> References: <49EFB5C1.8010001@ats.ucla.edu> Message-ID: I believe this should work since they have the same overlap size. The stencil shape should only determine the scatter. Matt On Wed, Apr 22, 2009 at 7:26 PM, Shao-Ching Huang wrote: > Hi, > > I am trying to use DA type "STENCIL_STAR" for matrix solve, while > using DA type "STENCIL_BOX" for some auxiliary operations. > > My question is: is the following sequence of operations valid > (especially step 3 below): > > 1. create two DA's, one with DA_STENCIL_STAR and the other with > DA_STENCIL_BOX: > > DA da1, da2; > DACreate3d(..., DA_STENCIL_STAR, ... , &da1); > DACreate3d(..., DA_STENCIL_BOX, ..., &da2); > > (All other parameters of the two DA's are identical, on all > processors.) > > 2. Create a global vector "x1" associated with "da1" (STENCIL_STAR): > > Vec x1; > Mat A; > DACreateGlobalVector(da1, &x1); > ... > DAGetMatrix(da1,..., &A); > ... (compute RHS, requiring STENCIL_STAR) ... > ... (solve matrix) ... > > 3. Scatter global vector "x1" onto a local vector "y2", which is > assocoated with da2 (STENCIL_BOX): > > Vec y2; > DAGetLocalVector(da2, &y2); > DAGlobalToLocalBegin(da1, x1, INSERT_VALUES, y2); > DAGlobalToLocalEnd (da1, x1, INSERT_VALUES, y2); > ... (operations on y2, requiring STENCIL_BOX) ... > > > The main reason to use "da1" (STENCIL_STAR) for matrix solve is that > it introduces much less number of entries per row (9 in STENCIL_STAR > vs. 27 in STENCIL_BOX). All operations on "x1" only require > STENCIL_STAR. The computation on "y2" (requiring STENCIL_BOX) is only > a small fraction of the entire code. > > Thanks, > > Shao-Ching > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From li at loyno.edu Wed Apr 22 21:44:29 2009 From: li at loyno.edu (Xuefeng Li) Date: Wed, 22 Apr 2009 21:44:29 -0500 (CDT) Subject: dmmg_grid_sequence: KSP not functional? In-Reply-To: <49EFB5C1.8010001@ats.ucla.edu> References: <49EFB5C1.8010001@ats.ucla.edu> Message-ID: Hello, everyone. I am running src/snes/examples/tutorials/ex19.c to test the use of multi-level dmmg in Petsc with option -dmmg_grid_sequence. In all the tests I've run, I observed that on the coarsest level, KSPSolve() always converges in one iteration with reason 4 and residual 0. And option -ksp_monitor is not producing any output on this level. Attached is an output from one test run with a two-level dmmg, refine factor 2 and mesh 33x33 (coarse)/65x65 (fine). The line containing "step in LS:" is printed from src/snes/impls/ls/ls.c to report KSP activities regarding KSP converge reason (kreason) on every iteration. It feels like KSP for the coarsest level is either not functional or a direct solver, whereas KSP for finer levels are iterative solvers. 
What is the KSP type associated with SNES on the coarsest level? Is the above observation by design in Petsc? Regards, --Xuefeng Li, (504)865-3340(phone) Like floating clouds, the heart rests easy Like flowing water, the spirit stays free http://www.loyno.edu/~li/home New Orleans, Louisiana (504)865-2051(fax) -------------- next part -------------- lid velocity = 0, prandtl # = 1, grashof # = 10 0 SNES Function norm 3.027343750000e-01 1-st step in LS: kiter= 1; kreason=4; kres= 0.000000000000e+00; 1 SNES Function norm 2.715281504674e-04 2-nd step in LS: kiter= 1; kreason=4; kres= 0.000000000000e+00; 2 SNES Function norm 6.154554861599e-11 0 SNES Function norm 1.459106922253e-01 0 KSP Residual norm 1.459106922253e-01 1 KSP Residual norm 1.371836659970e-01 2 KSP Residual norm 2.540661848461e-02 3 KSP Residual norm 6.181889597814e-03 4 KSP Residual norm 1.572134257147e-03 5 KSP Residual norm 2.287065092537e-04 6 KSP Residual norm 3.184241572285e-05 7 KSP Residual norm 4.332199061241e-06 8 KSP Residual norm 8.673786649533e-07 1-st step in LS: kiter= 8; kreason=2; kres= 8.673786649533e-07; 1 SNES Function norm 9.473925324366e-07 0 KSP Residual norm 9.473925324366e-07 1 KSP Residual norm 1.125539671715e-07 2 KSP Residual norm 1.532200439274e-08 3 KSP Residual norm 4.962833769150e-09 4 KSP Residual norm 1.093531357780e-09 5 KSP Residual norm 1.096111146965e-10 6 KSP Residual norm 2.341320410628e-11 7 KSP Residual norm 3.853426393455e-12 2-nd step in LS: kiter= 7; kreason=2; kres= 3.853426393455e-12; 2 SNES Function norm 3.854044974807e-12 Number of Newton iterations = 2 ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./ex19 on a cygwin_bl named AMOEBA.no.cox.net with 16 processors, by Friend Wed Apr 22 21:12:05 2009 Using Petsc Release Version 3.0.0, Patch 2, Wed Jan 14 22:57:05 CST 2009 Max Max/Min Avg Total Time (sec): 4.090e+01 1.00345 4.085e+01 Objects: 4.610e+02 1.00000 4.610e+02 Flops: 3.517e+08 1.00787 3.497e+08 5.596e+09 Flops/sec: 8.598e+06 1.00727 8.563e+06 1.370e+08 Memory: 1.730e+07 1.01353 2.744e+08 MPI Messages: 2.664e+03 1.37594 2.313e+03 3.700e+04 MPI Message Lengths: 9.758e+06 1.12054 3.939e+03 1.458e+08 MPI Reductions: 8.862e+01 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 2.4873e-01 0.6% 0.0000e+00 0.0% 1.500e+01 0.0% 1.621e-03 0.0% 0.000e+00 0.0% 1: SetUp: 2.6383e+00 6.5% 1.3020e+05 0.0% 5.790e+02 1.6% 4.420e-01 0.0% 1.400e+02 9.9% 2: Solve: 1.6828e+01 41.2% 2.7978e+09 50.0% 1.792e+04 48.4% 1.969e+03 50.0% 5.030e+02 35.5% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. 
Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). %T - percent time in this phase %F - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ ########################################################## # # # WARNING!!! # # # # This code was compiled with a debugging option, # # To get timing results run config/configure.py # # using --with-debugging=no, the performance will # # be generally two or three times faster. # # # ########################################################## Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage PetscBarrier 2 1.0 1.0574e-0111.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 29 0 0 0 0 0 --- Event Stage 1: SetUp VecSet 3 1.0 3.1624e-0424.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecScatterBegin 1 1.0 5.5172e-03790.0 0.00e+00 0.0 3.3e+01 1.3e+02 0.0e+00 0 0 0 0 0 0 0 6 25 0 0 VecScatterEnd 1 1.0 9.1836e-032191.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatMultTranspose 1 1.0 2.3881e-02 2.8 5.00e+03 1.1 3.3e+01 1.3e+02 2.0e+00 0 0 0 0 0 1 58 6 25 1 3 MatAssemblyBegin 3 1.0 9.4317e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00 0 0 0 0 0 3 0 0 0 4 0 MatAssemblyEnd 3 1.0 4.5032e-01 1.0 0.00e+00 0.0 2.6e+02 2.3e+01 3.0e+01 1 0 1 0 2 17 0 45 37 21 0 MatFDColorCreate 2 1.0 7.6469e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+01 2 0 0 0 3 29 0 0 0 29 0 --- Event Stage 2: Solve VecDot 4 1.0 7.6557e-02 1.8 5.92e+03 1.2 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 1 1 VecMDot 45 1.0 7.7941e-01 2.0 2.17e+05 1.1 0.0e+00 0.0e+00 4.5e+01 1 0 0 0 3 3 0 0 0 9 4 VecNorm 123 1.0 1.7008e+00 1.4 2.69e+05 1.1 0.0e+00 0.0e+00 1.2e+02 3 0 0 0 8 8 0 0 0 23 2 VecScale 77 1.0 1.7737e-03 2.5 8.90e+04 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 734 VecCopy 164 1.0 1.1524e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 250 1.0 2.4710e-03 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecAXPY 182 1.0 2.2335e-03 1.8 2.88e+05 1.2 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1856 VecAYPX 15 1.0 3.7295e-04 2.5 3.47e+04 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1359 VecWAXPY 4 1.0 6.9087e-0420.1 2.96e+03 1.2 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 62 VecMAXPY 77 1.0 2.5467e-03 2.2 3.21e+05 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1845 VecPointwiseMult 2 1.0 1.1175e-05 1.3 6.48e+02 1.3 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 780 VecScatterBegin 233 1.0 1.2532e+00 2.9 0.00e+00 0.0 1.5e+04 1.3e+03 0.0e+00 2 0 40 13 0 5 0 83 25 0 0 VecScatterEnd 233 1.0 3.2011e+00 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 7 0 0 0 0 16 0 0 0 0 0 VecReduceArith 4 1.0 6.7606e-05 1.7 5.92e+03 1.2 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1257 
VecReduceComm 2 1.0 5.9115e-02 4.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 60 1.0 9.8417e-01 1.8 2.08e+05 1.1 0.0e+00 0.0e+00 6.0e+01 2 0 0 0 4 4 0 0 0 12 3 MatMult 110 1.0 2.0059e+00 1.6 4.13e+06 1.1 5.0e+03 3.0e+02 0.0e+00 4 1 14 1 0 10 2 28 2 0 30 MatMultAdd 15 1.0 5.0544e-01 9.0 7.50e+04 1.1 5.0e+02 1.3e+02 0.0e+00 1 0 1 0 0 2 0 3 0 0 2 MatMultTranspose 32 1.0 7.8889e-01 1.4 1.60e+05 1.1 1.1e+03 1.3e+02 6.4e+01 2 0 3 0 5 4 0 6 0 13 3 MatSolve 122 1.0 1.4828e-01 1.3 3.11e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 9 0 0 0 1 18 0 0 0 3320 MatLUFactorSym 2 1.0 3.5973e-01 5.5 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00 0 0 0 0 0 1 0 0 0 1 0 MatLUFactorNum 6 1.0 1.6635e+00 2.0 1.37e+08 1.0 0.0e+00 0.0e+00 0.0e+00 3 39 0 0 0 8 79 0 0 0 1322 MatILUFactorSym 1 1.0 2.5755e-03 4.6 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 0 0 0 0 1 0 MatAssemblyBegin 10 1.0 3.1545e-01 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01 1 0 0 0 1 1 0 0 0 2 0 MatAssemblyEnd 10 1.0 7.6078e-02 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00 0 0 0 0 0 0 0 0 0 1 0 MatGetRowIJ 3 1.0 9.6632e-04 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 3 1.0 5.7125e-02 6.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01 0 0 0 0 1 0 0 0 0 2 0 MatZeroEntries 6 1.0 3.1484e-04 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatFDColorApply 6 1.0 3.9806e-01 1.7 1.74e+06 1.2 0.0e+00 0.0e+00 2.4e+01 1 0 0 0 2 2 1 0 0 5 62 MatFDColorFunc 126 1.0 4.4332e-03 1.5 1.59e+06 1.2 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 1 0 0 0 5096 MatGetRedundant 4 1.0 3.7212e+00 3.9 0.00e+00 0.0 1.9e+03 2.7e+04 4.0e+00 6 0 5 36 0 14 0 11 72 1 0 SNESSolve 2 1.0 1.6165e+01 1.0 1.76e+08 1.0 1.8e+04 4.1e+03 5.0e+02 40 50 48 50 35 96100 99100100 173 SNESLineSearch 4 1.0 3.4570e-01 1.3 1.98e+05 1.2 3.8e+02 2.4e+02 1.6e+01 1 0 1 0 1 2 0 2 0 3 8 SNESFunctionEval 6 1.0 9.3704e-02 2.9 9.32e+04 1.2 2.9e+02 2.4e+02 2.0e+00 0 0 1 0 0 0 0 2 0 0 14 SNESJacobianEval 4 1.0 5.3182e-01 1.1 1.75e+06 1.2 3.5e+02 2.0e+02 2.8e+01 1 0 1 0 2 3 1 2 0 6 47 KSPGMRESOrthog 45 1.0 7.8182e-01 2.0 4.35e+05 1.1 0.0e+00 0.0e+00 4.5e+01 1 0 0 0 3 3 0 0 0 9 8 KSPSetup 14 1.0 5.7125e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 3 0 0 0 0 0 KSPSolve 4 1.0 1.5273e+01 1.0 1.74e+08 1.0 1.7e+04 4.3e+03 4.5e+02 37 49 46 50 32 91 99 95100 90 181 PCSetUp 6 1.0 6.3247e+00 1.6 1.37e+08 1.0 3.0e+03 1.8e+04 5.5e+01 12 39 8 37 4 30 79 17 75 11 348 PCSetUpOnBlocks 30 1.0 9.0615e-03 2.0 2.80e+05 1.1 0.0e+00 0.0e+00 7.0e+00 0 0 0 0 0 0 0 0 0 1 468 PCApply 17 1.0 6.7339e+00 1.0 3.56e+07 1.0 1.3e+04 1.4e+03 3.7e+02 16 10 36 12 26 40 20 74 25 73 83 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. 
--- Event Stage 0: Main Stage Viewer 0 1 336 0 --- Event Stage 1: SetUp Distributed array 4 0 0 0 Vec 52 18 32088 0 Vec Scatter 16 0 0 0 Index Set 112 32 19808 0 IS L to G Mapping 8 0 0 0 Matrix 24 0 0 0 Matrix FD Coloring 4 0 0 0 SNES 4 0 0 0 Krylov Solver 10 2 1024 0 Preconditioner 10 2 664 0 Viewer 1 0 0 0 --- Event Stage 2: Solve Distributed array 0 4 24928 0 Vec 136 170 1474968 0 Vec Scatter 8 24 12960 0 Index Set 46 126 494032 0 IS L to G Mapping 0 8 18912 0 Matrix 10 34 26220032 0 Matrix FD Coloring 0 4 477456 0 SNES 0 4 2640 0 Krylov Solver 6 14 75520 0 Preconditioner 6 14 6744 0 Container 4 4 944 0 ======================================================================================================================== Average time to get PetscTime(): 3.91111e-06 Average time for MPI_Barrier(): 0.00701251 Average time for zero size MPI_Send(): 0.000452013 #PETSc Option Table entries: -da_grid_x 33 -da_grid_y 33 -dmmg_grid_sequence -grashof 1.0E01 -ksp_monitor -lidvelocity 0 -log_summary -nlevels 2 -prandtl 1 -snes_monitor #End o PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 4 sizeof(void*) 4 sizeof(PetscScalar) 8 Configure run at: Wed Mar 25 18:05:57 2009 Configure options: --download-f-blas-lapack=1 --download-mpich=1 --with-debugging=1 --with-cc=gcc --with-fc=g77 PETSC_ARCH=cygwin_blas_lapack --useThreads=0 --with-shared=0 ----------------------------------------- Libraries compiled on Wed Mar 25 18:08:45 CDT 2009 on amoeba Machine characteristics: CYGWIN_NT-5.1 amoeba 1.5.25(0.156/4/2) 2008-06-12 19:34 i686 Cygwin Using PETSc directory: /home/Friend/software/petsc-3.0.0-p2 Using PETSc arch: cygwin_blas_lapack ----------------------------------------- Using C compiler: /home/Friend/software/petsc-3.0.0-p2/cygwin_blas_lapack/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -g3 Using Fortran compiler: /home/Friend/software/petsc-3.0.0-p2/cygwin_blas_lapack/bin/mpif77 -Wall -Wno-unused-variable -g ----------------------------------------- Using include paths: -I/home/Friend/software/petsc-3.0.0-p2/cygwin_blas_lapack/include -I/home/Friend/software/petsc-3.0.0-p2/include -I/home/Friend/software/petsc-3.0.0-p2/cygwin_blas_lapack/include ------------------------------------------ Using C linker: /home/Friend/software/petsc-3.0.0-p2/cygwin_blas_lapack/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -g3 Using Fortran linker: /home/Friend/software/petsc-3.0.0-p2/cygwin_blas_lapack/bin/mpif77 -Wall -Wno-unused-variable -g Using libraries: -Wl,-rpath,/home/Friend/software/petsc-3.0.0-p2/cygwin_blas_lapack/lib -L/home/Friend/software/petsc-3.0.0-p2/cygwin_blas_lapack/lib -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc -Wl,-rpath,/home/Friend/software/petsc-3.0.0-p2/cygwin_blas_lapack/lib -L/home/Friend/software/petsc-3.0.0-p2/cygwin_blas_lapack/lib -lflapack -lfblas -ldl -L/home/Friend/software/petsc-3.0.0-p2/cygwin_blas_lapack/lib -lpmpich -lmpich -lfrtbegin -lg2c -L/usr/lib/gcc/i686-pc-cygwin/3.4.4 -lcygwin -luser32 -ladvapi32 -lshell32 -lgdi32 -luser32 -ladvapi32 -lkernel32 -ldl ------------------------------------------ From jed at 59A2.org Thu Apr 23 02:47:58 2009 From: jed at 59A2.org (Jed Brown) Date: Thu, 23 Apr 2009 09:47:58 +0200 Subject: dmmg_grid_sequence: KSP not functional? 
In-Reply-To: References: <49EFB5C1.8010001@ats.ucla.edu> Message-ID: <20090423074758.GW2744@brakk.ethz.ch> On Wed 2009-04-22 21:44, Xuefeng Li wrote: > It feels like KSP for the coarsest level is either not > functional or a direct solver, whereas KSP for finer > levels are iterative solvers. What is the KSP type > associated with SNES on the coarsest level? Is the above > observation by design in Petsc? Yes, this is by design. The whole point of multigrid is to be able to propagate information globally in each iteration, while spending the minimum effort to do it. Usually this means that the global matrix is small enough to be easily solved with a direct solver, often redundantly. Run with -dmmg_view and look under 'mg_coarse_' to see what is running, and with '-help |grep mg_coarse' to see what you can tune on the command line. If you do iterations on the coarse level, you waste all your time on the network because each processor has almost nothing to do. If you use a domain decomposition preconditioner that deteriorates with the number of subdomains (anything without its *own* coarse-level solve) then you can't observe optimal scaling. The PETSc default is to use the 'preonly' KSP and a redundant direct solve (every process gets the whole matrix coarse-level matrix and solves it sequentially). Sometimes it's not feasible to make the coarse level small enough for such a direct solve to be effective (common with complex geometry). In this case, you can use -mg_coarse_pc_redundant_number and a parallel direct solver (mumps, superlu_dist, etc) to solve semi-redundantly (e.g. groups of 8 processors work together to factor the coarse-level matrix). You are welcome to try an iterative solver on the coarse level, redundantly or otherwise. Jed -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From Andreas.Grassl at student.uibk.ac.at Thu Apr 23 10:38:59 2009 From: Andreas.Grassl at student.uibk.ac.at (Andreas Grassl) Date: Thu, 23 Apr 2009 17:38:59 +0200 Subject: Preallocating Matrix Message-ID: <49F08B93.8090501@student.uibk.ac.at> Hello, I'm assembling large matrices giving just the numbers of zero per row and wondering if it is possible to extract the nonzero-structure in array-format it can be fed again into MatSeqAIJSetPreallocation(Mat B,PetscInt nz,const PetscInt nnz[]) to detect the bottleneck? cheers ando -- /"\ Grassl Andreas \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik X against HTML email Technikerstr. 13 Zi 709 / \ +43 (0)512 507 6091 From balay at mcs.anl.gov Thu Apr 23 10:52:31 2009 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 23 Apr 2009 10:52:31 -0500 (CDT) Subject: Preallocating Matrix In-Reply-To: <49F08B93.8090501@student.uibk.ac.at> References: <49F08B93.8090501@student.uibk.ac.at> Message-ID: Not sure what you mean by "numbers of zero per row" Do you mean "number of zeros per row" or "column indices of zeros for each row"?. Either way - you should be able to write a single loop over this thingy - to compute the required nnz[] Satish On Thu, 23 Apr 2009, Andreas Grassl wrote: > Hello, > > I'm assembling large matrices giving just the numbers of zero per row and > wondering if it is possible to extract the nonzero-structure in array-format it > can be fed again into > > MatSeqAIJSetPreallocation(Mat B,PetscInt nz,const PetscInt nnz[]) > > to detect the bottleneck? 
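The single loop Satish describes can also be run over an already assembled matrix to recover its row-wise nonzero counts. A sketch, under the assumption that A is an assembled AIJ matrix (the names A, nnz, rstart, rend are illustrative only):

PetscInt rstart, rend, i, ncols, *nnz;

MatGetOwnershipRange(A, &rstart, &rend);
PetscMalloc((rend - rstart)*sizeof(PetscInt), &nnz);
for (i = rstart; i < rend; i++) {
  MatGetRow(A, i, &ncols, PETSC_NULL, PETSC_NULL);     /* only the count is needed */
  nnz[i - rstart] = ncols;
  MatRestoreRow(A, i, &ncols, PETSC_NULL, PETSC_NULL);
}
/* feed nnz into MatSeqAIJSetPreallocation(B, 0, nnz); in parallel, split the
   counts into diagonal/off-diagonal parts for MatMPIAIJSetPreallocation()     */
PetscFree(nnz);

Printing the largest entries of nnz is usually enough to see which rows blow past the preallocation estimate that was used originally.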
> > cheers > > ando > > From Hung.V.Nguyen at usace.army.mil Thu Apr 23 11:07:40 2009 From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS) Date: Thu, 23 Apr 2009 11:07:40 -0500 Subject: Cg/asm doesn't scale Message-ID: Hello, I tried to solver the SPD linear system with using cg/asm preconditioner and found that it doesn't scale well, see table below. Note: it does scale well with cg/jacobi preconditioner. Do you know why it doesn't scale? Thanks, -hung Number Pes Solver Time (secs) #it 1 31.317345 544 2 263.172225 6959 4 734.828840 23233 8 805.217591 41250 16 611.813716 49262 32 345.331928 49792 64 212.084555 53771 --- 1 : aprun -n 1 ./test_matrix_read -ksp_type cg -pc_type asm -pc_asm_type basic -sub_pc_type ilu -sub_ksp_type preonly -ksp_rtol 1.0e-12 -ksp_max_it 100000 Time in PETSc solver: 31.317345 seconds The number of iteration = 544 The solution residual error = 1.658653e-08 2 norm 7.885361e-07 infinity norm 6.738382e-09 1 norm 2.124207e-04 Application 679466 resources: utime 0, stime 0 ************************ Beginning new run ************************ 2 : aprun -n 2 ./test_matrix_read -ksp_type cg -pc_type asm -pc_asm_type basic -sub_pc_type ilu -sub_ksp_type preonly -ksp_rtol 1.0e-12 -ksp_max_it 100000 Time in PETSc solver: 263.172225 seconds The number of iteration = 6959 The solution residual error = 1.794494e-08 2 norm 6.579571e-07 infinity norm 8.745052e-09 1 norm 1.907733e-04 -- Here is info about matrix A: Computed as <178353> Computed as <0> Computed as <3578321> Computed as <27> Computed as <6> Computed as <76553> Computed as <76553> From knepley at gmail.com Thu Apr 23 11:12:07 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 23 Apr 2009 11:12:07 -0500 Subject: Cg/asm doesn't scale In-Reply-To: References: Message-ID: On Thu, Apr 23, 2009 at 11:07 AM, Nguyen, Hung V ERDC-ITL-MS < Hung.V.Nguyen at usace.army.mil> wrote: > Hello, > > I tried to solver the SPD linear system with using cg/asm preconditioner > and > found that it doesn't scale well, see table below. Note: it does scale well > with cg/jacobi preconditioner. > > Do you know why it doesn't scale? ILU is incredibly unpredictable. You have not provided the Jacobi numbers, but the particular nonzero pattern of the non-overlapping matrix must be much more amenable. Also, this is a really really bad preconditioner for your system. I would put my time into figuring out why my system is so ill-conditioned and try to formulate a good preconditioner, like an approximate system, etc. 
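One concrete way to act on the "approximate system" suggestion is to assemble a second, simplified operator and hand it to KSP as the preconditioning matrix while still solving with the true operator. The fragment below is only a sketch in the petsc-3.0-era calling sequence (A, P, b, x are assumed to be built elsewhere; what goes into P, such as lumping or dropping the extreme coefficients, is entirely problem dependent):

Mat  A, P;        /* A: true operator; P: simplified operator, assembled elsewhere */
Vec  b, x;        /* right-hand side and solution, created elsewhere               */
KSP  ksp;

KSPCreate(PETSC_COMM_WORLD, &ksp);
KSPSetType(ksp, KSPCG);
KSPSetOperators(ksp, A, P, SAME_NONZERO_PATTERN);  /* precondition with P, solve with A      */
KSPSetFromOptions(ksp);                            /* -pc_type asm/ilu/etc. still apply, to P */
KSPSolve(ksp, b, x);

Whether this helps depends entirely on how well P captures the stiff part of the operator that is making the system ill-conditioned.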
Matt > > Thanks, > > -hung > > Number Pes Solver Time (secs) #it > 1 31.317345 544 > 2 263.172225 6959 > 4 734.828840 23233 > 8 805.217591 41250 > 16 611.813716 49262 > 32 345.331928 49792 > 64 212.084555 53771 > > > --- > 1 : aprun -n 1 ./test_matrix_read -ksp_type cg -pc_type asm -pc_asm_type > basic -sub_pc_type ilu -sub_ksp_type preonly -ksp_rtol 1.0e-12 > -ksp_max_it > 100000 > Time in PETSc solver: 31.317345 seconds > The number of iteration = 544 > The solution residual error = 1.658653e-08 > 2 norm 7.885361e-07 > infinity norm 6.738382e-09 > 1 norm 2.124207e-04 > > Application 679466 resources: utime 0, stime 0 > ************************ Beginning new run ************************ > > 2 : aprun -n 2 ./test_matrix_read -ksp_type cg -pc_type asm -pc_asm_type > basic -sub_pc_type ilu -sub_ksp_type preonly -ksp_rtol 1.0e-12 > -ksp_max_it > 100000 > Time in PETSc solver: 263.172225 seconds > The number of iteration = 6959 > The solution residual error = 1.794494e-08 > 2 norm 6.579571e-07 > infinity norm 8.745052e-09 > 1 norm 1.907733e-04 > > -- Here is info about matrix A: > > Computed as <178353> > Computed as <0> > Computed as <3578321> > Computed as <27> > Computed as <6> > Computed as <76553> > Computed as <76553> > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From Hung.V.Nguyen at usace.army.mil Thu Apr 23 14:13:17 2009 From: Hung.V.Nguyen at usace.army.mil (Nguyen, Hung V ERDC-ITL-MS) Date: Thu, 23 Apr 2009 14:13:17 -0500 Subject: Cg/asm doesn't scale In-Reply-To: References: Message-ID: Hello Matt, >ILU is incredibly unpredictable. I got the same result when running without setting ILU in sub_pc_type. It seems to me that direct solver is set up as default when solving for sub_ksp_type(?). Please let me know if it is not correct. >You have not provided the Jacobi numbers, but the particular nonzero pattern of the non-overlapping matrix must be >much more amenable. Also, this is a really really bad preconditioner for your system. Indeed, the asm preconditioner is not really bad preconditioner for some of my ill-conditioned systems. In some other SPD linear systems, I have found that cg with asm preconditioner converges better than others. And, it does scale well within the size of matrix, see attached file. However, it doesn't scale in this case. Here is the solver time for cg/jacobi. The performance of cg/asm is better than cg/jacobi in the range from 1 to 4 processors. Number Pes Solver Time (secs) #it Solver Time(secs) cg/asm cg/jacobi 1 31.317345 544 1999.276566 2 263.172225 6959 1188.067975 4 734.828840 23233 984.062940 8 805.217591 41250 538.102407 16 611.813716 49262 308.547316 32 345.331928 49792 170.074248 64 212.084555 53771 92.398144 >I would put my time into figuring out why my system is so ill-conditioned and try to formulate a good preconditioner, like an approximate system, etc. The linear system is from groundwater flow in a water repellent soil that can cause a very ill-conditioned linear system. 
-Hung -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Matthew Knepley Sent: Thursday, April 23, 2009 11:12 AM To: PETSc users list Subject: Re: Cg/asm doesn't scale On Thu, Apr 23, 2009 at 11:07 AM, Nguyen, Hung V ERDC-ITL-MS wrote: Hello, I tried to solver the SPD linear system with using cg/asm preconditioner and found that it doesn't scale well, see table below. Note: it does scale well with cg/jacobi preconditioner. Do you know why it doesn't scale? ILU is incredibly unpredictable. You have not provided the Jacobi numbers, but the particular nonzero pattern of the non-overlapping matrix must be much more amenable. Also, this is a really really bad preconditioner for your system. I would put my time into figuring out why my system is so ill-conditioned and try to formulate a good preconditioner, like an approximate system, etc. Matt Thanks, -hung Number Pes Solver Time (secs) #it 1 31.317345 544 2 263.172225 6959 4 734.828840 23233 8 805.217591 41250 16 611.813716 49262 32 345.331928 49792 64 212.084555 53771 --- 1 : aprun -n 1 ./test_matrix_read -ksp_type cg -pc_type asm -pc_asm_type basic -sub_pc_type ilu -sub_ksp_type preonly -ksp_rtol 1.0e-12 -ksp_max_it 100000 Time in PETSc solver: 31.317345 seconds The number of iteration = 544 The solution residual error = 1.658653e-08 2 norm 7.885361e-07 infinity norm 6.738382e-09 1 norm 2.124207e-04 Application 679466 resources: utime 0, stime 0 ************************ Beginning new run ************************ 2 : aprun -n 2 ./test_matrix_read -ksp_type cg -pc_type asm -pc_asm_type basic -sub_pc_type ilu -sub_ksp_type preonly -ksp_rtol 1.0e-12 -ksp_max_it 100000 Time in PETSc solver: 263.172225 seconds The number of iteration = 6959 The solution residual error = 1.794494e-08 2 norm 6.579571e-07 infinity norm 8.745052e-09 1 norm 1.907733e-04 -- Here is info about matrix A: Computed as <178353> Computed as <0> Computed as <3578321> Computed as <27> Computed as <6> Computed as <76553> Computed as <76553> -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- A non-text attachment was scrubbed... Name: fig-B.png Type: image/png Size: 8401 bytes Desc: fig-B.png URL: From knepley at gmail.com Thu Apr 23 14:50:16 2009 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 23 Apr 2009 14:50:16 -0500 Subject: Cg/asm doesn't scale In-Reply-To: References: Message-ID: On Thu, Apr 23, 2009 at 2:13 PM, Nguyen, Hung V ERDC-ITL-MS < Hung.V.Nguyen at usace.army.mil> wrote: > Hello Matt, > > >ILU is incredibly unpredictable. > > I got the same result when running without setting ILU in sub_pc_type. It > seems to me that direct solver is set up as default when solving for > sub_ksp_type(?). Please let me know if it is not correct. > You can see what the default is using -ksp_view > >You have not provided the Jacobi numbers, but the particular nonzero > pattern > of the non-overlapping matrix must be > >much more amenable. Also, this is a really really bad preconditioner for > your system. > > Indeed, the asm preconditioner is not really bad preconditioner for some of > my ill-conditioned systems. In some other SPD linear systems, I have found > that cg with asm preconditioner converges better than others. And, it does > scale well within the size of matrix, see attached file. However, it > doesn't > scale in this case. 
Here is the solver time for cg/jacobi. The performance > of > cg/asm is better than cg/jacobi in the range from 1 to 4 processors. > > Number Pes Solver Time (secs) #it Solver Time(secs) > cg/asm cg/jacobi > 1 31.317345 544 1999.276566 > 2 263.172225 6959 1188.067975 > 4 734.828840 23233 984.062940 > 8 805.217591 41250 538.102407 > 16 611.813716 49262 308.547316 > 32 345.331928 49792 170.074248 > 64 212.084555 53771 92.398144 > > >I would put my time into figuring out why my system is so ill-conditioned > and try to formulate a good preconditioner, like an approximate system, > etc. > > The linear system is from groundwater flow in a water repellent soil that > can > cause a very ill-conditioned linear system. This is why people develop special purpose discretizations for these problems. Matt > > -Hung > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov > [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Matthew Knepley > Sent: Thursday, April 23, 2009 11:12 AM > To: PETSc users list > Subject: Re: Cg/asm doesn't scale > > On Thu, Apr 23, 2009 at 11:07 AM, Nguyen, Hung V ERDC-ITL-MS > wrote: > > > Hello, > > I tried to solver the SPD linear system with using cg/asm > preconditioner and > found that it doesn't scale well, see table below. Note: it does > scale well > with cg/jacobi preconditioner. > > Do you know why it doesn't scale? > > > ILU is incredibly unpredictable. You have not provided the Jacobi numbers, > but the particular nonzero pattern of the non-overlapping matrix must be > much > more amenable. Also, this is a really really bad preconditioner for your > system. I would put my time into figuring out why my system is so > ill-conditioned and try to formulate a good preconditioner, like an > approximate system, etc. > > Matt > > > > Thanks, > > -hung > > Number Pes Solver Time (secs) #it > 1 31.317345 544 > 2 263.172225 6959 > 4 734.828840 23233 > 8 805.217591 41250 > 16 611.813716 49262 > 32 345.331928 49792 > 64 212.084555 53771 > > > --- > 1 : aprun -n 1 ./test_matrix_read -ksp_type cg -pc_type asm > -pc_asm_type > basic -sub_pc_type ilu -sub_ksp_type preonly -ksp_rtol 1.0e-12 > -ksp_max_it > 100000 > Time in PETSc solver: 31.317345 seconds > The number of iteration = 544 > The solution residual error = 1.658653e-08 > 2 norm 7.885361e-07 > infinity norm 6.738382e-09 > 1 norm 2.124207e-04 > > Application 679466 resources: utime 0, stime 0 > ************************ Beginning new run ************************ > > 2 : aprun -n 2 ./test_matrix_read -ksp_type cg -pc_type asm > -pc_asm_type > basic -sub_pc_type ilu -sub_ksp_type preonly -ksp_rtol 1.0e-12 > -ksp_max_it > 100000 > Time in PETSc solver: 263.172225 seconds > The number of iteration = 6959 > The solution residual error = 1.794494e-08 > 2 norm 6.579571e-07 > infinity norm 8.745052e-09 > 1 norm 1.907733e-04 > > -- Here is info about matrix A: > > Computed as <178353> > Computed as <0> > Computed as <3578321> > Computed as <27> > Computed as <6> > Computed as <76553> > Computed as <76553> > > > > > > > > > -- > What most experimenters take for granted before they begin their > experiments > is infinitely more interesting than any results to which their experiments > lead. > -- Norbert Wiener > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From enjoywm at cs.wm.edu Sun Apr 26 16:30:00 2009 From: enjoywm at cs.wm.edu (Yixun Liu) Date: Sun, 26 Apr 2009 17:30:00 -0400 Subject: How to write a program, which can be run on 1 and multiple processors? Message-ID: <49F4D258.2030803@cs.wm.edu> Hi, I want to make my code run on 1 or multiple processors. The code, which can run on multiple processors is like the following, MatCreate(PETSC_COMM_WORLD, &A); MatSetSizes(A, 3*numOfVerticesOfOneProcessor, 3*numOfVerticesOfOneProcessor, systemSize, systemSize); MatSetFromOptions(A); MatMPIAIJSetPreallocation(A, 50, PETSC_NULL, 50, PETSC_NULL); However if I want to run on 1 processor I have to change the last code to: MatSeqAIJSetPreallocation(A,1000,PETSC_NULL); How to avoid changing code? Thanks. Yixun From jed at 59A2.org Sun Apr 26 16:37:10 2009 From: jed at 59A2.org (Jed Brown) Date: Sun, 26 Apr 2009 23:37:10 +0200 Subject: How to write a program, which can be run on 1 and multiple processors? In-Reply-To: <49F4D258.2030803@cs.wm.edu> References: <49F4D258.2030803@cs.wm.edu> Message-ID: <49F4D406.3000908@59A2.org> Yixun Liu wrote: > Hi, > I want to make my code run on 1 or multiple processors. The code, which > can run on multiple processors is like the following, > > MatCreate(PETSC_COMM_WORLD, &A); > MatSetSizes(A, 3*numOfVerticesOfOneProcessor, > 3*numOfVerticesOfOneProcessor, systemSize, systemSize); You don't have to provide both local and global size unless you want PETSc to check that these numbers are compatible. > MatSetFromOptions(A); > MatMPIAIJSetPreallocation(A, 50, PETSC_NULL, 50, PETSC_NULL); > > However if I want to run on 1 processor I have to change the last code to: > MatSeqAIJSetPreallocation(A,1000,PETSC_NULL); ^^^^ you probably mean 100 > How to avoid changing code? Call both always. You can call {Seq,MPI}BAIJ preallocation while you're at it. The preallocation functions don't do anything unless they match the matrix type that you have. Jed From Chun.SUN at 3ds.com Mon Apr 27 08:13:15 2009 From: Chun.SUN at 3ds.com (SUN Chun) Date: Mon, 27 Apr 2009 09:13:15 -0400 Subject: SSOR problem In-Reply-To: <49F4D406.3000908@59A2.org> References: <49F4D258.2030803@cs.wm.edu> <49F4D406.3000908@59A2.org> Message-ID: <2545DC7A42DF804AAAB2ADA5043D57DA54CB7B@CORP-CLT-EXB01.ds> Hello, I have an *particular* Ax=b which I want to solve with CG preconditioned by SSOR using PETSc. Then some specific strange things happen. Please allow me to describe all the symptoms that I found here. Thanks for your help: 0) All solves are in serial. 1) A 20-line academic code and another matlab code converge the solution with identical residual history and number of iterations (76), they match well. If I run without SSOR (just diagonal scaled CG): PETSc, academic code, and matlab all match well with same number (180) of iterations. 2) PETSc with SSOR seems to give me -8 indefinite pc. If I play with omega other than using 1.0 (as in Gauss-Seidel), sometimes (with omega=1.2) I see stagnation and it won't converge then exceeds the maximum iteration allowed (500). Residuals even don't go down. If I don't say -ksp_diagonal_scale, I get -8 too. So, PETSc with SSOR either gives me -8 or -3. 3) The above was run with -pc_sor_symmetric. However, if I ran with -pc_sor_forward, I got a convergence curve identical to what I have without any preconditioner, with same iterations (180). If I ran with -pc_sor_backward, it gives me -8 indefinite pc. 
4) If I increase any of the number of -pc_sor_its (or lits) to 2, it converges (but still don't match the matlab/academic code). 5) The matrix has good condition number (~8000), maximum diagonal is about 6, minimum diagonal is about 1.1. There's no zero or negative diagonal entries in this matrix. It's spd otherwise matlab won't be able to solve it. 6) The behavior is independent of rhs. I've tried random rhs and get the same scenario. 7) Here is the confusing part: All other matrices that we have except for this one can be solved by PETSc with same settings very well. And they match the academic code and matlab code. It's just this matrix that exhibits the strange behavior. I tend to eliminate the possibility of interface problem because all other matrices and other preconditioner settings work well. We're running out of ideas here, if you have any insight please say anything or point any directions. Thanks a lot, Chun From Chun.SUN at 3ds.com Mon Apr 27 09:34:41 2009 From: Chun.SUN at 3ds.com (SUN Chun) Date: Mon, 27 Apr 2009 10:34:41 -0400 Subject: SSOR problem In-Reply-To: <2545DC7A42DF804AAAB2ADA5043D57DA54CB7B@CORP-CLT-EXB01.ds> References: <49F4D258.2030803@cs.wm.edu> <49F4D406.3000908@59A2.org> <2545DC7A42DF804AAAB2ADA5043D57DA54CB7B@CORP-CLT-EXB01.ds> Message-ID: <2545DC7A42DF804AAAB2ADA5043D57DA54CB9C@CORP-CLT-EXB01.ds> Hello, I have an update to this problem: I found that in MatRelax_SeqAIJ function (mat/impl/aij/seq/aij.c), I have: diag = a->diag and: diag[i] is has exactly the same value of a->i[i] for each row i. This gives me n=0 when doing forward pass of zero initial guess. That explains why setting -pc_sor_forward will give me identical results as if I run pure DSCG. I assume that this a->diag[] stores the sparse column index of diagonal entries of a matrix. Now it seems to be improperly set. I will pursue this further in debugger. Do you know which function it should be set during the assembly process? That would point a short-cut for me.... Thanks again! Chun -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of SUN Chun Sent: Monday, April 27, 2009 9:13 AM To: PETSc users list Subject: SSOR problem Hello, I have an *particular* Ax=b which I want to solve with CG preconditioned by SSOR using PETSc. Then some specific strange things happen. Please allow me to describe all the symptoms that I found here. Thanks for your help: 0) All solves are in serial. 1) A 20-line academic code and another matlab code converge the solution with identical residual history and number of iterations (76), they match well. If I run without SSOR (just diagonal scaled CG): PETSc, academic code, and matlab all match well with same number (180) of iterations. 2) PETSc with SSOR seems to give me -8 indefinite pc. If I play with omega other than using 1.0 (as in Gauss-Seidel), sometimes (with omega=1.2) I see stagnation and it won't converge then exceeds the maximum iteration allowed (500). Residuals even don't go down. If I don't say -ksp_diagonal_scale, I get -8 too. So, PETSc with SSOR either gives me -8 or -3. 3) The above was run with -pc_sor_symmetric. However, if I ran with -pc_sor_forward, I got a convergence curve identical to what I have without any preconditioner, with same iterations (180). If I ran with -pc_sor_backward, it gives me -8 indefinite pc. 4) If I increase any of the number of -pc_sor_its (or lits) to 2, it converges (but still don't match the matlab/academic code). 
5) The matrix has good condition number (~8000), maximum diagonal is about 6, minimum diagonal is about 1.1. There's no zero or negative diagonal entries in this matrix. It's spd otherwise matlab won't be able to solve it. 6) The behavior is independent of rhs. I've tried random rhs and get the same scenario. 7) Here is the confusing part: All other matrices that we have except for this one can be solved by PETSc with same settings very well. And they match the academic code and matlab code. It's just this matrix that exhibits the strange behavior. I tend to eliminate the possibility of interface problem because all other matrices and other preconditioner settings work well. We're running out of ideas here, if you have any insight please say anything or point any directions. Thanks a lot, Chun From knepley at gmail.com Mon Apr 27 10:32:30 2009 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 27 Apr 2009 10:32:30 -0500 Subject: SSOR problem In-Reply-To: <2545DC7A42DF804AAAB2ADA5043D57DA54CB9C@CORP-CLT-EXB01.ds> References: <49F4D258.2030803@cs.wm.edu> <49F4D406.3000908@59A2.org> <2545DC7A42DF804AAAB2ADA5043D57DA54CB7B@CORP-CLT-EXB01.ds> <2545DC7A42DF804AAAB2ADA5043D57DA54CB9C@CORP-CLT-EXB01.ds> Message-ID: The function MatMarkDiagonal_SeqAIJ() takes care of this. Matt On Mon, Apr 27, 2009 at 9:34 AM, SUN Chun wrote: > Hello, > > I have an update to this problem: > > I found that in MatRelax_SeqAIJ function (mat/impl/aij/seq/aij.c), I have: > > diag = a->diag and: > > diag[i] is has exactly the same value of a->i[i] for each row i. This gives > me n=0 when doing forward pass of zero initial guess. That explains why > setting -pc_sor_forward will give me identical results as if I run pure > DSCG. > > I assume that this a->diag[] stores the sparse column index of diagonal > entries of a matrix. Now it seems to be improperly set. I will pursue this > further in debugger. Do you know which function it should be set during the > assembly process? That would point a short-cut for me.... > > Thanks again! > Chun > > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov [mailto: > petsc-users-bounces at mcs.anl.gov] On Behalf Of SUN Chun > Sent: Monday, April 27, 2009 9:13 AM > To: PETSc users list > Subject: SSOR problem > > Hello, > > I have an *particular* Ax=b which I want to solve with CG preconditioned > by SSOR using PETSc. Then some specific strange things happen. Please > allow me to describe all the symptoms that I found here. Thanks for your > help: > > 0) All solves are in serial. > > 1) A 20-line academic code and another matlab code converge the solution > with identical residual history and number of iterations (76), they > match well. If I run without SSOR (just diagonal scaled CG): PETSc, > academic code, and matlab all match well with same number (180) of > iterations. > > 2) PETSc with SSOR seems to give me -8 indefinite pc. If I play with > omega other than using 1.0 (as in Gauss-Seidel), sometimes (with > omega=1.2) I see stagnation and it won't converge then exceeds the > maximum iteration allowed (500). Residuals even don't go down. If I > don't say -ksp_diagonal_scale, I get -8 too. So, PETSc with SSOR either > gives me -8 or -3. > > 3) The above was run with -pc_sor_symmetric. However, if I ran with > -pc_sor_forward, I got a convergence curve identical to what I have > without any preconditioner, with same iterations (180). If I ran with > -pc_sor_backward, it gives me -8 indefinite pc. 
> > 4) If I increase any of the number of -pc_sor_its (or lits) to 2, it > converges (but still don't match the matlab/academic code). > > 5) The matrix has good condition number (~8000), maximum diagonal is > about 6, minimum diagonal is about 1.1. There's no zero or negative > diagonal entries in this matrix. It's spd otherwise matlab won't be able > to solve it. > > 6) The behavior is independent of rhs. I've tried random rhs and get the > same scenario. > > 7) Here is the confusing part: All other matrices that we have except > for this one can be solved by PETSc with same settings very well. And > they match the academic code and matlab code. It's just this matrix that > exhibits the strange behavior. I tend to eliminate the possibility of > interface problem because all other matrices and other preconditioner > settings work well. > > We're running out of ideas here, if you have any insight please say > anything or point any directions. > > Thanks a lot, > Chun > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Apr 27 11:19:42 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 27 Apr 2009 11:19:42 -0500 Subject: How to write a program, which can be run on 1 and multiple processors? In-Reply-To: <49F4D258.2030803@cs.wm.edu> References: <49F4D258.2030803@cs.wm.edu> Message-ID: <6AD59573-FC71-44D4-AA6D-7A440FB8091D@mcs.anl.gov> On Apr 26, 2009, at 4:30 PM, Yixun Liu wrote: > Hi, > I want to make my code run on 1 or multiple processors. The code, > which > can run on multiple processors is like the following, > > MatCreate(PETSC_COMM_WORLD, &A); > MatSetSizes(A, 3*numOfVerticesOfOneProcessor, > 3*numOfVerticesOfOneProcessor, systemSize, systemSize); > MatSetFromOptions(A); > MatMPIAIJSetPreallocation(A, 50, PETSC_NULL, 50, PETSC_NULL); > > However if I want to run on 1 processor I have to change the last > code to: > MatSeqAIJSetPreallocation(A,1000,PETSC_NULL); > > How to avoid changing code? Just leave BOTH lines in the code. PETSc has the unique feature that it ignores optional function calls that are not appropriate for the particular situation. So for runs with one process the MPI version is ignored and for runs with multiple processors the Seq version is ignored. Barry > > > Thanks. > > Yixun From Andreas.Grassl at student.uibk.ac.at Mon Apr 27 13:08:14 2009 From: Andreas.Grassl at student.uibk.ac.at (Andreas Grassl) Date: Mon, 27 Apr 2009 20:08:14 +0200 Subject: Preallocating Matrix In-Reply-To: References: <49F08B93.8090501@student.uibk.ac.at> Message-ID: <49F5F48E.2070104@student.uibk.ac.at> Satish Balay schrieb: > Not sure what you mean by "numbers of zero per row" I actually meant "number of nonzeros per row", but providing the nnz[]-array to MatSeqAIJSetPreallocation brought only very little speed-up, by the same factor as preallocating just with nz. I suppose that I have to provide already the assembled sparse matrix to the PETSc-routines instead of calculating it element by element, as long as I don't distribute this calculation over many cpu's. cheers, ando > > Do you mean "number of zeros per row" or "column indices of zeros for > each row"?. 
Either way - you should be able to write a single loop > over this thingy - to compute the required nnz[] > > Satish > > On Thu, 23 Apr 2009, Andreas Grassl wrote: > >> Hello, >> >> I'm assembling large matrices giving just the numbers of zero per row and >> wondering if it is possible to extract the nonzero-structure in array-format it >> can be fed again into >> >> MatSeqAIJSetPreallocation(Mat B,PetscInt nz,const PetscInt nnz[]) >> >> to detect the bottleneck? >> >> cheers >> >> ando >> -- /"\ Grassl Andreas \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik X against HTML email Technikerstr. 13 Zi 709 / \ +43 (0)512 507 6091 From bsmith at mcs.anl.gov Mon Apr 27 13:14:53 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 27 Apr 2009 13:14:53 -0500 Subject: Preallocating Matrix In-Reply-To: <49F5F48E.2070104@student.uibk.ac.at> References: <49F08B93.8090501@student.uibk.ac.at> <49F5F48E.2070104@student.uibk.ac.at> Message-ID: On Apr 27, 2009, at 1:08 PM, Andreas Grassl wrote: > Satish Balay schrieb: >> Not sure what you mean by "numbers of zero per row" > > I actually meant "number of nonzeros per row", but providing the > nnz[]-array to > MatSeqAIJSetPreallocation brought only very little speed-up, by the > same factor > as preallocating just with nz. It is almost 100% sure that you are NOT correctly providing enough size per row. Run the code with -info (save the output in a file) and then send petsc-maint at mcs.anl.gov that file. Barry > > > I suppose that I have to provide already the assembled sparse matrix > to the > PETSc-routines instead of calculating it element by element, as long > as I don't > distribute this calculation over many cpu's. > > cheers, > > ando > >> >> Do you mean "number of zeros per row" or "column indices of zeros for >> each row"?. Either way - you should be able to write a single loop >> over this thingy - to compute the required nnz[] >> >> Satish >> >> On Thu, 23 Apr 2009, Andreas Grassl wrote: >> >>> Hello, >>> >>> I'm assembling large matrices giving just the numbers of zero per >>> row and >>> wondering if it is possible to extract the nonzero-structure in >>> array-format it >>> can be fed again into >>> >>> MatSeqAIJSetPreallocation(Mat B,PetscInt nz,const PetscInt nnz[]) >>> >>> to detect the bottleneck? >>> >>> cheers >>> >>> ando >>> > > > -- > /"\ Grassl Andreas > \ / ASCII Ribbon Campaign Uni Innsbruck Institut f. Mathematik > X against HTML email Technikerstr. 13 Zi 709 > / \ +43 (0)512 507 6091 From Chun.SUN at 3ds.com Mon Apr 27 13:45:45 2009 From: Chun.SUN at 3ds.com (SUN Chun) Date: Mon, 27 Apr 2009 14:45:45 -0400 Subject: SSOR problem In-Reply-To: References: <49F4D258.2030803@cs.wm.edu> <49F4D406.3000908@59A2.org><2545DC7A42DF804AAAB2ADA5043D57DA54CB7B@CORP-CLT-EXB01.ds><2545DC7A42DF804AAAB2ADA5043D57DA54CB9C@CORP-CLT-EXB01.ds> Message-ID: <2545DC7A42DF804AAAB2ADA5043D57DA54CC06@CORP-CLT-EXB01.ds> Hello, Sorry to raise the question again. It's a bit detailed.. I was trying to make the column indices a->j sorted for each row for my SEQAIJ Matrix (reason explained below). I looked up the source code and found MAT_COLUMNS_SORTED option, and I can do a MatSetOption to set that. However, even now with a->sorted=PETSC_TRUE, I still see my a->j's unsorted for each row after a bunch of MatSetValues. Is there a particular procedure that I should follow to have each row of my matrix sorted? Or I have to do it outside PETSc? 
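(For reference, a minimal sketch -- illustrative only, not code from this thread -- of how the stored ordering can be checked through the public interface instead of reading a->j directly; it assumes an assembled SeqAIJ matrix A:

#include "petscmat.h"

PetscErrorCode CheckRowsSorted(Mat A)
{
  PetscErrorCode ierr;
  PetscInt       m, row, ncols, k;
  const PetscInt *cols;

  ierr = MatGetSize(A, &m, PETSC_NULL);CHKERRQ(ierr);
  for (row = 0; row < m; row++) {
    ierr = MatGetRow(A, row, &ncols, &cols, PETSC_NULL);CHKERRQ(ierr);  /* column indices of this row */
    for (k = 1; k < ncols; k++) {
      if (cols[k] <= cols[k-1]) {
        ierr = PetscPrintf(PETSC_COMM_SELF, "row %d: columns not ascending\n", (int)row);CHKERRQ(ierr);
      }
    }
    ierr = MatRestoreRow(A, row, &ncols, &cols, PETSC_NULL);CHKERRQ(ierr);
  }
  return 0;
}

For an assembled AIJ matrix this loop should report nothing, since the stored column indices are kept in ascending order per row.)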
I should not care how PETSc store matrix, however, the reason that I need it to be sorted is that in SSOR implementation, there seems to be an assumption that the column indices being sorted for zero initial guess ( n=diag[i]-a->i[i]; in MatRelax_SeqAIJ )... I suspect I might have misunderstandings here. But at least I would like to try a sorted matrix on SSOR to clarify. Any suggestions? Thanks, Chun From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Matthew Knepley Sent: Monday, April 27, 2009 11:33 AM To: PETSc users list Subject: Re: SSOR problem The function MatMarkDiagonal_SeqAIJ() takes care of this. Matt On Mon, Apr 27, 2009 at 9:34 AM, SUN Chun wrote: Hello, I have an update to this problem: I found that in MatRelax_SeqAIJ function (mat/impl/aij/seq/aij.c), I have: diag = a->diag and: diag[i] is has exactly the same value of a->i[i] for each row i. This gives me n=0 when doing forward pass of zero initial guess. That explains why setting -pc_sor_forward will give me identical results as if I run pure DSCG. I assume that this a->diag[] stores the sparse column index of diagonal entries of a matrix. Now it seems to be improperly set. I will pursue this further in debugger. Do you know which function it should be set during the assembly process? That would point a short-cut for me.... Thanks again! Chun -----Original Message----- From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of SUN Chun Sent: Monday, April 27, 2009 9:13 AM To: PETSc users list Subject: SSOR problem Hello, I have an *particular* Ax=b which I want to solve with CG preconditioned by SSOR using PETSc. Then some specific strange things happen. Please allow me to describe all the symptoms that I found here. Thanks for your help: 0) All solves are in serial. 1) A 20-line academic code and another matlab code converge the solution with identical residual history and number of iterations (76), they match well. If I run without SSOR (just diagonal scaled CG): PETSc, academic code, and matlab all match well with same number (180) of iterations. 2) PETSc with SSOR seems to give me -8 indefinite pc. If I play with omega other than using 1.0 (as in Gauss-Seidel), sometimes (with omega=1.2) I see stagnation and it won't converge then exceeds the maximum iteration allowed (500). Residuals even don't go down. If I don't say -ksp_diagonal_scale, I get -8 too. So, PETSc with SSOR either gives me -8 or -3. 3) The above was run with -pc_sor_symmetric. However, if I ran with -pc_sor_forward, I got a convergence curve identical to what I have without any preconditioner, with same iterations (180). If I ran with -pc_sor_backward, it gives me -8 indefinite pc. 4) If I increase any of the number of -pc_sor_its (or lits) to 2, it converges (but still don't match the matlab/academic code). 5) The matrix has good condition number (~8000), maximum diagonal is about 6, minimum diagonal is about 1.1. There's no zero or negative diagonal entries in this matrix. It's spd otherwise matlab won't be able to solve it. 6) The behavior is independent of rhs. I've tried random rhs and get the same scenario. 7) Here is the confusing part: All other matrices that we have except for this one can be solved by PETSc with same settings very well. And they match the academic code and matlab code. It's just this matrix that exhibits the strange behavior. 
I tend to eliminate the possibility of interface problem because all other matrices and other preconditioner settings work well. We're running out of ideas here, if you have any insight please say anything or point any directions. Thanks a lot, Chun -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Apr 27 13:50:37 2009 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 27 Apr 2009 13:50:37 -0500 Subject: SSOR problem In-Reply-To: <2545DC7A42DF804AAAB2ADA5043D57DA54CC06@CORP-CLT-EXB01.ds> References: <49F4D258.2030803@cs.wm.edu> <49F4D406.3000908@59A2.org><2545DC7A42DF804AAAB2ADA5043D57DA54CB7B@CORP-CLT-EXB01.ds><2545DC7A42DF804AAAB2ADA5043D57DA54CB9C@CORP-CLT-EXB01.ds> <2545DC7A42DF804AAAB2ADA5043D57DA54CC06@CORP-CLT-EXB01.ds> Message-ID: <97DE5C1A-D88E-424D-944D-DFE4B54D2B4B@mcs.anl.gov> 1) It looks like you are using an old version of PETSc. We highly recommend upgrading to petsc 3.0.0; we are not going to debug old versions of PETSc 2) the column indices are ALWAYS sorted (for each row). The MAT_COLUMNS_SORTED option was only for the column values passed into MatSetValues() it had nothing to do with how the values are actually stored. Barry On Apr 27, 2009, at 1:45 PM, SUN Chun wrote: > Hello, > > Sorry to raise the question again. It?s a bit detailed.. > > I was trying to make the column indices a->j sorted for each row for > my SEQAIJ Matrix (reason explained below). I looked up the source > code and found MAT_COLUMNS_SORTED option, and I can do a > MatSetOption to set that. However, even now with a- > >sorted=PETSC_TRUE, I still see my a->j?s unsorted for each row > after a bunch of MatSetValues. Is there a particular procedure that > I should follow to have each row of my matrix sorted? Or I have to > do it outside PETSc? > > I should not care how PETSc store matrix, however, the reason that I > need it to be sorted is that in SSOR implementation, there seems to > be an assumption that the column indices being sorted for zero > initial guess ( n=diag[i]-a->i[i]; in MatRelax_SeqAIJ )? I suspect I > might have misunderstandings here. But at least I would like to try > a sorted matrix on SSOR to clarify. > > Any suggestions? > > Thanks, > Chun > > From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov > ] On Behalf Of Matthew Knepley > Sent: Monday, April 27, 2009 11:33 AM > To: PETSc users list > Subject: Re: SSOR problem > > The function MatMarkDiagonal_SeqAIJ() takes care of this. > > Matt > On Mon, Apr 27, 2009 at 9:34 AM, SUN Chun wrote: > Hello, > > I have an update to this problem: > > I found that in MatRelax_SeqAIJ function (mat/impl/aij/seq/aij.c), I > have: > > diag = a->diag and: > > diag[i] is has exactly the same value of a->i[i] for each row i. > This gives me n=0 when doing forward pass of zero initial guess. > That explains why setting -pc_sor_forward will give me identical > results as if I run pure DSCG. > > I assume that this a->diag[] stores the sparse column index of > diagonal entries of a matrix. Now it seems to be improperly set. I > will pursue this further in debugger. Do you know which function it > should be set during the assembly process? That would point a short- > cut for me.... > > Thanks again! 
> Chun > > > -----Original Message----- > From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov > ] On Behalf Of SUN Chun > Sent: Monday, April 27, 2009 9:13 AM > To: PETSc users list > Subject: SSOR problem > > Hello, > > I have an *particular* Ax=b which I want to solve with CG > preconditioned > by SSOR using PETSc. Then some specific strange things happen. Please > allow me to describe all the symptoms that I found here. Thanks for > your > help: > > 0) All solves are in serial. > > 1) A 20-line academic code and another matlab code converge the > solution > with identical residual history and number of iterations (76), they > match well. If I run without SSOR (just diagonal scaled CG): PETSc, > academic code, and matlab all match well with same number (180) of > iterations. > > 2) PETSc with SSOR seems to give me -8 indefinite pc. If I play with > omega other than using 1.0 (as in Gauss-Seidel), sometimes (with > omega=1.2) I see stagnation and it won't converge then exceeds the > maximum iteration allowed (500). Residuals even don't go down. If I > don't say -ksp_diagonal_scale, I get -8 too. So, PETSc with SSOR > either > gives me -8 or -3. > > 3) The above was run with -pc_sor_symmetric. However, if I ran with > -pc_sor_forward, I got a convergence curve identical to what I have > without any preconditioner, with same iterations (180). If I ran with > -pc_sor_backward, it gives me -8 indefinite pc. > > 4) If I increase any of the number of -pc_sor_its (or lits) to 2, it > converges (but still don't match the matlab/academic code). > > 5) The matrix has good condition number (~8000), maximum diagonal is > about 6, minimum diagonal is about 1.1. There's no zero or negative > diagonal entries in this matrix. It's spd otherwise matlab won't be > able > to solve it. > > 6) The behavior is independent of rhs. I've tried random rhs and get > the > same scenario. > > 7) Here is the confusing part: All other matrices that we have except > for this one can be solved by PETSc with same settings very well. And > they match the academic code and matlab code. It's just this matrix > that > exhibits the strange behavior. I tend to eliminate the possibility of > interface problem because all other matrices and other preconditioner > settings work well. > > We're running out of ideas here, if you have any insight please say > anything or point any directions. > > Thanks a lot, > Chun > > > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which > their experiments lead. > -- Norbert Wiener From matthewreilly at mac.com Tue Apr 28 07:12:19 2009 From: matthewreilly at mac.com (Matt Reilly) Date: Tue, 28 Apr 2009 08:12:19 -0400 Subject: petsc-users Digest, Vol 4, Issue 28 In-Reply-To: References: Message-ID: <49F6F2A3.1030106@mac.com> > > Message: 2 > Date: Sun, 26 Apr 2009 23:37:10 +0200 > From: Jed Brown > Subject: Re: How to write a program, which can be run on 1 and > multiple processors? > To: PETSc users list > Message-ID: <49F4D406.3000908 at 59A2.org> > Content-Type: text/plain; charset=ISO-8859-1 > > Yixun Liu wrote: > >> Hi, >> I want to make my code run on 1 or multiple processors. 
The code, which >> can run on multiple processors is like the following, >> >> MatCreate(PETSC_COMM_WORLD, &A); >> MatSetSizes(A, 3*numOfVerticesOfOneProcessor, >> 3*numOfVerticesOfOneProcessor, systemSize, systemSize); >> > > You don't have to provide both local and global size unless you want > PETSc to check that these numbers are compatible. > > >> MatSetFromOptions(A); >> MatMPIAIJSetPreallocation(A, 50, PETSC_NULL, 50, PETSC_NULL); >> >> However if I want to run on 1 processor I have to change the last code to: >> MatSeqAIJSetPreallocation(A,1000,PETSC_NULL); >> > ^^^^ > you probably mean 100 > > >> How to avoid changing code? >> > > Call both always. You can call {Seq,MPI}BAIJ preallocation while you're > at it. The preallocation functions don't do anything unless they match > the matrix type that you have. > > Jed > > Jed's suggestion works, and is certainly reasonable. Personally, I'd advocate wrapping the sequence of matcreate/setfromoptions/preallocate in a higher level function, if possible. One version of the function exists for serial codes, the other for parallel. This keeps the code cleaner, doesn't rely on functions that don't do anything if the conditions aren't right, and will probably look cleaner (be easier to maintain). The disadvantage of this approach is that you have two routines that perform the same function, but in slightly different ways. If you find a bug in one, it is important that you remember to fix the bug in both routines. In my own code, when I'm forced into this situation, I put a large reminder comment in both routines to remind me that there bug fixes in one routine should be propagated to the other. Finally, some would suggest using #ifdef #endif conditional compilation. I'm not one of them. The problem with conditional compilation is that it is a slippery slope. You start off with just one or two instances, and then they breed. Soon you've got a giant program with dozens of places where a human reader will have quite a difficult time figuring out just what code gets compiled and what code doesn't. matt From knepley at gmail.com Tue Apr 28 08:20:00 2009 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 28 Apr 2009 08:20:00 -0500 Subject: petsc-users Digest, Vol 4, Issue 28 In-Reply-To: <49F6F2A3.1030106@mac.com> References: <49F6F2A3.1030106@mac.com> Message-ID: On Tue, Apr 28, 2009 at 7:12 AM, Matt Reilly wrote: > > >> Message: 2 >> Date: Sun, 26 Apr 2009 23:37:10 +0200 >> From: Jed Brown >> Subject: Re: How to write a program, which can be run on 1 and >> multiple processors? >> To: PETSc users list >> Message-ID: <49F4D406.3000908 at 59A2.org> >> Content-Type: text/plain; charset=ISO-8859-1 >> >> Yixun Liu wrote: >> >> >>> Hi, >>> I want to make my code run on 1 or multiple processors. The code, which >>> can run on multiple processors is like the following, >>> >>> MatCreate(PETSC_COMM_WORLD, &A); >>> MatSetSizes(A, 3*numOfVerticesOfOneProcessor, >>> 3*numOfVerticesOfOneProcessor, systemSize, systemSize); >>> >>> >> >> You don't have to provide both local and global size unless you want >> PETSc to check that these numbers are compatible. >> >> >> >>> MatSetFromOptions(A); >>> MatMPIAIJSetPreallocation(A, 50, PETSC_NULL, 50, PETSC_NULL); >>> >>> However if I want to run on 1 processor I have to change the last code >>> to: >>> MatSeqAIJSetPreallocation(A,1000,PETSC_NULL); >>> >>> >> ^^^^ >> you probably mean 100 >> >> >> >>> How to avoid changing code? >>> >>> >> >> Call both always. 
You can call {Seq,MPI}BAIJ preallocation while you're >> at it. The preallocation functions don't do anything unless they match >> the matrix type that you have. >> >> Jed >> >> >> > Jed's suggestion works, and is certainly reasonable. > > Personally, I'd advocate wrapping the sequence of > matcreate/setfromoptions/preallocate in a higher > level function, if possible. One version of the function exists for serial > codes, the other for > parallel. This keeps the code cleaner, doesn't rely on functions that > don't do anything if the > conditions aren't right, and will probably look cleaner (be easier to > maintain). These exist already. They are not easier to maintain, and have disadvantages. That is why we switched. > > The disadvantage of this approach is that you have two routines that > perform the same function, > but in slightly different ways. If you find a bug in one, it is important > that you remember to fix > the bug in both routines. In my own code, when I'm forced into this > situation, I put a large reminder > comment in both routines to remind me that there bug fixes in one routine > should be propagated to > the other. I am not sure what you mean. These routines do not perform the same function at all. There is no code reuse. Matt > > Finally, some would suggest using #ifdef #endif conditional compilation. > I'm not one of them. > The problem with conditional compilation is that it is a slippery slope. > You start off with just one > or two instances, and then they breed. Soon you've got a giant program with > dozens of places > where a human reader will have quite a difficult time figuring out just > what code gets compiled > and what code doesn't. > > matt > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuchangjohn at gmail.com Wed Apr 29 14:13:47 2009 From: liuchangjohn at gmail.com (liu chang) Date: Thu, 30 Apr 2009 03:13:47 +0800 Subject: Convex optimization with linear constraint? Message-ID: <94e43e390904291213q8845391vf1e37d06174a0c4@mail.gmail.com> I'm using PETSc + TAO's LMVM method for a convex optimization problem. As the project progresses, it's clear that some linear constraints are also needed, the problem now looks like: minimize f(vec_x) (f is neither linear or quadratic, but is convex) subject to A * vec_x = vec_b As LMVM does not support linear constraints, I'm looking for another solver. TAO lists several functions dealing with constraints, but they're all in the developer section, and in the samples linked from the manual I haven't found one that's linearly constrained. Is there a suitable one in TAO? Liu Chang From fuentesdt at gmail.com Wed Apr 29 14:37:09 2009 From: fuentesdt at gmail.com (David Fuentes) Date: Wed, 29 Apr 2009 14:37:09 -0500 (CDT) Subject: Convex optimization with linear constraint? In-Reply-To: <94e43e390904291213q8845391vf1e37d06174a0c4@mail.gmail.com> References: <94e43e390904291213q8845391vf1e37d06174a0c4@mail.gmail.com> Message-ID: Liu, Unless I'm missing something, I don't think you will directly find what you are looking for. You will prob have to solve A * vec_x = vec_b directly in your FormObjective function then use an adjoint method or something to compute your gradient directly in your FormGradient routine. 
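(A sketch of one standard reduction, not specific to TAO: if vec_x0 is any particular solution of A * vec_x0 = vec_b, and the columns of Z span the null space of A, then every feasible point can be written vec_x = vec_x0 + Z * vec_y, and the constrained problem becomes the unconstrained problem

    minimize g(vec_y) = f(vec_x0 + Z * vec_y),   with gradient   grad g(vec_y) = Z^T * grad f(vec_x0 + Z * vec_y),

which an unconstrained method such as LMVM can handle if f and its gradient are wrapped this way inside the FormObjective/FormGradient routines. Building Z explicitly is only practical when A has few rows or some convenient structure; otherwise an adjoint-style treatment of the constraint, as suggested above, is the usual route.)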
On Thu, 30 Apr 2009, liu chang wrote: > I'm using PETSc + TAO's LMVM method for a convex optimization problem. > As the project progresses, it's clear that some linear constraints are > also needed, the problem now looks like: > > minimize f(vec_x) (f is neither linear or quadratic, but is convex) > subject to > A * vec_x = vec_b > > As LMVM does not support linear constraints, I'm looking for another > solver. TAO lists several functions dealing with constraints, but > they're all in the developer section, and in the samples linked from > the manual I haven't found one that's linearly constrained. Is there a > suitable one in TAO? > > Liu Chang > From xy2102 at columbia.edu Wed Apr 29 22:18:06 2009 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Wed, 29 Apr 2009 23:18:06 -0400 Subject: Some strange results running in PETSc. Message-ID: <20090429231806.8il8cv8248wwc0sc@cubmail.cc.columbia.edu> Hi, I am running some codes and stores the solution in the text file. However, I found that some results are wired in the sense that some processors are "eating" my (i,j) index and the corresponding solution. For example, the solution at time step =235 on processor 6 is right, but at time step = 236 on processor 6, one grid solution is missing and thus the order of the index is wrong. In the attached two files: hp.solution.dt0.16700.n90.t235.p6 (right one) hp.solution.dt0.16700.n90.t236.p6 (wrong one) for example, in hp.solution.dt0.16700.n90.t235.p6 (right one) i j --------------------------------------------------------------------------------------------------------------------------------------------------------------------- 50 14 3.3212491928636803e-02 7.2992225179014901e-03 2.9841295384404947e+00 2.2004855368148415e-02 51 14 4.0287965701667774e-02 2.5401878231124070e-03 2.9201873761746322e+00 2.6251864239477816e-02 52 14 4.7084950235070790e-02 -1.5367647745423544e-03 2.8460826176461214e+00 3.1550405800570377e-02 53 14 5.3394938807608198e-02 -3.8189091837479271e-03 2.7618414550374171e+00 3.4078755072804334e-02 -------------------------------------------------------------------------------------------------------------------------------------------------------------------- however, in hp.solution.dt0.16700.n90.t236.p6 (wrong one) i j --------------------------------------------------------------------------------------------------------------------------------------------------------------------- 50 14 3.1239406700376452e-02 8.5179559039992043e-03 2.9840003096148520e+00 2.1760859158622522e-02 51 14 3.8032143218986063e-02 3.6341965920997721e-03 2.9198035731818854e+00 2.6200771510346648e-02 53 14 5.0661309132451274e-02 -2.9274557377189617e-03 2.7606822480069755e+00 3.4021016413777964e-02 54 14 5.6049570141121191e-02 -2.8111244430837979e-03 2.6669503598276267e+00 3.8855104759650566e-02 -------------------------------------------------------------------------------------------------------------------------------------------------------------------- the (i,j) = (52,14) is missing and as a result, one grid point solution is missing. I did not understand how this happens and why this happens? Any ideas? Thanks very much! 
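(For reference, a minimal sketch -- illustrative only, not the code used here -- of one way to dump the locally owned part of a 2d structured-grid solution with its global (i,j) indices; it assumes a DA named da with dof components per node, a global Vec X holding the current time step, real-valued PetscScalar, and a FILE *fp already opened by this process:

#include <stdio.h>
#include "petscda.h"

PetscErrorCode DumpLocalSolution(DA da, Vec X, PetscInt dof, FILE *fp)
{
  PetscErrorCode ierr;
  PetscInt       i, j, c, xs, ys, xm, ym;
  PetscScalar    ***x;

  /* owned corner (xs,ys) and widths (xm,ym) of this process's patch */
  ierr = DAGetCorners(da, &xs, &ys, PETSC_NULL, &xm, &ym, PETSC_NULL);CHKERRQ(ierr);
  ierr = DAVecGetArrayDOF(da, X, &x);CHKERRQ(ierr);
  for (j = ys; j < ys + ym; j++) {
    for (i = xs; i < xs + xm; i++) {
      fprintf(fp, "%d %d", (int)i, (int)j);
      for (c = 0; c < dof; c++) fprintf(fp, " %.16e", (double)x[j][i][c]);
      fprintf(fp, "\n");
    }
  }
  ierr = DAVecRestoreArrayDOF(da, X, &x);CHKERRQ(ierr);
  fflush(fp);  /* avoid losing buffered lines if the run ends abnormally */
  return 0;
}

Since a loop like this visits every owned (i,j) exactly once, a row that appears at one time step and is missing at the next would more likely come from the file I/O side (buffering, truncating or reopening the file, two writers on the same file) than from the grid loop itself.)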
Cheers, Rebecca -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102
-------------- next part --------------
[Attachment hp.solution.dt0.16700.n90.t235.p6: plain-text table of i, j and four solution values per grid point on processor 6 at time step 235; full numerical listing omitted.]
-------------- next part --------------
[Attachment hp.solution.dt0.16700.n90.t236.p6: plain-text table of i, j and four solution values per grid point on processor 6 at time step 236, in which the row for (i,j) = (52,14) is absent; full numerical listing omitted.]
2.1127097986248655e+00 3.4884809353343635e-02 61 15 5.7529751247830045e-02 1.0460710068973415e-02 1.9993774738603840e+00 3.9724115591299604e-02 62 15 5.5175252218634688e-02 5.3409711204844730e-03 1.8921308359354503e+00 5.2496712396929233e-02 63 15 5.2037410450241843e-02 -2.2747851642737934e-03 1.7907800253263921e+00 6.5762053150155264e-02 64 15 4.8279033811402944e-02 -1.1417834928251082e-02 1.6944982196290677e+00 7.3470040151639646e-02 65 15 4.4052468865163313e-02 -2.1031231645956900e-02 1.6025947320439060e+00 7.3704137425373142e-02 66 15 3.9426873580270957e-02 -3.0165440108044003e-02 1.5147700073386217e+00 6.8127974884097872e-02 67 15 3.4411629090671324e-02 -3.8177673850620604e-02 1.4309888709606662e+00 6.0169775697118955e-02 46 16 3.6833723122651186e-03 1.7999322118428033e-02 3.3011173281385342e+00 3.9481170514855486e-03 47 16 7.5627146100830612e-03 1.7047643559931484e-02 3.2846573235025076e+00 7.9218846321623630e-03 48 16 1.1772641824535333e-02 1.5444235372745510e-02 3.2574468827068142e+00 1.2110715364812945e-02 49 16 1.6354161193546012e-02 1.3167777762442129e-02 3.2196733627750476e+00 1.6188179059026857e-02 50 16 2.1246189621560493e-02 1.0152896374668940e-02 3.1718167193611371e+00 2.0438844117269744e-02 51 16 2.6304957685888194e-02 6.4394766376279049e-03 3.1142493966402136e+00 2.4439079384121383e-02 52 16 3.1359040062044562e-02 2.2103847266294707e-03 3.0475721801555138e+00 2.7792273190949679e-02 53 16 3.6197634224528992e-02 -2.0507635615149948e-03 2.9725001173349979e+00 3.1423980271713794e-02 54 16 4.0616289533776703e-02 -5.5389542805622414e-03 2.8894886955443386e+00 3.4343892798436602e-02 55 16 4.4461194946433198e-02 -7.4341105775384249e-03 2.7991299210096443e+00 3.6175689715075537e-02 56 16 4.7560655373134329e-02 -7.1993137880962807e-03 2.7021113813156039e+00 4.0004949285408660e-02 57 16 4.9737790163413516e-02 -4.7677302662924052e-03 2.5987183510924083e+00 4.6662917920987514e-02 58 16 5.0942647666601434e-02 -6.7611998961016192e-04 2.4891856555396528e+00 5.1322770918066474e-02 59 16 5.1273090967404962e-02 3.9605740158815596e-03 2.3747556034614536e+00 4.9362516296532315e-02 60 16 5.0846257578654463e-02 7.7494388222478826e-03 2.2580959622295702e+00 4.2331594612434693e-02 61 16 4.9705535518246448e-02 9.4522667959986637e-03 2.1426194557804705e+00 3.6243178822971296e-02 62 16 4.7846661358890936e-02 8.3311862515080114e-03 2.0312922204942891e+00 3.5988962507601474e-02 63 16 4.5211639124851426e-02 4.3109393023718686e-03 1.9257436684324223e+00 4.2810552872153643e-02 64 16 4.1869554240592022e-02 -2.1050072336794476e-03 1.8263686174480975e+00 5.2865507697845593e-02 65 16 3.7850343053435286e-02 -9.9817971770572757e-03 1.7326940379685423e+00 6.1696903443622622e-02 66 16 3.3261766316768859e-02 -1.8225412174832832e-02 1.6440426911644010e+00 6.5942704012431755e-02 67 16 2.8200411974401714e-02 -2.5872170714281471e-02 1.5598539266695661e+00 6.4722442427902122e-02 46 17 2.7909814229549758e-03 1.5920183606843382e-02 3.3891686321027614e+00 3.8217613950534634e-03 47 17 5.7741265423955461e-03 1.5230325973430542e-02 3.3736568026366642e+00 7.7020360122805644e-03 48 17 9.0900545589784575e-03 1.4047303912937289e-02 3.3479834585657664e+00 1.1679394866404845e-02 49 17 1.2787930782771744e-02 1.2333212846848471e-02 3.3123673885727012e+00 1.5749777690086165e-02 50 17 1.6826932935259425e-02 1.0021048269789475e-02 3.2671453997551376e+00 1.9714571173986332e-02 51 17 2.1085570430381777e-02 7.0537296317284915e-03 3.2127748470469406e+00 2.3703568521888527e-02 52 17 2.5400563235211501e-02 3.4689901226021358e-03 
3.1497236179210866e+00 2.7244588055865368e-02 53 17 2.9599708613581251e-02 -5.2985945916628413e-04 3.0786472900072428e+00 3.0309653255466838e-02 54 17 3.3498757437941375e-02 -4.4679986315416869e-03 3.0002194192096585e+00 3.3273885583864356e-02 55 17 3.6934802065971918e-02 -7.6466658828618153e-03 2.9150116141250999e+00 3.5405738363668525e-02 56 17 3.9780261323639550e-02 -9.3765356275258716e-03 2.8236630049889988e+00 3.6848053322292740e-02 57 17 4.1900445448949483e-02 -9.2174248612029979e-03 2.7267771538976673e+00 3.9699731410389877e-02 58 17 4.3170516650518048e-02 -7.1346125645286819e-03 2.6246910088478703e+00 4.4531992937905808e-02 59 17 4.3553651595013704e-02 -3.5773965530214399e-03 2.5177510847660365e+00 4.8500992058605565e-02 60 17 4.3124168327343518e-02 5.6368287251822773e-04 2.4069631138699674e+00 4.8310456949154437e-02 61 17 4.2003613809055616e-02 4.1622615545271130e-03 2.2943196886427137e+00 4.3798874136638445e-02 62 17 4.0285594004808432e-02 6.1489764561703901e-03 2.1824801675449335e+00 3.8044248276059706e-02 63 17 3.8008328976067290e-02 5.8030312107470658e-03 2.0740145070547964e+00 3.4732580319063101e-02 64 17 3.5122279050693012e-02 2.9473012505933211e-03 1.9707073577339782e+00 3.6206107264528851e-02 65 17 3.1611449215169424e-02 -2.0307488512853163e-03 1.8734218173768611e+00 4.1351502987761288e-02 66 17 2.7409081440758112e-02 -8.2808760527334647e-03 1.7821573734704503e+00 4.7975601582673208e-02 67 17 2.2566614702527379e-02 -1.4777251246354432e-02 1.6964961362780890e+00 5.3116761485928872e-02 46 18 1.9888451966816999e-03 1.4021965216939283e-02 3.4782737798598640e+00 3.7480446480427179e-03 47 18 4.1623402545757845e-03 1.3521582138279177e-02 3.4636779370366542e+00 7.5587735960865243e-03 48 18 6.6597497657014974e-03 1.2643079848070277e-02 3.4394870635857835e+00 1.1423635090215958e-02 49 18 9.5416610163460789e-03 1.1330183402671305e-02 3.4059343099467463e+00 1.5367357019324781e-02 50 18 1.2781785756659978e-02 9.5274516078279695e-03 3.3633039568756593e+00 1.9263629092100620e-02 51 18 1.6283324246881878e-02 7.1728715697225286e-03 3.3120003602160755e+00 2.3024108279582369e-02 52 18 1.9900071101208141e-02 4.2372186857669086e-03 3.2524935301396409e+00 2.6615976372997095e-02 53 18 2.3471883502174544e-02 7.8028090550609562e-04 3.1853218633770704e+00 2.9717802199741763e-02 54 18 2.6842006787867002e-02 -2.9790907710546396e-03 3.1111436216572383e+00 3.2390976982969548e-02 55 18 2.9857347644952219e-02 -6.6080404335774516e-03 3.0306271927543826e+00 3.4719388992462207e-02 56 18 3.2388776246008645e-02 -9.5204414056351168e-03 2.9444062640073065e+00 3.6248830083227955e-02 57 18 3.4332464060743161e-02 -1.1153834917633521e-02 2.8531323206690047e+00 3.7256726283129503e-02 58 18 3.5588281157605997e-02 -1.1151980540540913e-02 2.7573785448446975e+00 3.9099157827241708e-02 59 18 3.6073139969224670e-02 -9.4878637924174277e-03 2.6575414348954367e+00 4.2274255123182496e-02 60 18 3.5765297231992732e-02 -6.5170914058027551e-03 2.5540235057487002e+00 4.5336432789823591e-02 61 18 3.4725521711943735e-02 -2.9374493288927038e-03 2.4476265505146331e+00 4.6107117117330045e-02 62 18 3.3066637448603477e-02 3.4833225477064243e-04 2.3398034757533850e+00 4.3736967218314861e-02 63 18 3.0894860886939438e-02 2.4511253415925318e-03 2.2325417533228782e+00 3.9338116741805630e-02 64 18 2.8264886075118477e-02 2.7357077701325432e-03 2.1279344884093878e+00 3.5084628430534982e-02 65 18 2.5136288846852269e-02 1.0206940192492530e-03 2.0276940433449395e+00 3.3068294163427249e-02 66 18 2.1449081800845794e-02 -2.3717702492781385e-03 
1.9329412963178945e+00 3.3592561378594121e-02 67 18 1.7096619147710754e-02 -6.7032214349202167e-03 1.8440874688983158e+00 3.6202871094686381e-02 46 19 1.2740052234055854e-03 1.2350210657775395e-02 3.5684060203121968e+00 3.7201645059424799e-03 47 19 2.7212201875440236e-03 1.2001927149062589e-02 3.5546707140375728e+00 7.4791346619291245e-03 48 19 4.4764390101243081e-03 1.1355911254520032e-02 3.5319078360177771e+00 1.1295045927064373e-02 49 19 6.6075072696647466e-03 1.0342908700083949e-02 3.5003174194241269e+00 1.5130435072169672e-02 50 19 9.1052165937187540e-03 8.9045236322818043e-03 3.4601781732067960e+00 1.8946448272953505e-02 51 19 1.1893067239594716e-02 6.9977629143776459e-03 3.4118409297051429e+00 2.2605247225466280e-02 52 19 1.4848188684836380e-02 4.5921974363031135e-03 3.3557460379695643e+00 2.6048117934727718e-02 53 19 1.7823792081225014e-02 1.6919894206269155e-03 3.2923930029914326e+00 2.9184039025783915e-02 54 19 2.0674535497862880e-02 -1.6210490317361559e-03 3.2223522479971911e+00 3.1834605193882336e-02 55 19 2.3264432885013630e-02 -5.1340455175893033e-03 3.1462720165325053e+00 3.4054836447399359e-02 56 19 2.5468973659078169e-02 -8.4746311582271777e-03 3.0648138661854425e+00 3.5814690807619433e-02 57 19 2.7185780248629294e-02 -1.1166453513251996e-02 2.9786335347801409e+00 3.6860472139579437e-02 58 19 2.8333723612586632e-02 -1.2759505521837479e-02 2.8883780574919058e+00 3.7438130639472468e-02 59 19 2.8843827820327787e-02 -1.2964934416846794e-02 2.7946172053878833e+00 3.8373685408250320e-02 60 19 2.8667977155309006e-02 -1.1755985697228317e-02 2.6977996871399501e+00 4.0130296706124377e-02 61 19 2.7801788452603358e-02 -9.4045764822074229e-03 2.5983584395932584e+00 4.2117291037893925e-02 62 19 2.6299599400936718e-02 -6.4555749924357720e-03 2.4969451228278139e+00 4.3078308912578182e-02 63 19 2.4256020931767673e-02 -3.6172152135026726e-03 2.3946060650857777e+00 4.2113802671415852e-02 64 19 2.1762175416170232e-02 -1.5923922742227502e-03 2.2927765336266543e+00 3.9302558317053801e-02 65 19 1.8862549413262562e-02 -8.7074335629146529e-04 2.1930576312705052e+00 3.5553823105421670e-02 66 19 1.5516932140723605e-02 -1.5648050575657912e-03 2.0969270417763166e+00 3.2162999347674567e-02 67 19 1.1652554810847992e-02 -3.3693120869891261e-03 2.0055284054058959e+00 2.9744460882670277e-02 46 20 6.4219185273366251e-04 1.0876337345892564e-02 3.6595351914393377e+00 3.7167532655139695e-03 47 20 1.4435391412424611e-03 1.0656389877619160e-02 3.6466207366037633e+00 7.4595900663065153e-03 48 20 2.5320298354714480e-03 1.0205691506156566e-02 3.6252167880495723e+00 1.1238847306097664e-02 49 20 3.9791046154489494e-03 9.4374895997675935e-03 3.5955026103502963e+00 1.5024415383073990e-02 50 20 5.7908163792241753e-03 8.2833098807725929e-03 3.5577351981450649e+00 1.8758139057437350e-02 51 20 7.9115198759729891e-03 6.7069717339340437e-03 3.5122426907579301e+00 2.2359973283709585e-02 52 20 1.0240253604894903e-02 4.6977386498878058e-03 3.4594238125945131e+00 2.5714324397811638e-02 53 20 1.2649573038868015e-02 2.2597020500908561e-03 3.3997480349232916e+00 2.8766435703192518e-02 54 20 1.5004341089279456e-02 -5.7638147831177793e-04 3.3337364017495315e+00 3.1428584395995024e-02 55 20 1.7177300365405462e-02 -3.7204760349359776e-03 3.2619693079776049e+00 3.3615208883122326e-02 56 20 1.9053427418839912e-02 -6.9827970390013887e-03 3.1850758566208643e+00 3.5361925905582889e-02 57 20 2.0532355348241549e-02 -1.0056693976491310e-02 3.1037019662754166e+00 3.6613755841021015e-02 58 20 2.1534336687860093e-02 -1.2564944066849571e-02 
3.0184961890461777e+00 3.7252524175207462e-02 59 20 2.1999846575507822e-02 -1.4150430307438301e-02 2.9300904208056120e+00 3.7434173644289741e-02 60 20 2.1887481486382473e-02 -1.4583138966977816e-02 2.8390565602517466e+00 3.7651233442616887e-02 61 20 2.1178138222504889e-02 -1.3835787501924825e-02 2.7458784717581595e+00 3.8279927251220626e-02 62 20 1.9885015851241826e-02 -1.2119293594843955e-02 2.6510115491010788e+00 3.9198543312533904e-02 63 20 1.8059331402610974e-02 -9.8536576367896766e-03 2.5550063150746274e+00 3.9793371217983362e-02 64 20 1.5773099571040155e-02 -7.5784453304910569e-03 2.4586334496939721e+00 3.9424449256578641e-02 65 20 1.3087860481343286e-02 -5.8041683380735225e-03 2.3629056374470148e+00 3.7786209102427817e-02 66 20 1.0020520050994554e-02 -4.8469535088859839e-03 2.2689919292724641e+00 3.5039084568036095e-02 67 20 6.5244445856917233e-03 -4.7093223782647461e-03 2.1780502371148662e+00 3.1710259015561716e-02 46 21 8.9128368737065300e-05 9.5570488742428190e-03 3.7516291105765736e+00 3.7257619797768067e-03 47 21 3.2219923029953358e-04 9.4457099244076789e-03 3.7395007334865729e+00 7.4685081146245022e-03 48 21 8.1819520178354590e-04 9.1648264385956035e-03 3.7193938578353354e+00 1.1227248931542871e-02 49 21 1.6497726721416183e-03 8.6115743728392841e-03 3.6914737629722842e+00 1.4979513836412285e-02 50 21 2.8345958407722492e-03 7.7027647421426797e-03 3.6559732375348060e+00 1.8666417597524628e-02 51 21 4.3368453272751625e-03 6.3964939402161866e-03 3.6131975305027124e+00 2.2216371405894601e-02 52 21 6.0772946941374016e-03 4.6919520259997268e-03 3.5635161566982889e+00 2.5538702026676036e-02 53 21 7.9486831226330788e-03 2.6110003888382047e-03 3.5073633401287641e+00 2.8542540509049204e-02 54 21 9.8307663723135735e-03 1.8284621909457906e-04 3.4452273712904700e+00 3.1177192876390637e-02 55 21 1.1603812516479196e-02 -2.5482712643570423e-03 3.3776408027364919e+00 3.3374189417081861e-02 56 21 1.3158536955116788e-02 -5.4980461869122412e-03 3.3051782429945202e+00 3.5104214576641188e-02 57 21 1.4399148456529537e-02 -8.5067868395944744e-03 3.2284450298191016e+00 3.6380559537973318e-02 58 21 1.5246283131814678e-02 -1.1332078778371646e-02 3.1480620746271688e+00 3.7176406611832892e-02 59 21 1.5640381619025027e-02 -1.3678291392531660e-02 3.0646513847180059e+00 3.7439681801356012e-02 60 21 1.5542391760995436e-02 -1.5268768474773606e-02 2.9788221916736410e+00 3.7271910486293518e-02 61 21 1.4934620427490896e-02 -1.5922956557893206e-02 2.8911362022988358e+00 3.6949005281380773e-02 62 21 1.3820918377492225e-02 -1.5622035671203224e-02 2.8020973065078456e+00 3.6757488054672861e-02 63 21 1.2229395075564531e-02 -1.4530053832910334e-02 2.7121684564479662e+00 3.6763923653264842e-02 64 21 1.0207224119640888e-02 -1.2965930748153904e-02 2.6218450922859118e+00 3.6749409009825422e-02 65 21 7.8041552100696276e-03 -1.1315267095563424e-02 2.5317193536464035e+00 3.6356009991912172e-02 66 21 5.0482814913731409e-03 -9.9051399780963421e-03 2.4425179018296177e+00 3.5256765320179198e-02 67 21 1.9269928049393099e-03 -8.8774901654848047e-03 2.3550665542561169e+00 3.3300182057361016e-02 46 22 -3.8994471258567323e-04 8.3684304075300388e-03 3.8446432825394647e+00 3.7383891669291952e-03 47 22 -6.5137492010006171e-04 8.3452914956516459e-03 3.8332674463125826e+00 7.4841498287616243e-03 48 22 -6.7543219769506324e-04 8.2076779478259063e-03 3.8144040079702095e+00 1.1232382226180804e-02 49 22 -3.9059190241916140e-04 7.8442696630428785e-03 3.7882027491863011e+00 1.4957449596081824e-02 50 22 2.2943990907027355e-04 7.1579555953033055e-03 
From DOMI0002 at ntu.edu.sg  Thu Apr 30 01:25:32 2009
From: DOMI0002 at ntu.edu.sg (#DOMINIC DENVER JOHN CHANDAR#)
Date: Thu, 30 Apr 2009 14:25:32 +0800
Subject: Using petsc in user code
Message-ID: 

Hi,

     I have a C++ code in which I plan to call PETSc for solving a system of
equations. My question is about PetscInitialize() and PetscFinalize().
For example, I have a class called classX:

classX
{
  public:
    initialize();
    finalize();
};

classX::initialize()
{
    PetscInitialize(...)
    ..
    .. more initializations ..
}

classX::finalize()
{
    PetscFinalize();
    ...
    ...
}

main()
{
    classX ob1, ob2;   // 2 objects

    // Is the following acceptable ??
    ob1.initialize();
    ob2.initialize();
}

Do I need to call PetscInitialize()/PetscFinalize() for each object?

-Dominic
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From knepley at gmail.com  Thu Apr 30 06:44:41 2009
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 30 Apr 2009 06:44:41 -0500
Subject: Some strange results running in PETSc.
In-Reply-To: <20090429231806.8il8cv8248wwc0sc@cubmail.cc.columbia.edu>
References: <20090429231806.8il8cv8248wwc0sc@cubmail.cc.columbia.edu>
Message-ID: 

It appears there is a problem in your output code. Why not just use the
PETSc output routines?

  Matt

On Wed, Apr 29, 2009 at 10:18 PM, (Rebecca) Xuefei YUAN wrote:

> Hi,
>
> I am running some codes and storing the solution in a text file. However,
> I found that some results are weird in the sense that some processors are
> "eating" my (i,j) index and the corresponding solution.
>
> For example, the solution at time step = 235 on processor 6 is right, but
> at time step = 236 on processor 6, one grid solution is missing and thus
> the order of the index is wrong.
>
> In the attached two files:
> hp.solution.dt0.16700.n90.t235.p6 (right one)
> hp.solution.dt0.16700.n90.t236.p6 (wrong one)
> for example,
>
> in hp.solution.dt0.16700.n90.t235.p6 (right one)
>  i  j
> ------------------------------------------------------------------------------------------
> 50 14 3.3212491928636803e-02  7.2992225179014901e-03 2.9841295384404947e+00 2.2004855368148415e-02
> 51 14 4.0287965701667774e-02  2.5401878231124070e-03 2.9201873761746322e+00 2.6251864239477816e-02
> 52 14 4.7084950235070790e-02 -1.5367647745423544e-03 2.8460826176461214e+00 3.1550405800570377e-02
> 53 14 5.3394938807608198e-02 -3.8189091837479271e-03 2.7618414550374171e+00 3.4078755072804334e-02
> ------------------------------------------------------------------------------------------
> however, in hp.solution.dt0.16700.n90.t236.p6 (wrong one)
>  i  j
> ------------------------------------------------------------------------------------------
> 50 14 3.1239406700376452e-02  8.5179559039992043e-03 2.9840003096148520e+00 2.1760859158622522e-02
> 51 14 3.8032143218986063e-02  3.6341965920997721e-03 2.9198035731818854e+00 2.6200771510346648e-02
> 53 14 5.0661309132451274e-02 -2.9274557377189617e-03 2.7606822480069755e+00 3.4021016413777964e-02
> 54 14 5.6049570141121191e-02 -2.8111244430837979e-03 2.6669503598276267e+00 3.8855104759650566e-02
> ------------------------------------------------------------------------------------------
> the (i,j) = (52,14) entry is missing and, as a result, one grid point
> solution is missing.
>
> I do not understand how or why this happens.
> Any ideas?
>
> Thanks very much!
>
> Cheers,
>
> Rebecca
>
>
> --
> (Rebecca) Xuefei YUAN
> Department of Applied Physics and Applied Mathematics
> Columbia University
> Tel: 917-399-8032
> www.columbia.edu/~xy2102
>

-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
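By "PETSc output routines" Matt presumably means letting a PetscViewer write the
distributed vector, so that every rank contributes its piece and no hand-written
(i,j) bookkeeping is involved. The listing below is only a sketch of that idea:
it assumes the solution lives in a global Vec, uses the PETSc 3.0-era calling
sequence, and omits error checking (ierr/CHKERRQ); the helper name WriteSolution,
the file-name pattern, and the step argument are made up for illustration.

#include <stdio.h>
#include "petscvec.h"

/* Hypothetical helper: dump the global solution vector with a collective
   PETSc viewer instead of per-process text files (sketch only). */
PetscErrorCode WriteSolution(Vec x, PetscInt step)
{
  PetscViewer viewer;
  char        fname[PETSC_MAX_PATH_LEN];

  sprintf(fname, "hp.solution.t%d.dat", (int)step);
  PetscViewerASCIIOpen(PETSC_COMM_WORLD, fname, &viewer);  /* collective open */
  VecView(x, viewer);                  /* every rank writes its part, in order */
  PetscViewerDestroy(viewer);          /* takes &viewer in later PETSc releases */
  return 0;
}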
From knepley at gmail.com  Thu Apr 30 06:48:29 2009
From: knepley at gmail.com (Matthew Knepley)
Date: Thu, 30 Apr 2009 06:48:29 -0500
Subject: Using petsc in user code
In-Reply-To: 
References: 
Message-ID: 

No, you can only call them once for the entire program.

  Matt

On Thu, Apr 30, 2009 at 1:25 AM, #DOMINIC DENVER JOHN CHANDAR# <
DOMI0002 at ntu.edu.sg> wrote:

> Hi,
>
>      I have a C++ code in which I plan to call PETSc for solving a system
> of equations. My question is about PetscInitialize() and PetscFinalize().
> For example, I have a class called classX:
>
> classX
> {
>   public:
>     initialize();
>     finalize();
> };
>
> classX::initialize()
> {
>     PetscInitialize(...)
>     ..
>     .. more initializations ..
> }
>
> classX::finalize()
> {
>     PetscFinalize();
>     ...
>     ...
> }
>
> main()
> {
>     classX ob1, ob2;   // 2 objects
>
>     // Is the following acceptable ??
>     ob1.initialize();
>     ob2.initialize();
> }
>
> Do I need to call PetscInitialize()/PetscFinalize() for each object?
>
> -Dominic
>

-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
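One way to keep Dominic's wrapper class and still respect the once-per-program
rule is to guard the call: whichever object initializes first does the real work,
and PetscFinalize() is left to the very end of main(). The listing below is only
a sketch of that pattern, reusing the hypothetical classX names from the question;
it assumes the 2009-era API in which PetscInitializeCalled is an exported PETSc
flag, and it omits error checking.

#include "petsc.h"

class classX {
public:
  void initialize(int *argc, char ***argv) {
    if (!PetscInitializeCalled)                        /* already set up? */
      PetscInitialize(argc, argv, PETSC_NULL, PETSC_NULL);
    /* ... per-object setup that may use PETSc objects ... */
  }
  void finalize() {
    /* per-object cleanup only; PetscFinalize() is NOT called here */
  }
};

int main(int argc, char **argv) {
  classX ob1, ob2;                  /* two objects, one PETSc initialization */
  ob1.initialize(&argc, &argv);
  ob2.initialize(&argc, &argv);     /* guard turns this into a no-op         */
  /* ... work ... */
  ob1.finalize();
  ob2.finalize();
  PetscFinalize();                  /* exactly once, at the end of the run   */
  return 0;
}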
From bsmith at mcs.anl.gov  Thu Apr 30 15:09:24 2009
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 30 Apr 2009 15:09:24 -0500
Subject: Some strange results running in PETSc.
In-Reply-To: <20090429231806.8il8cv8248wwc0sc@cubmail.cc.columbia.edu>
References: <20090429231806.8il8cv8248wwc0sc@cubmail.cc.columbia.edu>
Message-ID: 

   www.valgrind.org

On Apr 29, 2009, at 10:18 PM, (Rebecca) Xuefei YUAN wrote:

> Hi,
>
> I am running some codes and storing the solution in a text file.
> However, I found that some results are weird in the sense that some
> processors are "eating" my (i,j) index and the corresponding solution.
>
> For example, the solution at time step = 235 on processor 6 is right,
> but at time step = 236 on processor 6, one grid solution is missing
> and thus the order of the index is wrong.
>
> In the attached two files:
> hp.solution.dt0.16700.n90.t235.p6 (right one)
> hp.solution.dt0.16700.n90.t236.p6 (wrong one)
> for example,
>
> in hp.solution.dt0.16700.n90.t235.p6 (right one)
>  i  j
> ------------------------------------------------------------------------------------------
> 50 14 3.3212491928636803e-02  7.2992225179014901e-03 2.9841295384404947e+00 2.2004855368148415e-02
> 51 14 4.0287965701667774e-02  2.5401878231124070e-03 2.9201873761746322e+00 2.6251864239477816e-02
> 52 14 4.7084950235070790e-02 -1.5367647745423544e-03 2.8460826176461214e+00 3.1550405800570377e-02
> 53 14 5.3394938807608198e-02 -3.8189091837479271e-03 2.7618414550374171e+00 3.4078755072804334e-02
> ------------------------------------------------------------------------------------------
> however, in hp.solution.dt0.16700.n90.t236.p6 (wrong one)
>  i  j
> ------------------------------------------------------------------------------------------
> 50 14 3.1239406700376452e-02  8.5179559039992043e-03 2.9840003096148520e+00 2.1760859158622522e-02
> 51 14 3.8032143218986063e-02  3.6341965920997721e-03 2.9198035731818854e+00 2.6200771510346648e-02
> 53 14 5.0661309132451274e-02 -2.9274557377189617e-03 2.7606822480069755e+00 3.4021016413777964e-02
> 54 14 5.6049570141121191e-02 -2.8111244430837979e-03 2.6669503598276267e+00 3.8855104759650566e-02
> ------------------------------------------------------------------------------------------
> the (i,j) = (52,14) entry is missing and, as a result, one grid point
> solution is missing.
>
> I do not understand how or why this happens.
> Any ideas?
>
> Thanks very much!
>
> Cheers,
>
> Rebecca
>
>
> --
> (Rebecca) Xuefei YUAN
> Department of Applied Physics and Applied Mathematics
> Columbia University
> Tel: 917-399-8032
> www.columbia.edu/~xy2102
>
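As a pointer to what Barry is suggesting: a parallel PETSc job is usually run
under valgrind with a command along these lines, where the process count, the
executable name (./hp here), and the log-file pattern are placeholders to adapt;
%p expands to each process's PID so every MPI rank gets its own log.

mpiexec -n 8 valgrind --tool=memcheck -q --num-callers=20 \
        --log-file=valgrind.log.%p ./hp -malloc off

The -malloc off option makes PETSc use the system malloc directly, so valgrind
sees the real allocations instead of PETSc's internal memory tracking.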