From lvankampenhout at gmail.com Fri Oct 1 04:05:11 2010 From: lvankampenhout at gmail.com (Leo van Kampenhout) Date: Fri, 1 Oct 2010 11:05:11 +0200 Subject: [petsc-users] [Fortran] subroutines inside modules? In-Reply-To: <4CA50F98.9000902@imperial.ac.uk> References: <4CA4CE58.3040004@imperial.ac.uk> <4CA50F98.9000902@imperial.ac.uk> Message-ID: Thank you Stephan, it is working now. I forgot to add the correct files to the makefile, for I put the module in a seperate file (grid.F). The correct makefile rule for the main program (main.F) is: main: grid.o main.o chkopts -${FLINKER} -o main grid.o main.o ${PETSC_KSP_LIB} ${RM} main.o grid.o Thanks again. Leo 2010/10/1 Stephan Kramer > On 30/09/10 23:09, Leo van Kampenhout wrote: > >> Declaring it external in the program/subroutine that is using the module >> results in >> >> main.F:65.43: >> external gridtest >> Error: Cannot change attributes of USE-associated symbol at (1) >> >> Thanks, Leo >> > > Yes, as I said before :) - module subroutines should *not* be declared > external. You do > not need that line. > > Cheers > Stephan > > > >> >> 2010/9/30 Stephan Kramer > > >> >> >> On 30/09/10 15:31, Leo van Kampenhout wrote: >> >> Hi all, >> >> since it is mandatory to declare all subroutines as "external" in >> Fortran, is it possible for Modules to have subroutines? I'm >> unable to >> declare the subroutine external inside the module itself, nor in >> the >> program which is using it. Not declaring it external at all >> results in >> the following compilation error: >> >> /net/users/csg/csg4035/master/workdir/src/main.F:97: undefined >> reference >> to `__grid_MOD_readgrid' >> >> (the module is here is named "grid", the subroutine "readgrid" ) >> >> Thanks, >> Leo >> >> >> If you put your subroutine in a module, it should not be declared >> external. You can directly call it from within the module itself. When >> calling it inside any other module/program you need to add "use >> grid" before >> the "implicit none". >> >> Putting subroutines inside a module is highly recommended as it >> automatically >> provides an explicit interface so that the compiler can check the >> arguments in >> your subroutine call. >> >> Cheers >> Stephan >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brtnfld at uiuc.edu Fri Oct 1 09:34:31 2010 From: brtnfld at uiuc.edu (M. Scot Breitenfeld) Date: Fri, 01 Oct 2010 09:34:31 -0500 Subject: [petsc-users] petsc and meshless type method: Forming parallel coefficient matrix Message-ID: <4CA5F177.6090307@uiuc.edu> Hi, I'm working on implementing petsc in a meshfree type method. I have the serial version working with Petsc, but I have some questions about the parallel implementation. 
I guess the easiest way to explain the problem is with an example, lets take the 1D problem, 1 2 3 4 5 6 7 8 o o o o | o o o o Nodal partitioning: proc 0: 1-4 proc 1: 5-8 Now, Proc 0 contributes values to 5 and 6, so they are included in my "local numbering" and the total number of local nodes is 6 on proc 0 Proc 1 contributes values to 3 and 4, so they are included in my "local numbering" and the total number of local nodes is 6 on proc 1 1 2 3 4 5 6 o o o o o o Proc 0 o o o o o o Proc 1 3 4 5 6 7 8 Each node has 3 dof, so the global coefficient matrix, A, would be 24x24: Processor 0 has rows (1-12) and Processor 1 has rows (13-24) When each processor loops over its nodes (Proc 0: 1-6, Proc 1: 3-4) it adds its contribution into A: CALL MatSetValues(A, 1, "global row id", "number of column entries", "column entries", "values", ADD_VALUES,ierr) I'm unclear what the best way to create the A matrix is. For each processor I have its number of global nodes (proc 0: 1-6, proc 1: 3-8) so I can get the corresponding rows in the global A that it contributes (note that some of the rows are not local to the processor, those values need to be sent). Also, for each processor I have a list of the actual global nodes the processors owns (proc 0: 1-4, proc 1:5-8). I can set the mapping with: CALL ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, "number of local elements", "global index of elements", mapping, ierr) where I'm assuming the "number of local elements" includes the ghost nodes. This is essentially what global row this processor will contribute. How do I tell Petsc what nodes the processor actually owns so it knows what nodes need to be sent? Is there another mapping command to use for mapping the owned global nodes to the processor and then is there a command that takes both maps and figures out what nodes need to be sent. Or is the correct procedure to use VecScatterCreate and create these arrays myself. Can I still use MatSetValues or do I have to use MatSetValuesLoc (together with VecSetLocalToGlobalMapping) to assemble the matrix, or can I still do this all using global numbering. If there is a similar example to what I'm doing that you could point me to it would be helpful. Thanks for your time, Scot From jed at 59A2.org Fri Oct 1 09:57:40 2010 From: jed at 59A2.org (Jed Brown) Date: Fri, 1 Oct 2010 16:57:40 +0200 Subject: [petsc-users] petsc and meshless type method: Forming parallel coefficient matrix In-Reply-To: <4CA5F177.6090307@uiuc.edu> References: <4CA5F177.6090307@uiuc.edu> Message-ID: On Fri, Oct 1, 2010 at 16:34, M. Scot Breitenfeld wrote: > ?Hi, I'm working on implementing petsc in a meshfree type method. I have the > serial version working with Petsc, but I have some questions about the > parallel implementation. I guess the easiest way to explain the problem is > with an example, lets take the 1D problem, > > 1 ? ?2 ? ?3 ? 4 ? ?5 ? 6 ? 7 ? 8 > o ? ?o ? o ? o ?| o ? o ? o ?o > > Nodal partitioning: proc 0: 1-4 > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?proc 1: 5-8 Minor point; PETSc matrices and vectors use zero-based indexing. > Now, Proc 0 contributes values to 5 and 6, so they are included in my "local > numbering" and the total number of local nodes is 6 on proc 0 > ? ? ? ? Proc 1 contributes values to 3 and 4, so they are included in my > "local numbering" and the total number of local nodes is 6 on proc 1 > > ?1 ? ? 2 ? 3 ?4 ? 5 ? 6 > ?o ? ?o ? o ?o ? o ? o ? ? ? ? ? ? ? ? ? Proc 0 > > ? ? ? ? ? ? o ?o ? o ? o ? ?o ? o ? ? ? Proc 1 > ? ? ? ? ? ? 3 ?4 ? 5 ? ?6 ? ?7 ? 
8 > > Each node has 3 dof, so the global coefficient matrix, A, ?would be 24x24: > Processor 0 has rows (1-12) and Processor 1 has rows (13-24) Presumably you provide this in MatSetSizes (or MatCreateMPI*AIJ)? > When each processor loops over its nodes (Proc 0: 1-6, Proc 1: 3-4) it adds > its contribution into A: Are these indices really what you mean? > CALL MatSetValues(A, 1, "global row id", "number of column entries", "column > entries", "values", ADD_VALUES,ierr) > > I'm unclear what the best way to create the A matrix is. > > For each processor I have its number of global nodes (proc 0: 1-6, proc 1: > 3-8) so I can get the corresponding rows in the global A that it contributes > (note that some of the rows are not local to the processor, those values > need to be sent). They will be sent during MatAssemblyBegin/End. There are two common assembly modes. Either a process only sets rows that it owns (the owner of a particle computes the interaction with all its neighbors, note that there is some redundancy here if the interaction is symmetric) or you partition the interactions (elements in FEM, fluxes in FVM) and compute the interaction only once (no redundancy), summing into the appropriate rows. The latter involves adding values into unowned rows. PETSc matrices perform this communication automatically. You can choose which type to use (or some hybrid if you desire). You can also choose whether to insert in the local ordering or the global ordering. They are equivalent, it's just a matter of what is most convenient to you. > Also, for each processor I have a list of the actual global nodes the > processors owns (proc 0: 1-4, proc 1:5-8). > > I can set the mapping with: > > CALL ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, "number of local > elements", "global index of elements", mapping, ierr) > > where I'm assuming the "number of local elements" includes the ghost nodes. > This is essentially what global row this processor will contribute. > > How do I tell Petsc what nodes the processor actually owns so it knows what > nodes need to be sent? Is there another mapping command to use for mapping > the owned global nodes to the processor and then is there a command that > takes both maps and figures out what nodes need to be sent. > Or is the correct procedure to use VecScatterCreate and create these arrays > myself. Create local and global vectors, VecScatter will get the values from wherever the IS says. You don't have to do anything special to indicate which indices are owned, that is determined using the Vec. > Can I still use MatSetValues or do I have to use MatSetValuesLoc (together > with VecSetLocalToGlobalMapping) to assemble the matrix, or can I still do > this all using global numbering. Yes, it's entirely up to you. > If there is a similar example to what I'm doing that you could point me to > it would be helpful. I'm not familiar with a particle example in PETSc. It might help to look at an unstructured mesh problem, the linear algebra setup is quite similar. Jed From gdiso at ustc.edu Fri Oct 1 22:58:00 2010 From: gdiso at ustc.edu (Gong Ding) Date: Sat, 2 Oct 2010 11:58:00 +0800 Subject: [petsc-users] does petsc support schur complement Message-ID: <5ED1A79EA04D40C0A2B229FC5FF44A07@cogendaeda> Dear all, I had take a look with the manual of petsc.It has refered schur complement. However, I can not find more useful information. Anyway, I am new to this method and I would like to get any information about it. 
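Illustrating the second assembly mode Jed describes in his reply to the meshfree question above: partition the elements (interactions), have each process add its contributions with ADD_VALUES -- including into rows it does not own -- and let MatAssemblyBegin/End ship the off-process values. A minimal C sketch, assuming a two-node element with one dof per node (2x2 element matrix); the routine name and the arrays nelem, rows and ke are placeholders, not PETSc API.

#include "petscmat.h"

/* Each process loops over the elements assigned to it and adds its 2x2
   element contributions using global indices; some of those rows may be
   owned by another process. */
PetscErrorCode AssembleByElements(Mat A,PetscInt nelem,const PetscInt rows[][2],const PetscScalar ke[][4])
{
  PetscInt       e;
  PetscErrorCode ierr;

  for (e = 0; e < nelem; e++) {
    ierr = MatSetValues(A,2,rows[e],2,rows[e],ke[e],ADD_VALUES);CHKERRQ(ierr);
  }
  /* off-process contributions are sent and summed here */
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  return 0;
}

The first mode (each process computes every interaction touching the rows it owns) avoids this communication during assembly at the cost of computing symmetric interactions twice; the second trades that redundancy for the communication hidden in MatAssemblyBegin/End.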
Yours Gong Ding From jed at 59A2.org Fri Oct 1 23:28:55 2010 From: jed at 59A2.org (Jed Brown) Date: Sat, 2 Oct 2010 06:28:55 +0200 Subject: [petsc-users] does petsc support schur complement In-Reply-To: <5ED1A79EA04D40C0A2B229FC5FF44A07@cogendaeda> References: <5ED1A79EA04D40C0A2B229FC5FF44A07@cogendaeda> Message-ID: 2010/10/2 Gong Ding : > Dear all, > I had take a look with the manual of petsc.It has refered schur complement. > However, I can not find more useful information. A couple useful references, there are others depending on your application. @article{benzi2005nss, title={{Numerical solution of saddle point problems}}, author={Benzi, M. and Golub, G.H. and Liesen, J.}, journal={Acta Numerica}, volume={14}, pages={1--137}, year={2005}, publisher={Cambridge Univ Press} } @article{elman2008tcp, title={{A taxonomy and comparison of parallel block multi-level preconditioners for the incompressible Navier-Stokes equations}}, author={Elman, H.C. and Howle, V.E. and Shadid, J. and Shuttleworth, R. and Tuminaro, R.}, journal={Journal of Computational Physics}, volume={227}, number={1}, pages={1790--1808}, year={2008}, publisher={Academic Press} } Many of these methods are easy to implement using PCFieldSplit, when you find one that is likely to work well for your application, ask and we can suggest a good way to implement it. Jed From amal.ghamdi at kaust.edu.sa Sat Oct 2 17:08:38 2010 From: amal.ghamdi at kaust.edu.sa (Amal Alghamdi) Date: Sun, 3 Oct 2010 01:08:38 +0300 Subject: [petsc-users] petsc4py Vec: how to copy from one index to multiple indices of the same vector? Message-ID: Dear all, I would like to ask about the proper way to copy a value in vector x, let us say x[0] into places in the same vector, let us say x[1], x[100]. I want to do this using petsc4py. Actually I have tried the method getValue in Vec class. But I received an error telling me that this method "Can only get local values", so I tried to follow example 10 in the manual, which is posted below, but when trying to create sequential vector I get the error " [1] Invalid argument [1] Cannot create VECSEQ on more than one process " Vec p, x; /* initial vector, destination vector */ VecScatter scatter; /* scatter context */ IS from, to; /* index sets that define the scatter */ PetscScalar *values; int idx_from[] = {100,200}, idx_to[] = {0,1}; VecCreateSeq(PETSC COMM SELF,2,&x); ISCreateGeneral(PETSC COMM SELF,2,idx from,&from); ISCreateGeneral(PETSC COMM SELF,2,idx to,&to); VecScatterCreate(p,from,x,to,&scatter); VecScatterBegin(scatter,p,x,INSERT VALUES,SCATTER FORWARD); VecScatterEnd(scatter,p,x,INSERT VALUES,SCATTER FORWARD); VecGetArray(x,&values); ISDestroy(from); ISDestroy(to); VecScatterDestroy(scatter); Thank you very much Amal -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Oct 2 19:48:41 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 2 Oct 2010 19:48:41 -0500 Subject: [petsc-users] petsc4py Vec: how to copy from one index to multiple indices of the same vector? In-Reply-To: References: Message-ID: On Oct 2, 2010, at 5:08 PM, Amal Alghamdi wrote: > Dear all, > > I would like to ask about the proper way to copy a value in vector x, let us say x[0] into places in the same vector, let us say x[1], x[100]. I want to do this using petsc4py. In this case you can just make a scatter that comes from the vector and goes to the same vector. 
So create an IS with the entry 0 in it twice and another with the entry 1 and 100 now create the VecScatter with those two index sets and do the scatter. The resulting code will work on any number of processes Barry > > Actually I have tried the method getValue in Vec class. But I received an error telling me that this method "Can only get local values", so I tried to follow example 10 in the manual, which is posted below, but when trying to create sequential vector I get the error > " > [1] Invalid argument > [1] Cannot create VECSEQ on more than one process > " > > > Vec p, x; /* initial vector, destination vector */ > VecScatter scatter; /* scatter context */ > IS from, to; /* index sets that define the scatter */ > PetscScalar *values; > int idx_from[] = {100,200}, idx_to[] = {0,1}; > VecCreateSeq(PETSC COMM SELF,2,&x); > ISCreateGeneral(PETSC COMM SELF,2,idx from,&from); > ISCreateGeneral(PETSC COMM SELF,2,idx to,&to); > VecScatterCreate(p,from,x,to,&scatter); > VecScatterBegin(scatter,p,x,INSERT VALUES,SCATTER FORWARD); > VecScatterEnd(scatter,p,x,INSERT VALUES,SCATTER FORWARD); > VecGetArray(x,&values); > ISDestroy(from); > ISDestroy(to); > VecScatterDestroy(scatter); > > > Thank you very much > Amal From patdevelop at gmail.com Sat Oct 2 21:39:04 2010 From: patdevelop at gmail.com (Patrick Sunter) Date: Sun, 3 Oct 2010 13:39:04 +1100 Subject: [petsc-users] does petsc support schur complement In-Reply-To: References: <5ED1A79EA04D40C0A2B229FC5FF44A07@cogendaeda> Message-ID: Hi there, you may also like to check out the "PetscExt" project that a colleague of mine has developed over the last couple of years for solving block matrix systems in PETSc: http://jupiter.ethz.ch/~dmay/Research/PetscExt/index.html -- Patrick. On Sat, Oct 2, 2010 at 3:28 PM, Jed Brown wrote: > 2010/10/2 Gong Ding : >> Dear all, >> I had take a look with the manual of petsc.It has refered schur complement. >> However, I can not find more useful information. > > A couple useful references, there are others depending on your application. > > @article{benzi2005nss, > ?title={{Numerical solution of saddle point problems}}, > ?author={Benzi, M. and Golub, G.H. and Liesen, J.}, > ?journal={Acta Numerica}, > ?volume={14}, > ?pages={1--137}, > ?year={2005}, > ?publisher={Cambridge Univ Press} > } > > @article{elman2008tcp, > ?title={{A taxonomy and comparison of parallel block multi-level > preconditioners for the incompressible Navier-Stokes equations}}, > ?author={Elman, H.C. and Howle, V.E. and Shadid, J. and Shuttleworth, > R. and Tuminaro, R.}, > ?journal={Journal of Computational Physics}, > ?volume={227}, > ?number={1}, > ?pages={1790--1808}, > ?year={2008}, > ?publisher={Academic Press} > } > > > Many of these methods are easy to implement using PCFieldSplit, when > you find one that is likely to work well for your application, ask and > we can suggest a good way to implement it. > > Jed > -- Patrick Sunter VPAC Senior Computational Software Developer - AuScope Monash SAM Project Monash University Adjunct Research Associate (School of Mathematical Sciences) Room 301, Building 28 Monash University VIC 3800 Australia Ph: +61 (0)3 9905 4468 For VPAC/AuScope admin issues email: patrick at vpac.org For AuScope software development email: patdevelop at gmail.com From amal.ghamdi at kaust.edu.sa Sun Oct 3 00:05:39 2010 From: amal.ghamdi at kaust.edu.sa (Amal Alghamdi) Date: Sun, 3 Oct 2010 08:05:39 +0300 Subject: [petsc-users] petsc4py Vec: how to copy from one index to multiple indices of the same vector? 
In-Reply-To: References: Message-ID: Thank you very much Barry. On Sun, Oct 3, 2010 at 3:48 AM, Barry Smith wrote: > > On Oct 2, 2010, at 5:08 PM, Amal Alghamdi wrote: > > > Dear all, > > > > I would like to ask about the proper way to copy a value in vector x, let > us say x[0] into places in the same vector, let us say x[1], x[100]. I want > to do this using petsc4py. > > In this case you can just make a scatter that comes from the vector and > goes to the same vector. So create an IS with the entry 0 in it twice and > another with the entry 1 and 100 now create the VecScatter with those two > index sets and do the scatter. The resulting code will work on any number of > processes > > Barry > > > > > Actually I have tried the method getValue in Vec class. But I received an > error telling me that this method "Can only get local values", so I tried to > follow example 10 in the manual, which is posted below, but when trying to > create sequential vector I get the error > > " > > [1] Invalid argument > > [1] Cannot create VECSEQ on more than one process > > " > > > > > > Vec p, x; /* initial vector, destination vector */ > > VecScatter scatter; /* scatter context */ > > IS from, to; /* index sets that define the scatter */ > > PetscScalar *values; > > int idx_from[] = {100,200}, idx_to[] = {0,1}; > > VecCreateSeq(PETSC COMM SELF,2,&x); > > ISCreateGeneral(PETSC COMM SELF,2,idx from,&from); > > ISCreateGeneral(PETSC COMM SELF,2,idx to,&to); > > VecScatterCreate(p,from,x,to,&scatter); > > VecScatterBegin(scatter,p,x,INSERT VALUES,SCATTER FORWARD); > > VecScatterEnd(scatter,p,x,INSERT VALUES,SCATTER FORWARD); > > VecGetArray(x,&values); > > ISDestroy(from); > > ISDestroy(to); > > VecScatterDestroy(scatter); > > > > > > Thank you very much > > Amal > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sun Oct 3 21:56:00 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 3 Oct 2010 21:56:00 -0500 Subject: [petsc-users] Project: Matlab binding for PETSc Message-ID: PETSc-users, I've started a little project to develop Matlab bindings for PETSc. It is not difficult but requires manually translating the API so is a bit tedious. If anyone is interesting in helping out the code is bin/matlab/classes; I have only the most rudimentary parts done and it is not ready for users but with help it could be. It you are interested then get petsc-dev http://www.mcs.anl.gov/petsc/petsc-as/developers/index.html and join petsc-dev at mcs.anl.gov http://www.mcs.anl.gov/petsc/petsc-as/miscellaneous/mailing-lists.html and take a look at the comments in bin/matlab/classes/PetscInitialize.m We need to add the rest of the bindings to matlabheader.h and all the classes. Plus, of course, simple tests. Happy computing, Barry From stali at geology.wisc.edu Mon Oct 4 16:21:47 2010 From: stali at geology.wisc.edu (Tabrez Ali) Date: Mon, 04 Oct 2010 16:21:47 -0500 Subject: [petsc-users] preallocation for FE matrix Message-ID: <4CAA456B.3020806@geology.wisc.edu> I am trying to assemble a FE matrix (adding one element at a time from the local stiffness matrix) and cant seem to get the preallocation right. I am correctly calculating the number of non zeros per row and storing the value in the array [nnzpr]. Here is part of the relevant code ... call MatCreateSeqBAIJ(Petsc_Comm_Self, 1, m, n, petsc_null_integer, nnzpr, Mat_A, ierr) ... ! 
Assume a bilinear quad (2 dof per node) do j1=1,8 do j2=1,8 call MatSetValues(Mat_A, 1, indx(j1)-1, 1, indx(j2)-1, k(j1,j2), Add_Values, ierr) end do end do ... call MatAssemblyBegin(Mat_A,Mat_Final_Assembly,ierr) call MatAssemblyEnd(Mat_A,Mat_Final_Assembly,ierr) ... On running it I get -bash-3.00$ ./a.out From bsmith at mcs.anl.gov Mon Oct 4 16:26:55 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 4 Oct 2010 16:26:55 -0500 Subject: [petsc-users] preallocation for FE matrix In-Reply-To: <4CAA456B.3020806@geology.wisc.edu> References: <4CAA456B.3020806@geology.wisc.edu> Message-ID: <64EFABC4-0232-4BE2-AA6D-BBB05BAEEBC3@mcs.anl.gov> On Oct 4, 2010, at 4:21 PM, Tabrez Ali wrote: > I am trying to assemble a FE matrix (adding one element at a time from the local stiffness matrix) and cant seem to get the preallocation right. I am correctly calculating the number of non zeros per row and storing the value in the array [nnzpr]. (For sequential case) you cannot be calculating the number of nonzeros per row correctly and yet not getting the preallocation correct. For your simple problem. Print the nonzeros per row you are computing then print the sparse matrix you computing. That will show which rows are not computed or set properly. > I dont understand why -mat_view_info shows allocated nonzeros to be 154 when sum(nnzpr) is 112. If you mess up the count on a row it allocates a few extra for that row (incase you end up needing them) so the number allocated can be more then the number actually needed. Barry > > Here is part of the relevant code > > ... > call MatCreateSeqBAIJ(Petsc_Comm_Self, 1, m, n, petsc_null_integer, nnzpr, Mat_A, ierr) > ... > ! Assume a bilinear quad (2 dof per node) > do j1=1,8 > do j2=1,8 > call MatSetValues(Mat_A, 1, indx(j1)-1, 1, indx(j2)-1, k(j1,j2), Add_Values, ierr) > end do > end do > ... > call MatAssemblyBegin(Mat_A,Mat_Final_Assembly,ierr) > call MatAssemblyEnd(Mat_A,Mat_Final_Assembly,ierr) > ... > > On running it I get > > -bash-3.00$ ./a.out ... > Total non-zero elements estimated i.e., sum(nnzpr) = 112 > [0] PetscCommDuplicate(): Duplicating a communicator 1140850689 -2080374784 max tags = 2147483647 > [0] PetscCommDuplicate(): returning tag 2147483647 > [0] MatAssemblyEnd_SeqBAIJ(): Matrix size: 12 X 12, block size 1; storage space: 42 unneeded, 112 used > [0] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 9 > [0] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 12 > ... > Matrix Object: > type=seqbaij, rows=12, cols=12 > total: nonzeros=112, allocated nonzeros=154 > block size is 1 > > I dont understand why -mat_view_info shows allocated nonzeros to be 154 when sum(nnzpr) is 112. > > Thanks in advance. From stali at geology.wisc.edu Mon Oct 4 16:43:44 2010 From: stali at geology.wisc.edu (Tabrez Ali) Date: Mon, 04 Oct 2010 16:43:44 -0500 Subject: [petsc-users] preallocation for FE matrix In-Reply-To: <64EFABC4-0232-4BE2-AA6D-BBB05BAEEBC3@mcs.anl.gov> References: <4CAA456B.3020806@geology.wisc.edu> <64EFABC4-0232-4BE2-AA6D-BBB05BAEEBC3@mcs.anl.gov> Message-ID: <4CAA4A90.70704@geology.wisc.edu> Barry Smith wrote: > On Oct 4, 2010, at 4:21 PM, Tabrez Ali wrote: > > >> I am trying to assemble a FE matrix (adding one element at a time from the local stiffness matrix) and cant seem to get the preallocation right. I am correctly calculating the number of non zeros per row and storing the value in the array [nnzpr]. 
>> > > (For sequential case) you cannot be calculating the number of nonzeros per row correctly and yet not getting the preallocation correct. > > For your simple problem. Print the nonzeros per row you are computing then print the sparse matrix you computing. That will show which rows are not computed or set properly. > I am calculating the nonzeros per row correctly and have checked this by explicitly forming the dense matrix. Here is the stiffness matrix (1 corresponds to non zero location) pattern for the same problem. 1 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 0 0 0 1 1 1 1 0 0 1 1 1 1 0 0 1 1 1 1 0 0 1 1 1 1 0 0 1 1 1 1 0 0 1 1 1 1 0 0 1 1 1 1 0 0 1 1 1 1 and here is the [nnzpr] array that I passed to MatCreateSeqBAIJ 8 8 12 12 12 12 8 8 8 8 8 8 The sum of values in [nnzpr] adds up to 112. Thanks again >> I dont understand why -mat_view_info shows allocated nonzeros to be 154 when sum(nnzpr) is 112. >> > > If you mess up the count on a row it allocates a few extra for that row (incase you end up needing them) so the number allocated can be more then the number actually needed. > > Barry > > > >> Here is part of the relevant code >> >> ... >> call MatCreateSeqBAIJ(Petsc_Comm_Self, 1, m, n, petsc_null_integer, nnzpr, Mat_A, ierr) >> ... >> ! Assume a bilinear quad (2 dof per node) >> do j1=1,8 >> do j2=1,8 >> call MatSetValues(Mat_A, 1, indx(j1)-1, 1, indx(j2)-1, k(j1,j2), Add_Values, ierr) >> end do >> end do >> ... >> call MatAssemblyBegin(Mat_A,Mat_Final_Assembly,ierr) >> call MatAssemblyEnd(Mat_A,Mat_Final_Assembly,ierr) >> ... >> >> On running it I get >> >> -bash-3.00$ ./a.out > ... >> Total non-zero elements estimated i.e., sum(nnzpr) = 112 >> [0] PetscCommDuplicate(): Duplicating a communicator 1140850689 -2080374784 max tags = 2147483647 >> [0] PetscCommDuplicate(): returning tag 2147483647 >> [0] MatAssemblyEnd_SeqBAIJ(): Matrix size: 12 X 12, block size 1; storage space: 42 unneeded, 112 used >> [0] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 9 >> [0] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 12 >> ... >> Matrix Object: >> type=seqbaij, rows=12, cols=12 >> total: nonzeros=112, allocated nonzeros=154 >> block size is 1 >> >> I dont understand why -mat_view_info shows allocated nonzeros to be 154 when sum(nnzpr) is 112. >> >> Thanks in advance. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Oct 4 16:50:31 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 4 Oct 2010 16:50:31 -0500 Subject: [petsc-users] preallocation for FE matrix In-Reply-To: <4CAA4A90.70704@geology.wisc.edu> References: <4CAA456B.3020806@geology.wisc.edu> <64EFABC4-0232-4BE2-AA6D-BBB05BAEEBC3@mcs.anl.gov> <4CAA4A90.70704@geology.wisc.edu> Message-ID: <87D7287A-B2B5-451F-9160-A8569D58A7E9@mcs.anl.gov> Then perhaps your calls to MatSetValues() are wrong? Everything cannot be write and yet still get the wrong preallocation. Here is a trick you can do after providing the preallocation info to the matrix but before calling MatSetValues() call MatSetOption(mat,MAT_NEW_NONZERO_LOCATION_ERR) now it will automatically stop when it finds it has "run out of preallocated spaces" during a set values so in the debugger you can see what has happened when the "impossible" actually happens. 
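In C the pattern looks roughly like the sketch below (the Fortran calls used in this thread are the same with a trailing ierr argument). The routine name and the arguments m (matrix size) and nnz (per-row counts, declared PetscInt) are placeholders for this thread's variables, and the three-argument MatSetOption with an explicit PETSC_TRUE flag is assumed.

#include "petscmat.h"

/* Exact preallocation plus MAT_NEW_NONZERO_LOCATION_ERR: the first entry that
   falls outside the preallocated pattern stops with an error, so the offending
   insertion can be inspected in a debugger. */
PetscErrorCode AssembleWithStrictPreallocation(PetscInt m,const PetscInt nnz[])
{
  Mat            A;
  PetscInt       row = 0,col = 0;
  PetscScalar    v   = 1.0;
  PetscErrorCode ierr;

  ierr = MatCreateSeqBAIJ(PETSC_COMM_SELF,1,m,m,0,nnz,&A);CHKERRQ(ierr); /* nz is ignored when nnz is given */
  ierr = MatSetOption(A,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_TRUE);CHKERRQ(ierr);

  /* element loops calling MatSetValues(...,ADD_VALUES) go here */
  ierr = MatSetValues(A,1,&row,1,&col,&v,ADD_VALUES);CHKERRQ(ierr);

  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatDestroy(A);CHKERRQ(ierr);
  return 0;
}

With this option set, the "Number of mallocs during MatSetValues is 9" reported by -info earlier in the thread would instead show up as a hard error at the first entry that was not preallocated.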
Barry On Oct 4, 2010, at 4:43 PM, Tabrez Ali wrote: > Barry Smith wrote: >> >> On Oct 4, 2010, at 4:21 PM, Tabrez Ali wrote: >> >> >>> I am trying to assemble a FE matrix (adding one element at a time from the local stiffness matrix) and cant seem to get the preallocation right. I am correctly calculating the number of non zeros per row and storing the value in the array [nnzpr]. >>> >> >> (For sequential case) you cannot be calculating the number of nonzeros per row correctly and yet not getting the preallocation correct. >> >> For your simple problem. Print the nonzeros per row you are computing then print the sparse matrix you computing. That will show which rows are not computed or set properly. >> > > I am calculating the nonzeros per row correctly and have checked this by explicitly forming the dense matrix. > > Here is the stiffness matrix (1 corresponds to non zero location) pattern for the same problem. > > 1 1 1 1 1 1 1 1 0 0 0 0 > 1 1 1 1 1 1 1 1 0 0 0 0 > 1 1 1 1 1 1 1 1 1 1 1 1 > 1 1 1 1 1 1 1 1 1 1 1 1 > 1 1 1 1 1 1 1 1 1 1 1 1 > 1 1 1 1 1 1 1 1 1 1 1 1 > 1 1 1 1 1 1 1 1 0 0 0 0 > 1 1 1 1 1 1 1 1 0 0 0 0 > 0 0 1 1 1 1 0 0 1 1 1 1 > 0 0 1 1 1 1 0 0 1 1 1 1 > 0 0 1 1 1 1 0 0 1 1 1 1 > 0 0 1 1 1 1 0 0 1 1 1 1 > > and here is the [nnzpr] array that I passed to MatCreateSeqBAIJ > > 8 > 8 > 12 > 12 > 12 > 12 > 8 > 8 > 8 > 8 > 8 > 8 > > The sum of values in [nnzpr] adds up to 112. > > Thanks again >>> I dont understand why -mat_view_info shows allocated nonzeros to be 154 when sum(nnzpr) is 112. >>> >> >> If you mess up the count on a row it allocates a few extra for that row (incase you end up needing them) so the number allocated can be more then the number actually needed. >> >> Barry >> >> >> >>> Here is part of the relevant code >>> >>> ... >>> call MatCreateSeqBAIJ(Petsc_Comm_Self, 1, m, n, petsc_null_integer, nnzpr, Mat_A, ierr) >>> ... >>> ! Assume a bilinear quad (2 dof per node) >>> do j1=1,8 >>> do j2=1,8 >>> call MatSetValues(Mat_A, 1, indx(j1)-1, 1, indx(j2)-1, k(j1,j2), Add_Values, ierr) >>> end do >>> end do >>> ... >>> call MatAssemblyBegin(Mat_A,Mat_Final_Assembly,ierr) >>> call MatAssemblyEnd(Mat_A,Mat_Final_Assembly,ierr) >>> ... >>> >>> On running it I get >>> >>> -bash-3.00$ ./a.out >> ... >>> Total non-zero elements estimated i.e., sum(nnzpr) = 112 >>> [0] PetscCommDuplicate(): Duplicating a communicator 1140850689 -2080374784 max tags = 2147483647 >>> [0] PetscCommDuplicate(): returning tag 2147483647 >>> [0] MatAssemblyEnd_SeqBAIJ(): Matrix size: 12 X 12, block size 1; storage space: 42 unneeded, 112 used >>> [0] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during MatSetValues is 9 >>> [0] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 12 >>> ... >>> Matrix Object: >>> type=seqbaij, rows=12, cols=12 >>> total: nonzeros=112, allocated nonzeros=154 >>> block size is 1 >>> >>> I dont understand why -mat_view_info shows allocated nonzeros to be 154 when sum(nnzpr) is 112. >>> >>> Thanks in advance. >>> >> >> > From brtnfld at uiuc.edu Mon Oct 4 16:52:03 2010 From: brtnfld at uiuc.edu (M. Scot Breitenfeld) Date: Mon, 04 Oct 2010 16:52:03 -0500 Subject: [petsc-users] non-contiguous parallel block of the coefficient matrix and AO functions In-Reply-To: References: <4CA5F177.6090307@uiuc.edu> Message-ID: <4CAA4C83.4070003@uiuc.edu> I'm a little unclear how to use the AO functions if the global nodal numbering results in a non-contiguous parallel block of the coefficient matrix. 
For example, if I have 1D (1 dof per node) problem (2 processors), for this case node number = row position in vector: 0 3 4 7 2 5 6 1 (Apps) o o o o || o o o o 0 1 2 3 4 5 6 7 (Petsc) First I call, CALL AOCreateBasic(PETSC_COMM_WORLD, n, mappings, PETSC_NULL_INTEGER, ao, ierr) where n = 4, mappings=P0:{0,3,4,7}, P1:{2,5,6,1} Petsc will be continuous P0:{0,1,2,3}, P1:{4,5,6,7} so I used NULL. CALL AOApplicationToPetsc(ao,n,mappings, ierr) Now if I want to add a value at the global 7th row (Application row) on processor 0, do I use the Application's numbering 7, or petsc numbering 3. If it's the global Application id: CALL VecSetValues(b, 1, 7 , value, ADD_VALUES, ierr) Does Petsc know if I specify row 7 to put it in row 3 of petsc numbering? Or is the numbering always Petsc's numbering? If I then solve the solution CALL KSPSolve(ksp,b,b,ierr) and I want to get the values back CALL VecGetValues(b,1,row, value, ierr) is 'row' the Application row (7) or Petsc row (3) Thanks, Scot From bsmith at mcs.anl.gov Mon Oct 4 16:57:16 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 4 Oct 2010 16:57:16 -0500 Subject: [petsc-users] non-contiguous parallel block of the coefficient matrix and AO functions In-Reply-To: <4CAA4C83.4070003@uiuc.edu> References: <4CA5F177.6090307@uiuc.edu> <4CAA4C83.4070003@uiuc.edu> Message-ID: PETSc matrix and vector indexing is ALWAYS in the PETSc numbering. You can also make it in the PETSc local numbering with Vec/MatSetLocalToGlobalNumbering() but you can NEVER make it in some other "global numbering". The idea behind using AO is to "renumber" your mesh nodes (or something) to the PETSc numbering so you can then use the usual Vec/MatSetValues(). So you renumber anything that you would use to index into PETSc Vecs and Mats into the PETSc numbering. Barry On Oct 4, 2010, at 4:52 PM, M. Scot Breitenfeld wrote: > I'm a little unclear how to use the AO functions if the global nodal numbering results in a non-contiguous parallel block of the coefficient matrix. > > For example, if I have 1D (1 dof per node) problem (2 processors), for this case node number = row position in vector: > > 0 3 4 7 2 5 6 1 (Apps) > o o o o || o o o o > 0 1 2 3 4 5 6 7 (Petsc) > > First I call, > > CALL AOCreateBasic(PETSC_COMM_WORLD, n, mappings, PETSC_NULL_INTEGER, ao, ierr) > > where > > n = 4, mappings=P0:{0,3,4,7}, P1:{2,5,6,1} > Petsc will be continuous P0:{0,1,2,3}, P1:{4,5,6,7} so I used NULL. > > CALL AOApplicationToPetsc(ao,n,mappings, ierr) > > Now if I want to add a value at the global 7th row (Application row) on processor 0, do I use the Application's numbering 7, or petsc numbering 3. > > If it's the global Application id: > CALL VecSetValues(b, 1, 7 , value, ADD_VALUES, ierr) > Does Petsc know if I specify row 7 to put it in row 3 of petsc numbering? > > Or is the numbering always Petsc's numbering? 
> > If I then solve the solution > CALL KSPSolve(ksp,b,b,ierr) > > and I want to get the values back > > CALL VecGetValues(b,1,row, value, ierr) > > is 'row' the Application row (7) or Petsc row (3) > > Thanks, > Scot > > > > > > > > From stali at geology.wisc.edu Mon Oct 4 16:59:15 2010 From: stali at geology.wisc.edu (Tabrez Ali) Date: Mon, 4 Oct 2010 16:59:15 -0500 Subject: [petsc-users] preallocation for FE matrix In-Reply-To: <87D7287A-B2B5-451F-9160-A8569D58A7E9@mcs.anl.gov> References: <4CAA456B.3020806@geology.wisc.edu> <64EFABC4-0232-4BE2-AA6D-BBB05BAEEBC3@mcs.anl.gov> <4CAA4A90.70704@geology.wisc.edu> <87D7287A-B2B5-451F-9160-A8569D58A7E9@mcs.anl.gov> Message-ID: <3F68C4FC-60F0-4975-92D7-B4BBE26CF76C@geology.wisc.edu> You are right. I overlooked using integer(8) for [nnzpr]. Works fine with PetscInt. Thanks On Oct 4, 2010, at 4:50 PM, Barry Smith wrote: > > Then perhaps your calls to MatSetValues() are wrong? Everything > cannot be write and yet still get the wrong preallocation. > > Here is a trick you can do after providing the preallocation info > to the matrix but before calling MatSetValues() call > MatSetOption(mat,MAT_NEW_NONZERO_LOCATION_ERR) now it will > automatically stop when it finds it has "run out of preallocated > spaces" during a set values so in the debugger you can see what has > happened when the "impossible" actually happens. > > Barry > > > On Oct 4, 2010, at 4:43 PM, Tabrez Ali wrote: > >> Barry Smith wrote: >>> >>> On Oct 4, 2010, at 4:21 PM, Tabrez Ali wrote: >>> >>> >>>> I am trying to assemble a FE matrix (adding one element at a time >>>> from the local stiffness matrix) and cant seem to get the >>>> preallocation right. I am correctly calculating the number of non >>>> zeros per row and storing the value in the array [nnzpr]. >>>> >>> >>> (For sequential case) you cannot be calculating the number of >>> nonzeros per row correctly and yet not getting the preallocation >>> correct. >>> >>> For your simple problem. Print the nonzeros per row you are >>> computing then print the sparse matrix you computing. That will >>> show which rows are not computed or set properly. >>> >> >> I am calculating the nonzeros per row correctly and have checked >> this by explicitly forming the dense matrix. >> >> Here is the stiffness matrix (1 corresponds to non zero location) >> pattern for the same problem. >> >> 1 1 1 1 1 1 1 1 0 0 0 0 >> 1 1 1 1 1 1 1 1 0 0 0 0 >> 1 1 1 1 1 1 1 1 1 1 1 1 >> 1 1 1 1 1 1 1 1 1 1 1 1 >> 1 1 1 1 1 1 1 1 1 1 1 1 >> 1 1 1 1 1 1 1 1 1 1 1 1 >> 1 1 1 1 1 1 1 1 0 0 0 0 >> 1 1 1 1 1 1 1 1 0 0 0 0 >> 0 0 1 1 1 1 0 0 1 1 1 1 >> 0 0 1 1 1 1 0 0 1 1 1 1 >> 0 0 1 1 1 1 0 0 1 1 1 1 >> 0 0 1 1 1 1 0 0 1 1 1 1 >> >> and here is the [nnzpr] array that I passed to MatCreateSeqBAIJ >> >> 8 >> 8 >> 12 >> 12 >> 12 >> 12 >> 8 >> 8 >> 8 >> 8 >> 8 >> 8 >> >> The sum of values in [nnzpr] adds up to 112. >> >> Thanks again >>>> I dont understand why -mat_view_info shows allocated nonzeros to >>>> be 154 when sum(nnzpr) is 112. >>>> >>> >>> If you mess up the count on a row it allocates a few extra for >>> that row (incase you end up needing them) so the number allocated >>> can be more then the number actually needed. >>> >>> Barry >>> >>> >>> >>>> Here is part of the relevant code >>>> >>>> ... >>>> call MatCreateSeqBAIJ(Petsc_Comm_Self, 1, m, n, >>>> petsc_null_integer, nnzpr, Mat_A, ierr) >>>> ... >>>> ! 
Assume a bilinear quad (2 dof per node) >>>> do j1=1,8 >>>> do j2=1,8 >>>> call MatSetValues(Mat_A, 1, indx(j1)-1, 1, indx(j2)-1, >>>> k(j1,j2), Add_Values, ierr) >>>> end do >>>> end do >>>> ... >>>> call MatAssemblyBegin(Mat_A,Mat_Final_Assembly,ierr) >>>> call MatAssemblyEnd(Mat_A,Mat_Final_Assembly,ierr) >>>> ... >>>> >>>> On running it I get >>>> >>>> -bash-3.00$ ./a.out >>> ... >>>> Total non-zero elements estimated i.e., sum(nnzpr) >>>> = 112 >>>> [0] PetscCommDuplicate(): Duplicating a communicator 1140850689 >>>> -2080374784 max tags = 2147483647 >>>> [0] PetscCommDuplicate(): returning tag 2147483647 >>>> [0] MatAssemblyEnd_SeqBAIJ(): Matrix size: 12 X 12, block size 1; >>>> storage space: 42 unneeded, 112 used >>>> [0] MatAssemblyEnd_SeqBAIJ(): Number of mallocs during >>>> MatSetValues is 9 >>>> [0] MatAssemblyEnd_SeqBAIJ(): Most nonzeros blocks in any row is 12 >>>> ... >>>> Matrix Object: >>>> type=seqbaij, rows=12, cols=12 >>>> total: nonzeros=112, allocated nonzeros=154 >>>> block size is 1 >>>> >>>> I dont understand why -mat_view_info shows allocated nonzeros to >>>> be 154 when sum(nnzpr) is 112. >>>> >>>> Thanks in advance. >>>> >>> >>> >> > From stali at geology.wisc.edu Tue Oct 5 12:46:27 2010 From: stali at geology.wisc.edu (Tabrez Ali) Date: Tue, 5 Oct 2010 12:46:27 -0500 Subject: [petsc-users] MatZeroRows In-Reply-To: References: <4CA5F177.6090307@uiuc.edu> <4CAA4C83.4070003@uiuc.edu> Message-ID: <44D0DBBA-84B3-4284-86B4-E71F9E712DF9@geology.wisc.edu> Hi Does MatZeroRows work with SeqSBAIJ matrices? I seem to be getting the following error: [0]PETSC ERROR: No support for this operation for this object type! [0]PETSC ERROR: Mat type seqsbaij! Thanks From jed at 59A2.org Tue Oct 5 12:55:01 2010 From: jed at 59A2.org (Jed Brown) Date: Tue, 5 Oct 2010 19:55:01 +0200 Subject: [petsc-users] MatZeroRows In-Reply-To: <44D0DBBA-84B3-4284-86B4-E71F9E712DF9@geology.wisc.edu> References: <4CA5F177.6090307@uiuc.edu> <4CAA4C83.4070003@uiuc.edu> <44D0DBBA-84B3-4284-86B4-E71F9E712DF9@geology.wisc.edu> Message-ID: On Tue, Oct 5, 2010 at 19:46, Tabrez Ali wrote: > Does MatZeroRows work with SeqSBAIJ matrices? > > I seem to be getting the following error: > > [0]PETSC ERROR: No support for this operation for this object type! > [0]PETSC ERROR: Mat type seqsbaij! > The message is correct, it's not supported. In fact, that logical operation cannot be supported by a symmetric format because it is a non-symmetric modification. A "zero rows and colums" could be implemented for SBAIJ, but zeroing columns is not efficient for non-symmetric formats so you would be committing to a specific format. If you want the symmetric zero-rows-and-columns, I recommend doing it at the assembly level. You can map the "zeroed" indices to a negative number before MatSetValues, or you can set a LocalToGlobalMapping that does this automatically when you use MatSetValuesLocal. Both of these solutions will perform well with any matrix format. Jed -------------- next part -------------- An HTML attachment was scrubbed... URL: From kenway at utias.utoronto.ca Wed Oct 6 09:35:27 2010 From: kenway at utias.utoronto.ca (Gaetan Kenway) Date: Wed, 06 Oct 2010 10:35:27 -0400 Subject: [petsc-users] CHKERRQ in Fortran Message-ID: <4CAC892F.4070907@utias.utoronto.ca> Hello I use PETSc with fortran. I was wondering if the CHKERRQ(ierr) command is supposed to work in Fortran? My compiler (mpif90 with ifort). 
If I do something like this: call VecCreate(WARP_COMM_WORLD,globalSurfForce,ierr) CHKERRQ(ierr) ifort complains there is a syntax error. I also tried: call VecCreate(WARP_COMM_WORLD,globalSurfForce,ierr) call CHKERRQ(ierr) But then it complains that it can't find the chkerrq function while linking. I'm using PETSc-3.1-p3 which was compiled with the following options: --with-shared --download-superlu_dist=yes --download-spooles=yes --download-parmetis=yes --with-fortran-interfaces=1 The subroutine includes: #include "include/finclude/petsc.h" Am I missing something obvious? Gaetan From knepley at gmail.com Wed Oct 6 09:49:59 2010 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 6 Oct 2010 14:49:59 +0000 Subject: [petsc-users] CHKERRQ in Fortran In-Reply-To: <4CAC892F.4070907@utias.utoronto.ca> References: <4CAC892F.4070907@utias.utoronto.ca> Message-ID: Does src/vec/is/examples/tests/ex1f.F work for you? That has CHKERRQ. Thanks, Matt On Wed, Oct 6, 2010 at 2:35 PM, Gaetan Kenway wrote: > Hello > > I use PETSc with fortran. I was wondering if the CHKERRQ(ierr) command is > supposed to work in Fortran? My compiler (mpif90 with ifort). If I do > something like this: > > call VecCreate(WARP_COMM_WORLD,globalSurfForce,ierr) > CHKERRQ(ierr) > ifort complains there is a syntax error. > > I also tried: > call VecCreate(WARP_COMM_WORLD,globalSurfForce,ierr) > call CHKERRQ(ierr) > > But then it complains that it can't find the chkerrq function while > linking. > > I'm using PETSc-3.1-p3 which was compiled with the following options: > > --with-shared --download-superlu_dist=yes --download-spooles=yes > --download-parmetis=yes --with-fortran-interfaces=1 > > The subroutine includes: > #include "include/finclude/petsc.h" > > Am I missing something obvious? > > Gaetan > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Oct 6 09:56:05 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 6 Oct 2010 09:56:05 -0500 Subject: [petsc-users] CHKERRQ in Fortran In-Reply-To: <4CAC892F.4070907@utias.utoronto.ca> References: <4CAC892F.4070907@utias.utoronto.ca> Message-ID: <9247F6C7-C92E-4244-BE61-4042AF918B05@mcs.anl.gov> It is simply CHKERRQ(ierr) NOT call CHKERRQ(ierr) if it was call CHKERRQ() then the action would only happen inside the subroutine that was called which won't provide any useful information. Barry On Oct 6, 2010, at 9:35 AM, Gaetan Kenway wrote: > Hello > > I use PETSc with fortran. I was wondering if the CHKERRQ(ierr) command is supposed to work in Fortran? My compiler (mpif90 with ifort). If I do something like this: > > call VecCreate(WARP_COMM_WORLD,globalSurfForce,ierr) > CHKERRQ(ierr) > ifort complains there is a syntax error. > > I also tried: > call VecCreate(WARP_COMM_WORLD,globalSurfForce,ierr) > call CHKERRQ(ierr) > > But then it complains that it can't find the chkerrq function while linking. > > I'm using PETSc-3.1-p3 which was compiled with the following options: > > --with-shared --download-superlu_dist=yes --download-spooles=yes --download-parmetis=yes --with-fortran-interfaces=1 > > The subroutine includes: > #include "include/finclude/petsc.h" > > Am I missing something obvious? 
> > Gaetan From hsharma.tgjobs at gmail.com Wed Oct 6 14:38:44 2010 From: hsharma.tgjobs at gmail.com (Harsh Sharma) Date: Wed, 6 Oct 2010 14:38:44 -0500 Subject: [petsc-users] Outer Product of two vectors Message-ID: Hi, I noticed that PETSc doesn't have a routine to compute the outer product of two Vec objects. Specifically, I want to compute the outer product for two Vec objects that are large enough to require storage on multiple processors. If I write my own routine in such a way that it performs local computations of the entries of the outer product matrix (called OP from hereon), say using VecGetOwnershipRange to retrieve local entries of the two vectors (Vec a and Vec b) and then computing local blocks of OP, then there will be parts of OP that will not be computed. Say we have 3 processors, and Vec a is (2,4,8,16,32), Vec b is (1, 1/2, 1/4). Then the above approach will produce OP as 2 - - 4 - - - 4 - - 8 - - - 8 instead of 2 1 0.5 4 2 1 8 4 2 16 8 4 32 16 8 So, I chose to implement my routine in this way instead >> I would convert the "row" vector {vector b in a*transpose(b) } into an array of PetscScalar values, using PetscMalloc. Then, I run a for-loop traversing this array-version of vector b where in each iteration of the loop, the processor-specific part of vector a is scaled with the element of b that is being accessed in the loop. Then MatSetValues is called to set the scaled part-of-vector-a in the right locations in OP. However, this approach is also producing the same result as above. My guess is that the array-version of Vec b created using PetscMalloc is not a globally visible array that is being accessed by each processor -- instead, each processor is creating its own version of array-vector-b and then again computing only parts of OP. Further, I think this probably has to do with the fact that the routine PetscMalloc is not collective. How do I get an array of PetscScalar values that is not processor-specific but is visible to all processors? If this cannot be done, how do I go about computing the outer product of two vectors? I am appending my code (routine to compute outer product and the "main" function) and sample output with this mail. 
Thanks very much, Harsh ---- code and sample output ---- static char helpMsg[] = "\nComputes outer product of two vectors.\n"; #include "petscmat.h" // function to compute outer-product matrix of two vectors PetscErrorCode MyMPIVecOuterProd(Mat OP, Vec a, Vec b, InsertMode addV) { /* for vectors a and b, computes a.transpose(b) and adds/stores the resulting outer-product matrix to/in the matrix OP */ PetscInt nRows = 0; /* number of rows of OP matrix */ PetscInt nCols = 0; /* number of columns of OP matrix */ PetscInt nA = 0; /* length of vector a */ PetscInt nB = 0; /* length of vector b */ PetscScalar *locAVals; /* array to hold local vector a values */ PetscScalar *locSAVals; /* array to hold scaled local vector a values */ PetscScalar *locBVals; /* array to hold local vector b values */ PetscScalar *bArr; /* array to hold the entire vector b */ PetscInt aLow,aHigh,bLow,bHigh; /* local index-range limits */ PetscInt ia,ib; /* for-loop index variables for a and b vectors */ PetscInt * locRowIdxOP; /* locally-set OP column indices for MatSetValues */ PetscScalar curBVal; /* value of vector b's component with which to scale vector a */ /* get the dimension of vector a */ VecGetSize(a,&nA); /* get the dimension of vector b */ VecGetSize(b,&nB); /* get the dimensions of outer-product matrix */ MatGetSize(OP,&nRows,&nCols); /* check for dimensional compatibility */ if ((nRows != nA) || (nCols != nB)) { SETERRQ(1,"Error: MyMPIVecOuterProd: Dimensional Incompatibility!"); return(1); } /* --------------------------------------------- */ /* first, convert vector b into array of scalars */ /* --------------------------------------------- */ /* allocate memory for array-representation of vector b */ PetscMalloc((nB)*sizeof(PetscScalar),&bArr); /* do local assignment from vector b values to array-representation */ /* first, obtain local range of vector b */ VecGetOwnershipRange(b,&bLow,&bHigh); // bHigh is one more than highest local index /* then, obtain pointer to local elements of vector b */ VecGetArray(b,&locBVals); /* then, assign local values of vector b to corresponding locations in bArr */ for (ib = bLow; ib < bHigh; ib++) { *(bArr + ib) = *(locBVals + ib - bLow); } // end of b for loop /* finally, restore local elements of vector b */ VecRestoreArray(b,&locBVals); /* ------------------------------------------------- */ /* next, scale local values of vector a and add them */ /* to corresponding locations in the OP matrix */ /* ------------------------------------------------- */ /* first, obtain local range of vector a */ VecGetOwnershipRange(a,&aLow,&aHigh); // aHigh is one more than highest local index /* then, obtain pointer to local elements of vector a */ VecGetArray(a,&locAVals); /* allocate memory for local array-of-scaled-vector-a-values */ PetscMalloc((aHigh-aLow)*sizeof(PetscScalar),&locSAVals); /* allocate memory for locally-set OP row indices */ PetscMalloc((aHigh-aLow)*sizeof(PetscInt),&locRowIdxOP); /* set locally-set OP row indices */ for (ia = 0; ia < aHigh-aLow; ia++) { *(locRowIdxOP + ia) = ia + aLow; } // end of for : set locally-set OP row indices /* next, for each component of vector b (bArr), scale local vector a values */ /* and set them up in the corresponding locations in the OP matrix */ for (ib = 0; ib < nB; ib++) { /* get component of vector b to scale with */ curBVal = *(bArr + ib); /* scale vector a local values */ for (ia = 0; ia < aHigh-aLow; ia++) { *(locSAVals + ia) = (*(locAVals + ia)) * curBVal; } // end of for: scale local vector a values /* set scaled 
values in appropriate locations in OP */ MatSetValues(OP,aHigh-aLow,locRowIdxOP,1,&ib,locSAVals,addV) } // end of for: set scaled values in OP matrix /* next, restore local elements of vector a */ VecRestoreArray(a,&locAVals); /* free memory for local row indices of OP */ PetscFree(locRowIdxOP); /* free memory for local scaled vector a values */ PetscFree(locSAVals); /* free memory for array representation of vector b */ PetscFree(bArr); /* ------------------------------- */ /* finally, assemble the OP matrix */ /* ------------------------------- */ MatAssemblyBegin(OP,MAT_FINAL_ASSEMBLY); MatAssemblyEnd(OP,MAT_FINAL_ASSEMBLY); return(0); } int main (int argc, char **argv) { PetscInt n1=5; /* first vector's dim */ PetscInt n2=3; /* second vector's dim */ Vec v1; /* n1x1 supervector */ Vec v2; /* n2x1 reduced-dimension representation of y */ Mat OP1; /* n1xn2 outer-product matrix */ PetscInt ii; /* for-loop index variables */ PetscInt * locRowIdx1; /* locally-set row indices for VecSetValues */ PetscScalar * locRowVals1; /* array to hold local vector values */ PetscInt low1, high1; /* variables to get local-indices' range */ PetscInt * locRowIdx2; /* locally-set row indices for VecSetValues */ PetscScalar * locRowVals2; /* array to hold local vector values */ PetscInt low2, high2; /* variables to get local-indices' range */ PetscInitialize(&argc,&argv,(char*)0,helpMsg); VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,n1,&v1); VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,n2,&v2); VecZeroEntries(v1); VecZeroEntries(v2); MatCreateMPIDense(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,n1,n2,PETSC_NULL,&OP1); MatZeroEntries(OP1); VecGetOwnershipRange(v1,&low1,&high1); PetscMalloc((high1-low1)*sizeof(PetscInt),&locRowIdx1); PetscMalloc((high1-low1)*sizeof(PetscScalar),&locRowVals1); /* set locally-set indices and values */ for (ii = 0; ii < high1-low1; ii++) { *(locRowIdx1 + ii) = ii + low1; *(locRowVals1 + ii) = pow(2,ii+low1+1); } // end of for : set locally-set indices and values VecSetValues(v1,high1-low1,locRowIdx1,locRowVals1,INSERT_VALUES); PetscFree(locRowIdx1); errCode += PetscFree(locRowVals1); /* now, assemble the vector v1 */ VecAssemblyBegin(v1); VecAssemblyEnd(v1); VecGetOwnershipRange(v2,&low2,&high2); PetscMalloc((high2-low2)*sizeof(PetscInt),&locRowIdx2); PetscMalloc((high2-low2)*sizeof(PetscScalar),&locRowVals2); /* set locally-set indices and values */ for (ii = 0; ii < high2-low2; ii++) { *(locRowIdx2 + ii) = ii + low2; *(locRowVals2 + ii) = 1.0 / pow(2,ii+low2); } // end of for : set locally-set indices and values VecSetValues(v2,high2-low2,locRowIdx2,locRowVals2,INSERT_VALUES); PetscFree(locRowIdx2); errCode += PetscFree(locRowVals2); /* now, assemble the vector v2 */ VecAssemblyBegin(v2); VecAssemblyEnd(v2); MyMPIVecOuterProd(OP1, v1, v2, INSERT_VALUES); PetscPrintf(PETSC_COMM_WORLD,"---- vector v1 ----\n"); VecView(v1,PETSC_VIEWER_STDOUT_WORLD); PetscPrintf(PETSC_COMM_WORLD,"---- vector v1 ----\n"); PetscPrintf(PETSC_COMM_WORLD,"---- vector v2 ----\n"); VecView(v2,PETSC_VIEWER_STDOUT_WORLD); PetscPrintf(PETSC_COMM_WORLD,"---- vector v2 ----\n"); PetscPrintf(PETSC_COMM_WORLD,"---- matrix OP1 ----\n"); MatView(OP1,PETSC_VIEWER_STDOUT_WORLD); PetscPrintf(PETSC_COMM_WORLD,"---- matrix OP1 ----\n"); /* destroy OP1 */ MatDestroy(OP1); /* destroy v1 */ VecDestroy(v1); /* destroy v2 */ VecDestroy(v2); PetscFinalize(); return 0; } [hsharma at ifp-32]$ petscmpiexec -np 3 ./OuterProductCheckOutput ---- vector v1 ---- Process [0] 2 4 Process [1] 8 16 Process [2] 32 ---- vector v1 ---- ---- 
vector v2 ---- Process [0] 1 Process [1] 0.5 Process [2] 0.25 ---- vector v2 ---- ---- matrix OP1 ---- 2.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 4.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 4.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 8.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 8.0000000000000000e+00 ---- matrix OP1 ---- ---- code and sample output ---- -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Oct 6 14:42:50 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 6 Oct 2010 14:42:50 -0500 Subject: [petsc-users] Outer Product of two vectors In-Reply-To: References: Message-ID: <9A2E8EC6-AC16-43CD-97A5-48E1EA73A20D@mcs.anl.gov> I think VecScatterCreateToAll() is what you need. Something like VecScatterCreateToAll(x,&ctx,&xeveryone); VecScatterBegin(ctx,x,everyone,INSERT_VALUES); VecGetArray(xeveryone,&xvalues); Now every process has all the values of x in the array xvalues On Oct 6, 2010, at 2:38 PM, Harsh Sharma wrote: > Hi, > > I noticed that PETSc doesn't have a routine to compute > the outer product of two Vec objects. > > Specifically, I want to compute the outer product for two > Vec objects that are large enough to require storage on > multiple processors. > > If I write my own routine in such a way that it performs > local computations of the entries of the outer product > matrix (called OP from hereon), say using > VecGetOwnershipRange to retrieve local entries of the > two vectors (Vec a and Vec b) and then computing local > blocks of OP, then there will be parts of OP that will not > be computed. > > Say we have 3 processors, and > Vec a is (2,4,8,16,32), Vec b is (1, 1/2, 1/4). Then the > above approach will produce OP as > > 2 - - > 4 - - > - 4 - > - 8 - > - - 8 > > instead of > > 2 1 0.5 > 4 2 1 > 8 4 2 > 16 8 4 > 32 16 8 > > So, I chose to implement my routine in this way instead >> > I would convert the "row" vector {vector b in a*transpose(b) } > into an array of PetscScalar values, using PetscMalloc. > Then, I run a for-loop traversing this array-version of vector b > where in each iteration of the loop, the processor-specific > part of vector a is scaled with the element of b that is being > accessed in the loop. Then MatSetValues is called to set > the scaled part-of-vector-a in the right locations in OP. > > However, this approach is also producing the same result as > above. My guess is that the array-version of Vec b created > using PetscMalloc is not a globally visible array that is being accessed > by each processor -- instead, each processor is creating its own > version of array-vector-b and then again computing only parts of > OP. Further, I think this probably has to do with the fact that > the routine PetscMalloc is not collective. > > How do I get an array of PetscScalar values that is not > processor-specific but is visible to all processors? If this > cannot be done, how do I go about computing the outer > product of two vectors? > > I am appending my code (routine to compute outer product and > the "main" function) and sample output with this mail. 
> > Thanks very much, > Harsh > > ---- code and sample output ---- > > static char helpMsg[] = "\nComputes outer product of two vectors.\n"; > #include "petscmat.h" > > // function to compute outer-product matrix of two vectors > PetscErrorCode MyMPIVecOuterProd(Mat OP, Vec a, Vec b, InsertMode addV) > { > /* > for vectors a and b, computes a.transpose(b) and > adds/stores the resulting outer-product matrix > to/in the matrix OP > */ > > PetscInt nRows = 0; /* number of rows of OP matrix */ > PetscInt nCols = 0; /* number of columns of OP matrix */ > PetscInt nA = 0; /* length of vector a */ > PetscInt nB = 0; /* length of vector b */ > > PetscScalar *locAVals; /* array to hold local vector a values */ > PetscScalar *locSAVals; /* array to hold scaled local vector a values */ > PetscScalar *locBVals; /* array to hold local vector b values */ > PetscScalar *bArr; /* array to hold the entire vector b */ > > PetscInt aLow,aHigh,bLow,bHigh; /* local index-range limits */ > PetscInt ia,ib; /* for-loop index variables for a and b vectors */ > PetscInt * locRowIdxOP; /* locally-set OP column indices for MatSetValues */ > > PetscScalar curBVal; /* value of vector b's component with which to scale vector a */ > > /* get the dimension of vector a */ > VecGetSize(a,&nA); > /* get the dimension of vector b */ > VecGetSize(b,&nB); > > /* get the dimensions of outer-product matrix */ > MatGetSize(OP,&nRows,&nCols); > > /* check for dimensional compatibility */ > if ((nRows != nA) || (nCols != nB)) { > SETERRQ(1,"Error: MyMPIVecOuterProd: Dimensional Incompatibility!"); > return(1); > } > > /* --------------------------------------------- */ > /* first, convert vector b into array of scalars */ > /* --------------------------------------------- */ > > /* allocate memory for array-representation of vector b */ > PetscMalloc((nB)*sizeof(PetscScalar),&bArr); > > /* do local assignment from vector b values to array-representation */ > /* first, obtain local range of vector b */ > VecGetOwnershipRange(b,&bLow,&bHigh); // bHigh is one more than highest local index > /* then, obtain pointer to local elements of vector b */ > VecGetArray(b,&locBVals); > /* then, assign local values of vector b to corresponding locations in bArr */ > for (ib = bLow; ib < bHigh; ib++) { > *(bArr + ib) = *(locBVals + ib - bLow); > } // end of b for loop > /* finally, restore local elements of vector b */ > VecRestoreArray(b,&locBVals); > > /* ------------------------------------------------- */ > /* next, scale local values of vector a and add them */ > /* to corresponding locations in the OP matrix */ > /* ------------------------------------------------- */ > > /* first, obtain local range of vector a */ > VecGetOwnershipRange(a,&aLow,&aHigh); // aHigh is one more than highest local index > /* then, obtain pointer to local elements of vector a */ > VecGetArray(a,&locAVals); > /* allocate memory for local array-of-scaled-vector-a-values */ > PetscMalloc((aHigh-aLow)*sizeof(PetscScalar),&locSAVals); > /* allocate memory for locally-set OP row indices */ > PetscMalloc((aHigh-aLow)*sizeof(PetscInt),&locRowIdxOP); > > /* set locally-set OP row indices */ > for (ia = 0; ia < aHigh-aLow; ia++) { > *(locRowIdxOP + ia) = ia + aLow; > } // end of for : set locally-set OP row indices > > /* next, for each component of vector b (bArr), scale local vector a values */ > /* and set them up in the corresponding locations in the OP matrix */ > for (ib = 0; ib < nB; ib++) { > /* get component of vector b to scale with */ > curBVal = *(bArr + ib); 
> /* scale vector a local values */ > for (ia = 0; ia < aHigh-aLow; ia++) { > *(locSAVals + ia) = (*(locAVals + ia)) * curBVal; > } // end of for: scale local vector a values > /* set scaled values in appropriate locations in OP */ > MatSetValues(OP,aHigh-aLow,locRowIdxOP,1,&ib,locSAVals,addV); > } // end of for: set scaled values in OP matrix > > /* next, restore local elements of vector a */ > VecRestoreArray(a,&locAVals); > /* free memory for local row indices of OP */ > PetscFree(locRowIdxOP); > /* free memory for local scaled vector a values */ > PetscFree(locSAVals); > /* free memory for array representation of vector b */ > PetscFree(bArr); > > /* ------------------------------- */ > /* finally, assemble the OP matrix */ > /* ------------------------------- */ > MatAssemblyBegin(OP,MAT_FINAL_ASSEMBLY); > MatAssemblyEnd(OP,MAT_FINAL_ASSEMBLY); > > return(0); > } > > int main (int argc, char **argv) { > PetscInt n1=5; /* first vector's dim */ > PetscInt n2=3; /* second vector's dim */ > > Vec v1; /* n1x1 supervector */ > Vec v2; /* n2x1 reduced-dimension representation of y */ > Mat OP1; /* n1xn2 outer-product matrix */ > > PetscInt ii; /* for-loop index variables */ > > PetscInt * locRowIdx1; /* locally-set row indices for VecSetValues */ > PetscScalar * locRowVals1; /* array to hold local vector values */ > PetscInt low1, high1; /* variables to get local-indices' range */ > > PetscInt * locRowIdx2; /* locally-set row indices for VecSetValues */ > PetscScalar * locRowVals2; /* array to hold local vector values */ > PetscInt low2, high2; /* variables to get local-indices' range */ > > > PetscInitialize(&argc,&argv,(char*)0,helpMsg); > > > VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,n1,&v1); > VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,n2,&v2); > VecZeroEntries(v1); > VecZeroEntries(v2); > > MatCreateMPIDense(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,n1,n2,PETSC_NULL,&OP1); > MatZeroEntries(OP1); > > > VecGetOwnershipRange(v1,&low1,&high1); > PetscMalloc((high1-low1)*sizeof(PetscInt),&locRowIdx1); > PetscMalloc((high1-low1)*sizeof(PetscScalar),&locRowVals1); > /* set locally-set indices and values */ > for (ii = 0; ii < high1-low1; ii++) { > *(locRowIdx1 + ii) = ii + low1; > *(locRowVals1 + ii) = pow(2,ii+low1+1); > } // end of for : set locally-set indices and values > VecSetValues(v1,high1-low1,locRowIdx1,locRowVals1,INSERT_VALUES); > PetscFree(locRowIdx1); PetscFree(locRowVals1); > /* now, assemble the vector v1 */ > VecAssemblyBegin(v1); > VecAssemblyEnd(v1); > > VecGetOwnershipRange(v2,&low2,&high2); > PetscMalloc((high2-low2)*sizeof(PetscInt),&locRowIdx2); > PetscMalloc((high2-low2)*sizeof(PetscScalar),&locRowVals2); > /* set locally-set indices and values */ > for (ii = 0; ii < high2-low2; ii++) { > *(locRowIdx2 + ii) = ii + low2; > *(locRowVals2 + ii) = 1.0 / pow(2,ii+low2); > } // end of for : set locally-set indices and values > VecSetValues(v2,high2-low2,locRowIdx2,locRowVals2,INSERT_VALUES); > PetscFree(locRowIdx2); PetscFree(locRowVals2); > /* now, assemble the vector v2 */ > VecAssemblyBegin(v2); > VecAssemblyEnd(v2); > > > MyMPIVecOuterProd(OP1, v1, v2, INSERT_VALUES); > > PetscPrintf(PETSC_COMM_WORLD,"---- vector v1 ----\n"); > VecView(v1,PETSC_VIEWER_STDOUT_WORLD); > PetscPrintf(PETSC_COMM_WORLD,"---- vector v1 ----\n"); > PetscPrintf(PETSC_COMM_WORLD,"---- vector v2 ----\n"); > VecView(v2,PETSC_VIEWER_STDOUT_WORLD); > PetscPrintf(PETSC_COMM_WORLD,"---- vector v2 ----\n"); > > PetscPrintf(PETSC_COMM_WORLD,"---- matrix OP1 ----\n"); > 
MatView(OP1,PETSC_VIEWER_STDOUT_WORLD); > PetscPrintf(PETSC_COMM_WORLD,"---- matrix OP1 ----\n"); > > > /* destroy OP1 */ > MatDestroy(OP1); > /* destroy v1 */ > VecDestroy(v1); > /* destroy v2 */ > VecDestroy(v2); > > PetscFinalize(); > return 0; > } > > [hsharma at ifp-32]$ petscmpiexec -np 3 ./OuterProductCheckOutput > > ---- vector v1 ---- > Process [0] > 2 > 4 > Process [1] > 8 > 16 > Process [2] > 32 > ---- vector v1 ---- > ---- vector v2 ---- > Process [0] > 1 > Process [1] > 0.5 > Process [2] > 0.25 > ---- vector v2 ---- > ---- matrix OP1 ---- > 2.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 4.0000000000000000e+00 0.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 4.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 8.0000000000000000e+00 0.0000000000000000e+00 > 0.0000000000000000e+00 0.0000000000000000e+00 8.0000000000000000e+00 > ---- matrix OP1 ---- > > ---- code and sample output ---- > From jed at 59A2.org Wed Oct 6 14:44:20 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 6 Oct 2010 21:44:20 +0200 Subject: [petsc-users] Outer Product of two vectors In-Reply-To: References: Message-ID: On Wed, Oct 6, 2010 at 21:38, Harsh Sharma wrote: > Specifically, I want to compute the outer product for two > Vec objects that are large enough to require storage on > multiple processors. > Don't do this, create a MatShell and apply it's action with VecDot followed by VecAXPY. Storing the dense outer product is insane. Jed -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlmackie862 at gmail.com Wed Oct 6 16:19:50 2010 From: rlmackie862 at gmail.com (Randall Mackie) Date: Wed, 6 Oct 2010 14:19:50 -0700 Subject: [petsc-users] DASetCoordinates In-Reply-To: References: <5AFDEDBA-9F3F-4F46-8997-162BA44DEB4E@gmail.com> Message-ID: > > > > > > Maybe I'm just being dense, but I don't see how or where I set these at > each level within > > the DMMG framework. Isn't all that buried within the DMMG routines, or > is there some > > way for me to specify this? > > It's kinda raw: > > for (i=0; i DA da = (DA)dmmg[i]->dm; > /* Set coordinates */ > } > I'm trying to write a wrapper in c to do this, but as a first step I'm just trying to set uniform coordinates at each level. However, I am simultaneously trying to learn c, learn about pointers, and figure out how this all works with PETSc, and I am having a problem which is probably very obvious to you expert c programmers, but not obvious to me (a long-time fortran programmer). Here is what I have so far, which is based largely on what Jed wrote above and zdmmg.c which is one of the Fortran wrappers you guys have written: #include "private/fortranimpl.h" #include "petscda.h" #include "petscdmmg.h" #if defined(PETSC_HAVE_FORTRAN_CAPS) #define dasetcoords_ DASETCOORDS #elif !defined(PETSC_HAVE_FORTRAN_UNDERSCORE) #define dasetcoords_ dasetcoords #endif EXTERN_C_BEGIN void PETSC_STDCALL dasetcoords_(DMMG **dmmg, PetscErrorCode *ierr) { PetscInt i, nlev; nlev = DMMGGetLevels(*dmmg); printf("The number of levels is %d \n",nlev); for (i=0; i < nlev; i++){ DA da = (DA)dmmg[i]->dm; *ierr = DASetUniformCoordinates(da,0.0,1.0,0.0,1.0,0.0,1.0); } } EXTERN_C_END But when I compile it I get: dasetcoords.c(22): error: expression must have pointer-to-struct-or-union type DA da = (DA)dmmg[i]->dm; ^ So, I've tried various things, but can't figure out how to do this correctly. Can someone who is a c expert help me here on how to get the DA set for each level? Thanks, Randy M. 
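A minimal sketch of the MatShell approach Jed suggests above in the outer-product thread: keep v1 and v2 in a shell context and apply y = (v2 . x) v1 on the fly instead of ever storing the dense product. The names OuterProdCtx and MatMult_OuterProd are invented for illustration, error checking is omitted, real scalars are assumed (for complex you would choose between VecDot and VecTDot), and the petsc-3.1-era C API used elsewhere in this thread is assumed.

#include "petscmat.h"

typedef struct { Vec a, b; } OuterProdCtx;   /* stands in for OP = a b^T, never stored */

/* y = (a b^T) x = (b . x) a, using only vector work */
PetscErrorCode MatMult_OuterProd(Mat OP, Vec x, Vec y)
{
  OuterProdCtx *ctx;
  PetscScalar  dot;

  MatShellGetContext(OP,(void**)&ctx);
  VecDot(ctx->b,x,&dot);                     /* scalar b . x (real case) */
  VecSet(y,0.0);
  VecAXPY(y,dot,ctx->a);                     /* y = (b . x) a */
  return(0);
}

/* usage once v1 (length n1) and v2 (length n2) are assembled:
     OuterProdCtx ctx;  Mat OP;  PetscInt n1loc, n2loc;
     ctx.a = v1;  ctx.b = v2;
     VecGetLocalSize(v1,&n1loc);  VecGetLocalSize(v2,&n2loc);
     MatCreateShell(PETSC_COMM_WORLD,n1loc,n2loc,n1,n2,&ctx,&OP);
     MatShellSetOperation(OP,MATOP_MULT,(void(*)(void))MatMult_OuterProd);
   after which MatMult(OP,x,y) behaves like the dense product at vector cost. */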
-------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Wed Oct 6 16:24:42 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 6 Oct 2010 23:24:42 +0200 Subject: [petsc-users] DASetCoordinates In-Reply-To: References: <5AFDEDBA-9F3F-4F46-8997-162BA44DEB4E@gmail.com> Message-ID: On Wed, Oct 6, 2010 at 23:19, Randall Mackie wrote: > void PETSC_STDCALL dasetcoords_(DMMG **dmmg, PetscErrorCode *ierr) > This is a pointer-pointer, due to the way Fortran passes parameters. > { > PetscInt i, nlev; > > nlev = DMMGGetLevels(*dmmg); > It has to be dereferenced once to get the thing we usually work with in C. > printf("The number of levels is %d \n",nlev); > > for (i=0; i < nlev; i++){ > > DA da = (DA)dmmg[i]->dm; > You can write DA da = (DA) (*dmmg)[i]->dm; Jed -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlmackie862 at gmail.com Wed Oct 6 16:35:58 2010 From: rlmackie862 at gmail.com (Randall Mackie) Date: Wed, 6 Oct 2010 14:35:58 -0700 Subject: [petsc-users] DASetCoordinates In-Reply-To: References: <5AFDEDBA-9F3F-4F46-8997-162BA44DEB4E@gmail.com> Message-ID: Thanks, I appreciate your help! Randy M. On Wed, Oct 6, 2010 at 2:24 PM, Jed Brown wrote: > On Wed, Oct 6, 2010 at 23:19, Randall Mackie wrote: > >> void PETSC_STDCALL dasetcoords_(DMMG **dmmg, PetscErrorCode *ierr) >> > > This is a pointer-pointer, due to the way Fortran passes parameters. > > >> { >> PetscInt i, nlev; >> >> nlev = DMMGGetLevels(*dmmg); >> > > It has to be dereferenced once to get the thing we usually work with in C. > > >> printf("The number of levels is %d \n",nlev); >> >> for (i=0; i < nlev; i++){ >> >> DA da = (DA)dmmg[i]->dm; >> > > You can write DA da = (DA) (*dmmg)[i]->dm; > > Jed > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vedaprakashsubramanian at gmail.com Wed Oct 6 20:56:28 2010 From: vedaprakashsubramanian at gmail.com (vedaprakash subramanian) Date: Wed, 6 Oct 2010 19:56:28 -0600 Subject: [petsc-users] How to pass the parameters for KSP Message-ID: I need my KSP to take in Tolerence, flag, err, 3 matrices, right hand side vector. How can I make it to take all the parameters. On Mon, Sep 20, 2010 at 11:00 AM, wrote: > Send petsc-users mailing list submissions to > petsc-users at mcs.anl.gov > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > or, via email, send a message with subject or body 'help' to > petsc-users-request at mcs.anl.gov > > You can reach the person managing the list at > petsc-users-owner at mcs.anl.gov > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of petsc-users digest..." > > > Today's Topics: > > 1. How to pass the parameters for KSP (vedaprakash subramanian) > 2. Re: How to pass the parameters for KSP (Barry Smith) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Sun, 19 Sep 2010 19:05:48 -0600 > From: vedaprakash subramanian > Subject: [petsc-users] How to pass the parameters for KSP > To: petsc-users at mcs.anl.gov > Message-ID: > > > > Content-Type: text/plain; charset="iso-8859-1" > > I am converting a MATLAB function into a KSP solver. I am doing it similar > to BiCGStab. But I wanted to know how to pass the arguments of the function > into KSP solver. > > Thanks, > Vedaprakash > -------------- next part -------------- > An HTML attachment was scrubbed... 
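For the DASetCoordinates thread just above, folding Jed's correction back into the original wrapper gives the following sketch (same includes and calling convention as Randall's post; uniform coordinates only, and the only error handling is passing ierr back to Fortran):

void PETSC_STDCALL dasetcoords_(DMMG **dmmg, PetscErrorCode *ierr)
{
  PetscInt i, nlev;

  nlev = DMMGGetLevels(*dmmg);            /* dereference once: *dmmg is the DMMG array C works with */
  for (i = 0; i < nlev; i++) {
    DA da = (DA)(*dmmg)[i]->dm;           /* index the array first, then take its DM */
    *ierr = DASetUniformCoordinates(da,0.0,1.0,0.0,1.0,0.0,1.0);
    if (*ierr) return;
  }
  *ierr = 0;
}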
> URL: < > http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20100919/9d902e74/attachment-0001.htm > > > > ------------------------------ > > Message: 2 > Date: Sun, 19 Sep 2010 21:49:47 -0500 > From: Barry Smith > Subject: Re: [petsc-users] How to pass the parameters for KSP > To: PETSc users list > Message-ID: <27B820EF-3A6A-424B-8A43-2A43E82089B3 at mcs.anl.gov> > Content-Type: text/plain; charset=us-ascii > > > What arguments? Do you mean the right hand side x and the matrix? Or do > you mean parameters like the relative tolerance in convergence? > > See src/ksp/ksp/examples/tutorials/ex1.c for a simple example. > > Barry > > On Sep 19, 2010, at 8:05 PM, vedaprakash subramanian wrote: > > > I am converting a MATLAB function into a KSP solver. I am doing it > similar to BiCGStab. But I wanted to know how to pass the arguments of the > function into KSP solver. > > > > Thanks, > > Vedaprakash > > > > ------------------------------ > > _______________________________________________ > petsc-users mailing list > petsc-users at mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > End of petsc-users Digest, Vol 21, Issue 32 > ******************************************* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Oct 6 21:05:57 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 6 Oct 2010 21:05:57 -0500 Subject: [petsc-users] How to pass the parameters for KSP In-Reply-To: References: Message-ID: This discussion should take place on the petsc-dev at mcs.anl.gov mailing list since it involves developing PETSc source code, not using it. Convergence tests are done via a standard method for all Krylov methods so if by "Tolerance" you mean the usual rtol or atol you should use the same mechanism as biCG-stab The matrix that defines the linear system as passed as the first matrix to KSPSetOperators() and as in the biCG-stab code can be accessed via PCGetOperators(). The matrix that is used to construct the preconditioner is the second matrix to KSPSetOperators() and similarly can be accessed via PCGetOperators(). If you have some other matrix for some other purpose then you need to provide a custom function to set it for your solver: see for example how KSPGMRESSetRestart() is handled as a custom function that sets something directly into the KSP_GMRES data structure Similar for whatever this flag and err arguments might be. The right hand side is handled the same by all the KSP solvers, see how it is accessed in KSPSolve_BCGS. You will have to understand exactly the structure and flow of the KSPSolve_BCGS before implementing your own. Barry On Oct 6, 2010, at 8:56 PM, vedaprakash subramanian wrote: > I need my KSP to take in Tolerence, flag, err, 3 matrices, right hand side vector. How can I make it to take all the parameters. > > On Mon, Sep 20, 2010 at 11:00 AM, wrote: > Send petsc-users mailing list submissions to > petsc-users at mcs.anl.gov > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > or, via email, send a message with subject or body 'help' to > petsc-users-request at mcs.anl.gov > > You can reach the person managing the list at > petsc-users-owner at mcs.anl.gov > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of petsc-users digest..." > > > Today's Topics: > > 1. How to pass the parameters for KSP (vedaprakash subramanian) > 2. 
Re: How to pass the parameters for KSP (Barry Smith) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Sun, 19 Sep 2010 19:05:48 -0600 > From: vedaprakash subramanian > Subject: [petsc-users] How to pass the parameters for KSP > To: petsc-users at mcs.anl.gov > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > I am converting a MATLAB function into a KSP solver. I am doing it similar > to BiCGStab. But I wanted to know how to pass the arguments of the function > into KSP solver. > > Thanks, > Vedaprakash > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 2 > Date: Sun, 19 Sep 2010 21:49:47 -0500 > From: Barry Smith > Subject: Re: [petsc-users] How to pass the parameters for KSP > To: PETSc users list > Message-ID: <27B820EF-3A6A-424B-8A43-2A43E82089B3 at mcs.anl.gov> > Content-Type: text/plain; charset=us-ascii > > > What arguments? Do you mean the right hand side x and the matrix? Or do you mean parameters like the relative tolerance in convergence? > > See src/ksp/ksp/examples/tutorials/ex1.c for a simple example. > > Barry > > On Sep 19, 2010, at 8:05 PM, vedaprakash subramanian wrote: > > > I am converting a MATLAB function into a KSP solver. I am doing it similar to BiCGStab. But I wanted to know how to pass the arguments of the function into KSP solver. > > > > Thanks, > > Vedaprakash > > > > ------------------------------ > > _______________________________________________ > petsc-users mailing list > petsc-users at mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > End of petsc-users Digest, Vol 21, Issue 32 > ******************************************* > From vedaprakashsubramanian at gmail.com Wed Oct 6 21:26:13 2010 From: vedaprakashsubramanian at gmail.com (vedaprakash subramanian) Date: Wed, 6 Oct 2010 20:26:13 -0600 Subject: [petsc-users] How to pass the parameters for KSP In-Reply-To: References: Message-ID: Can anyone tell me whether the concept I understood is correct or not. If I am having a MATLAB function func(Matrix A, Vector x, Vector b, Matrix Q, Matrix M, int max_it, int tol, Vec* sol, int* err, int* iter, int* flag) to be converted into a KSP solver, then I have to do these following steps. 1. In KSPSetUp_func(KSP ksp), initialize KSPDefaultGetWork(ksp,11); // This tells the KSP that it is going to take 11 arguments 2. In KSPSolve_func(KSP ksp), declare Mat A, Q, M; Vec x, b, *sol; PetscInt max_it, *flag, *iter; PetscScalar tol, *err; A = ksp->work[0]; x = ksp->work[1]; b = ksp->work[2]; Q = ksp->work[3]; M = ksp->work[4]; max_it = ksp->work[5]; tol = ksp->work[6]; *sol = ksp->work[7]; *err = ksp->work[8]; *iter = ksp->work[9]; *flag = ksp->work[10]; ------------------------------ Is the above said is correct. Is that the way to get the arguments of the KSPSolve. Moreover, I have a doubt. What are these variables that are been used in cg.c stored_max_it = ksp->max_it; X = ksp->vec_sol; B = ksp->vec_rhs; Where are these variables (ksp->max_it, ksp->vec_sol and ksp->vec_rhs) getting initialized. -Vedaprakash On Wed, Oct 6, 2010 at 7:56 PM, vedaprakash subramanian < vedaprakashsubramanian at gmail.com> wrote: > I need my KSP to take in Tolerence, flag, err, 3 matrices, right hand side > vector. How can I make it to take all the parameters. 
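A compressed sketch of what Barry describes above: the extra operator goes into the method's own context through a custom setter, and the matrices given to KSPSetOperators() are recovered inside the solve from the PC. All of the names KSP_MyMethod, KSPMyMethodSetQ and KSPSolve_MyMethod are invented for illustration; the real KSPGMRESSetRestart() dispatches through a function-pointer lookup rather than touching ksp->data directly, and the private header path follows the petsc-3.1 layout used elsewhere in this thread.

#include "private/kspimpl.h"

typedef struct { Mat Q, M; } KSP_MyMethod;   /* per-method context, like KSP_BCGS / KSP_GMRES */

/* custom setter in the spirit of KSPGMRESSetRestart(): stash the extra operator */
PetscErrorCode KSPMyMethodSetQ(KSP ksp, Mat Q)
{
  KSP_MyMethod *my = (KSP_MyMethod*)ksp->data;
  my->Q = Q;
  return(0);
}

PetscErrorCode KSPSolve_MyMethod(KSP ksp)
{
  KSP_MyMethod *my = (KSP_MyMethod*)ksp->data;
  PC            pc;
  Mat           Amat, Pmat;
  MatStructure  pflag;

  KSPGetPC(ksp,&pc);
  PCGetOperators(pc,&Amat,&Pmat,&pflag);     /* the matrices from KSPSetOperators() */
  /* ... iterate with Amat, Pmat, my->Q, my->M; the right-hand side, solution and
     work vectors are picked up as sketched after Barry's next reply below ... */
  return(0);
}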
> > On Mon, Sep 20, 2010 at 11:00 AM, wrote: > >> Send petsc-users mailing list submissions to >> petsc-users at mcs.anl.gov >> >> To subscribe or unsubscribe via the World Wide Web, visit >> https://lists.mcs.anl.gov/mailman/listinfo/petsc-users >> or, via email, send a message with subject or body 'help' to >> petsc-users-request at mcs.anl.gov >> >> You can reach the person managing the list at >> petsc-users-owner at mcs.anl.gov >> >> When replying, please edit your Subject line so it is more specific >> than "Re: Contents of petsc-users digest..." >> >> >> Today's Topics: >> >> 1. How to pass the parameters for KSP (vedaprakash subramanian) >> 2. Re: How to pass the parameters for KSP (Barry Smith) >> >> >> ---------------------------------------------------------------------- >> >> Message: 1 >> Date: Sun, 19 Sep 2010 19:05:48 -0600 >> From: vedaprakash subramanian >> Subject: [petsc-users] How to pass the parameters for KSP >> To: petsc-users at mcs.anl.gov >> Message-ID: >> >> > >> Content-Type: text/plain; charset="iso-8859-1" >> >> I am converting a MATLAB function into a KSP solver. I am doing it similar >> to BiCGStab. But I wanted to know how to pass the arguments of the >> function >> into KSP solver. >> >> Thanks, >> Vedaprakash >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: < >> http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20100919/9d902e74/attachment-0001.htm >> > >> >> ------------------------------ >> >> Message: 2 >> Date: Sun, 19 Sep 2010 21:49:47 -0500 >> From: Barry Smith >> Subject: Re: [petsc-users] How to pass the parameters for KSP >> To: PETSc users list >> Message-ID: <27B820EF-3A6A-424B-8A43-2A43E82089B3 at mcs.anl.gov> >> Content-Type: text/plain; charset=us-ascii >> >> >> What arguments? Do you mean the right hand side x and the matrix? Or do >> you mean parameters like the relative tolerance in convergence? >> >> See src/ksp/ksp/examples/tutorials/ex1.c for a simple example. >> >> Barry >> >> On Sep 19, 2010, at 8:05 PM, vedaprakash subramanian wrote: >> >> > I am converting a MATLAB function into a KSP solver. I am doing it >> similar to BiCGStab. But I wanted to know how to pass the arguments of the >> function into KSP solver. >> > >> > Thanks, >> > Vedaprakash >> >> >> >> ------------------------------ >> >> _______________________________________________ >> petsc-users mailing list >> petsc-users at mcs.anl.gov >> https://lists.mcs.anl.gov/mailman/listinfo/petsc-users >> >> >> End of petsc-users Digest, Vol 21, Issue 32 >> ******************************************* >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Oct 6 21:34:44 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 6 Oct 2010 21:34:44 -0500 Subject: [petsc-users] How to pass the parameters for KSP In-Reply-To: References: Message-ID: <09484FD0-62F7-4BD9-BF85-AD57BB8C4BD0@mcs.anl.gov> On Oct 6, 2010, at 9:26 PM, vedaprakash subramanian wrote: > Can anyone tell me whether the concept I understood is correct or not. > > If I am having a MATLAB function func(Matrix A, Vector x, Vector b, Matrix Q, Matrix M, int max_it, int tol, Vec* sol, int* err, int* iter, int* flag) to be converted into a KSP solver, then I have to do these following steps. > > 1. In KSPSetUp_func(KSP ksp), initialize KSPDefaultGetWork(ksp,11); // This tells the KSP that it is going to take 11 arguments > 2. 
In KSPSolve_func(KSP ksp), declare > Mat A, Q, M; > Vec x, b, *sol; > PetscInt max_it, *flag, *iter; > PetscScalar tol, *err; > > A = ksp->work[0]; > x = ksp->work[1]; > b = ksp->work[2]; > Q = ksp->work[3]; > M = ksp->work[4]; > max_it = ksp->work[5]; > tol = ksp->work[6]; > *sol = ksp->work[7]; > *err = ksp->work[8]; > *iter = ksp->work[9]; > *flag = ksp->work[10]; > > ------------------------------ > Is the above said is correct. Is that the way to get the arguments of the KSPSolve. No this is very wrong. The KSPDefaultGetWork(ksp, just gets work vectors needed by the solver. It has nothing to do with number of arguments. > > Moreover, I have a doubt. What are these variables that are been used in cg.c > > stored_max_it = ksp->max_it; > X = ksp->vec_sol; > B = ksp->vec_rhs; > > Where are these variables (ksp->max_it, ksp->vec_sol and ksp->vec_rhs) getting initialized. This is done in the various base KSP routines like KSPCreate() KSPSetTolerances(). The vec_sol and vec_rhs are set in the KSPSolve() routine. You are going to need to understand the workings of KSP for BCGS much better before tackling a new metthod. Barry > > -Vedaprakash > > On Wed, Oct 6, 2010 at 7:56 PM, vedaprakash subramanian wrote: > I need my KSP to take in Tolerence, flag, err, 3 matrices, right hand side vector. How can I make it to take all the parameters. > > On Mon, Sep 20, 2010 at 11:00 AM, wrote: > Send petsc-users mailing list submissions to > petsc-users at mcs.anl.gov > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > or, via email, send a message with subject or body 'help' to > petsc-users-request at mcs.anl.gov > > You can reach the person managing the list at > petsc-users-owner at mcs.anl.gov > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of petsc-users digest..." > > > Today's Topics: > > 1. How to pass the parameters for KSP (vedaprakash subramanian) > 2. Re: How to pass the parameters for KSP (Barry Smith) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Sun, 19 Sep 2010 19:05:48 -0600 > From: vedaprakash subramanian > Subject: [petsc-users] How to pass the parameters for KSP > To: petsc-users at mcs.anl.gov > Message-ID: > > Content-Type: text/plain; charset="iso-8859-1" > > I am converting a MATLAB function into a KSP solver. I am doing it similar > to BiCGStab. But I wanted to know how to pass the arguments of the function > into KSP solver. > > Thanks, > Vedaprakash > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 2 > Date: Sun, 19 Sep 2010 21:49:47 -0500 > From: Barry Smith > Subject: Re: [petsc-users] How to pass the parameters for KSP > To: PETSc users list > Message-ID: <27B820EF-3A6A-424B-8A43-2A43E82089B3 at mcs.anl.gov> > Content-Type: text/plain; charset=us-ascii > > > What arguments? Do you mean the right hand side x and the matrix? Or do you mean parameters like the relative tolerance in convergence? > > See src/ksp/ksp/examples/tutorials/ex1.c for a simple example. > > Barry > > On Sep 19, 2010, at 8:05 PM, vedaprakash subramanian wrote: > > > I am converting a MATLAB function into a KSP solver. I am doing it similar to BiCGStab. But I wanted to know how to pass the arguments of the function into KSP solver. 
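To make Barry's correction above concrete, a sketch of the setup side with the same invented KSP_MyMethod names as before: KSPDefaultGetWork() only reserves scratch vectors, while the right-hand side, solution and iteration limits reach the method through the KSP object itself, not through "arguments".

PetscErrorCode KSPSetUp_MyMethod(KSP ksp)
{
  KSPDefaultGetWork(ksp,3);       /* reserves 3 scratch Vecs as ksp->work[0..2]; says nothing about arguments */
  return(0);
}

/* and near the top of the KSPSolve_MyMethod() sketched earlier:
     Vec      X     = ksp->vec_sol;   -- filled in by KSPSolve(ksp,b,x) before the method runs
     Vec      B     = ksp->vec_rhs;
     Vec      R     = ksp->work[0];   -- one of the scratch vectors from KSPSetUp_MyMethod()
     PetscInt maxit = ksp->max_it;    -- from KSPCreate() defaults, KSPSetTolerances(), -ksp_max_it
*/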
> > > > Thanks, > > Vedaprakash > > > > ------------------------------ > > _______________________________________________ > petsc-users mailing list > petsc-users at mcs.anl.gov > https://lists.mcs.anl.gov/mailman/listinfo/petsc-users > > > End of petsc-users Digest, Vol 21, Issue 32 > ******************************************* > > From sylbar.vainbot at gmail.com Thu Oct 7 04:45:27 2010 From: sylbar.vainbot at gmail.com (Sylvain Barbot) Date: Thu, 7 Oct 2010 02:45:27 -0700 Subject: [petsc-users] V-cycle multigrid with matrix shells Message-ID: Hi, I am trying to implement the multigrid solver to solve a linear inhomogeneous elliptic equation with non-constant coefficients in 3-D with 3 degrees of freedom with matrix shells. The grid is structured with non-uniform sampling. I insist on using matrix shells because of memory limitations. The direct solver gmres, or alike, gives very accurate solutions even for large problems (say, with a grid of 256x256x256), but computation is lengthy. I am hoping to get significant speedup using a multigrid solver. It looks like the DMMG objects may provide all the tools necessary to do so: smoother, restriction and interpolation. But typically in my problems, I do not have an analytical expression for the right-hand side, just a given forcing term in finest resolution with high-frequency content (not easily subsampled). Also I do not typically have any non-trivial first guess. It looks the the default Petsc multi-grid solver starts from the coarser grid and builds a solution with a finer and finer resolution as levels increase. I would like to do the opposite, with the typical smoothing of the residuals, then restriction, smoothing, restriction, then direct solving at the coarsest level, then as many interpolation-correction-smoothing as necessary to go back to the finest level, the typical V cycle. questions: 1) is it at all possible to specify this mode of operation, from the finest to the coarser level, and back? Any examples out there? 2) is it readily possible to use matrix shells with DMMG? I imagine the Jacobian matrix may simply be provided as a matrix shell. Is there any examples of multi-grid methods with shell matrices online? 3) to deal with non-uniform sampling: can I provide the coordinates of the finest grid with DASetCoordinates, then expect DMMG to provide the subsampled coordinates at the coarser levels? Thanks a lot in advance for any advice. Best, Sylvain Barbot From darach at tchpc.tcd.ie Thu Oct 7 05:52:00 2010 From: darach at tchpc.tcd.ie (Darach Golden) Date: Thu, 7 Oct 2010 11:52:00 +0100 Subject: [petsc-users] call petsc real/petsc complex in same application Message-ID: <20101007105200.GA26108@tchpc.tcd.ie> Hi, Is there any (safe) way of using real and complex compiles of petsc in different parts of the same application? something like: ------------------------------------------------------------ Fortran Application main() call MPI_INIT() do things... call routine( uses petsc compiled with --with-scalar-type=real) {petsc initialize ... petsc finalize} do more things... call routine( uses petsc compiled with --with-scalar-type=complex) {petsc initialize ... petsc finalize} keep going... call MPI_FINALIZE() stop ------------------------------------------------------------ The intent here is to save memory in the first call above where complex numbers are not needed. Or is there any way of achieving this effect? Darach -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From jed at 59A2.org Thu Oct 7 06:59:42 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 7 Oct 2010 13:59:42 +0200 Subject: [petsc-users] V-cycle multigrid with matrix shells In-Reply-To: References: Message-ID: On Thu, Oct 7, 2010 at 11:45, Sylvain Barbot wrote: > > questions: > 1) is it at all possible to specify this mode of operation, from the > finest to the coarser level, and back? Any examples out there? > -pc_mg_type multiplicative, or PCMGSetType(pc,PC_MG_MULTIPLICATIVE); > > 2) is it readily possible to use matrix shells with DMMG? I imagine > the Jacobian matrix may simply be provided as a matrix shell. Is there > any examples of multi-grid methods with shell matrices online? > You can do this, but you will have to define a custom smoother. For Jacobi, you can have your MatShell implement MatGetDiagonal and it will work. I thought you could implement MatGetDiagonalBlock for PBJacobi, but it's not currently written that way (though it should be and that would be an easy change to make). If you want to use a multiplicative relaxation like SOR, you would have to implement it yourself. If you need something like ILU for a smoother, then you will have to pay for a matrix. Note that one possibility is to assemble a matrix on all but the finest level so you can use stronger smoothers there, then make do with Jacobi on the finest level. > 3) to deal with non-uniform sampling: can I provide the coordinates of > the finest grid with DASetCoordinates, then expect DMMG to provide the > subsampled coordinates at the coarser levels? Currently no, you have to set them on each level. Perhaps you could do this by rolling a loop over levels and applying MatRestrict using the restriction matrix from DMMG (this might not be the sampling that you want). Jed -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Thu Oct 7 07:03:04 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 7 Oct 2010 14:03:04 +0200 Subject: [petsc-users] call petsc real/petsc complex in same application In-Reply-To: <20101007105200.GA26108@tchpc.tcd.ie> References: <20101007105200.GA26108@tchpc.tcd.ie> Message-ID: On Thu, Oct 7, 2010 at 12:52, Darach Golden wrote: > Is there any (safe) way of using real and complex compiles of petsc in > different parts of the same application? > Not really. > The intent here is to save memory in the first call above where > complex numbers are not needed. > What is the relative size of each system? Jed -------------- next part -------------- An HTML attachment was scrubbed... URL: From darach at tchpc.tcd.ie Thu Oct 7 07:17:42 2010 From: darach at tchpc.tcd.ie (Darach Golden) Date: Thu, 7 Oct 2010 13:17:42 +0100 Subject: [petsc-users] call petsc real/petsc complex in same application In-Reply-To: References: <20101007105200.GA26108@tchpc.tcd.ie> Message-ID: <20101007121742.GD26108@tchpc.tcd.ie> On Thursday the 07 of October 2010 , Jed Brown wrote: > On Thu, Oct 7, 2010 at 12:52, Darach Golden wrote: > > > Is there any (safe) way of using real and complex compiles of petsc in > > different parts of the same application? > > > > Not really. thought so > > The intent here is to save memory in the first call above where > > complex numbers are not needed. > > > > What is the relative size of each system? the purposes are different: - one is a poisson solve using dmmg (real). 
So we don't want to store complex PetscScalars -- or am I just missing the fact that we can use PetscReal here with a complex compile? - the other is operations on (possibly including inversion of) complex matrices The solve and the inversion happen at different parts of an existing code Darach -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From jed at 59A2.org Thu Oct 7 07:33:47 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 7 Oct 2010 14:33:47 +0200 Subject: [petsc-users] call petsc real/petsc complex in same application In-Reply-To: <20101007121742.GD26108@tchpc.tcd.ie> References: <20101007105200.GA26108@tchpc.tcd.ie> <20101007121742.GD26108@tchpc.tcd.ie> Message-ID: On Thu, Oct 7, 2010 at 14:17, Darach Golden wrote: > - one is a poisson solve using dmmg (real). So we don't want to store > complex PetscScalars -- or am I just missing the fact that we can use > PetscReal here with a complex compile? > The matrices and vectors use PetscScalar. > - the other is operations on (possibly including inversion of) complex > matrices > What is the relative size? What sort of problem are you solving and why do you want an inverse? The issue is that unless the complex system is much smaller, the memory and performance gains of using PetscScalar=real for the real problem would actually be small. The worst case would be a large difficult real system and a small, easy complex system. In that case, you might consider using an equivalent real formulation for the complex problem. One could ask to template everything over the scalar type, but there are downsides to that. The more common case seems to be that the complex problem is big and difficult, so the cost of using PetscScalar=complex for the real problem is not too painful. Jed -------------- next part -------------- An HTML attachment was scrubbed... URL: From darach at tchpc.tcd.ie Thu Oct 7 09:10:02 2010 From: darach at tchpc.tcd.ie (Darach Golden) Date: Thu, 7 Oct 2010 15:10:02 +0100 Subject: [petsc-users] call petsc real/petsc complex in same application In-Reply-To: References: <20101007105200.GA26108@tchpc.tcd.ie> Message-ID: <20101007141002.GA27320@tchpc.tcd.ie> Hi Jed, > What is the relative size? Sorry. I didn't think about this question properly the first time you asked it. The relative sizes will vary between large real/smaller complex (worst case) and approximately equally sized systems. So we'll have to initially implement for the most common case, which may be PetscScalar=real. Thanks very much for your responses, Darach -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: not available URL: From hsharma.tgjobs at gmail.com Thu Oct 7 17:14:35 2010 From: hsharma.tgjobs at gmail.com (Harsh Sharma) Date: Thu, 7 Oct 2010 17:14:35 -0500 Subject: [petsc-users] MatShift and unassembled matrices Message-ID: Hi, I am creating a square matrix using MatCreateMPIDense, then zero-ing its entries using MatZeroEntries and finally using the MatShift operation to make it the identity matrix. 
Here's the code snippet that I'm using for doing this: Mat OP1; /* n1xn1 matrix */ MatCreateMPIDense(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,n1,n1,PETSC_NULL,&OP1); MatZeroEntries(OP1); MatShift(OP1,1.0); The error I get is: [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Object is in wrong state! [0]PETSC ERROR: Not for unassembled matrix! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: MatShift() line 110 in src/mat/utils/axpy.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ Apparently, the matrix OP1 is in unassembled state. All the examples that I have seen for MatShift() do not call any assembly routines before or after calling the MatShift() function. What am I doing wrong here? Thanks very much, Harsh -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Oct 7 21:16:07 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 7 Oct 2010 21:16:07 -0500 Subject: [petsc-users] MatShift and unassembled matrices In-Reply-To: References: Message-ID: <496B3F79-4FB9-4BBA-865C-6EBB7FB27E56@mcs.anl.gov> Just call MatAssemblyBegin/End() before making the call to MatShift(). Yes for dense matrices this is sort of silly (since dense matrices are essentially always assembled) but since it is needed for sparse matrices we use the same paradigm for all matrix types. Barry On Oct 7, 2010, at 5:14 PM, Harsh Sharma wrote: > Hi, > > I am creating a square matrix using MatCreateMPIDense, then > zero-ing its entries using MatZeroEntries and finally using the > MatShift operation to make it the identity matrix. Here's the code > snippet that I'm using for doing this: > > Mat OP1; /* n1xn1 matrix */ > > MatCreateMPIDense(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,n1,n1,PETSC_NULL,&OP1); > MatZeroEntries(OP1); > MatShift(OP1,1.0); > > The error I get is: > > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Object is in wrong state! > [0]PETSC ERROR: Not for unassembled matrix! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: MatShift() line 110 in src/mat/utils/axpy.c > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > > Apparently, the matrix OP1 is in unassembled state. All the > examples that I have seen for MatShift() do not call any > assembly routines before or after calling the MatShift() function. > > What am I doing wrong here? > > Thanks very much, > Harsh From pengxwang at hotmail.com Fri Oct 8 21:33:40 2010 From: pengxwang at hotmail.com (Peter Wang) Date: Fri, 8 Oct 2010 21:33:40 -0500 Subject: [petsc-users] (no subject) Message-ID: I am trying to modify the example code in {PETSc_Dir}\src\vec\vec\examples\tests\ex19f.F . Only three lines are added into the original code. However, if the three lines are added, there is error coming out when it is compiled. Why it cannot be compiled when the lines are added? I am using gfortran 4.4.3 and openMPI 1.3.2 and petsc 3.1-p5-v1. 
The error infomation is as following: ********************************************************* ex19f.F:29.5: call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) 1 Error: Non-numeric character in statement label at (1) ex19f.F:29.5: call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) 1 Error: Unclassifiable statement at (1) ex19f.F:30.5: call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) 1 Error: Non-numeric character in statement label at (1) ex19f.F:30.5: call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) 1 Error: Unclassifiable statement at (1) make: *** [ex19f.o] Error 1 ********************************************************* Following is the code I modified. The lines followed by !***************Added for MPI are added by me for MPI subroutine. ********************************************************* ! ! program main ! include 'mpif.h' #include "finclude/petscsys.h" #include "finclude/petscvec.h" ! ! This example demonstrates basic use of the PETSc Fortran interface ! to vectors. ! integer myid,numprocs,namelen,rc !***************Added for MPI PetscInt n PetscErrorCode ierr PetscTruth flg PetscScalar one,two,three,dot PetscReal norm,rdot Vec x,y,w n = 20 one = 1.0 two = 2.0 three = 3.0 call PetscInitialize(PETSC_NULL_CHARACTER,ierr) call PetscOptionsGetInt(PETSC_NULL_CHARACTER,'-n',n,flg,ierr) call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) !***************Added for MPI call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) !***************Added for MPI ! Create a vector, then duplicate it call VecCreate(PETSC_COMM_WORLD,x,ierr) call VecSetSizes(x,PETSC_DECIDE,n,ierr) call VecSetFromOptions(x,ierr) call VecDuplicate(x,y,ierr) call VecDuplicate(x,w,ierr) call VecSet(x,one,ierr) call VecSet(y,two,ierr) call VecDot(x,y,dot,ierr) rdot = PetscRealPart(dot) write(6,100) rdot 100 format('Result of inner product ',f10.4) call VecScale(x,two,ierr) call VecNorm(x,NORM_2,norm,ierr) write(6,110) norm 110 format('Result of scaling ',f10.4) call VecCopy(x,w,ierr) call VecNorm(w,NORM_2,norm,ierr) write(6,120) norm 120 format('Result of copy ',f10.4) call VecAXPY(y,three,x,ierr) call VecNorm(y,NORM_2,norm,ierr) write(6,130) norm 130 format('Result of axpy ',f10.4) call VecDestroy(x,ierr) call VecDestroy(y,ierr) call VecDestroy(w,ierr) call PetscFinalize(ierr) end -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Fri Oct 8 21:38:28 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 8 Oct 2010 21:38:28 -0500 Subject: [petsc-users] (no subject) In-Reply-To: References: Message-ID: Fortran fixed format requires that line not begin before the 7th column Barry On Oct 8, 2010, at 9:33 PM, Peter Wang wrote: > I am trying to modify the example code in {PETSc_Dir}\src\vec\vec\examples\tests\ex19f.F . Only three lines are added into the original code. However, if the three lines are added, there is error coming out when it is compiled. Why it cannot be compiled when the lines are added? I am using gfortran 4.4.3 and openMPI 1.3.2 and petsc 3.1-p5-v1. 
The error infomation is as following: > ********************************************************* > > > ex19f.F:29.5: > call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) > 1 > Error: Non-numeric character in statement label at (1) > ex19f.F:29.5: > call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) > 1 > Error: Unclassifiable statement at (1) > ex19f.F:30.5: > call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) > 1 > Error: Non-numeric character in statement label at (1) > ex19f.F:30.5: > call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) > 1 > Error: Unclassifiable statement at (1) > make: *** [ex19f.o] Error 1 > > ********************************************************* > Following is the code I modified. The lines followed by !***************Added for MPI are added by me for MPI subroutine. > ********************************************************* > ! > ! > program main > ! include 'mpif.h' > #include "finclude/petscsys.h" > #include "finclude/petscvec.h" > ! > ! This example demonstrates basic use of the PETSc Fortran interface > ! to vectors. > ! > > integer myid,numprocs,namelen,rc !***************Added for MPI > PetscInt n > PetscErrorCode ierr > PetscTruth flg > PetscScalar one,two,three,dot > PetscReal norm,rdot > Vec x,y,w > > n = 20 > one = 1.0 > two = 2.0 > three = 3.0 > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > call PetscOptionsGetInt(PETSC_NULL_CHARACTER,'-n',n,flg,ierr) > > call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) !***************Added for MPI > call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) !***************Added for MPI > > > > ! Create a vector, then duplicate it > call VecCreate(PETSC_COMM_WORLD,x,ierr) > call VecSetSizes(x,PETSC_DECIDE,n,ierr) > call VecSetFromOptions(x,ierr) > call VecDuplicate(x,y,ierr) > call VecDuplicate(x,w,ierr) > call VecSet(x,one,ierr) > call VecSet(y,two,ierr) > call VecDot(x,y,dot,ierr) > rdot = PetscRealPart(dot) > write(6,100) rdot > 100 format('Result of inner product ',f10.4) > call VecScale(x,two,ierr) > call VecNorm(x,NORM_2,norm,ierr) > write(6,110) norm > 110 format('Result of scaling ',f10.4) > call VecCopy(x,w,ierr) > call VecNorm(w,NORM_2,norm,ierr) > write(6,120) norm > 120 format('Result of copy ',f10.4) > call VecAXPY(y,three,x,ierr) > call VecNorm(y,NORM_2,norm,ierr) > write(6,130) norm > 130 format('Result of axpy ',f10.4) > call VecDestroy(x,ierr) > call VecDestroy(y,ierr) > call VecDestroy(w,ierr) > call PetscFinalize(ierr) > end > From pengxwang at hotmail.com Fri Oct 8 21:38:46 2010 From: pengxwang at hotmail.com (Peter Wang) Date: Fri, 8 Oct 2010 21:38:46 -0500 Subject: [petsc-users] error in compiling fortran code In-Reply-To: References: Message-ID: Sorry for forgetting the subject in previous email. I am trying to modify the example code in {PETSc_Dir}\src\vec\vec\examples\tests\ex19f.F . Only three lines are added into the original code. However, if the three lines are added, there is error coming out when it is compiled. Why it cannot be compiled when the lines are added? I am using gfortran 4.4.3 and openMPI 1.3.2 and petsc 3.1-p5-v1. 
The error infomation is as following: ********************************************************* ex19f.F:29.5: call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) 1 Error: Non-numeric character in statement label at (1) ex19f.F:29.5: call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) 1 Error: Unclassifiable statement at (1) ex19f.F:30.5: call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) 1 Error: Non-numeric character in statement label at (1) ex19f.F:30.5: call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) 1 Error: Unclassifiable statement at (1) make: *** [ex19f.o] Error 1 ********************************************************* Following is the code I modified. The lines followed by !***************Added for MPI are added by me for MPI subroutine. ********************************************************* ! ! program main ! include 'mpif.h' #include "finclude/petscsys.h" #include "finclude/petscvec.h" ! ! This example demonstrates basic use of the PETSc Fortran interface ! to vectors. ! integer myid,numprocs,namelen,rc !***************Added for MPI PetscInt n PetscErrorCode ierr PetscTruth flg PetscScalar one,two,three,dot PetscReal norm,rdot Vec x,y,w n = 20 one = 1.0 two = 2.0 three = 3.0 call PetscInitialize(PETSC_NULL_CHARACTER,ierr) call PetscOptionsGetInt(PETSC_NULL_CHARACTER,'-n',n,flg,ierr) call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) !***************Added for MPI call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) !***************Added for MPI ! Create a vector, then duplicate it call VecCreate(PETSC_COMM_WORLD,x,ierr) call VecSetSizes(x,PETSC_DECIDE,n,ierr) call VecSetFromOptions(x,ierr) call VecDuplicate(x,y,ierr) call VecDuplicate(x,w,ierr) call VecSet(x,one,ierr) call VecSet(y,two,ierr) call VecDot(x,y,dot,ierr) rdot = PetscRealPart(dot) write(6,100) rdot 100 format('Result of inner product ',f10.4) call VecScale(x,two,ierr) call VecNorm(x,NORM_2,norm,ierr) write(6,110) norm 110 format('Result of scaling ',f10.4) call VecCopy(x,w,ierr) call VecNorm(w,NORM_2,norm,ierr) write(6,120) norm 120 format('Result of copy ',f10.4) call VecAXPY(y,three,x,ierr) call VecNorm(y,NORM_2,norm,ierr) write(6,130) norm 130 format('Result of axpy ',f10.4) call VecDestroy(x,ierr) call VecDestroy(y,ierr) call VecDestroy(w,ierr) call PetscFinalize(ierr) end -------------- next part -------------- An HTML attachment was scrubbed... URL: From brtnfld at uiuc.edu Fri Oct 8 23:38:01 2010 From: brtnfld at uiuc.edu (Michael Scot Breitenfeld) Date: Fri, 08 Oct 2010 23:38:01 -0500 Subject: [petsc-users] (no subject) In-Reply-To: References: Message-ID: <4CAFF1A9.4060704@uiuc.edu> If using free-format use extension .F90 not .F On 10/08/2010 09:33 PM, Peter Wang wrote: > I am trying to modify the example code in > {PETSc_Dir}\src\vec\vec\examples\tests\ex19f.F . Only three lines are > added into the original code. However, if the three lines are added, > there is error coming out when it is compiled. Why it cannot be > compiled when the lines are added? I am using gfortran 4.4.3 and > openMPI 1.3.2 and petsc 3.1-p5-v1. 
The error infomation is as following: > ********************************************************* > > > ex19f.F:29.5: > call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) > 1 > Error: Non-numeric character in statement label at (1) > ex19f.F:29.5: > call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) > 1 > Error: Unclassifiable statement at (1) > ex19f.F:30.5: > call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) > 1 > Error: Non-numeric character in statement label at (1) > ex19f.F:30.5: > call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) > 1 > Error: Unclassifiable statement at (1) > make: *** [ex19f.o] Error 1 > > ********************************************************* > Following is the code I modified. The lines followed by > !***************Added for MPI are added by me for MPI subroutine. > ********************************************************* > ! > ! > program main > ! include 'mpif.h' > #include "finclude/petscsys.h" > #include "finclude/petscvec.h" > ! > ! This example demonstrates basic use of the PETSc Fortran interface > ! to vectors. > ! > > integer myid,numprocs,namelen,rc !***************Added for MPI > PetscInt n > PetscErrorCode ierr > PetscTruth flg > PetscScalar one,two,three,dot > PetscReal norm,rdot > Vec x,y,w > > n = 20 > one = 1.0 > two = 2.0 > three = 3.0 > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > call PetscOptionsGetInt(PETSC_NULL_CHARACTER,'-n',n,flg,ierr) > > call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) > !***************Added for MPI > call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) > !***************Added for MPI > > > > ! Create a vector, then duplicate it > call VecCreate(PETSC_COMM_WORLD,x,ierr) > call VecSetSizes(x,PETSC_DECIDE,n,ierr) > call VecSetFromOptions(x,ierr) > call VecDuplicate(x,y,ierr) > call VecDuplicate(x,w,ierr) > call VecSet(x,one,ierr) > call VecSet(y,two,ierr) > call VecDot(x,y,dot,ierr) > rdot = PetscRealPart(dot) > write(6,100) rdot > 100 format('Result of inner product ',f10.4) > call VecScale(x,two,ierr) > call VecNorm(x,NORM_2,norm,ierr) > write(6,110) norm > 110 format('Result of scaling ',f10.4) > call VecCopy(x,w,ierr) > call VecNorm(w,NORM_2,norm,ierr) > write(6,120) norm > 120 format('Result of copy ',f10.4) > call VecAXPY(y,three,x,ierr) > call VecNorm(y,NORM_2,norm,ierr) > write(6,130) norm > 130 format('Result of axpy ',f10.4) > call VecDestroy(x,ierr) > call VecDestroy(y,ierr) > call VecDestroy(w,ierr) > call PetscFinalize(ierr) > end > From j.alyn.roberts at gmail.com Sat Oct 9 08:11:58 2010 From: j.alyn.roberts at gmail.com (Jeremy Roberts) Date: Sat, 9 Oct 2010 09:11:58 -0400 Subject: [petsc-users] error in compiling fortran code In-Reply-To: References: Message-ID: It looks like a Fortran formatting error. Notice all the other "call" statements are in the 7th column (after the normally restricted 6 columns for f77 syntax). Matching that format or using "-ffree-form" in your Fortran flags should fix that problem. Regards, Jeremy On Fri, Oct 8, 2010 at 10:38 PM, Peter Wang wrote: > Sorry for forgetting the subject in previous email. > > > I am trying to modify the example code in > {PETSc_Dir}\src\vec\vec\examples\tests\ex19f.F . Only three lines are added > into the original code. However, if the three lines are added, there is > error coming out when it is compiled. Why it cannot be compiled when the > lines are added? I am using gfortran 4.4.3 and openMPI 1.3.2 and petsc > 3.1-p5-v1. 
The error infomation is as following: > ********************************************************* > > > ex19f.F:29.5: > call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) > 1 > Error: Non-numeric character in statement label at (1) > ex19f.F:29.5: > call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) > 1 > Error: Unclassifiable statement at (1) > ex19f.F:30.5: > call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) > 1 > Error: Non-numeric character in statement label at (1) > ex19f.F:30.5: > call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) > 1 > Error: Unclassifiable statement at (1) > make: *** [ex19f.o] Error 1 > > ********************************************************* > Following is the code I modified. The lines followed by > !***************Added for MPI are added by me for MPI subroutine. > ********************************************************* > ! > ! > program main > ! include 'mpif.h' > #include "finclude/petscsys.h" > #include "finclude/petscvec.h" > ! > ! This example demonstrates basic use of the PETSc Fortran interface > ! to vectors. > ! > > integer myid,numprocs,namelen,rc !***************Added for MPI > PetscInt n > PetscErrorCode ierr > PetscTruth flg > PetscScalar one,two,three,dot > PetscReal norm,rdot > Vec x,y,w > > n = 20 > one = 1.0 > two = 2.0 > three = 3.0 > call PetscInitialize(PETSC_NULL_CHARACTER,ierr) > call PetscOptionsGetInt(PETSC_NULL_CHARACTER,'-n',n,flg,ierr) > > call MPI_COMM_RANK(MPI_COMM_WORLD,myid,rc) !***************Added > for MPI > call MPI_COMM_SIZE(MPI_COMM_WORLD,numprocs,rc) !***************Added > for MPI > > > > ! Create a vector, then duplicate it > call VecCreate(PETSC_COMM_WORLD,x,ierr) > call VecSetSizes(x,PETSC_DECIDE,n,ierr) > call VecSetFromOptions(x,ierr) > call VecDuplicate(x,y,ierr) > call VecDuplicate(x,w,ierr) > call VecSet(x,one,ierr) > call VecSet(y,two,ierr) > call VecDot(x,y,dot,ierr) > rdot = PetscRealPart(dot) > write(6,100) rdot > 100 format('Result of inner product ',f10.4) > call VecScale(x,two,ierr) > call VecNorm(x,NORM_2,norm,ierr) > write(6,110) norm > 110 format('Result of scaling ',f10.4) > call VecCopy(x,w,ierr) > call VecNorm(w,NORM_2,norm,ierr) > write(6,120) norm > 120 format('Result of copy ',f10.4) > call VecAXPY(y,three,x,ierr) > call VecNorm(y,NORM_2,norm,ierr) > write(6,130) norm > 130 format('Result of axpy ',f10.4) > call VecDestroy(x,ierr) > call VecDestroy(y,ierr) > call VecDestroy(w,ierr) > call PetscFinalize(ierr) > end > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amal.ghamdi at kaust.edu.sa Tue Oct 12 03:21:33 2010 From: amal.ghamdi at kaust.edu.sa (Amal Alghamdi) Date: Tue, 12 Oct 2010 11:21:33 +0300 Subject: [petsc-users] access da local vector data array in petsc4py Message-ID: Dear All, I have a question regarding petsc4py. I'd like to access the data array in a local vector generated from a one-dimension da, which has dof = 1. I used q = localVec.getArray(). However, q shape now does not take into account dof which is one in my example. I always need to reshape q to match the intended dof. Is there a way to do that automatically? Thank you very much. Amal -------------- next part -------------- An HTML attachment was scrubbed... URL: From dalcinl at gmail.com Tue Oct 12 06:05:02 2010 From: dalcinl at gmail.com (Lisandro Dalcin) Date: Tue, 12 Oct 2010 08:05:02 -0300 Subject: [petsc-users] access da local vector data array in petsc4py In-Reply-To: References: Message-ID: On 12 October 2010 05:21, Amal Alghamdi wrote: > Dear All, > I have a question regarding petsc4py. 
I'd like to access the data array in a > local vector generated from a one-dimension da, which has dof = 1. I used ?q > = localVec.getArray(). However, q shape now does not take into account dof > which is one in my example. I always need to reshape q to match the intended > dof. Is there a way to do that automatically? > Thank you very much. > Amal Almost all functions and methods in petsc4py treat arrays as linear data, I mean, unidimensional arrays. As Vec.getArray() knows nothing about the DA, there is no way to get the DA shape. Note that recently I've added DA.getVecArray() to provide some support for global indexing, this new thing is not finished, I want to improve interoperability with regular numpy arrays, the functionality you want could be added there. Other possibility would be to add a method to Vec, let say Vec.getArrayBlock(), that return an array reshaped to (n/bs, bs) where "n" is the local vec size and "bs" is the block size (should equal ndof for Vec's originating from DA). What do you think? In the mean time, you could use an utility function like the one below: def get_array_block(vec): n = vec.getLocalSize() bs = vec.getBlockSize() a = numpy.asarray(vec) a.shape = (n//bs, bs) return a -- Lisandro Dalcin --------------- CIMEC (INTEC/CONICET-UNL) Predio CONICET-Santa Fe Colectora RN 168 Km 472, Paraje El Pozo Tel: +54-342-4511594 (ext 1011) Tel/Fax: +54-342-4511169 From klaus.zimmermann at physik.uni-freiburg.de Tue Oct 12 12:07:21 2010 From: klaus.zimmermann at physik.uni-freiburg.de (Klaus Zimmermann) Date: Tue, 12 Oct 2010 19:07:21 +0200 Subject: [petsc-users] Is SIPs released? Message-ID: <4CB495C9.8030305@physik.uni-freiburg.de> Hi all, on various places I read about SIPs: shift-and-invert parallel spectral transformations. I was unable to locate a homepage or download though. Is this available as a library? It would be very useful for me... Thanks for any information in advance! Klaus Zimmermann From art.fountain at gmail.com Tue Oct 12 13:12:54 2010 From: art.fountain at gmail.com (Arturo Fountain) Date: Tue, 12 Oct 2010 12:12:54 -0600 Subject: [petsc-users] Solves with valgrind, not without Message-ID: I am using petsc-3.1-p5 to solve a system of equations at multiple timesteps. The LHS matrix is the same at each timestep although the RHS changes. Strange thing is, the system solves when using valgrind but not without. This is the case with every solver I have tried and on multiple machines. I am most interested in cg/cgne although I have used gmres in the past when the the condition of the matrix was questionable (it is no longer questionable). In either case the sytem will converge for one and only one time step when calling: mpiexec.uni -n 1 ./MONO -indir InputAniso -outdir OutputAniso -ksp_type gmres -pc_type jacobi -info -ksp_rtol 1e-8 -ksp_initial_guess_nonzero true however, calling the same program with valgrind: G_SLICE=always-malloc G_DEBUG=gc-friendly mpiexec.uni -n 1 valgrind -v --leak-check=full --show-reachable=yes --track-origins=yes ./MONO -indir InputAniso -outdir OutputAniso -ksp_type gmres -pc_type jacobi -info -ksp_rtol 1e-8 -ksp_initial_guess_nonzero true it will solve many timesteps. (at least 5). 
The error I receive (again, only without valgrind) is: [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374783 [0] PetscCommDuplicate(): returning tag 2147483571 [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 5, Mon Sep 27 11:51:54 CDT 2010 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./MONO on a linux-gnu named duality by xerxez Tue Oct 12 10:26:18 2010 [0]PETSC ERROR: Libraries linked from /home/xerxez/lib/petsc-3.1-p5/linux-gnu-c-debug/lib [0]PETSC ERROR: Configure run at Sat Oct 9 19:15:29 2010 [0]PETSC ERROR: Configure options --download-f-blas-lapack=1 --download-mpich=1 --download-blacs=1 --download-hypre=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0[unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 I have read up on http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signaland don't see anything wrong or anything different in the calls. After receiving this error I tried using valgrind but the error simply goes away each and every time I call it with valgrind. Has anyone seen such a problem before? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Oct 12 13:36:54 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 12 Oct 2010 13:36:54 -0500 Subject: [petsc-users] Solves with valgrind, not without In-Reply-To: References: Message-ID: <24424661-83D7-4B2C-B812-4ADE62C1132C@mcs.anl.gov> You'll need to build a debug version of the libraries and run in the debugger; in the debugger you will need to have it catch floating point exceptions to stop when the problem appears (that is debugger dependent). Based on the difference in behavior I am guessing the problem is memory corruption. On my mac I run valgrind with the option -q --tool=memcheck --dsymutil=yes --num-callers=20 Barry On Oct 12, 2010, at 1:12 PM, Arturo Fountain wrote: > I am using petsc-3.1-p5 to solve a system of equations at multiple timesteps. The LHS matrix is the same at each timestep although the RHS changes. 
Strange thing is, the system solves when using valgrind but not without. > > This is the case with every solver I have tried and on multiple machines. I am most interested in cg/cgne although I have used gmres in the past when the the condition of the matrix was questionable (it is no longer questionable). In either case the sytem will converge for one and only one time step when calling: > > mpiexec.uni -n 1 ./MONO -indir InputAniso -outdir OutputAniso -ksp_type gmres -pc_type jacobi -info -ksp_rtol 1e-8 -ksp_initial_guess_nonzero true > > however, calling the same program with valgrind: > > G_SLICE=always-malloc G_DEBUG=gc-friendly mpiexec.uni -n 1 valgrind -v --leak-check=full --show-reachable=yes --track-origins=yes ./MONO -indir InputAniso -outdir OutputAniso -ksp_type gmres -pc_type jacobi -info -ksp_rtol 1e-8 -ksp_initial_guess_nonzero true > > it will solve many timesteps. (at least 5). The error I receive (again, only without valgrind) is: > > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero > [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger > [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors > [0]PETSC ERROR: likely location of problem given in stack below > [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374783 > [0] PetscCommDuplicate(): returning tag 2147483571 > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, > [0]PETSC ERROR: INSTEAD the line number of the start of the function > [0]PETSC ERROR: is given. > [0]PETSC ERROR: --------------------- Error Message ------------------------------------ > [0]PETSC ERROR: Signal received! > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 5, Mon Sep 27 11:51:54 CDT 2010 > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > [0]PETSC ERROR: See docs/index.html for manual pages. > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: ./MONO on a linux-gnu named duality by xerxez Tue Oct 12 10:26:18 2010 > [0]PETSC ERROR: Libraries linked from /home/xerxez/lib/petsc-3.1-p5/linux-gnu-c-debug/lib > [0]PETSC ERROR: Configure run at Sat Oct 9 19:15:29 2010 > [0]PETSC ERROR: Configure options --download-f-blas-lapack=1 --download-mpich=1 --download-blacs=1 --download-hypre=1 > [0]PETSC ERROR: ------------------------------------------------------------------------ > [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0[unset]: aborting job: > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > I have read up on http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal and don't see anything wrong or anything different in the calls. After receiving this error I tried using valgrind but the error simply goes away each and every time I call it with valgrind. > > Has anyone seen such a problem before? 
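For reference, the floating-point trap suggested above can also be armed directly in the application, so the run stops at the exact instruction that produces the bad value rather than at MPI_Abort. A minimal sketch, assuming Linux/glibc where feenableexcept() is available as a GNU extension (other platforms need a different mechanism); none of this is taken from the code discussed in this thread:

#define _GNU_SOURCE
#include <fenv.h>

int main(int argc, char **argv)
{
  /* Deliver SIGFPE on divide-by-zero, invalid operation, or overflow,
     instead of silently producing Inf/NaN that only causes trouble
     later inside the Krylov solve. */
  feenableexcept(FE_DIVBYZERO | FE_INVALID | FE_OVERFLOW);
  /* ... PetscInitialize(), assembly, KSPSolve(), PetscFinalize() ... */
  return 0;
}

Run under gdb, the backtrace then points at the offending division or sqrt.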
From jed at 59A2.org Tue Oct 12 13:37:59 2010 From: jed at 59A2.org (Jed Brown) Date: Tue, 12 Oct 2010 20:37:59 +0200 Subject: [petsc-users] Solves with valgrind, not without In-Reply-To: References: Message-ID: Try running it in a debugger with -fp_trap. Hmm, I don't remember if the -fp_trap patch made it into 3.1, you might have to enable it manually. See "man feenableexcept" on systems with glibc, or _MM_SET_EXCEPTION_MASK on other x86/x64. Jed On Oct 12, 2010 8:12 PM, "Arturo Fountain" wrote: I am using petsc-3.1-p5 to solve a system of equations at multiple timesteps. The LHS matrix is the same at each timestep although the RHS changes. Strange thing is, the system solves when using valgrind but not without. This is the case with every solver I have tried and on multiple machines. I am most interested in cg/cgne although I have used gmres in the past when the the condition of the matrix was questionable (it is no longer questionable). In either case the sytem will converge for one and only one time step when calling: mpiexec.uni -n 1 ./MONO -indir InputAniso -outdir OutputAniso -ksp_type gmres -pc_type jacobi -info -ksp_rtol 1e-8 -ksp_initial_guess_nonzero true however, calling the same program with valgrind: G_SLICE=always-malloc G_DEBUG=gc-friendly mpiexec.uni -n 1 valgrind -v --leak-check=full --show-reachable=yes --track-origins=yes ./MONO -indir InputAniso -outdir OutputAniso -ksp_type gmres -pc_type jacobi -info -ksp_rtol 1e-8 -ksp_initial_guess_nonzero true it will solve many timesteps. (at least 5). The error I receive (again, only without valgrind) is: [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Caught signal number 8 FPE: Floating Point Exception,probably divide by zero [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger [0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors [0]PETSC ERROR: likely location of problem given in stack below [0]PETSC ERROR: --------------------- Stack Frames ------------------------------------ [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 -2080374783 [0] PetscCommDuplicate(): returning tag 2147483571 [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available, [0]PETSC ERROR: INSTEAD the line number of the start of the function [0]PETSC ERROR: is given. [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 5, Mon Sep 27 11:51:54 CDT 2010 [0]PETSC ERROR: See docs/changes/index.html for recent updates. [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. [0]PETSC ERROR: See docs/index.html for manual pages. 
[0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: ./MONO on a linux-gnu named duality by xerxez Tue Oct 12 10:26:18 2010 [0]PETSC ERROR: Libraries linked from /home/xerxez/lib/petsc-3.1-p5/linux-gnu-c-debug/lib [0]PETSC ERROR: Configure run at Sat Oct 9 19:15:29 2010 [0]PETSC ERROR: Configure options --download-f-blas-lapack=1 --download-mpich=1 --download-blacs=1 --download-hypre=1 [0]PETSC ERROR: ------------------------------------------------------------------------ [0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0[unset]: aborting job: application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 I have read up on http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signaland don't see anything wrong or anything different in the calls. After receiving this error I tried using valgrind but the error simply goes away each and every time I call it with valgrind. Has anyone seen such a problem before? -------------- next part -------------- An HTML attachment was scrubbed... URL: From art.fountain at gmail.com Tue Oct 12 14:28:11 2010 From: art.fountain at gmail.com (Arturo Fountain) Date: Tue, 12 Oct 2010 13:28:11 -0600 Subject: [petsc-users] Solves with valgrind, not without In-Reply-To: <24424661-83D7-4B2C-B812-4ADE62C1132C@mcs.anl.gov> References: <24424661-83D7-4B2C-B812-4ADE62C1132C@mcs.anl.gov> Message-ID: Indeed using a debugging version of PETSc and a debugger (gdb) got me to my error. Thank you very much. Art On Tue, Oct 12, 2010 at 12:36 PM, Barry Smith wrote: > > You'll need to build a debug version of the libraries and run in the > debugger; in the debugger you will need to have it catch floating point > exceptions to stop when the problem appears (that is debugger dependent). > Based on the difference in behavior I am guessing the problem is memory > corruption. > On my mac I run valgrind with the option -q --tool=memcheck --dsymutil=yes > --num-callers=20 > > Barry > > On Oct 12, 2010, at 1:12 PM, Arturo Fountain wrote: > > > I am using petsc-3.1-p5 to solve a system of equations at multiple > timesteps. The LHS matrix is the same at each timestep although the RHS > changes. Strange thing is, the system solves when using valgrind but not > without. > > > > This is the case with every solver I have tried and on multiple machines. > I am most interested in cg/cgne although I have used gmres in the past when > the the condition of the matrix was questionable (it is no longer > questionable). In either case the sytem will converge for one and only one > time step when calling: > > > > mpiexec.uni -n 1 ./MONO -indir InputAniso -outdir OutputAniso -ksp_type > gmres -pc_type jacobi -info -ksp_rtol 1e-8 -ksp_initial_guess_nonzero true > > > > however, calling the same program with valgrind: > > > > G_SLICE=always-malloc G_DEBUG=gc-friendly mpiexec.uni -n 1 valgrind -v > --leak-check=full --show-reachable=yes --track-origins=yes ./MONO -indir > InputAniso -outdir OutputAniso -ksp_type gmres -pc_type jacobi -info > -ksp_rtol 1e-8 -ksp_initial_guess_nonzero true > > > > it will solve many timesteps. (at least 5). 
The error I receive (again, > only without valgrind) is: > > > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Caught signal number 8 FPE: Floating Point > Exception,probably divide by zero > > [0]PETSC ERROR: Try option -start_in_debugger or > -on_error_attach_debugger > > [0]PETSC ERROR: or see > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal[0]PETSCERROR: or try > http://valgrind.org on GNU/linux and Apple Mac OS X to find memory > corruption errors > > [0]PETSC ERROR: likely location of problem given in stack below > > [0]PETSC ERROR: --------------------- Stack Frames > ------------------------------------ > > [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850689 > -2080374783 > > [0] PetscCommDuplicate(): returning tag 2147483571 > > [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not > available, > > [0]PETSC ERROR: INSTEAD the line number of the start of the > function > > [0]PETSC ERROR: is given. > > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > > [0]PETSC ERROR: Signal received! > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 5, Mon Sep 27 11:51:54 > CDT 2010 > > [0]PETSC ERROR: See docs/changes/index.html for recent updates. > > [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting. > > [0]PETSC ERROR: See docs/index.html for manual pages. > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: ./MONO on a linux-gnu named duality by xerxez Tue Oct 12 > 10:26:18 2010 > > [0]PETSC ERROR: Libraries linked from > /home/xerxez/lib/petsc-3.1-p5/linux-gnu-c-debug/lib > > [0]PETSC ERROR: Configure run at Sat Oct 9 19:15:29 2010 > > [0]PETSC ERROR: Configure options --download-f-blas-lapack=1 > --download-mpich=1 --download-blacs=1 --download-hypre=1 > > [0]PETSC ERROR: > ------------------------------------------------------------------------ > > [0]PETSC ERROR: User provided function() line 0 in unknown directory > unknown file > > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0[unset]: > aborting job: > > application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0 > > > > I have read up on > http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signaland don't see anything wrong or anything different in the calls. After > receiving this error I tried using valgrind but the error simply goes away > each and every time I call it with valgrind. > > > > Has anyone seen such a problem before? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hzhang at mcs.anl.gov Tue Oct 12 22:53:28 2010 From: hzhang at mcs.anl.gov (Hong Zhang) Date: Tue, 12 Oct 2010 22:53:28 -0500 Subject: [petsc-users] Is SIPs released? In-Reply-To: <4CB495C9.8030305@physik.uni-freiburg.de> References: <4CB495C9.8030305@physik.uni-freiburg.de> Message-ID: Dear Klaus: SIPs was developed from a project in material simulation more than 8 years ago. Since then, both petsc and slepc have several new releases while SIPs lacks resource for its update and further development. Thus far, I was the only SIPs coder, with obligation on several funded demanding projects, plus teaching and student advising. The current SIPs code remains as "research code" and fails to work with the latest version of petsc and slepc. 
Until I'm able to devote a period of concentrated effort to its update, it cannot be used. I attempted to update it to the current petsc/slepc release few times, but failed to accomplish due to other tasks. I'm sorry that I do not have anything to offer at this time. I'll let you know when SIPs is updated to be usable. Thanks for your interest, Hong > ?Hi all, > > on various places I read about SIPs: shift-and-invert parallel spectral > transformations. > I was unable to locate a homepage or download though. Is this available as a > library? > It would be very useful for me... > > Thanks for any information in advance! > Klaus Zimmermann > From luke.bloy at gmail.com Wed Oct 13 11:56:57 2010 From: luke.bloy at gmail.com (Luke Bloy) Date: Wed, 13 Oct 2010 12:56:57 -0400 Subject: [petsc-users] memory leaks Message-ID: <4CB5E4D9.5080606@gmail.com> Hi, I've had some issues with petsc and slepc recently where functionality would stop working following a recompilation of my code, after unrelated code was changed. This suggested to me a memory leak somewhere. After some investigation, it seems that there is a leak in petscInitialize, seemingly coming from openmpi. I'm attaching the output of valgrind and a basic executable calling petsciInitialize() and petscFinalize. I'm running on this an ubuntu 10.04 machine with petsc 3.0.0 and openMPI (1.4.1) installed from the repositiories. Although this problem is also evident on machines with petsc(3.0.0) and mpi(1.3) installed from source. Is this a problem others are seeing? what is a stable combination of petsc and mpi? how best should i proceed in tracking this down. Thanks for the input. Luke -------------- next part -------------- A non-text attachment was scrubbed... Name: petscInitTest.cxx Type: text/x-c++src Size: 303 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: petscValgrind.log Type: text/x-log Size: 2039 bytes Desc: not available URL: From knepley at gmail.com Wed Oct 13 12:07:41 2010 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 13 Oct 2010 12:07:41 -0500 Subject: [petsc-users] memory leaks In-Reply-To: <4CB5E4D9.5080606@gmail.com> References: <4CB5E4D9.5080606@gmail.com> Message-ID: On Wed, Oct 13, 2010 at 11:56 AM, Luke Bloy wrote: > Hi, > > I've had some issues with petsc and slepc recently where functionality > would stop working following a recompilation of my code, after unrelated > code was changed. This suggested to me a memory leak somewhere. > 1) This error does not describe a leak. 2) This is definitely in OpenMPI, during MPI_Init(). I would send it to them 3) In order for valgrind to be more useful, run this using a debugging executable so we can see symbols 4) I do not get this on my machine. It looks like OpenMPI trying to be smart about multi-socket machines. Matt After some investigation, it seems that there is a leak in petscInitialize, > seemingly coming from openmpi. I'm attaching the output of valgrind and a > basic executable calling petsciInitialize() and petscFinalize. I'm running > on this an ubuntu 10.04 machine with petsc 3.0.0 and openMPI (1.4.1) > installed from the repositiories. Although this problem is also evident on > machines with petsc(3.0.0) and mpi(1.3) installed from source. > > Is this a problem others are seeing? what is a stable combination of petsc > and mpi? how best should i proceed in tracking this down. > > Thanks for the input. 
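To confirm point 2 above, that the reported blocks really are allocated inside MPI_Init() and not by PETSc or the application, the same valgrind command can be pointed at a bare MPI program. A minimal sketch, not taken from Luke's code:

#include <mpi.h>

int main(int argc, char **argv)
{
  /* If valgrind reports the same blocks here, they belong to the
     Open MPI installation itself and should be reported or suppressed
     there rather than hunted in the PETSc/SLEPc application. */
  MPI_Init(&argc, &argv);
  MPI_Finalize();
  return 0;
}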
> Luke -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Wed Oct 13 12:10:07 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 13 Oct 2010 19:10:07 +0200 Subject: [petsc-users] memory leaks In-Reply-To: <4CB5E4D9.5080606@gmail.com> References: <4CB5E4D9.5080606@gmail.com> Message-ID: On Wed, Oct 13, 2010 at 18:56, Luke Bloy wrote: > Hi, > > I've had some issues with petsc and slepc recently where functionality would > stop working following a recompilation of my code, after unrelated code was > changed. This suggested to me a memory leak somewhere. > > After some investigation, it seems that there is a leak in petscInitialize, > seemingly coming from openmpi. I'm attaching the output of valgrind and a > basic executable calling petsciInitialize() and petscFinalize. I'm running > on this an ubuntu 10.04 machine with petsc 3.0.0 and openMPI (1.4.1) > installed from the repositiories. Although this problem is also evident on > machines with petsc(3.0.0) and mpi(1.3) ?installed from source. Is it possible that you upgraded Open MPI without rebuilding PETSc? You can change affinity settings through mpiexec, but this looks like an MPI issue. What happens if you just run a plain MPI program (without PETSc)? The address of 0x0 is what concerns me, there are a few places where buffers are mis-marked as uninitialized, uninitialized memory is used in a harmless way, or memory is (intentionally) leaked. While I like a lot of things about Open MPI, I wish they would fix these spurious warnings so that we can debug application code without that noise. I have two MPI installations and switch to MPICH2 if I need it to be valgrind-clean. Note that MPICH2 handles communicators differently so (by default) you won't see anything about certain leaked communicators. I also have a few valgrind rules so the most common Open MPI noise is suppressed. The real bug you are hunting is almost certainly elsewhere. Jed From jed at 59A2.org Wed Oct 13 12:11:31 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 13 Oct 2010 19:11:31 +0200 Subject: [petsc-users] memory leaks In-Reply-To: References: <4CB5E4D9.5080606@gmail.com> Message-ID: On Wed, Oct 13, 2010 at 19:07, Matthew Knepley wrote: > 3) In order for valgrind to be more useful, run this using a debugging > executable so we can see symbols ==7052== by 0x409160: main (petscInitTest.cxx:11) His executable is built with debug symbols. Jed From knepley at gmail.com Wed Oct 13 12:12:54 2010 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 13 Oct 2010 12:12:54 -0500 Subject: [petsc-users] memory leaks In-Reply-To: References: <4CB5E4D9.5080606@gmail.com> Message-ID: On Wed, Oct 13, 2010 at 12:11 PM, Jed Brown wrote: > On Wed, Oct 13, 2010 at 19:07, Matthew Knepley wrote: > > 3) In order for valgrind to be more useful, run this using a debugging > > executable so we can see symbols > > ==7052== by 0x409160: main (petscInitTest.cxx:11) > > His executable is built with debug symbols. I was talking about the OpenMPI, but I guess that is from packages. Just used to building everything myself. Matt > > Jed -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. 
-- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Wed Oct 13 12:16:28 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 13 Oct 2010 19:16:28 +0200 Subject: [petsc-users] memory leaks In-Reply-To: References: <4CB5E4D9.5080606@gmail.com> Message-ID: On Wed, Oct 13, 2010 at 19:12, Matthew Knepley wrote: > I was talking about the OpenMPI, but I guess that is from packages. Just > used > to building everything myself. Yeah, the package is stripped, but the symbols don't help in this case because it's due to a somewhat intentional decision from the Open MPI devs (might be able to track down the last technical discussion of this, but supposedly they have a "reason" for leaving that warning in by default). But it's not hard to detect that the program is running under valgrind, so I think they should definitely fix it in that case. Jed From lizs at mail.uc.edu Mon Oct 18 16:51:57 2010 From: lizs at mail.uc.edu (Li, Zhisong (lizs)) Date: Mon, 18 Oct 2010 21:51:57 +0000 Subject: [petsc-users] 2D domain mapping to 3D Message-ID: <88D7E3BB7E1960428303E760100374511AA9F272@BL2PRD0103MB052.prod.exchangelabs.com> Hi, Petsc Team, Using "DACreate2d" and "DACreate3d", I wonder is it possible to create a global 2D (m x n) structural domain correctly mapping to a global 3D (m x n x q) structural domain on each process in a parallel computation? If not, how to achieve this? Thank you. Zhisong Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Mon Oct 18 16:58:24 2010 From: jed at 59A2.org (Jed Brown) Date: Mon, 18 Oct 2010 23:58:24 +0200 Subject: [petsc-users] 2D domain mapping to 3D In-Reply-To: <88D7E3BB7E1960428303E760100374511AA9F272@BL2PRD0103MB052.prod.exchangelabs.com> References: <88D7E3BB7E1960428303E760100374511AA9F272@BL2PRD0103MB052.prod.exchangelabs.com> Message-ID: On Mon, Oct 18, 2010 at 23:51, Li, Zhisong (lizs) wrote: > Hi, Petsc Team, > > Using "DACreate2d" and "DACreate3d", I wonder is it possible to create a > global 2D (m x n) structural domain correctly mapping to a global 3D (m x n > x q) structural domain on each process in a parallel computation? If not, > how to achieve this? > Yes, this is possible, but note that you will have to insist via the "p" parameter to DACreate3d that the domain not be partitioned in the z-direction. There is a concrete example in src/snes/examples/tutorials/ex48.c, but it is probably doing more than you need. If you need to match an arbitrary DA, then you can use the "lx" and "ly" parameters. Jed -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Oct 18 16:59:25 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 18 Oct 2010 16:59:25 -0500 Subject: [petsc-users] 2D domain mapping to 3D In-Reply-To: <88D7E3BB7E1960428303E760100374511AA9F272@BL2PRD0103MB052.prod.exchangelabs.com> References: <88D7E3BB7E1960428303E760100374511AA9F272@BL2PRD0103MB052.prod.exchangelabs.com> Message-ID: <24EAF8BF-B583-494C-AB75-6A12B4E10EE9@mcs.anl.gov> On Oct 18, 2010, at 4:51 PM, Li, Zhisong (lizs) wrote: > Hi, Petsc Team, > > Using "DACreate2d" and "DACreate3d", I wonder is it possible to create a global 2D (m x n) structural domain correctly mapping to a global 3D (m x n x q) structural domain on each process in a parallel computation? If not, how to achieve this? I do not understand what you mean. Please give a lot more details of what you want to do. Barry > > Thank you. 
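As a concrete illustration of the approach Jed describes above (match the 2D DA's ownership ranges and force p = 1 so the 3D DA is never split in z), here is a sketch against the PETSc 3.1-era DA interface. It is assumed to sit inside a function returning PetscErrorCode, after PetscInitialize(); the grid sizes 64 x 64 x 10 and the variable names are made up for illustration:

  DA             da2d, da3d;
  PetscInt       m = 64, n = 64, q = 10, dof = 1, sw = 1, px, py;
  const PetscInt *lx, *ly;
  PetscErrorCode ierr;

  ierr = DACreate2d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
                    m, n, PETSC_DECIDE, PETSC_DECIDE, dof, sw,
                    PETSC_NULL, PETSC_NULL, &da2d); CHKERRQ(ierr);
  /* recover the process grid and per-process ownership ranges of the 2D DA */
  ierr = DAGetInfo(da2d, 0, 0, 0, 0, &px, &py, 0, 0, 0, 0, 0); CHKERRQ(ierr);
  ierr = DAGetOwnershipRanges(da2d, &lx, &ly, PETSC_NULL); CHKERRQ(ierr);
  /* p = 1: each process owns the full z-extent, so its (i,j) footprint in
     the 3D DA coincides with what it owns in the 2D DA */
  ierr = DACreate3d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
                    m, n, q, px, py, 1, dof, sw,
                    lx, ly, PETSC_NULL, &da3d); CHKERRQ(ierr);

Passing lx and ly explicitly is what guarantees the match even when PETSC_DECIDE would otherwise be free to choose a different split.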
> > > Zhisong Li > > From vedaprakashsubramanian at gmail.com Tue Oct 19 00:45:35 2010 From: vedaprakashsubramanian at gmail.com (vedaprakash subramanian) Date: Mon, 18 Oct 2010 23:45:35 -0600 Subject: [petsc-users] What is the difference between MatMult and KSP_MatMult Message-ID: Can anyone tell the difference between MatMult() and KSP_MatMult(). Can I use MatMult() inside the function KSPCreate_CGSTAB(KSP ksp) ---vedaprakash -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Tue Oct 19 04:24:13 2010 From: jed at 59A2.org (Jed Brown) Date: Tue, 19 Oct 2010 11:24:13 +0200 Subject: [petsc-users] What is the difference between MatMult and KSP_MatMult In-Reply-To: References: Message-ID: On Tue, Oct 19, 2010 at 07:45, vedaprakash subramanian < vedaprakashsubramanian at gmail.com> wrote: > Can anyone tell the difference between MatMult() and KSP_MatMult(). > If you're implementing a new KSP, it's worth reading the source: #define KSP_MatMult(ksp,A,x,y) (!ksp->transpose_solve) ? MatMult(A,x,y) : MatMultTranspose(A,x,y) > Can I use MatMult() inside the function KSPCreate_CGSTAB(KSP ksp) > You should not be using MatMult in KSPCreate_XXX because the matrix may not exist yet. Whether to use KSP_MatMult depends on how you want to handle transpose solves. Note that many KSPs get away withh KSP_PCApplyBAorAB. Jed -------------- next part -------------- An HTML attachment was scrubbed... URL: From rabartl at sandia.gov Tue Oct 19 19:35:58 2010 From: rabartl at sandia.gov (Bartlett, Roscoe A) Date: Tue, 19 Oct 2010 18:35:58 -0600 Subject: [petsc-users] Final Notice: Survey about Software Practices in Computational Science In-Reply-To: <86D2E9E7B110124E919E891A9070CD7015E8FADA91@ES02SNLNT.srn.sandia.gov> References: <86D2E9E7B110124E919E891A9070CD7015E8FADA91@ES02SNLNT.srn.sandia.gov> Message-ID: <9C5EDABC60AD90488D506008E9277E4A1C628FB3A5@ES02SNLNT.srn.sandia.gov> Last reminder about software development survey ... The survey will be closed out after October 31, 2010. Hello, Dr. Roscoe Bartlett, Sandia National Laboratory, Dr. Jeffrey Carver, University of Alabama, and Dr. Lorin Hochstein, University of Southern California, are conducting a survey of software development practices among computational scientists. This survey seeks to understand current software development practices and identify areas of need. The survey should take approximately 15 minutes to complete. The survey can be accessed at: https://spreadsheets.google.com/viewform?hl=en&formkey=dGZwR1BfQ2NiNGh6SWt4ZjBCTnFoVmc6MQ#gid=0 This survey has been approved by The University of Alabama IRB board. Thanks, - Roscoe ----------------------------------------------------------------------- Dr. Roscoe A. Bartlett, PhD Sandia National Laboratories Department of Optimization and Uncertainty Estimation Trilinos Software Engineering Technologies and Integration Lead Phone: (505) 844-5097 Website: www.cs.sandia.gov/~rabartl/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.hauffe at tu-dresden.de Thu Oct 21 03:59:34 2010 From: andreas.hauffe at tu-dresden.de (Andreas Hauffe) Date: Thu, 21 Oct 2010 10:59:34 +0200 Subject: [petsc-users] Bug when multipling a SBAIJ matrix with a vector Message-ID: <201010211059.34454.andreas.hauffe@tu-dresden.de> Hi, I think there is a bug when multiplying an SBAIJ matrix with a vector, if the matrix has a zero/missing row. 
I tried to write a small example: |x x| |1| = |0| |x 1| |x| |0| The result since 3.1 is: |x x| |1| = |1| |x 1| |x| |0| I add the fortran code: program main implicit none #include "finclude/petscsys.h" #include "finclude/petscvec.h" #include "finclude/petscvec.h90" #include "finclude/petscmat.h" #include "finclude/petscmat.h90" Mat :: KaaS Vec :: v0,y PetscInt :: m PetscInt :: bs PetscErrorCode :: ierr call petscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr) m = 2 bs = 1 call MatCreate(PETSC_COMM_WORLD,KaaS,ierr); CHKERRQ(ierr) call MatSetSizes(KaaS,PETSC_DECIDE,PETSC_DECIDE,m,m,ierr); CHKERRQ(ierr) call MatSetType(KaaS,MATSEQSBAIJ,ierr); CHKERRQ(ierr) ! call MatSetType(KaaS,MATAIJ,ierr); CHKERRQ(ierr) call MatSetFromOptions(KaaS,ierr); CHKERRQ(ierr) ! call MatSetValue(KaaS, 0, 0, 0.d0, ADD_VALUES, ierr); CHKERRQ(ierr) call MatSetValue(KaaS, 1, 1, 1.d0, ADD_VALUES, ierr); CHKERRQ(ierr) call MatAssemblyBegin(KaaS,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr) call MatAssemblyEnd (KaaS,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr) call MatGetVecs(KaaS,y,v0,ierr); CHKERRQ(ierr) call VecSetValue(v0,0,1.D0,INSERT_VALUES,ierr); CHKERRQ(ierr) call MatMult(KaaS,v0,y,ierr); CHKERRQ(ierr) call VecView(y,PETSC_NULL_OBJECT,ierr); CHKERRQ(ierr) call VecDestroy(y,ierr); CHKERRQ(ierr) call VecDestroy(v0,ierr); CHKERRQ(ierr) call MatDestroy(KaaS,ierr); CHKERRQ(ierr) call petscFinalize(ierr) end program main best -- Andreas Hauffe ---------------------------------------------------------------------------------------------------- Technische Universit?t Dresden Institut f?r Luft- und Raumfahrttechnik / Institute of Aerospace Engineering Lehrstuhl f?r Luftfahrzeugtechnik / Chair of Aircraft Engineering D-01062 Dresden Germany phone : (++49)351 463 38496 fax : (++49)351 463 37263 mail : andreas.hauffe at tu-dresden.de Website : http://tu-dresden.de/mw/ilr/lft ---------------------------------------------------------------------------------------------------- From jed at 59a2.org Thu Oct 21 07:24:59 2010 From: jed at 59a2.org (Jed Brown) Date: Thu, 21 Oct 2010 14:24:59 +0200 Subject: [petsc-users] Bug when multipling a SBAIJ matrix with a vector In-Reply-To: <201010211059.34454.andreas.hauffe@tu-dresden.de> References: <201010211059.34454.andreas.hauffe@tu-dresden.de> Message-ID: The SBAIJ implementation requires that the diagonal entry exists. If you only have a few rows of all zeros, then just place explicit zeros there, otherwise use AIJ or BAIJ both of which work fine with completely empty rows (if your system is so sparse that you don't want to store one explicit zero per row, then the non-symmetric storage will probably also be faster. Aren't you getting an error in MatMult_SeqSBAIJ_1? This is a horrible error message, but it comes from trying to log a negative number of flops. I'll put a check into petsc-dev so it gives a better error message. Jed -------------- next part -------------- An HTML attachment was scrubbed... URL: From christophe.trophime at lncmi.cnrs.fr Thu Oct 21 03:12:33 2010 From: christophe.trophime at lncmi.cnrs.fr (trophime) Date: Thu, 21 Oct 2010 10:12:33 +0200 Subject: [petsc-users] [petsc 3.1-p5] use of mumps Message-ID: <1287648753.2478.4.camel@calcul8.lcmi.local> Hi, I would like to add mumps support for debian petsc package. I slightly modify config/PETSc/packages/MUMPS.py to use existing mumps library compiled with scotch. I would like to test mumps support with a simple example. Is there already some existing examples for mumps? Thank for you work Regards C. 
Trophime From knepley at gmail.com Thu Oct 21 07:37:02 2010 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 21 Oct 2010 07:37:02 -0500 Subject: [petsc-users] [petsc 3.1-p5] use of mumps In-Reply-To: <1287648753.2478.4.camel@calcul8.lcmi.local> References: <1287648753.2478.4.camel@calcul8.lcmi.local> Message-ID: On Thu, Oct 21, 2010 at 3:12 AM, trophime wrote: > Hi, > I would like to add mumps support for debian petsc package. > I slightly modify config/PETSc/packages/MUMPS.py to use existing mumps > library compiled with scotch. > It already supports this. Use --with-mumps-include,--with-mumps-lib > I would like to test mumps support with a simple example. > Is there already some existing examples for mumps? > Run KSP ex2 with MUMPS: cd src/ksp/ksp/examples/tutorials make ex2 ./ex2 -pc_type lu -pc_factor_mat_solver_package mumps Matt > Thank for you work > Regards > C. Trophime > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.hauffe at tu-dresden.de Thu Oct 21 07:41:48 2010 From: andreas.hauffe at tu-dresden.de (Andreas Hauffe) Date: Thu, 21 Oct 2010 14:41:48 +0200 Subject: [petsc-users] Bug when multipling a SBAIJ matrix with a vector In-Reply-To: References: <201010211059.34454.andreas.hauffe@tu-dresden.de> Message-ID: <201010211441.48540.andreas.hauffe@tu-dresden.de> Am Donnerstag 21 Oktober 2010, 14:24:59 schrieb Jed Brown: > The SBAIJ implementation requires that the diagonal entry exists. If you > only have a few rows of all zeros, then just place explicit zeros there, > otherwise use AIJ or BAIJ both of which work fine with completely empty > rows (if your system is so sparse that you don't want to store one > explicit zero per row, then the non-symmetric storage will probably also > be faster. > > Aren't you getting an error in MatMult_SeqSBAIJ_1? This is a horrible > error message, but it comes from trying to log a negative number of flops. > I'll put a check into petsc-dev so it gives a better error message. > > Jed I get no error and this example delivers the right result for PETSC 3.0. What did change from 3.0 to 3.1? So I will but 0 to the diagonal. best regards, -- Andreas Hauffe ---------------------------------------------------------------------------------------------------- Technische Universit?t Dresden Institut f?r Luft- und Raumfahrttechnik / Institute of Aerospace Engineering Lehrstuhl f?r Luftfahrzeugtechnik / Chair of Aircraft Engineering D-01062 Dresden Germany phone : (++49)351 463 38496 fax : (++49)351 463 37263 mail : andreas.hauffe at tu-dresden.de Website : http://tu-dresden.de/mw/ilr/lft ---------------------------------------------------------------------------------------------------- From balay at mcs.anl.gov Thu Oct 21 07:46:59 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 21 Oct 2010 07:46:59 -0500 (CDT) Subject: [petsc-users] [petsc 3.1-p5] use of mumps In-Reply-To: <1287648753.2478.4.camel@calcul8.lcmi.local> References: <1287648753.2478.4.camel@calcul8.lcmi.local> Message-ID: On Thu, 21 Oct 2010, trophime wrote: > Hi, > I would like to add mumps support for debian petsc package. > I slightly modify config/PETSc/packages/MUMPS.py to use existing mumps > library compiled with scotch. > > I would like to test mumps support with a simple example. > Is there already some existing examples for mumps? 
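For completeness, a hedged sketch of making the same choice in code rather than through the runtime options Matt shows above (PETSc 3.1 API; the matrix A, the vectors b and x, and the ierr/CHKERRQ context are assumed to exist already):

  KSP ksp;
  PC  pc;

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp); CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN); CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPPREONLY); CHKERRQ(ierr);   /* direct solve, no Krylov iteration */
  ierr = KSPGetPC(ksp, &pc); CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU); CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverPackage(pc, "mumps"); CHKERRQ(ierr);  /* same string the MAT_SOLVER_MUMPS constant carries in 3.1 */
  ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x); CHKERRQ(ierr);

This is equivalent to running with -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps.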
you can try make ACTION=testexamples_MUMPS tree for this - you might need some of the testmatrices from ftp://ftp.mcs.anl.gov/pub/petsc/matrices If you place them say $HOME/datafiles/matrices - you can specify: make ACTION=testexamples_MUMPS DATAFILESPATH=$HOME/datafiles tree Satish From amal.ghamdi at kaust.edu.sa Thu Oct 21 07:55:12 2010 From: amal.ghamdi at kaust.edu.sa (Amal Alghamdi) Date: Thu, 21 Oct 2010 15:55:12 +0300 Subject: [petsc-users] using 2 da objects. Message-ID: Dear all, I need to create 2 da objects with the same dimensions size except the dof is not necessary the same for the two. Will PETSc always distribute the cells among the processes in the same way for the 2 da objects or I might get inconsistency? Thank you very much. Amal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Thu Oct 21 08:08:39 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 21 Oct 2010 15:08:39 +0200 Subject: [petsc-users] Bug when multipling a SBAIJ matrix with a vector In-Reply-To: <201010211441.48540.andreas.hauffe@tu-dresden.de> References: <201010211059.34454.andreas.hauffe@tu-dresden.de> <201010211441.48540.andreas.hauffe@tu-dresden.de> Message-ID: On Thu, Oct 21, 2010 at 14:41, Andreas Hauffe wrote: > I get no error and this example delivers the right result for PETSC 3.0. What > did change from 3.0 to 3.1? Many of the matrix kernels were optimized, this one was changed in relatively simple ways, but the data structures for factorization were changed completely. In this case, Barry's name is on the commit message http://petsc.cs.iit.edu/petsc/petsc-dev/rev/0308bd570415#l2.33 He should have put that behavioral change in the 3.1 changelog. Or maybe the cost of handling empty rows is actually not significant so it should still be supported. I can't tell from the commit message, but empty rows are still supported by the code in MatMult_SeqSBAIJ_2, so it probably should work for block size 1 as well. Do you really not get an error message when you run your example code with 3.1 built with debug support? I just ran your code with 3.1 and I get a (bad) error message. Jed From balay at mcs.anl.gov Thu Oct 21 08:09:34 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 21 Oct 2010 08:09:34 -0500 (CDT) Subject: [petsc-users] using 2 da objects. In-Reply-To: References: Message-ID: On Thu, 21 Oct 2010, Amal Alghamdi wrote: > Dear all, > > I need to create 2 da objects with the same dimensions size except the dof > is not necessary the same for the two. Will PETSc always distribute the > cells among the processes in the same way for the 2 da objects or I might > get inconsistency? If the dimentions are the same - PETSC_DECIDE will distribute both DAs in the same way. Satish From jed at 59A2.org Thu Oct 21 08:10:55 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 21 Oct 2010 15:10:55 +0200 Subject: [petsc-users] using 2 da objects. In-Reply-To: References: Message-ID: On Thu, Oct 21, 2010 at 14:55, Amal Alghamdi wrote: > Dear all, > I need to create 2 da objects with the same dimensions size except the dof > is not necessary the same for the two. Will PETSc always distribute the > cells among the processes in the same way for the 2 da objects or I might > get inconsistency? If you call DACreate*d with all parameters equal except for dof or stencil_width, then the layout will be the same. 
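For the simpler case in this thread, two DAs over the same grid that differ only in dof, a short sketch with the same 3.1-era API; the sizes and names are made up:

  DA             da_vel, da_pres;
  PetscErrorCode ierr;

  ierr = DACreate2d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
                    128, 128, PETSC_DECIDE, PETSC_DECIDE,
                    2 /* dof */, 1 /* stencil width */,
                    PETSC_NULL, PETSC_NULL, &da_vel); CHKERRQ(ierr);
  ierr = DACreate2d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
                    128, 128, PETSC_DECIDE, PETSC_DECIDE,
                    1 /* dof */, 1,
                    PETSC_NULL, PETSC_NULL, &da_pres); CHKERRQ(ierr);
  /* every layout-determining argument is identical, so both DAs assign the
     same set of (i,j) cells to each process */

If the match should be explicit rather than implied by identical arguments, the ownership ranges of the first DA (from DAGetOwnershipRanges) can be passed as lx and ly when creating the second.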
Jed From amal.ghamdi at kaust.edu.sa Thu Oct 21 08:17:17 2010 From: amal.ghamdi at kaust.edu.sa (Amal Alghamdi) Date: Thu, 21 Oct 2010 16:17:17 +0300 Subject: [petsc-users] using 2 da objects. In-Reply-To: References: Message-ID: Thanks alot Satish and Jed! On Thu, Oct 21, 2010 at 4:10 PM, Jed Brown wrote: > On Thu, Oct 21, 2010 at 14:55, Amal Alghamdi > wrote: > > Dear all, > > I need to create 2 da objects with the same dimensions size except the > dof > > is not necessary the same for the two. Will PETSc always distribute the > > cells among the processes in the same way for the 2 da objects or I might > > get inconsistency? > > If you call DACreate*d with all parameters equal except for dof or > stencil_width, then the layout will be the same. > > Jed > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.hauffe at tu-dresden.de Thu Oct 21 08:23:32 2010 From: andreas.hauffe at tu-dresden.de (Andreas Hauffe) Date: Thu, 21 Oct 2010 15:23:32 +0200 Subject: [petsc-users] Bug when multipling a SBAIJ matrix with a vector In-Reply-To: References: <201010211059.34454.andreas.hauffe@tu-dresden.de> <201010211441.48540.andreas.hauffe@tu-dresden.de> Message-ID: <201010211523.32513.andreas.hauffe@tu-dresden.de> Am Donnerstag 21 Oktober 2010, 15:08:39 schrieb Jed Brown: > On Thu, Oct 21, 2010 at 14:41, Andreas Hauffe > > wrote: > > I get no error and this example delivers the right result for PETSC 3.0. > > What did change from 3.0 to 3.1? > > Many of the matrix kernels were optimized, this one was changed in > relatively simple ways, but the data structures for factorization were > changed completely. In this case, Barry's name is on the commit > message > > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/0308bd570415#l2.33 > > He should have put that behavioral change in the 3.1 changelog. Or > maybe the cost of handling empty rows is actually not significant so > it should still be supported. I can't tell from the commit message, > but empty rows are still supported by the code in MatMult_SeqSBAIJ_2, > so it probably should work for block size 1 as well. > > Do you really not get an error message when you run your example code > with 3.1 built with debug support? I just ran your code with 3.1 and > I get a (bad) error message. > > Jed Thanks for the answer. I forgott to use a debug version of PETSC. Sorry! Now I get the error. Best regards, -- Andreas Hauffe ---------------------------------------------------------------------------------------------------- Technische Universit?t Dresden Institut f?r Luft- und Raumfahrttechnik / Institute of Aerospace Engineering Lehrstuhl f?r Luftfahrzeugtechnik / Chair of Aircraft Engineering D-01062 Dresden Germany phone : (++49)351 463 38496 fax : (++49)351 463 37263 mail : andreas.hauffe at tu-dresden.de Website : http://tu-dresden.de/mw/ilr/lft ---------------------------------------------------------------------------------------------------- From zonexo at gmail.com Thu Oct 21 10:46:06 2010 From: zonexo at gmail.com (Wee-Beng Tay) Date: Thu, 21 Oct 2010 17:46:06 +0200 Subject: [petsc-users] Error linking with HYPRE Message-ID: Hi, I am trying to compile and build my code. Initially it's simply a fortran PETSc code and I managed to build the code. 
However, when I try to compile another similar code which has HYPRE 2.6b, it has problems linking: global.o: In function `global_data_mp_de_ini_var_': global.F90:(.text+0xaaf5): undefined reference to `hypre_structgriddestroy_' global.F90:(.text+0xab06): undefined reference to `hypre_structstencildestroy_' global.F90:(.text+0xab17): undefined reference to `hypre_structmatrixdestroy_' global.F90:(.text+0xab28): undefined reference to `hypre_structvectordestroy_' global.F90:(.text+0xab39): undefined reference to `hypre_structvectordestroy_' global.F90:(.text+0xab55): undefined reference to `hypre_structsmgdestroy_' global.F90:(.text+0xab6d): undefined reference to `hypre_structpfmgdestroy_' global.F90:(.text+0xab8e): undefined reference to `hypre_structhybriddestroy_' global.F90:(.text+0xaba6): undefined reference to `hypre_structbicgstabdestroy_' global.F90:(.text+0xabb9): undefined reference to `hypre_structgmresdestroy_' hypre.o: In function `hypre_mp_hypre_solver_': hypre.F90:(.text+0xb5): undefined reference to `hypre_structgridcreate_' hypre.F90:(.text+0x10f): undefined reference to `hypre_structgridsetextents_' hypre.F90:(.text+0x120): undefined reference to `hypre_structgridassemble_' hypre.F90:(.text+0x13b): undefined reference to `hypre_structstencilcreate_' hypre.F90:(.text+0x1c3): undefined reference to `hypre_structstencilsetelement_' hypre.F90:(.text+0x20e): undefined reference to `hypre_structmatrixcreate_' hypre.F90:(.text+0x224): undefined reference to `hypre_structmatrixsetsymmetric_' ... I have attached my global.F90, and also my makefile. Hope someone can help. I think if I can solve the global.F90 problem, the other hypre.F90 can be solved as well. Thank you. -- ================================================ Wee-Beng TAY Postdoctoral Fellow Aerodynamics Group Aerospace Engineering Delft University of Technology Kluyverweg 2 2629 HT Delft The Netherlands Temporary E-mail: zonexo at gmail.com ================================================ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: global.F90 Type: application/octet-stream Size: 37878 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: makefile Type: application/octet-stream Size: 1757 bytes Desc: not available URL: From knepley at gmail.com Thu Oct 21 10:49:57 2010 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 21 Oct 2010 10:49:57 -0500 Subject: [petsc-users] Error linking with HYPRE In-Reply-To: References: Message-ID: Is your PETSc configured to use Hypre? If so, send the entire failing link line and configure.log. Matt On Thu, Oct 21, 2010 at 10:46 AM, Wee-Beng Tay wrote: > Hi, > > I am trying to compile and build my code. Initially it's simply a fortran > PETSc code and I managed to build the code. 
> > However, when I try to compile another similar code which has HYPRE 2.6b, > it has problems linking: > > global.o: In function `global_data_mp_de_ini_var_': > global.F90:(.text+0xaaf5): undefined reference to > `hypre_structgriddestroy_' > global.F90:(.text+0xab06): undefined reference to > `hypre_structstencildestroy_' > global.F90:(.text+0xab17): undefined reference to > `hypre_structmatrixdestroy_' > global.F90:(.text+0xab28): undefined reference to > `hypre_structvectordestroy_' > global.F90:(.text+0xab39): undefined reference to > `hypre_structvectordestroy_' > global.F90:(.text+0xab55): undefined reference to `hypre_structsmgdestroy_' > global.F90:(.text+0xab6d): undefined reference to > `hypre_structpfmgdestroy_' > global.F90:(.text+0xab8e): undefined reference to > `hypre_structhybriddestroy_' > global.F90:(.text+0xaba6): undefined reference to > `hypre_structbicgstabdestroy_' > global.F90:(.text+0xabb9): undefined reference to > `hypre_structgmresdestroy_' > hypre.o: In function `hypre_mp_hypre_solver_': > hypre.F90:(.text+0xb5): undefined reference to `hypre_structgridcreate_' > hypre.F90:(.text+0x10f): undefined reference to > `hypre_structgridsetextents_' > hypre.F90:(.text+0x120): undefined reference to `hypre_structgridassemble_' > hypre.F90:(.text+0x13b): undefined reference to > `hypre_structstencilcreate_' > hypre.F90:(.text+0x1c3): undefined reference to > `hypre_structstencilsetelement_' > hypre.F90:(.text+0x20e): undefined reference to `hypre_structmatrixcreate_' > hypre.F90:(.text+0x224): undefined reference to > `hypre_structmatrixsetsymmetric_' > ... > > I have attached my global.F90, and also my makefile. Hope someone can > help. I think if I can solve the global.F90 problem, the other hypre.F90 can > be solved as well. > > Thank you. > > -- > ================================================ > Wee-Beng TAY > Postdoctoral Fellow > Aerodynamics Group > Aerospace Engineering > Delft University of Technology > Kluyverweg 2 > 2629 HT Delft > The Netherlands > Temporary E-mail: zonexo at gmail.com > ================================================ > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Oct 21 11:47:43 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 21 Oct 2010 11:47:43 -0500 (CDT) Subject: [petsc-users] [petsc-maint #54896] Re: Error linking with HYPRE In-Reply-To: References: Message-ID: > --with-hypre-dir=/home/svu/g0306332/lib/hypre-2.6.0b_atlas5 Looks like this hypre is built without fortran - or with a worng [to you] fortran compiler. Suggest using --download-hypre=1 instead. 
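The most robust way to get consistently mangled Fortran symbols is to let PETSc build hypre itself with the same compilers, as suggested above. A configure line of roughly this shape (the wrapper names are assumptions and must match the mpicc/mpif90 used to build the application):

  ./config/configure.py --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 \
      --download-hypre=1

PETSc then compiles hypre with that Fortran compiler, so the hypre_struct* Fortran bindings referenced by global.F90 and hypre.F90 are actually present in the libHYPRE being linked.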
Satish On Thu, 21 Oct 2010, Wee-Beng Tay wrote: > [atlas5-c49]$ make > /app1/mvapich2/current/bin/mpif90 -c -r8 -save -w90 -w -w95 -O3 -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/petsc-3.1-p5/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -o global.o global.F90 > /app1/mvapich2/current/bin/mpif90 -c -r8 -save -w90 -w -w95 -O3 -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/petsc-3.1-p5/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -o flux_area.o flux_area.F90 > /app1/mvapich2/current/bin/mpif90 -r8 -w95 -c -O3 -save airfoil.f90 > /app1/mvapich2/current/bin/mpif90 -w95 -c -O3 -save grid.f90 > /app1/mvapich2/current/bin/mpif90 -r8 -w95 -c -O3 -save bc.f90 > /app1/mvapich2/current/bin/mpif90 -r8 -w95 -c -O3 -save bc_impl.f90 > /app1/mvapich2/current/bin/mpif90 -r8 -w95 -c -O3 -save bc_semi.f90 > /app1/mvapich2/current/bin/mpif90 -r8 -w95 -c -O3 -save set_matrix.f90 > /app1/mvapich2/current/bin/mpif90 -r8 -w95 -c -O3 -save inter_step.f90 > /app1/mvapich2/current/bin/mpif90 -c -r8 -save -w90 -w -w95 -O3 -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/petsc-3.1-p5/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -o mom_disz.o mom_disz.F90 > /app1/mvapich2/current/bin/mpif90 -c -r8 -save -w90 -w -w95 -O3 -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/petsc-3.1-p5/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -o poisson.o poisson.F90 > /app1/mvapich2/current/bin/mpif90 -c -r8 -save -w90 -w -w95 -O3 -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/petsc-3.1-p5/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -o hypre.o hypre.F90 > /app1/mvapich2/current/bin/mpif90 -c -r8 -save -w90 -w -w95 -O3 -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/petsc-3.1-p5/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -o cell_data.o cell_data.F90 > /app1/mvapich2/current/bin/mpif90 -r8 -w95 -c -O3 -save 
fractional.f90 > /app1/mvapich2/current/bin/mpif90 -c -r8 -save -w90 -w -w95 -O3 -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/petsc-3.1-p5/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -I/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/include -I/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/include -I/app1/mvapich2/1.4/include -I/app1/mvapich2/current/include -o ns2d_c.o ns2d_c.F90 > /app1/mvapich2/current/bin/mpif90 -O3 -o a.out global.o grid.o flux_area.o airfoil.o bc.o bc_impl.o bc_semi.o set_matrix.o inter_step.o mom_disz.o hypre.o poisson.o cell_data.o fractional.o ns2d_c.o /home/svu/g0306332/lib/tecio64.a /home/svu/g0306332/lib/linux64.a -Wl,-rpath,/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/lib -L/home/svu/g0306332/lib/petsc-3.1-p5/atlas5_nodebug/lib -lpetsc -lX11 -Wl,-rpath,/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/lib -L/home/svu/g0306332/lib/hypre-2.6.0b_atlas5/lib -lHYPRE -lmpichcxx -lstdc++ -Wl,-rpath,/app1/intel/mkl/10.0.5.025/lib/em64t -L/app1/intel/mkl/10.0.5.025/lib/em64t -lmkl_lapack -lmkl -lguide -lpthread -L/app1/mvapich2/1.4/lib -L/app1/intel/Compiler/11.1/069/lib/intel64 -L/usr/lib/gcc/x86_64-redhat-linux/4.1.2 -ldl -lmpich -lpthread -lrdmacm -libverbs -libumad -lrt -lgcc_s -lmpichf90 -lifport -lifcore -limf -lsvml -lm -lipgo -lirc -lirc_s -lm -lmpichcxx -lstdc++ -lmpichcxx -lstdc++ -ldl -lmpich -lpthread -lrdmacm - li > bverbs -libumad -lrt -lgcc_s -ldl > global.o: In function `global_data_mp_de_ini_var_': > global.F90:(.text+0xaaf5): undefined reference to `hypre_structgriddestroy_' > global.F90:(.text+0xab06): undefined reference to `hypre_structstencildestroy_' > global.F90:(.text+0xab17): undefined reference to `hypre_structmatrixdestroy_' > global.F90:(.text+0xab28): undefined reference to `hypre_structvectordestroy_' > global.F90:(.text+0xab39): undefined reference to `hypre_structvectordestroy_' > global.F90:(.text+0xab55): undefined reference to `hypre_structsmgdestroy_' > global.F90:(.text+0xab6d): undefined reference to `hypre_structpfmgdestroy_' > global.F90:(.text+0xab8e): undefined reference to `hypre_structhybriddestroy_' > global.F90:(.text+0xaba6): undefined reference to `hypre_structbicgstabdestroy_' > global.F90:(.text+0xabb9): undefined reference to `hypre_structgmresdestroy_' > hypre.o: In function `hypre_mp_hypre_solver_': > hypre.F90:(.text+0xb5): undefined reference to `hypre_structgridcreate_' > hypre.F90:(.text+0x10f): undefined reference to `hypre_structgridsetextents_' > hypre.F90:(.text+0x120): undefined reference to `hypre_structgridassemble_' > hypre.F90:(.text+0x13b): undefined reference to `hypre_structstencilcreate_' > hypre.F90:(.text+0x1c3): undefined reference to `hypre_structstencilsetelement_' > hypre.F90:(.text+0x20e): undefined reference to `hypre_structmatrixcreate_' > hypre.F90:(.text+0x224): undefined reference to `hypre_structmatrixsetsymmetric_' > hypre.F90:(.text+0x235): undefined reference to `hypre_structmatrixinitialize_' > hypre.F90:(.text+0x74d): undefined reference to `hypre_structmatrixsetboxvalues_' > hypre.F90:(.text+0x762): undefined reference to `hypre_structmatrixassemble_' > hypre.F90:(.text+0x77d): undefined reference to `hypre_structvectorcreate_' > hypre.F90:(.text+0x798): undefined reference to `hypre_structvectorcreate_' > hypre.F90:(.text+0x7a9): undefined reference to `hypre_structvectorinitialize_' > hypre.F90:(.text+0x7ba): undefined reference to `hypre_structvectorinitialize_' 
> hypre.F90:(.text+0x7ed): undefined reference to `hypre_structbicgstabcreate_' > hypre.F90:(.text+0x803): undefined reference to `hypre_structbicgstabsettol_' > hypre.F90:(.text+0x819): undefined reference to `hypre_structbicgstabsetlogging_' > hypre.F90:(.text+0x83d): undefined reference to `hypre_structpcgcreate_' > hypre.F90:(.text+0x853): undefined reference to `hypre_structpcgsetmaxiter_' > hypre.F90:(.text+0x869): undefined reference to `hypre_structpcgsettol_' > hypre.F90:(.text+0x87f): undefined reference to `hypre_structpcgsettwonorm_' > hypre.F90:(.text+0x895): undefined reference to `hypre_structpcgsetrelchange_' > hypre.F90:(.text+0x8ab): undefined reference to `hypre_structpcgsetprintlevel_' > hypre.F90:(.text+0x8c6): undefined reference to `hypre_structhybridcreate_' > hypre.F90:(.text+0x8dc): undefined reference to `hypre_structhybridsetdscgmaxite_' > hypre.F90:(.text+0x8f2): undefined reference to `hypre_structhybridsetpcgmaxiter_' > hypre.F90:(.text+0x908): undefined reference to `hypre_structhybridsettol_' > hypre.F90:(.text+0x91e): undefined reference to `hypre_structhybridsetconvergenc_' > hypre.F90:(.text+0x934): undefined reference to `hypre_structhybridsettwonorm_' > hypre.F90:(.text+0x94a): undefined reference to `hypre_structhybridsetrelchange_' > hypre.F90:(.text+0x960): undefined reference to `hypre_structhybridsetlogging_' > hypre.F90:(.text+0x985): undefined reference to `hypre_structsmgcreate_' > hypre.F90:(.text+0x9b6): undefined reference to `hypre_structsmgsetmemoryuse_' > hypre.F90:(.text+0x9cc): undefined reference to `hypre_structsmgsetmaxiter_' > hypre.F90:(.text+0x9e2): undefined reference to `hypre_structsmgsettol_' > hypre.F90:(.text+0x9f3): undefined reference to `hypre_structsmgsetzeroguess_' > hypre.F90:(.text+0xa09): undefined reference to `hypre_structsmgsetnumprerelax_' > hypre.F90:(.text+0xa1f): undefined reference to `hypre_structsmgsetnumpostrelax_' > hypre.F90:(.text+0xa35): undefined reference to `hypre_structsmgsetprintlevel_' > hypre.F90:(.text+0xa4b): undefined reference to `hypre_structsmgsetlogging_' > hypre.F90:(.text+0xa6f): undefined reference to `hypre_structpfmgcreate_' > hypre.F90:(.text+0xa9c): undefined reference to `hypre_structpfmgsetmaxiter_' > hypre.F90:(.text+0xab2): undefined reference to `hypre_structpfmgsettol_' > hypre.F90:(.text+0xac3): undefined reference to `hypre_structpfmgsetzeroguess_' > hypre.F90:(.text+0xad9): undefined reference to `hypre_structpfmgsetrelaxtype_' > hypre.F90:(.text+0xaef): undefined reference to `hypre_structpfmgsetnumprerelax_' > hypre.F90:(.text+0xb05): undefined reference to `hypre_structpfmgsetnumpostrelax_' > hypre.F90:(.text+0xb1b): undefined reference to `hypre_structpfmgsetlogging_' > hypre.F90:(.text+0xb53): undefined reference to `hypre_structbicgstabsetprecond_' > hypre.F90:(.text+0xb74): undefined reference to `hypre_structbicgstabsetup_' > hypre.F90:(.text+0xb99): undefined reference to `hypre_structpcgsetprecond_' > hypre.F90:(.text+0xbba): undefined reference to `hypre_structpcgsetup_' > hypre.F90:(.text+0xbd7): undefined reference to `hypre_structhybridsetprecond_' > hypre.F90:(.text+0xbf8): undefined reference to `hypre_structhybridsetup_' > hypre.F90:(.text+0xecc): undefined reference to `hypre_structvectorsetboxvalues_' > hypre.F90:(.text+0xf94): undefined reference to `hypre_structvectorsetboxvalues_' > hypre.F90:(.text+0xfa5): undefined reference to `hypre_structvectorassemble_' > hypre.F90:(.text+0xfb6): undefined reference to `hypre_structvectorassemble_' > 
hypre.F90:(.text+0xff0): undefined reference to `hypre_structbicgstabsolve_' > hypre.F90:(.text+0x1018): undefined reference to `hypre_structpcgsolve_' > hypre.F90:(.text+0x103b): undefined reference to `hypre_structhybridsolve_' > hypre.F90:(.text+0x1099): undefined reference to `hypre_structvectorgetboxvalues_' > hypre.F90:(.text+0x1190): undefined reference to `hypre_structgmressolve_' > hypre.F90:(.text+0x11fa): undefined reference to `hypre_structgmressetprecond_' > hypre.F90:(.text+0x121b): undefined reference to `hypre_structgmressetup_' > hypre.F90:(.text+0x1236): undefined reference to `hypre_structgmrescreate_' > hypre.F90:(.text+0x124c): undefined reference to `hypre_structgmressetmaxiter_' > hypre.F90:(.text+0x1262): undefined reference to `hypre_structgmressettol_' > hypre.F90:(.text+0x1278): undefined reference to `hypre_structgmressetprintlevel_' > hypre.F90:(.text+0x128e): undefined reference to `hypre_structgmressetlogging_' > make: *** [a.out] Error 1 > [atlas5-c49]$ ls /home/svu/g0306332/lib/petsc-3.1-p5/ > > > From xxy113 at psu.edu Thu Oct 21 16:46:07 2010 From: xxy113 at psu.edu (Xuan Yu) Date: Thu, 21 Oct 2010 17:46:07 -0400 Subject: [petsc-users] ODE solving Message-ID: Hi I am using Petsc solve ODE nonlinear problem. I was told to use direct method: umfpackage because of the jacobian matrix is 1860 by 1860. But the time consumption is still too big: TSSetType(ts,TSBEULER) time ./pihm -log_summary -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package umfpack I got the result: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen shot 2010-10-21 at 5.41.41 PM.png Type: image/png Size: 126875 bytes Desc: not available URL: -------------- next part -------------- When I use TSSetType(ts,TSSUNDIALS), it is fast: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen shot 2010-10-21 at 5.41.54 PM.png Type: image/png Size: 40592 bytes Desc: not available URL: -------------- next part -------------- How can I find the reason why BEULER is so slow? Thanks Xuan From knepley at gmail.com Thu Oct 21 16:56:00 2010 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 21 Oct 2010 16:56:00 -0500 Subject: [petsc-users] ODE solving In-Reply-To: References: Message-ID: On Thu, Oct 21, 2010 at 4:46 PM, Xuan Yu wrote: > Hi > > > I am using Petsc solve ODE nonlinear problem. > > I was told to use direct method: umfpackage because of the jacobian matrix > is 1860 by 1860. > > But the time consumption is still too big: > > TSSetType(ts,TSBEULER) > > time ./pihm -log_summary -ksp_type preonly -pc_type lu > -pc_factor_mat_solver_package umfpack > > I got the result: > > > When I use TSSetType(ts,TSSUNDIALS), it is fast: > > > > How can I find the reason why BEULER is so slow? > You SNES is not converging. You solve 163 systems, but do 4300 function evaluations. Matt > Thanks > > Xuan > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Thu Oct 21 17:01:23 2010 From: jed at 59A2.org (Jed Brown) Date: Fri, 22 Oct 2010 00:01:23 +0200 Subject: [petsc-users] ODE solving In-Reply-To: References: Message-ID: On Thu, Oct 21, 2010 at 23:56, Matthew Knepley wrote: > You SNES is not converging. You solve 163 systems, but do 4300 function > evaluations. 
That is because the Jacobian is assembled using coloring. Sundials is probably doing fewer steps than BEuler for this problem, and it's also lagging the preconditioner and just using matrix-free differencing to define the Krylov operator. These options are not default with PETSc's TS implementations. Jed From knepley at gmail.com Thu Oct 21 17:02:54 2010 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 21 Oct 2010 17:02:54 -0500 Subject: [petsc-users] ODE solving In-Reply-To: References: Message-ID: On Thu, Oct 21, 2010 at 5:01 PM, Jed Brown wrote: > On Thu, Oct 21, 2010 at 23:56, Matthew Knepley wrote: > > You SNES is not converging. You solve 163 systems, but do 4300 function > > evaluations. > > That is because the Jacobian is assembled using coloring. > With that many colors, it seems like you should sit down and work out the Jacobian. Matt > Sundials is probably doing fewer steps than BEuler for this problem, > and it's also lagging the preconditioner and just using matrix-free > differencing to define the Krylov operator. These options are not > default with PETSc's TS implementations. > > Jed > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From xxy113 at psu.edu Thu Oct 21 17:02:56 2010 From: xxy113 at psu.edu (Xuan Yu) Date: Thu, 21 Oct 2010 18:02:56 -0400 Subject: [petsc-users] ODE solving In-Reply-To: References: Message-ID: <2294EAD2-AEC6-48A4-ACB2-A4BD1BC6A88D@psu.edu> On Oct 21, 2010, at 5:56 PM, Matthew Knepley wrote: > On Thu, Oct 21, 2010 at 4:46 PM, Xuan Yu wrote: > Hi > > > I am using Petsc solve ODE nonlinear problem. > > I was told to use direct method: umfpackage because of the jacobian matrix is 1860 by 1860. > > But the time consumption is still too big: > > TSSetType(ts,TSBEULER) > > time ./pihm -log_summary -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package umfpack > > I got the result: > > > When I use TSSetType(ts,TSSUNDIALS), it is fast: > > > > How can I find the reason why BEULER is so slow? > > You SNES is not converging. You solve 163 systems, but do 4300 function evaluations. Could you please tell me more solution options or possible mistakes in my code? > > Matt > > Thanks > > Xuan > > > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Thu Oct 21 17:07:34 2010 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 21 Oct 2010 17:07:34 -0500 Subject: [petsc-users] ODE solving In-Reply-To: <2294EAD2-AEC6-48A4-ACB2-A4BD1BC6A88D@psu.edu> References: <2294EAD2-AEC6-48A4-ACB2-A4BD1BC6A88D@psu.edu> Message-ID: On Thu, Oct 21, 2010 at 5:02 PM, Xuan Yu wrote: > > On Oct 21, 2010, at 5:56 PM, Matthew Knepley wrote: > > On Thu, Oct 21, 2010 at 4:46 PM, Xuan Yu wrote: > >> Hi >> >> >> I am using Petsc solve ODE nonlinear problem. >> >> I was told to use direct method: umfpackage because of the jacobian matrix >> is 1860 by 1860. 
>> >> But the time consumption is still too big: >> >> TSSetType(ts,TSBEULER) >> >> time ./pihm -log_summary -ksp_type preonly -pc_type lu >> -pc_factor_mat_solver_package umfpack >> >> I got the result: >> >> >> When I use TSSetType(ts,TSSUNDIALS), it is fast: >> >> >> >> How can I find the reason why BEULER is so slow? >> > > You SNES is not converging. You solve 163 systems, but do 4300 function > evaluations. > > > Could you please tell me more solution options or possible mistakes in my > code? > Jed, is right. You are using coloring with a lot of colors. Sundials is doing much better than backwards-Euler, why not just use it? Matt > Matt > > >> Thanks >> >> Xuan >> >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From xxy113 at psu.edu Thu Oct 21 17:22:41 2010 From: xxy113 at psu.edu (Xuan Yu) Date: Thu, 21 Oct 2010 18:22:41 -0400 Subject: [petsc-users] ODE solving In-Reply-To: References: <2294EAD2-AEC6-48A4-ACB2-A4BD1BC6A88D@psu.edu> Message-ID: On Oct 21, 2010, at 6:07 PM, Matthew Knepley wrote: > On Thu, Oct 21, 2010 at 5:02 PM, Xuan Yu wrote: > > On Oct 21, 2010, at 5:56 PM, Matthew Knepley wrote: > >> On Thu, Oct 21, 2010 at 4:46 PM, Xuan Yu wrote: >> Hi >> >> >> I am using Petsc solve ODE nonlinear problem. >> >> I was told to use direct method: umfpackage because of the jacobian matrix is 1860 by 1860. >> >> But the time consumption is still too big: >> >> TSSetType(ts,TSBEULER) >> >> time ./pihm -log_summary -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package umfpack >> >> I got the result: >> >> >> When I use TSSetType(ts,TSSUNDIALS), it is fast: >> >> >> >> How can I find the reason why BEULER is so slow? >> >> You SNES is not converging. You solve 163 systems, but do 4300 function evaluations. > > Could you please tell me more solution options or possible mistakes in my code? > > Jed, is right. You are using coloring with a lot of colors. Sundials is doing much better > than backwards-Euler, why not just use it? > > Matt Thanks, at first we are using sundials. Now, we found Petsc is more popular, and powerful of parallel and precondition. So, do you think it is the best way to use sundials option under Petsc framework? Xuan > >> Matt >> >> Thanks >> >> Xuan >> >> >> >> >> >> >> -- >> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. >> -- Norbert Wiener > > > > > -- > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. > -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Thu Oct 21 17:29:46 2010 From: jed at 59A2.org (Jed Brown) Date: Fri, 22 Oct 2010 00:29:46 +0200 Subject: [petsc-users] ODE solving In-Reply-To: References: <2294EAD2-AEC6-48A4-ACB2-A4BD1BC6A88D@psu.edu> Message-ID: On Fri, Oct 22, 2010 at 00:22, Xuan Yu wrote: > Thanks, at first we are using sundials. Now, we found Petsc is more > popular, and powerful of parallel and precondition. 
So, do you think it is > the best way to use sundials option under Petsc framework? Yes, if Sundials is working well for your problem (in terms of error control and algorithmics for the solves) then it is a good option to use it through PETSc. You can use all PETSc preconditioners this way, and if your problem/machine/parameters changes, or you just want to try a different method, you don't have to write any code to use other TS implementations. There isn't a performance penalty for calling Sundials through PETSc and you get a lot more flexibility. Jed -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Thu Oct 21 08:20:33 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 21 Oct 2010 08:20:33 -0500 Subject: [petsc-users] Bug when multipling a SBAIJ matrix with a vector In-Reply-To: <201010211059.34454.andreas.hauffe@tu-dresden.de> References: <201010211059.34454.andreas.hauffe@tu-dresden.de> Message-ID: <069E9F3F-862D-47E0-A98C-7DA1F98203D3@mcs.anl.gov> Hong and Shri, It looks like in a couple places (like MatMult_SeqSBAIJ_1) we have the assumption all diagonal entires exist (likely my fault). But usually we do not assume this. Could you please fix MatMult_SeqSBAIJ_1 and check for other places with this assumption? Thanks Barry In the MatSOR() for SBAIJ I would advocate generating an error message if a diagonal doesn't exist, this could also be done in the factorizations if it makes the code simplier. On Oct 21, 2010, at 3:59 AM, Andreas Hauffe wrote: > Hi, > > I think there is a bug when multiplying an SBAIJ matrix with a vector, if the > matrix has a zero/missing row. I tried to write a small example: > |x x| |1| = |0| > |x 1| |x| |0| > The result since 3.1 is: > |x x| |1| = |1| > |x 1| |x| |0| > > I add the fortran code: > program main > > implicit none > > #include "finclude/petscsys.h" > #include "finclude/petscvec.h" > #include "finclude/petscvec.h90" > #include "finclude/petscmat.h" > #include "finclude/petscmat.h90" > > Mat :: KaaS > Vec :: v0,y > PetscInt :: m > PetscInt :: bs > > PetscErrorCode :: ierr > > call petscInitialize(PETSC_NULL_CHARACTER,ierr); CHKERRQ(ierr) > > m = 2 > bs = 1 > call MatCreate(PETSC_COMM_WORLD,KaaS,ierr); CHKERRQ(ierr) > call MatSetSizes(KaaS,PETSC_DECIDE,PETSC_DECIDE,m,m,ierr); CHKERRQ(ierr) > call MatSetType(KaaS,MATSEQSBAIJ,ierr); CHKERRQ(ierr) > ! call MatSetType(KaaS,MATAIJ,ierr); CHKERRQ(ierr) > call MatSetFromOptions(KaaS,ierr); CHKERRQ(ierr) > > ! 
call MatSetValue(KaaS, 0, 0, 0.d0, ADD_VALUES, ierr); CHKERRQ(ierr) > call MatSetValue(KaaS, 1, 1, 1.d0, ADD_VALUES, ierr); CHKERRQ(ierr) > > call MatAssemblyBegin(KaaS,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr) > call MatAssemblyEnd (KaaS,MAT_FINAL_ASSEMBLY,ierr); CHKERRQ(ierr) > > call MatGetVecs(KaaS,y,v0,ierr); CHKERRQ(ierr) > call VecSetValue(v0,0,1.D0,INSERT_VALUES,ierr); CHKERRQ(ierr) > call MatMult(KaaS,v0,y,ierr); CHKERRQ(ierr) > call VecView(y,PETSC_NULL_OBJECT,ierr); CHKERRQ(ierr) > > call VecDestroy(y,ierr); CHKERRQ(ierr) > call VecDestroy(v0,ierr); CHKERRQ(ierr) > call MatDestroy(KaaS,ierr); CHKERRQ(ierr) > > call petscFinalize(ierr) > > end program main > > > best > -- > Andreas Hauffe > > ---------------------------------------------------------------------------------------------------- > Technische Universit?t Dresden > Institut f?r Luft- und Raumfahrttechnik / Institute of Aerospace Engineering > Lehrstuhl f?r Luftfahrzeugtechnik / Chair of Aircraft Engineering > > D-01062 Dresden > Germany > > phone : (++49)351 463 38496 > fax : (++49)351 463 37263 > mail : andreas.hauffe at tu-dresden.de > Website : http://tu-dresden.de/mw/ilr/lft > ---------------------------------------------------------------------------------------------------- From bsmith at mcs.anl.gov Fri Oct 22 15:12:38 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 22 Oct 2010 15:12:38 -0500 Subject: [petsc-users] Bug when multipling a SBAIJ matrix with a vector In-Reply-To: References: <201010211059.34454.andreas.hauffe@tu-dresden.de> <201010211441.48540.andreas.hauffe@tu-dresden.de> Message-ID: <2F142492-7807-4122-897F-718136CB1DFF@mcs.anl.gov> I've asked Hong and Shri to fix the bug I introduced that sometimes makes it require a diagonal. Once the bug is fixed 3.1 will allow empty diagonals again. Barry On Oct 21, 2010, at 8:08 AM, Jed Brown wrote: > On Thu, Oct 21, 2010 at 14:41, Andreas Hauffe > wrote: >> I get no error and this example delivers the right result for PETSC 3.0. What >> did change from 3.0 to 3.1? > > Many of the matrix kernels were optimized, this one was changed in > relatively simple ways, but the data structures for factorization were > changed completely. In this case, Barry's name is on the commit > message > > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/0308bd570415#l2.33 > > He should have put that behavioral change in the 3.1 changelog. Or > maybe the cost of handling empty rows is actually not significant so > it should still be supported. I can't tell from the commit message, > but empty rows are still supported by the code in MatMult_SeqSBAIJ_2, > so it probably should work for block size 1 as well. > > Do you really not get an error message when you run your example code > with 3.1 built with debug support? I just ran your code with 3.1 and > I get a (bad) error message. > > Jed From abhyshr at mcs.anl.gov Fri Oct 22 21:03:31 2010 From: abhyshr at mcs.anl.gov (Shri) Date: Fri, 22 Oct 2010 20:03:31 -0600 (GMT-06:00) Subject: [petsc-users] Bug when multipling a SBAIJ matrix with a vector In-Reply-To: <2F142492-7807-4122-897F-718136CB1DFF@mcs.anl.gov> Message-ID: <1554570138.268371287799411503.JavaMail.root@zimbra.anl.gov> Fixed and pushed to petsc-3.1 ----- Barry Smith wrote: > > I've asked Hong and Shri to fix the bug I introduced that sometimes makes it require a diagonal. Once the bug is fixed 3.1 will allow empty diagonals again. 
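In the meantime, one possible workaround for the reproducer above is to keep an explicit zero on the otherwise empty diagonal (the statement that is commented out with "!" in the Fortran test), so that every row carries a stored diagonal entry. A minimal sketch in C, assuming the petsc-3.1 calling sequences (VecDestroy/MatDestroy take the object itself in this release); it is only an illustration of the workaround, not the fix itself:

#include <petscmat.h>

int main(int argc,char **argv)
{
  Mat            A;
  Vec            x,y;
  PetscInt       row,col;
  PetscScalar    v;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);CHKERRQ(ierr);

  ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,2,2);CHKERRQ(ierr);
  ierr = MatSetType(A,MATSEQSBAIJ);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);

  /* explicit 0.0 on the (0,0) diagonal: the workaround */
  row = 0; col = 0; v = 0.0;
  ierr = MatSetValues(A,1,&row,1,&col,&v,ADD_VALUES);CHKERRQ(ierr);
  row = 1; col = 1; v = 1.0;
  ierr = MatSetValues(A,1,&row,1,&col,&v,ADD_VALUES);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = MatGetVecs(A,&x,&y);CHKERRQ(ierr);
  ierr = VecSet(x,0.0);CHKERRQ(ierr);
  row = 0; v = 1.0;
  ierr = VecSetValues(x,1,&row,&v,INSERT_VALUES);CHKERRQ(ierr);
  ierr = VecAssemblyBegin(x);CHKERRQ(ierr);
  ierr = VecAssemblyEnd(x);CHKERRQ(ierr);

  ierr = MatMult(A,x,y);CHKERRQ(ierr);                /* y should now be [0; 0] */
  ierr = VecView(y,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);

  ierr = VecDestroy(x);CHKERRQ(ierr);
  ierr = VecDestroy(y);CHKERRQ(ierr);
  ierr = MatDestroy(A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}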
> > Barry > > On Oct 21, 2010, at 8:08 AM, Jed Brown wrote: > > > On Thu, Oct 21, 2010 at 14:41, Andreas Hauffe > > wrote: > >> I get no error and this example delivers the right result for PETSC 3.0. What > >> did change from 3.0 to 3.1? > > > > Many of the matrix kernels were optimized, this one was changed in > > relatively simple ways, but the data structures for factorization were > > changed completely. In this case, Barry's name is on the commit > > message > > > > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/0308bd570415#l2.33 > > > > He should have put that behavioral change in the 3.1 changelog. Or > > maybe the cost of handling empty rows is actually not significant so > > it should still be supported. I can't tell from the commit message, > > but empty rows are still supported by the code in MatMult_SeqSBAIJ_2, > > so it probably should work for block size 1 as well. > > > > Do you really not get an error message when you run your example code > > with 3.1 built with debug support? I just ran your code with 3.1 and > > I get a (bad) error message. > > > > Jed > From luvaero.pec at gmail.com Sat Oct 23 05:14:35 2010 From: luvaero.pec at gmail.com (aman singh) Date: Sat, 23 Oct 2010 15:44:35 +0530 Subject: [petsc-users] open source CFD Softwares Message-ID: Hello sir, I am Amandeep Singh from IIT Bombay.I am doing a seminar on Open source CFD tools as my M.tech Subject.I want to know about the first release of PETSc and what was its release date?Because i am mentioning this software in one of the CFD tools.It was claimed that OpenFOAM is the first one so i just want to ensure this.I will be thankful to you. -- ---- regards Amandeep Singh Roll No.-10301014 M.tech, Aerospace engg.(Aerodynamics) IIT Powaii,Mumbai -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Sat Oct 23 09:33:35 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 23 Oct 2010 09:33:35 -0500 (CDT) Subject: [petsc-users] open source CFD Softwares In-Reply-To: References: Message-ID: May 1992: first release of PETSc 1.0 http://www.nersc.gov/nusers/services/training/classes/ERSUG/1999apr/PetSC/petsc.ppt Satish On Sat, 23 Oct 2010, aman singh wrote: > Hello sir, > I am Amandeep Singh from IIT Bombay.I am doing a seminar on > Open source CFD tools as my M.tech Subject.I want to know about the first > release of PETSc and what was its release date?Because i am mentioning this > software in one of the CFD tools.It was claimed that OpenFOAM is the first > one so i just want to ensure this.I will be thankful to you. > > From knepley at gmail.com Sat Oct 23 11:04:07 2010 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 23 Oct 2010 11:04:07 -0500 Subject: [petsc-users] open source CFD Softwares In-Reply-To: References: Message-ID: Barry has 1991 on another slide. Matt On Sat, Oct 23, 2010 at 9:33 AM, Satish Balay wrote: > May 1992: first release of PETSc 1.0 > > > http://www.nersc.gov/nusers/services/training/classes/ERSUG/1999apr/PetSC/petsc.ppt > > Satish > On Sat, 23 Oct 2010, aman singh wrote: > > > Hello sir, > > I am Amandeep Singh from IIT Bombay.I am doing a seminar on > > Open source CFD tools as my M.tech Subject.I want to know about the first > > release of PETSc and what was its release date?Because i am mentioning > this > > software in one of the CFD tools.It was claimed that OpenFOAM is the > first > > one so i just want to ensure this.I will be thankful to you. 
> > > > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From luvaero.pec at gmail.com Sun Oct 24 00:39:43 2010 From: luvaero.pec at gmail.com (aman singh) Date: Sun, 24 Oct 2010 11:09:43 +0530 Subject: [petsc-users] open source CFD Softwares In-Reply-To: References: Message-ID: Thank you very much sir.I feel highly obliged for your reply. -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.skates82 at gmail.com Tue Oct 26 15:33:54 2010 From: m.skates82 at gmail.com (Nunion) Date: Tue, 26 Oct 2010 15:33:54 -0500 Subject: [petsc-users] Writing PETSc matrices Message-ID: Hello, I am new to PETSc and programming. I have a question concerning writing PETSc matrices in binary from binary matrices [compressessed/uncompressed] generated in Matlab. I am attempting to use the files in the /bin/matlab directory, in particular the PetscBinaryWrite.m file. However, the usage; PetscBinaryWrite('matrix.mat','output.ex') does not seem to work. I also tried using the examples in the /mat directory however, matlab does not support the writing of complex matrices in ASCII. Thanks in advance, Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Tue Oct 26 15:57:04 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 26 Oct 2010 15:57:04 -0500 Subject: [petsc-users] Writing PETSc matrices In-Reply-To: References: Message-ID: <22CBB298-B12C-4583-A516-5DCB74576EDF@mcs.anl.gov> Use PetscBinaryWrite('filename',sparsematlabmatrix) I do not know why your second argument has quotes around it. Barry On Oct 26, 2010, at 3:33 PM, Nunion wrote: > Hello, > > I am new to PETSc and programming. I have a question concerning writing PETSc matrices in binary from binary matrices [compressessed/uncompressed] generated in Matlab. I am attempting to use the files in the /bin/matlab directory, in particular the PetscBinaryWrite.m file. However, the usage; > > PetscBinaryWrite('matrix.mat','output.ex') does not seem to work. I also tried using the examples in the /mat directory however, matlab does not support the writing of complex matrices in ASCII. > > Thanks in advance, > > Tom From B.Sanderse at cwi.nl Wed Oct 27 03:26:56 2010 From: B.Sanderse at cwi.nl (Benjamin Sanderse) Date: Wed, 27 Oct 2010 10:26:56 +0200 (CEST) Subject: [petsc-users] Writing PETSc matrices In-Reply-To: <22CBB298-B12C-4583-A516-5DCB74576EDF@mcs.anl.gov> Message-ID: <1117677303.44923.1288168016192.JavaMail.root@zembox02.zaas.igi.nl> I have a somewhat related question regarding sending data to Matlab. For a while I have been sending vectors back and forth between Matlab and Petsc and that works perfect. In addition I also want to send some information like number of iterations, norm of residual and solution time to Matlab. This gives me some headaches when I run the code in parallel: - Can I simply send a PetscInt like number of iterations to Matlab? I tried PetscIntView, but this does not work: PetscInt iterations; ierr = KSPGetIterationNumber(ksp,&iterations);CHKERRQ(ierr); fd = PETSC_VIEWER_SOCKET_WORLD; ierr = PetscIntView(1,iterations,fd);CHKERRQ(ierr); Petsc-Matlab communication hangs without an error on the Petsc side. - As alternative I tried to set a vector and set this to Matlab with VecView. 
- As alternative I tried to set a vector and set this to Matlab with VecView.
This works, although it results (for this example) in a nx1 vector (n=no. of processors) that is received by Matlab, while I actually just want a 1x1 vector: ierr = VecCreateMPI(PETSC_COMM_WORLD,1,PETSC_DECIDE,&to_matlab);CHKERRQ(ierr); ierr = KSPGetIterationNumber(ksp,&iterations);CHKERRQ(ierr); fd = PETSC_VIEWER_SOCKET_WORLD; ierr = VecSet(to_matlab,iterations);CHKERRQ(ierr); ierr = VecView(to_matlab,fd);CHKERRQ(ierr); - So although I am not too happy with a nx1 vector instead of a 1x1 vector I could live with that. A bigger problem is that if instead of the number of iterations I want to pass the solution time to a vector, I get an error: PetscReal time1,time2,t_solve; ierr = VecCreateMPI(PETSC_COMM_WORLD,1,PETSC_DECIDE,&to_matlab);CHKERRQ(ierr); fd = PETSC_VIEWER_SOCKET_WORLD; ierr = PetscGetTime(&time1);CHKERRQ(ierr); // some matrix solve ierr = PetscGetTime(&time2);CHKERRQ(ierr); t_solve = time2-time1; ierr = VecSet(to_matlab,t_solve);CHKERRQ(ierr); ierr = VecView(to_matlab,fd);CHKERRQ(ierr); this produces the following error: [1]PETSC ERROR: --------------------- Error Message ------------------------------------ [1]PETSC ERROR: Invalid argument! [1]PETSC ERROR: Same value should be used across all processors! [1]PETSC ERROR: ------------------------------------------------------------------------ When I run Petsc with 1 processor there is no error. Any ideas? Ben ----- Original Message ----- From: "Barry Smith" To: "PETSc users list" Sent: Tuesday, October 26, 2010 10:57:04 PM Subject: Re: [petsc-users] Writing PETSc matrices Use PetscBinaryWrite('filename',sparsematlabmatrix) I do not know why your second argument has quotes around it. Barry On Oct 26, 2010, at 3:33 PM, Nunion wrote: > Hello, > > I am new to PETSc and programming. I have a question concerning writing PETSc matrices in binary from binary matrices [compressessed/uncompressed] generated in Matlab. I am attempting to use the files in the /bin/matlab directory, in particular the PetscBinaryWrite.m file. However, the usage; > > PetscBinaryWrite('matrix.mat','output.ex') does not seem to work. I also tried using the examples in the /mat directory however, matlab does not support the writing of complex matrices in ASCII. > > Thanks in advance, > > Tom From bsmith at mcs.anl.gov Wed Oct 27 08:10:13 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 27 Oct 2010 08:10:13 -0500 Subject: [petsc-users] Writing PETSc matrices In-Reply-To: <1117677303.44923.1288168016192.JavaMail.root@zembox02.zaas.igi.nl> References: <1117677303.44923.1288168016192.JavaMail.root@zembox02.zaas.igi.nl> Message-ID: <44C88D74-6FDD-40CB-95D8-975CC626B845@mcs.anl.gov> On Oct 27, 2010, at 3:26 AM, Benjamin Sanderse wrote: > I have a somewhat related question regarding sending data to Matlab. > For a while I have been sending vectors back and forth between Matlab and Petsc and that works perfect. > > In addition I also want to send some information like number of iterations, norm of residual and solution time to Matlab. This gives me some headaches when I run the code in parallel: > > - Can I simply send a PetscInt like number of iterations to Matlab? I tried PetscIntView, but this does not work: > PetscInt iterations; > ierr = KSPGetIterationNumber(ksp,&iterations);CHKERRQ(ierr); > fd = PETSC_VIEWER_SOCKET_WORLD; > ierr = PetscIntView(1,iterations,fd);CHKERRQ(ierr); > > Petsc-Matlab communication hangs without an error on the Petsc side. 
You can do a PetscIntView() BUT since each process is sending a single integer (since you are passing a 1 as the first argument on all processes) you need to read all of those integers on the Matlab side with read(fd,size,'int32') > > - As alternative I tried to set a vector and set this to Matlab with VecView. This works, although it results (for this example) in a nx1 vector (n=no. of processors) that is received by Matlab, while I actually just want a 1x1 vector: > ierr = VecCreateMPI(PETSC_COMM_WORLD,1,PETSC_DECIDE,&to_matlab);CHKERRQ(ierr); > ierr = KSPGetIterationNumber(ksp,&iterations);CHKERRQ(ierr); > fd = PETSC_VIEWER_SOCKET_WORLD; > ierr = VecSet(to_matlab,iterations);CHKERRQ(ierr); > ierr = VecView(to_matlab,fd);CHKERRQ(ierr); You are creating a Vec with local size 1 so its total size is the number of processes. If you want a Vec of total size one then use > err = VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,1,&to_matlab);CHKERRQ(ierr); > > - So although I am not too happy with a nx1 vector instead of a 1x1 vector I could live with that. A bigger problem is that if instead of the number of iterations I want to pass the solution time to a vector, I get an error: > > PetscReal time1,time2,t_solve; > > ierr = VecCreateMPI(PETSC_COMM_WORLD,1,PETSC_DECIDE,&to_matlab);CHKERRQ(ierr); > fd = PETSC_VIEWER_SOCKET_WORLD; > ierr = PetscGetTime(&time1);CHKERRQ(ierr); > // some matrix solve > ierr = PetscGetTime(&time2);CHKERRQ(ierr); > t_solve = time2-time1; > ierr = VecSet(to_matlab,t_solve);CHKERRQ(ierr); What do you want to send to matlab? The sum of the times from all processes? ALL of the times? The maximum time? If you want the sum or max then use MPI_Allreduce() first and pass the result to the Vec. If you want all of the times then you do not want VecSet(). You want VecSetValues() and have each process set its own time into its position in the vector Barry > ierr = VecView(to_matlab,fd);CHKERRQ(ierr); > > this produces the following error: > [1]PETSC ERROR: --------------------- Error Message ------------------------------------ > [1]PETSC ERROR: Invalid argument! > [1]PETSC ERROR: Same value should be used across all processors! > [1]PETSC ERROR: ------------------------------------------------------------------------ > > When I run Petsc with 1 processor there is no error. Any ideas? > > > Ben > > > ----- Original Message ----- > From: "Barry Smith" > To: "PETSc users list" > Sent: Tuesday, October 26, 2010 10:57:04 PM > Subject: Re: [petsc-users] Writing PETSc matrices > > > Use PetscBinaryWrite('filename',sparsematlabmatrix) I do not know why your second argument has quotes around it. > > Barry > > > On Oct 26, 2010, at 3:33 PM, Nunion wrote: > >> Hello, >> >> I am new to PETSc and programming. I have a question concerning writing PETSc matrices in binary from binary matrices [compressessed/uncompressed] generated in Matlab. I am attempting to use the files in the /bin/matlab directory, in particular the PetscBinaryWrite.m file. However, the usage; >> >> PetscBinaryWrite('matrix.mat','output.ex') does not seem to work. I also tried using the examples in the /mat directory however, matlab does not support the writing of complex matrices in ASCII. 
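For the timing case, a minimal sketch in C of the approach described above (a fragment only; the variable names are illustrative and the calls assume petsc-3.1): reduce the per-process times to a single value with MPI_Allreduce() so that VecSet() sees the same value on every process, and view a Vec of global length one:

  PetscLogDouble t1,t2;
  double         tlocal,tmax;
  Vec            to_matlab;
  PetscViewer    fd;
  PetscErrorCode ierr;

  fd = PETSC_VIEWER_SOCKET_WORLD;
  ierr = PetscGetTime(&t1);CHKERRQ(ierr);
  /* ... the matrix solve ... */
  ierr = PetscGetTime(&t2);CHKERRQ(ierr);
  tlocal = (double)(t2 - t1);

  /* same value on every process, so VecSet() is legal */
  ierr = MPI_Allreduce(&tlocal,&tmax,1,MPI_DOUBLE,MPI_MAX,PETSC_COMM_WORLD);CHKERRQ(ierr);

  /* global length 1; PETSc decides which process owns the entry */
  ierr = VecCreateMPI(PETSC_COMM_WORLD,PETSC_DECIDE,1,&to_matlab);CHKERRQ(ierr);
  ierr = VecSet(to_matlab,tmax);CHKERRQ(ierr);
  ierr = VecView(to_matlab,fd);CHKERRQ(ierr);
  ierr = VecDestroy(to_matlab);CHKERRQ(ierr);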
>> >> Thanks in advance, >> >> Tom > From jed at 59A2.org Wed Oct 27 08:16:58 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 27 Oct 2010 15:16:58 +0200 Subject: [petsc-users] Writing PETSc matrices In-Reply-To: <1117677303.44923.1288168016192.JavaMail.root@zembox02.zaas.igi.nl> References: <22CBB298-B12C-4583-A516-5DCB74576EDF@mcs.anl.gov> <1117677303.44923.1288168016192.JavaMail.root@zembox02.zaas.igi.nl> Message-ID: On Wed, Oct 27, 2010 at 10:26, Benjamin Sanderse wrote: > PetscInt iterations; > ierr = KSPGetIterationNumber(ksp,&iterations);CHKERRQ(ierr); > fd = PETSC_VIEWER_SOCKET_WORLD; > ierr = PetscIntView(1,iterations,fd);CHKERRQ(ierr); > This is a type error (your compiler should have issued a warning), you want PetscIntView(1,&iterations,fd); because the prototype is PetscIntView(PetscInt,const PetscInt[],PetscViewer); Also see Barry's comment about reading it correctly. Jed -------------- next part -------------- An HTML attachment was scrubbed... URL: From huangsc at gmail.com Wed Oct 27 18:09:25 2010 From: huangsc at gmail.com (Shao-Ching Huang) Date: Wed, 27 Oct 2010 16:09:25 -0700 Subject: [petsc-users] assemble block matrices Message-ID: Hi, I need to solve M*x=f, where matrix M consists of 4 blocks: [ A B; C D ] I already have old code to build petsc sparse matrices A, B, C and D and the RHS vectors f1 and f2 (such that f=[f1; f2]), each having global indices starting from 0. My question is: is there already API(s) that I can easily assemble matrices A,B,C and D into M? Or do I have to build M from scratch? Thanks, Shao-Ching From knepley at gmail.com Wed Oct 27 18:12:14 2010 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 27 Oct 2010 18:12:14 -0500 Subject: [petsc-users] assemble block matrices In-Reply-To: References: Message-ID: On Wed, Oct 27, 2010 at 6:09 PM, Shao-Ching Huang wrote: > Hi, > > I need to solve M*x=f, where matrix M consists of 4 blocks: > [ A B; > C D ] > > I already have old code to build petsc sparse matrices A, B, C and D > and the RHS vectors f1 and f2 > (such that f=[f1; f2]), each having global indices starting from 0. > > My question is: is there already API(s) that I can easily assemble > matrices A,B,C and D into M? Or do I have to build M from scratch? > I think the right thing to do is build M from scratch. Its unlikely that you actually want the parallel matrix to be partitioned this way. Matt > Thanks, > > Shao-Ching > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From u.tabak at tudelft.nl Wed Oct 27 18:29:59 2010 From: u.tabak at tudelft.nl (Umut Tabak) Date: Thu, 28 Oct 2010 01:29:59 +0200 Subject: [petsc-users] assemble block matrices In-Reply-To: References: Message-ID: <4CC8B5F7.4060506@tudelft.nl> Matthew Knepley wrote: > > > I think the right thing to do is build M from scratch. Its unlikely > that you actually > want the parallel matrix to be partitioned this way. > > Matt > Or you can take a look at PetscExt, thanks to Dave May, which provides functions to build block matrices. I also learned about that from here. 
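If PetscExt does not fit your needs, here is a rough sketch in C of the "build M from scratch" route: copy each existing block into M at a row/column offset. The helper name AddBlock is hypothetical, the blocks are assumed to be assembled AIJ matrices, and the one-entry-at-a-time insertion is only for clarity (in practice you would preallocate M and insert whole rows at once):

#include <petscmat.h>

/* Copy the locally owned entries of "block" into M, shifted by (roff,coff). */
PetscErrorCode AddBlock(Mat M,Mat block,PetscInt roff,PetscInt coff)
{
  PetscInt          i,j,ncols,rstart,rend;
  const PetscInt    *cols;
  const PetscScalar *vals;
  PetscErrorCode    ierr;

  ierr = MatGetOwnershipRange(block,&rstart,&rend);CHKERRQ(ierr);
  for (i=rstart; i<rend; i++) {
    PetscInt row = i + roff;
    ierr = MatGetRow(block,i,&ncols,&cols,&vals);CHKERRQ(ierr);
    for (j=0; j<ncols; j++) {
      PetscInt col = cols[j] + coff;
      ierr = MatSetValues(M,1,&row,1,&col,&vals[j],INSERT_VALUES);CHKERRQ(ierr);
    }
    ierr = MatRestoreRow(block,i,&ncols,&cols,&vals);CHKERRQ(ierr);
  }
  return 0;
}

/* usage, with nA = global number of rows (= columns) of the square block A:
     AddBlock(M,A,0 ,0 );  AddBlock(M,B,0 ,nA);
     AddBlock(M,C,nA,0 );  AddBlock(M,D,nA,nA);
     MatAssemblyBegin(M,MAT_FINAL_ASSEMBLY);  MatAssemblyEnd(M,MAT_FINAL_ASSEMBLY);
   M must be created with the matching global sizes and, ideally, preallocated first. */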
Best, Umut From huangsc at gmail.com Wed Oct 27 18:55:52 2010 From: huangsc at gmail.com (Shao-Ching Huang) Date: Wed, 27 Oct 2010 16:55:52 -0700 Subject: [petsc-users] assemble block matrices In-Reply-To: <4CC8B5F7.4060506@tudelft.nl> References: <4CC8B5F7.4060506@tudelft.nl> Message-ID: Thanks, Matt and Umut. I will look into these possibilities. Shao-Ching On Wed, Oct 27, 2010 at 4:29 PM, Umut Tabak wrote: > Matthew Knepley wrote: >> >> >> I think the right thing to do is build M from scratch. Its unlikely that >> you actually >> want the parallel matrix to be partitioned this way. >> >> ? Matt >> > > Or you can take a look at PetscExt, thanks to Dave May, which provides > functions to build block matrices. I also learned about that from here. > Best, > Umut > From rongliang.chan at gmail.com Thu Oct 28 18:23:27 2010 From: rongliang.chan at gmail.com (Rongliang Chen) Date: Thu, 28 Oct 2010 17:23:27 -0600 Subject: [petsc-users] Problem for PETSc 3.1 Message-ID: Hi all, I installed the Petsc 3.1 recently and when I tried to make my program using Petsc 3.1 it said: In file included from /home/rlchen/soft/petsc-3.1-p4/include/petscsys.h:1467, from /home/rlchen/soft/petsc-3.1-p4/include/petscis.h:7, from /home/rlchen/soft/petsc-3.1-p4/include/petscao.h:8, from joab.h:87, from joab.c:54: /home/rlchen/soft/petsc-3.1-p4/include/petscerror.h:81:1: warning: "__SDIR__" redefined :1:1: warning: this is the location of the previous definition . . . . I never defined the "__SDIR__" and I do not know why it said I redefined "__SDIR__". But I can run my program on Petsc 3.0. Can anyone tell me how to fix this problem? Thank you! Best, Rongliang -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Thu Oct 28 19:46:47 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Thu, 28 Oct 2010 19:46:47 -0500 (CDT) Subject: [petsc-users] Problem for PETSc 3.1 In-Reply-To: References: Message-ID: perhaps its defined in your makefile? Satish On Thu, 28 Oct 2010, Rongliang Chen wrote: > Hi all, > > I installed the Petsc 3.1 recently and when I tried to make my program using > Petsc 3.1 it said: > > In file included from > /home/rlchen/soft/petsc-3.1-p4/include/petscsys.h:1467, > from /home/rlchen/soft/petsc-3.1-p4/include/petscis.h:7, > from /home/rlchen/soft/petsc-3.1-p4/include/petscao.h:8, > from joab.h:87, > from joab.c:54: > /home/rlchen/soft/petsc-3.1-p4/include/petscerror.h:81:1: warning: > "__SDIR__" redefined > :1:1: warning: this is the location of the previous definition > . > . > . > . > I never defined the "__SDIR__" and I do not know why it said I redefined > "__SDIR__". > But I can run my program on Petsc 3.0. > > Can anyone tell me how to fix this problem? Thank you! > > Best, > > Rongliang > From zonexo at gmail.com Fri Oct 29 10:16:25 2010 From: zonexo at gmail.com (TAY Wee Beng) Date: Fri, 29 Oct 2010 17:16:25 +0200 Subject: [petsc-users] Problem with mpi Message-ID: <4CCAE549.1000302@gmail.com> Hi, I have a mpi code which works fine in my previous clusters. However, there's mpi problem when I use it in the new clusters, which use openmpi. I wonder if there's a relation. 
The error is: [n12-70:14429] *** An error occurred in MPI_comm_size [n12-70:14429] *** on communicator MPI_COMM_WORLD [n12-70:14429] *** MPI_ERR_COMM: invalid communicator [n12-70:14429] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort) 2.48user 0.45system 0:03.99elapsed 73%CPU (0avgtext+0avgdata 2871936maxresident)k 0inputs+0outputs (0major+185587minor)pagefaults 0swaps -------------------------------------------------------------------------- mpiexec has exited due to process rank 3 with PID 14425 on node n12-70 exiting without calling "finalize". This may have caused other processes in the application to be terminated by signals sent by mpiexec (as reported here). -------------------------------------------------------------------------- [n12-70:14421] 3 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal [n12-70:14421] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages I am not sure what other information you will need. I will provide more information if required. I tried running on 1, 4 and 8 processors but all can't work. -- Yours sincerely, TAY Wee Beng From knepley at gmail.com Fri Oct 29 10:39:40 2010 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 29 Oct 2010 10:39:40 -0500 Subject: [petsc-users] Problem with mpi In-Reply-To: <4CCAE549.1000302@gmail.com> References: <4CCAE549.1000302@gmail.com> Message-ID: You have an invalid communicator. I suspect you are overwriting memory somewhere, since it works in one place but has corruption in another. For this kind of error, the best thing to do is use valgrind to find it. Matt On Fri, Oct 29, 2010 at 10:16 AM, TAY Wee Beng wrote: > Hi, > > I have a mpi code which works fine in my previous clusters. > > However, there's mpi problem when I use it in the new clusters, which use > openmpi. I wonder if there's a relation. > > The error is: > > [n12-70:14429] *** An error occurred in MPI_comm_size > [n12-70:14429] *** on communicator MPI_COMM_WORLD > [n12-70:14429] *** MPI_ERR_COMM: invalid communicator > [n12-70:14429] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort) > 2.48user 0.45system 0:03.99elapsed 73%CPU (0avgtext+0avgdata > 2871936maxresident)k > 0inputs+0outputs (0major+185587minor)pagefaults 0swaps > -------------------------------------------------------------------------- > mpiexec has exited due to process rank 3 with PID 14425 on > node n12-70 exiting without calling "finalize". This may > have caused other processes in the application to be > terminated by signals sent by mpiexec (as reported here). > -------------------------------------------------------------------------- > [n12-70:14421] 3 more processes have sent help message help-mpi-errors.txt > / mpi_errors_are_fatal > [n12-70:14421] Set MCA parameter "orte_base_help_aggregate" to 0 to see all > help / error messages > > I am not sure what other information you will need. I will provide more > information if required. I tried running on 1, 4 and 8 processors but all > can't work. > > -- > Yours sincerely, > > TAY Wee Beng > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From m.mirzadeh at engineering.ucsb.edu Fri Oct 29 23:54:21 2010 From: m.mirzadeh at engineering.ucsb.edu (Mohammad Mirzadeh) Date: Fri, 29 Oct 2010 21:54:21 -0700 Subject: [petsc-users] Packages Similar to PETSc in C++? Message-ID: Dear all, I was wondering if anyone can refer me to a package similar to PETSc but that is written in c++? Right now I have a large code written in c++ for doing CFD simulations that I need to transform from serial to parallel. Initially I was thinking of PETSc and tried using it but found that PETSc is written in C and thus does not allow to have arrays (in parallel) of arbitrary type. I have a big data structure and it is much easier for me to retain the current structure and form of the code. As a result I was wondering if you guys know of any similar package in C++ ?(in the sense that it can provide with efficient linear solvers in parallel while hiding most of MPI from the user) I could think of HYPRE but then again I am not sure it is written in C++. Thanks, Mohammad -------------- next part -------------- An HTML attachment was scrubbed... URL: From aron.ahmadia at kaust.edu.sa Sat Oct 30 00:44:26 2010 From: aron.ahmadia at kaust.edu.sa (Aron Ahmadia) Date: Sat, 30 Oct 2010 08:44:26 +0300 Subject: [petsc-users] Packages Similar to PETSc in C++? In-Reply-To: References: Message-ID: Dear Mohammad, As a user of PETSc for the last 8 years, since my days as an undergraduate, and now as a professional staff scientist at a supercomputing center, I can say with some confidence that there are no codes like PETSc in C++ or any other language in terms of quality of implementation, documentation, and support. Can you tell us a little more about your current implementation? It is true that PETSc does not support multiple types in the same build, but you do get your choice of floating-point values and real or complex types. Also, PETSc has several C++ components within it, and one of the supported ways of building it is in 'C++' mode, see -c-language in the configure options. If you insist on departing us (we'll miss you), I suggest you look at Sandia's Trilinos package: http://trilinos.sandia.gov/ Good Luck, Aron On Sat, Oct 30, 2010 at 7:54 AM, Mohammad Mirzadeh wrote: > Dear all, > I was wondering if anyone can refer me to a package similar to PETSc but > that is written in c++? Right now I have a large code written in c++ for > doing CFD simulations that I need to transform from serial to parallel. > Initially I was thinking of PETSc and tried using it but found that PETSc is > written in C and thus does not allow to have arrays (in parallel) of > arbitrary type. I have a big data structure and it is much easier for me to > retain the current structure and form of the code. As a result I was > wondering if you guys know of any similar package in C++ ?(in the sense that > it can provide with efficient linear solvers in parallel while hiding most > of MPI from the user) > I could think of HYPRE but then again I am not sure it is written in C++. > Thanks, > Mohammad From mirzadeh at gmail.com Sat Oct 30 01:07:31 2010 From: mirzadeh at gmail.com (Mohammad Mirzadeh) Date: Fri, 29 Oct 2010 23:07:31 -0700 Subject: [petsc-users] Packages Similar to PETSc in C++? In-Reply-To: References: Message-ID: Aron, Thanks for the quick reply. It's really great that PETSc has such an awesome community. Anyway, I am working on adaptive Cartesian grids for which I use Octree/Quadtree data structures. 
Naturally, then, I have components like cells, nodes, neighbors, child/parent, etc and my whole domain is consisted of arrays of these types. That is, if I happen to have 100 cells and 200 nodes, for example, I create an array for the whole domain by calling, Array *CellArray = new Array [100]; Array *NodeArray = new Array [200]; Now the problem is I want to be able to distribute this in parallel and have an array of cells or nodes. I understand that one of doing this is to change my data structure such that is consistent with PETSc only accepting double. I was hoping I could prevent that by using a package that allow for templates. That being said, I am not an expert on PETSc by any measure! As a result I highly appreciate any ideas and comments if you think this is possible to do with PETSc. All the best, Mohammad On Fri, Oct 29, 2010 at 10:44 PM, Aron Ahmadia wrote: > Dear Mohammad, > > As a user of PETSc for the last 8 years, since my days as an > undergraduate, and now as a professional staff scientist at a > supercomputing center, I can say with some confidence that there are > no codes like PETSc in C++ or any other language in terms of quality > of implementation, documentation, and support. Can you tell us a > little more about your current implementation? It is true that PETSc > does not support multiple types in the same build, but you do get your > choice of floating-point values and real or complex types. Also, > PETSc has several C++ components within it, and one of the supported > ways of building it is in 'C++' mode, see -c-language in the configure > options. > > If you insist on departing us (we'll miss you), I suggest you look at > Sandia's Trilinos package: http://trilinos.sandia.gov/ > > Good Luck, > Aron > > On Sat, Oct 30, 2010 at 7:54 AM, Mohammad Mirzadeh > wrote: > > Dear all, > > I was wondering if anyone can refer me to a package similar to PETSc but > > that is written in c++? Right now I have a large code written in c++ for > > doing CFD simulations that I need to transform from serial to parallel. > > Initially I was thinking of PETSc and tried using it but found that PETSc > is > > written in C and thus does not allow to have arrays (in parallel) of > > arbitrary type. I have a big data structure and it is much easier for me > to > > retain the current structure and form of the code. As a result I was > > wondering if you guys know of any similar package in C++ ?(in the sense > that > > it can provide with efficient linear solvers in parallel while hiding > most > > of MPI from the user) > > I could think of HYPRE but then again I am not sure it is written in C++. > > Thanks, > > Mohammad > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aron.ahmadia at kaust.edu.sa Sat Oct 30 10:32:54 2010 From: aron.ahmadia at kaust.edu.sa (Aron Ahmadia) Date: Sat, 30 Oct 2010 18:32:54 +0300 Subject: [petsc-users] Packages Similar to PETSc in C++? In-Reply-To: References: Message-ID: Mohammad, I am sure some of the other users and developers here will have different opinions on the correct way to approach this. It sounds like you may benefit even more from investigating several of the packages that manage meshes and grids on parallel architectures. There are some very general toolkits for managing adaptive grids and meshes out there, one could start with Sieve or deal.ii. 
If one of these packages is suitable for you, I strongly suggest you consider reusing as much of their frameworks as possible to avoid "rewriting the wheel" so to speak. Cheers, Aron On Sat, Oct 30, 2010 at 9:07 AM, Mohammad Mirzadeh wrote: > Aron, > Thanks for the quick reply. It's really great that PETSc has such an awesome > community. > Anyway, I am working on adaptive Cartesian grids for which I use > Octree/Quadtree data structures. Naturally, then, I have components like > cells, nodes,?neighbors, child/parent, etc and my whole domain is consisted > of arrays of these types. That is, if I happen to have 100 cells and 200 > nodes, for example, I create an array for the whole domain by calling, > Array *CellArray = new Array [100]; > Array *NodeArray =?new Array [200]; > Now the problem is I want to be able to distribute this in parallel and have > an array of cells or nodes. I understand that one of doing this is to change > my data structure such that is consistent with PETSc only accepting double. > I was hoping I could prevent that by using a package that allow for > templates. That being said, I am not an expert on PETSc by any measure! As a > result I highly appreciate any ideas and comments if you think this is > possible to do with PETSc. > All the best, > Mohammad > > On Fri, Oct 29, 2010 at 10:44 PM, Aron Ahmadia > wrote: >> >> Dear Mohammad, >> >> As a user of PETSc for the last 8 years, since my days as an >> undergraduate, and now as a professional staff scientist at a >> supercomputing center, I can say with some confidence that there are >> no codes like PETSc in C++ or any other language in terms of quality >> of implementation, documentation, and support. ?Can you tell us a >> little more about your current implementation? ?It is true that PETSc >> does not support multiple types in the same build, but you do get your >> choice of floating-point values and real or complex types. ?Also, >> PETSc has several C++ components within it, and one of the supported >> ways of building it is in 'C++' mode, see -c-language in the configure >> options. >> >> If you insist on departing us (we'll miss you), I suggest you look at >> Sandia's Trilinos package: http://trilinos.sandia.gov/ >> >> Good Luck, >> Aron >> >> On Sat, Oct 30, 2010 at 7:54 AM, Mohammad Mirzadeh >> wrote: >> > Dear all, >> > I was wondering if anyone can refer me to a package similar to PETSc but >> > that is written in c++? Right now I have a large code written in c++ for >> > doing CFD simulations that I need to transform from serial to parallel. >> > Initially I was thinking of PETSc and tried using it but found that >> > PETSc is >> > written in C and thus does not allow to have arrays (in parallel) of >> > arbitrary type. I have a big data structure and it is much easier for me >> > to >> > retain the current structure and form of the code. As a result I was >> > wondering if you guys know of any similar package in C++ ?(in the sense >> > that >> > it can provide with efficient linear solvers in parallel while hiding >> > most >> > of MPI from the user) >> > I could think of HYPRE but then again I am not sure it is written in >> > C++. >> > Thanks, >> > Mohammad > > From pflath at ices.utexas.edu Sat Oct 30 12:06:06 2010 From: pflath at ices.utexas.edu (Pearl Flath) Date: Sat, 30 Oct 2010 12:06:06 -0500 Subject: [petsc-users] Packages Similar to PETSc in C++? 
In-Reply-To: References: Message-ID: Mohammad, You could also look into p4est, which also interfaces with the deal II library (and deal II has wrappers for calling either PETSc or Trilinos). "The p4est software library enables the dynamic management of a collection of adaptive octrees, conveniently called a forest of octrees. p4est is designed to work in parallel and scale to hundreds of thousands of processor cores." http://p4est.org/ Best, Pearl Flath On Sat, Oct 30, 2010 at 10:32 AM, Aron Ahmadia wrote: > Mohammad, > > I am sure some of the other users and developers here will have > different opinions on the correct way to approach this. > > It sounds like you may benefit even more from investigating several of > the packages that manage meshes and grids on parallel architectures. > There are some very general toolkits for managing adaptive grids and > meshes out there, one could start with Sieve or deal.ii. If one of > these packages is suitable for you, I strongly suggest you consider > reusing as much of their frameworks as possible to avoid "rewriting > the wheel" so to speak. > > Cheers, > Aron > > On Sat, Oct 30, 2010 at 9:07 AM, Mohammad Mirzadeh > wrote: > > Aron, > > Thanks for the quick reply. It's really great that PETSc has such an > awesome > > community. > > Anyway, I am working on adaptive Cartesian grids for which I use > > Octree/Quadtree data structures. Naturally, then, I have components like > > cells, nodes, neighbors, child/parent, etc and my whole domain is > consisted > > of arrays of these types. That is, if I happen to have 100 cells and 200 > > nodes, for example, I create an array for the whole domain by calling, > > Array *CellArray = new Array [100]; > > Array *NodeArray = new Array [200]; > > Now the problem is I want to be able to distribute this in parallel and > have > > an array of cells or nodes. I understand that one of doing this is to > change > > my data structure such that is consistent with PETSc only accepting > double. > > I was hoping I could prevent that by using a package that allow for > > templates. That being said, I am not an expert on PETSc by any measure! > As a > > result I highly appreciate any ideas and comments if you think this is > > possible to do with PETSc. > > All the best, > > Mohammad > > > > On Fri, Oct 29, 2010 at 10:44 PM, Aron Ahmadia < > aron.ahmadia at kaust.edu.sa> > > wrote: > >> > >> Dear Mohammad, > >> > >> As a user of PETSc for the last 8 years, since my days as an > >> undergraduate, and now as a professional staff scientist at a > >> supercomputing center, I can say with some confidence that there are > >> no codes like PETSc in C++ or any other language in terms of quality > >> of implementation, documentation, and support. Can you tell us a > >> little more about your current implementation? It is true that PETSc > >> does not support multiple types in the same build, but you do get your > >> choice of floating-point values and real or complex types. Also, > >> PETSc has several C++ components within it, and one of the supported > >> ways of building it is in 'C++' mode, see -c-language in the configure > >> options. > >> > >> If you insist on departing us (we'll miss you), I suggest you look at > >> Sandia's Trilinos package: http://trilinos.sandia.gov/ > >> > >> Good Luck, > >> Aron > >> > >> On Sat, Oct 30, 2010 at 7:54 AM, Mohammad Mirzadeh > >> wrote: > >> > Dear all, > >> > I was wondering if anyone can refer me to a package similar to PETSc > but > >> > that is written in c++? 
Right now I have a large code written in c++ > for > >> > doing CFD simulations that I need to transform from serial to > parallel. > >> > Initially I was thinking of PETSc and tried using it but found that > >> > PETSc is > >> > written in C and thus does not allow to have arrays (in parallel) of > >> > arbitrary type. I have a big data structure and it is much easier for > me > >> > to > >> > retain the current structure and form of the code. As a result I was > >> > wondering if you guys know of any similar package in C++ ?(in the > sense > >> > that > >> > it can provide with efficient linear solvers in parallel while hiding > >> > most > >> > of MPI from the user) > >> > I could think of HYPRE but then again I am not sure it is written in > >> > C++. > >> > Thanks, > >> > Mohammad > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mirzadeh at gmail.com Sat Oct 30 15:46:00 2010 From: mirzadeh at gmail.com (Mohammad Mirzadeh) Date: Sat, 30 Oct 2010 13:46:00 -0700 Subject: [petsc-users] Packages Similar to PETSc in C++? In-Reply-To: References: Message-ID: Thanks guys. I will definitely spend sometime and consider these packages. The truth is I really like the structure of PETSc and I want to stick with it as much as possible. If I can use any of these packages to replace for my data structure in the grid generation phase, I'll probably be able to port most of my code from serial into parallel. Hopefully it will be good enough (in the long term) that I can eventually publish it and be of help to the community. Thanks, Mohammad On Sat, Oct 30, 2010 at 10:06 AM, Pearl Flath wrote: > Mohammad, > You could also look into p4est, which also interfaces with the deal II > library (and deal II has wrappers for calling either PETSc or Trilinos). > > "The p4est software library enables the dynamic management of a collection > of adaptive octrees, conveniently called a forest of octrees. p4est is > designed to work in parallel and scale to hundreds of thousands of processor > cores." http://p4est.org/ > > Best, > Pearl Flath > > > On Sat, Oct 30, 2010 at 10:32 AM, Aron Ahmadia wrote: > >> Mohammad, >> >> I am sure some of the other users and developers here will have >> different opinions on the correct way to approach this. >> >> It sounds like you may benefit even more from investigating several of >> the packages that manage meshes and grids on parallel architectures. >> There are some very general toolkits for managing adaptive grids and >> meshes out there, one could start with Sieve or deal.ii. If one of >> these packages is suitable for you, I strongly suggest you consider >> reusing as much of their frameworks as possible to avoid "rewriting >> the wheel" so to speak. >> >> Cheers, >> Aron >> >> On Sat, Oct 30, 2010 at 9:07 AM, Mohammad Mirzadeh >> wrote: >> > Aron, >> > Thanks for the quick reply. It's really great that PETSc has such an >> awesome >> > community. >> > Anyway, I am working on adaptive Cartesian grids for which I use >> > Octree/Quadtree data structures. Naturally, then, I have components like >> > cells, nodes, neighbors, child/parent, etc and my whole domain is >> consisted >> > of arrays of these types. That is, if I happen to have 100 cells and 200 >> > nodes, for example, I create an array for the whole domain by calling, >> > Array *CellArray = new Array [100]; >> > Array *NodeArray = new Array [200]; >> > Now the problem is I want to be able to distribute this in parallel and >> have >> > an array of cells or nodes. 
I understand that one of doing this is to >> change >> > my data structure such that is consistent with PETSc only accepting >> double. >> > I was hoping I could prevent that by using a package that allow for >> > templates. That being said, I am not an expert on PETSc by any measure! >> As a >> > result I highly appreciate any ideas and comments if you think this is >> > possible to do with PETSc. >> > All the best, >> > Mohammad >> > >> > On Fri, Oct 29, 2010 at 10:44 PM, Aron Ahmadia < >> aron.ahmadia at kaust.edu.sa> >> > wrote: >> >> >> >> Dear Mohammad, >> >> >> >> As a user of PETSc for the last 8 years, since my days as an >> >> undergraduate, and now as a professional staff scientist at a >> >> supercomputing center, I can say with some confidence that there are >> >> no codes like PETSc in C++ or any other language in terms of quality >> >> of implementation, documentation, and support. Can you tell us a >> >> little more about your current implementation? It is true that PETSc >> >> does not support multiple types in the same build, but you do get your >> >> choice of floating-point values and real or complex types. Also, >> >> PETSc has several C++ components within it, and one of the supported >> >> ways of building it is in 'C++' mode, see -c-language in the configure >> >> options. >> >> >> >> If you insist on departing us (we'll miss you), I suggest you look at >> >> Sandia's Trilinos package: http://trilinos.sandia.gov/ >> >> >> >> Good Luck, >> >> Aron >> >> >> >> On Sat, Oct 30, 2010 at 7:54 AM, Mohammad Mirzadeh >> >> wrote: >> >> > Dear all, >> >> > I was wondering if anyone can refer me to a package similar to PETSc >> but >> >> > that is written in c++? Right now I have a large code written in c++ >> for >> >> > doing CFD simulations that I need to transform from serial to >> parallel. >> >> > Initially I was thinking of PETSc and tried using it but found that >> >> > PETSc is >> >> > written in C and thus does not allow to have arrays (in parallel) of >> >> > arbitrary type. I have a big data structure and it is much easier for >> me >> >> > to >> >> > retain the current structure and form of the code. As a result I was >> >> > wondering if you guys know of any similar package in C++ ?(in the >> sense >> >> > that >> >> > it can provide with efficient linear solvers in parallel while hiding >> >> > most >> >> > of MPI from the user) >> >> > I could think of HYPRE but then again I am not sure it is written in >> >> > C++. >> >> > Thanks, >> >> > Mohammad >> > >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Sat Oct 30 09:23:16 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 30 Oct 2010 09:23:16 -0500 Subject: [petsc-users] Packages Similar to PETSc in C++? In-Reply-To: References: Message-ID: <9EF7F44F-E8B0-4F0A-80B8-8068A4979E2A@mcs.anl.gov> On Oct 30, 2010, at 1:07 AM, Mohammad Mirzadeh wrote: > Aron, > > Thanks for the quick reply. It's really great that PETSc has such an awesome community. > > Anyway, I am working on adaptive Cartesian grids for which I use Octree/Quadtree data structures. Naturally, then, I have components like cells, nodes, neighbors, child/parent, etc and my whole domain is consisted of arrays of these types. 
That is, if I happen to have 100 cells and 200 nodes, for example, I create an array for the whole domain by calling, > > Array *CellArray = new Array [100]; > Array *NodeArray = new Array [200]; > > Now the problem is I want to be able to distribute this in parallel and have an array of cells or nodes. I understand that one of doing this is to change my data structure such that is consistent with PETSc only accepting double. I was hoping I could prevent that by using a package that allow for templates. That being said, I am not an expert on PETSc by any measure! As a result I highly appreciate any ideas and comments if you think this is possible to do with PETSc. This is sort of orthogonal to what PETSc provides. You can use PETSc for all of your "sparse" linear algebra but need to manage your parallel data structures (as you indicate above) yourself. I don't know of any package that helps parallelize things like > rray *CellArray = new Array [100]; > Array *NodeArray = new Array [200]; Trilinos is similar to PETSc in that it does the linear algebra for you but doesn't provide tools useful for parallelizing your data structures. There is a Octree type parallel code http://www.cc.gatech.edu/csela/dendro/html/index.html that uses PETSc and STL for managing the mesh. The version of PETSc it uses must be updated. Barry > > All the best, > Mohammad > > On Fri, Oct 29, 2010 at 10:44 PM, Aron Ahmadia wrote: > Dear Mohammad, > > As a user of PETSc for the last 8 years, since my days as an > undergraduate, and now as a professional staff scientist at a > supercomputing center, I can say with some confidence that there are > no codes like PETSc in C++ or any other language in terms of quality > of implementation, documentation, and support. Can you tell us a > little more about your current implementation? It is true that PETSc > does not support multiple types in the same build, but you do get your > choice of floating-point values and real or complex types. Also, > PETSc has several C++ components within it, and one of the supported > ways of building it is in 'C++' mode, see -c-language in the configure > options. > > If you insist on departing us (we'll miss you), I suggest you look at > Sandia's Trilinos package: http://trilinos.sandia.gov/ > > Good Luck, > Aron > > On Sat, Oct 30, 2010 at 7:54 AM, Mohammad Mirzadeh > wrote: > > Dear all, > > I was wondering if anyone can refer me to a package similar to PETSc but > > that is written in c++? Right now I have a large code written in c++ for > > doing CFD simulations that I need to transform from serial to parallel. > > Initially I was thinking of PETSc and tried using it but found that PETSc is > > written in C and thus does not allow to have arrays (in parallel) of > > arbitrary type. I have a big data structure and it is much easier for me to > > retain the current structure and form of the code. As a result I was > > wondering if you guys know of any similar package in C++ ?(in the sense that > > it can provide with efficient linear solvers in parallel while hiding most > > of MPI from the user) > > I could think of HYPRE but then again I am not sure it is written in C++. > > Thanks, > > Mohammad > From zonexo at gmail.com Sun Oct 31 11:54:31 2010 From: zonexo at gmail.com (Wee-Beng TAY) Date: Sun, 31 Oct 2010 17:54:31 +0100 Subject: [petsc-users] Packages Similar to PETSc in C++? In-Reply-To: References: Message-ID: <4CCD9F47.4000205@gmail.com> Hi, It seems that some of you mention some adaptive mesh packages. 
I know of paramesh, pflotran and libmesh. Are there any other packages? Thank you very much and have a nice day! Yours sincerely, Wee-Beng Tay On 30/10/2010 10:46 PM, Mohammad Mirzadeh wrote: > Thanks guys. I will definitely spend sometime and consider these > packages. The truth is I really like the structure of PETSc and I want > to stick with it as much as possible. > > If I can use any of these packages to replace for my data structure in > the grid generation phase, I'll probably be able to port most of my > code from serial into parallel. Hopefully it will be good enough (in > the long term) that I can eventually publish it and be of help to the > community. > > Thanks, > Mohammad > > On Sat, Oct 30, 2010 at 10:06 AM, Pearl Flath > wrote: > > Mohammad, > You could also look into p4est, which also interfaces with the > deal II library (and deal II has wrappers for calling either PETSc > or Trilinos). > > "The p4est software library enables the dynamic management of a > collection of adaptive octrees, conveniently called a forest of > octrees. p4est is designed to work in parallel and scale to > hundreds of thousands of processor cores." http://p4est.org/ > > Best, > Pearl Flath > > > On Sat, Oct 30, 2010 at 10:32 AM, Aron Ahmadia > > wrote: > > Mohammad, > > I am sure some of the other users and developers here will have > different opinions on the correct way to approach this. > > It sounds like you may benefit even more from investigating > several of > the packages that manage meshes and grids on parallel > architectures. > There are some very general toolkits for managing adaptive > grids and > meshes out there, one could start with Sieve or deal.ii. If > one of > these packages is suitable for you, I strongly suggest you > consider > reusing as much of their frameworks as possible to avoid > "rewriting > the wheel" so to speak. > > Cheers, > Aron > > On Sat, Oct 30, 2010 at 9:07 AM, Mohammad Mirzadeh > > wrote: > > Aron, > > Thanks for the quick reply. It's really great that PETSc has > such an awesome > > community. > > Anyway, I am working on adaptive Cartesian grids for which I use > > Octree/Quadtree data structures. Naturally, then, I have > components like > > cells, nodes, neighbors, child/parent, etc and my whole > domain is consisted > > of arrays of these types. That is, if I happen to have 100 > cells and 200 > > nodes, for example, I create an array for the whole domain > by calling, > > Array *CellArray = new Array [100]; > > Array *NodeArray = new Array [200]; > > Now the problem is I want to be able to distribute this in > parallel and have > > an array of cells or nodes. I understand that one of doing > this is to change > > my data structure such that is consistent with PETSc only > accepting double. > > I was hoping I could prevent that by using a package that > allow for > > templates. That being said, I am not an expert on PETSc by > any measure! As a > > result I highly appreciate any ideas and comments if you > think this is > > possible to do with PETSc. > > All the best, > > Mohammad > > > > On Fri, Oct 29, 2010 at 10:44 PM, Aron Ahmadia > > > > wrote: > >> > >> Dear Mohammad, > >> > >> As a user of PETSc for the last 8 years, since my days as an > >> undergraduate, and now as a professional staff scientist at a > >> supercomputing center, I can say with some confidence that > there are > >> no codes like PETSc in C++ or any other language in terms > of quality > >> of implementation, documentation, and support. 
Can you > tell us a > >> little more about your current implementation? It is true > that PETSc > >> does not support multiple types in the same build, but you > do get your > >> choice of floating-point values and real or complex types. > Also, > >> PETSc has several C++ components within it, and one of the > supported > >> ways of building it is in 'C++' mode, see -c-language in > the configure > >> options. > >> > >> If you insist on departing us (we'll miss you), I suggest > you look at > >> Sandia's Trilinos package: http://trilinos.sandia.gov/ > >> > >> Good Luck, > >> Aron > >> > >> On Sat, Oct 30, 2010 at 7:54 AM, Mohammad Mirzadeh > >> > wrote: > >> > Dear all, > >> > I was wondering if anyone can refer me to a package > similar to PETSc but > >> > that is written in c++? Right now I have a large code > written in c++ for > >> > doing CFD simulations that I need to transform from > serial to parallel. > >> > Initially I was thinking of PETSc and tried using it but > found that > >> > PETSc is > >> > written in C and thus does not allow to have arrays (in > parallel) of > >> > arbitrary type. I have a big data structure and it is > much easier for me > >> > to > >> > retain the current structure and form of the code. As a > result I was > >> > wondering if you guys know of any similar package in C++ > ?(in the sense > >> > that > >> > it can provide with efficient linear solvers in parallel > while hiding > >> > most > >> > of MPI from the user) > >> > I could think of HYPRE but then again I am not sure it is > written in > >> > C++. > >> > Thanks, > >> > Mohammad > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From baagaard at usgs.gov Sun Oct 31 11:57:34 2010 From: baagaard at usgs.gov (Brad Aagaard) Date: Sun, 31 Oct 2010 09:57:34 -0700 Subject: [petsc-users] Packages Similar to PETSc in C++? In-Reply-To: <4CCD9F47.4000205@gmail.com> References: <4CCD9F47.4000205@gmail.com> Message-ID: <4CCD9FFE.1010407@usgs.gov> On 10/31/10 9:54 AM, Wee-Beng TAY wrote: > Hi, > > It seems that some of you mention some adaptive mesh packages. I know of > paramesh, pflotran and libmesh. Are there any other packages? deal.ii (structured adaptive refinement using quadtrees and octrees w/hanging nodes) As mentioned previously in this thread, it is written in C++, integrated with PETSc and p4est. Brad
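
To make the advice from Barry and Aron above concrete: PETSc does not need to hold the Cell/Node objects at all. It only needs Vec and Mat objects containing the PetscScalar unknowns, while the templated C++ quadtree/octree containers stay exactly as they are. Below is a minimal sketch of that split, assuming a made-up Node struct and node count purely for illustration; it is not code from anyone in this thread, and only the PETSc calls (VecCreateMPI, VecGetOwnershipRange, VecSetValues, VecAssemblyBegin/End) are real API. Error checking, matrix assembly, the KSP solve, and object destruction (whose call signatures differ between PETSc releases) are omitted for brevity.

// sketch.cpp: hypothetical example only; the Node struct, its members,
// and the node count are invented for illustration.
#include <vector>
#include <petscvec.h>

struct Node {                 // your quadtree/octree bookkeeping stays in plain C++
    double      x, y;         // geometry, parent/child links, neighbours, ...
    PetscInt    globalIndex;  // this node's row in PETSc's global ordering
    PetscScalar u;            // the unknown that PETSc will actually see
};

int main(int argc, char **argv)
{
    PetscInitialize(&argc, &argv, NULL, NULL);

    // Local part of the mesh: ordinary C++ containers, any types you like.
    std::vector<Node> nodes(100);                 // nodes owned by this process
    for (size_t i = 0; i < nodes.size(); ++i) {
        nodes[i].x = nodes[i].y = 0.0;            // placeholder geometry
        nodes[i].u = (PetscScalar)i;              // placeholder nodal value
    }

    // Only the scalar unknowns live inside PETSc objects.
    Vec u;
    VecCreateMPI(PETSC_COMM_WORLD, (PetscInt)nodes.size(), PETSC_DETERMINE, &u);

    // Let PETSc report which global rows this process owns, and record that
    // numbering back into the mesh objects.
    PetscInt rstart, rend;
    VecGetOwnershipRange(u, &rstart, &rend);
    for (size_t i = 0; i < nodes.size(); ++i)
        nodes[i].globalIndex = rstart + (PetscInt)i;

    // Copy the nodal unknowns into the distributed vector.
    for (size_t i = 0; i < nodes.size(); ++i)
        VecSetValues(u, 1, &nodes[i].globalIndex, &nodes[i].u, INSERT_VALUES);
    VecAssemblyBegin(u);
    VecAssemblyEnd(u);

    // A Mat would be filled the same way with MatSetValues(), handed to a KSP,
    // and the solution copied back into nodes[i].u via VecGetValues().

    PetscFinalize();
    return 0;
}

The matrix side follows the same pattern: loop over the cells, compute each stencil's coefficients, and insert them with MatSetValues() using the global indices recorded in the Node objects. The mesh data structure itself never has to become a PETSc datatype, which is the point Barry makes about PETSc handling the sparse linear algebra while the application manages its own parallel mesh.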