From mafunk at nmsu.edu Mon Jul 3 12:29:44 2006 From: mafunk at nmsu.edu (Matt Funk) Date: Mon, 3 Jul 2006 11:29:44 -0600 Subject: matrix assembly In-Reply-To: References: <200606301746.04619.mafunk@nmsu.edu> Message-ID: <200607031129.47976.mafunk@nmsu.edu> Thanks for the reponse. I am just starting to use PETsC so it might be a little while. But eventually i think it might be useful to have (for myself and maybe other people might find it useful as well). The reason i need it, is that i have old codes that store the matrices in this format and i just wanted to "plug in" PETsC solvers. The old codes use solvers that we wrote ourselves. Anyway, thanks for the pointers. I have to see how i'll proceed. thanks mat On Friday 30 June 2006 19:04, Barry Smith wrote: > Mat, > > There is no routine like this. It would be possible for you or > someone else to provide a routine that worked for a particular matrix > format such as MatCreateSeqAIJFromCoordinates(comm,nz,i,j,values,&mat) > It likely would be essentially like the code you have written except it > would put the values directly into the data structure without having to use > calls to MatSetValues()[You would need to look at MatSetValues_SeqAIJ() to > see one way of getting the data directly in]. > > Similar code could be written for MPIAIJ though it gets more > complicated because of the more complicated structure and if you place > values off-processor. It could also be written for only PETSc matrix > formats, BAIJ, SBAIJ, Bdiag etc. > > We haven't written this codes because we haven't needed them and much > prefer to simply put the matrix values into the matrix WHEN they are > generated rather then store them in some data-structure that has to be then > converted into the PETSc format. > > If you would like to provide a routine like this we'd be glad to add it > to PETSc and maintain it. > > Barry > > On Fri, 30 Jun 2006, Matt Funk wrote: > > Hi, > > > > i have a matrix stored in the matrix free format. I.e. an array > > indicating the row number, an array storing the column number and the > > array of corresponding values. > > > > I was wondering what the best way is to build the PETsC matrix using > > this. I was hoping that there is call to some sparse matrix assembler > > function to which is simply pass these three arrays and it builds the > > matrix for me. > > > > However, i did not find anything that simple. So i guess i need to do it > > row by row using the MatSetValues fcn() after allocating the memory for > > the matrix (i.e. pretty much as the procedure described on p.54 of the > > user manual)? > > > > mat > > > > On Friday 30 June 2006 14:30, Satish Balay wrote: > >> Added to petsc-dev now. > >> > >> Satish > >> > >> On Fri, 30 Jun 2006, Barry Smith wrote: > >>> Mathieu, > >>> > >>> Cool, thanks. > >>> > >>> Satish, > >>> > >>> Could you please apply the diff to petsc-dev now and then push > >>> so any future changes anyone makes will be combatiable with the new > >>> code. > >>> > >>> Thanks > >>> > >>> Barry > >>> > >>> On Fri, 30 Jun 2006, Mathieu Taillefumier wrote: > >>>> Good morning everybody, > >>>> > >>>> I finished to modify the code of the library in order to compile the > >>>> complex version with a C compiler. A few numbers of files have been > >>>> modified. This modifications include a modification of the file > >>>> language.py where I put in comment two lines forcing to compile the > >>>> library with a c++ compiler. 
Since I don't really know python, I just > >>>> put them as a comment, but i think it is better to put a message > >>>> indicating that compiling the complex version of Petsc needs a C90 > >>>> compliant compiler (gcc 3.4 and after work, it works also with icc) > >>>> and is experimental. Actually I am working with this modifications and > >>>> it seems to work like a charm (I had no problem with it). > >>>> I still need some time to convert the example programs which do not > >>>> compile but it will be done soon. > >>>> Note for all of us : > >>>> Don't use the name I as a variable anymore. It is a reserved word > >>>> of the language. > >>>> For most of us, the modifications must be transparent. > >>>> > >>>> Regards > >>>> > >>>> Mathieu From mafunk at nmsu.edu Thu Jul 6 11:44:03 2006 From: mafunk at nmsu.edu (Matt Funk) Date: Thu, 6 Jul 2006 10:44:03 -0600 Subject: CG solver and preconditioner In-Reply-To: <200607031129.47976.mafunk@nmsu.edu> References: <200607031129.47976.mafunk@nmsu.edu> Message-ID: <200607061044.04608.mafunk@nmsu.edu> Hi, In the html web manual for PETSc: KSPCG is described as the preconditioned conjugate gradient iterative method with: The PCG method requires both the matrix and preconditioner to be symmetric positive (semi) definite. For KSPCG do you need to specify a preconditioner type ie: Jacobi or is there a default the KSPCG uses. Reason for question is that for the system I am solving I need to specify -pc_type none for KSPCG to run. If I do not turn the PC off then the solver does not run. Other KSP types like BiCG and GMRES work fine when I specify a PC. On Monday 03 July 2006 11:29, Matt Funk wrote: > Thanks for the reponse. > > I am just starting to use PETsC so it might be a little while. But > eventually i think it might be useful to have (for myself and maybe other > people might find it useful as well). > > The reason i need it, is that i have old codes that store the matrices in > this format and i just wanted to "plug in" PETsC solvers. The old codes use > solvers that we wrote ourselves. > > Anyway, thanks for the pointers. I have to see how i'll proceed. > > thanks > mat > > On Friday 30 June 2006 19:04, Barry Smith wrote: > > Mat, > > > > There is no routine like this. It would be possible for you or > > someone else to provide a routine that worked for a particular matrix > > format such as MatCreateSeqAIJFromCoordinates(comm,nz,i,j,values,&mat) > > It likely would be essentially like the code you have written except it > > would put the values directly into the data structure without having to > > use calls to MatSetValues()[You would need to look at > > MatSetValues_SeqAIJ() to see one way of getting the data directly in]. > > > > Similar code could be written for MPIAIJ though it gets more > > complicated because of the more complicated structure and if you place > > values off-processor. It could also be written for only PETSc matrix > > formats, BAIJ, SBAIJ, Bdiag etc. > > > > We haven't written this codes because we haven't needed them and much > > prefer to simply put the matrix values into the matrix WHEN they are > > generated rather then store them in some data-structure that has to be > > then converted into the PETSc format. > > > > If you would like to provide a routine like this we'd be glad to add it > > to PETSc and maintain it. > > > > Barry > > > > On Fri, 30 Jun 2006, Matt Funk wrote: > > > Hi, > > > > > > i have a matrix stored in the matrix free format. I.e. 
an array > > > indicating the row number, an array storing the column number and the > > > array of corresponding values. > > > > > > I was wondering what the best way is to build the PETsC matrix using > > > this. I was hoping that there is call to some sparse matrix assembler > > > function to which is simply pass these three arrays and it builds the > > > matrix for me. > > > > > > However, i did not find anything that simple. So i guess i need to do > > > it row by row using the MatSetValues fcn() after allocating the memory > > > for the matrix (i.e. pretty much as the procedure described on p.54 of > > > the user manual)? > > > > > > mat > > > > > > On Friday 30 June 2006 14:30, Satish Balay wrote: > > >> Added to petsc-dev now. > > >> > > >> Satish > > >> > > >> On Fri, 30 Jun 2006, Barry Smith wrote: > > >>> Mathieu, > > >>> > > >>> Cool, thanks. > > >>> > > >>> Satish, > > >>> > > >>> Could you please apply the diff to petsc-dev now and then push > > >>> so any future changes anyone makes will be combatiable with the new > > >>> code. > > >>> > > >>> Thanks > > >>> > > >>> Barry > > >>> > > >>> On Fri, 30 Jun 2006, Mathieu Taillefumier wrote: > > >>>> Good morning everybody, > > >>>> > > >>>> I finished to modify the code of the library in order to compile the > > >>>> complex version with a C compiler. A few numbers of files have been > > >>>> modified. This modifications include a modification of the file > > >>>> language.py where I put in comment two lines forcing to compile the > > >>>> library with a c++ compiler. Since I don't really know python, I > > >>>> just put them as a comment, but i think it is better to put a > > >>>> message indicating that compiling the complex version of Petsc needs > > >>>> a C90 compliant compiler (gcc 3.4 and after work, it works also with > > >>>> icc) and is experimental. Actually I am working with this > > >>>> modifications and it seems to work like a charm (I had no problem > > >>>> with it). > > >>>> I still need some time to convert the example programs which do not > > >>>> compile but it will be done soon. > > >>>> Note for all of us : > > >>>> Don't use the name I as a variable anymore. It is a reserved word > > >>>> of the language. > > >>>> For most of us, the modifications must be transparent. > > >>>> > > >>>> Regards > > >>>> > > >>>> Mathieu From knepley at gmail.com Thu Jul 6 11:50:11 2006 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 6 Jul 2006 11:50:11 -0500 Subject: CG solver and preconditioner In-Reply-To: <200607061044.04608.mafunk@nmsu.edu> References: <200607031129.47976.mafunk@nmsu.edu> <200607061044.04608.mafunk@nmsu.edu> Message-ID: On 7/6/06, Matt Funk wrote: > > Hi, > > In the html web manual for PETSc: > > KSPCG is described as the preconditioned conjugate gradient iterative > method > with: The PCG method requires both the matrix and preconditioner to be > symmetric positive (semi) definite. > > For KSPCG do you need to specify a preconditioner type ie: Jacobi or is > there > a default the KSPCG uses. The default on 1 proc is ILU, and many is Block Jacobi. Reason for question is that for the system I am solving I need to specify > -pc_type none for KSPCG to run. If I do not turn the PC off then the > solver > does not run. Other KSP types like BiCG and GMRES work fine when I > specify a > PC. I have no idea what "deos not run" means. CG should work with any preconditioner that applies an SPD matrix. Matt -- "Failure has a thousand explanations. 
Success doesn't need one" -- Sir Alec Guiness -------------- next part -------------- An HTML attachment was scrubbed... URL: From diosady at MIT.EDU Thu Jul 6 15:43:39 2006 From: diosady at MIT.EDU (Laslo Tibor Diosady) Date: Thu, 6 Jul 2006 16:43:39 -0400 (EDT) Subject: AOApplicationToPetscPermute Message-ID: I have an array of values which are in my application ordering and I wish to reorder these to the petsc ordering. To do this I use AOApplicationToPetscPermuteInt, or PermuteReal, which works really well in serial but not so much in parallel. In parallel I want to have it so that each processor has it's local portion of the array, so that the sum of the lengths of the arrays on each processor add up to the sum of the indices in the AO ordering. However when I use AOApplicationToPetscPermute I get a segmentation fault, which I've traced to the fact that AOApplicationToPetscPermute assumes each processor has an array which is the number of indices in the AO ordering as opposed to only the length of the local portion of the AO. The permutations that I am doing are all local to each processor, but my AO has been defined globally since elsewhere in my code I need access to off processor entries. Do you have any suggestions on how I can solve this problem with the least amount of headache. Thanks, Laslo From knepley at gmail.com Thu Jul 6 16:00:45 2006 From: knepley at gmail.com (Matthew Knepley) Date: Thu, 6 Jul 2006 16:00:45 -0500 Subject: AOApplicationToPetscPermute In-Reply-To: References: Message-ID: AO is not scalable. I am pretty sure this is stated explicitly in the documentation. We are working on a parallel reordering construct, but it is not yet finished. Matt On 7/6/06, Laslo Tibor Diosady wrote: > > I have an array of values which are in my application ordering and I wish > to reorder these to the petsc ordering. To do this I use > AOApplicationToPetscPermuteInt, or PermuteReal, which works really well in > serial but not so much in parallel. > > In parallel I want to have it so that each processor has it's local > portion of the array, so that the sum of the lengths of the arrays on each > processor add up to the sum of the indices in the AO ordering. > > However when I use AOApplicationToPetscPermute I get a segmentation fault, > which I've traced to the fact that AOApplicationToPetscPermute assumes > each processor has an array which is the number of indices in the AO > ordering as opposed to only the length of the local portion of the AO. > > The permutations that I am doing are all local to each processor, but my > AO has been defined globally since elsewhere in my code I need access to > off processor entries. > > Do you have any suggestions on how I can solve this problem with the least > amount of headache. > > Thanks, > > Laslo > > -- "Failure has a thousand explanations. Success doesn't need one" -- Sir Alec Guiness -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuchangjohn at gmail.com Fri Jul 7 01:33:20 2006 From: liuchangjohn at gmail.com (liu chang) Date: Fri, 7 Jul 2006 14:33:20 +0800 Subject: Welcome to petsc-users In-Reply-To: <200607070624.k676OF259778@mcs.anl.gov> References: <200607070624.k676OF259778@mcs.anl.gov> Message-ID: <94e43e390607062333i2a5036ddw8e3cbb4dd1a20844@mail.gmail.com> Dear all, I am a new user of Petsc and have been struggling to install 2.3.1 on my workstation. 
I noticed that in Petsc 2.3.1 I can no longer find the definitions of constants PETSC_FILE_CREATE, PETSC_FILE_RDONLY, etc, which were in petscviewer.h in version 2.3.0. I looked through the changelog and found nothing relevant. What has happened to these constants? Some exisiting code uses these constants and I can no longer compile them. Any advice? Thank you. Liu From liuchangjohn at gmail.com Fri Jul 7 02:23:08 2006 From: liuchangjohn at gmail.com (liu chang) Date: Fri, 7 Jul 2006 15:23:08 +0800 Subject: PETSC_FILE_* constants missing Message-ID: <94e43e390607070023r702c923cu430ad1886c221cea@mail.gmail.com> Dear all, I am a new user of Petsc and have been struggling to install 2.3.1 on my workstation. I noticed that in Petsc 2.3.1 I can no longer find the definitions of constants PETSC_FILE_CREATE, PETSC_FILE_RDONLY, etc, which were in petscviewer.h in version 2.3.0. I looked through the changelog and found nothing relevant. What has happened to these constants? Some exisiting code uses these constants and I can no longer compile them. Any advice? Thank you. Liu From knepley at gmail.com Fri Jul 7 09:47:42 2006 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 7 Jul 2006 09:47:42 -0500 Subject: Welcome to petsc-users In-Reply-To: <94e43e390607062333i2a5036ddw8e3cbb4dd1a20844@mail.gmail.com> References: <200607070624.k676OF259778@mcs.anl.gov> <94e43e390607062333i2a5036ddw8e3cbb4dd1a20844@mail.gmail.com> Message-ID: Yes, we changed those. FILE_MODE_READ now reads, FILE_MODE_WRITE writes to a new file, and FILE_MODE_APPEND writes to an existing file. Matt On 7/7/06, liu chang wrote: > > Dear all, > > I am a new user of Petsc and have been struggling to install 2.3.1 on > my workstation. I noticed that in Petsc 2.3.1 I can no longer find the > definitions of constants PETSC_FILE_CREATE, PETSC_FILE_RDONLY, etc, > which were in petscviewer.h in version 2.3.0. I looked through the > changelog and found nothing relevant. What has happened to these > constants? Some exisiting code uses these constants and I can no > longer compile them. Any advice? > > Thank you. > > Liu > > -- "Failure has a thousand explanations. Success doesn't need one" -- Sir Alec Guiness -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri Jul 7 09:59:22 2006 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 7 Jul 2006 09:59:22 -0500 (CDT) Subject: PETSC_FILE_* constants missing In-Reply-To: <94e43e390607070023r702c923cu430ad1886c221cea@mail.gmail.com> References: <94e43e390607070023r702c923cu430ad1886c221cea@mail.gmail.com> Message-ID: The changelog is a bit brief on this. From: http://www-unix.mcs.anl.gov/petsc/petsc-as/documentation/changes/231.html # PetscViewerFileType changed to PetscFileMode # PetscViewerSetFileType() changed to PetscViewerFileSetMode() PETSC_FILE_CREATE, PETSC_FILE_RDONLY etc were of type PetscViewerFileType These are replaced with typedef enum {FILE_MODE_READ, FILE_MODE_WRITE, FILE_MODE_APPEND, FILE_MODE_UPDATE, FILE_MODE_APPEND_UPDATE} PetscFileMode; Satish On Fri, 7 Jul 2006, liu chang wrote: > Dear all, > > I am a new user of Petsc and have been struggling to install 2.3.1 on > my workstation. I noticed that in Petsc 2.3.1 I can no longer find the > definitions of constants PETSC_FILE_CREATE, PETSC_FILE_RDONLY, etc, > which were in petscviewer.h in version 2.3.0. I looked through the > changelog and found nothing relevant. What has happened to these > constants? 
Some exisiting code uses these constants and I can no > longer compile them. Any advice? > > Thank you. > > Liu > > From charden at scs.fsu.edu Mon Jul 10 16:09:24 2006 From: charden at scs.fsu.edu (Christopher Harden) Date: Mon, 10 Jul 2006 17:09:24 -0400 Subject: Appending Values in Matrices Message-ID: Hello All, I'm new to PETSc and I'm trying to use a linear solver from the KSP package in a finite element code. Does anyone have a slick way of appending values to already existing values in a matrix using MatSetValues or is there some better routine to call for this purpose? Thank you, Chris Harden From balay at mcs.anl.gov Mon Jul 10 16:23:15 2006 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 10 Jul 2006 16:23:15 -0500 (CDT) Subject: Appending Values in Matrices In-Reply-To: References: Message-ID: MatSetValues(..,ADD_VALUES); You might want to check the man page for MatSetValues(). Or are you looking for some other functionality? Satish On Mon, 10 Jul 2006, Christopher Harden wrote: > Hello All, > > I'm new to PETSc and I'm trying to use a linear solver from the KSP package in > a finite element code. Does anyone have a slick way of appending values to > already existing values in a matrix using MatSetValues or is there some better > routine to call for this purpose? > > Thank you, > > Chris Harden > > From sanjay at ce.berkeley.edu Mon Jul 10 16:23:36 2006 From: sanjay at ce.berkeley.edu (Sanjay Govindjee) Date: Mon, 10 Jul 2006 14:23:36 -0700 Subject: Appending Values in Matrices In-Reply-To: References: Message-ID: <44B2C558.5050609@ce.berkeley.edu> From Fortran: call MatSetValue( Kmat, i, j, val, ADD_VALUES, ierr ) Christopher Harden wrote: > Hello All, > > I'm new to PETSc and I'm trying to use a linear solver from the KSP > package in a finite element code. Does anyone have a slick way of > appending values to already existing values in a matrix using > MatSetValues or is there some better routine to call for this purpose? > > Thank you, > > Chris Harden -- ------------------------------------------------------------------------ Sanjay Govindjee, PhD, PE Voice: (510) 642-6060 Professor of Civil Engineering FAX: (510) 643-8928 Vice Chair for Research and Technical Services WWW: http://www.ce.berkeley.edu/~sanjay E-mail: sanjay at ce.berkeley.edu 709 Davis Hall Structural Engineering, Mechanics and Materials Department of Civil Engineering University of California Berkeley, CA 94720-1710 ------------------------------------------------------------------------ From charden at scs.fsu.edu Tue Jul 11 08:53:51 2006 From: charden at scs.fsu.edu (Christopher Harden) Date: Tue, 11 Jul 2006 09:53:51 -0400 Subject: Appending Values in Matrices In-Reply-To: References: Message-ID: Thank you! I was working from the online manual and missed the distinction between insert_values and add_values. I was sure that there had to be a simple way to do this. Thank you. Chris On Jul 10, 2006, at 5:23 PM, Satish Balay wrote: > MatSetValues(..,ADD_VALUES); > > You might want to check the man page for MatSetValues(). Or > are you looking for some other functionality? > > Satish > > On Mon, 10 Jul 2006, Christopher Harden wrote: > >> Hello All, >> >> I'm new to PETSc and I'm trying to use a linear solver from the KSP >> package in >> a finite element code. Does anyone have a slick way of appending >> values to >> already existing values in a matrix using MatSetValues or is there >> some better >> routine to call for this purpose? 
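A minimal sketch of the accumulation pattern discussed above, assuming one element's global indices sit in idx[] and its dense element matrix in Ke[] (row-major); the helper name AddElementMatrix and that data layout are illustrative assumptions, not anything defined in the thread:

#undef __FUNCT__
#define __FUNCT__ "AddElementMatrix"
PetscErrorCode AddElementMatrix(Mat A, PetscInt n, PetscInt idx[], PetscScalar Ke[])
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* ADD_VALUES sums into any existing entries; INSERT_VALUES would overwrite them */
  ierr = MatSetValues(A, n, idx, n, idx, Ke, ADD_VALUES);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

After the loop over all elements, assemble once with MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY) and MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY) before the matrix is used.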
>> >> Thank you, >> >> Chris Harden >> >> > From KClimp at psl.nmsu.edu Wed Jul 12 16:24:04 2006 From: KClimp at psl.nmsu.edu (KClimp at psl.nmsu.edu) Date: Wed, 12 Jul 2006 15:24:04 -0600 Subject: Follow up CG and PC question Message-ID: <492AF2362E57294386A7A18C280CDA3D01AB17BD@mailserv.psl.nmsu.edu> PETSC Help, This e-mail is a follow up to the one Matt Funk wrote last week. I work with matt at NMSU and just wanted to give a little more information to try and solve our problem. The issue in a nutshell is we are solving our linear system using PETSC to experiment with different solvers and PCs. We have had great success with GMRES and BICG using Jacobi and block Jacobi and CG with no PC. The problem is when we try to run CG w/ the jacobi or bjacobi. Shown below are some results when our system was ran with CG and no pc and CG and Jacobi. Shown below is the ksp_view information for the system using no PC. It converges at 302 iterations and the solution vector is very accurate relative to our other results. CASE 1: CG AND NO PC ======================================================================== ======== Time Step #1 and Total Time 0.001 msec call lrmodel() call asm2_rhs() KSP Object: type: cg maximum iterations=10000 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning PC Object: type: none linear system matrix = precond matrix: Matrix Object: type=aij, rows=8192, cols=8192 total: nonzeros=48640, allocated nonzeros=156280 not using I-node routines Norm of error 1.59444e-07, Iterations 302 CASE 2: CG AND JACOBI PC Shown below is the same system as above with the only change being -pc_type cg added at runtime. The ksp_view and ksp_monitor information are shown here. KSP reports a residual norm of 1.27 then does not iterate any more. This is what matt described as does not run. ======================================================================== ======== Time Step #1 and Total Time 0.001 msec call lrmodel() call asm2_rhs() 0 KSP Residual norm 1.270723190170e+00 KSP Object: type: cg maximum iterations=10000 tolerances: relative=1e-05, absolute=1e-50, divergence=10000 left preconditioning PC Object: type: jacobi linear system matrix = precond matrix: Matrix Object: type=aij, rows=8192, cols=8192 total: nonzeros=48640, allocated nonzeros=156280 not using I-node routines Norm of error 0.0160078, Iterations 1 I hope that this will help diagnosis the problem and the issue is a simple mistake on our part. Thank you guys for helping beginner users like me. Kevin Climp Target and Missile Systems Physical Science Laboratory-New Mexico State University Phone - (505) 646-9244 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Wed Jul 12 17:22:30 2006 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 12 Jul 2006 17:22:30 -0500 (CDT) Subject: Follow up CG and PC question In-Reply-To: <492AF2362E57294386A7A18C280CDA3D01AB17BD@mailserv.psl.nmsu.edu> References: <492AF2362E57294386A7A18C280CDA3D01AB17BD@mailserv.psl.nmsu.edu> Message-ID: Run with -ksp_converged_reason or call KSPGetConvergedReason() after the KSPSolve() to see why it stopped. My guess is that the matrix is not symmetric positive definite, perhaps it has some negative numbers on the diagonal? Barry On Wed, 12 Jul 2006, KClimp at psl.nmsu.edu wrote: > PETSC Help, > > > > This e-mail is a follow up to the one Matt Funk wrote last week. I work > with matt at NMSU and just wanted to give a little more information to > try and solve our problem. 
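A minimal sketch of the check Barry suggests, placed immediately after the solve; the variable names are arbitrary and only calls from the 2.3-era C interface are used:

KSPConvergedReason reason;
PetscInt           its;
PetscErrorCode     ierr;

ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
ierr = KSPGetIterationNumber(ksp, &its);CHKERRQ(ierr);
if (reason < 0) {
  /* negative values are KSP_DIVERGED_* codes; with CG an indefinite matrix
     or preconditioner is a common cause */
  ierr = PetscPrintf(PETSC_COMM_WORLD, "Solve failed: reason %d after %d iterations\n",
                     (int)reason, (int)its);CHKERRQ(ierr);
}

The same information is available without code changes by adding -ksp_converged_reason on the command line.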
> > > > The issue in a nutshell is we are solving our linear system using PETSC > to experiment with different solvers and PCs. We have had great success > with GMRES and BICG using Jacobi and block Jacobi and CG with no PC. > The problem is when we try to run CG w/ the jacobi or bjacobi. > > > > Shown below are some results when our system was ran with CG and no pc > and CG and Jacobi. > > > > Shown below is the ksp_view information for the system using no PC. It > converges at 302 iterations and the solution vector is very accurate > relative to our other results. > > > > CASE 1: CG AND NO PC > > ======================================================================== > ======== > > Time Step #1 and Total Time 0.001 msec > > call lrmodel() > > call asm2_rhs() > > KSP Object: > > type: cg > > maximum iterations=10000 > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > > left preconditioning > > PC Object: > > type: none > > linear system matrix = precond matrix: > > Matrix Object: > > type=aij, rows=8192, cols=8192 > > total: nonzeros=48640, allocated nonzeros=156280 > > not using I-node routines > > Norm of error 1.59444e-07, Iterations 302 > > > > > > CASE 2: CG AND JACOBI PC > > Shown below is the same system as above with the only change being > -pc_type cg added at runtime. The ksp_view and ksp_monitor information > are shown here. KSP reports a residual norm of 1.27 then does not > iterate any more. This is what matt described as does not run. > > ======================================================================== > ======== > > Time Step #1 and Total Time 0.001 msec > > call lrmodel() > > call asm2_rhs() > > 0 KSP Residual norm 1.270723190170e+00 > > KSP Object: > > type: cg > > maximum iterations=10000 > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000 > > left preconditioning > > PC Object: > > type: jacobi > > linear system matrix = precond matrix: > > Matrix Object: > > type=aij, rows=8192, cols=8192 > > total: nonzeros=48640, allocated nonzeros=156280 > > not using I-node routines > > Norm of error 0.0160078, Iterations 1 > > > > > > I hope that this will help diagnosis the problem and the issue is a > simple mistake on our part. > > > > Thank you guys for helping beginner users like me. > > > > > > Kevin Climp > > Target and Missile Systems > > Physical Science Laboratory-New Mexico State University > > Phone - (505) 646-9244 > > > > From mafunk at nmsu.edu Thu Jul 13 17:24:19 2006 From: mafunk at nmsu.edu (Matt Funk) Date: Thu, 13 Jul 2006 16:24:19 -0600 Subject: distributed array Message-ID: <200607131624.20616.mafunk@nmsu.edu> Hi, i was wondering if it is possible (or advisable) to use a distributed array for when my domain is not necessarily rectangular but my subdomains are rectangular? thanks mat From berend at chalmers.se Fri Jul 14 00:13:09 2006 From: berend at chalmers.se (Berend van Wachem) Date: Fri, 14 Jul 2006 07:13:09 +0200 Subject: distributed array In-Reply-To: <200607131624.20616.mafunk@nmsu.edu> References: <200607131624.20616.mafunk@nmsu.edu> Message-ID: <44B727E5.9080708@chalmers.se> > Hi, > > i was wondering if it is possible (or advisable) to use a distributed array > for when my domain is not necessarily rectangular but my subdomains are > rectangular? > > thanks > mat > Hi Matt, This is exactly what I do; I work with a multi-block CFD problem, where the total geometry consists of a number of structured (not neccesarily rectangular) blocks. Each block has its own DA. 
How I solved it, with help from the petsc team, is by making functions which translate vectors from the complete domain (called block global in my code) to each block (called block local in my code). The matrix computation is done at the blockglobal level, and I can manipulate the vectors at the block local level. Just to give an example, I attatch the code at the end of this email. If you want more help/information please let me know. Good luck, Berend. #undef __FUNCT__ #define __FUNCT__ "VecBlockGlobalToBlockLocalBegin" int VecBlockGlobalToBlockLocalBegin(Vec **B, Vec **A, char* name, struct CFDMesh *Mesh) { int ierr, i; double *tmpPTR=NULL; PetscFunctionBegin; AddToListofLocalVectors(&Mesh->BLCC,Mesh->NMyBlocks,A,tmpPTR,name); for (i=0; iNMyBlocks; i++) { ierr=DAGetLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i])); CHKERRQ(ierr); ierr=DAGlobalToLocalBegin(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[i]); CHKERRQ(ierr); ierr=DAGlobalToLocalEnd(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[i]); CHKERRQ(ierr); } PetscFunctionReturn(0); } #undef __FUNCT__ #define __FUNCT__ "VecBlockGlobalToBlockLocalEnd" int VecBlockGlobalToBlockLocalEnd(Vec **B, Vec **A, struct CFDMesh *Mesh,int CopyData) { int i,ierr; PetscFunctionBegin; for (i=0; iNMyBlocks; i++) { if (CopyData==MF_COPY_DATA) { ierr=DALocalToGlobal(Mesh->CFDBlocks[i].da,(*A)[i],INSERT_VALUES,(*B)[i]); CHKERRQ(ierr); } ierr=DARestoreLocalVector(Mesh->CFDBlocks[i].da,&((*A)[i])); CHKERRQ(ierr); } DeleteFromListOfLocalVectors(&Mesh->BLCC,*A); PetscFunctionReturn(0); } From jordi.marce at upc.edu Fri Jul 14 02:55:42 2006 From: jordi.marce at upc.edu (=?ISO-8859-1?Q?Jordi_Marc=E9_Nogu=E9?=) Date: Fri, 14 Jul 2006 09:55:42 +0200 Subject: I don't find information about this error In-Reply-To: <44B60C14.6090801@upc.edu> References: <44B60C14.6090801@upc.edu> Message-ID: <44B74DFE.8060501@upc.edu> Hello, I'm using Petsc 2.3.1 in Debian (updated by apt-get) and I don't find information in the web and in the troubleshooting section about this error message: > [0]PETSC ERROR: MatMatMult() line 6543 in src/mat/interface/matrix.c > [0]PETSC ERROR: Nonconforming object sizes! > [0]PETSC ERROR: fill=-2 must be > 0.0! Maybe you know what does it mean because I've tried a lot of thing in my code to solve this problem :-((( My Matrices have the same dimension. regards,\j -- Jordi Marc?-Nogu? Dept. Resist?ncia de Materials i Estructures a l'Enginyeria Universitat Polit?cnica de Catalunya (UPC) Edifici T45 - despatx 137 ETSEIAT (Terrassa) phone: +34 937 398 728 mail: jordi.marce at upc.edu From knepley at gmail.com Fri Jul 14 07:54:23 2006 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 14 Jul 2006 07:54:23 -0500 Subject: I don't find information about this error In-Reply-To: <44B74DFE.8060501@upc.edu> References: <44B60C14.6090801@upc.edu> <44B74DFE.8060501@upc.edu> Message-ID: 1) The error message is wrong. I fixed this in the dev tree. 2) 'fill' is the fill ratio for speculative allocation. It looks like you gave PETSC_DEFAULT which is invalid. You need to give a real number (the ratio of the final to initial allocation) so it should be > 1.0. Matt On 7/14/06, Jordi Marc? Nogu? wrote: > > > Hello, > > I'm using Petsc 2.3.1 in Debian (updated by apt-get) and I don't find > information in the web and in the troubleshooting section about this > error message: > > > [0]PETSC ERROR: MatMatMult() line 6543 in src/mat/interface/matrix.c > > [0]PETSC ERROR: Nonconforming object sizes! > > [0]PETSC ERROR: fill=-2 must be > 0.0! 
> > Maybe you know what does it mean because I've tried a lot of thing in my > code to solve this problem :-((( My Matrices have the same dimension. > > regards,\j > > > > -- > Jordi Marc?-Nogu? > Dept. Resist?ncia de Materials i Estructures a l'Enginyeria > Universitat Polit?cnica de Catalunya (UPC) > > Edifici T45 - despatx 137 > ETSEIAT (Terrassa) > > phone: +34 937 398 728 > mail: jordi.marce at upc.edu > > -- "Failure has a thousand explanations. Success doesn't need one" -- Sir Alec Guiness -------------- next part -------------- An HTML attachment was scrubbed... URL: From jordi.marce at upc.edu Mon Jul 17 03:42:11 2006 From: jordi.marce at upc.edu (=?ISO-8859-1?Q?Jordi_Marc=E9_Nogu=E9?=) Date: Mon, 17 Jul 2006 10:42:11 +0200 Subject: I don't find information about this error In-Reply-To: References: <44B60C14.6090801@upc.edu> <44B74DFE.8060501@upc.edu> Message-ID: <44BB4D63.7020305@upc.edu> Matthew Knepley wrote: > 1) The error message is wrong. I fixed this in the dev tree. > > 2) 'fill' is the fill ratio for speculative allocation. It looks like > you gave PETSC_DEFAULT which is invalid. You need to > give a real number (the ratio of the final to initial allocation) > so it should be > 1.0. Ok. I write "1" and runs. You guessed, after I writed "PETSC_DEFAULT".... but, I don't know/understand the value of "fill". There is a default value for this parameter??? What is the best input??? thanks, jordi PD: the solution was easy but the message "Nonconforming object sizes!" confused me. > > Matt > > On 7/14/06, *Jordi Marc? Nogu?* > wrote: > > > Hello, > > I'm using Petsc 2.3.1 in Debian (updated by apt-get) and I don't find > information in the web and in the troubleshooting section about this > error message: > > > [0]PETSC ERROR: MatMatMult() line 6543 in src/mat/interface/matrix.c > > [0]PETSC ERROR: Nonconforming object sizes! > > [0]PETSC ERROR: fill=-2 must be > 0.0! > > Maybe you know what does it mean because I've tried a lot of thing in my > code to solve this problem :-((( My Matrices have the same dimension. > > regards,\j > > > > -- > Jordi Marc?-Nogu? > Dept. Resist?ncia de Materials i Estructures a l'Enginyeria > Universitat Polit?cnica de Catalunya (UPC) > > Edifici T45 - despatx 137 > ETSEIAT (Terrassa) > > phone: +34 937 398 728 > mail: jordi.marce at upc.edu > > > > > -- > "Failure has a thousand explanations. Success doesn't need one" -- Sir > Alec Guiness -- Jordi Marc?-Nogu? Dept. Resist?ncia de Materials i Estructures a l'Enginyeria Universitat Polit?cnica de Catalunya (UPC) Edifici T45 - despatx 137 ETSEIAT (Terrassa) phone: +34 937 398 728 mail: jordi.marce at upc.edu From charden at scs.fsu.edu Wed Jul 19 12:20:21 2006 From: charden at scs.fsu.edu (Christopher Harden) Date: Wed, 19 Jul 2006 13:20:21 -0400 Subject: Passing Vec and Mat in C++ Message-ID: Hello, I'm having trouble passing the Vec and Mat data types to a C function. Specifically, In my header for example I'm using, void assembly( Vec F, Mat A ); and I'm getting an error saying that : cannot convert `_p_Vec**' to `_p_Vec*' for argument `8' to `void adjust_boundary(int, double*, int*, int, double, double*, double*, _p_Vec*, _p_Mat*)' I've been going through the man pages and it seems like I'm doing this right. Can anyone suggest a fix? 
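Satish's reply below identifies the mismatch. For context, Vec and Mat are themselves pointers to PETSc's internal structs (the _p_Vec* and _p_Mat* in the compiler message), so the handles are passed by value; a small sketch, with a made-up caller and error checking omitted:

void assembly(Vec F, Mat A);       /* Vec and Mat are already pointer types */

void caller(void)
{
  Vec F;
  Mat A;

  VecCreateSeq(PETSC_COMM_SELF, 100, &F);   /* &F only where PETSc fills in the handle */
  MatCreateSeqAIJ(PETSC_COMM_SELF, 100, 100, 5, PETSC_NULL, &A);

  assembly(F, A);                           /* pass the handles themselves, not &F, &A */
}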
Thank you, Chris Harden From balay at mcs.anl.gov Wed Jul 19 12:56:06 2006 From: balay at mcs.anl.gov (Satish Balay) Date: Wed, 19 Jul 2006 12:56:06 -0500 (CDT) Subject: Passing Vec and Mat in C++ In-Reply-To: References: Message-ID: The error says - you are passing in '&F' instead of 'F' to adjust_boundary() Satish On Wed, 19 Jul 2006, Christopher Harden wrote: > Hello, > > I'm having trouble passing the Vec and Mat data types to a C function. > Specifically, > > In my header for example I'm using, > > void assembly( Vec F, Mat A ); > > and I'm getting an error saying that : > > cannot convert `_p_Vec**' to `_p_Vec*' for argument `8' > to `void adjust_boundary(int, double*, int*, int, double, double*, double*, > _p_Vec*, _p_Mat*)' > > I've been going through the man pages and it seems like I'm doing this right. > > Can anyone suggest a fix? > > Thank you, > > Chris Harden > > From charden at scs.fsu.edu Wed Jul 19 14:14:55 2006 From: charden at scs.fsu.edu (Christopher Harden) Date: Wed, 19 Jul 2006 15:14:55 -0400 Subject: Passing Vec and Mat in C++ In-Reply-To: References: Message-ID: <82456f95e3f1ee2618a6d2ae9d813fc5@scs.fsu.edu> That was it. It seems to be working fine now. Thank you very much. Chris On Jul 19, 2006, at 1:56 PM, Satish Balay wrote: > The error says - you are passing in '&F' instead of 'F' to > adjust_boundary() > > Satish > > On Wed, 19 Jul 2006, Christopher Harden wrote: > >> Hello, >> >> I'm having trouble passing the Vec and Mat data types to a C >> function. >> Specifically, >> >> In my header for example I'm using, >> >> void assembly( Vec F, Mat A ); >> >> and I'm getting an error saying that : >> >> cannot convert `_p_Vec**' to `_p_Vec*' for argument `8' >> to `void adjust_boundary(int, double*, int*, int, double, double*, >> double*, >> _p_Vec*, _p_Mat*)' >> >> I've been going through the man pages and it seems like I'm doing >> this right. >> >> Can anyone suggest a fix? >> >> Thank you, >> >> Chris Harden >> >> > From mafunk at nmsu.edu Wed Jul 19 16:04:46 2006 From: mafunk at nmsu.edu (Matt Funk) Date: Wed, 19 Jul 2006 15:04:46 -0600 Subject: distributed array In-Reply-To: <44B727E5.9080708@chalmers.se> References: <200607131624.20616.mafunk@nmsu.edu> <44B727E5.9080708@chalmers.se> Message-ID: <200607191504.49448.mafunk@nmsu.edu> Thanks for the response (and sorry for the late 'thanks'), i can see how this is supposed to work. However, my question is then with the creation of the distributed array objects. I take it from your code that you create a DA for each patch (?). M and N then would be the dimension of my 2D patch. My question is what i would be specifying for n and m. I am a little confused in general what m and n are. It says they denote the process partition in each direction and that m*n must be equal the total number of processes in the communicator. Sorry, i am almost sure this is something simple. thanks mat On Thursday 13 July 2006 23:13, Berend van Wachem wrote: > > Hi, > > > > i was wondering if it is possible (or advisable) to use a distributed > > array > > > for when my domain is not necessarily rectangular but my subdomains are > > rectangular? > > > > thanks > > mat > > Hi Matt, > > This is exactly what I do; I work with a multi-block CFD problem, where > the total geometry consists of a number of structured (not neccesarily > rectangular) blocks. Each block has its own DA. 
> > How I solved it, with help from the petsc team, is by making functions > which translate vectors from the complete domain (called block global in > my code) to each block (called block local in my code). The matrix > computation is done at the blockglobal level, and I can manipulate the > vectors at the block local level. Just to give an example, I attatch the > code at the end of this email. If you want more help/information please > let me know. > > Good luck, > > Berend. > > > #undef __FUNCT__ > #define __FUNCT__ "VecBlockGlobalToBlockLocalBegin" > int VecBlockGlobalToBlockLocalBegin(Vec **B, Vec **A, char* name, struct > CFDMesh *Mesh) > { > int ierr, i; > double *tmpPTR=NULL; > > PetscFunctionBegin; > > AddToListofLocalVectors(&Mesh->BLCC,Mesh->NMyBlocks,A,tmpPTR,name); > > for (i=0; iNMyBlocks; i++) > { > ierr=DAGetLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i])); > CHKERRQ(ierr); > > ierr=DAGlobalToLocalBegin(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[ >i]); CHKERRQ(ierr); > > ierr=DAGlobalToLocalEnd(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[i] >); CHKERRQ(ierr); > > } > > PetscFunctionReturn(0); > } > > #undef __FUNCT__ > #define __FUNCT__ "VecBlockGlobalToBlockLocalEnd" > int VecBlockGlobalToBlockLocalEnd(Vec **B, Vec **A, struct CFDMesh > *Mesh,int CopyData) > { > int i,ierr; > > PetscFunctionBegin; > > for (i=0; iNMyBlocks; i++) > { > if (CopyData==MF_COPY_DATA) > { > > ierr=DALocalToGlobal(Mesh->CFDBlocks[i].da,(*A)[i],INSERT_VALUES,(*B)[i]); > CHKERRQ(ierr); > } > ierr=DARestoreLocalVector(Mesh->CFDBlocks[i].da,&((*A)[i])); > CHKERRQ(ierr); > } > DeleteFromListOfLocalVectors(&Mesh->BLCC,*A); > > PetscFunctionReturn(0); > > } From bsmith at mcs.anl.gov Wed Jul 19 19:21:05 2006 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 19 Jul 2006 19:21:05 -0500 (CDT) Subject: I don't find information about this error In-Reply-To: <44BB4D63.7020305@upc.edu> References: <44B60C14.6090801@upc.edu> <44B74DFE.8060501@upc.edu> <44BB4D63.7020305@upc.edu> Message-ID: I added support for PETSC_DEFAULT here. Barry On Mon, 17 Jul 2006, Jordi Marc? Nogu? wrote: > Matthew Knepley wrote: >> 1) The error message is wrong. I fixed this in the dev tree. >> >> 2) 'fill' is the fill ratio for speculative allocation. It looks like >> you gave PETSC_DEFAULT which is invalid. You need to >> give a real number (the ratio of the final to initial allocation) >> so it should be > 1.0. > > > Ok. I write "1" and runs. You guessed, after I writed "PETSC_DEFAULT".... > but, I don't know/understand the value of "fill". There is a default value > for this parameter??? What is the best input??? > > thanks, > jordi > > PD: the solution was easy but the message "Nonconforming object sizes!" > confused me. > >> >> Matt >> >> On 7/14/06, *Jordi Marc? Nogu?* > > wrote: >> >> >> Hello, >> >> I'm using Petsc 2.3.1 in Debian (updated by apt-get) and I don't find >> information in the web and in the troubleshooting section about this >> error message: >> >> > [0]PETSC ERROR: MatMatMult() line 6543 in src/mat/interface/matrix.c >> > [0]PETSC ERROR: Nonconforming object sizes! >> > [0]PETSC ERROR: fill=-2 must be > 0.0! >> >> Maybe you know what does it mean because I've tried a lot of thing in >> my >> code to solve this problem :-((( My Matrices have the same dimension. >> >> regards,\j >> >> >> >> -- >> Jordi Marc?-Nogu? >> Dept. 
Resist?ncia de Materials i Estructures a l'Enginyeria >> Universitat Polit?cnica de Catalunya (UPC) >> >> Edifici T45 - despatx 137 >> ETSEIAT (Terrassa) >> >> phone: +34 937 398 728 >> mail: jordi.marce at upc.edu >> >> >> >> >> -- >> "Failure has a thousand explanations. Success doesn't need one" -- Sir Alec >> Guiness > > > From berend at chalmers.se Thu Jul 20 04:05:33 2006 From: berend at chalmers.se (Berend van Wachem) Date: Thu, 20 Jul 2006 11:05:33 +0200 Subject: distributed array In-Reply-To: <200607191504.49448.mafunk@nmsu.edu> References: <200607131624.20616.mafunk@nmsu.edu> <44B727E5.9080708@chalmers.se> <200607191504.49448.mafunk@nmsu.edu> Message-ID: <44BF475D.3040103@chalmers.se> Hi Matt, Yes, I create a DA for each domain (block I call it in the code) where M and M and N are the dimensions of that block. In my problem I do specify m and n, by a simple algorithm; I try to find a solution so that each processor has about the same number of nodes in total. I do this by taking the first block, looking at how much processors it would get (by dividing its number of nodes by the total number of nodes times the number of processors) and fill it up from that. For each block I create a seperate communicator as well, so each block has its own communicator. From the vectors of the various blocks I glue together one larger vector on which the computations are done, with these functions, #undef __FUNCT__ #define __FUNCT__ "VecOverGlobalToBlockGlobalBegin" int VecOverGlobalToBlockGlobalBegin(Vec* A, Vec** B, char *name, struct CFDMesh *Mesh) { int ierr,i; double *GPtr; PetscFunctionBegin; ierr=VecGetArray(*A,&GPtr); CHKERRQ(ierr); AddToListofLocalVectors(&Mesh->BGCC,Mesh->NMyBlocks,B,GPtr,name); for (i=0; iNMyBlocks; i++) { ierr=DAGetGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); CHKERRQ(ierr); ierr=VecPlaceArray((*B)[i], GPtr+Mesh->CFDBlocks[i].daoffset); CHKERRQ(ierr); } PetscFunctionReturn(0); } #undef __FUNCT__ #define __FUNCT__ "VecOverGlobalToBlockGlobalEnd" int VecOverGlobalToBlockGlobalEnd(Vec* A, Vec** B, struct CFDMesh *Mesh) { int ierr,i; double *GPtr; PetscFunctionBegin; if (!ExistListOfLocalVectors(Mesh->BGCC,*B,&GPtr)) { ErrorH(1,"Trying to delete a non existing vector from BGCC list"); } for (i=0; iNMyBlocks; i++) { ierr=VecResetArray((*B)[i]); CHKERRQ(ierr); ierr=DARestoreGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); CHKERRQ(ierr); } ierr=VecRestoreArray(*A,&GPtr); CHKERRQ(ierr); DeleteFromListOfLocalVectors(&Mesh->BGCC,*B); PetscFunctionReturn(0); } > Thanks for the response (and sorry for the late 'thanks'), > > i can see how this is supposed to work. However, my question is then with the > creation of the distributed array objects. I take it from your code that you > create a DA for each patch (?). M and N then would be the dimension of my 2D > patch. My question is what i would be specifying for n and m. I am a little > confused in general what m and n are. It says they denote the process > partition in each direction and that m*n must be equal the total number of > processes in the communicator. > Sorry, i am almost sure this is something simple. > > thanks > mat > > On Thursday 13 July 2006 23:13, Berend van Wachem wrote: > >> > Hi, >> > >> > i was wondering if it is possible (or advisable) to use a distributed >> >>array >> >> > for when my domain is not necessarily rectangular but my subdomains are >> > rectangular? 
>> > >> > thanks >> > mat >> >>Hi Matt, >> >>This is exactly what I do; I work with a multi-block CFD problem, where >>the total geometry consists of a number of structured (not neccesarily >>rectangular) blocks. Each block has its own DA. >> >>How I solved it, with help from the petsc team, is by making functions >>which translate vectors from the complete domain (called block global in >>my code) to each block (called block local in my code). The matrix >>computation is done at the blockglobal level, and I can manipulate the >>vectors at the block local level. Just to give an example, I attatch the >>code at the end of this email. If you want more help/information please >>let me know. >> >>Good luck, >> >>Berend. >> >> >>#undef __FUNCT__ >>#define __FUNCT__ "VecBlockGlobalToBlockLocalBegin" >>int VecBlockGlobalToBlockLocalBegin(Vec **B, Vec **A, char* name, struct >>CFDMesh *Mesh) >>{ >> int ierr, i; >> double *tmpPTR=NULL; >> >> PetscFunctionBegin; >> >> AddToListofLocalVectors(&Mesh->BLCC,Mesh->NMyBlocks,A,tmpPTR,name); >> >> for (i=0; iNMyBlocks; i++) >> { >> ierr=DAGetLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i])); >>CHKERRQ(ierr); >> >>ierr=DAGlobalToLocalBegin(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[ >>i]); CHKERRQ(ierr); >> >>ierr=DAGlobalToLocalEnd(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[i] >>); CHKERRQ(ierr); >> >> } >> >> PetscFunctionReturn(0); >>} >> >>#undef __FUNCT__ >>#define __FUNCT__ "VecBlockGlobalToBlockLocalEnd" >>int VecBlockGlobalToBlockLocalEnd(Vec **B, Vec **A, struct CFDMesh >>*Mesh,int CopyData) >>{ >> int i,ierr; >> >> PetscFunctionBegin; >> >> for (i=0; iNMyBlocks; i++) >> { >> if (CopyData==MF_COPY_DATA) >> { >> >>ierr=DALocalToGlobal(Mesh->CFDBlocks[i].da,(*A)[i],INSERT_VALUES,(*B)[i]); >>CHKERRQ(ierr); >> } >> ierr=DARestoreLocalVector(Mesh->CFDBlocks[i].da,&((*A)[i])); >>CHKERRQ(ierr); >> } >> DeleteFromListOfLocalVectors(&Mesh->BLCC,*A); >> >> PetscFunctionReturn(0); >> >>} > > -- /\-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-=-\ L_@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ | Berend van Wachem | | Multiphase Flow Group | | Chalmers University of Technology | | | | Please note that my email address has changed to: | | Berend at chalmers.se | | | | Please make the appropriate changes in your address | | list. | | | __@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ \/______________________________________________________/ From mafunk at nmsu.edu Thu Jul 20 09:31:14 2006 From: mafunk at nmsu.edu (mafunk at nmsu.edu) Date: Thu, 20 Jul 2006 08:31:14 -0600 Subject: distributed array In-Reply-To: <44BF475D.3040103@chalmers.se> References: <200607131624.20616.mafunk@nmsu.edu> <44B727E5.9080708@chalmers.se> <200607191504.49448.mafunk@nmsu.edu> <44BF475D.3040103@chalmers.se> Message-ID: <1153405874.44bf93b247f7b@webmail.nmsu.edu> Hi Berend, i think i am getting there. Sorry for me being a little slow (it's the first time I am using PETSc). I need to look at these functions you send me and see what the PETSc calls are. My problem is different from yours to some degree in that i already have loadbalanced subblocks. And multiple subblocks could already reside on the same processor. So what i was thinking was to create a DA and a local sequential vector on each subblock (and here i would specify m=n=1 since the local Seqvector/DA resides only on one proc, which would give me multiple Seqvectors/DA per processor), and then as you say, somehow glue together the "super"-vector (the global vector) ... :). 
Though that is the part which am i stuck on right now. So i guess the question that i have left for now are: Do you create an MPI vector for your global vector? Why did you have to create a communicator for each subblock? Sorry for pestering you again, but since you are in Sweden (or your email is anyway) i think i might not get you again today ... :) thanks for all your help mat Quoting Berend van Wachem : > Hi Matt, > > Yes, I create a DA for each domain (block I call it in the code) where M > and M and N are the dimensions of that block. > > In my problem I do specify m and n, by a simple algorithm; I try to find > a solution so that each processor has about the same number of nodes in > total. I do this by taking the first block, looking at how much > processors it would get (by dividing its number of nodes by the total > number of nodes times the number of processors) and fill it up from > that. For each block I create a seperate communicator as well, so each > block has its own communicator. > > From the vectors of the various blocks I glue together one larger > vector on which the computations are done, with these functions, > > #undef __FUNCT__ > #define __FUNCT__ "VecOverGlobalToBlockGlobalBegin" > int VecOverGlobalToBlockGlobalBegin(Vec* A, Vec** B, char *name, struct > CFDMesh *Mesh) > { > int ierr,i; > double *GPtr; > > PetscFunctionBegin; > > ierr=VecGetArray(*A,&GPtr); CHKERRQ(ierr); > AddToListofLocalVectors(&Mesh->BGCC,Mesh->NMyBlocks,B,GPtr,name); > > for (i=0; iNMyBlocks; i++) > { > ierr=DAGetGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); > CHKERRQ(ierr); > ierr=VecPlaceArray((*B)[i], GPtr+Mesh->CFDBlocks[i].daoffset); > CHKERRQ(ierr); > } > PetscFunctionReturn(0); > } > > > #undef __FUNCT__ > #define __FUNCT__ "VecOverGlobalToBlockGlobalEnd" > int VecOverGlobalToBlockGlobalEnd(Vec* A, Vec** B, struct CFDMesh *Mesh) > { > int ierr,i; > double *GPtr; > > PetscFunctionBegin; > > if (!ExistListOfLocalVectors(Mesh->BGCC,*B,&GPtr)) > { > ErrorH(1,"Trying to delete a non existing vector from BGCC list"); > } > > for (i=0; iNMyBlocks; i++) > { > ierr=VecResetArray((*B)[i]); CHKERRQ(ierr); > ierr=DARestoreGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); > CHKERRQ(ierr); > } > ierr=VecRestoreArray(*A,&GPtr); CHKERRQ(ierr); > DeleteFromListOfLocalVectors(&Mesh->BGCC,*B); > > PetscFunctionReturn(0); > } > > > > Thanks for the response (and sorry for the late 'thanks'), > > > > i can see how this is supposed to work. However, my question is then with > the > > creation of the distributed array objects. I take it from your code that > you > > create a DA for each patch (?). M and N then would be the dimension of my > 2D > > patch. My question is what i would be specifying for n and m. I am a little > > > confused in general what m and n are. It says they denote the process > > partition in each direction and that m*n must be equal the total number of > > > processes in the communicator. > > Sorry, i am almost sure this is something simple. > > > > thanks > > mat > > > > On Thursday 13 July 2006 23:13, Berend van Wachem wrote: > > > >> > Hi, > >> > > >> > i was wondering if it is possible (or advisable) to use a distributed > >> > >>array > >> > >> > for when my domain is not necessarily rectangular but my subdomains > are > >> > rectangular? > >> > > >> > thanks > >> > mat > >> > >>Hi Matt, > >> > >>This is exactly what I do; I work with a multi-block CFD problem, where > >>the total geometry consists of a number of structured (not neccesarily > >>rectangular) blocks. 
Each block has its own DA. > >> > >>How I solved it, with help from the petsc team, is by making functions > >>which translate vectors from the complete domain (called block global in > >>my code) to each block (called block local in my code). The matrix > >>computation is done at the blockglobal level, and I can manipulate the > >>vectors at the block local level. Just to give an example, I attatch the > >>code at the end of this email. If you want more help/information please > >>let me know. > >> > >>Good luck, > >> > >>Berend. > >> > >> > >>#undef __FUNCT__ > >>#define __FUNCT__ "VecBlockGlobalToBlockLocalBegin" > >>int VecBlockGlobalToBlockLocalBegin(Vec **B, Vec **A, char* name, struct > >>CFDMesh *Mesh) > >>{ > >> int ierr, i; > >> double *tmpPTR=NULL; > >> > >> PetscFunctionBegin; > >> > >> AddToListofLocalVectors(&Mesh->BLCC,Mesh->NMyBlocks,A,tmpPTR,name); > >> > >> for (i=0; iNMyBlocks; i++) > >> { > >> ierr=DAGetLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i])); > >>CHKERRQ(ierr); > >> > >>ierr=DAGlobalToLocalBegin(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[ > >>i]); CHKERRQ(ierr); > >> > >>ierr=DAGlobalToLocalEnd(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[i] > >>); CHKERRQ(ierr); > >> > >> } > >> > >> PetscFunctionReturn(0); > >>} > >> > >>#undef __FUNCT__ > >>#define __FUNCT__ "VecBlockGlobalToBlockLocalEnd" > >>int VecBlockGlobalToBlockLocalEnd(Vec **B, Vec **A, struct CFDMesh > >>*Mesh,int CopyData) > >>{ > >> int i,ierr; > >> > >> PetscFunctionBegin; > >> > >> for (i=0; iNMyBlocks; i++) > >> { > >> if (CopyData==MF_COPY_DATA) > >> { > >> > >>ierr=DALocalToGlobal(Mesh->CFDBlocks[i].da,(*A)[i],INSERT_VALUES,(*B)[i]); > >>CHKERRQ(ierr); > >> } > >> ierr=DARestoreLocalVector(Mesh->CFDBlocks[i].da,&((*A)[i])); > >>CHKERRQ(ierr); > >> } > >> DeleteFromListOfLocalVectors(&Mesh->BLCC,*A); > >> > >> PetscFunctionReturn(0); > >> > >>} > > > > > > > -- > /\-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-=-\ > L_@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ > | Berend van Wachem | > | Multiphase Flow Group | > | Chalmers University of Technology | > | | > | Please note that my email address has changed to: | > | Berend at chalmers.se | > | | > | Please make the appropriate changes in your address | > | list. | > | | > __@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ > \/______________________________________________________/ > > From berend at chalmers.se Thu Jul 20 13:21:36 2006 From: berend at chalmers.se (Berend van Wachem) Date: Thu, 20 Jul 2006 20:21:36 +0200 Subject: distributed array In-Reply-To: <1153405874.44bf93b247f7b@webmail.nmsu.edu> References: <200607131624.20616.mafunk@nmsu.edu> <44B727E5.9080708@chalmers.se> <200607191504.49448.mafunk@nmsu.edu> <44BF475D.3040103@chalmers.se> <1153405874.44bf93b247f7b@webmail.nmsu.edu> Message-ID: <44BFC9B0.2090605@chalmers.se> Hi Mat, I understand it's confusing in the beginning - it took me a while to grasp it as well. > So what i was thinking was to create a DA and a local sequential vector on each > subblock (and here i would specify m=n=1 since the local Seqvector/DA resides > only on one proc, which would give me multiple Seqvectors/DA per processor), and > then as you say, somehow glue together the "super"-vector (the global vector) > ... :). Though that is the part which am i stuck on right now. > Why did you have to create a communicator for each subblock? 
> If everything is balanced and the blocks have the same size, you could specify m=n=1, but you would still need to make a seperate communicator for each of the blocks; how would the DA otherwise know on which processor it lies on? I created the communicators with ierr = MPI_Comm_group(PETSC_COMM_WORLD, &tmpGroup); CHKERRQ(ierr); for (i = 0; i < Mesh->TOTBlocks; i++) { ierr = MPI_Group_incl(tmpGroup, Mesh->CFDParBlocks[i].NProc, Mesh->CFDParBlocks[i].Proclist, &Mesh->CFDParBlocks[i].BlockGroup); CHKERRQ(ierr); ierr = MPI_Comm_create(PETSC_COMM_WORLD, Mesh->CFDParBlocks[i].BlockGroup, &(Mesh->CFDParBlocks[i].BlockComm)); CHKERRQ(ierr); } Another tip on communicators. Why it is also convenient to have a communicator for one processor is for data processing from file. For instance, I have the initial data in one file. I make a DA, similar to the DA I want to use later on, but for just one processor. I read the data, and then save the DA to disk. Then I make the real DA with the communicator and the number of processors I want to have and use DALoad to read the DA - and all the data is transferred to the appropriate processor. > Do you create an MPI vector for your global vector? Yes - because I want to get a solution over the COMPLETE problem in one matrix. If your blocks are in any way connected, you will want to do this as well. "Glueing" the DA vectors into one big one is not difficult if you use the length of the various DA vectors and glue them together (see code I sent in previous email) One thing you also have to think about is cross-block-addressing. One point in one block has a neighbour it is dependent upon in another block. In the code I sent you, this addressjump is called "daoffset" and is member of the block structure (I am an old C++ programmer so I use mostly structures). > Sorry for pestering you again, but since you are in Sweden (or your email is > anyway) i think i might not get you again today ... :) No problem - just ask. It took me a while before I understood it and the best thing is to look at examples that come with petsc and maybe the code examples I send you, please let me know if you want me to send more. Good luck, Berend. > thanks for all your help > mat > > > > Quoting Berend van Wachem : > > >>Hi Matt, >> >>Yes, I create a DA for each domain (block I call it in the code) where M >>and M and N are the dimensions of that block. >> >>In my problem I do specify m and n, by a simple algorithm; I try to find >>a solution so that each processor has about the same number of nodes in >>total. I do this by taking the first block, looking at how much >>processors it would get (by dividing its number of nodes by the total >>number of nodes times the number of processors) and fill it up from >>that. For each block I create a seperate communicator as well, so each >>block has its own communicator. 
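A compressed sketch of one DA per block on that block's own communicator, assuming blockComm was built with MPI_Comm_create() as in the snippet above (and is not MPI_COMM_NULL on this process), and M, N stand in for that block's grid dimensions:

DA             da;
Vec            gvec;
PetscErrorCode ierr;

ierr = DACreate2d(blockComm, DA_NONPERIODIC, DA_STENCIL_STAR,
                  M, N,                        /* global grid size of this block */
                  PETSC_DECIDE, PETSC_DECIDE,  /* m, n: processor layout inside blockComm */
                  1,                           /* degrees of freedom per node */
                  1,                           /* stencil width (ghost layer) */
                  PETSC_NULL, PETSC_NULL, &da);CHKERRQ(ierr);

ierr = DACreateGlobalVector(da, &gvec);CHKERRQ(ierr);

Only the processes that belong to blockComm make these calls; with m = n = PETSC_DECIDE the partitioning inside the block is left to PETSc.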
>> >> From the vectors of the various blocks I glue together one larger >>vector on which the computations are done, with these functions, >> >>#undef __FUNCT__ >>#define __FUNCT__ "VecOverGlobalToBlockGlobalBegin" >>int VecOverGlobalToBlockGlobalBegin(Vec* A, Vec** B, char *name, struct >>CFDMesh *Mesh) >>{ >> int ierr,i; >> double *GPtr; >> >> PetscFunctionBegin; >> >> ierr=VecGetArray(*A,&GPtr); CHKERRQ(ierr); >> AddToListofLocalVectors(&Mesh->BGCC,Mesh->NMyBlocks,B,GPtr,name); >> >> for (i=0; iNMyBlocks; i++) >> { >> ierr=DAGetGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); >>CHKERRQ(ierr); >> ierr=VecPlaceArray((*B)[i], GPtr+Mesh->CFDBlocks[i].daoffset); >> CHKERRQ(ierr); >> } >> PetscFunctionReturn(0); >>} >> >> >>#undef __FUNCT__ >>#define __FUNCT__ "VecOverGlobalToBlockGlobalEnd" >>int VecOverGlobalToBlockGlobalEnd(Vec* A, Vec** B, struct CFDMesh *Mesh) >>{ >> int ierr,i; >> double *GPtr; >> >> PetscFunctionBegin; >> >> if (!ExistListOfLocalVectors(Mesh->BGCC,*B,&GPtr)) >> { >> ErrorH(1,"Trying to delete a non existing vector from BGCC list"); >> } >> >> for (i=0; iNMyBlocks; i++) >> { >> ierr=VecResetArray((*B)[i]); CHKERRQ(ierr); >> ierr=DARestoreGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); >>CHKERRQ(ierr); >> } >> ierr=VecRestoreArray(*A,&GPtr); CHKERRQ(ierr); >> DeleteFromListOfLocalVectors(&Mesh->BGCC,*B); >> >> PetscFunctionReturn(0); >>} >> >> >> >>>Thanks for the response (and sorry for the late 'thanks'), >>> >>>i can see how this is supposed to work. However, my question is then with >> >>the >> >>>creation of the distributed array objects. I take it from your code that >> >>you >> >>>create a DA for each patch (?). M and N then would be the dimension of my >> >>2D >> >>>patch. My question is what i would be specifying for n and m. I am a little >> >>>confused in general what m and n are. It says they denote the process >>>partition in each direction and that m*n must be equal the total number of >> >>>processes in the communicator. >>>Sorry, i am almost sure this is something simple. >>> >>>thanks >>>mat >>> >>>On Thursday 13 July 2006 23:13, Berend van Wachem wrote: >>> >>> >>>>>Hi, >>>>> >>>>>i was wondering if it is possible (or advisable) to use a distributed >>>> >>>>array >>>> >>>> >>>>>for when my domain is not necessarily rectangular but my subdomains >> >>are >> >>>>>rectangular? >>>>> >>>>>thanks >>>>>mat >>>> >>>>Hi Matt, >>>> >>>>This is exactly what I do; I work with a multi-block CFD problem, where >>>>the total geometry consists of a number of structured (not neccesarily >>>>rectangular) blocks. Each block has its own DA. >>>> >>>>How I solved it, with help from the petsc team, is by making functions >>>>which translate vectors from the complete domain (called block global in >>>>my code) to each block (called block local in my code). The matrix >>>>computation is done at the blockglobal level, and I can manipulate the >>>>vectors at the block local level. Just to give an example, I attatch the >>>>code at the end of this email. If you want more help/information please >>>>let me know. >>>> >>>>Good luck, >>>> >>>>Berend. 
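One remark on the listings above and below: the archive appears to have swallowed the "<Mesh->" in the loop headers (it looks like an HTML tag), so "for (i=0; iNMyBlocks; i++)" was presumably "for (i=0; i < Mesh->NMyBlocks; i++)". A self-contained sketch of the same glueing idea, with one overall MPI vector owning the storage and each block's DA global vector placed onto an offset into it, might look like the code below. It is not Berend's code: Block, blk and GlueAndWorkOnBlocks are illustrative names, the daoffset of each block is simply the running sum of the preceding block vectors' local lengths, and the calls follow the 2006-era (PETSc 2.3) API.

#include "petscda.h"

typedef struct { DA da; PetscInt daoffset; Vec gvec; } Block;  /* illustrative */

int GlueAndWorkOnBlocks(int NBlocks, Block *blk)
{
  int         ierr, i;
  PetscInt    nloc, ntotal = 0;
  PetscScalar *GPtr;
  Vec         overglobal;

  PetscFunctionBegin;
  /* local size of the overall vector = sum of the block vectors' local sizes;
     daoffset remembers where each block starts inside that storage */
  for (i = 0; i < NBlocks; i++) {
    ierr = DAGetGlobalVector(blk[i].da, &blk[i].gvec); CHKERRQ(ierr);
    ierr = VecGetLocalSize(blk[i].gvec, &nloc); CHKERRQ(ierr);
    blk[i].daoffset = ntotal;
    ntotal         += nloc;
  }
  ierr = VecCreateMPI(PETSC_COMM_WORLD, ntotal, PETSC_DETERMINE, &overglobal); CHKERRQ(ierr);

  /* let every block vector use its slice of the overall vector's storage */
  ierr = VecGetArray(overglobal, &GPtr); CHKERRQ(ierr);
  for (i = 0; i < NBlocks; i++) {
    ierr = VecPlaceArray(blk[i].gvec, GPtr + blk[i].daoffset); CHKERRQ(ierr);
  }

  /* ... per-block work on blk[i].gvec goes here; the overall vector sees it ... */

  /* undo, as in the End routine above */
  for (i = 0; i < NBlocks; i++) {
    ierr = VecResetArray(blk[i].gvec); CHKERRQ(ierr);
    ierr = DARestoreGlobalVector(blk[i].da, &blk[i].gvec); CHKERRQ(ierr);
  }
  ierr = VecRestoreArray(overglobal, &GPtr); CHKERRQ(ierr);
  ierr = VecDestroy(overglobal); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}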
>>>> >>>> >>>>#undef __FUNCT__ >>>>#define __FUNCT__ "VecBlockGlobalToBlockLocalBegin" >>>>int VecBlockGlobalToBlockLocalBegin(Vec **B, Vec **A, char* name, struct >>>>CFDMesh *Mesh) >>>>{ >>>> int ierr, i; >>>> double *tmpPTR=NULL; >>>> >>>> PetscFunctionBegin; >>>> >>>> AddToListofLocalVectors(&Mesh->BLCC,Mesh->NMyBlocks,A,tmpPTR,name); >>>> >>>> for (i=0; iNMyBlocks; i++) >>>> { >>>> ierr=DAGetLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i])); >>>>CHKERRQ(ierr); >>>> >>>>ierr=DAGlobalToLocalBegin(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[ >>>>i]); CHKERRQ(ierr); >>>> >>>>ierr=DAGlobalToLocalEnd(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[i] >>>>); CHKERRQ(ierr); >>>> >>>> } >>>> >>>> PetscFunctionReturn(0); >>>>} >>>> >>>>#undef __FUNCT__ >>>>#define __FUNCT__ "VecBlockGlobalToBlockLocalEnd" >>>>int VecBlockGlobalToBlockLocalEnd(Vec **B, Vec **A, struct CFDMesh >>>>*Mesh,int CopyData) >>>>{ >>>> int i,ierr; >>>> >>>> PetscFunctionBegin; >>>> >>>> for (i=0; iNMyBlocks; i++) >>>> { >>>> if (CopyData==MF_COPY_DATA) >>>> { >>>> >>>>ierr=DALocalToGlobal(Mesh->CFDBlocks[i].da,(*A)[i],INSERT_VALUES,(*B)[i]); >>>>CHKERRQ(ierr); >>>> } >>>> ierr=DARestoreLocalVector(Mesh->CFDBlocks[i].da,&((*A)[i])); >>>>CHKERRQ(ierr); >>>> } >>>> DeleteFromListOfLocalVectors(&Mesh->BLCC,*A); >>>> >>>> PetscFunctionReturn(0); >>>> >>>>} >>> >>> >> >>-- >> /\-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-=-\ >> L_@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ >> | Berend van Wachem | >> | Multiphase Flow Group | >> | Chalmers University of Technology | >> | | >> | Please note that my email address has changed to: | >> | Berend at chalmers.se | >> | | >> | Please make the appropriate changes in your address | >> | list. | >> | | >> __@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ >> \/______________________________________________________/ >> >> > > > -- /\-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-=-\ L_@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ | Berend van Wachem | | Multiphase Flow Group | | Chalmers University of Technology | | | | Please note that my email address has changed to: | | Berend at chalmers.se | | | | Please make the appropriate changes in your address | | list. | | | __@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ \/______________________________________________________/ From mafunk at nmsu.edu Thu Jul 20 14:06:14 2006 From: mafunk at nmsu.edu (Matt Funk) Date: Thu, 20 Jul 2006 13:06:14 -0600 Subject: distributed array In-Reply-To: <44BFC9B0.2090605@chalmers.se> References: <200607131624.20616.mafunk@nmsu.edu> <1153405874.44bf93b247f7b@webmail.nmsu.edu> <44BFC9B0.2090605@chalmers.se> Message-ID: <200607201306.15731.mafunk@nmsu.edu> Hi Berend, your code examples helped me a lot to simply get a glimpse of how some of this stuff works. And i think i understand mostly what your functions are doing. However, after going through your code i rethought what i am trying to do. And i am not sure if the idea of using DA's is the best way or not. The reason i initially thought of using DA's was due to its description in the user manual " ... are intended for use with logically regular rectangular grids when communication of nonlocal data is needed before certain local computations can occur." Well, my problem is such that i already have my right hand side. What i need to do is transfer this to a PETSc vector and build the matrix. 
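If the right-hand side already exists as an ordinary array, getting it into PETSc can be as simple as the following sketch. It is illustrative only: RhsToPetsc, nlocal, rowstart and myrhs are placeholder names for what the existing code already knows, and the calls assume the 2006-era (PETSc 2.3) API. There are also VecCreateSeqWithArray()/VecCreateMPIWithArray() variants that wrap existing storage without copying, which come up again later in the thread.

#include "petscvec.h"

int RhsToPetsc(PetscInt nlocal, PetscInt rowstart, const double *myrhs, Vec *b)
{
  int      ierr;
  PetscInt i;

  PetscFunctionBegin;
  ierr = VecCreateMPI(PETSC_COMM_WORLD, nlocal, PETSC_DETERMINE, b); CHKERRQ(ierr);
  for (i = 0; i < nlocal; i++) {
    PetscInt    row = rowstart + i;   /* global index of a locally owned entry */
    PetscScalar val = myrhs[i];
    ierr = VecSetValues(*b, 1, &row, &val, INSERT_VALUES); CHKERRQ(ierr);
  }
  ierr = VecAssemblyBegin(*b); CHKERRQ(ierr);
  ierr = VecAssemblyEnd(*b); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The matrix entries go in analogously with MatSetValues() followed by MatAssemblyBegin()/MatAssemblyEnd().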
So i guess i don't know what the advantage is to me in using DA's because my right hand side is ready to be used once i get to PETSc. My problem is different from yours (i think anyway) in that any of your blocks are distributed over several processors. For me a single processors can own several blocks. What i am not sure of is whether that makes a difference in how i approach the problem vs what you are doing. So the most basic question (and maybe somebody else has an answer to that as well) i have is what the advantage of the DA is as opposed to using a regular vector when my right hand side is already given? Because when i solve the linear system i believe PETSc will take care of the nonlocal communication that has to be done during the solve for me, right? mat On Thursday 20 July 2006 12:21, Berend van Wachem wrote: > Hi Mat, > > I understand it's confusing in the beginning - it took me a while to > grasp it as well. > > > So what i was thinking was to create a DA and a local sequential vector > > on each subblock (and here i would specify m=n=1 since the local > > Seqvector/DA resides only on one proc, which would give me multiple > > Seqvectors/DA per processor), and then as you say, somehow glue together > > the "super"-vector (the global vector) ... :). Though that is the part > > which am i stuck on right now. > > > > Why did you have to create a communicator for each subblock? > > If everything is balanced and the blocks have the same size, you could > specify m=n=1, but you would still need to make a seperate communicator > for each of the blocks; how would the DA otherwise know on which > processor it lies on? I created the communicators with > > ierr = MPI_Comm_group(PETSC_COMM_WORLD, &tmpGroup); > CHKERRQ(ierr); > for (i = 0; i < Mesh->TOTBlocks; i++) > { > ierr = MPI_Group_incl(tmpGroup, Mesh->CFDParBlocks[i].NProc, > Mesh->CFDParBlocks[i].Proclist, &Mesh->CFDParBlocks[i].BlockGroup); > CHKERRQ(ierr); > > ierr = MPI_Comm_create(PETSC_COMM_WORLD, > Mesh->CFDParBlocks[i].BlockGroup, &(Mesh->CFDParBlocks[i].BlockComm)); > CHKERRQ(ierr); > > } > > > Another tip on communicators. Why it is also convenient to have a > communicator for one processor is for data processing from file. For > instance, I have the initial data in one file. I make a DA, similar to > the DA I want to use later on, but for just one processor. I read the > data, and then save the DA to disk. Then I make the real DA with the > communicator and the number of processors I want to have and use DALoad > to read the DA - and all the data is transferred to the appropriate > processor. > > > Do you create an MPI vector for your global vector? > > Yes - because I want to get a solution over the COMPLETE problem in one > matrix. If your blocks are in any way connected, you will want to do > this as well. > > "Glueing" the DA vectors into one big one is not difficult if you use > the length of the various DA vectors and glue them together (see code I > sent in previous email) > > One thing you also have to think about is cross-block-addressing. One > point in one block has a neighbour it is dependent upon in another > block. In the code I sent you, this addressjump is called "daoffset" and > is member of the block structure (I am an old C++ programmer so I use > mostly structures). > > > Sorry for pestering you again, but since you are in Sweden (or your email > > is anyway) i think i might not get you again today ... :) > > No problem - just ask. 
It took me a while before I understood it and the > best thing is to look at examples that come with petsc and maybe the > code examples I send you, please let me know if you want me to send more. > > Good luck, > > Berend. > > > thanks for all your help > > mat > > > > Quoting Berend van Wachem : > >>Hi Matt, > >> > >>Yes, I create a DA for each domain (block I call it in the code) where M > >>and M and N are the dimensions of that block. > >> > >>In my problem I do specify m and n, by a simple algorithm; I try to find > >>a solution so that each processor has about the same number of nodes in > >>total. I do this by taking the first block, looking at how much > >>processors it would get (by dividing its number of nodes by the total > >>number of nodes times the number of processors) and fill it up from > >>that. For each block I create a seperate communicator as well, so each > >>block has its own communicator. > >> > >> From the vectors of the various blocks I glue together one larger > >>vector on which the computations are done, with these functions, > >> > >>#undef __FUNCT__ > >>#define __FUNCT__ "VecOverGlobalToBlockGlobalBegin" > >>int VecOverGlobalToBlockGlobalBegin(Vec* A, Vec** B, char *name, struct > >>CFDMesh *Mesh) > >>{ > >> int ierr,i; > >> double *GPtr; > >> > >> PetscFunctionBegin; > >> > >> ierr=VecGetArray(*A,&GPtr); CHKERRQ(ierr); > >> AddToListofLocalVectors(&Mesh->BGCC,Mesh->NMyBlocks,B,GPtr,name); > >> > >> for (i=0; iNMyBlocks; i++) > >> { > >> ierr=DAGetGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); > >>CHKERRQ(ierr); > >> ierr=VecPlaceArray((*B)[i], GPtr+Mesh->CFDBlocks[i].daoffset); > >> CHKERRQ(ierr); > >> } > >> PetscFunctionReturn(0); > >>} > >> > >> > >>#undef __FUNCT__ > >>#define __FUNCT__ "VecOverGlobalToBlockGlobalEnd" > >>int VecOverGlobalToBlockGlobalEnd(Vec* A, Vec** B, struct CFDMesh *Mesh) > >>{ > >> int ierr,i; > >> double *GPtr; > >> > >> PetscFunctionBegin; > >> > >> if (!ExistListOfLocalVectors(Mesh->BGCC,*B,&GPtr)) > >> { > >> ErrorH(1,"Trying to delete a non existing vector from BGCC list"); > >> } > >> > >> for (i=0; iNMyBlocks; i++) > >> { > >> ierr=VecResetArray((*B)[i]); CHKERRQ(ierr); > >> ierr=DARestoreGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); > >>CHKERRQ(ierr); > >> } > >> ierr=VecRestoreArray(*A,&GPtr); CHKERRQ(ierr); > >> DeleteFromListOfLocalVectors(&Mesh->BGCC,*B); > >> > >> PetscFunctionReturn(0); > >>} > >> > >>>Thanks for the response (and sorry for the late 'thanks'), > >>> > >>>i can see how this is supposed to work. However, my question is then > >>> with > >> > >>the > >> > >>>creation of the distributed array objects. I take it from your code that > >> > >>you > >> > >>>create a DA for each patch (?). M and N then would be the dimension of > >>> my > >> > >>2D > >> > >>>patch. My question is what i would be specifying for n and m. I am a > >>> little > >>> > >>>confused in general what m and n are. It says they denote the process > >>>partition in each direction and that m*n must be equal the total number > >>> of > >>> > >>>processes in the communicator. > >>>Sorry, i am almost sure this is something simple. > >>> > >>>thanks > >>>mat > >>> > >>>On Thursday 13 July 2006 23:13, Berend van Wachem wrote: > >>>>>Hi, > >>>>> > >>>>>i was wondering if it is possible (or advisable) to use a distributed > >>>> > >>>>array > >>>> > >>>>>for when my domain is not necessarily rectangular but my subdomains > >> > >>are > >> > >>>>>rectangular? 
> >>>>> > >>>>>thanks > >>>>>mat > >>>> > >>>>Hi Matt, > >>>> > >>>>This is exactly what I do; I work with a multi-block CFD problem, where > >>>>the total geometry consists of a number of structured (not neccesarily > >>>>rectangular) blocks. Each block has its own DA. > >>>> > >>>>How I solved it, with help from the petsc team, is by making functions > >>>>which translate vectors from the complete domain (called block global > >>>> in my code) to each block (called block local in my code). The matrix > >>>> computation is done at the blockglobal level, and I can manipulate the > >>>> vectors at the block local level. Just to give an example, I attatch > >>>> the code at the end of this email. If you want more help/information > >>>> please let me know. > >>>> > >>>>Good luck, > >>>> > >>>>Berend. > >>>> > >>>> > >>>>#undef __FUNCT__ > >>>>#define __FUNCT__ "VecBlockGlobalToBlockLocalBegin" > >>>>int VecBlockGlobalToBlockLocalBegin(Vec **B, Vec **A, char* name, > >>>> struct CFDMesh *Mesh) > >>>>{ > >>>> int ierr, i; > >>>> double *tmpPTR=NULL; > >>>> > >>>> PetscFunctionBegin; > >>>> > >>>> AddToListofLocalVectors(&Mesh->BLCC,Mesh->NMyBlocks,A,tmpPTR,name); > >>>> > >>>> for (i=0; iNMyBlocks; i++) > >>>> { > >>>> ierr=DAGetLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i])); > >>>>CHKERRQ(ierr); > >>>> > >>>>ierr=DAGlobalToLocalBegin(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,( > >>>>*A)[ i]); CHKERRQ(ierr); > >>>> > >>>>ierr=DAGlobalToLocalEnd(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A > >>>>)[i] ); CHKERRQ(ierr); > >>>> > >>>> } > >>>> > >>>> PetscFunctionReturn(0); > >>>>} > >>>> > >>>>#undef __FUNCT__ > >>>>#define __FUNCT__ "VecBlockGlobalToBlockLocalEnd" > >>>>int VecBlockGlobalToBlockLocalEnd(Vec **B, Vec **A, struct CFDMesh > >>>>*Mesh,int CopyData) > >>>>{ > >>>> int i,ierr; > >>>> > >>>> PetscFunctionBegin; > >>>> > >>>> for (i=0; iNMyBlocks; i++) > >>>> { > >>>> if (CopyData==MF_COPY_DATA) > >>>> { > >>>> > >>>>ierr=DALocalToGlobal(Mesh->CFDBlocks[i].da,(*A)[i],INSERT_VALUES,(*B)[i > >>>>]); CHKERRQ(ierr); > >>>> } > >>>> ierr=DARestoreLocalVector(Mesh->CFDBlocks[i].da,&((*A)[i])); > >>>>CHKERRQ(ierr); > >>>> } > >>>> DeleteFromListOfLocalVectors(&Mesh->BLCC,*A); > >>>> > >>>> PetscFunctionReturn(0); > >>>> > >>>>} > >> > >>-- > >> /\-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-=-\ > >> L_@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ > >> > >> | Berend van Wachem | > >> | Multiphase Flow Group | > >> | Chalmers University of Technology | > >> | > >> | Please note that my email address has changed to: | > >> | Berend at chalmers.se | > >> | > >> | Please make the appropriate changes in your address | > >> | list. | > >> > >> __@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ > >> \/______________________________________________________/ From berend at chalmers.se Thu Jul 20 15:19:50 2006 From: berend at chalmers.se (Berend van Wachem) Date: Thu, 20 Jul 2006 22:19:50 +0200 Subject: distributed array In-Reply-To: <200607201306.15731.mafunk@nmsu.edu> References: <200607131624.20616.mafunk@nmsu.edu> <1153405874.44bf93b247f7b@webmail.nmsu.edu> <44BFC9B0.2090605@chalmers.se> <200607201306.15731.mafunk@nmsu.edu> Message-ID: <44BFE566.2020702@chalmers.se> Dear Mat, > However, after going through your code i rethought what i am trying to do. And > i am not sure if the idea of using DA's is the best way or not. > The reason i initially thought of using DA's was due to its description in the > user manual " ... 
are intended for use with logically regular rectangular > grids when communication of nonlocal data is needed before certain local > computations can occur." > > Well, my problem is such that i already have my right hand side. What i need > to do is transfer this to a PETSc vector and build the matrix. > So i guess i don't know what the advantage is to me in using DA's because my > right hand side is ready to be used once i get to PETSc. I would advise you to be careful to keep your own program and start inserting vectors and copying your existing arrays into these vectors. It really pays off redeveloping (parts of) the code. I agree, if you have each DA on one processor, then there is no need to use a DA. A DA would give you the flexibility of easily running the problem on more processors. > So the most basic question (and maybe somebody else has an answer to that as > well) i have is what the advantage of the DA is as opposed to using a regular > vector when my right hand side is already given? Because when i solve the > linear system i believe PETSc will take care of the nonlocal communication > that has to be done during the solve for me, right? I don't think you have any use of a DA in your case. The communications for solving the matrix is taken care of. What you need to think about is how the blocks are connected and set-up IS and scatter for that communication; you will probably need the same addressing when you fill your matrix. Good luck, Berend. From mafunk at nmsu.edu Thu Jul 20 15:28:53 2006 From: mafunk at nmsu.edu (Matt Funk) Date: Thu, 20 Jul 2006 14:28:53 -0600 Subject: distributed array In-Reply-To: <44BFC9B0.2090605@chalmers.se> References: <200607131624.20616.mafunk@nmsu.edu> <1153405874.44bf93b247f7b@webmail.nmsu.edu> <44BFC9B0.2090605@chalmers.se> Message-ID: <200607201428.53567.mafunk@nmsu.edu> One other thing, in your function VecOverGlobalToBlockGlobalBegin and VecOverGlobalToBlockGlobalEnd: i assume that Vec *A is a the pointer to the blockglobal Vector? further Vec **B is an array of pointers to MPI vectors where each element of the array is a MPI vector associated with one subblock? if that is so, then this is what i believe your functions are doing (please correct me if i am wrong): VecOverGlobalToBlockGlobalBegin: splits the blockglobal vector A into its MPI subblock vectors VecOverGlobalToBlockGlobalEnd: restores the vectors And in between these two function calls you can mess with the MPI subblock vectors? mat then you iterate over all blocks (i assume this is the glueing part? ) On Thursday 20 July 2006 12:21, Berend van Wachem wrote: > Hi Mat, > > I understand it's confusing in the beginning - it took me a while to > grasp it as well. > > > So what i was thinking was to create a DA and a local sequential vector > > on each subblock (and here i would specify m=n=1 since the local > > Seqvector/DA resides only on one proc, which would give me multiple > > Seqvectors/DA per processor), and then as you say, somehow glue together > > the "super"-vector (the global vector) ... :). Though that is the part > > which am i stuck on right now. > > > > Why did you have to create a communicator for each subblock? > > If everything is balanced and the blocks have the same size, you could > specify m=n=1, but you would still need to make a seperate communicator > for each of the blocks; how would the DA otherwise know on which > processor it lies on? 
I created the communicators with > > ierr = MPI_Comm_group(PETSC_COMM_WORLD, &tmpGroup); > CHKERRQ(ierr); > for (i = 0; i < Mesh->TOTBlocks; i++) > { > ierr = MPI_Group_incl(tmpGroup, Mesh->CFDParBlocks[i].NProc, > Mesh->CFDParBlocks[i].Proclist, &Mesh->CFDParBlocks[i].BlockGroup); > CHKERRQ(ierr); > > ierr = MPI_Comm_create(PETSC_COMM_WORLD, > Mesh->CFDParBlocks[i].BlockGroup, &(Mesh->CFDParBlocks[i].BlockComm)); > CHKERRQ(ierr); > > } > > > Another tip on communicators. Why it is also convenient to have a > communicator for one processor is for data processing from file. For > instance, I have the initial data in one file. I make a DA, similar to > the DA I want to use later on, but for just one processor. I read the > data, and then save the DA to disk. Then I make the real DA with the > communicator and the number of processors I want to have and use DALoad > to read the DA - and all the data is transferred to the appropriate > processor. > > > Do you create an MPI vector for your global vector? > > Yes - because I want to get a solution over the COMPLETE problem in one > matrix. If your blocks are in any way connected, you will want to do > this as well. > > "Glueing" the DA vectors into one big one is not difficult if you use > the length of the various DA vectors and glue them together (see code I > sent in previous email) > > One thing you also have to think about is cross-block-addressing. One > point in one block has a neighbour it is dependent upon in another > block. In the code I sent you, this addressjump is called "daoffset" and > is member of the block structure (I am an old C++ programmer so I use > mostly structures). > > > Sorry for pestering you again, but since you are in Sweden (or your email > > is anyway) i think i might not get you again today ... :) > > No problem - just ask. It took me a while before I understood it and the > best thing is to look at examples that come with petsc and maybe the > code examples I send you, please let me know if you want me to send more. > > Good luck, > > Berend. > > > thanks for all your help > > mat > > > > Quoting Berend van Wachem : > >>Hi Matt, > >> > >>Yes, I create a DA for each domain (block I call it in the code) where M > >>and M and N are the dimensions of that block. > >> > >>In my problem I do specify m and n, by a simple algorithm; I try to find > >>a solution so that each processor has about the same number of nodes in > >>total. I do this by taking the first block, looking at how much > >>processors it would get (by dividing its number of nodes by the total > >>number of nodes times the number of processors) and fill it up from > >>that. For each block I create a seperate communicator as well, so each > >>block has its own communicator. 
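The proportional assignment described here can be sketched in a few lines. This is not code from the thread: nodes, nblocks and procs_per_block are made-up names, and a real implementation would also have to make the rounded counts add up to the total number of processes before building the Proclist arrays fed to the MPI_Group_incl() call quoted above.

void AssignProcsToBlocks(int nblocks, const long *nodes, int nprocs, int *procs_per_block)
{
  long total = 0;
  int  i, assigned = 0;

  for (i = 0; i < nblocks; i++) total += nodes[i];

  for (i = 0; i < nblocks; i++) {
    /* this block's share of all nodes, scaled to the number of processes */
    int p = (int)((double)nodes[i] / (double)total * (double)nprocs + 0.5);
    if (p < 1) p = 1;                 /* every block needs at least one process */
    procs_per_block[i] = p;
    assigned += p;
  }
  /* crude fix-up so the counts sum to nprocs; a real code would rebalance */
  procs_per_block[nblocks - 1] += nprocs - assigned;
}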
> >> > >> From the vectors of the various blocks I glue together one larger > >>vector on which the computations are done, with these functions, > >> > >>#undef __FUNCT__ > >>#define __FUNCT__ "VecOverGlobalToBlockGlobalBegin" > >>int VecOverGlobalToBlockGlobalBegin(Vec* A, Vec** B, char *name, struct > >>CFDMesh *Mesh) > >>{ > >> int ierr,i; > >> double *GPtr; > >> > >> PetscFunctionBegin; > >> > >> ierr=VecGetArray(*A,&GPtr); CHKERRQ(ierr); > >> AddToListofLocalVectors(&Mesh->BGCC,Mesh->NMyBlocks,B,GPtr,name); > >> > >> for (i=0; iNMyBlocks; i++) > >> { > >> ierr=DAGetGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); > >>CHKERRQ(ierr); > >> ierr=VecPlaceArray((*B)[i], GPtr+Mesh->CFDBlocks[i].daoffset); > >> CHKERRQ(ierr); > >> } > >> PetscFunctionReturn(0); > >>} > >> > >> > >>#undef __FUNCT__ > >>#define __FUNCT__ "VecOverGlobalToBlockGlobalEnd" > >>int VecOverGlobalToBlockGlobalEnd(Vec* A, Vec** B, struct CFDMesh *Mesh) > >>{ > >> int ierr,i; > >> double *GPtr; > >> > >> PetscFunctionBegin; > >> > >> if (!ExistListOfLocalVectors(Mesh->BGCC,*B,&GPtr)) > >> { > >> ErrorH(1,"Trying to delete a non existing vector from BGCC list"); > >> } > >> > >> for (i=0; iNMyBlocks; i++) > >> { > >> ierr=VecResetArray((*B)[i]); CHKERRQ(ierr); > >> ierr=DARestoreGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); > >>CHKERRQ(ierr); > >> } > >> ierr=VecRestoreArray(*A,&GPtr); CHKERRQ(ierr); > >> DeleteFromListOfLocalVectors(&Mesh->BGCC,*B); > >> > >> PetscFunctionReturn(0); > >>} > >> > >>>Thanks for the response (and sorry for the late 'thanks'), > >>> > >>>i can see how this is supposed to work. However, my question is then > >>> with > >> > >>the > >> > >>>creation of the distributed array objects. I take it from your code that > >> > >>you > >> > >>>create a DA for each patch (?). M and N then would be the dimension of > >>> my > >> > >>2D > >> > >>>patch. My question is what i would be specifying for n and m. I am a > >>> little > >>> > >>>confused in general what m and n are. It says they denote the process > >>>partition in each direction and that m*n must be equal the total number > >>> of > >>> > >>>processes in the communicator. > >>>Sorry, i am almost sure this is something simple. > >>> > >>>thanks > >>>mat > >>> > >>>On Thursday 13 July 2006 23:13, Berend van Wachem wrote: > >>>>>Hi, > >>>>> > >>>>>i was wondering if it is possible (or advisable) to use a distributed > >>>> > >>>>array > >>>> > >>>>>for when my domain is not necessarily rectangular but my subdomains > >> > >>are > >> > >>>>>rectangular? > >>>>> > >>>>>thanks > >>>>>mat > >>>> > >>>>Hi Matt, > >>>> > >>>>This is exactly what I do; I work with a multi-block CFD problem, where > >>>>the total geometry consists of a number of structured (not neccesarily > >>>>rectangular) blocks. Each block has its own DA. > >>>> > >>>>How I solved it, with help from the petsc team, is by making functions > >>>>which translate vectors from the complete domain (called block global > >>>> in my code) to each block (called block local in my code). The matrix > >>>> computation is done at the blockglobal level, and I can manipulate the > >>>> vectors at the block local level. Just to give an example, I attatch > >>>> the code at the end of this email. If you want more help/information > >>>> please let me know. > >>>> > >>>>Good luck, > >>>> > >>>>Berend. 
> >>>> > >>>> > >>>>#undef __FUNCT__ > >>>>#define __FUNCT__ "VecBlockGlobalToBlockLocalBegin" > >>>>int VecBlockGlobalToBlockLocalBegin(Vec **B, Vec **A, char* name, > >>>> struct CFDMesh *Mesh) > >>>>{ > >>>> int ierr, i; > >>>> double *tmpPTR=NULL; > >>>> > >>>> PetscFunctionBegin; > >>>> > >>>> AddToListofLocalVectors(&Mesh->BLCC,Mesh->NMyBlocks,A,tmpPTR,name); > >>>> > >>>> for (i=0; iNMyBlocks; i++) > >>>> { > >>>> ierr=DAGetLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i])); > >>>>CHKERRQ(ierr); > >>>> > >>>>ierr=DAGlobalToLocalBegin(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,( > >>>>*A)[ i]); CHKERRQ(ierr); > >>>> > >>>>ierr=DAGlobalToLocalEnd(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A > >>>>)[i] ); CHKERRQ(ierr); > >>>> > >>>> } > >>>> > >>>> PetscFunctionReturn(0); > >>>>} > >>>> > >>>>#undef __FUNCT__ > >>>>#define __FUNCT__ "VecBlockGlobalToBlockLocalEnd" > >>>>int VecBlockGlobalToBlockLocalEnd(Vec **B, Vec **A, struct CFDMesh > >>>>*Mesh,int CopyData) > >>>>{ > >>>> int i,ierr; > >>>> > >>>> PetscFunctionBegin; > >>>> > >>>> for (i=0; iNMyBlocks; i++) > >>>> { > >>>> if (CopyData==MF_COPY_DATA) > >>>> { > >>>> > >>>>ierr=DALocalToGlobal(Mesh->CFDBlocks[i].da,(*A)[i],INSERT_VALUES,(*B)[i > >>>>]); CHKERRQ(ierr); > >>>> } > >>>> ierr=DARestoreLocalVector(Mesh->CFDBlocks[i].da,&((*A)[i])); > >>>>CHKERRQ(ierr); > >>>> } > >>>> DeleteFromListOfLocalVectors(&Mesh->BLCC,*A); > >>>> > >>>> PetscFunctionReturn(0); > >>>> > >>>>} > >> > >>-- > >> /\-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-=-\ > >> L_@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ > >> > >> | Berend van Wachem | > >> | Multiphase Flow Group | > >> | Chalmers University of Technology | > >> | > >> | Please note that my email address has changed to: | > >> | Berend at chalmers.se | > >> | > >> | Please make the appropriate changes in your address | > >> | list. | > >> > >> __@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ > >> \/______________________________________________________/ From mafunk at nmsu.edu Thu Jul 20 16:52:56 2006 From: mafunk at nmsu.edu (Matt Funk) Date: Thu, 20 Jul 2006 15:52:56 -0600 Subject: question concerning building vectors In-Reply-To: <200607201428.53567.mafunk@nmsu.edu> References: <200607131624.20616.mafunk@nmsu.edu> <44BFC9B0.2090605@chalmers.se> <200607201428.53567.mafunk@nmsu.edu> Message-ID: <200607201552.56821.mafunk@nmsu.edu> Hi, i was wondering if there exists a function that lets me "assemble" a vector. Basically i would like to do the following: I have several PETSc vectors on the same proc. I would like to be able to assemble all of these into one vector if possible without having to copy the data. Is that possible? thanks for all the help mat On Thursday 20 July 2006 14:28, Matt Funk wrote: > One other thing, > > in your function VecOverGlobalToBlockGlobalBegin and > VecOverGlobalToBlockGlobalEnd: > > i assume that Vec *A is a the pointer to the blockglobal Vector? > further Vec **B is an array of pointers to MPI vectors where each element > of the array is a MPI vector associated with one subblock? > > if that is so, then this is what i believe your functions are doing (please > correct me if i am wrong): > > VecOverGlobalToBlockGlobalBegin: splits the blockglobal vector A into its > MPI subblock vectors > > VecOverGlobalToBlockGlobalEnd: restores the vectors > > And in between these two function calls you can mess with the MPI subblock > vectors? > > mat > > > then you iterate over all blocks (i assume this is the glueing part? 
) > > On Thursday 20 July 2006 12:21, Berend van Wachem wrote: > > Hi Mat, > > > > I understand it's confusing in the beginning - it took me a while to > > grasp it as well. > > > > > So what i was thinking was to create a DA and a local sequential vector > > > on each subblock (and here i would specify m=n=1 since the local > > > Seqvector/DA resides only on one proc, which would give me multiple > > > Seqvectors/DA per processor), and then as you say, somehow glue > > > together the "super"-vector (the global vector) ... :). Though that is > > > the part which am i stuck on right now. > > > > > > Why did you have to create a communicator for each subblock? > > > > If everything is balanced and the blocks have the same size, you could > > specify m=n=1, but you would still need to make a seperate communicator > > for each of the blocks; how would the DA otherwise know on which > > processor it lies on? I created the communicators with > > > > ierr = MPI_Comm_group(PETSC_COMM_WORLD, &tmpGroup); > > CHKERRQ(ierr); > > for (i = 0; i < Mesh->TOTBlocks; i++) > > { > > ierr = MPI_Group_incl(tmpGroup, Mesh->CFDParBlocks[i].NProc, > > Mesh->CFDParBlocks[i].Proclist, &Mesh->CFDParBlocks[i].BlockGroup); > > CHKERRQ(ierr); > > > > ierr = MPI_Comm_create(PETSC_COMM_WORLD, > > Mesh->CFDParBlocks[i].BlockGroup, &(Mesh->CFDParBlocks[i].BlockComm)); > > CHKERRQ(ierr); > > > > } > > > > > > Another tip on communicators. Why it is also convenient to have a > > communicator for one processor is for data processing from file. For > > instance, I have the initial data in one file. I make a DA, similar to > > the DA I want to use later on, but for just one processor. I read the > > data, and then save the DA to disk. Then I make the real DA with the > > communicator and the number of processors I want to have and use DALoad > > to read the DA - and all the data is transferred to the appropriate > > processor. > > > > > Do you create an MPI vector for your global vector? > > > > Yes - because I want to get a solution over the COMPLETE problem in one > > matrix. If your blocks are in any way connected, you will want to do > > this as well. > > > > "Glueing" the DA vectors into one big one is not difficult if you use > > the length of the various DA vectors and glue them together (see code I > > sent in previous email) > > > > One thing you also have to think about is cross-block-addressing. One > > point in one block has a neighbour it is dependent upon in another > > block. In the code I sent you, this addressjump is called "daoffset" and > > is member of the block structure (I am an old C++ programmer so I use > > mostly structures). > > > > > Sorry for pestering you again, but since you are in Sweden (or your > > > email is anyway) i think i might not get you again today ... :) > > > > No problem - just ask. It took me a while before I understood it and the > > best thing is to look at examples that come with petsc and maybe the > > code examples I send you, please let me know if you want me to send more. > > > > Good luck, > > > > Berend. > > > > > thanks for all your help > > > mat > > > > > > Quoting Berend van Wachem : > > >>Hi Matt, > > >> > > >>Yes, I create a DA for each domain (block I call it in the code) where > > >> M and M and N are the dimensions of that block. > > >> > > >>In my problem I do specify m and n, by a simple algorithm; I try to > > >> find a solution so that each processor has about the same number of > > >> nodes in total. 
I do this by taking the first block, looking at how > > >> much processors it would get (by dividing its number of nodes by the > > >> total number of nodes times the number of processors) and fill it up > > >> from that. For each block I create a seperate communicator as well, so > > >> each block has its own communicator. > > >> > > >> From the vectors of the various blocks I glue together one larger > > >>vector on which the computations are done, with these functions, > > >> > > >>#undef __FUNCT__ > > >>#define __FUNCT__ "VecOverGlobalToBlockGlobalBegin" > > >>int VecOverGlobalToBlockGlobalBegin(Vec* A, Vec** B, char *name, struct > > >>CFDMesh *Mesh) > > >>{ > > >> int ierr,i; > > >> double *GPtr; > > >> > > >> PetscFunctionBegin; > > >> > > >> ierr=VecGetArray(*A,&GPtr); CHKERRQ(ierr); > > >> AddToListofLocalVectors(&Mesh->BGCC,Mesh->NMyBlocks,B,GPtr,name); > > >> > > >> for (i=0; iNMyBlocks; i++) > > >> { > > >> ierr=DAGetGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); > > >>CHKERRQ(ierr); > > >> ierr=VecPlaceArray((*B)[i], GPtr+Mesh->CFDBlocks[i].daoffset); > > >> CHKERRQ(ierr); > > >> } > > >> PetscFunctionReturn(0); > > >>} > > >> > > >> > > >>#undef __FUNCT__ > > >>#define __FUNCT__ "VecOverGlobalToBlockGlobalEnd" > > >>int VecOverGlobalToBlockGlobalEnd(Vec* A, Vec** B, struct CFDMesh > > >> *Mesh) { > > >> int ierr,i; > > >> double *GPtr; > > >> > > >> PetscFunctionBegin; > > >> > > >> if (!ExistListOfLocalVectors(Mesh->BGCC,*B,&GPtr)) > > >> { > > >> ErrorH(1,"Trying to delete a non existing vector from BGCC list"); > > >> } > > >> > > >> for (i=0; iNMyBlocks; i++) > > >> { > > >> ierr=VecResetArray((*B)[i]); CHKERRQ(ierr); > > >> ierr=DARestoreGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); > > >>CHKERRQ(ierr); > > >> } > > >> ierr=VecRestoreArray(*A,&GPtr); CHKERRQ(ierr); > > >> DeleteFromListOfLocalVectors(&Mesh->BGCC,*B); > > >> > > >> PetscFunctionReturn(0); > > >>} > > >> > > >>>Thanks for the response (and sorry for the late 'thanks'), > > >>> > > >>>i can see how this is supposed to work. However, my question is then > > >>> with > > >> > > >>the > > >> > > >>>creation of the distributed array objects. I take it from your code > > >>> that > > >> > > >>you > > >> > > >>>create a DA for each patch (?). M and N then would be the dimension of > > >>> my > > >> > > >>2D > > >> > > >>>patch. My question is what i would be specifying for n and m. I am a > > >>> little > > >>> > > >>>confused in general what m and n are. It says they denote the process > > >>>partition in each direction and that m*n must be equal the total > > >>> number of > > >>> > > >>>processes in the communicator. > > >>>Sorry, i am almost sure this is something simple. > > >>> > > >>>thanks > > >>>mat > > >>> > > >>>On Thursday 13 July 2006 23:13, Berend van Wachem wrote: > > >>>>>Hi, > > >>>>> > > >>>>>i was wondering if it is possible (or advisable) to use a > > >>>>> distributed > > >>>> > > >>>>array > > >>>> > > >>>>>for when my domain is not necessarily rectangular but my subdomains > > >> > > >>are > > >> > > >>>>>rectangular? > > >>>>> > > >>>>>thanks > > >>>>>mat > > >>>> > > >>>>Hi Matt, > > >>>> > > >>>>This is exactly what I do; I work with a multi-block CFD problem, > > >>>> where the total geometry consists of a number of structured (not > > >>>> neccesarily rectangular) blocks. Each block has its own DA. 
> > >>>> > > >>>>How I solved it, with help from the petsc team, is by making > > >>>> functions which translate vectors from the complete domain (called > > >>>> block global in my code) to each block (called block local in my > > >>>> code). The matrix computation is done at the blockglobal level, and > > >>>> I can manipulate the vectors at the block local level. Just to give > > >>>> an example, I attatch the code at the end of this email. If you want > > >>>> more help/information please let me know. > > >>>> > > >>>>Good luck, > > >>>> > > >>>>Berend. > > >>>> > > >>>> > > >>>>#undef __FUNCT__ > > >>>>#define __FUNCT__ "VecBlockGlobalToBlockLocalBegin" > > >>>>int VecBlockGlobalToBlockLocalBegin(Vec **B, Vec **A, char* name, > > >>>> struct CFDMesh *Mesh) > > >>>>{ > > >>>> int ierr, i; > > >>>> double *tmpPTR=NULL; > > >>>> > > >>>> PetscFunctionBegin; > > >>>> > > >>>> > > >>>> AddToListofLocalVectors(&Mesh->BLCC,Mesh->NMyBlocks,A,tmpPTR,name); > > >>>> > > >>>> for (i=0; iNMyBlocks; i++) > > >>>> { > > >>>> ierr=DAGetLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i])); > > >>>>CHKERRQ(ierr); > > >>>> > > >>>>ierr=DAGlobalToLocalBegin(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES > > >>>>,( *A)[ i]); CHKERRQ(ierr); > > >>>> > > >>>>ierr=DAGlobalToLocalEnd(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,( > > >>>>*A )[i] ); CHKERRQ(ierr); > > >>>> > > >>>> } > > >>>> > > >>>> PetscFunctionReturn(0); > > >>>>} > > >>>> > > >>>>#undef __FUNCT__ > > >>>>#define __FUNCT__ "VecBlockGlobalToBlockLocalEnd" > > >>>>int VecBlockGlobalToBlockLocalEnd(Vec **B, Vec **A, struct CFDMesh > > >>>>*Mesh,int CopyData) > > >>>>{ > > >>>> int i,ierr; > > >>>> > > >>>> PetscFunctionBegin; > > >>>> > > >>>> for (i=0; iNMyBlocks; i++) > > >>>> { > > >>>> if (CopyData==MF_COPY_DATA) > > >>>> { > > >>>> > > >>>>ierr=DALocalToGlobal(Mesh->CFDBlocks[i].da,(*A)[i],INSERT_VALUES,(*B) > > >>>>[i ]); CHKERRQ(ierr); > > >>>> } > > >>>> ierr=DARestoreLocalVector(Mesh->CFDBlocks[i].da,&((*A)[i])); > > >>>>CHKERRQ(ierr); > > >>>> } > > >>>> DeleteFromListOfLocalVectors(&Mesh->BLCC,*A); > > >>>> > > >>>> PetscFunctionReturn(0); > > >>>> > > >>>>} > > >> > > >>-- > > >> /\-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-=-\ > > >> L_@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ > > >> > > >> | Berend van Wachem | > > >> | Multiphase Flow Group | > > >> | Chalmers University of Technology | > > >> | > > >> | Please note that my email address has changed to: | > > >> | Berend at chalmers.se | > > >> | > > >> | Please make the appropriate changes in your address | > > >> | list. | > > >> > > >> __@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ > > >> \/______________________________________________________/ From berend at chalmers.se Fri Jul 21 02:43:12 2006 From: berend at chalmers.se (Berend van Wachem) Date: Fri, 21 Jul 2006 09:43:12 +0200 Subject: distributed array In-Reply-To: <200607201428.53567.mafunk@nmsu.edu> References: <200607131624.20616.mafunk@nmsu.edu> <1153405874.44bf93b247f7b@webmail.nmsu.edu> <44BFC9B0.2090605@chalmers.se> <200607201428.53567.mafunk@nmsu.edu> Message-ID: <44C08590.4070400@chalmers.se> Dear Mat, > One other thing, > > in your function VecOverGlobalToBlockGlobalBegin and > VecOverGlobalToBlockGlobalEnd: > > i assume that Vec *A is a the pointer to the blockglobal Vector? > further Vec **B is an array of pointers to MPI vectors where each element of > the array is a MPI vector associated with one subblock? A is the pointer to the single vector over the complete problem. 
This is the vector that will be used for the matrix computation. Indeed, *B is the pointer to an array of vectors which each correspond with a block. > if that is so, then this is what i believe your functions are doing (please > correct me if i am wrong): > > VecOverGlobalToBlockGlobalBegin: splits the blockglobal vector A into its MPI > subblock vectors > > VecOverGlobalToBlockGlobalEnd: restores the vectors > > And in between these two function calls you can mess with the MPI subblock > vectors? Exactly. > > then you iterate over all blocks (i assume this is the glueing part? ) I am not sure what you precisely mean. For glueing the blocks, I create an array of IS's which I scatter over. In my problem it's even more complicated than just that, because the I-direction in one block can be another direction in a neighbouring block which makes the neighbour-seeking a little more difficult. Once the IS are created, a scatter makes the neighbour values available to the current block. I use the same scatter once in the beginning to find out the addresses of the matrix locations. Good luck, Berend. From berend at chalmers.se Fri Jul 21 02:46:12 2006 From: berend at chalmers.se (Berend van Wachem) Date: Fri, 21 Jul 2006 09:46:12 +0200 Subject: question concerning building vectors In-Reply-To: <200607201552.56821.mafunk@nmsu.edu> References: <200607131624.20616.mafunk@nmsu.edu> <44BFC9B0.2090605@chalmers.se> <200607201428.53567.mafunk@nmsu.edu> <200607201552.56821.mafunk@nmsu.edu> Message-ID: <44C08644.7070400@chalmers.se> Hi Mat, Isn't that exactly what the function I sent you does? It just places a series of pointers into the new vector. Berend. > i was wondering if there exists a function that lets me "assemble" a vector. > Basically i would like to do the following: > > I have several PETSc vectors on the same proc. I would like to be able to > assemble all of these into one vector if possible without having to copy the > data. Is that possible? > > thanks for all the help > mat > > > > From jordi.poblet at gmail.com Fri Jul 21 12:29:14 2006 From: jordi.poblet at gmail.com (jordi poblet) Date: Fri, 21 Jul 2006 19:29:14 +0200 Subject: link with other libraries Message-ID: Dear all, I would like to use PETSc within a finite element program (the main part is written in C++). Some classes should use PETSc routines and routines from other libraries. When compiling these classes, I have to include not only the header files from PETSc (which in the examples of a single C or C++ files are included automatically with the sample makefiles provided in the manual) but also header files from the other used libraries. Finally, I would have also to link PETSc with other libraries at the same time. I have tried to construct a makefile including 'by hand' the PETSc headers but I have always problems. Could someone help me? Have you got makefile examples for this situation (or similar)? The compilation options for the PETSc library that I have employed are: ./config/configure.py --with-mpi=0 --with-clanguage=c++ --with-scalar-type=complex tests and examples are passed correctly. I am running PETSc in linux machines. Thank you in advance, Jordi Poblet -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balay at mcs.anl.gov Fri Jul 21 12:49:37 2006 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 21 Jul 2006 12:49:37 -0500 (CDT) Subject: link with other libraries In-Reply-To: References: Message-ID: The basic form of a PETSc makefile is as follows: >>>>>> CFLAGS = FFLAGS = CPPFLAGS = FPPFLAGS = CLEANFILES = include ${PETSC_DIR}/bmake/common/base ex1: ex1.o chkopts -${CLINKER} -o ex1 ex1.o ${PETSC_KSP_LIB} ${RM} ex1.o >>>>>>>>>> If you wish to specify additonal libraries - you can do it as: >>>>>>> MY_INC = -I/my/include MY_LIB = -L/my/lib -lmy CFLAGS = FFLAGS = CPPFLAGS = ${MY_INC} FPPFLAGS = CLEANFILES = include ${PETSC_DIR}/bmake/common/base ex1: ex1.o chkopts -${CLINKER} -o ex1 ex1.o ${MY_LIB} ${PETSC_KSP_LIB} ${RM} ex1.o >>>>>>>>> Also - you should make sure this library is compiled with the same compilers as PETSc is compiled with. Satish On Fri, 21 Jul 2006, jordi poblet wrote: > Dear all, > > I would like to use PETSc within a finite element program (the main part is > written in C++). Some classes should use PETSc routines and routines from > other libraries. When compiling these classes, I have to include not only > the header files from PETSc (which in the examples of a single C or C++ > files are included automatically with the sample makefiles provided in the > manual) but also header files from the other used libraries. Finally, I > would have also to link PETSc with other libraries at the same time. > I have tried to construct a makefile including 'by hand' the PETSc headers > but I have always problems. Could someone help me? Have you got makefile > examples for this situation (or similar)? > > The compilation options for the PETSc library that I have employed are: > > ./config/configure.py --with-mpi=0 --with-clanguage=c++ > --with-scalar-type=complex > > tests and examples are passed correctly. > > I am running PETSc in linux machines. > > > Thank you in advance, > > Jordi Poblet > From jordi.poblet at gmail.com Fri Jul 21 13:45:42 2006 From: jordi.poblet at gmail.com (jordi poblet) Date: Fri, 21 Jul 2006 20:45:42 +0200 Subject: link with other libraries In-Reply-To: References: Message-ID: Satish, Thank you very much for your fast response. It has solved the problem with including files and I can generate the object file for my class (myclass.o). However, I suppose that with ex1: myclass.o chkopts -${CLINKER} -o ex1 myclass.o ${MY_LIB} ${PETSC_KSP_LIB} ${RM} myclass.o the makefile is trying to link a self executable program (ex1) and I obtain the following error message: : undefined reference to `main' (in myclass.cpp there is no main block, the class is used by main.cpp). I would like, 1-compile the class and obtain the file myclass.o (now I can do it with the error shown above, but the file is at least generated) taking into account the correct includes (from PETSc and from other libraries). I do not need the libraries in this step 2-compile all the other classes not using PETSc and obtain the other files *.o 3-link all the files *.o and myclass.o taking into account all the libraries Thank you very much. 
Best regards, Jordi Poblet On 7/21/06, Satish Balay wrote: > > The basic form of a PETSc makefile is as follows: > > >>>>>> > CFLAGS = > FFLAGS = > CPPFLAGS = > FPPFLAGS = > CLEANFILES = > > include ${PETSC_DIR}/bmake/common/base > > ex1: ex1.o chkopts > -${CLINKER} -o ex1 ex1.o ${PETSC_KSP_LIB} > ${RM} ex1.o > >>>>>>>>>> > > > If you wish to specify additonal libraries - you can do it as: > > > >>>>>>> > MY_INC = -I/my/include > MY_LIB = -L/my/lib -lmy > > CFLAGS = > FFLAGS = > CPPFLAGS = ${MY_INC} > FPPFLAGS = > CLEANFILES = > > include ${PETSC_DIR}/bmake/common/base > > ex1: ex1.o chkopts > -${CLINKER} -o ex1 ex1.o ${MY_LIB} ${PETSC_KSP_LIB} > ${RM} ex1.o > >>>>>>>>> > > Also - you should make sure this library is compiled with the same > compilers as PETSc is compiled with. > > Satish > > On Fri, 21 Jul 2006, jordi poblet wrote: > > > Dear all, > > > > I would like to use PETSc within a finite element program (the main part > is > > written in C++). Some classes should use PETSc routines and routines > from > > other libraries. When compiling these classes, I have to include not > only > > the header files from PETSc (which in the examples of a single C or C++ > > files are included automatically with the sample makefiles provided in > the > > manual) but also header files from the other used libraries. Finally, I > > would have also to link PETSc with other libraries at the same time. > > I have tried to construct a makefile including 'by hand' the PETSc > headers > > but I have always problems. Could someone help me? Have you got makefile > > examples for this situation (or similar)? > > > > The compilation options for the PETSc library that I have employed are: > > > > ./config/configure.py --with-mpi=0 --with-clanguage=c++ > > --with-scalar-type=complex > > > > tests and examples are passed correctly. > > > > I am running PETSc in linux machines. > > > > > > Thank you in advance, > > > > Jordi Poblet > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balay at mcs.anl.gov Fri Jul 21 13:53:34 2006 From: balay at mcs.anl.gov (Satish Balay) Date: Fri, 21 Jul 2006 13:53:34 -0500 (CDT) Subject: link with other libraries In-Reply-To: References: Message-ID: You should be using: main: main.o myclass.o chkopts -${CLINKER} -o main main.o myclass.o ${MY_LIB} ${PETSC_KSP_LIB} ${RM} main.o myclass.o Or better: MY_INC = -I/my/include MY_LIB = -L/my/lib -lmy MY_OBJ = main.o myclass1.o myclass3.o myclass3.o CFLAGS = FFLAGS = CPPFLAGS = ${MY_INC} FPPFLAGS = CLEANFILES = main include ${PETSC_DIR}/bmake/common/base main: ${MY_OBJ} chkopts -${CLINKER} -o main ${MY_OBJ} ${MY_LIB} ${PETSC_KSP_LIB} ${RM} ${MY_OBJ} Satish On Fri, 21 Jul 2006, jordi poblet wrote: > Satish, > > Thank you very much for your fast response. It has solved the problem with > including files and I can generate the object file for my class (myclass.o). > However, I suppose that with > > ex1: myclass.o chkopts > > -${CLINKER} -o ex1 myclass.o ${MY_LIB} ${PETSC_KSP_LIB} > ${RM} myclass.o > the makefile is trying to link a self executable program (ex1) and I > obtain the following error message: > > : undefined reference to `main' > (in myclass.cpp there is no main block, the class is used by main.cpp). > > I would like, > > 1-compile the class and obtain the file myclass.o (now I can do it with the > error shown above, but the file is at least generated) taking into account > the correct includes (from PETSc and from other libraries). 
I do not need > the libraries in this step > 2-compile all the other classes not using PETSc and obtain the other files > *.o > 3-link all the files *.o and myclass.o taking into account all the libraries > > > Thank you very much. Best regards, > > > Jordi Poblet > > > On 7/21/06, Satish Balay wrote: > > > > The basic form of a PETSc makefile is as follows: > > > > >>>>>> > > CFLAGS = > > FFLAGS = > > CPPFLAGS = > > FPPFLAGS = > > CLEANFILES = > > > > include ${PETSC_DIR}/bmake/common/base > > > > ex1: ex1.o chkopts > > -${CLINKER} -o ex1 ex1.o ${PETSC_KSP_LIB} > > ${RM} ex1.o > > >>>>>>>>>> > > > > > > If you wish to specify additonal libraries - you can do it as: > > > > > > >>>>>>> > > MY_INC = -I/my/include > > MY_LIB = -L/my/lib -lmy > > > > CFLAGS = > > FFLAGS = > > CPPFLAGS = ${MY_INC} > > FPPFLAGS = > > CLEANFILES = > > > > include ${PETSC_DIR}/bmake/common/base > > > > ex1: ex1.o chkopts > > -${CLINKER} -o ex1 ex1.o ${MY_LIB} ${PETSC_KSP_LIB} > > ${RM} ex1.o > > >>>>>>>>> > > > > Also - you should make sure this library is compiled with the same > > compilers as PETSc is compiled with. > > > > Satish > > > > On Fri, 21 Jul 2006, jordi poblet wrote: > > > > > Dear all, > > > > > > I would like to use PETSc within a finite element program (the main part > > is > > > written in C++). Some classes should use PETSc routines and routines > > from > > > other libraries. When compiling these classes, I have to include not > > only > > > the header files from PETSc (which in the examples of a single C or C++ > > > files are included automatically with the sample makefiles provided in > > the > > > manual) but also header files from the other used libraries. Finally, I > > > would have also to link PETSc with other libraries at the same time. > > > I have tried to construct a makefile including 'by hand' the PETSc > > headers > > > but I have always problems. Could someone help me? Have you got makefile > > > examples for this situation (or similar)? > > > > > > The compilation options for the PETSc library that I have employed are: > > > > > > ./config/configure.py --with-mpi=0 --with-clanguage=c++ > > > --with-scalar-type=complex > > > > > > tests and examples are passed correctly. > > > > > > I am running PETSc in linux machines. > > > > > > > > > Thank you in advance, > > > > > > Jordi Poblet > > > > > > > > From bsmith at mcs.anl.gov Fri Jul 21 13:55:25 2006 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 21 Jul 2006 13:55:25 -0500 (CDT) Subject: link with other libraries In-Reply-To: References: Message-ID: Jordi, If you want to provide the rules to compile your code you can instead just include bmake/common/variables this sets the variables like ${PETSC_KSP_LIB} but you provide the rules for compiling. Satish, Do we have written down somewhere on the website for the two ways of using PETSc makefiles? There should be the directions (that only you know :-). Maybe in the FAQ? Barry On Fri, 21 Jul 2006, jordi poblet wrote: > Satish, > > Thank you very much for your fast response. It has solved the problem with > including files and I can generate the object file for my class (myclass.o). > However, I suppose that with > > ex1: myclass.o chkopts > > -${CLINKER} -o ex1 myclass.o ${MY_LIB} ${PETSC_KSP_LIB} > ${RM} myclass.o > the makefile is trying to link a self executable program (ex1) and I > obtain the following error message: > > : undefined reference to `main' > (in myclass.cpp there is no main block, the class is used by main.cpp). 
> > I would like, > > 1-compile the class and obtain the file myclass.o (now I can do it with the > error shown above, but the file is at least generated) taking into account > the correct includes (from PETSc and from other libraries). I do not need > the libraries in this step > 2-compile all the other classes not using PETSc and obtain the other files > *.o > 3-link all the files *.o and myclass.o taking into account all the libraries > > > Thank you very much. Best regards, > > > Jordi Poblet > > > On 7/21/06, Satish Balay wrote: >> >> The basic form of a PETSc makefile is as follows: >> >> >>>>>> >> CFLAGS = >> FFLAGS = >> CPPFLAGS = >> FPPFLAGS = >> CLEANFILES = >> >> include ${PETSC_DIR}/bmake/common/base >> >> ex1: ex1.o chkopts >> -${CLINKER} -o ex1 ex1.o ${PETSC_KSP_LIB} >> ${RM} ex1.o >> >>>>>>>>>> >> >> >> If you wish to specify additonal libraries - you can do it as: >> >> >> >>>>>>> >> MY_INC = -I/my/include >> MY_LIB = -L/my/lib -lmy >> >> CFLAGS = >> FFLAGS = >> CPPFLAGS = ${MY_INC} >> FPPFLAGS = >> CLEANFILES = >> >> include ${PETSC_DIR}/bmake/common/base >> >> ex1: ex1.o chkopts >> -${CLINKER} -o ex1 ex1.o ${MY_LIB} ${PETSC_KSP_LIB} >> ${RM} ex1.o >> >>>>>>>>> >> >> Also - you should make sure this library is compiled with the same >> compilers as PETSc is compiled with. >> >> Satish >> >> On Fri, 21 Jul 2006, jordi poblet wrote: >> >> > Dear all, >> > >> > I would like to use PETSc within a finite element program (the main part >> is >> > written in C++). Some classes should use PETSc routines and routines >> from >> > other libraries. When compiling these classes, I have to include not >> only >> > the header files from PETSc (which in the examples of a single C or C++ >> > files are included automatically with the sample makefiles provided in >> the >> > manual) but also header files from the other used libraries. Finally, I >> > would have also to link PETSc with other libraries at the same time. >> > I have tried to construct a makefile including 'by hand' the PETSc >> headers >> > but I have always problems. Could someone help me? Have you got makefile >> > examples for this situation (or similar)? >> > >> > The compilation options for the PETSc library that I have employed are: >> > >> > ./config/configure.py --with-mpi=0 --with-clanguage=c++ >> > --with-scalar-type=complex >> > >> > tests and examples are passed correctly. >> > >> > I am running PETSc in linux machines. >> > >> > >> > Thank you in advance, >> > >> > Jordi Poblet >> > >> >> > From bsmith at mcs.anl.gov Fri Jul 21 21:24:57 2006 From: bsmith at mcs.anl.gov (Barry Smith) Date: Fri, 21 Jul 2006 21:24:57 -0500 (CDT) Subject: question concerning building vectors In-Reply-To: <200607201552.56821.mafunk@nmsu.edu> References: <200607131624.20616.mafunk@nmsu.edu> <44BFC9B0.2090605@chalmers.se> <200607201428.53567.mafunk@nmsu.edu> <200607201552.56821.mafunk@nmsu.edu> Message-ID: Mat, There is no "built in" way to go from a bunch of seperately created small vectors to one large vector, this is because the default VECMPI in PETSc requires all the vector entries to be stored contiquously. But if you start with the large vector you can chop up the big array and put a piece of it for each "subvector". This is basically the code Berend sent you before. Just call VecGetArray() on the large vector and then use VecCreateSeqWithArray() to provide the storage for each subvector, where you pass in the properly offset value from the array pointer you got from VecGetArray(). 
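A sketch of what that chopping-up looks like in code (illustrative, not from the thread: ChopLargeVector, nsub and len are made-up names, and the VecCreateSeqWithArray() call uses the 2006-era argument list comm, length, array, Vec*):

#include "petscvec.h"

int ChopLargeVector(Vec big, int nsub, const PetscInt *len, Vec *sub)
{
  int         ierr, i;
  PetscInt    offset = 0;
  PetscScalar *a;

  PetscFunctionBegin;
  ierr = VecGetArray(big, &a); CHKERRQ(ierr);
  for (i = 0; i < nsub; i++) {
    /* each subvector aliases a slice of the big vector's storage; nothing is copied */
    ierr = VecCreateSeqWithArray(PETSC_COMM_SELF, len[i], a + offset, &sub[i]); CHKERRQ(ierr);
    offset += len[i];
  }
  ierr = VecRestoreArray(big, &a); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Since the subvectors only reference the large vector's storage, the large vector must outlive them, and changes made through either view are visible through the other.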
Good luck, Barry On Thu, 20 Jul 2006, Matt Funk wrote: > Hi, > > i was wondering if there exists a function that lets me "assemble" a vector. > Basically i would like to do the following: > > I have several PETSc vectors on the same proc. I would like to be able to > assemble all of these into one vector if possible without having to copy the > data. Is that possible? > > thanks for all the help > mat > > > > > On Thursday 20 July 2006 14:28, Matt Funk wrote: >> One other thing, >> >> in your function VecOverGlobalToBlockGlobalBegin and >> VecOverGlobalToBlockGlobalEnd: >> >> i assume that Vec *A is a the pointer to the blockglobal Vector? >> further Vec **B is an array of pointers to MPI vectors where each element >> of the array is a MPI vector associated with one subblock? >> >> if that is so, then this is what i believe your functions are doing (please >> correct me if i am wrong): >> >> VecOverGlobalToBlockGlobalBegin: splits the blockglobal vector A into its >> MPI subblock vectors >> >> VecOverGlobalToBlockGlobalEnd: restores the vectors >> >> And in between these two function calls you can mess with the MPI subblock >> vectors? >> >> mat >> >> >> then you iterate over all blocks (i assume this is the glueing part? ) >> >> On Thursday 20 July 2006 12:21, Berend van Wachem wrote: >>> Hi Mat, >>> >>> I understand it's confusing in the beginning - it took me a while to >>> grasp it as well. >>> >>>> So what i was thinking was to create a DA and a local sequential vector >>>> on each subblock (and here i would specify m=n=1 since the local >>>> Seqvector/DA resides only on one proc, which would give me multiple >>>> Seqvectors/DA per processor), and then as you say, somehow glue >>>> together the "super"-vector (the global vector) ... :). Though that is >>>> the part which am i stuck on right now. >>>> >>> > Why did you have to create a communicator for each subblock? >>> >>> If everything is balanced and the blocks have the same size, you could >>> specify m=n=1, but you would still need to make a seperate communicator >>> for each of the blocks; how would the DA otherwise know on which >>> processor it lies on? I created the communicators with >>> >>> ierr = MPI_Comm_group(PETSC_COMM_WORLD, &tmpGroup); >>> CHKERRQ(ierr); >>> for (i = 0; i < Mesh->TOTBlocks; i++) >>> { >>> ierr = MPI_Group_incl(tmpGroup, Mesh->CFDParBlocks[i].NProc, >>> Mesh->CFDParBlocks[i].Proclist, &Mesh->CFDParBlocks[i].BlockGroup); >>> CHKERRQ(ierr); >>> >>> ierr = MPI_Comm_create(PETSC_COMM_WORLD, >>> Mesh->CFDParBlocks[i].BlockGroup, &(Mesh->CFDParBlocks[i].BlockComm)); >>> CHKERRQ(ierr); >>> >>> } >>> >>> >>> Another tip on communicators. Why it is also convenient to have a >>> communicator for one processor is for data processing from file. For >>> instance, I have the initial data in one file. I make a DA, similar to >>> the DA I want to use later on, but for just one processor. I read the >>> data, and then save the DA to disk. Then I make the real DA with the >>> communicator and the number of processors I want to have and use DALoad >>> to read the DA - and all the data is transferred to the appropriate >>> processor. >>> >>> > Do you create an MPI vector for your global vector? >>> >>> Yes - because I want to get a solution over the COMPLETE problem in one >>> matrix. If your blocks are in any way connected, you will want to do >>> this as well. 
>>> >>> "Glueing" the DA vectors into one big one is not difficult if you use >>> the length of the various DA vectors and glue them together (see code I >>> sent in previous email) >>> >>> One thing you also have to think about is cross-block-addressing. One >>> point in one block has a neighbour it is dependent upon in another >>> block. In the code I sent you, this addressjump is called "daoffset" and >>> is member of the block structure (I am an old C++ programmer so I use >>> mostly structures). >>> >>>> Sorry for pestering you again, but since you are in Sweden (or your >>>> email is anyway) i think i might not get you again today ... :) >>> >>> No problem - just ask. It took me a while before I understood it and the >>> best thing is to look at examples that come with petsc and maybe the >>> code examples I send you, please let me know if you want me to send more. >>> >>> Good luck, >>> >>> Berend. >>> >>>> thanks for all your help >>>> mat >>>> >>>> Quoting Berend van Wachem : >>>>> Hi Matt, >>>>> >>>>> Yes, I create a DA for each domain (block I call it in the code) where >>>>> M and N are the dimensions of that block. >>>>> >>>>> In my problem I do specify m and n, by a simple algorithm; I try to >>>>> find a solution so that each processor has about the same number of >>>>> nodes in total. I do this by taking the first block, looking at how >>>>> much processors it would get (by dividing its number of nodes by the >>>>> total number of nodes times the number of processors) and fill it up >>>>> from that. For each block I create a seperate communicator as well, so >>>>> each block has its own communicator. >>>>> >>>>> From the vectors of the various blocks I glue together one larger >>>>> vector on which the computations are done, with these functions, >>>>> >>>>> #undef __FUNCT__ >>>>> #define __FUNCT__ "VecOverGlobalToBlockGlobalBegin" >>>>> int VecOverGlobalToBlockGlobalBegin(Vec* A, Vec** B, char *name, struct >>>>> CFDMesh *Mesh) >>>>> { >>>>> int ierr,i; >>>>> double *GPtr; >>>>> >>>>> PetscFunctionBegin; >>>>> >>>>> ierr=VecGetArray(*A,&GPtr); CHKERRQ(ierr); >>>>> AddToListofLocalVectors(&Mesh->BGCC,Mesh->NMyBlocks,B,GPtr,name); >>>>> >>>>> for (i=0; i<Mesh->NMyBlocks; i++) >>>>> { >>>>> ierr=DAGetGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); >>>>> CHKERRQ(ierr); >>>>> ierr=VecPlaceArray((*B)[i], GPtr+Mesh->CFDBlocks[i].daoffset); >>>>> CHKERRQ(ierr); >>>>> } >>>>> PetscFunctionReturn(0); >>>>> } >>>>> >>>>> >>>>> #undef __FUNCT__ >>>>> #define __FUNCT__ "VecOverGlobalToBlockGlobalEnd" >>>>> int VecOverGlobalToBlockGlobalEnd(Vec* A, Vec** B, struct CFDMesh >>>>> *Mesh) { >>>>> int ierr,i; >>>>> double *GPtr; >>>>> >>>>> PetscFunctionBegin; >>>>> >>>>> if (!ExistListOfLocalVectors(Mesh->BGCC,*B,&GPtr)) >>>>> { >>>>> ErrorH(1,"Trying to delete a non existing vector from BGCC list"); >>>>> } >>>>> >>>>> for (i=0; i<Mesh->NMyBlocks; i++) >>>>> { >>>>> ierr=VecResetArray((*B)[i]); CHKERRQ(ierr); >>>>> ierr=DARestoreGlobalVector(Mesh->CFDBlocks[i].da,&((*B)[i])); >>>>> CHKERRQ(ierr); >>>>> } >>>>> ierr=VecRestoreArray(*A,&GPtr); CHKERRQ(ierr); >>>>> DeleteFromListOfLocalVectors(&Mesh->BGCC,*B); >>>>> >>>>> PetscFunctionReturn(0); >>>>> } >>>>> >>>>>> Thanks for the response (and sorry for the late 'thanks'), >>>>>> >>>>>> i can see how this is supposed to work. However, my question is then >>>>>> with >>>>> >>>>> the >>>>> >>>>>> creation of the distributed array objects. I take it from your code >>>>>> that >>>>> >>>>> you >>>>> >>>>>> create a DA for each patch (?). 
M and N then would be the dimension of >>>>>> my >>>>> >>>>> 2D >>>>> >>>>>> patch. My question is what i would be specifying for n and m. I am a >>>>>> little >>>>>> >>>>>> confused in general what m and n are. It says they denote the process >>>>>> partition in each direction and that m*n must be equal the total >>>>>> number of >>>>>> >>>>>> processes in the communicator. >>>>>> Sorry, i am almost sure this is something simple. >>>>>> >>>>>> thanks >>>>>> mat >>>>>> >>>>>> On Thursday 13 July 2006 23:13, Berend van Wachem wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> i was wondering if it is possible (or advisable) to use a >>>>>>>> distributed >>>>>>> >>>>>>> array >>>>>>> >>>>>>>> for when my domain is not necessarily rectangular but my subdomains >>>>> >>>>> are >>>>> >>>>>>>> rectangular? >>>>>>>> >>>>>>>> thanks >>>>>>>> mat >>>>>>> >>>>>>> Hi Matt, >>>>>>> >>>>>>> This is exactly what I do; I work with a multi-block CFD problem, >>>>>>> where the total geometry consists of a number of structured (not >>>>>>> necessarily rectangular) blocks. Each block has its own DA. >>>>>>> >>>>>>> How I solved it, with help from the petsc team, is by making >>>>>>> functions which translate vectors from the complete domain (called >>>>>>> block global in my code) to each block (called block local in my >>>>>>> code). The matrix computation is done at the blockglobal level, and >>>>>>> I can manipulate the vectors at the block local level. Just to give >>>>>>> an example, I attach the code at the end of this email. If you want >>>>>>> more help/information please let me know. >>>>>>> >>>>>>> Good luck, >>>>>>> >>>>>>> Berend. >>>>>>> >>>>>>> >>>>>>> #undef __FUNCT__ >>>>>>> #define __FUNCT__ "VecBlockGlobalToBlockLocalBegin" >>>>>>> int VecBlockGlobalToBlockLocalBegin(Vec **B, Vec **A, char* name, >>>>>>> struct CFDMesh *Mesh) >>>>>>> { >>>>>>> int ierr, i; >>>>>>> double *tmpPTR=NULL; >>>>>>> >>>>>>> PetscFunctionBegin; >>>>>>> >>>>>>> >>>>>>> AddToListofLocalVectors(&Mesh->BLCC,Mesh->NMyBlocks,A,tmpPTR,name); >>>>>>> >>>>>>> for (i=0; i<Mesh->NMyBlocks; i++) >>>>>>> { >>>>>>> ierr=DAGetLocalVector(Mesh->CFDBlocks[i].da, &((*A)[i])); >>>>>>> CHKERRQ(ierr); >>>>>>> >>>>>>> ierr=DAGlobalToLocalBegin(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[i]); CHKERRQ(ierr); >>>>>>> >>>>>>> ierr=DAGlobalToLocalEnd(Mesh->CFDBlocks[i].da,(*B)[i],INSERT_VALUES,(*A)[i]); CHKERRQ(ierr); >>>>>>> >>>>>>> } >>>>>>> >>>>>>> PetscFunctionReturn(0); >>>>>>> } >>>>>>> >>>>>>> #undef __FUNCT__ >>>>>>> #define __FUNCT__ "VecBlockGlobalToBlockLocalEnd" >>>>>>> int VecBlockGlobalToBlockLocalEnd(Vec **B, Vec **A, struct CFDMesh >>>>>>> *Mesh,int CopyData) >>>>>>> { >>>>>>> int i,ierr; >>>>>>> >>>>>>> PetscFunctionBegin; >>>>>>> >>>>>>> for (i=0; i<Mesh->NMyBlocks; i++) >>>>>>> { >>>>>>> if (CopyData==MF_COPY_DATA) >>>>>>> { >>>>>>> >>>>>>> ierr=DALocalToGlobal(Mesh->CFDBlocks[i].da,(*A)[i],INSERT_VALUES,(*B)[i]); CHKERRQ(ierr); >>>>>>> } >>>>>>> ierr=DARestoreLocalVector(Mesh->CFDBlocks[i].da,&((*A)[i])); >>>>>>> CHKERRQ(ierr); >>>>>>> } >>>>>>> DeleteFromListOfLocalVectors(&Mesh->BLCC,*A); >>>>>>> >>>>>>> PetscFunctionReturn(0); >>>>>>> >>>>>>> } >>>>> >>>>> -- >>>>> /\-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-=-\ >>>>> L_@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ >>>>> >>>>> | Berend van Wachem | >>>>> | Multiphase Flow Group | >>>>> | Chalmers University of Technology | >>>>> | >>>>> | Please note that my email address has changed to: | >>>>> | Berend at chalmers.se | >>>>> | >>>>> | 
Please make the appropriate changes in your address | >>>>> | list. | >>>>> >>>>> __@~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~@ >>>>> \/______________________________________________________/ > > From jordi.poblet at gmail.com Mon Jul 24 09:35:13 2006 From: jordi.poblet at gmail.com (jordi poblet) Date: Mon, 24 Jul 2006 16:35:13 +0200 Subject: link with other libraries In-Reply-To: References: Message-ID: Barry and Satish, Than you very much for your useful answers. I finally can compile the code using both types of makefiles. Best regards, Jordi Poblet On 7/21/06, Barry Smith wrote: > > > Jordi, > > If you want to provide the rules to compile your code you > can instead just include bmake/common/variables this sets the variables > like ${PETSC_KSP_LIB} but you provide the rules for compiling. > > Satish, > > Do we have written down somewhere on the website for the two > ways of using PETSc makefiles? There should be the directions (that only > you know :-). Maybe in the FAQ? > > Barry > > > On Fri, 21 Jul 2006, jordi poblet wrote: > > > Satish, > > > > Thank you very much for your fast response. It has solved the problem > with > > including files and I can generate the object file for my class ( > myclass.o). > > However, I suppose that with > > > > ex1: myclass.o chkopts > > > > -${CLINKER} -o ex1 myclass.o ${MY_LIB} ${PETSC_KSP_LIB} > > ${RM} myclass.o > > the makefile is trying to link a self executable program (ex1) and I > > obtain the following error message: > > > > : undefined reference to `main' > > (in myclass.cpp there is no main block, the class is used by main.cpp). > > > > I would like, > > > > 1-compile the class and obtain the file myclass.o (now I can do it with > the > > error shown above, but the file is at least generated) taking into > account > > the correct includes (from PETSc and from other libraries). I do not > need > > the libraries in this step > > 2-compile all the other classes not using PETSc and obtain the other > files > > *.o > > 3-link all the files *.o and myclass.o taking into account all the > libraries > > > > > > Thank you very much. Best regards, > > > > > > Jordi Poblet > > > > > > On 7/21/06, Satish Balay wrote: > >> > >> The basic form of a PETSc makefile is as follows: > >> > >> >>>>>> > >> CFLAGS = > >> FFLAGS = > >> CPPFLAGS = > >> FPPFLAGS = > >> CLEANFILES = > >> > >> include ${PETSC_DIR}/bmake/common/base > >> > >> ex1: ex1.o chkopts > >> -${CLINKER} -o ex1 ex1.o ${PETSC_KSP_LIB} > >> ${RM} ex1.o > >> >>>>>>>>>> > >> > >> > >> If you wish to specify additonal libraries - you can do it as: > >> > >> > >> >>>>>>> > >> MY_INC = -I/my/include > >> MY_LIB = -L/my/lib -lmy > >> > >> CFLAGS = > >> FFLAGS = > >> CPPFLAGS = ${MY_INC} > >> FPPFLAGS = > >> CLEANFILES = > >> > >> include ${PETSC_DIR}/bmake/common/base > >> > >> ex1: ex1.o chkopts > >> -${CLINKER} -o ex1 ex1.o ${MY_LIB} ${PETSC_KSP_LIB} > >> ${RM} ex1.o > >> >>>>>>>>> > >> > >> Also - you should make sure this library is compiled with the same > >> compilers as PETSc is compiled with. > >> > >> Satish > >> > >> On Fri, 21 Jul 2006, jordi poblet wrote: > >> > >> > Dear all, > >> > > >> > I would like to use PETSc within a finite element program (the main > part > >> is > >> > written in C++). Some classes should use PETSc routines and routines > >> from > >> > other libraries. 
When compiling these classes, I have to include not > >> only > the header files from PETSc (which in the examples of a single C or > C++ > >> > files are included automatically with the sample makefiles provided > in > >> the > >> > manual) but also header files from the other used libraries. Finally, > I > >> > would have also to link PETSc with other libraries at the same time. > >> > I have tried to construct a makefile including 'by hand' the PETSc > >> headers > >> > but I have always problems. Could someone help me? Have you got > makefile > >> > examples for this situation (or similar)? > >> > > >> > The compilation options for the PETSc library that I have employed > are: > >> > > >> > ./config/configure.py --with-mpi=0 --with-clanguage=c++ > >> > --with-scalar-type=complex > >> > > >> > tests and examples are passed correctly. > >> > > >> > I am running PETSc in linux machines. > >> > > >> > > >> > Thank you in advance, > >> > > >> > Jordi Poblet > >> > > >> > >> > > > > From mafunk at nmsu.edu Mon Jul 31 13:10:38 2006 From: mafunk at nmsu.edu (Matt Funk) Date: Mon, 31 Jul 2006 12:10:38 -0600 Subject: thanks for all the help In-Reply-To: <44C08644.7070400@chalmers.se> References: <200607131624.20616.mafunk@nmsu.edu> <200607201552.56821.mafunk@nmsu.edu> <44C08644.7070400@chalmers.se> Message-ID: <200607311210.39627.mafunk@nmsu.edu> Just wanted to thank you for your help again. It really gave me a jumpstart on how to use PETSc and such. Not that i have become an expert with it or anything, but, like i said, it REALLY helped (your code and the emails) to get started. Also thanks to everyone else on the mailing list willing to help and put up with sometimes stupid and/or redundant questions. I don't want to clutter the mailing list but i thought a word of appreciation can help those who offer advice on this list stay motivated to keep doing so. (after all, credit where credit is due, right?) that's all, mat On Friday 21 July 2006 01:46, Berend van Wachem wrote: > Hi Mat, > > Isn't that exactly what the function I sent you does? It just places a > series of pointers into the new vector. > > Berend. > > > i was wondering if there exists a function that lets me "assemble" a > > vector. Basically i would like to do the following: > > > > I have several PETSc vectors on the same proc. I would like to be able to > > assemble all of these into one vector if possible without having to copy > > the data. Is that possible? > > > > thanks for all the help > > mat
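As a footnote to the makefile part of this thread: the three-step workflow Jordi asks for (compile the PETSc-using class against the headers, compile the remaining classes, link everything with all the libraries at the end) can be written with the same ingredients as Satish's template. The sketch below is only an illustration, not something posted on the list; myprog, main.o, myclass.o and otherclass.o are placeholder names, the -I/-L paths are the same placeholders Satish used, and recipe lines must begin with a tab:

MY_INC = -I/my/include
MY_LIB = -L/my/lib -lmy

CFLAGS     =
FFLAGS     =
CPPFLAGS   = ${MY_INC}
FPPFLAGS   =
CLEANFILES =

include ${PETSC_DIR}/bmake/common/base

OBJS = main.o myclass.o otherclass.o

# the rules pulled in by bmake/common/base build each *.o with the PETSc
# headers plus ${MY_INC}; only the final target links against the libraries
myprog: ${OBJS} chkopts
	-${CLINKER} -o myprog ${OBJS} ${MY_LIB} ${PETSC_KSP_LIB}

Alternatively, as Barry suggests, including bmake/common/variables instead of bmake/common/base provides the same variables (such as ${PETSC_KSP_LIB}) while leaving every compile and link rule to be written by hand.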