From Z.Sheng at tudelft.nl Mon Mar 1 05:21:59 2010 From: Z.Sheng at tudelft.nl (Zhifeng Sheng - EWI) Date: Mon, 1 Mar 2010 12:21:59 +0100 Subject: [petsc-users] How to write and load a SEQSBAIJ matrix? References: <6374BBC9-1FF0-45EC-B4FE-1121D8200D75@mcs.anl.gov> <87tyt4s4yr.fsf@59A2.org> Message-ID: <947C04CD618D16429440FED56EAE47BA06B340@SRV564.tudelft.net> Dear all I am trying to generate a system and save it for later usage... I used the functions MatView and MatLoad. They work perfectly with a SEQAIJ matrix, but when it comes to a symmetric SEQSBAIJ matrix, they do not work. It seems that MatView does not put any information into the *.info file. Does anyone know how to write and load a SEQSBAIJ matrix? Thanks and best regards Zhifeng From jed at 59A2.org Mon Mar 1 06:07:30 2010 From: jed at 59A2.org (Jed Brown) Date: Mon, 01 Mar 2010 13:07:30 +0100 Subject: [petsc-users] How to write and load a SEQSBAIJ matrix? In-Reply-To: <947C04CD618D16429440FED56EAE47BA06B340@SRV564.tudelft.net> References: <6374BBC9-1FF0-45EC-B4FE-1121D8200D75@mcs.anl.gov> <87tyt4s4yr.fsf@59A2.org> <947C04CD618D16429440FED56EAE47BA06B340@SRV564.tudelft.net> Message-ID: <87pr3oi6t9.fsf@59A2.org> On Mon, 1 Mar 2010 12:21:59 +0100, "Zhifeng Sheng - EWI" wrote: > Dear all > > I am trying to generate a system and save it for later usage... Just a word of caution: consider providing the function that assembles the system instead of writing it to a file, since assembly is usually much faster and more scalable than going through disk. > but when it comes to a symmetric SEQSBAIJ matrix, they do not work. It seems that MatView does not put any information into the *.info file. Actually, MatView for SBAIJ just writes the matrix in AIJ format; you can load it into whatever format you want. You can specify the format you want to use in the parameter to MatLoad, or with -matload_type. 
Unfortunately, the block size can only be set with the option -matload_block_size (you cannot set a prefix for this option). (This poor interface would be fixed by MatLoadIntoMatrix, but that code has yet to be written.) If this does not work for you, you can try to reproduce the loading problem with src/ksp/ksp/examples/tutorials/ex10.c. Jed From tyoung at ippt.gov.pl Mon Mar 1 06:20:49 2010 From: tyoung at ippt.gov.pl (Toby D. Young) Date: Mon, 1 Mar 2010 13:20:49 +0100 Subject: [petsc-users] query on meaning of libmpiuni.* Message-ID: <20100301132049.051ae12b@wiatr.ippt.gov.pl> Greetings, I am very curious to understand why libmpiuni is there in PETSc and how it is implemented by PETSc and PETSc-based programs. I understand that this library can be built as static or dynamic; right? I also understand that this library is used when PETSc is compiled with MPI, but the program is run without mpirun. (What about mpirun -np 1 ./program.x ?) In short, in which cases should I make sure that libmpiuni exists? Is there some documentation I have missed that I can read to dig a little deeper into the usage of the libmpiuni.* library? Many thanks. Best, Toby From jed at 59A2.org Mon Mar 1 06:43:19 2010 From: jed at 59A2.org (Jed Brown) Date: Mon, 01 Mar 2010 13:43:19 +0100 Subject: [petsc-users] query on meaning of libmpiuni.* In-Reply-To: <20100301132049.051ae12b@wiatr.ippt.gov.pl> References: <20100301132049.051ae12b@wiatr.ippt.gov.pl> Message-ID: <87ocj8i55k.fsf@59A2.org> On Mon, 1 Mar 2010 13:20:49 +0100, "Toby D. Young" wrote: > > Greetings, > > I am very curious to understand why libmpiuni is there in PETSc and how > it is implemented by PETSc and PETSc-based programs. > > I understand that this library can be built as static or dynamic; right? > I also understand that this library is used when PETSc is compiled with > MPI, but the program is run without mpirun. (What about mpirun -np > 1 ./program.x ?) 
MPIUNI is only used if you have not built with a real MPI. Its purpose is to allow code to be written for the more complex (parallel) case without littering the code with "#ifdef PARALLEL" statements. It implements trivial MPI functionality when there is only one process (note that one process is not the same as "one processor" or "one core"). If a real MPI is available on your system, then you should use it even if you only intend to run single-process jobs. This will not reduce serial performance, but it allows you to interoperate more easily with other libraries that also use MPI, and it allows you to run multi-process jobs (again, you do not need multiple cores for this to be useful; it is convenient for testing algorithmic correctness and scalability). Jed From tyoung at ippt.gov.pl Mon Mar 1 07:18:08 2010 From: tyoung at ippt.gov.pl (Toby D. Young) Date: Mon, 1 Mar 2010 14:18:08 +0100 (CET) Subject: [petsc-users] query on meaning of libmpiuni.* In-Reply-To: <87ocj8i55k.fsf@59A2.org> References: <20100301132049.051ae12b@wiatr.ippt.gov.pl> <87ocj8i55k.fsf@59A2.org> Message-ID: Thanks Jed, that more-or-less covers everything I wanted to know! Best, Toby ----- Toby D. Young Assistant Professor Philosophy-Physics Polish Academy of Sciences Warszawa, Polska www: http://www.ippt.gov.pl/~tyoung skype: stenografia On Mon, 1 Mar 2010, Jed Brown wrote: > On Mon, 1 Mar 2010 13:20:49 +0100, "Toby D. Young" wrote: > > > > Greetings, > > > > I am very curious to understand why libmpiuni is there in PETSc and how > > it is implemented by PETSc and PETSc-based programs. > > > > I understand that this library can be built as static or dynamic; right? > > I also understand that this library is used when PETSc is compiled with > > MPI, but the program is run without mpirun. (What about mpirun -np > > 1 ./program.x ?) > > MPIUNI is only used if you have not built with a real MPI. 
Its purpose > is to allow code to be written for the more complex (parallel) case > without littering the code with "#ifdef PARALLEL" statements. It > implements trivial MPI functionality when there is only one process (note > that one process is not the same as "one processor" or "one core"). > > If a real MPI is available on your system, then you should use it even > if you only intend to run single-process jobs. This will not reduce > serial performance, but it allows you to interoperate more easily with > other libraries that also use MPI, and it allows you to run > multi-process jobs (again, you do not need multiple cores for this to be > useful; it is convenient for testing algorithmic correctness and > scalability). > > Jed > From bsmith at mcs.anl.gov Mon Mar 1 10:50:57 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 1 Mar 2010 10:50:57 -0600 Subject: [petsc-users] How to write and load a SEQSBAIJ matrix? In-Reply-To: <947C04CD618D16429440FED56EAE47BA06B340@SRV564.tudelft.net> References: <6374BBC9-1FF0-45EC-B4FE-1121D8200D75@mcs.anl.gov> <87tyt4s4yr.fsf@59A2.org> <947C04CD618D16429440FED56EAE47BA06B340@SRV564.tudelft.net> Message-ID: <2647335F-9A19-42BD-99D4-CF7739228CDD@mcs.anl.gov> On Mar 1, 2010, at 5:21 AM, Zhifeng Sheng - EWI wrote: > Dear all > > I am trying to generate a system and save it for later usage... I > used the functions MatView and MatLoad. They work perfectly with a > SEQAIJ matrix, > > but when it comes to a symmetric SEQSBAIJ matrix, they do not work. It seems > that MatView does not put any information into the *.info file. > Sorry, this is our mistake. 
If you edit src/mat/impls/sbaij/seq/sbaij.c and replace the function

#undef __FUNCT__
#define __FUNCT__ "MatView_SeqSBAIJ"
PetscErrorCode MatView_SeqSBAIJ(Mat A,PetscViewer viewer)
{
  PetscErrorCode ierr;
  PetscTruth     iascii,isdraw;
  FILE           *file = 0;

  PetscFunctionBegin;
  ierr = PetscTypeCompare((PetscObject)viewer,PETSC_VIEWER_ASCII,&iascii);CHKERRQ(ierr);
  ierr = PetscTypeCompare((PetscObject)viewer,PETSC_VIEWER_DRAW,&isdraw);CHKERRQ(ierr);
  if (iascii) {
    ierr = MatView_SeqSBAIJ_ASCII(A,viewer);CHKERRQ(ierr);
  } else if (isdraw) {
    ierr = MatView_SeqSBAIJ_Draw(A,viewer);CHKERRQ(ierr);
  } else {
    Mat B;
    ierr = MatConvert(A,MATSEQAIJ,MAT_INITIAL_MATRIX,&B);CHKERRQ(ierr);
    ierr = MatView(B,viewer);CHKERRQ(ierr);
    ierr = MatDestroy(B);CHKERRQ(ierr);
    ierr = PetscViewerBinaryGetInfoPointer(viewer,&file);CHKERRQ(ierr);
    if (file) {
      fprintf(file,"-matload_block_size %d\n",(int)A->rmap->bs);
    }
  }
  PetscFunctionReturn(0);
}

and then run make in that directory, it will now save the block size. Thanks for pointing out the problem, Barry > > Does anyone know how to write and load a SEQSBAIJ matrix? > > Thanks and best regards > Zhifeng > From hxie at umn.edu Mon Mar 1 12:30:37 2010 From: hxie at umn.edu (hxie at umn.edu) Date: 01 Mar 2010 12:30:37 -0600 Subject: [petsc-users] pilut from hypre In-Reply-To: References: Message-ID: Hi, I got the options for pilut from the help output: ------------ -pc_hypre_pilut_maxiter <-2> -pc_hypre_pilut_tol <-2> -pc_hypre_pilut_factorrowsize <-2> ------------ What are the default values? Is "-pc_hypre_pilut_tol" 0.001? Is "-pc_hypre_pilut_factorrowsize" 20? And I do not understand the option "-pc_hypre_pilut_maxiter". Thanks, Hui From tyoung at ippt.gov.pl Mon Mar 1 13:52:49 2010 From: tyoung at ippt.gov.pl (Toby D. 
Young) Date: Mon, 1 Mar 2010 20:52:49 +0100 (CET) Subject: [petsc-users] query on meaning of libmpiuni.* In-Reply-To: References: <20100301132049.051ae12b@wiatr.ippt.gov.pl> <87ocj8i55k.fsf@59A2.org> Message-ID: One final question please: > I understand that this library can be built as static or dynamic; right? Yet when I look at my build, only libmpiuni.a appears. Why is that? Can a libmpiuni.so exist? I build with --with-shared=1. Thanks. Toby ----- Toby D. Young Assistant Professor Philosophy-Physics Polish Academy of Sciences Warszawa, Polska www: http://www.ippt.gov.pl/~tyoung skype: stenografia From balay at mcs.anl.gov Mon Mar 1 14:14:21 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Mon, 1 Mar 2010 14:14:21 -0600 (CST) Subject: [petsc-users] query on meaning of libmpiuni.* In-Reply-To: References: <20100301132049.051ae12b@wiatr.ippt.gov.pl> <87ocj8i55k.fsf@59A2.org> Message-ID: On Mon, 1 Mar 2010, Toby D. Young wrote: > > One final question please: > > > I understand that this library can be built as static or dynamic; right? > > Yet when I look at my build, only libmpiuni.a appears. Why is that? Can a > libmpiuni.so exist? I build with --with-shared=1. Yes - it is a deficiency of the old build tools. libpetsc.so is created by linking in all dependencies [and if libmpiuni.a is a dependency - these symbols are added to libpetsc.so]. With petsc-dev both .a/.so versions of libmpiuni are merged into -lpetsc [or -lpetscsys for --with-single-library=0] Satish From knepley at gmail.com Mon Mar 1 17:00:13 2010 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 2 Mar 2010 10:00:13 +1100 Subject: [petsc-users] pilut from hypre In-Reply-To: References: Message-ID: You can see the default values using -ksp_view. You can consult the Hypre manual for the last option. 
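[One way to see those defaults in practice, as a sketch -- "./myapp" and the gmres choice are placeholders, not from the thread:]

```
mpiexec -n 2 ./myapp -ksp_type gmres -pc_type hypre -pc_hypre_type pilut -ksp_view
```

[-ksp_view prints the assembled solver configuration, including the pilut parameters actually in effect.]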
Matt On Tue, Mar 2, 2010 at 5:30 AM, wrote: > Hi, > > I got the options for pilut from the help output: > ------------ > -pc_hypre_pilut_maxiter <-2> -pc_hypre_pilut_tol <-2> > -pc_hypre_pilut_factorrowsize <-2> > ------------ > > What are the default values? Is "-pc_hypre_pilut_tol" 0.001? > Is "-pc_hypre_pilut_factorrowsize" 20? And I do not understand the option > "-pc_hypre_pilut_maxiter". > > Thanks, > Hui -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener From xy2102 at columbia.edu Tue Mar 2 16:24:35 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Tue, 02 Mar 2010 17:24:35 -0500 Subject: [petsc-users] Different communicators in the two objects Message-ID: <20100302172435.yxa5ec442oos480s@cubmail.cc.columbia.edu> Hi, I tried to save the solution from DMComposite() as two binary files (one for the da, and one for a scalar), but I get the error messages below when running on two processors: (gdb) where #0 0xb7f1a410 in __kernel_vsyscall () #1 0xb7c9a085 in raise () from /lib/tls/i686/cmov/libc.so.6 #2 0xb7c9ba01 in abort () from /lib/tls/i686/cmov/libc.so.6 #3 0x0873f24d in PetscAbortErrorHandler (line=697, fun=0x8868fd6 "VecView", file=0x8868e50 "vector.c", dir=0x8868e59 "src/vec/vec/interface/", n=80, p=1, mess=0xbfc40b74 "Different communicators in the two objects: Argument # 1 and 2", ctx=0x0) at errabort.c:62 #4 0x086b41be in PetscError (line=697, func=0x8868fd6 "VecView", file=0x8868e50 "vector.c", dir=0x8868e59 "src/vec/vec/interface/", n=80, p=1, mess=0x8869130 "Different communicators in the two objects: Argument # %d and %d") at err.c:482 #5 0x085f2356 in VecView (vec=0x8a14000, viewer=0x89cc6f0) at vector.c:697 #6 0x0804f30b in DumpSolutionToMatlab (dmmg=0x89b3370, fn=0xbfc416b7 "twmgoreggt_tx7_ty6_x7_y6_nl1_s100_t375000_r30_pn10.m") at 
twmgoreggt.c:430 #7 0x0804d72c in main (argc=Cannot access memory at address 0x2072 ) at twmgoreggt.c:234

The piece of code is:

X = DMMGGetx(dmmg);

ierr = DMCompositeGetEntries(dm,&da1,PETSC_IGNORE);CHKERRQ(ierr);
ierr = DAGetLocalInfo(da1,&info1);CHKERRQ(ierr);

// ierr = DMCompositeGetAccess(dm,X,&GRID,&c);CHKERRQ(ierr);
ierr = DMCompositeGetLocalVectors(dm,&GRID,&c);CHKERRQ(ierr);
ierr = DMCompositeScatter(dm,X,GRID,c);CHKERRQ(ierr);

if (parameters->adaptiveTimeStepSize) {
  sprintf(fileName, "g_atwgcqt2unffnictv_tx%i_ty%i_x%i_y%i_nl%i_nt%1.5f.dat", info1.mx, info1.my, parameters->mxgrid, parameters->mygrid, parameters->numberOfLevels, parameters->timeToGenerateGrid);
} else {
  sprintf(fileName, "g_twgcqt2unffnictv_tx%i_ty%i_x%i_y%i_nl%i_nt%1.5f.dat", info1.mx, info1.my, parameters->mxgrid, parameters->mygrid, parameters->numberOfLevels, parameters->timeToGenerateGrid);
}

PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName,FILE_MODE_WRITE,&viewer_g);
VecView(GRID,viewer_g);
ierr = PetscViewerDestroy(viewer_g);CHKERRQ(ierr);

if (parameters->adaptiveTimeStepSize) {
  sprintf(fileName, "g_atwgcqt2unffnictv_tx%i_ty%i_x%i_y%i_nl%i_nt%1.5f.c.dat", info1.mx, info1.my, parameters->mxgrid, parameters->mygrid, parameters->numberOfLevels, parameters->timeToGenerateGrid);
} else {
  sprintf(fileName, "g_twgcqt2unffnictv_tx%i_ty%i_x%i_y%i_nl%i_nt%1.5f.c.dat", info1.mx, info1.my, parameters->mxgrid, parameters->mygrid, parameters->numberOfLevels, parameters->timeToGenerateGrid);
}

int fd;
PetscViewerBinaryOpen(PETSC_COMM_SELF,fileName,FILE_MODE_WRITE,&viewer_out);
PetscViewerBinaryGetDescriptor(viewer_out,&fd);
PetscBinaryWrite(fd,&c[0],1,PETSC_DOUBLE,PETSC_FALSE);

// ierr = DMCompositeRestoreAccess(dm,X,&GRID,&c);CHKERRQ(ierr);
ierr = DMCompositeGather(dm,X,GRID,c);CHKERRQ(ierr);
ierr = DMCompositeRestoreLocalVectors(dm,&GRID,&c);CHKERRQ(ierr);
ierr = PetscViewerDestroy(viewer_out);CHKERRQ(ierr);

Debugging in gdb, in processor one:

Breakpoint 1, DumpSolutionToMatlab (dmmg=0x89946b0, fn=0xbfc63f17 
"twmgoreggt_tx7_ty6_x7_y6_nl1_s100_t375000_r30_pn10.m") at twmgoreggt.c:430 430 VecView(GRID,viewer_g); (gdb) s VecView (vec=0x8a139e0, viewer=0x89b9750) at vector.c:690 690 PetscFunctionBegin; (gdb) n 691 PetscValidHeaderSpecific(vec,VEC_COOKIE,1); (gdb) 692 PetscValidType(vec,1); (gdb) 693 if (!viewer) { (gdb) 696 PetscValidHeaderSpecific(viewer,PETSC_VIEWER_COOKIE,2); (gdb) 697 PetscCheckSameComm(vec,1,viewer,2); (gdb) s PMPI_Comm_compare (comm1=-2080374780, comm2=-2080374782, result=0xbfc63c14) at comm_compare.c:81 81 MPIU_THREADPRIV_GET; and in processor two: Breakpoint 1, DumpSolutionToMatlab (dmmg=0x89b3370, fn=0xbf867a67 "twmgoreggt_tx7_ty6_x7_y6_nl1_s100_t375000_r30_pn10.m") at twmgoreggt.c:430 430 VecView(GRID,viewer_g); (gdb) s VecView (vec=0x8a14000, viewer=0x89ae380) at vector.c:690 690 PetscFunctionBegin; (gdb) n 691 PetscValidHeaderSpecific(vec,VEC_COOKIE,1); (gdb) 692 PetscValidType(vec,1); (gdb) 693 if (!viewer) { (gdb) 696 PetscValidHeaderSpecific(viewer,PETSC_VIEWER_COOKIE,2); (gdb) 697 PetscCheckSameComm(vec,1,viewer,2); (gdb) s PMPI_Comm_compare (comm1=-2080374777, comm2=-2080374780, result=0xbf867764) at comm_compare.c:81 81 MPIU_THREADPRIV_GET; (gdb) In processor one, comm1=-2080374780, comm2=-2080374782, while in processor two, comm1=-2080374777, comm2=-2080374780. I do not know why the two communicators are different. Any idea about it? Thanks very much! - (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From bsmith at mcs.anl.gov Tue Mar 2 16:41:56 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 2 Mar 2010 16:41:56 -0600 Subject: [petsc-users] Different communicators in the two objects In-Reply-To: <20100302172435.yxa5ec442oos480s@cubmail.cc.columbia.edu> References: <20100302172435.yxa5ec442oos480s@cubmail.cc.columbia.edu> Message-ID: GRID is a local vector, which means each process has its own that lives on MPI_COMM_SELF. 
You are trying to view them with a viewer created with PETSC_COMM_WORLD. I don't think you want to view the local vectors, you want to view the parallel vectors. If you want to view the local vectors then each needs to go in its own file and you need to create PETSC_COMM_SELF viewers for each file. Barry On Mar 2, 2010, at 4:24 PM, (Rebecca) Xuefei YUAN wrote: > Hi, > > I tried to save the solution from DMComposte() as two binary > files(one for da, and one for a scalar), but I get the error > messages as below when running in two processors: > > (gdb) where > #0 0xb7f1a410 in __kernel_vsyscall () > #1 0xb7c9a085 in raise () from /lib/tls/i686/cmov/libc.so.6 > #2 0xb7c9ba01 in abort () from /lib/tls/i686/cmov/libc.so.6 > #3 0x0873f24d in PetscAbortErrorHandler (line=697, fun=0x8868fd6 > "VecView", > file=0x8868e50 "vector.c", dir=0x8868e59 "src/vec/vec/ > interface/", n=80, > p=1, > mess=0xbfc40b74 "Different communicators in the two objects: > Argument # 1 and 2", ctx=0x0) at errabort.c:62 > #4 0x086b41be in PetscError (line=697, func=0x8868fd6 "VecView", > file=0x8868e50 "vector.c", dir=0x8868e59 "src/vec/vec/ > interface/", n=80, > p=1, > mess=0x8869130 "Different communicators in the two objects: > Argument # %d and %d") at err.c:482 > #5 0x085f2356 in VecView (vec=0x8a14000, viewer=0x89cc6f0) at > vector.c:697 > #6 0x0804f30b in DumpSolutionToMatlab (dmmg=0x89b3370, > fn=0xbfc416b7 > "twmgoreggt_tx7_ty6_x7_y6_nl1_s100_t375000_r30_pn10.m") > at twmgoreggt.c:430 > #7 0x0804d72c in main (argc=Cannot access memory at address 0x2072 > ) at twmgoreggt.c:234 > > The piece of code is: > > X = DMMGGetx(dmmg); > > ierr = DMCompositeGetEntries(dm,&da1,PETSC_IGNORE);CHKERRQ(ierr); > ierr = DAGetLocalInfo(da1,&info1);CHKERRQ(ierr); > > // ierr = DMCompositeGetAccess(dm,X,&GRID,&c);CHKERRQ(ierr); > ierr = DMCompositeGetLocalVectors(dm,&GRID,&c);CHKERRQ(ierr); > ierr = DMCompositeScatter(dm,X,GRID,c);CHKERRQ(ierr); > > if(parameters->adaptiveTimeStepSize){ > 
sprintf(fileName, "g_atwgcqt2unffnictv_tx%i_ty%i_x%i_y%i_nl > %i_nt%1.5f.dat",info1.mx,info1.my, parameters->mxgrid,parameters- > >mygrid,parameters->numberOfLevels,parameters->timeToGenerateGrid); > }else{ > sprintf(fileName, "g_twgcqt2unffnictv_tx%i_ty%i_x%i_y%i_nl > %i_nt%1.5f.dat",info1.mx,info1.my, parameters->mxgrid,parameters- > >mygrid,parameters->numberOfLevels,parameters->timeToGenerateGrid); > } > > PetscViewerBinaryOpen > (PETSC_COMM_WORLD,fileName,FILE_MODE_WRITE,&viewer_g); > VecView(GRID,viewer_g); > ierr = PetscViewerDestroy (viewer_g); CHKERRQ (ierr); > if(parameters->adaptiveTimeStepSize){ > sprintf(fileName, "g_atwgcqt2unffnictv_tx%i_ty%i_x%i_y%i_nl > %i_nt%1.5f.c.dat",info1.mx,info1.my, parameters->mxgrid,parameters- > >mygrid,parameters->numberOfLevels,parameters->timeToGenerateGrid); > }else{ > sprintf(fileName, "g_twgcqt2unffnictv_tx%i_ty%i_x%i_y%i_nl > %i_nt%1.5f.c.dat",info1.mx,info1.my, parameters->mxgrid,parameters- > >mygrid,parameters->numberOfLevels,parameters->timeToGenerateGrid); > } > int fd; > > PetscViewerBinaryOpen > (PETSC_COMM_SELF,fileName,FILE_MODE_WRITE,&viewer_out); > PetscViewerBinaryGetDescriptor(viewer_out,&fd); > PetscBinaryWrite(fd,&c[0],1,PETSC_DOUBLE,PETSC_FALSE); > // ierr = DMCompositeRestoreAccess(dm,X,&GRID,&c);CHKERRQ(ierr); > ierr = DMCompositeGather(dm,X,GRID,c);CHKERRQ(ierr); > ierr = DMCompositeRestoreLocalVectors(dm,&GRID,&c);CHKERRQ(ierr); > ierr = PetscViewerDestroy (viewer_out); CHKERRQ (ierr); > > > As debugging in gdb, > > in processor one: > Breakpoint 1, DumpSolutionToMatlab (dmmg=0x89946b0, > fn=0xbfc63f17 > "twmgoreggt_tx7_ty6_x7_y6_nl1_s100_t375000_r30_pn10.m") > at twmgoreggt.c:430 > 430 VecView(GRID,viewer_g); > (gdb) s > VecView (vec=0x8a139e0, viewer=0x89b9750) at vector.c:690 > 690 PetscFunctionBegin; > (gdb) n > 691 PetscValidHeaderSpecific(vec,VEC_COOKIE,1); > (gdb) > 692 PetscValidType(vec,1); > (gdb) > 693 if (!viewer) { > (gdb) > 696 
PetscValidHeaderSpecific(viewer,PETSC_VIEWER_COOKIE,2); > (gdb) > 697 PetscCheckSameComm(vec,1,viewer,2); > (gdb) s > PMPI_Comm_compare (comm1=-2080374777, comm2=-2080374780, > result=0xbf867764) > at comm_compare.c:81 > 81 MPIU_THREADPRIV_GET; > (gdb) > > In processor one, comm1=-2080374780, comm2=-2080374782, while in > processor two, > comm1=-2080374777, comm2=-2080374780. I do not know why the > two communicators are different. > > Any idea about it? Thanks very much! > > - > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > From dlr at desktopaero.com Tue Mar 2 20:42:28 2010 From: dlr at desktopaero.com (David L Rodriguez) Date: Tue, 2 Mar 2010 18:42:28 -0800 Subject: [petsc-users] using PETSc libraries in an OpenMP application Message-ID: <2DEED04E-FFFA-4D57-BEC4-5FF5A0125A22@desktopaero.com> Hello. I am new to PETSc, so bear with me. I have gone through the archived questions but have not been able to find an answer to my question. So here goes. I have a huge CFD application that is parallelized exclusively with OpenMP. There is no use of MPI anywhere. 
I am making significant enhancements to this code, and one enhancement includes an additional elliptic solver that builds very sparse matrices. The matrix is huge and has significantly varying eigenvalues, and therefore I need a preconditioned iterative solver. I would like to implement PETSc in this code to solve this matrix, but from what I can tell, I can only run it in serial. Is there a way to call the parallel PETSc libraries from an application that is parallelized strictly with OpenMP? Any ideas would be very much appreciated. Thanks. ____________________ David L Rodriguez, Ph.D. Desktop Aeronautics, Inc. PH: 650-323-3141 dlr at desktopaero.com http://desktopaero.com From bsmith at mcs.anl.gov Tue Mar 2 21:04:30 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 2 Mar 2010 21:04:30 -0600 Subject: [petsc-users] using PETSc libraries in an OpenMP application In-Reply-To: <2DEED04E-FFFA-4D57-BEC4-5FF5A0125A22@desktopaero.com> References: <2DEED04E-FFFA-4D57-BEC4-5FF5A0125A22@desktopaero.com> Message-ID: <2505EA2B-9694-4440-AEC0-AAD2040CE6D0@mcs.anl.gov> Yes, see the manual page for PCOPENMP. Possibly you want to use the hypre BoomerAMG algebraic multigrid solver inside it. The command line argument would be mpiexec -n

programname -openmp_merge_size

-ksp_type preonly -pc_type openmp -openmp_pc_type hypre You need to start your program with PetscInitialize(), end it with PetscFinalize(), have your usual OpenMP code in between, and create a sequential AIJ matrix to hold the matrix and a sequential KSP object to hold the linear system. Barry On Mar 2, 2010, at 8:42 PM, David L Rodriguez wrote: > Hello. I am new to PETSc, so bear with me. I have gone through the > archived questions but have not been able to find an answer to my > question. So here goes. > > I have a huge CFD application that is parallelized exclusively with > OpenMP. There is no use of MPI anywhere. I am making significant > enhancements to this code, and one enhancement includes an additional > elliptic solver that builds very sparse matrices. The matrix is > huge and has significantly varying eigenvalues, and therefore I need > a preconditioned iterative solver. I would like to implement PETSc > in this code to solve this matrix, but from what I can tell, I > can only run it in serial. Is there a way to call the parallel > PETSc libraries from an application that is parallelized strictly > with OpenMP? > > Any ideas would be very much appreciated. Thanks. > > > > ____________________ > David L Rodriguez, Ph.D. > Desktop Aeronautics, Inc. > > PH: 650-323-3141 > dlr at desktopaero.com > http://desktopaero.com > > > > From vasoula_emp at hotmail.com Wed Mar 3 01:14:51 2010 From: vasoula_emp at hotmail.com (Vasia Kalavri) Date: Wed, 3 Mar 2010 09:14:51 +0200 Subject: [petsc-users] MatFactorNumeric_PASTIX error Message-ID: Hello. I use MatGetFactor(), MatCholeskyFactorSymbolic() and MatCholeskyFactorNumeric() with the Pastix solver in an iterative loop in order to solve a system. The program behaves as desired when using a matrix of size 118x118 and solves the system after 7 iterations. When using a matrix of size 1000x1000, the program crashes at the 2nd iteration just after calling MatCholeskyFactorNumeric(). 
The message I get is: [0]PETSC ERROR: [0] MatFactorNumeric_PASTIX line 347 src/mat/impls/aij/mpi/pastix/pastix.c [0]PETSC ERROR: [0] MatCholeskyFactorNumeric line 2517 src/mat/interface/matrix.c [0]PETSC ERROR: --------------------- Error Message ------------------------------------ [0]PETSC ERROR: Signal received! Option -start_in_debugger gives me the following output: [1]PETSC ERROR: MPI error 15 [vasia-desktop:15869] *** Process received signal *** [vasia-desktop:15869] Signal: Aborted (6) [vasia-desktop:15869] Signal code: (-6) [vasia-desktop:15869] [ 0] [0xb7eeb440] [vasia-desktop:15869] [ 1] /lib/tls/i686/cmov/libc.so.6(abort+0x101) [0xb7b4ea01] [vasia-desktop:15869] [ 2] ./ex1(Petsc_MPI_DebuggerOnError+0) [0x843d50e] [vasia-desktop:15869] [ 3] /usr/lib/libmpi.so.0(ompi_errhandler_invoke+0x113) [0xb7e773e3] [vasia-desktop:15869] [ 4] /usr/lib/libmpi.so.0(MPI_Recv+0x1b5) [0xb7eac125] [vasia-desktop:15869] [ 5] ./ex1(Dreceive_one_fanin+0x8e) [0x849aa05] [vasia-desktop:15869] [ 6] ./ex1(Dwait_contrib_comp_1d+0x43) [0x84a69f9] [vasia-desktop:15869] [ 7] ./ex1(Dsopalin_smp+0x2be) [0x84a6d6f] [vasia-desktop:15869] [ 8] /lib/tls/i686/cmov/libpthread.so.0 [0xb7d6d4fb] [vasia-desktop:15869] [ 9] /lib/tls/i686/cmov/libc.so.6(clone+0x5e) [0xb7bf8e5e] [vasia-desktop:15869] *** End of error message *** . . . 
[vasia-desktop:15868] *** Process received signal *** [vasia-desktop:15868] Signal: Aborted (6) [vasia-desktop:15868] Signal code: (-6) [vasia-desktop:15868] [ 0] [0xb7fdd440] [vasia-desktop:15868] [ 1] /lib/tls/i686/cmov/libc.so.6(abort+0x101) [0xb7c40a01] [vasia-desktop:15868] [ 2] ./ex1 [0x848d825] [vasia-desktop:15868] [ 3] ./ex1(PetscError+0x317) [0x840d5c9] [vasia-desktop:15868] [ 4] ./ex1(PetscDefaultSignalHandler+0x3b4) [0x848e32f] [vasia-desktop:15868] [ 5] ./ex1 [0x848df5b] [vasia-desktop:15868] [ 6] [0xb7fdd420] [vasia-desktop:15868] [ 7] ./ex1(sopalin_launch_thread+0x560) [0x84c8c70] [vasia-desktop:15868] [ 8] ./ex1(Dsopalin_thread+0x168) [0x84a745e] [vasia-desktop:15868] [ 9] ./ex1(pastix_task_sopalin+0x52b) [0x8497f1b] [vasia-desktop:15868] [10] ./ex1(pastix+0x261) [0x8499804] [vasia-desktop:15868] [11] ./ex1(MatFactorNumeric_PaStiX+0xff6) [0x817777c] [vasia-desktop:15868] [12] ./ex1(MatCholeskyFactorNumeric+0x760) [0x80957d4] [vasia-desktop:15868] [13] ./ex1(main+0x3a01) [0x807e5f9] [vasia-desktop:15868] [14] /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe0) [0xb7c2a450] [vasia-desktop:15868] [15] ./ex1 [0x8077291] [vasia-desktop:15868] *** End of error message *** Any suggestions? Thanks, Vasia. From bsmith at mcs.anl.gov Wed Mar 3 08:58:58 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 3 Mar 2010 08:58:58 -0600 Subject: [petsc-users] MatFactorNumeric_PASTIX error In-Reply-To: References: Message-ID: <6C7E6897-1B3E-4450-B548-BE64E339EE5A@mcs.anl.gov> Build PETSc and Pastix with debugging turned on: ./configure --with-debugging=1 --download-pastix and see if it still crashes. If it does, then in the debugger you can get detailed information about exactly where/why it crashed. 
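[The debugging workflow described here, collected into one sketch; "ex1" and the two-process run are taken from the report above, and -start_in_debugger opens one gdb window per process:]

```
./configure --with-debugging=1 --download-pastix
make
mpiexec -n 2 ./ex1 -start_in_debugger gdb
# when the signal is raised, type "bt" in each gdb window for a full stack trace
```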
Are you changing the nonzero structure of the matrix? Barry On Mar 3, 2010, at 1:14 AM, Vasia Kalavri wrote: > Hello. > I use MatGetFactor(), MatCholeskyFactorSymbolic() and > MatCholeskyFactorNumeric() with Pastix solver in an iterative loop > in order to solve a system. > The program behaves as desired when using a matrix of size 118x118 > and solves the system after 7 iterations. > When using a matrix of size 1000x1000, the program crashes at the > 2nd iteration just after calling MatCholeskyFactorNumeric(). > The message I get is: > > [0]PETSC ERROR: [0] MatFactorNumeric_PASTIX line 347 src/mat/impls/ > aij/mpi/pastix/pastix.c > [0]PETSC ERROR: [0] MatCholeskyFactorNumeric line 2517 src/mat/ > interface/matrix.c > [0]PETSC ERROR: --------------------- Error Message > ------------------------------------ > [0]PETSC ERROR: Signal received! > > Option -start_in_debugger gives me the following output: > > [1]PETSC ERROR: MPI error 15 > [vasia-desktop:15869] *** Process received signal *** > [vasia-desktop:15869] Signal: Aborted (6) > [vasia-desktop:15869] Signal code: (-6) > [vasia-desktop:15869] [ 0] [0xb7eeb440] > [vasia-desktop:15869] [ 1] /lib/tls/i686/cmov/libc.so.6(abort+0x101) > [0xb7b4ea01] > [vasia-desktop:15869] [ 2] ./ex1(Petsc_MPI_DebuggerOnError+0) > [0x843d50e] > [vasia-desktop:15869] [ 3] /usr/lib/libmpi.so. > 0(ompi_errhandler_invoke+0x113) [0xb7e773e3] > [vasia-desktop:15869] [ 4] /usr/lib/libmpi.so.0(MPI_Recv+0x1b5) > [0xb7eac125] > [vasia-desktop:15869] [ 5] ./ex1(Dreceive_one_fanin+0x8e) [0x849aa05] > [vasia-desktop:15869] [ 6] ./ex1(Dwait_contrib_comp_1d+0x43) > [0x84a69f9] > [vasia-desktop:15869] [ 7] ./ex1(Dsopalin_smp+0x2be) [0x84a6d6f] > [vasia-desktop:15869] [ 8] /lib/tls/i686/cmov/libpthread.so.0 > [0xb7d6d4fb] > [vasia-desktop:15869] [ 9] /lib/tls/i686/cmov/libc.so.6(clone+0x5e) > [0xb7bf8e5e] > [vasia-desktop:15869] *** End of error message *** > . > . > . 
> [vasia-desktop:15868] *** Process received signal *** > [vasia-desktop:15868] Signal: Aborted (6) > [vasia-desktop:15868] Signal code: (-6) > [vasia-desktop:15868] [ 0] [0xb7fdd440] > [vasia-desktop:15868] [ 1] /lib/tls/i686/cmov/libc.so.6(abort+0x101) > [0xb7c40a01] > [vasia-desktop:15868] [ 2] ./ex1 [0x848d825] > [vasia-desktop:15868] [ 3] ./ex1(PetscError+0x317) [0x840d5c9] > [vasia-desktop:15868] [ 4] ./ex1(PetscDefaultSignalHandler+0x3b4) > [0x848e32f] > [vasia-desktop:15868] [ 5] ./ex1 [0x848df5b] > [vasia-desktop:15868] [ 6] [0xb7fdd420] > [vasia-desktop:15868] [ 7] ./ex1(sopalin_launch_thread+0x560) > [0x84c8c70] > [vasia-desktop:15868] [ 8] ./ex1(Dsopalin_thread+0x168) [0x84a745e] > [vasia-desktop:15868] [ 9] ./ex1(pastix_task_sopalin+0x52b) > [0x8497f1b] > [vasia-desktop:15868] [10] ./ex1(pastix+0x261) [0x8499804] > [vasia-desktop:15868] [11] ./ex1(MatFactorNumeric_PaStiX+0xff6) > [0x817777c] > [vasia-desktop:15868] [12] ./ex1(MatCholeskyFactorNumeric+0x760) > [0x80957d4] > [vasia-desktop:15868] [13] ./ex1(main+0x3a01) [0x807e5f9] > [vasia-desktop:15868] [14] /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe0) [0xb7c2a450] > [vasia-desktop:15868] [15] ./ex1 [0x8077291] > [vasia-desktop:15868] *** End of error message *** > > Any suggestions? > > Thanks, > Vasia. From hxie at umn.edu Sat Mar 6 15:41:12 2010 From: hxie at umn.edu (hxie at umn.edu) Date: 06 Mar 2010 15:41:12 -0600 Subject: [petsc-users] amg from hypre In-Reply-To: References: Message-ID: Hi, I am solving the incompressible Navier-Stokes equations. The code implements the Newton iteration and just uses the KSP solver for each iteration. It takes 14 nonlinear iterations (~8min) to converge using the default solver in PETSc. But it takes 6 nonlinear iterations (~30min) using boomeramg from hypre. The AMG takes fewer iterations but too much CPU time. Is this normal? 
One friend told me the reason may be that I did not call the destroy function to free the memory in each Newton iteration (I called PetscFinalize after all the Newton iterations). How can I call PCDestroy_HYPRE from Fortran code? Will there be a significant difference in CPU time if I call AMG directly from hypre? Thanks for any help. Bests, Hui From knepley at gmail.com Sat Mar 6 15:46:48 2010 From: knepley at gmail.com (Matthew Knepley) Date: Sat, 6 Mar 2010 15:46:48 -0600 Subject: [petsc-users] amg from hypre In-Reply-To: References: Message-ID: On Sat, Mar 6, 2010 at 3:41 PM, wrote: > Hi, > > I am solving incompressible navier-stokes equations. The code implements > the Newton iteration and just uses the KSP solver for each iteration. It > takes 14 nonlinear iterations (~8min) to converge using the default solver > in PETSc. But it takes 6 nonlinear iterations (~30min) using boomeramg from > hypre. The amg taks less iterations but takes too much cputime. Is this > normal? One friend told me the reason may be that I did not call the destroy > function to free the memory in each Newton iteration (I called PetscFinalize > after all the Newton iterations). How can I call the PCDestroy_HYPRE from a > fortran code? Will there be significant difference in cputime to call amg > directly from hypre? Thanks for any help. > This has nothing to do with the memory. BoomerAMG does a lot of processing. Matt > Bests, > Hui -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed...
URL: From vasoula_emp at hotmail.com Sun Mar 7 10:14:26 2010 From: vasoula_emp at hotmail.com (Vasia Kalavri) Date: Sun, 7 Mar 2010 18:14:26 +0200 Subject: [petsc-users] MatFactorNumeric_PASTIX error In-Reply-To: <6C7E6897-1B3E-4450-B548-BE64E339EE5A@mcs.anl.gov> References: , <6C7E6897-1B3E-4450-B548-BE64E339EE5A@mcs.anl.gov> Message-ID: Hello again. > From: bsmith at mcs.anl.gov > To: petsc-users at mcs.anl.gov > Date: Wed, 3 Mar 2010 08:58:58 -0600 > Subject: Re: [petsc-users] MatFactorNumeric_PASTIX error > > > Build PETSc and Pastix with debugging turned on. > > ./configure --with-debugging=1 --download-pastix and see if it > still crashes. If it does then in the debugger you can get detailed > information about exactly where/why it crashed. I rebuilt with debugging turned on but unfortunately, it still crashes. Where can I find the debugger detailed info? I added some prints inside MatFactorNumeric_PaStiX() function and found out that the program crashes when calling pastix() function, line 441 in pastix.c > Are you changing the nonzero structure of the matrix? No I am not. In fact, I ran the program without updating the input matrix after the first iteration and it still crashed! > > Barry > > On Mar 3, 2010, at 1:14 AM, Vasia Kalavri wrote: > > > Hello. > > I use MatGetFactor(), MatCholeskyFactorSymbolic() and > > MatCholeskyFactorNumeric() with Pastix solver in an iterative loop > > in order to solve a system. > > The program behaves as desired when using a matrix of size 118x118 > > and solves the system after 7 iterations. > > When using a matrix of size 1000x1000, the program crashes at the > > 2nd iteration just after calling MatCholeskyFactorNumeric(). 
> > The message I get is: > > > > [0]PETSC ERROR: [0] MatFactorNumeric_PASTIX line 347 src/mat/impls/ > > aij/mpi/pastix/pastix.c > > [0]PETSC ERROR: [0] MatCholeskyFactorNumeric line 2517 src/mat/ > > interface/matrix.c > > [0]PETSC ERROR: --------------------- Error Message > > ------------------------------------ > > [0]PETSC ERROR: Signal received! > > > > Option -start_in_debugger gives me the following output: > > > > [1]PETSC ERROR: MPI error 15 > > [vasia-desktop:15869] *** Process received signal *** > > [vasia-desktop:15869] Signal: Aborted (6) > > [vasia-desktop:15869] Signal code: (-6) > > [vasia-desktop:15869] [ 0] [0xb7eeb440] > > [vasia-desktop:15869] [ 1] /lib/tls/i686/cmov/libc.so.6(abort+0x101) > > [0xb7b4ea01] > > [vasia-desktop:15869] [ 2] ./ex1(Petsc_MPI_DebuggerOnError+0) > > [0x843d50e] > > [vasia-desktop:15869] [ 3] /usr/lib/libmpi.so. > > 0(ompi_errhandler_invoke+0x113) [0xb7e773e3] > > [vasia-desktop:15869] [ 4] /usr/lib/libmpi.so.0(MPI_Recv+0x1b5) > > [0xb7eac125] > > [vasia-desktop:15869] [ 5] ./ex1(Dreceive_one_fanin+0x8e) [0x849aa05] > > [vasia-desktop:15869] [ 6] ./ex1(Dwait_contrib_comp_1d+0x43) > > [0x84a69f9] > > [vasia-desktop:15869] [ 7] ./ex1(Dsopalin_smp+0x2be) [0x84a6d6f] > > [vasia-desktop:15869] [ 8] /lib/tls/i686/cmov/libpthread.so.0 > > [0xb7d6d4fb] > > [vasia-desktop:15869] [ 9] /lib/tls/i686/cmov/libc.so.6(clone+0x5e) > > [0xb7bf8e5e] > > [vasia-desktop:15869] *** End of error message *** > > . > > . > > . 
> > [vasia-desktop:15868] *** Process received signal *** > > [vasia-desktop:15868] Signal: Aborted (6) > > [vasia-desktop:15868] Signal code: (-6) > > [vasia-desktop:15868] [ 0] [0xb7fdd440] > > [vasia-desktop:15868] [ 1] /lib/tls/i686/cmov/libc.so.6(abort+0x101) > > [0xb7c40a01] > > [vasia-desktop:15868] [ 2] ./ex1 [0x848d825] > > [vasia-desktop:15868] [ 3] ./ex1(PetscError+0x317) [0x840d5c9] > > [vasia-desktop:15868] [ 4] ./ex1(PetscDefaultSignalHandler+0x3b4) > > [0x848e32f] > > [vasia-desktop:15868] [ 5] ./ex1 [0x848df5b] > > [vasia-desktop:15868] [ 6] [0xb7fdd420] > > [vasia-desktop:15868] [ 7] ./ex1(sopalin_launch_thread+0x560) > > [0x84c8c70] > > [vasia-desktop:15868] [ 8] ./ex1(Dsopalin_thread+0x168) [0x84a745e] > > [vasia-desktop:15868] [ 9] ./ex1(pastix_task_sopalin+0x52b) > > [0x8497f1b] > > [vasia-desktop:15868] [10] ./ex1(pastix+0x261) [0x8499804] > > [vasia-desktop:15868] [11] ./ex1(MatFactorNumeric_PaStiX+0xff6) > > [0x817777c] > > [vasia-desktop:15868] [12] ./ex1(MatCholeskyFactorNumeric+0x760) > > [0x80957d4] > > [vasia-desktop:15868] [13] ./ex1(main+0x3a01) [0x807e5f9] > > [vasia-desktop:15868] [14] /lib/tls/i686/cmov/libc.so. > > 6(__libc_start_main+0xe0) [0xb7c2a450] > > [vasia-desktop:15868] [15] ./ex1 [0x8077291] > > [vasia-desktop:15868] *** End of error message *** > > > > Any suggestions? > > > > Thanks, > > Vasia. > > Hotmail: Powerful Free email with security by Microsoft. Get it now. > Vasia. _________________________________________________________________ Hotmail: Trusted email with Microsoft?s powerful SPAM protection. https://signup.live.com/signup.aspx?id=60969 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Sun Mar 7 11:11:04 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 7 Mar 2010 11:11:04 -0600 Subject: [petsc-users] MatFactorNumeric_PASTIX error In-Reply-To: References: , <6C7E6897-1B3E-4450-B548-BE64E339EE5A@mcs.anl.gov> Message-ID: <65329D09-93C4-46A3-958C-B93ED1164A2E@mcs.anl.gov> I have fixed a couple of bugs in the PETSc PaStiX interface. You will need to get and install petsc-dev http://www.mcs.anl.gov/petsc/petsc-as/developers/index.html to use the new code. If there is still a problem then please configure with --with-debugging=1 and send all the output when it crashes. Barry On Mar 7, 2010, at 10:14 AM, Vasia Kalavri wrote: > Hello again. > > > From: bsmith at mcs.anl.gov > > To: petsc-users at mcs.anl.gov > > Date: Wed, 3 Mar 2010 08:58:58 -0600 > > Subject: Re: [petsc-users] MatFactorNumeric_PASTIX error > > > > > > Build PETSc and Pastix with debugging turned on. > > > > ./configure --with-debugging=1 --download-pastix and see if it > > still crashes. If it does then in the debugger you can get detailed > > information about exactly where/why it crashed. > > I rebuilt with debugging turned on but unfortunately, it still > crashes. > Where can I find the debugger detailed info? > I added some prints inside MatFactorNumeric_PaStiX() function > and found out that the program crashes when calling pastix() function, > line 441 in pastix.c > > > Are you changing the nonzero structure of the matrix? > > No I am not. > In fact, I ran the program without updating the input matrix after > the first iteration and it still crashed! > > > > > Barry > > > > On Mar 3, 2010, at 1:14 AM, Vasia Kalavri wrote: > > > > > Hello. > > > I use MatGetFactor(), MatCholeskyFactorSymbolic() and > > > MatCholeskyFactorNumeric() with Pastix solver in an iterative loop > > > in order to solve a system. > > > The program behaves as desired when using a matrix of size 118x118 > > > and solves the system after 7 iterations.
> > > When using a matrix of size 1000x1000, the program crashes at the > > > 2nd iteration just after calling MatCholeskyFactorNumeric(). > > > The message I get is: > > > > > > [0]PETSC ERROR: [0] MatFactorNumeric_PASTIX line 347 src/mat/ > impls/ > > > aij/mpi/pastix/pastix.c > > > [0]PETSC ERROR: [0] MatCholeskyFactorNumeric line 2517 src/mat/ > > > interface/matrix.c > > > [0]PETSC ERROR: --------------------- Error Message > > > ------------------------------------ > > > [0]PETSC ERROR: Signal received! > > > > > > Option -start_in_debugger gives me the following output: > > > > > > [1]PETSC ERROR: MPI error 15 > > > [vasia-desktop:15869] *** Process received signal *** > > > [vasia-desktop:15869] Signal: Aborted (6) > > > [vasia-desktop:15869] Signal code: (-6) > > > [vasia-desktop:15869] [ 0] [0xb7eeb440] > > > [vasia-desktop:15869] [ 1] /lib/tls/i686/cmov/libc.so.6(abort > +0x101) > > > [0xb7b4ea01] > > > [vasia-desktop:15869] [ 2] ./ex1(Petsc_MPI_DebuggerOnError+0) > > > [0x843d50e] > > > [vasia-desktop:15869] [ 3] /usr/lib/libmpi.so. > > > 0(ompi_errhandler_invoke+0x113) [0xb7e773e3] > > > [vasia-desktop:15869] [ 4] /usr/lib/libmpi.so.0(MPI_Recv+0x1b5) > > > [0xb7eac125] > > > [vasia-desktop:15869] [ 5] ./ex1(Dreceive_one_fanin+0x8e) > [0x849aa05] > > > [vasia-desktop:15869] [ 6] ./ex1(Dwait_contrib_comp_1d+0x43) > > > [0x84a69f9] > > > [vasia-desktop:15869] [ 7] ./ex1(Dsopalin_smp+0x2be) [0x84a6d6f] > > > [vasia-desktop:15869] [ 8] /lib/tls/i686/cmov/libpthread.so.0 > > > [0xb7d6d4fb] > > > [vasia-desktop:15869] [ 9] /lib/tls/i686/cmov/libc.so.6(clone > +0x5e) > > > [0xb7bf8e5e] > > > [vasia-desktop:15869] *** End of error message *** > > > . > > > . > > > . 
> > > [vasia-desktop:15868] *** Process received signal *** > > > [vasia-desktop:15868] Signal: Aborted (6) > > > [vasia-desktop:15868] Signal code: (-6) > > > [vasia-desktop:15868] [ 0] [0xb7fdd440] > > > [vasia-desktop:15868] [ 1] /lib/tls/i686/cmov/libc.so.6(abort > +0x101) > > > [0xb7c40a01] > > > [vasia-desktop:15868] [ 2] ./ex1 [0x848d825] > > > [vasia-desktop:15868] [ 3] ./ex1(PetscError+0x317) [0x840d5c9] > > > [vasia-desktop:15868] [ 4] ./ex1(PetscDefaultSignalHandler+0x3b4) > > > [0x848e32f] > > > [vasia-desktop:15868] [ 5] ./ex1 [0x848df5b] > > > [vasia-desktop:15868] [ 6] [0xb7fdd420] > > > [vasia-desktop:15868] [ 7] ./ex1(sopalin_launch_thread+0x560) > > > [0x84c8c70] > > > [vasia-desktop:15868] [ 8] ./ex1(Dsopalin_thread+0x168) > [0x84a745e] > > > [vasia-desktop:15868] [ 9] ./ex1(pastix_task_sopalin+0x52b) > > > [0x8497f1b] > > > [vasia-desktop:15868] [10] ./ex1(pastix+0x261) [0x8499804] > > > [vasia-desktop:15868] [11] ./ex1(MatFactorNumeric_PaStiX+0xff6) > > > [0x817777c] > > > [vasia-desktop:15868] [12] ./ex1(MatCholeskyFactorNumeric+0x760) > > > [0x80957d4] > > > [vasia-desktop:15868] [13] ./ex1(main+0x3a01) [0x807e5f9] > > > [vasia-desktop:15868] [14] /lib/tls/i686/cmov/libc.so. > > > 6(__libc_start_main+0xe0) [0xb7c2a450] > > > [vasia-desktop:15868] [15] ./ex1 [0x8077291] > > > [vasia-desktop:15868] *** End of error message *** > > > > > > Any suggestions? > > > > > > Thanks, > > > Vasia. > > > Hotmail: Powerful Free email with security by Microsoft. Get it > now. > > > Vasia. > > Hotmail: Trusted email with Microsoft?s powerful SPAM protection. > Sign up now. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From torres.pedrozpk at gmail.com Sun Mar 7 11:48:53 2010 From: torres.pedrozpk at gmail.com (Pedro Torres) Date: Sun, 7 Mar 2010 14:48:53 -0300 Subject: [petsc-users] KSPSetUp Message-ID: Hello, I am trying to solve successive linear systems and I see in some examples a call to the function KSPSetUp(), but I don't clearly see its purpose. When should I call this function? Thanks in advance. Pedro Torres -------------- next part -------------- An HTML attachment was scrubbed... URL: From knepley at gmail.com Sun Mar 7 11:59:22 2010 From: knepley at gmail.com (Matthew Knepley) Date: Sun, 7 Mar 2010 11:59:22 -0600 Subject: [petsc-users] KSPSetUp In-Reply-To: References: Message-ID: This method will be called automatically. Only expert users may want to call this manually. Thanks, Matt On Sun, Mar 7, 2010 at 11:48 AM, Pedro Torres wrote: > Hello, > > I trying to solve successive linear system and I see in some examples the > call of the funtion KSPSetUP(), but I don't see clearly the purpose. When > should I need to call this function?. Thanks in advance. > > > Pedro Torres > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From torres.pedrozpk at gmail.com Sun Mar 7 12:03:10 2010 From: torres.pedrozpk at gmail.com (Pedro Torres) Date: Sun, 7 Mar 2010 15:03:10 -0300 Subject: [petsc-users] KSPSetUp Message-ID: Hello, I am trying to solve successive linear systems and I see in some examples a call to the function KSPSetUp(), but I don't clearly see its purpose. When should I call this function? Thanks in advance. Pedro Torres -------------- next part -------------- An HTML attachment was scrubbed...
URL: From torres.pedrozpk at gmail.com Sun Mar 7 12:51:55 2010 From: torres.pedrozpk at gmail.com (Pedro Torres) Date: Sun, 7 Mar 2010 15:51:55 -0300 Subject: [petsc-users] KSPSetUp In-Reply-To: References: Message-ID: Thank you! Regards Pedro 2010/3/7 Matthew Knepley > This method will be called automatically. Only expert users may want to > call > this manually. > > Thanks, > > Matt > > > On Sun, Mar 7, 2010 at 11:48 AM, Pedro Torres wrote: > >> Hello, >> >> I trying to solve successive linear system and I see in some examples the >> call of the funtion KSPSetUP(), but I don't see clearly the purpose. When >> should I need to call this function?. Thanks in advance. >> >> >> Pedro Torres >> > > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Mon Mar 8 04:00:53 2010 From: jed at 59A2.org (Jed Brown) Date: Mon, 08 Mar 2010 11:00:53 +0100 Subject: [petsc-users] amg from hypre In-Reply-To: References: Message-ID: <87y6i3t9nu.fsf@59A2.org> On 06 Mar 2010 15:41:12 -0600, hxie at umn.edu wrote: > Hi, > > I am solving incompressible navier-stokes equations. The code implements > the Newton iteration and just uses the KSP solver for each iteration. It > takes 14 nonlinear iterations (~8min) to converge using the default solver > in PETSc. But it takes 6 nonlinear iterations (~30min) using boomeramg from > hypre. Can you confirm that these are the number of NONLINEAR iterations? Changing the preconditioner should not change this if it is behaving correctly. Are you solving incompressible NS in a coupled manner, sending the indefinite (momentum + continuity equations) to the preconditioner? In this case, BoomerAMG can be giving you completely the wrong answer.
For example, on a different Navier-Stokes example, it produces very nice preconditioned residuals 0 KSP Residual norm 9.526419214320e+00 1 KSP Residual norm 9.406988599189e-02 2 KSP Residual norm 2.767040414824e-03 3 KSP Residual norm 1.649604812054e-03 4 KSP Residual norm 1.611023301144e-03 5 KSP Residual norm 7.073431543229e-04 6 KSP Residual norm 1.404066303578e-04 7 KSP Residual norm 1.347821147393e-04 8 KSP Residual norm 7.713640141302e-05 9 KSP Residual norm 5.121198361232e-05 10 KSP Residual norm 4.790100151952e-05 11 KSP Residual norm 1.769376148248e-05 12 KSP Residual norm 1.671836687758e-05 13 KSP Residual norm 9.561298137614e-06 14 KSP Residual norm 6.509746067580e-07 15 KSP Residual norm 5.863323408081e-07 16 KSP Residual norm 5.120192651612e-07 17 KSP Residual norm 3.403122131501e-07 18 KSP Residual norm 3.342529361191e-07 19 KSP Residual norm 9.178974981883e-08 but converges to completely the WRONG answer. This is because the preconditioner is singular, and the unpreconditioned residuals (using right-preconditioned GMRES) look like 0 KSP Residual norm 9.899494936612e+05 1 KSP Residual norm 3.460941545811e-02 2 KSP Residual norm 3.411670842883e-02 3 KSP Residual norm 3.406431296292e-02 4 KSP Residual norm 3.376198186394e-02 5 KSP Residual norm 3.374434209905e-02 6 KSP Residual norm 3.370086274150e-02 7 KSP Residual norm 3.334190783058e-02 8 KSP Residual norm 3.321057363881e-02 9 KSP Residual norm 3.321055343672e-02 10 KSP Residual norm 3.318392045928e-02 which is NO CONVERGENCE at all after the penalty boundary conditions were enforced. So CPU time aside, the nonlinear iteration count could occur because Newton is failing due to an incorrect search direction or a vanishing step size, but the answer could be completely wrong. You have to clear up subtleties like getting the wrong answer before looking any further at the amount of time the various preconditioners are taking. 
Always run with -snes_converged_reason -ksp_converged_reason and check the unpreconditioned residuals, especially when dealing with indefinite problems. Jed From juhaj at iki.fi Mon Mar 8 07:12:47 2010 From: juhaj at iki.fi (Juha Jäykkä) Date: Mon, 8 Mar 2010 13:12:47 +0000 Subject: [petsc-users] DA_XYZGHOSTED Message-ID: <201003081312.49853.juhaj@iki.fi> Hi! I was wondering what's the status of DA_XYZGHOSTED? It seems to be implemented for 1D DAs only. Putting the boundary data on the inside (i.e. the non-ghosted part) of the DA is not very nice, since it necessitates separate handling of the for-loops on the ranks that happen to be on the physical boundary of the lattice. This makes the code unnecessarily complicated from the user's point of view. Having the ghosts available at the physical boundaries as well would give an ideal place to put the boundary data. I understand these ghost values are never updated (if running with no periodicity), so just setting them at the beginning would be correct. Cheers, Juha -- ----------------------------------------------- | Juha Jäykkä, juhaj at iki.fi | | http://www.maths.leeds.ac.uk/~juhaj | ----------------------------------------------- -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part. URL: From hxie at umn.edu Mon Mar 8 09:05:43 2010 From: hxie at umn.edu (hxie at umn.edu) Date: 08 Mar 2010 09:05:43 -0600 Subject: [petsc-users] amg from hypre In-Reply-To: <87y6i3t9nu.fsf@59A2.org> References: <87y6i3t9nu.fsf@59A2.org> Message-ID: On Mar 8 2010, Jed Brown wrote: >On 06 Mar 2010 15:41:12 -0600, hxie at umn.edu wrote: >> Hi, >> >> I am solving incompressible navier-stokes equations. The code implements >> the Newton iteration and just uses the KSP solver for each iteration. It >> takes 14 nonlinear iterations (~8min) to converge using the default >> solver >> in PETSc.
But it takes 6 nonlinear iterations (~30min) using boomeramg >> from >> hypre. > >Can you confirm that these are the number of NONLINEAR iterations? Yes. >Changing the preconditioner should not change this if it behaving >correctly. Are you solving incompressible NS in a coupled manner, >sending the indefinite (momentum + continuity equations) to the >preconditioner? It is a temperature-driven flow. The temperature field is also included. We use a high-order method (2nd order for velocity) to discretize the system and solve it in a coupled manner. We do a small shift in the preconditioner to avoid zero diagonals. >In this case, BoomerAMG can be giving you completely >the wrong answer. For example, on a different Navier-Stokes example, it >produces very nice preconditioned residuals > > 0 KSP Residual norm 9.526419214320e+00 > 1 KSP Residual norm 9.406988599189e-02 > 2 KSP Residual norm 2.767040414824e-03 > 3 KSP Residual norm 1.649604812054e-03 > 4 KSP Residual norm 1.611023301144e-03 > 5 KSP Residual norm 7.073431543229e-04 > 6 KSP Residual norm 1.404066303578e-04 > 7 KSP Residual norm 1.347821147393e-04 > 8 KSP Residual norm 7.713640141302e-05 > 9 KSP Residual norm 5.121198361232e-05 > 10 KSP Residual norm 4.790100151952e-05 > 11 KSP Residual norm 1.769376148248e-05 > 12 KSP Residual norm 1.671836687758e-05 > 13 KSP Residual norm 9.561298137614e-06 > 14 KSP Residual norm 6.509746067580e-07 > 15 KSP Residual norm 5.863323408081e-07 > 16 KSP Residual norm 5.120192651612e-07 > 17 KSP Residual norm 3.403122131501e-07 > 18 KSP Residual norm 3.342529361191e-07 > 19 KSP Residual norm 9.178974981883e-08 > >but converges to completely the WRONG answer.
This is because the >preconditioner is singular, and the unpreconditioned residuals (using >right-preconditioned GMRES) look like > > 0 KSP Residual norm 9.899494936612e+05 > 1 KSP Residual norm 3.460941545811e-02 > 2 KSP Residual norm 3.411670842883e-02 > 3 KSP Residual norm 3.406431296292e-02 > 4 KSP Residual norm 3.376198186394e-02 > 5 KSP Residual norm 3.374434209905e-02 > 6 KSP Residual norm 3.370086274150e-02 > 7 KSP Residual norm 3.334190783058e-02 > 8 KSP Residual norm 3.321057363881e-02 > 9 KSP Residual norm 3.321055343672e-02 > 10 KSP Residual norm 3.318392045928e-02 > >which is NO CONVERGENCE at all after the penalty boundary conditions >were enforced. So CPU time aside, the nonlinear iteration count could >occur because Newton is failing due to an incorrect search direction or >a vanishing step size, but the answer could be completely wrong. > >You have to clear up subtleties like getting the wrong answer before >looking any further at the amount of time the various preconditioners >are taking. Always run with > > -snes_converged_reason -ksp_converged_reason > >and check the unpreconditioned residuals, especially when dealing with >indefinite problems. > >Jed > The result looks correct. I will double-check by computing the true residual b-Ax. For all the methods I set the same options "-ksp_max_it 500 -ksp_gmres_restart 50". And each nonlinear iteration will take 500 GMRES iterations. Maybe the 500 V-cycles take much more time than ILU(0). Will there be a big difference if I implement the AMG directly from hypre? Thanks.
Bests, Hui From knepley at gmail.com Mon Mar 8 09:08:39 2010 From: knepley at gmail.com (Matthew Knepley) Date: Mon, 8 Mar 2010 09:08:39 -0600 Subject: [petsc-users] amg from hypre In-Reply-To: References: <87y6i3t9nu.fsf@59A2.org> Message-ID: On Mon, Mar 8, 2010 at 9:05 AM, wrote: > > > > On Mar 8 2010, Jed Brown wrote: > > On 06 Mar 2010 15:41:12 -0600, hxie at umn.edu wrote: >> >>> Hi, >>> >>> I am solving incompressible navier-stokes equations. The code implements >>> the Newton iteration and just uses the KSP solver for each iteration. It >>> takes 14 nonlinear iterations (~8min) to converge using the default solver >>> in PETSc. But it takes 6 nonlinear iterations (~30min) using boomeramg >>> from >>> hypre. >>> >> >> Can you confirm that these are the number of NONLINEAR iterations? >> > > Yes. > > > Changing the preconditioner should not change this if it behaving >> correctly. Are you solving incompressible NS in a coupled manner, >> sending the indefinite (momentum + continuity equations) to the >> preconditioner? >> > > It is a temperature driven flow. Temperature field is also included. We use > high order method (2nd order for velocity) to discretize the system and > solve it in a coupled manner. We do a small shifting for the preconditioner > to avoid zero diagonals. > > > In this case, BoomerAMG can be giving you completely >> the wrong answer. 
For example, on a different Navier-Stokes example, it >> produces very nice preconditioned residuals >> >> 0 KSP Residual norm 9.526419214320e+00 1 KSP Residual norm >> 9.406988599189e-02 2 KSP Residual norm 2.767040414824e-03 3 KSP Residual >> norm 1.649604812054e-03 4 KSP Residual norm 1.611023301144e-03 5 KSP >> Residual norm 7.073431543229e-04 6 KSP Residual norm 1.404066303578e-04 >> 7 KSP Residual norm 1.347821147393e-04 8 KSP Residual norm >> 7.713640141302e-05 9 KSP Residual norm 5.121198361232e-05 10 KSP Residual >> norm 4.790100151952e-05 11 KSP Residual norm 1.769376148248e-05 12 KSP >> Residual norm 1.671836687758e-05 13 KSP Residual norm 9.561298137614e-06 >> 14 KSP Residual norm 6.509746067580e-07 15 KSP Residual norm >> 5.863323408081e-07 16 KSP Residual norm 5.120192651612e-07 17 KSP Residual >> norm 3.403122131501e-07 18 KSP Residual norm 3.342529361191e-07 19 KSP >> Residual norm 9.178974981883e-08 >> but converges to completely the WRONG answer. This is because the >> preconditioner is singular, and the unpreconditioned residuals (using >> right-preconditioned GMRES) look like >> >> 0 KSP Residual norm 9.899494936612e+05 >> 1 KSP Residual norm 3.460941545811e-02 >> 2 KSP Residual norm 3.411670842883e-02 >> 3 KSP Residual norm 3.406431296292e-02 >> 4 KSP Residual norm 3.376198186394e-02 >> 5 KSP Residual norm 3.374434209905e-02 >> 6 KSP Residual norm 3.370086274150e-02 >> 7 KSP Residual norm 3.334190783058e-02 >> 8 KSP Residual norm 3.321057363881e-02 >> 9 KSP Residual norm 3.321055343672e-02 >> 10 KSP Residual norm 3.318392045928e-02 >> >> which is NO CONVERGENCE at all after the penalty boundary conditions >> were enforced. So CPU time aside, the nonlinear iteration count could >> occur because Newton is failing due to an incorrect search direction or >> a vanishing step size, but the answer could be completely wrong. 
>> >> You have to clear up subtleties like getting the wrong answer before >> looking any further at the amount of time the various preconditioners >> are taking. Always run with >> >> -snes_converged_reason -ksp_converged_reason >> >> and check the unpreconditioned residuals, especially when dealing with >> indefinite problems. >> >> Jed >> >> > The result looks correct. I will double check by computing the real > residual by b-Ax. For all the methods I set the same options "-ksp_max_it > 500 -ksp_gmres_restart 50". And each nonlinear iteration will take 500 gmres > iteration. Maybe the 500 V cycles takes much more time than the ilu0. Will > there be a big difference to implement the amg directly from hypre? Thanks. > This is the problem. The inner linear systems are just not really being solved. Have you looked at the convergence (-ksp_monitor)? You can get different nonlinear iteration counts this way. Matt > Bests, > Hui > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hxie at umn.edu Mon Mar 8 11:50:46 2010 From: hxie at umn.edu (hxie at umn.edu) Date: 08 Mar 2010 11:50:46 -0600 Subject: [petsc-users] amg from hypre In-Reply-To: References: Message-ID: I printed the real residuals by computing ||b-Ax||_2 in the last column: equation iter gm_its gmres -------------------------------------------------------------------------------- Linear solve converged due to CONVERGED_RTOL iterations 223 field 1 223 0.414E+00 Linear solve did not converge due to DIVERGED_ITS iterations 500 field 2 500 0.116E-05 Linear solve did not converge due to DIVERGED_ITS iterations 500 field 3 500 0.922E-08 Linear solve did not converge due to DIVERGED_ITS iterations 500 field 4 500 0.182E-08 Linear solve did not converge due to DIVERGED_ITS iterations 500 field 5 500 0.256E-09 Nothing wrong with the inner linear systems. I chose "gmres rtol" as 1.0d-6 and "gmres atol" as 1.d-10 for the preconditioned system. >This is the problem. The inner linear systems are just not really being >solved. Have you looked at >the convergence (-ksp_monitor)? You can get different nonlinear iteration >counts this way. > > Matt From jed at 59A2.org Mon Mar 8 12:02:51 2010 From: jed at 59A2.org (Jed Brown) Date: Mon, 08 Mar 2010 19:02:51 +0100 Subject: [petsc-users] amg from hypre In-Reply-To: References: <87y6i3t9nu.fsf@59A2.org> Message-ID: <877hpmbsj8.fsf@59A2.org> On 08 Mar 2010 09:05:43 -0600, hxie at umn.edu wrote: > The result looks correct. I will double check by computing the real > residual by b-Ax. Run with -ksp_type fgmres or -ksp_type lgmres -ksp_right_pc to work with unpreconditioned residuals, or run with -ksp_monitor_true_residual to see what they are. But for checking that the final solution is correct, you should be looking at the function norm in SNES. > For all the methods I set the same options "-ksp_max_it 500 > -ksp_gmres_restart 50". And each nonlinear iteration will take 500 > gmres iteration.
You should try a smaller problem size if possible, and converge the linear solve to high tolerance to find out how fast Newton will converge if you do a good job on the nonlinear solve. Note that if you just stop after some arbitrary number of iterations, then you increase the risk of computing a search direction that is not a descent direction. > Maybe the 500 V cycles takes much more time than the ilu0. Definitely. It would appear based on your lower nonlinear iteration counts that 500 V-cycles is doing a slightly better job than 500 ILU(0). > Will there be a big difference to implement the amg directly from > hypre? No, but there will be a difference in the time it takes you to do something else when it still sucks, which it will. Black box solvers for indefinite problems do not exist [1]; such problems really require you to design a preconditioner that respects this property if you want scalability. There are two schools of thought: you can use a domain decomposition scheme with subdomain and coarse problems carefully designed to be compatible with the indefiniteness of the problem, or you can use Schur-complement schemes to produce subproblems that are more amenable to conventional methods. The former is fairly discretization-dependent and generic software does not exist. The best thing I'm aware of is to use PCMG and PCASM, but it definitely requires that you understand the algorithms, and you will have to do some manual work. The latter is mostly independent of discretization and can be done with PCFIELDSPLIT, but usually requires some approximate commutator arguments to find a good preconditioner for the Schur complement. If you don't want to think about these issues, then the only truly reliable option is to use a direct solver, but that will only scale so far. Jed [1] Many would say that black box solvers don't exist, period, but they can get you a lot further on definite problems.
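Jed's warning about singular preconditioners can be illustrated with a toy calculation (plain Python with a made-up 2x2 system, not PETSc code): with left preconditioning the solver only monitors ||M r||, which can be tiny even when the true residual r = b - A x is large, whenever r lies in (or near) the null space of M.

```python
# Toy illustration (hypothetical 2x2 system, not PETSc code): a singular
# left preconditioner M can make the preconditioned residual norm tiny
# while the true residual b - A x is still large.

def matvec(M, v):
    """Dense matrix-vector product on plain lists."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def norm2(v):
    """Euclidean norm."""
    return sum(x * x for x in v) ** 0.5

A = [[2.0, 0.0],
     [0.0, 1.0]]
b = [2.0, 3.0]
x = [1.0, 0.0]      # wrong "solution": only the first unknown is right

Ax = matvec(A, x)
r = [b[i] - Ax[i] for i in range(2)]   # true residual b - A x

# Singular preconditioner: it projects out the second component, which is
# exactly the part of the residual that has not converged.
M = [[1.0, 0.0],
     [0.0, 0.0]]

print(norm2(r))              # true residual norm: 3.0 (not converged)
print(norm2(matvec(M, r)))   # preconditioned residual norm: 0.0 (looks "converged")
```

This is why monitoring the true residual (or using right preconditioning) is the safer diagnostic: it reports ||b - A x|| rather than a norm the preconditioner can silently annihilate.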
From jed at 59A2.org Mon Mar 8 12:05:26 2010 From: jed at 59A2.org (Jed Brown) Date: Mon, 08 Mar 2010 19:05:26 +0100 Subject: [petsc-users] amg from hypre In-Reply-To: References: Message-ID: <876356bsex.fsf@59A2.org> On 08 Mar 2010 11:50:46 -0600, hxie at umn.edu wrote: > > I printed the real residuals by computing ||b-Ax||_2 at the last column: > > equation iter gm_its gmres > -------------------------------------------------------------------------------- > Linear solve converged due to CONVERGED_RTOL iterations 223 > field 1 223 0.414E+00 ^^^^^^^^^ How is there "nothing wrong" with this? Jed From bsmith at mcs.anl.gov Mon Mar 8 12:05:43 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 8 Mar 2010 12:05:43 -0600 Subject: [petsc-users] DA_XYZGHOSTED In-Reply-To: <201003081312.49853.juhaj@iki.fi> References: <201003081312.49853.juhaj@iki.fi> Message-ID: Juha, You are correct. DA_XYZGHOSTED is only implemented for 1D arrays. It is straightforward, but requires careful understanding of the code, to make the changes for 2D and 3D. I'm sorry I cannot do it at the moment. If you, or someone else, could add this support, that would be great; we'd love to include the fixes in PETSc. Barry On Mar 8, 2010, at 7:12 AM, Juha Jäykkä wrote: > Hi! > > I was wondering what's the status of DA_XYZGHOSTED? It seems to be > implemented > for 1D DAs only. Putting the boundary data at inside (i.e. non- > ghosted part) > of the DA is not very nice since it necessitates separate handling > of the for- > loops on the ranks that happen to be on the physical boundary of the > lattice. > This makes the code unnecessarily complicated from users' point of > view. > Having the ghosts available at the boundaries as well would be the > ideal place > to put the boundary data into. I understand these ghost values are > never > updated (if running with no periodicity), so just setting them in the > beginning would be correct.
>
> Cheers,
> Juha
>
> --
> -----------------------------------------------
> | Juha Jäykkä, juhaj at iki.fi                |
> | http://www.maths.leeds.ac.uk/~juhaj         |
> -----------------------------------------------

From hxie at umn.edu Mon Mar 8 12:31:30 2010 From: hxie at umn.edu (hxie at umn.edu) Date: 08 Mar 2010 12:31:30 -0600 Subject: [petsc-users] amg from hypre In-Reply-To: <876356bsex.fsf@59A2.org> References: <876356bsex.fsf@59A2.org> Message-ID:

I use a left preconditioner, so the preconditioned residual should be less than 1e-6 ||b||_2 + 1e-10. How do I print the final preconditioned residual using the command line options? Or I will use -ksp_monitor to show all the residuals.

Hui

On Mar 8 2010, Jed Brown wrote:
>On 08 Mar 2010 11:50:46 -0600, hxie at umn.edu wrote:
>>
>> I printed the real residuals by computing ||b-Ax||_2 at the last column:
>>
>> equation iter gm_its gmres
>> --------------------------------------------------------------------------------
>> Linear solve converged due to CONVERGED_RTOL iterations 223
>> field 1 223 0.414E+00
>                ^^^^^^^^^
>
>How is there "nothing wrong" with this?
>
>Jed

From jed at 59A2.org Mon Mar 8 12:42:49 2010 From: jed at 59A2.org (Jed Brown) Date: Mon, 08 Mar 2010 19:42:49 +0100 Subject: [petsc-users] amg from hypre In-Reply-To: References: <876356bsex.fsf@59A2.org> Message-ID: <874okqbqom.fsf@59A2.org>

On 08 Mar 2010 12:31:30 -0600, hxie at umn.edu wrote:
> I use a left preconditioner, so the preconditioned residual should be
> less than 1e-6 ||b||_2 + 1e-10. How do I print the final preconditioned
> residual using the command line options? Or I will use -ksp_monitor to
> show all the residuals.
As I said in my last message, -ksp_monitor_true_residual

Jed

From christian.klettner at ucl.ac.uk Mon Mar 8 13:09:42 2010 From: christian.klettner at ucl.ac.uk (Christian Klettner) Date: Mon, 8 Mar 2010 19:09:42 -0000 Subject: [petsc-users] superlinear scale-up with hypre Message-ID:

Dear PETSc,
I am using a fractional step method, composed of three steps, to solve the Navier-Stokes equations. I have to solve a Poisson equation for pressure in Step 2, and I use the GMRES solver with Hypre's BoomerAMG for preconditioning. I have tested for strong scaling using a fixed problem size of 16 million degrees of freedom and varied the number of cores from 32 to 256. I have found superlinear speedup up until this number of cores. Is there a reason why BoomerAMG exhibits this kind of behaviour?
Best regards,
Christian

From bsmith at mcs.anl.gov Mon Mar 8 13:13:36 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 8 Mar 2010 13:13:36 -0600 Subject: [petsc-users] superlinear scale-up with hypre In-Reply-To: References: Message-ID: <93E86515-A5E3-4A8D-AA16-0E31D9094498@mcs.anl.gov>

Cannot really say without more information about what is taking time on 32 cores and 256 cores. If you run the 32-core and 256-core cases with -log_summary (using a --with-debugging=0 ./configure version of PETSc) we'll be able to see where the time is being spent, and then whether the speedup makes sense.

Barry

On Mar 8, 2010, at 1:09 PM, Christian Klettner wrote:

> Dear PETSc,
> I am using a fractional step method, composed of three steps, to solve
> the Navier-Stokes equations. I have to solve a Poisson equation for
> pressure in Step 2, and I use the GMRES solver with Hypre's BoomerAMG
> for preconditioning. I have tested for strong scaling using a fixed
> problem size of 16 million degrees of freedom and varied the number of
> cores from 32 to 256. I have found superlinear speedup up until this
> number of cores. Is there a reason why BoomerAMG exhibits this kind of
> behaviour?
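As an aside, a common source of superlinear strong scaling is that the per-core working set shrinks into cache as cores are added. A back-of-the-envelope check (the 100 bytes per degree of freedom below is only a guess; -log_summary gives real numbers):

```c
#include <assert.h>

/* Rough per-core working set in MiB for a fixed-size problem split
 * over `cores` ranks.  When this drops to the size of the shared
 * caches, memory traffic per core can fall faster than 1/P, which
 * shows up as superlinear speedup in strong-scaling studies. */
double per_core_mib(double dofs, double bytes_per_dof, int cores)
{
    return dofs * bytes_per_dof / cores / (1024.0 * 1024.0);
}
```

For 16 million dofs at a guessed 100 bytes/dof, each of 32 cores holds about 48 MiB while each of 256 cores holds about 6 MiB, close to shared-cache sizes on many machines, which is the kind of effect -log_summary would make visible in the memory-bandwidth-bound events.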
>
> Best regards,
> Christian

From hxie at umn.edu Mon Mar 8 13:53:09 2010 From: hxie at umn.edu (hxie at umn.edu) Date: 08 Mar 2010 13:53:09 -0600 Subject: [petsc-users] amg from hypre In-Reply-To: References: Message-ID:

When I use -ksp_monitor_true_residual, I get
-----
1 223 KSP preconditioned resid norm 7.587441767451e-05 true resid norm 1.919239379677e-07 ||Ae||/||Ax|| 4.636852053066e-07
2 500 KSP preconditioned resid norm 2.984438235355e-06 true resid norm 1.067989408349e-08 ||Ae||/||Ax|| 9.216680867037e-03
3 500 KSP preconditioned resid norm 4.928211668435e-07 true resid norm 2.134107616561e-09 ||Ae||/||Ax|| 1.995810570512e-01
4 500 KSP preconditioned resid norm 1.251658900301e-07 true resid norm 3.922169991045e-10 ||Ae||/||Ax|| 1.837856779354e-01
5 500 KSP preconditioned resid norm 4.086703850989e-08 true resid norm 1.521775833365e-10 ||Ae||/||Ax|| 3.879931422346e-01
------
It is strange to me that I get a different true residual when I compute it with the following code:

!--compute the norm of the true residual-----
      PetscScalar mone
      mone = -1.0d0  ! pass a PetscScalar; a bare -1.0 literal is single precision and can give a wrong result
      call MatMult(ptMat,ptSol,ptRes,pterr)
      call VecAXPY(ptRes,mone,ptRHS,pterr)
      call VecNorm(ptRes,NORM_2,rmax,pterr)

>Linear solve converged due to CONVERGED_RTOL iterations 223
>field 1 223 0.414E+00
>I printed the real residuals by computing ||b-Ax||_2 at the last column:
> equation iter gm_its gmres
> --------------------------------------------------------------------------------
>Linear solve converged due to CONVERGED_RTOL iterations 223
>field 1 223 0.414E+00
>Linear solve did not converge due to DIVERGED_ITS iterations 500
>field 2 500 0.116E-05
>Linear solve did not converge due to DIVERGED_ITS iterations 500
> field 3 500 0.922E-08
>Linear solve did not converge due to DIVERGED_ITS iterations 500
>field 4 500 0.182E-08
>Linear solve did not converge due to DIVERGED_ITS iterations 500
>field 5 500 0.256E-09

Nothing wrong with the inner linear systems. I chose "gmres rtol" as 1.0d-6 and "gmres atol" as 1.d-10 for the preconditioned system.
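For reference, PETSc's default KSP convergence test treats rtol and atol as alternatives rather than as a sum, and with left preconditioning it is applied to the preconditioned residual norm. A simplified sketch (not the actual PETSc source, which also checks a divergence tolerance):

```c
#include <assert.h>

/* Simplified version of the default KSP convergence test: stop when
 * rnorm <= max(rtol * rnorm0, atol).  With left preconditioning,
 * rnorm is ||P^{-1}(b - A x)||, not ||b - A x||, which is why the
 * true residual printed by -ksp_monitor_true_residual can look very
 * different from the monitored (preconditioned) one. */
int ksp_converged(double rnorm, double rnorm0, double rtol, double atol)
{
    double threshold = rtol * rnorm0;
    if (atol > threshold)
        threshold = atol;
    return rnorm <= threshold;
}
```

So with rtol = 1e-6 and atol = 1e-10, the solve stops once the preconditioned residual has dropped six orders of magnitude relative to its initial value, whichever of the two thresholds is looser.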
>This is the problem. The inner linear systems are just not really being
>solved. Have you looked at the convergence (-ksp_monitor)? You can get
>different nonlinear iteration counts this way.
>
> Matt

From xy2102 at columbia.edu Tue Mar 9 17:16:46 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Tue, 09 Mar 2010 18:16:46 -0500 Subject: [petsc-users] PetscSplitOwnerShip() Message-ID: <20100309181646.30w6ukjmkggk4kwo@cubmail.cc.columbia.edu>

Hi,

I ran np=4 for a small (-da_grid_x 6, -da_grid_y 5) problem, with PETSC_STENCIL_BOX and width=2, dof = 4.

When I use VecLoad() to load the binary file with the following calls:

PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName,FILE_MODE_READ,&viewer);
VecLoad(viewer,PETSC_NULL,&FIELD);
ierr = DAVecGetArray(da2_4,FIELD,&field);CHKERRQ(ierr);
ierr = DAVecRestoreArray(da2_4,FIELD,&field);CHKERRQ(ierr);
ierr = VecDestroy(FIELD);CHKERRQ(ierr);
ierr = DADestroy(da2_4);CHKERRQ(ierr);
ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr);

However, I got the error messages below:

[0]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c Vector local size 32 is not compatible with DA local sizes 36 100
[1]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c Vector local size 32 is not compatible with DA local sizes 36 100
[2]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c Vector local size 28 is not compatible with DA local sizes 24 80
[3]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c Vector local size 28 is not compatible with DA local sizes 24 80

Then I tracked it down to find where these local sizes 32, 32, 28, 28 come from.
It turns out that my da2_4 has the parameters at each processor:

     xs  xe  ys  ye   x   y  Xs  Xe  Ys  Ye   Nl  base nlocal Nlocal
p0:   0  12   0   3  12   3   0  20   0   5  100     0    100     36
p1:  12  24   0   3  12   3   4  24   0   5  100    36    100     36
p2:   0  12   3   5  12   2   0  20   1   5   80    72     80     24
p3:  12  24   3   5  12   2   4  24   1   5   80    96     80     24

and deep in the stack:

#0  PetscSplitOwnership (comm=-2080374782, n=0x8a061b4, N=0x8a061b8) at psplit.c:81
#1  0x08628384 in PetscLayoutSetUp (map=0x8a061b0) at pmap.c:140
#2  0x08618320 in VecCreate_MPI_Private (v=0x8a05c50, alloc=PETSC_TRUE, nghost=0, array=0x0) at pbvec.c:182
#3  0x08618ba7 in VecCreate_MPI (vv=0x8a05c50) at pbvec.c:232
#4  0x085f1554 in VecSetType (vec=0x8a05c50, method=0x885dd0b "mpi") at vecreg.c:54
#5  0x085ec4f0 in VecSetTypeFromOptions_Private (vec=0x8a05c50) at vector.c:1335
#6  0x085ec909 in VecSetFromOptions (vec=0x8a05c50) at vector.c:1370
#7  0x085d7a7e in VecLoad_Binary (viewer=0x89f70b0, itype=0x885ce3d "mpi", newvec=0xbfe148e4) at vecio.c:228
#8  0x085d70e4 in VecLoad (viewer=0x89f70b0, outtype=0x885ce3d "mpi", newvec=0xbfe148e4) at vecio.c:134
#9  0x0804f140 in FormInitialGuess_physical (dmmg=0x89a2880, X=0x89b34f0) at vecviewload_out.c:390
#10 0x08052ced in DMMGSolve (dmmg=0x89a2720) at damg.c:307
#11 0x0804d479 in main (argc=-1, argv=0xbfe14cd4) at vecviewload_out.c:186

it computes

*n = *N/size + ((*N % size) > rank);

where *N = 30; size = 4; rank = 0,1,2,3, which gives me *n = 32,32,28,28 for procs 0, 1, 2, 3, respectively.

What could be wrong to cause this mismatch between the local vector size and the DA local sizes (excluding or including ghost points)?

Thanks so much!
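The PetscSplitOwnership line in the trace spreads N entries as evenly as possible, with the first N % size ranks taking one extra entry; it is a flat 1D layout that knows nothing about the DA's box decomposition, which is why a vector created this way generally cannot match what DAVecGetArray() expects. The formula itself, checked directly (example values only):

```c
#include <assert.h>

/* Default local size chosen by PetscSplitOwnership: an even split of
 * N entries over `size` ranks, with the remainder handed out one per
 * rank to the lowest ranks.  A DA instead partitions by grid boxes
 * (times dof), so the two layouts agree only by coincidence. */
int split_ownership(int N, int size, int rank)
{
    return N / size + ((N % size) > rank);
}
```

For N = 30 and size = 4 this yields local sizes 8, 8, 7, 7, and the per-rank sizes always sum back to N; whatever N is in play here, the split is independent of the DA's 36/36/24/24 box layout, which is the mismatch in the error message.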
Rebecca From knepley at gmail.com Tue Mar 9 17:41:11 2010 From: knepley at gmail.com (Matthew Knepley) Date: Tue, 9 Mar 2010 17:41:11 -0600 Subject: [petsc-users] PetscSplitOwnerShip() In-Reply-To: <20100309181646.30w6ukjmkggk4kwo@cubmail.cc.columbia.edu> References: <20100309181646.30w6ukjmkggk4kwo@cubmail.cc.columbia.edu> Message-ID: If you do your own partitioning, you need to use VecLoadIntoVector() with a Vec you get from the DA. Matt On Tue, Mar 9, 2010 at 5:16 PM, (Rebecca) Xuefei YUAN wrote: > Hi, > > I ran np=4 for a small (-da_grid_x 6, -da_grid_y 5) problem, with > PETSC_STENCIL_BOX and width=2, dof = 4. > > When I use vecLoad() to load the binary file with the following set: > > PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName,FILE_MODE_READ,&viewer); > VecLoad(viewer,PETSC_NULL,&FIELD); > ierr = DAVecGetArray(da2_4,FIELD,&field);CHKERRQ(ierr); > > ierr = DAVecRestoreArray(da2_4,FIELD,&field);CHKERRQ(ierr); > ierr = VecDestroy(FIELD);CHKERRQ(ierr); > ierr = DADestroy(da2_4);CHKERRQ(ierr); > ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr); > > However, I got the error messages as below: > > [0]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c > Vector local size 32 is not compatible with DA local sizes 36 100 > > [1]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c > Vector local size 32 is not compatible with DA local sizes 36 100 > > [2]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c > Vector local size 28 is not compatible with DA local sizes 24 80 > > [3]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c > Vector local size 28 is not compatible with DA local sizes 24 80 > > Then I tracked down and tried to find how this 32,32,28,28 coming from. 
> > It turns out that my da2_4 has the parameters at each processor: > > > xs xe ys ye x y Xs Xe Ys Ye Nl base nlocal Nlocal > p0: 0 12 0 3 12 3 0 20 0 5 100 0 100 36 > p1: 12 24 0 3 12 3 4 24 0 5 100 36 100 36 > p2: 0 12 3 5 12 2 0 20 1 5 80 72 80 24 > p3: 12 24 3 5 12 2 4 24 1 5 80 96 80 24 > > and deep in > > #0 PetscSplitOwnership (comm=-2080374782, n=0x8a061b4, N=0x8a061b8) > at psplit.c:81 > #1 0x08628384 in PetscLayoutSetUp (map=0x8a061b0) at pmap.c:140 > #2 0x08618320 in VecCreate_MPI_Private (v=0x8a05c50, alloc=PETSC_TRUE, > nghost=0, array=0x0) at pbvec.c:182 > #3 0x08618ba7 in VecCreate_MPI (vv=0x8a05c50) at pbvec.c:232 > #4 0x085f1554 in VecSetType (vec=0x8a05c50, method=0x885dd0b "mpi") > at vecreg.c:54 > #5 0x085ec4f0 in VecSetTypeFromOptions_Private (vec=0x8a05c50) > at vector.c:1335 > #6 0x085ec909 in VecSetFromOptions (vec=0x8a05c50) at vector.c:1370 > #7 0x085d7a7e in VecLoad_Binary (viewer=0x89f70b0, itype=0x885ce3d "mpi", > newvec=0xbfe148e4) at vecio.c:228 > #8 0x085d70e4 in VecLoad (viewer=0x89f70b0, outtype=0x885ce3d "mpi", > newvec=0xbfe148e4) at vecio.c:134 > #9 0x0804f140 in FormInitialGuess_physical (dmmg=0x89a2880, X=0x89b34f0) > at vecviewload_out.c:390 > #10 0x08052ced in DMMGSolve (dmmg=0x89a2720) at damg.c:307 > #11 0x0804d479 in main (argc=-1, argv=0xbfe14cd4) at vecviewload_out.c:186 > > it says that > > *n = *N/size + ((*N % size) > rank); > > where *N = 30; size = 4; rank = 0,1,2,3, in such a case, this gives me > > *n = 32,32,28,28 for pro0,1,2,3, separately. > > Where could be wrong with this mismatch of the local vector size and da > local size(excluding ghost pts or including ghost pts)? > > Thanks so much! > > Rebecca > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From xy2102 at columbia.edu Tue Mar 9 18:52:09 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Tue, 09 Mar 2010 19:52:09 -0500 Subject: [petsc-users] PetscSplitOwnerShip() In-Reply-To: References: <20100309181646.30w6ukjmkggk4kwo@cubmail.cc.columbia.edu> Message-ID: <20100309195209.lefvqtddj448wk8w@cubmail.cc.columbia.edu> Dear Matt, Thanks for your reply. I did not partition by my own, and even if I use VecLoadIntoVector(), the same error shows up. Anything wrong with it? Cheers, Rebecca Quoting Matthew Knepley : > If you do your own partitioning, you need to use VecLoadIntoVector() with a > Vec you get from the DA. > > Matt > > On Tue, Mar 9, 2010 at 5:16 PM, (Rebecca) Xuefei YUAN > wrote: > >> Hi, >> >> I ran np=4 for a small (-da_grid_x 6, -da_grid_y 5) problem, with >> PETSC_STENCIL_BOX and width=2, dof = 4. >> >> When I use vecLoad() to load the binary file with the following set: >> >> PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName,FILE_MODE_READ,&viewer); >> VecLoad(viewer,PETSC_NULL,&FIELD); >> ierr = DAVecGetArray(da2_4,FIELD,&field);CHKERRQ(ierr); >> >> ierr = DAVecRestoreArray(da2_4,FIELD,&field);CHKERRQ(ierr); >> ierr = VecDestroy(FIELD);CHKERRQ(ierr); >> ierr = DADestroy(da2_4);CHKERRQ(ierr); >> ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr); >> >> However, I got the error messages as below: >> >> [0]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c >> Vector local size 32 is not compatible with DA local sizes 36 100 >> >> [1]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c >> Vector local size 32 is not compatible with DA local sizes 36 100 >> >> [2]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c >> Vector local size 28 is not compatible with DA local sizes 24 80 >> >> [3]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c >> Vector local size 28 is not compatible with DA local sizes 24 80 >> >> Then I tracked down and tried to find how this 32,32,28,28 coming 
from. >> >> It turns out that my da2_4 has the parameters at each processor: >> >> >> xs xe ys ye x y Xs Xe Ys Ye Nl base nlocal Nlocal >> p0: 0 12 0 3 12 3 0 20 0 5 100 0 100 36 >> p1: 12 24 0 3 12 3 4 24 0 5 100 36 100 36 >> p2: 0 12 3 5 12 2 0 20 1 5 80 72 80 24 >> p3: 12 24 3 5 12 2 4 24 1 5 80 96 80 24 >> >> and deep in >> >> #0 PetscSplitOwnership (comm=-2080374782, n=0x8a061b4, N=0x8a061b8) >> at psplit.c:81 >> #1 0x08628384 in PetscLayoutSetUp (map=0x8a061b0) at pmap.c:140 >> #2 0x08618320 in VecCreate_MPI_Private (v=0x8a05c50, alloc=PETSC_TRUE, >> nghost=0, array=0x0) at pbvec.c:182 >> #3 0x08618ba7 in VecCreate_MPI (vv=0x8a05c50) at pbvec.c:232 >> #4 0x085f1554 in VecSetType (vec=0x8a05c50, method=0x885dd0b "mpi") >> at vecreg.c:54 >> #5 0x085ec4f0 in VecSetTypeFromOptions_Private (vec=0x8a05c50) >> at vector.c:1335 >> #6 0x085ec909 in VecSetFromOptions (vec=0x8a05c50) at vector.c:1370 >> #7 0x085d7a7e in VecLoad_Binary (viewer=0x89f70b0, itype=0x885ce3d "mpi", >> newvec=0xbfe148e4) at vecio.c:228 >> #8 0x085d70e4 in VecLoad (viewer=0x89f70b0, outtype=0x885ce3d "mpi", >> newvec=0xbfe148e4) at vecio.c:134 >> #9 0x0804f140 in FormInitialGuess_physical (dmmg=0x89a2880, X=0x89b34f0) >> at vecviewload_out.c:390 >> #10 0x08052ced in DMMGSolve (dmmg=0x89a2720) at damg.c:307 >> #11 0x0804d479 in main (argc=-1, argv=0xbfe14cd4) at vecviewload_out.c:186 >> >> it says that >> >> *n = *N/size + ((*N % size) > rank); >> >> where *N = 30; size = 4; rank = 0,1,2,3, in such a case, this gives me >> >> *n = 32,32,28,28 for pro0,1,2,3, separately. >> >> Where could be wrong with this mismatch of the local vector size and da >> local size(excluding ghost pts or including ghost pts)? >> >> Thanks so much! >> >> Rebecca >> >> > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. 
> -- Norbert Wiener > -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From bsmith at mcs.anl.gov Tue Mar 9 18:56:25 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 9 Mar 2010 18:56:25 -0600 Subject: [petsc-users] PetscSplitOwnerShip() In-Reply-To: <20100309195209.lefvqtddj448wk8w@cubmail.cc.columbia.edu> References: <20100309181646.30w6ukjmkggk4kwo@cubmail.cc.columbia.edu> <20100309195209.lefvqtddj448wk8w@cubmail.cc.columbia.edu> Message-ID: <0F760E78-DD19-4EBE-9AD8-C519EF6D6FF8@mcs.anl.gov> On Mar 9, 2010, at 6:52 PM, (Rebecca) Xuefei YUAN wrote: > Dear Matt, > > Thanks for your reply. I did not partition by my own, and even if I > use VecLoadIntoVector(), the same error shows up. It cannot be the same error, because the trace below has a call to VecLoad() which you are not using. If you saved the vector to the binary file before with VecView() from a DA vector then you need to use VecLoadIntoVector() with the same type of vector from the same size DA. That is if DAGetGlobalVector() was used to create the vector that you saved to disk you need to use the same DAGetGlobalVector() to get a vector to read into. Barry > > Anything wrong with it? > > Cheers, > > Rebecca > > > > Quoting Matthew Knepley : > >> If you do your own partitioning, you need to use >> VecLoadIntoVector() with a >> Vec you get from the DA. >> >> Matt >> >> On Tue, Mar 9, 2010 at 5:16 PM, (Rebecca) Xuefei YUAN >> wrote: >> >>> Hi, >>> >>> I ran np=4 for a small (-da_grid_x 6, -da_grid_y 5) problem, with >>> PETSC_STENCIL_BOX and width=2, dof = 4. 
>>> >>> When I use vecLoad() to load the binary file with the following set: >>> >>> >>> PetscViewerBinaryOpen >>> (PETSC_COMM_WORLD,fileName,FILE_MODE_READ,&viewer); >>> VecLoad(viewer,PETSC_NULL,&FIELD); >>> ierr = DAVecGetArray(da2_4,FIELD,&field);CHKERRQ(ierr); >>> >>> ierr = DAVecRestoreArray(da2_4,FIELD,&field);CHKERRQ(ierr); >>> ierr = VecDestroy(FIELD);CHKERRQ(ierr); >>> ierr = DADestroy(da2_4);CHKERRQ(ierr); >>> ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr); >>> >>> However, I got the error messages as below: >>> >>> [0]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/ >>> dagetarray.c >>> Vector local size 32 is not compatible with DA local sizes 36 100 >>> >>> [1]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/ >>> dagetarray.c >>> Vector local size 32 is not compatible with DA local sizes 36 100 >>> >>> [2]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/ >>> dagetarray.c >>> Vector local size 28 is not compatible with DA local sizes 24 80 >>> >>> [3]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/ >>> dagetarray.c >>> Vector local size 28 is not compatible with DA local sizes 24 80 >>> >>> Then I tracked down and tried to find how this 32,32,28,28 coming >>> from. 
>>> >>> It turns out that my da2_4 has the parameters at each processor: >>> >>> >>> xs xe ys ye x y Xs Xe Ys Ye Nl base nlocal Nlocal >>> p0: 0 12 0 3 12 3 0 20 0 5 100 0 100 36 >>> p1: 12 24 0 3 12 3 4 24 0 5 100 36 100 36 >>> p2: 0 12 3 5 12 2 0 20 1 5 80 72 80 24 >>> p3: 12 24 3 5 12 2 4 24 1 5 80 96 80 24 >>> >>> and deep in >>> >>> #0 PetscSplitOwnership (comm=-2080374782, n=0x8a061b4, N=0x8a061b8) >>> at psplit.c:81 >>> #1 0x08628384 in PetscLayoutSetUp (map=0x8a061b0) at pmap.c:140 >>> #2 0x08618320 in VecCreate_MPI_Private (v=0x8a05c50, >>> alloc=PETSC_TRUE, >>> nghost=0, array=0x0) at pbvec.c:182 >>> #3 0x08618ba7 in VecCreate_MPI (vv=0x8a05c50) at pbvec.c:232 >>> #4 0x085f1554 in VecSetType (vec=0x8a05c50, method=0x885dd0b "mpi") >>> at vecreg.c:54 >>> #5 0x085ec4f0 in VecSetTypeFromOptions_Private (vec=0x8a05c50) >>> at vector.c:1335 >>> #6 0x085ec909 in VecSetFromOptions (vec=0x8a05c50) at vector.c:1370 >>> #7 0x085d7a7e in VecLoad_Binary (viewer=0x89f70b0, >>> itype=0x885ce3d "mpi", >>> newvec=0xbfe148e4) at vecio.c:228 >>> #8 0x085d70e4 in VecLoad (viewer=0x89f70b0, outtype=0x885ce3d >>> "mpi", >>> newvec=0xbfe148e4) at vecio.c:134 >>> #9 0x0804f140 in FormInitialGuess_physical (dmmg=0x89a2880, >>> X=0x89b34f0) >>> at vecviewload_out.c:390 >>> #10 0x08052ced in DMMGSolve (dmmg=0x89a2720) at damg.c:307 >>> #11 0x0804d479 in main (argc=-1, argv=0xbfe14cd4) at >>> vecviewload_out.c:186 >>> >>> it says that >>> >>> *n = *N/size + ((*N % size) > rank); >>> >>> where *N = 30; size = 4; rank = 0,1,2,3, in such a case, this >>> gives me >>> >>> *n = 32,32,28,28 for pro0,1,2,3, separately. >>> >>> Where could be wrong with this mismatch of the local vector size >>> and da >>> local size(excluding ghost pts or including ghost pts)? >>> >>> Thanks so much! >>> >>> Rebecca >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments >> is infinitely more interesting than any results to which their >> experiments >> lead. 
>> -- Norbert Wiener >> > > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > From xy2102 at columbia.edu Wed Mar 10 09:50:26 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Wed, 10 Mar 2010 10:50:26 -0500 Subject: [petsc-users] PetscSplitOwnerShip() In-Reply-To: <0F760E78-DD19-4EBE-9AD8-C519EF6D6FF8@mcs.anl.gov> References: <20100309181646.30w6ukjmkggk4kwo@cubmail.cc.columbia.edu> <20100309195209.lefvqtddj448wk8w@cubmail.cc.columbia.edu> <0F760E78-DD19-4EBE-9AD8-C519EF6D6FF8@mcs.anl.gov> Message-ID: <20100310105026.uvlte9e2sk48ssgs@cubmail.cc.columbia.edu> Dear Barry and Matt, Thanks very much for the reply. I am a little bit confused about binary write and read as an input and output method. For example, I have code1 and code2, where code1 will take code2's output as an input. In code2, the following routine is used to save the solution as a binary file: #undef __FUNCT__ #define __FUNCT__ "DumpSolutionToMatlab" PetscErrorCode DumpSolutionToMatlab (DMMG*dmmg, char * fn) { DALocalInfo info; PetscViewer viewer; PetscFunctionBegin; ierr = DAGetLocalInfo (DMMGGetDA(dmmg),&info);CHKERRQ(ierr); sprintf(fileName, "ff_atwqt2ff_tx%i_ty%i_x%i_y%i_nl%i_gt%i_ot%i_s%i.dat",info.mx,info.my,parameters->mxfield,parameters->myfield,parameters->numberOfLevels,(PetscInt)(parameters->currentTime*10000),parameters->timeAccuracyOrder,(PetscInt)(parameters->smoothFactor*100)); ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName,FILE_MODE_WRITE,&viewer);CHKERRQ(ierr); ierr = PetscObjectSetName((PetscObject)dmmg[DMMGGetLevels(dmmg)-1]->x,"ff"); ierr = VecView(dmmg[DMMGGetLevels(dmmg)-1]->x,viewer);CHKERRQ(ierr); ierr = PetscViewerDestroy (viewer); CHKERRQ (ierr); PetscFunctionReturn(0); } In code1, the following routine is used to read the solution to a Vec named FIELD: #undef __FUNCT__ #define __FUNCT__ "FormInitialGuess_physical" PetscErrorCode 
FormInitialGuess_physical(DMMG dmmg, Vec X) { Vec FIELD; Field **field; DA da,da2_4; PetscViewer viewer; DALocalInfo info,info2; PetscFunctionBegin; ierr = DMCompositeGetEntries((DMComposite)(dmmg->dm),&da,PETSC_IGNORE);CHKERRQ(ierr); ierr = DAGetLocalInfo(da,&info);CHKERRQ(ierr); ierr = DACreate2d(PETSC_COMM_WORLD,DA_NONPERIODIC,DA_STENCIL_BOX, (info.mx-1-parameters->abandonNumber), (info.my-1-parameters->abandonNumber), PETSC_DECIDE, PETSC_DECIDE, 4, 2, PETSC_NULL, PETSC_NULL, &da2_4);CHKERRQ(ierr); ierr = DAGetLocalInfo(da2_4,&info2);CHKERRQ(ierr); sprintf(fileName, "ff_twqt2ff_tx%i_ty%i_x%i_y%i_nl%i_gt%i_ot%i_s%i.dat",info2.mx,info2.my,parameters->mxgrid-1,parameters->mygrid-1,parameters->numberOfLevels,(PetscInt)(parameters->timeToGenerateGrid*10000),parameters->timeAccuracyOrder,(PetscInt)(parameters->smoothFactor*100)); PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName,FILE_MODE_READ,&viewer); VecLoad(viewer,PETSC_NULL,&FIELD); VecLoadIntoVector(viewer,FIELD); ierr = DAVecGetArray(da2_4,FIELD,&field);CHKERRQ(ierr); ierr = DAVecRestoreArray(da2_4,FIELD,&field);CHKERRQ(ierr); ierr = VecDestroy(FIELD);CHKERRQ(ierr); ierr = DADestroy(da2_4);CHKERRQ(ierr); ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr); PetscFunctionReturn(0); } When I call VecLoad(viewer,PETSC_NULL,&FIELD); VecLoadIntoVector(viewer,FIELD); there is an error in proc0: [0]PETSC ERROR: PetscBinaryRead() line 251 in src/sys/fileio/sysio.c Read past end of file In gdb, it shows that Program received signal SIGABRT, Aborted. 
[Switching to Thread 0xb7c7c6b0 (LWP 598)] 0xb7f4c410 in __kernel_vsyscall () (gdb) where #0 0xb7f4c410 in __kernel_vsyscall () #1 0xb7ccc085 in raise () from /lib/tls/i686/cmov/libc.so.6 #2 0xb7ccda01 in abort () from /lib/tls/i686/cmov/libc.so.6 #3 0x08733fe9 in PetscAbortErrorHandler (line=251, fun=0x886a9c9 "PetscBinaryRead", file=0x886a942 "sysio.c", dir=0x886a94a "src/sys/fileio/", n=66, p=1, mess=0xbffdbe64 "Read past end of file", ctx=0x0) at errabort.c:62 #4 0x086a8f5a in PetscError (line=251, func=0x886a9c9 "PetscBinaryRead", file=0x886a942 "sysio.c", dir=0x886a94a "src/sys/fileio/", n=66, p=1, mess=0x886a9e6 "Read past end of file") at err.c:482 #5 0x086b73a2 in PetscBinaryRead (fd=9, p=0xbffdc764, n=1, type=PETSC_INT) at sysio.c:251 #6 0x085d8dee in VecLoadIntoVector_Binary (viewer=0x89f2870, vec=0x89f4a00) at vecio.c:445 #7 0x085d9e76 in VecLoadIntoVector_Default (viewer=0x89f2870, vec=0x89f4a00) at vecio.c:514 #8 0x085e9d2c in VecLoadIntoVector (viewer=0x89f2870, vec=0x89f4a00) at vector.c:1031 #9 0x0804f158 in FormInitialGuess_physical (dmmg=0x8999b30, X=0x898b3c0) at vecviewload_out.c:391 #10 0x08052d05 in DMMGSolve (dmmg=0x89999e0) at damg.c:307 #11 0x0804d479 in main (argc=Cannot access memory at address 0x256 ) at vecviewload_out.c:186 When I call VecLoad(viewer,PETSC_NULL,&FIELD); // VecLoadIntoVector(viewer,FIELD); [0]PETSC ERROR: [1]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c Vector local size 32 is not compatible with DA local sizes 36 100 [2]PETSC ERROR: [3]PETSC ERROR: DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c Vector local size 28 is not compatible with DA local sizes 24 80 DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c Vector local size 32 is not compatible with DA local sizes 36 100 DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c Vector local size 28 is not compatible with DA local sizes 24 80 in gdb: Program received signal SIGABRT, Aborted. 
[Switching to Thread 0xb7c4f6b0 (LWP 840)] 0xb7f1f410 in __kernel_vsyscall () (gdb) where #0 0xb7f1f410 in __kernel_vsyscall () #1 0xb7c9f085 in raise () from /lib/tls/i686/cmov/libc.so.6 #2 0xb7ca0a01 in abort () from /lib/tls/i686/cmov/libc.so.6 #3 0x08733fd1 in PetscAbortErrorHandler (line=53, fun=0x883be60 "DAVecGetArray", file=0x883be6e "dagetarray.c", dir=0x883be7b "src/dm/da/src/", n=75, p=1, mess=0xbffcbd54 "Vector local size 32 is not compatible with DA local sizes 36 100\n", ctx=0x0) at errabort.c:62 #4 0x086a8f42 in PetscError (line=53, func=0x883be60 "DAVecGetArray", file=0x883be6e "dagetarray.c", dir=0x883be7b "src/dm/da/src/", n=75, p=1, mess=0x883be8c "Vector local size %D is not compatible with DA local sizes %D %D\n") at err.c:482 #5 0x0820d20a in DAVecGetArray (da=0x89f1870, vec=0x89f4770, array=0xbffcc770) at dagetarray.c:53 #6 0x0804f162 in FormInitialGuess_physical (dmmg=0x898a560, X=0x899b070) at vecviewload_out.c:392 #7 0x08052ced in DMMGSolve (dmmg=0x898a400) at damg.c:307 #8 0x0804d479 in main (argc=Cannot access memory at address 0x348 ) at vecviewload_out.c:186 If I call // VecLoad(viewer,PETSC_NULL,&FIELD); VecLoadIntoVector(viewer,FIELD); [1]PETSC ERROR: VecLoadIntoVector() line 1016 in src/vec/vec/interface/vector.c Null Object: Parameter # 2 [0]PETSC ERROR: VecLoadIntoVector() line 1016 in src/vec/vec/interface/vector.c Null Object: Parameter # 2 [2]PETSC ERROR: VecLoadIntoVector() line 1016 in src/vec/vec/interface/vector.c Null Object: Parameter # 2 [3]PETSC ERROR: VecLoadIntoVector() line 1016 in src/vec/vec/interface/vector.c Null Object: Parameter # 2 Program received signal SIGABRT, Aborted. 
[Switching to Thread 0xb7c256b0 (LWP 1111)]
0xb7ef5410 in __kernel_vsyscall ()
(gdb) where
#0  0xb7ef5410 in __kernel_vsyscall ()
#1  0xb7c75085 in raise () from /lib/tls/i686/cmov/libc.so.6
#2  0xb7c76a01 in abort () from /lib/tls/i686/cmov/libc.so.6
#3  0x08733fc9 in PetscAbortErrorHandler (line=1016, fun=0x885dc12 "VecLoadIntoVector", file=0x885d6b0 "vector.c", dir=0x885d6b9 "src/vec/vec/interface/", n=85, p=1, mess=0xbfc7f9d4 "Null Object: Parameter # 2", ctx=0x0) at errabort.c:62
#4  0x086a8f3a in PetscError (line=1016, func=0x885dc12 "VecLoadIntoVector", file=0x885d6b0 "vector.c", dir=0x885d6b9 "src/vec/vec/interface/", n=85, p=1, mess=0x885d6ed "Null Object: Parameter # %d") at err.c:482
#5  0x085e985a in VecLoadIntoVector (viewer=0x8a08d10, vec=0x0) at vector.c:1016
#6  0x0804f138 in FormInitialGuess_physical (dmmg=0x89a2880, X=0x89b34f0) at vecviewload_out.c:391
#7  0x08052ce5 in DMMGSolve (dmmg=0x89a2720) at damg.c:307
#8  0x0804d479 in main (argc=Cannot access memory at address 0x457 ) at vecviewload_out.c:186

So I am not sure how this binary write and read is supposed to work. Did I miss something? Thanks a lot!

Rebecca

From knepley at gmail.com Wed Mar 10 10:28:13 2010 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 10 Mar 2010 10:28:13 -0600 Subject: [petsc-users] PetscSplitOwnerShip() In-Reply-To: <20100310105026.uvlte9e2sk48ssgs@cubmail.cc.columbia.edu> References: <20100309181646.30w6ukjmkggk4kwo@cubmail.cc.columbia.edu> <20100309195209.lefvqtddj448wk8w@cubmail.cc.columbia.edu> <0F760E78-DD19-4EBE-9AD8-C519EF6D6FF8@mcs.anl.gov> <20100310105026.uvlte9e2sk48ssgs@cubmail.cc.columbia.edu> Message-ID:

Why are you loading the vector twice? Shouldn't you just use VecLoadIntoVector()?

   Matt

On Wed, Mar 10, 2010 at 9:50 AM, (Rebecca) Xuefei YUAN wrote:

> Dear Barry and Matt,
>
> Thanks very much for the reply.
>
> I am a little bit confused about binary write and read as an input and
> output method.
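Putting Matt's and Barry's advice together: obtain the vector from the DA first, then load into it, rather than letting VecLoad() create a fresh vector with the default split layout. A pseudocode sketch using the PETSc-3.0-era calls already quoted in this thread (error checking abbreviated; per Barry, DAGetGlobalVector() would work equally well):

```
/* get a vector whose parallel layout matches the DA, then fill it */
DACreateGlobalVector(da2_4, &FIELD);
PetscViewerBinaryOpen(PETSC_COMM_WORLD, fileName, FILE_MODE_READ, &viewer);
VecLoadIntoVector(viewer, FIELD);   /* not VecLoad(), which picks its own layout */
DAVecGetArray(da2_4, FIELD, &field);
/* ... use field ... */
DAVecRestoreArray(da2_4, FIELD, &field);
```

The point is that the vector's layout must come from the DA before the file is read; calling VecLoad() first and then VecLoadIntoVector() on the same viewer reads the file twice and works on a vector with the wrong (PetscSplitOwnership) layout.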
> > For example, I have code1 and code2, where code1 will take code2's output > as an input. > > In code2, the following routine is used to save the solution as a binary > file: > > > #undef __FUNCT__ > #define __FUNCT__ "DumpSolutionToMatlab" > PetscErrorCode DumpSolutionToMatlab (DMMG*dmmg, char * fn) > { > DALocalInfo info; > PetscViewer viewer; > PetscFunctionBegin; > > ierr = DAGetLocalInfo (DMMGGetDA(dmmg),&info);CHKERRQ(ierr); > sprintf(fileName, > "ff_atwqt2ff_tx%i_ty%i_x%i_y%i_nl%i_gt%i_ot%i_s%i.dat",info.mx > ,info.my,parameters->mxfield,parameters->myfield,parameters->numberOfLevels,(PetscInt)(parameters->currentTime*10000),parameters->timeAccuracyOrder,(PetscInt)(parameters->smoothFactor*100)); > ierr = > PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName,FILE_MODE_WRITE,&viewer);CHKERRQ(ierr); > ierr = > PetscObjectSetName((PetscObject)dmmg[DMMGGetLevels(dmmg)-1]->x,"ff"); > ierr = VecView(dmmg[DMMGGetLevels(dmmg)-1]->x,viewer);CHKERRQ(ierr); > ierr = PetscViewerDestroy (viewer); CHKERRQ (ierr); > > PetscFunctionReturn(0); > } > > > In code1, the following routine is used to read the solution to a Vec named > FIELD: > > #undef __FUNCT__ > #define __FUNCT__ "FormInitialGuess_physical" > PetscErrorCode FormInitialGuess_physical(DMMG dmmg, Vec X) > { > Vec FIELD; > Field **field; > DA da,da2_4; > PetscViewer viewer; > DALocalInfo info,info2; > > PetscFunctionBegin; > > ierr = > DMCompositeGetEntries((DMComposite)(dmmg->dm),&da,PETSC_IGNORE);CHKERRQ(ierr); > ierr = DAGetLocalInfo(da,&info);CHKERRQ(ierr); > ierr = DACreate2d(PETSC_COMM_WORLD,DA_NONPERIODIC,DA_STENCIL_BOX, > (info.mx-1-parameters->abandonNumber), > (info.my-1-parameters->abandonNumber), PETSC_DECIDE, PETSC_DECIDE, 4, 2, > PETSC_NULL, PETSC_NULL, &da2_4);CHKERRQ(ierr); > ierr = DAGetLocalInfo(da2_4,&info2);CHKERRQ(ierr); > sprintf(fileName, "ff_twqt2ff_tx%i_ty%i_x%i_y%i_nl%i_gt%i_ot%i_s%i.dat", > info2.mx > 
,info2.my,parameters->mxgrid-1,parameters->mygrid-1,parameters->numberOfLevels,(PetscInt)(parameters->timeToGenerateGrid*10000),parameters->timeAccuracyOrder,(PetscInt)(parameters->smoothFactor*100)); > > PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName,FILE_MODE_READ,&viewer); > VecLoad(viewer,PETSC_NULL,&FIELD); > VecLoadIntoVector(viewer,FIELD); > ierr = DAVecGetArray(da2_4,FIELD,&field);CHKERRQ(ierr); > > ierr = DAVecRestoreArray(da2_4,FIELD,&field);CHKERRQ(ierr); > ierr = VecDestroy(FIELD);CHKERRQ(ierr); > ierr = DADestroy(da2_4);CHKERRQ(ierr); > ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr); > > PetscFunctionReturn(0); > } > > When I call > VecLoad(viewer,PETSC_NULL,&FIELD); > VecLoadIntoVector(viewer,FIELD); > > there is an error in proc0: > [0]PETSC ERROR: PetscBinaryRead() line 251 in src/sys/fileio/sysio.c Read > past end of file > > In gdb, it shows that > > Program received signal SIGABRT, Aborted. > [Switching to Thread 0xb7c7c6b0 (LWP 598)] > 0xb7f4c410 in __kernel_vsyscall () > (gdb) where > #0 0xb7f4c410 in __kernel_vsyscall () > #1 0xb7ccc085 in raise () from /lib/tls/i686/cmov/libc.so.6 > #2 0xb7ccda01 in abort () from /lib/tls/i686/cmov/libc.so.6 > #3 0x08733fe9 in PetscAbortErrorHandler (line=251, > fun=0x886a9c9 "PetscBinaryRead", file=0x886a942 "sysio.c", > dir=0x886a94a "src/sys/fileio/", n=66, p=1, > mess=0xbffdbe64 "Read past end of file", ctx=0x0) at errabort.c:62 > #4 0x086a8f5a in PetscError (line=251, func=0x886a9c9 "PetscBinaryRead", > file=0x886a942 "sysio.c", dir=0x886a94a "src/sys/fileio/", n=66, p=1, > mess=0x886a9e6 "Read past end of file") at err.c:482 > #5 0x086b73a2 in PetscBinaryRead (fd=9, p=0xbffdc764, n=1, type=PETSC_INT) > at sysio.c:251 > #6 0x085d8dee in VecLoadIntoVector_Binary (viewer=0x89f2870, > vec=0x89f4a00) > at vecio.c:445 > #7 0x085d9e76 in VecLoadIntoVector_Default (viewer=0x89f2870, > vec=0x89f4a00) > at vecio.c:514 > #8 0x085e9d2c in VecLoadIntoVector (viewer=0x89f2870, vec=0x89f4a00) > at vector.c:1031 > #9 
0x0804f158 in FormInitialGuess_physical (dmmg=0x8999b30, X=0x898b3c0) > at vecviewload_out.c:391 > #10 0x08052d05 in DMMGSolve (dmmg=0x89999e0) at damg.c:307 > #11 0x0804d479 in main (argc=Cannot access memory at address 0x256 > ) at vecviewload_out.c:186 > > When I call > VecLoad(viewer,PETSC_NULL,&FIELD); > // VecLoadIntoVector(viewer,FIELD); > [0]PETSC ERROR: [1]PETSC ERROR: DAVecGetArray() line 53 in > src/dm/da/src/dagetarray.c Vector local size 32 is not compatible with DA > local sizes 36 100 > > [2]PETSC ERROR: [3]PETSC ERROR: DAVecGetArray() line 53 in > src/dm/da/src/dagetarray.c Vector local size 28 is not compatible with DA > local sizes 24 80 > > DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c Vector local size 32 > is not compatible with DA local sizes 36 100 > > DAVecGetArray() line 53 in src/dm/da/src/dagetarray.c Vector local size 28 > is not compatible with DA local sizes 24 80 > > in gdb: > > Program received signal SIGABRT, Aborted. > [Switching to Thread 0xb7c4f6b0 (LWP 840)] > 0xb7f1f410 in __kernel_vsyscall () > (gdb) where > #0 0xb7f1f410 in __kernel_vsyscall () > #1 0xb7c9f085 in raise () from /lib/tls/i686/cmov/libc.so.6 > #2 0xb7ca0a01 in abort () from /lib/tls/i686/cmov/libc.so.6 > #3 0x08733fd1 in PetscAbortErrorHandler (line=53, > fun=0x883be60 "DAVecGetArray", file=0x883be6e "dagetarray.c", > dir=0x883be7b "src/dm/da/src/", n=75, p=1, > mess=0xbffcbd54 "Vector local size 32 is not compatible with DA local > sizes 36 100\n", ctx=0x0) at errabort.c:62 > #4 0x086a8f42 in PetscError (line=53, func=0x883be60 "DAVecGetArray", > file=0x883be6e "dagetarray.c", dir=0x883be7b "src/dm/da/src/", n=75, > p=1, > mess=0x883be8c "Vector local size %D is not compatible with DA local > sizes %D %D\n") at err.c:482 > #5 0x0820d20a in DAVecGetArray (da=0x89f1870, vec=0x89f4770, > array=0xbffcc770) > at dagetarray.c:53 > #6 0x0804f162 in FormInitialGuess_physical (dmmg=0x898a560, X=0x899b070) > at vecviewload_out.c:392 > #7 0x08052ced in DMMGSolve 
(dmmg=0x898a400) at damg.c:307 > #8 0x0804d479 in main (argc=Cannot access memory at address 0x348 > ) at vecviewload_out.c:186 > > If I call > // VecLoad(viewer,PETSC_NULL,&FIELD); > VecLoadIntoVector(viewer,FIELD); > [1]PETSC ERROR: VecLoadIntoVector() line 1016 in > src/vec/vec/interface/vector.c Null Object: Parameter # 2 > [0]PETSC ERROR: VecLoadIntoVector() line 1016 in > src/vec/vec/interface/vector.c Null Object: Parameter # 2 > [2]PETSC ERROR: VecLoadIntoVector() line 1016 in > src/vec/vec/interface/vector.c Null Object: Parameter # 2 > [3]PETSC ERROR: VecLoadIntoVector() line 1016 in > src/vec/vec/interface/vector.c Null Object: Parameter # 2 > Program received signal SIGABRT, Aborted. > [Switching to Thread 0xb7c256b0 (LWP 1111)] > 0xb7ef5410 in __kernel_vsyscall () > (gdb) where > #0 0xb7ef5410 in __kernel_vsyscall () > #1 0xb7c75085 in raise () from /lib/tls/i686/cmov/libc.so.6 > #2 0xb7c76a01 in abort () from /lib/tls/i686/cmov/libc.so.6 > #3 0x08733fc9 in PetscAbortErrorHandler (line=1016, > fun=0x885dc12 "VecLoadIntoVector", file=0x885d6b0 "vector.c", > dir=0x885d6b9 "src/vec/vec/interface/", n=85, p=1, > mess=0xbfc7f9d4 "Null Object: Parameter # 2", ctx=0x0) at errabort.c:62 > #4 0x086a8f3a in PetscError (line=1016, func=0x885dc12 > "VecLoadIntoVector", > file=0x885d6b0 "vector.c", dir=0x885d6b9 "src/vec/vec/interface/", n=85, > p=1, mess=0x885d6ed "Null Object: Parameter # %d") at err.c:482 > #5 0x085e985a in VecLoadIntoVector (viewer=0x8a08d10, vec=0x0) > at vector.c:1016 > #6 0x0804f138 in FormInitialGuess_physical (dmmg=0x89a2880, X=0x89b34f0) > at vecviewload_out.c:391 > #7 0x08052ce5 in DMMGSolve (dmmg=0x89a2720) at damg.c:307 > #8 0x0804d479 in main (argc=Cannot access memory at address 0x457 > ) at vecviewload_out.c:186 > > > So I am not sure how this binary write and read working? > > Did I miss sth? > > Thanks a lot! 
>
> Rebecca
>

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From xy2102 at columbia.edu  Wed Mar 10 10:43:23 2010
From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN)
Date: Wed, 10 Mar 2010 11:43:23 -0500
Subject: [petsc-users] How subdomain talks to neighbors?
Message-ID: <20100310114323.9261vm3eo0gsc48o@cubmail.cc.columbia.edu>

Dear all,

I have a question about how subdomains "talk" to neighbor subdomains.

For example, I have a 6X5 mesh, where the solution at each mesh point f[j][i] is given by

for (j = jFirst; j <= jLast; j++){
    for (i = iFirst; i <= iLast; i++){
        f[j][i] = i+j*info.mx;
    }
}

i.e.,

 0  1  2  3  4  5
 6  7  8  9 10 11
12 13 14 15 16 17
18 19 20 21 22 23
24 25 26 27 28 29

When I read this solution (binary file) as an input to another set of code with np=2, the array gives me

rank=0: xs=0,x=3,ys=0,y=5

 0  1  2  3  4  5
 3  4  5  6  7  8
 6  7  8  9 10 11
 9 10 11 12 13 14
12 13 14  X  X  X

rank=1: xs=3,x=3,ys=0,y=5

 X  X  X 15 16 17
15 16 17 18 19 20
18 19 20 21 22 23
21 22 23 24 25 26
24 25 26 27 28 29

or np=3 gives

rank=0: xs=0,x=2,ys=0,y=5

 0  1  2  3  4  5
 2  3  4  5  6  7
 4  5  6  7  8  9
 6  7  8  9  X  X
 8  9  X  X  X  X

rank=1: xs=2,x=2,ys=0,y=5

 X  X 10 11 12 13
10 11 12 13 14 15
12 13 14 15 16 17
14 15 16 17 18 19
16 17 18 19  X  X

rank=2: xs=4,x=2,ys=0,y=5

 X  X  X  X 20 21
 X  X 20 21 22 23
20 21 22 23 24 25
22 23 24 25 26 27
24 25 26 27 28 29

where X means that location is not visited. Still I do not know how the subdomains talk to each other. Any references on how to understand the communication between subdomains?

Thanks a lot!
Rebecca

From bsmith at mcs.anl.gov  Wed Mar 10 11:32:24 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 10 Mar 2010 11:32:24 -0600
Subject: [petsc-users] PetscSplitOwnerShip()
In-Reply-To: <20100310105026.uvlte9e2sk48ssgs@cubmail.cc.columbia.edu>
References: <20100309181646.30w6ukjmkggk4kwo@cubmail.cc.columbia.edu> <20100309195209.lefvqtddj448wk8w@cubmail.cc.columbia.edu> <0F760E78-DD19-4EBE-9AD8-C519EF6D6FF8@mcs.anl.gov> <20100310105026.uvlte9e2sk48ssgs@cubmail.cc.columbia.edu>
Message-ID: <84F8A3E6-1D06-4C9E-ABB4-1AEE36DE27A2@mcs.anl.gov>

VecLoadIntoVector() takes as INPUT a vector like dmmg[DMMGGetLevels(dmmg)-1]->x, that is, a vector that comes from the finest level of the DMMG. Here you are passing a null into VecLoadIntoVector(); in other words, you are not passing a vector at all into the routine. This cannot possibly work.

Your FormInitialGuess_physical(DMMG dmmg, Vec X) takes a DMMG as input, and a vector X. Likely this vector X is what you should pass into VecLoadIntoVector().

   Barry

On Mar 10, 2010, at 9:50 AM, (Rebecca) Xuefei YUAN wrote:

> #5 0x085e985a in VecLoadIntoVector (viewer=0x8a08d10, vec=0x0)
> at vector.c:1016

From bsmith at mcs.anl.gov  Wed Mar 10 11:36:31 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 10 Mar 2010 11:36:31 -0600
Subject: [petsc-users] How subdomain talks to neighbors?
In-Reply-To: <20100310114323.9261vm3eo0gsc48o@cubmail.cc.columbia.edu>
References: <20100310114323.9261vm3eo0gsc48o@cubmail.cc.columbia.edu>
Message-ID: 

On Mar 10, 2010, at 10:43 AM, (Rebecca) Xuefei YUAN wrote:

> Dear all,
>
> I have a question about how subdomains "talk" to neighbor subdomains.
>
> For example, I have a 6X5 mesh, the solution at each mesh f[j][i] is
> given
>
> for (j = jFirst; j <= jLast; j++){
>     for (i = iFirst; i <= iLast; i++){
>         f[j][i] = i+j*info.mx;
>     }
> }
>
> i.e.,
>
>  0  1  2  3  4  5
>  6  7  8  9 10 11
> 12 13 14 15 16 17
> 18 19 20 21 22 23
> 24 25 26 27 28 29
>
> When I read this solution (binary file) as an input of another set
> of code with np=2, the array gives me
>
> rank=0: xs=0,x=3,ys=0,y=5
>
>  0  1  2  3  4  5
>  3  4  5  6  7  8
>  6  7  8  9 10 11
>  9 10 11 12 13 14
> 12 13 14  X  X  X

This process has 3*5 = 15 locations in its part of the global array. All the entries are filled (the final entry with a 14). There are no X X X at the end.

> rank=1: xs=3,x=3,ys=0,y=5
>
>  X  X  X 15 16 17
> 15 16 17 18 19 20
> 18 19 20 21 22 23
> 21 22 23 24 25 26
> 24 25 26 27 28 29

Similarly, here there are 15 slots and they are all filled up. There are no X X X.

> or np=3 gives
>
> rank=0: xs=0,x=2,ys=0,y=5
>
>  0  1  2  3  4  5
>  2  3  4  5  6  7
>  4  5  6  7  8  9
>  6  7  8  9  X  X
>  8  9  X  X  X  X
>
> rank=1: xs=2,x=2,ys=0,y=5
>
>  X  X 10 11 12 13
> 10 11 12 13 14 15
> 12 13 14 15 16 17
> 14 15 16 17 18 19
> 16 17 18 19  X  X
>
> rank=2: xs=4,x=2,ys=0,y=5
>
>  X  X  X  X 20 21
>  X  X 20 21 22 23
> 20 21 22 23 24 25
> 22 23 24 25 26 27
> 24 25 26 27 28 29
>
> , where X means this location is not visited. Still I do not know
> how subdomains talk to each other. Any references on how to
> understand the communication between subdomains?
>
> Thanks a lot!
>
> Rebecca
>

From christian.klettner at ucl.ac.uk  Wed Mar 10 13:03:16 2010
From: christian.klettner at ucl.ac.uk (Christian Klettner)
Date: Wed, 10 Mar 2010 19:03:16 -0000
Subject: [petsc-users] superlinear scale-up with hypre
In-Reply-To: <93E86515-A5E3-4A8D-AA16-0E31D9094498@mcs.anl.gov>
References: <93E86515-A5E3-4A8D-AA16-0E31D9094498@mcs.anl.gov>
Message-ID: 

Dear Barry,
Below is the performance on 32 and 64 cores respectively.
I run my case for 19 time steps and for each time step there are 4 parabolic equations to be solved (Step 1 (u,v) and Step 3 (u,v)) and 1 elliptic equation (Step 2). This is why there are 95 KSPSolves. The biggest difference I can see is in KSPSolve, but I'm guessing this is made up of other functions? Also, as you can see I set "-poeq_ksp_rtol 0.000000001" for the Poisson solve, however when I print it out it says

Residual norms for poeq_ solve.
0 KSP Residual norm 7.862045205096e-02
1 KSP Residual norm 1.833734529269e-02
2 KSP Residual norm 9.243822053526e-04
3 KSP Residual norm 1.534786635844e-04
4 KSP Residual norm 2.032435231176e-05
5 KSP Residual norm 3.201182258546e-06

so the tolerance has not been reached. Should I set the tolerance with a different command?

Thanks for any advice,
Christian

************************************************************************************************************************
***            WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document             ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./ex115 on a linux-gnu named node-c47 with 32 processors, by ucemckl Wed Mar 10 02:12:45 2010
Using Petsc Release Version 3.0.0, Patch 10, Tue Nov 24 16:38:09 CST 2009

                         Max       Max/Min        Avg      Total
Time (sec):           5.424e+02      1.00012   5.423e+02
Objects:              2.860e+02      1.00000   2.860e+02
Flops:                1.675e+10      1.02726   1.635e+10  5.232e+11
Flops/sec:            3.088e+07      1.02726   3.015e+07  9.647e+08
MPI Messages:         3.603e+03      2.00278   3.447e+03  1.103e+05
MPI Message Lengths:  8.272e+06      1.90365   2.285e+03  2.520e+08
MPI Reductions:       4.236e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flops
                          and VecAXPY() for complex vectors of length N --> 8N flops

Summary of
Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 5.4232e+02 100.0% 5.2317e+11 100.0% 1.103e+05 100.0% 2.285e+03 100.0% 4.056e+03 95.8% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). %T - percent time in this phase %F - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage VecMin 19 1.0 9.5495e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecDot 1362 1.0 1.0272e+01 1.4 1.38e+09 1.0 0.0e+00 0.0e+00 1.4e+03 2 8 0 0 32 2 8 0 0 34 4212 VecMDot 101 1.0 1.3028e+00 1.0 3.44e+08 1.0 0.0e+00 0.0e+00 1.0e+02 0 2 0 0 2 0 2 0 0 2 8241 VecNorm 972 1.0 1.0458e+01 1.6 9.88e+08 1.0 0.0e+00 0.0e+00 9.7e+02 1 6 0 0 23 1 6 0 0 24 2952 VecScale 139 1.0 4.4759e-01 1.1 7.07e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 
0 0 0 0 0 4932 VecCopy 133 1.0 6.7746e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 1136 1.0 4.2686e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecAXPY 1666 1.0 1.0439e+01 1.0 1.69e+09 1.0 0.0e+00 0.0e+00 0.0e+00 2 10 0 0 0 2 10 0 0 0 5069 VecAYPX 681 1.0 4.1510e+00 1.1 6.92e+08 1.0 0.0e+00 0.0e+00 0.0e+00 1 4 0 0 0 1 4 0 0 0 5211 VecAXPBYCZ 38 1.0 3.5104e-01 1.1 7.73e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 6877 VecMAXPY 120 1.0 1.7512e+00 1.0 4.46e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 7963 VecAssemblyBegin 290 1.0 1.4337e+0164.9 0.00e+00 0.0 3.6e+03 1.0e+03 8.7e+02 2 0 3 1 21 2 0 3 1 21 0 VecAssemblyEnd 290 1.0 8.1372e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecPointwiseMult 280 1.0 2.5121e+00 1.1 1.42e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 1770 VecScatterBegin 1373 1.0 5.1618e-02 1.7 0.00e+00 0.0 7.7e+04 1.3e+03 0.0e+00 0 0 70 40 0 0 0 70 40 0 0 VecScatterEnd 1373 1.0 6.2953e-0118.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 120 1.0 1.1371e+00 1.0 1.83e+08 1.0 0.0e+00 0.0e+00 1.2e+02 0 1 0 0 3 0 1 0 0 3 5028 MatMult 1048 1.0 5.6495e+01 1.1 6.86e+09 1.0 6.5e+04 1.3e+03 0.0e+00 10 41 59 34 0 10 41 59 34 0 3793 MatMultTranspose 57 1.0 3.4194e+00 1.1 4.02e+08 1.0 3.5e+03 1.3e+03 0.0e+00 1 2 3 2 0 1 2 3 2 0 3673 MatSolve 553 1.0 4.6169e+01 1.1 3.62e+09 1.0 0.0e+00 0.0e+00 0.0e+00 8 22 0 0 0 8 22 0 0 0 2448 MatLUFactorNum 2 1.0 7.9745e-01 1.2 2.78e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1088 MatILUFactorSym 2 1.0 2.7597e-01 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatCopy 133 1.0 4.7596e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 MatConvert 27 1.0 1.7435e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyBegin 263 1.0 1.3145e+0132.9 0.00e+00 0.0 2.4e+04 3.7e+03 5.3e+02 2 0 22 36 12 2 0 22 36 13 0 MatAssemblyEnd 263 1.0 9.1696e+00 1.0 0.00e+00 
0.0 2.5e+02 3.3e+02 6.6e+01 2 0 0 0 2 2 0 0 0 2 0 MatGetRow 901474 1.5 2.9092e-01 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetRowIJ 4 1.0 5.0068e-06 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 2 1.0 7.2280e-02 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatZeroEntries 160 1.0 3.0731e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 KSPGMRESOrthog 101 1.0 2.6510e+00 1.0 6.87e+08 1.0 0.0e+00 0.0e+00 1.0e+02 0 4 0 0 2 0 4 0 0 2 8100 KSPSetup 78 1.0 1.4449e-01 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 95 1.0 3.0155e+02 1.0 1.49e+10 1.0 5.4e+04 1.3e+03 2.4e+03 56 89 49 28 58 56 89 49 28 60 1540 PCSetUp 6 1.0 6.2894e+00 1.0 2.78e+07 1.0 0.0e+00 0.0e+00 6.0e+00 1 0 0 0 0 1 0 0 0 0 138 PCSetUpOnBlocks 57 1.0 1.0523e+00 1.2 2.78e+07 1.0 0.0e+00 0.0e+00 6.0e+00 0 0 0 0 0 0 0 0 0 0 824 PCApply 972 1.0 2.1798e+02 1.0 3.76e+09 1.0 0.0e+00 0.0e+00 0.0e+00 40 22 0 0 0 40 22 0 0 0 539 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. 
--- Event Stage 0: Main Stage Application Order 4 4 142960400 0 Index Set 42 42 11937496 0 IS L to G Mapping 18 18 39700456 0 Vec 131 131 335147648 0 Vec Scatter 31 31 26412 0 Matrix 47 47 1003139256 0 Krylov Solver 6 6 22376 0 Preconditioner 6 6 4256 0 Viewer 1 1 544 0 ======================================================================================================================== Average time to get PetscTime(): 2.86102e-07 Average time for MPI_Barrier(): 1.27792e-05 Average time for zero size MPI_Send(): 1.71363e-06 #PETSc Option Table entries: -log_summary -moeq_ksp_rtol 0.000000001 -moeq_ksp_type cg -moeq_pc_type jacobi -poeq_ksp_monitor -poeq_ksp_rtol 0.000000001 -poeq_ksp_type gmres -poeq_pc_hypre_type boomeramg -poeq_pc_type hypre -ueq_ksp_rtol 0.000000001 -ueq_ksp_type cg -veq_ksp_rtol 0.000000001 -veq_ksp_type cg #End o PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 Configure run at: Fri Jan 29 15:15:03 2010 Configure options: --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpiCC --with-blas-lapack-dir=/cvos/shared/apps/intel/mkl/10.0.2.018/lib/em64t/ --download-triangle --download-hypre --with-debugging=0 COPTFLAGS=" -03 -ffast-math -finline-functions" CXXOPTFLAGS=" -03 -ffast-math -finline-functions" --with-shared=0 ----------------------------------------- Libraries compiled on Fri Jan 29 15:17:56 GMT 2010 on login01 Machine characteristics: Linux login01 2.6.9-89.el4_lustre.1.6.7.2ddn1 #11 SMP Wed Sep 9 18:48:21 CEST 2009 x86_64 x86_64 x86_64 GNU/Linux Using PETSc directory: /shared/home/ucemckl/petsc-3.0.0-p10 Using PETSc arch: linux-gnu-c-opt ----------------------------------------- Using C compiler: mpicc Using Fortran compiler: mpif90 -O ----------------------------------------- Using include paths: -I/shared/home/ucemckl/petsc-3.0.0-p10/linux-gnu-c-opt/include 
-I/shared/home/ucemckl/petsc-3.0.0-p10/include -I/shared/home/ucemckl/petsc-3.0.0-p10/linux-gnu-c-opt/include -I/usr/X11R6/include ------------------------------------------ Using C linker: mpicc Using Fortran linker: mpif90 -O Using libraries: -Wl,-rpath,/shared/home/ucemckl/petsc-3.0.0-p10/linux-gnu-c-opt/lib -L/shared/home/ucemckl/petsc-3.0.0-p10/linux-gnu-c-opt/lib -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc -Wl,-rpath,/shared/home/ucemckl/petsc-3.0.0-p10/linux-gnu-c-opt/lib -L/shared/home/ucemckl/petsc-3.0.0-p10/linux-gnu-c-opt/lib -ltriangle -L/usr/X11R6/lib64 -lX11 -lHYPRE -lstdc++ -Wl,-rpath,/cvos/shared/apps/intel/mkl/10.0.2.018/lib/em64t -L/cvos/shared/apps/intel/mkl/10.0.2.018/lib/em64t -lmkl_lapack -lmkl -lguide -lpthread -lnsl -laio -lrt -lPEPCF90 -L/cvos/shared/apps/infinipath/2.1/mpi/lib64 -ldl -lmpich -L/cvos/shared/apps/intel/cce/10.1.008/lib -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -limf -lsvml -lipgo -lirc -lgcc_s -lirc_s -lmpichf90nc -lmpichabiglue_intel9 -L/cvos/shared/apps/intel/fce/10.1.008/lib -lifport -lifcore -lm -lm -lstdc++ -lstdc++ -ldl -lmpich -limf -lsvml -lipgo -lirc -lgcc_s -lirc_s -ldl ------------------------------------------ //////////////////////////////////////////////////////////////////////// ///////////////////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////////////////// ************************************************************************************************************************ *** WIDEN YOUR WINDOW TO 120 CHARACTERS. 
Use 'enscript -r -fCourier9' to print this document *** ************************************************************************************************************************ ---------------------------------------------- PETSc Performance Summary: ---------------------------------------------- ./ex115 on a linux-gnu named node-f56 with 64 processors, by ucemckl Wed Mar 10 04:33:32 2010 Using Petsc Release Version 3.0.0, Patch 10, Tue Nov 24 16:38:09 CST 2009 Max Max/Min Avg Total Time (sec): 2.394e+02 1.00022 2.394e+02 Objects: 2.860e+02 1.00000 2.860e+02 Flops: 8.606e+09 1.04191 8.283e+09 5.301e+11 Flops/sec: 3.595e+07 1.04196 3.461e+07 2.215e+09 MPI Messages: 3.627e+03 1.98414 3.565e+03 2.282e+05 MPI Message Lengths: 7.563e+06 1.99911 2.009e+03 4.584e+08 MPI Reductions: 4.269e+03 1.00000 Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract) e.g., VecAXPY() for real vectors of length N --> 2N flops and VecAXPY() for complex vectors of length N --> 8N flops Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions -- Avg %Total Avg %Total counts %Total Avg %Total counts %Total 0: Main Stage: 2.3936e+02 100.0% 5.3013e+11 100.0% 2.282e+05 100.0% 2.009e+03 100.0% 4.089e+03 95.8% ------------------------------------------------------------------------------------------------------------------------ See the 'Profiling' chapter of the users' manual for details on interpreting output. Phase summary info: Count: number of times phase was executed Time and Flops: Max - maximum over all processors Ratio - ratio of maximum to minimum over all processors Mess: number of messages sent Avg. len: average message length Reduct: number of global reductions Global: entire computation Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop(). 
%T - percent time in this phase %F - percent flops in this phase %M - percent messages in this phase %L - percent message lengths in this phase %R - percent reductions in this phase Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors) ------------------------------------------------------------------------------------------------------------------------ Event Count Time (sec) Flops --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage VecMin 19 1.0 4.7353e-02 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecDot 1380 1.0 5.3245e+00 1.7 7.11e+08 1.0 0.0e+00 0.0e+00 1.4e+03 2 8 0 0 32 2 8 0 0 34 8224 VecMDot 104 1.0 6.9024e-01 1.0 1.84e+08 1.0 0.0e+00 0.0e+00 1.0e+02 0 2 0 0 2 0 2 0 0 3 16458 VecNorm 984 1.0 5.8349e+00 1.7 5.07e+08 1.0 0.0e+00 0.0e+00 9.8e+02 2 6 0 0 23 2 6 0 0 24 5351 VecScale 142 1.0 1.5187e-01 1.7 3.66e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 14835 VecCopy 133 1.0 3.9400e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecSet 1148 1.0 2.0722e+00 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 VecAXPY 1684 1.0 5.1021e+00 1.1 8.67e+08 1.0 0.0e+00 0.0e+00 0.0e+00 2 10 0 0 0 2 10 0 0 0 10473 VecAYPX 690 1.0 1.9134e+00 1.1 3.55e+08 1.0 0.0e+00 0.0e+00 0.0e+00 1 4 0 0 0 1 4 0 0 0 11443 VecAXPBYCZ 38 1.0 1.7525e-01 1.1 3.91e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 13761 VecMAXPY 123 1.0 8.9613e-01 1.1 2.38e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 16359 VecAssemblyBegin 290 1.0 6.6559e+0015.4 0.00e+00 0.0 7.3e+03 1.0e+03 8.7e+02 2 0 3 2 20 2 0 3 2 21 0 VecAssemblyEnd 290 1.0 1.5714e-03 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecPointwiseMult 280 1.0 1.2558e+00 1.1 7.21e+07 1.0 0.0e+00 0.0e+00 0.0e+00 
0 1 0 0 0 0 1 0 0 0 3538 VecScatterBegin 1385 1.0 4.7455e-02 1.8 0.00e+00 0.0 1.6e+05 1.3e+03 0.0e+00 0 0 69 45 0 0 0 69 45 0 0 VecScatterEnd 1385 1.0 4.8537e-0115.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 VecNormalize 123 1.0 6.2763e-01 1.1 9.50e+07 1.0 0.0e+00 0.0e+00 1.2e+02 0 1 0 0 3 0 1 0 0 3 9328 MatMult 1060 1.0 2.4949e+01 1.1 3.51e+09 1.0 1.3e+05 1.3e+03 0.0e+00 10 41 59 38 0 10 41 59 38 0 8678 MatMultTranspose 57 1.0 1.4921e+00 1.2 2.04e+08 1.0 7.2e+03 1.3e+03 0.0e+00 1 2 3 2 0 1 2 3 2 0 8409 MatSolve 562 1.0 2.1214e+01 1.1 1.86e+09 1.0 0.0e+00 0.0e+00 0.0e+00 8 22 0 0 0 8 22 0 0 0 5409 MatLUFactorNum 2 1.0 3.7373e-01 1.2 1.41e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2320 MatILUFactorSym 2 1.0 1.2428e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatCopy 133 1.0 2.3860e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 MatConvert 27 1.0 8.3217e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatAssemblyBegin 263 1.0 8.3536e+0040.7 0.00e+00 0.0 5.0e+04 3.7e+03 5.3e+02 3 0 22 40 12 3 0 22 40 13 0 MatAssemblyEnd 263 1.0 4.4723e+00 1.1 0.00e+00 0.0 5.0e+02 3.3e+02 6.6e+01 2 0 0 0 2 2 0 0 0 2 0 MatGetRow 453796 1.5 1.8176e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetRowIJ 4 1.0 5.0068e-06 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatGetOrdering 2 1.0 3.0140e-02 2.7 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 0 MatZeroEntries 160 1.0 1.5786e+00 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 KSPGMRESOrthog 104 1.0 1.3677e+00 1.0 3.69e+08 1.0 0.0e+00 0.0e+00 1.0e+02 1 4 0 0 2 1 4 0 0 3 16612 KSPSetup 78 1.0 4.9393e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 KSPSolve 95 1.0 1.3637e+02 1.0 7.65e+09 1.0 1.1e+05 1.3e+03 2.5e+03 57 89 49 32 58 57 89 49 32 61 3457 PCSetUp 6 1.0 2.7957e+00 1.0 1.41e+07 1.0 0.0e+00 0.0e+00 6.0e+00 1 0 0 0 0 1 0 0 0 0 310 PCSetUpOnBlocks 57 1.0 5.0076e-01 
1.2 1.41e+07 1.0 0.0e+00 0.0e+00 6.0e+00 0 0 0 0 0 0 0 0 0 0 1732 PCApply 984 1.0 9.8020e+01 1.0 1.93e+09 1.0 0.0e+00 0.0e+00 0.0e+00 41 22 0 0 0 41 22 0 0 0 1216 ------------------------------------------------------------------------------------------------------------------------ Memory usage is given in bytes: Object Type Creations Destructions Memory Descendants' Mem. --- Event Stage 0: Main Stage Application Order 4 4 134876056 0 Index Set 42 42 5979736 0 IS L to G Mapping 18 18 19841256 0 Vec 131 131 167538256 0 Vec Scatter 31 31 26412 0 Matrix 47 47 501115544 0 Krylov Solver 6 6 22376 0 Preconditioner 6 6 4256 0 Viewer 1 1 544 0 ======================================================================================================================== Average time to get PetscTime(): 1.90735e-07 Average time for MPI_Barrier(): 1.35899e-05 Average time for zero size MPI_Send(): 1.79559e-06 #PETSc Option Table entries: -log_summary -moeq_ksp_rtol 0.000000001 -moeq_ksp_type cg -moeq_pc_type jacobi -poeq_ksp_monitor -poeq_ksp_rtol 0.000000001 -poeq_ksp_type gmres -poeq_pc_hypre_type boomeramg -poeq_pc_type hypre -ueq_ksp_rtol 0.000000001 -ueq_ksp_type cg -veq_ksp_rtol 0.000000001 -veq_ksp_type cg #End o PETSc Option Table entries Compiled without FORTRAN kernels Compiled with full precision matrices (default) sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 Configure run at: Fri Jan 29 15:15:03 2010 Configure options: --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpiCC --with-blas-lapack-dir=/cvos/shared/apps/intel/mkl/10.0.2.018/lib/em64t/ --download-triangle --download-hypre --with-debugging=0 COPTFLAGS=" -03 -ffast-math -finline-functions" CXXOPTFLAGS=" -03 -ffast-math -finline-functions" --with-shared=0 ----------------------------------------- > > Cannot really say without more information about what is taking > time on 32 cores and 256 cores. 
>
> If you run 32 core and 256 core with -log_summary (also the
> --with-debugging=0 ./configure version of PETSc) we'll be able to see where
> the time is being spent and so whether it makes sense.
>
> Barry
>
> On Mar 8, 2010, at 1:09 PM, Christian Klettner wrote:
>
>> Dear PETSc,
>> I am using a fractional step method to solve the Navier-Stokes equation
>> which is composed of three steps. I have to solve a Poisson equation for
>> pressure in Step 2 and I use the GMRES solver with Hypre's BoomerAMG for
>> preconditioning. I have tested for strong scaling using a fixed problem
>> size of 16 million degrees of freedom and varied the number of cores from
>> 32 to 256. I have found superlinear speedup up until this number of cores.
>> Is there a reason why BoomerAMG exhibits this kind of behaviour?
>>
>> Best regards,
>> Christian

From bsmith at mcs.anl.gov  Wed Mar 10 13:29:46 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Wed, 10 Mar 2010 13:29:46 -0600
Subject: [petsc-users] superlinear scale-up with hypre
In-Reply-To: 
References: <93E86515-A5E3-4A8D-AA16-0E31D9094498@mcs.anl.gov>
Message-ID: <8FD520D5-FB8A-42C7-9215-D25D967B4B53@mcs.anl.gov>

Christian,

The multiply, the triangular solves and the preconditioner application are all getting superlinear speedup. My guess is that this is due to cache effects: since the working set on each process is smaller, more of it stays in the cache more of the time, so the run time depends less on the time for memory access, hence the superlinear speedup.

If you use a nonzero initial guess, the stopping criterion for the Krylov solvers is, by default, a reduction in the 2-norm of the residual RELATIVE to the RIGHT-HAND SIDE, not the initial residual. Hence it converges "sooner than you expect". You can use the option -ksp_converged_use_initial_residual_norm to have the decrease be relative to the initial residual instead, but I think the default is best for time-dependent problems.
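As a concrete illustration (a sketch only: ./ex115 and the poeq_ option prefix are taken from the log posted in this thread, and passing prefixed options this way assumes the KSP prefix is honored by KSPSetFromOptions(), as the -poeq_ksp_rtol option above suggests it is), you can ask the Poisson solve to report why it stopped:

```shell
# Re-run with the same solver options as in the log above, plus
# -poeq_ksp_converged_reason, which prints the reason the poeq_ KSP
# stopped (e.g. CONVERGED_RTOL) together with the iteration count.
./ex115 -poeq_ksp_type gmres -poeq_pc_type hypre -poeq_pc_hypre_type boomeramg \
        -poeq_ksp_rtol 0.000000001 \
        -poeq_ksp_monitor -poeq_ksp_converged_reason
```

If this reports CONVERGED_RTOL after 5 iterations, the rtol test was satisfied relative to the right-hand side, as described above, even though the monitored residual has only dropped by roughly four orders of magnitude from its initial value.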
If you use a zero initial guess, I cannot explain why it seems to converge "early". You can run with -ksp_converged_reason to have it print why it stops, or in the debugger put a breakpoint in KSPDefaultConverged() to see what is going on with the test.

   Barry

On Mar 10, 2010, at 1:03 PM, Christian Klettner wrote:

Dear Barry,
Below is the performance on 32 and 64 cores respectively. I run my case for 19 time steps and for each time step there are 4 parabolic equations to be solved (Step 1 (u,v) and Step 3 (u,v)) and 1 elliptic equation (Step 2). This is why there are 95 KSPSolves. The biggest difference I can see is in KSPSolve, but I'm guessing this is made up of other functions? Also, as you can see I set "-poeq_ksp_rtol 0.000000001" for the Poisson solve, however when I print it out it says

Residual norms for poeq_ solve.
0 KSP Residual norm 7.862045205096e-02
1 KSP Residual norm 1.833734529269e-02
2 KSP Residual norm 9.243822053526e-04
3 KSP Residual norm 1.534786635844e-04
4 KSP Residual norm 2.032435231176e-05
5 KSP Residual norm 3.201182258546e-06

so the tolerance has not been reached. Should I set the tolerance with a different command?

Thanks for any advice,
Christian

> ************************************************************************************************************************
> *** WIDEN YOUR WINDOW TO 120 CHARACTERS.
Use 'enscript -r > -fCourier9' to print this document *** > ************************************************************************************************************************ > > ---------------------------------------------- PETSc Performance > Summary: > ---------------------------------------------- > > ./ex115 on a linux-gnu named node-c47 with 32 processors, by ucemckl > Wed > Mar 10 02:12:45 2010 > Using Petsc Release Version 3.0.0, Patch 10, Tue Nov 24 16:38:09 CST > 2009 > > Max Max/Min Avg Total > Time (sec): 5.424e+02 1.00012 5.423e+02 > Objects: 2.860e+02 1.00000 2.860e+02 > Flops: 1.675e+10 1.02726 1.635e+10 5.232e+11 > Flops/sec: 3.088e+07 1.02726 3.015e+07 9.647e+08 > MPI Messages: 3.603e+03 2.00278 3.447e+03 1.103e+05 > MPI Message Lengths: 8.272e+06 1.90365 2.285e+03 2.520e+08 > MPI Reductions: 4.236e+03 1.00000 > > Flop counting convention: 1 flop = 1 real number operation of type > (multiply/divide/add/subtract) > e.g., VecAXPY() for real vectors of > length N > --> 2N flops > and VecAXPY() for complex vectors of > length N > --> 8N flops > > Summary of Stages: ----- Time ------ ----- Flops ----- --- > Messages > --- -- Message Lengths -- -- Reductions -- > Avg %Total Avg %Total counts > %Total Avg %Total counts %Total > 0: Main Stage: 5.4232e+02 100.0% 5.2317e+11 100.0% 1.103e+05 > 100.0% 2.285e+03 100.0% 4.056e+03 95.8% > > ------------------------------------------------------------------------------------------------------------------------ > See the 'Profiling' chapter of the users' manual for details on > interpreting output. > Phase summary info: > Count: number of times phase was executed > Time and Flops: Max - maximum over all processors > Ratio - ratio of maximum to minimum over all > processors > Mess: number of messages sent > Avg. len: average message length > Reduct: number of global reductions > Global: entire computation > Stage: stages of a computation. Set stages with > PetscLogStagePush() and > PetscLogStagePop(). 
> %T - percent time in this phase %F - percent flops in > this > phase > %M - percent messages in this phase %L - percent message > lengths > in this phase > %R - percent reductions in this phase > Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time > over all processors) > ------------------------------------------------------------------------------------------------------------------------ > Event Count Time (sec) Flops > --- Global --- --- Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg > len > Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > ------------------------------------------------------------------------------------------------------------------------ > > --- Event Stage 0: Main Stage > > VecMin 19 1.0 9.5495e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecDot 1362 1.0 1.0272e+01 1.4 1.38e+09 1.0 0.0e+00 0.0e > +00 > 1.4e+03 2 8 0 0 32 2 8 0 0 34 4212 > VecMDot 101 1.0 1.3028e+00 1.0 3.44e+08 1.0 0.0e+00 0.0e > +00 > 1.0e+02 0 2 0 0 2 0 2 0 0 2 8241 > VecNorm 972 1.0 1.0458e+01 1.6 9.88e+08 1.0 0.0e+00 0.0e > +00 > 9.7e+02 1 6 0 0 23 1 6 0 0 24 2952 > VecScale 139 1.0 4.4759e-01 1.1 7.07e+07 1.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 4932 > VecCopy 133 1.0 6.7746e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecSet 1136 1.0 4.2686e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 > VecAXPY 1666 1.0 1.0439e+01 1.0 1.69e+09 1.0 0.0e+00 0.0e > +00 > 0.0e+00 2 10 0 0 0 2 10 0 0 0 5069 > VecAYPX 681 1.0 4.1510e+00 1.1 6.92e+08 1.0 0.0e+00 0.0e > +00 > 0.0e+00 1 4 0 0 0 1 4 0 0 0 5211 > VecAXPBYCZ 38 1.0 3.5104e-01 1.1 7.73e+07 1.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 6877 > VecMAXPY 120 1.0 1.7512e+00 1.0 4.46e+08 1.0 0.0e+00 0.0e > +00 > 0.0e+00 0 3 0 0 0 0 3 0 0 0 7963 > VecAssemblyBegin 290 1.0 1.4337e+0164.9 0.00e+00 0.0 3.6e+03 1.0e > +03 > 8.7e+02 2 0 3 1 21 2 0 3 1 21 0 > VecAssemblyEnd 290 1.0 8.1372e-04 1.3 0.00e+00 0.0 
0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecPointwiseMult 280 1.0 2.5121e+00 1.1 1.42e+08 1.0 0.0e+00 0.0e > +00 > 0.0e+00 0 1 0 0 0 0 1 0 0 0 1770 > VecScatterBegin 1373 1.0 5.1618e-02 1.7 0.00e+00 0.0 7.7e+04 1.3e > +03 > 0.0e+00 0 0 70 40 0 0 0 70 40 0 0 > VecScatterEnd 1373 1.0 6.2953e-0118.2 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecNormalize 120 1.0 1.1371e+00 1.0 1.83e+08 1.0 0.0e+00 0.0e > +00 > 1.2e+02 0 1 0 0 3 0 1 0 0 3 5028 > MatMult 1048 1.0 5.6495e+01 1.1 6.86e+09 1.0 6.5e+04 1.3e > +03 > 0.0e+00 10 41 59 34 0 10 41 59 34 0 3793 > MatMultTranspose 57 1.0 3.4194e+00 1.1 4.02e+08 1.0 3.5e+03 1.3e > +03 > 0.0e+00 1 2 3 2 0 1 2 3 2 0 3673 > MatSolve 553 1.0 4.6169e+01 1.1 3.62e+09 1.0 0.0e+00 0.0e > +00 > 0.0e+00 8 22 0 0 0 8 22 0 0 0 2448 > MatLUFactorNum 2 1.0 7.9745e-01 1.2 2.78e+07 1.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 1088 > MatILUFactorSym 2 1.0 2.7597e-01 1.5 0.00e+00 0.0 0.0e+00 0.0e > +00 > 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatCopy 133 1.0 4.7596e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 > MatConvert 27 1.0 1.7435e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatAssemblyBegin 263 1.0 1.3145e+0132.9 0.00e+00 0.0 2.4e+04 3.7e > +03 > 5.3e+02 2 0 22 36 12 2 0 22 36 13 0 > MatAssemblyEnd 263 1.0 9.1696e+00 1.0 0.00e+00 0.0 2.5e+02 3.3e > +02 > 6.6e+01 2 0 0 0 2 2 0 0 0 2 0 > MatGetRow 901474 1.5 2.9092e-01 1.4 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatGetRowIJ 4 1.0 5.0068e-06 2.6 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatGetOrdering 2 1.0 7.2280e-02 3.2 0.00e+00 0.0 0.0e+00 0.0e > +00 > 4.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatZeroEntries 160 1.0 3.0731e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 > KSPGMRESOrthog 101 1.0 2.6510e+00 1.0 6.87e+08 1.0 0.0e+00 0.0e > +00 > 1.0e+02 0 4 0 0 2 0 4 0 0 2 8100 > KSPSetup 78 1.0 1.4449e-01 2.4 0.00e+00 0.0 0.0e+00 0.0e > +00 > 
0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > KSPSolve 95 1.0 3.0155e+02 1.0 1.49e+10 1.0 5.4e+04 1.3e > +03 > 2.4e+03 56 89 49 28 58 56 89 49 28 60 1540 > PCSetUp 6 1.0 6.2894e+00 1.0 2.78e+07 1.0 0.0e+00 0.0e > +00 > 6.0e+00 1 0 0 0 0 1 0 0 0 0 138 > PCSetUpOnBlocks 57 1.0 1.0523e+00 1.2 2.78e+07 1.0 0.0e+00 0.0e > +00 > 6.0e+00 0 0 0 0 0 0 0 0 0 0 824 > PCApply 972 1.0 2.1798e+02 1.0 3.76e+09 1.0 0.0e+00 0.0e > +00 > 0.0e+00 40 22 0 0 0 40 22 0 0 0 539 > ------------------------------------------------------------------------------------------------------------------------ > > Memory usage is given in bytes: > > Object Type Creations Destructions Memory Descendants' > Mem. > > --- Event Stage 0: Main Stage > > Application Order 4 4 142960400 0 > Index Set 42 42 11937496 0 > IS L to G Mapping 18 18 39700456 0 > Vec 131 131 335147648 0 > Vec Scatter 31 31 26412 0 > Matrix 47 47 1003139256 0 > Krylov Solver 6 6 22376 0 > Preconditioner 6 6 4256 0 > Viewer 1 1 544 0 > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > = > ====================================================================== > Average time to get PetscTime(): 2.86102e-07 > Average time for MPI_Barrier(): 1.27792e-05 > Average time for zero size MPI_Send(): 1.71363e-06 > #PETSc Option Table entries: > -log_summary > -moeq_ksp_rtol 0.000000001 > -moeq_ksp_type cg > -moeq_pc_type jacobi > -poeq_ksp_monitor > -poeq_ksp_rtol 0.000000001 > -poeq_ksp_type gmres > -poeq_pc_hypre_type boomeramg > -poeq_pc_type hypre > -ueq_ksp_rtol 0.000000001 > -ueq_ksp_type cg > -veq_ksp_rtol 0.000000001 > -veq_ksp_type cg > #End o PETSc Option Table entries > Compiled without FORTRAN kernels > Compiled with full precision matrices (default) > sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 > sizeof(PetscScalar) 8 > Configure run at: Fri Jan 29 15:15:03 2010 > Configure options: 
--with-cc=mpicc --with-fc=mpif90 --with-cxx=mpiCC > --with-blas-lapack-dir=/cvos/shared/apps/intel/mkl/10.0.2.018/lib/ > em64t/ > --download-triangle --download-hypre --with-debugging=0 COPTFLAGS=" > -03 > -ffast-math -finline-functions" CXXOPTFLAGS=" -03 -ffast-math > -finline-functions" --with-shared=0 > ----------------------------------------- > Libraries compiled on Fri Jan 29 15:17:56 GMT 2010 on login01 > Machine characteristics: Linux login01 2.6.9-89.el4_lustre. > 1.6.7.2ddn1 #11 > SMP Wed Sep 9 18:48:21 CEST 2009 x86_64 x86_64 x86_64 GNU/Linux > Using PETSc directory: /shared/home/ucemckl/petsc-3.0.0-p10 > Using PETSc arch: linux-gnu-c-opt > ----------------------------------------- > Using C compiler: mpicc > Using Fortran compiler: mpif90 -O > ----------------------------------------- > Using include paths: > -I/shared/home/ucemckl/petsc-3.0.0-p10/linux-gnu-c-opt/include > -I/shared/home/ucemckl/petsc-3.0.0-p10/include > -I/shared/home/ucemckl/petsc-3.0.0-p10/linux-gnu-c-opt/include > -I/usr/X11R6/include > ------------------------------------------ > Using C linker: mpicc > Using Fortran linker: mpif90 -O > Using libraries: > -Wl,-rpath,/shared/home/ucemckl/petsc-3.0.0-p10/linux-gnu-c-opt/lib > -L/shared/home/ucemckl/petsc-3.0.0-p10/linux-gnu-c-opt/lib -lpetscts > -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc > -Wl,-rpath,/shared/home/ucemckl/petsc-3.0.0-p10/linux-gnu-c-opt/lib > -L/shared/home/ucemckl/petsc-3.0.0-p10/linux-gnu-c-opt/lib -ltriangle > -L/usr/X11R6/lib64 -lX11 -lHYPRE -lstdc++ > -Wl,-rpath,/cvos/shared/apps/intel/mkl/10.0.2.018/lib/em64t > -L/cvos/shared/apps/intel/mkl/10.0.2.018/lib/em64t -lmkl_lapack -lmkl > -lguide -lpthread -lnsl -laio -lrt -lPEPCF90 > -L/cvos/shared/apps/infinipath/2.1/mpi/lib64 -ldl -lmpich > -L/cvos/shared/apps/intel/cce/10.1.008/lib > -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6 -limf -lsvml -lipgo -lirc - > lgcc_s > -lirc_s -lmpichf90nc -lmpichabiglue_intel9 > 
-L/cvos/shared/apps/intel/fce/10.1.008/lib -lifport -lifcore -lm -lm > -lstdc++ -lstdc++ -ldl -lmpich -limf -lsvml -lipgo -lirc -lgcc_s - > lirc_s > -ldl > ------------------------------------------ > > > //////////////////////////////////////////////////////////////////////// > ///////////////////////////////////////////////////////////////////////// > //////////////////////////////////////////////////////////////////////// > > ************************************************************************************************************************ > *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r > -fCourier9' to print this document *** > ************************************************************************************************************************ > > ---------------------------------------------- PETSc Performance > Summary: > ---------------------------------------------- > > ./ex115 on a linux-gnu named node-f56 with 64 processors, by ucemckl > Wed > Mar 10 04:33:32 2010 > Using Petsc Release Version 3.0.0, Patch 10, Tue Nov 24 16:38:09 CST > 2009 > > Max Max/Min Avg Total > Time (sec): 2.394e+02 1.00022 2.394e+02 > Objects: 2.860e+02 1.00000 2.860e+02 > Flops: 8.606e+09 1.04191 8.283e+09 5.301e+11 > Flops/sec: 3.595e+07 1.04196 3.461e+07 2.215e+09 > MPI Messages: 3.627e+03 1.98414 3.565e+03 2.282e+05 > MPI Message Lengths: 7.563e+06 1.99911 2.009e+03 4.584e+08 > MPI Reductions: 4.269e+03 1.00000 > > Flop counting convention: 1 flop = 1 real number operation of type > (multiply/divide/add/subtract) > e.g., VecAXPY() for real vectors of > length N > --> 2N flops > and VecAXPY() for complex vectors of > length N > --> 8N flops > > Summary of Stages: ----- Time ------ ----- Flops ----- --- > Messages > --- -- Message Lengths -- -- Reductions -- > Avg %Total Avg %Total counts > %Total Avg %Total counts %Total > 0: Main Stage: 2.3936e+02 100.0% 5.3013e+11 100.0% 2.282e+05 > 100.0% 2.009e+03 100.0% 4.089e+03 95.8% > > 
------------------------------------------------------------------------------------------------------------------------ > See the 'Profiling' chapter of the users' manual for details on > interpreting output. > Phase summary info: > Count: number of times phase was executed > Time and Flops: Max - maximum over all processors > Ratio - ratio of maximum to minimum over all > processors > Mess: number of messages sent > Avg. len: average message length > Reduct: number of global reductions > Global: entire computation > Stage: stages of a computation. Set stages with > PetscLogStagePush() and > PetscLogStagePop(). > %T - percent time in this phase %F - percent flops in > this > phase > %M - percent messages in this phase %L - percent message > lengths > in this phase > %R - percent reductions in this phase > Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time > over all processors) > ------------------------------------------------------------------------------------------------------------------------ > Event Count Time (sec) Flops > --- Global --- --- Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg > len > Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > ------------------------------------------------------------------------------------------------------------------------ > > --- Event Stage 0: Main Stage > > VecMin 19 1.0 4.7353e-02 1.9 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecDot 1380 1.0 5.3245e+00 1.7 7.11e+08 1.0 0.0e+00 0.0e > +00 > 1.4e+03 2 8 0 0 32 2 8 0 0 34 8224 > VecMDot 104 1.0 6.9024e-01 1.0 1.84e+08 1.0 0.0e+00 0.0e > +00 > 1.0e+02 0 2 0 0 2 0 2 0 0 3 16458 > VecNorm 984 1.0 5.8349e+00 1.7 5.07e+08 1.0 0.0e+00 0.0e > +00 > 9.8e+02 2 6 0 0 23 2 6 0 0 24 5351 > VecScale 142 1.0 1.5187e-01 1.7 3.66e+07 1.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 14835 > VecCopy 133 1.0 3.9400e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecSet 1148 1.0 2.0722e+00 1.2 0.00e+00 0.0 
0.0e+00 0.0e > +00 > 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 > VecAXPY 1684 1.0 5.1021e+00 1.1 8.67e+08 1.0 0.0e+00 0.0e > +00 > 0.0e+00 2 10 0 0 0 2 10 0 0 0 10473 > VecAYPX 690 1.0 1.9134e+00 1.1 3.55e+08 1.0 0.0e+00 0.0e > +00 > 0.0e+00 1 4 0 0 0 1 4 0 0 0 11443 > VecAXPBYCZ 38 1.0 1.7525e-01 1.1 3.91e+07 1.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 13761 > VecMAXPY 123 1.0 8.9613e-01 1.1 2.38e+08 1.0 0.0e+00 0.0e > +00 > 0.0e+00 0 3 0 0 0 0 3 0 0 0 16359 > VecAssemblyBegin 290 1.0 6.6559e+0015.4 0.00e+00 0.0 7.3e+03 1.0e > +03 > 8.7e+02 2 0 3 2 20 2 0 3 2 21 0 > VecAssemblyEnd 290 1.0 1.5714e-03 2.8 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecPointwiseMult 280 1.0 1.2558e+00 1.1 7.21e+07 1.0 0.0e+00 0.0e > +00 > 0.0e+00 0 1 0 0 0 0 1 0 0 0 3538 > VecScatterBegin 1385 1.0 4.7455e-02 1.8 0.00e+00 0.0 1.6e+05 1.3e > +03 > 0.0e+00 0 0 69 45 0 0 0 69 45 0 0 > VecScatterEnd 1385 1.0 4.8537e-0115.5 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > VecNormalize 123 1.0 6.2763e-01 1.1 9.50e+07 1.0 0.0e+00 0.0e > +00 > 1.2e+02 0 1 0 0 3 0 1 0 0 3 9328 > MatMult 1060 1.0 2.4949e+01 1.1 3.51e+09 1.0 1.3e+05 1.3e > +03 > 0.0e+00 10 41 59 38 0 10 41 59 38 0 8678 > MatMultTranspose 57 1.0 1.4921e+00 1.2 2.04e+08 1.0 7.2e+03 1.3e > +03 > 0.0e+00 1 2 3 2 0 1 2 3 2 0 8409 > MatSolve 562 1.0 2.1214e+01 1.1 1.86e+09 1.0 0.0e+00 0.0e > +00 > 0.0e+00 8 22 0 0 0 8 22 0 0 0 5409 > MatLUFactorNum 2 1.0 3.7373e-01 1.2 1.41e+07 1.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 2320 > MatILUFactorSym 2 1.0 1.2428e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e > +00 > 2.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatCopy 133 1.0 2.3860e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 > MatConvert 27 1.0 8.3217e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatAssemblyBegin 263 1.0 8.3536e+0040.7 0.00e+00 0.0 5.0e+04 3.7e > +03 > 5.3e+02 3 0 22 40 12 3 0 22 40 13 0 > MatAssemblyEnd 263 1.0 4.4723e+00 1.1 0.00e+00 0.0 5.0e+02 
3.3e > +02 > 6.6e+01 2 0 0 0 2 2 0 0 0 2 0 > MatGetRow 453796 1.5 1.8176e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatGetRowIJ 4 1.0 5.0068e-06 2.6 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatGetOrdering 2 1.0 3.0140e-02 2.7 0.00e+00 0.0 0.0e+00 0.0e > +00 > 4.0e+00 0 0 0 0 0 0 0 0 0 0 0 > MatZeroEntries 160 1.0 1.5786e+00 1.2 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 1 0 0 0 0 1 0 0 0 0 0 > KSPGMRESOrthog 104 1.0 1.3677e+00 1.0 3.69e+08 1.0 0.0e+00 0.0e > +00 > 1.0e+02 1 4 0 0 2 1 4 0 0 3 16612 > KSPSetup 78 1.0 4.9393e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e > +00 > 0.0e+00 0 0 0 0 0 0 0 0 0 0 0 > KSPSolve 95 1.0 1.3637e+02 1.0 7.65e+09 1.0 1.1e+05 1.3e > +03 > 2.5e+03 57 89 49 32 58 57 89 49 32 61 3457 > PCSetUp 6 1.0 2.7957e+00 1.0 1.41e+07 1.0 0.0e+00 0.0e > +00 > 6.0e+00 1 0 0 0 0 1 0 0 0 0 310 > PCSetUpOnBlocks 57 1.0 5.0076e-01 1.2 1.41e+07 1.0 0.0e+00 0.0e > +00 > 6.0e+00 0 0 0 0 0 0 0 0 0 0 1732 > PCApply 984 1.0 9.8020e+01 1.0 1.93e+09 1.0 0.0e+00 0.0e > +00 > 0.0e+00 41 22 0 0 0 41 22 0 0 0 1216 > ------------------------------------------------------------------------------------------------------------------------ > > Memory usage is given in bytes: > > Object Type Creations Destructions Memory Descendants' > Mem. 
> 
> --- Event Stage 0: Main Stage
> 
> Application Order     4     4     134876056     0
> Index Set            42    42       5979736     0
> IS L to G Mapping    18    18      19841256     0
> Vec                 131   131     167538256     0
> Vec Scatter          31    31         26412     0
> Matrix               47    47     501115544     0
> Krylov Solver         6     6         22376     0
> Preconditioner        6     6          4256     0
> Viewer                1     1           544     0
> ========================================================================================================================
> Average time to get PetscTime(): 1.90735e-07
> Average time for MPI_Barrier(): 1.35899e-05
> Average time for zero size MPI_Send(): 1.79559e-06
> #PETSc Option Table entries:
> -log_summary
> -moeq_ksp_rtol 0.000000001
> -moeq_ksp_type cg
> -moeq_pc_type jacobi
> -poeq_ksp_monitor
> -poeq_ksp_rtol 0.000000001
> -poeq_ksp_type gmres
> -poeq_pc_hypre_type boomeramg
> -poeq_pc_type hypre
> -ueq_ksp_rtol 0.000000001

From fischej at umich.edu Thu Mar 11 10:34:31 2010
From: fischej at umich.edu (John-Michael Fischer)
Date: Thu, 11 Mar 2010 11:34:31 -0500
Subject: [petsc-users] choosing MatSetValue or MatSetValues
Message-ID: <88C5EA23-72C5-4074-AD9E-B0785388185E@umich.edu>

I need some help in determining the extent of the benefit when doing
dense matrix insertions.

Essentially, in our software we need every last byte of memory we can
get, and I wanted to get some feedback on how much slower it would be
when building a dense matrix to use different insertion methods.

Option A:
Use MatSetValue for every data point.

Option B:
Store some amount of values locally in an array as they are generated
(which would take up extra memory), say 1024, then use MatSetValues to
insert them over several sweeps of local generation.

If the speed tradeoff for building the matrix is, say, a factor of 2 in
exchange for not having to allocate the extra local memory -- then it's
probably worth it for us.
I just wanted to get some intelligent comment on what those tradeoffs
might be and how they would scale.

Thanks,
John-Michael Fischer

From jed at 59A2.org Thu Mar 11 10:50:06 2010
From: jed at 59A2.org (Jed Brown)
Date: Thu, 11 Mar 2010 17:50:06 +0100
Subject: [petsc-users] choosing MatSetValue or MatSetValues
In-Reply-To: <88C5EA23-72C5-4074-AD9E-B0785388185E@umich.edu>
References: <88C5EA23-72C5-4074-AD9E-B0785388185E@umich.edu>
Message-ID: <87d3za3irl.fsf@59A2.org>

On Thu, 11 Mar 2010 11:34:31 -0500, John-Michael Fischer wrote:
> I need some help in determining the extent of the benefit when doing
> dense matrix insertions.
>
> Essentially, in our software we need every last byte of memory we can
> get, and I wanted to get some feedback on how much slower it would be
> when building a dense matrix to use different insertion methods.
> Option A:
> Use MatSetValue for every datapoint.
> Option B:
> Store some amount of values locally in an array as they are generated
> (which would take up extra memory), say 1024, then using MatSetValues
> to insert them over several sweeps of local generation.

Why is memory so extremely tight? Is this some sort of embedded
application?

Inserting a few (say, 8) values at a time will give you most of the
benefit of operating in chunks; you don't have to insert entire rows at
once to get a speedup. Are you working in parallel?

For sequential dense matrices, it's also reasonable to just get the
array and index directly into it.
Jed

From bsmith at mcs.anl.gov Thu Mar 11 11:35:57 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Thu, 11 Mar 2010 11:35:57 -0600
Subject: [petsc-users] choosing MatSetValue or MatSetValues
In-Reply-To: <88C5EA23-72C5-4074-AD9E-B0785388185E@umich.edu>
References: <88C5EA23-72C5-4074-AD9E-B0785388185E@umich.edu>
Message-ID: <93424A2C-1D55-4F53-B40A-8EA8843654D6@mcs.anl.gov>

John-Michael,

For dense matrices, the only extra overhead of calling MatSetValue()
for each point versus for a set of points is the function-call
overhead: there is no searching, etc., for dense matrices, since the
values are stored directly in a dense array.

As Jed points out, for dense matrices -- BOTH sequential and parallel --
you can call MatGetArray(), which gives direct access to the array
(column oriented) for that process, and then stick values directly into
the array. This eliminates the function-call overhead of MatSetValues()
and so is the way to go.

Barry

By column oriented I mean that with

  PetscScalar *a;
  MatGetArray(A,&a);

a[0] is the first column, first row of the process's part of the
matrix; a[1] is the first column, SECOND row of the process's part of
the matrix; a[2] is the first column, third row; and a[m] is the second
column, first row, where m is the local number of rows. Etc.

On Mar 11, 2010, at 10:34 AM, John-Michael Fischer wrote:

> I need some help in determining the extent of the benefit when doing
> dense matrix insertions.
>
> Essentially, in our software we need every last byte of memory we
> can get, and I wanted to get some feedback on how much slower it
> would be when building a dense matrix to use different insertion
> methods.
> Option A:
> Use MatSetValue for every datapoint.
> Option B:
> Store some amount of values locally in an array as they are
> generated (which would take up extra memory), say 1024, then using
> MatSetValues to insert them over several sweeps of local generation.
> > If the speed tradeoff for building the matrix is say, a factor of 2,
> > in exchange for not having to allocate the extra local memory --
> > then its probably worth it for us. I just wanted to get some
> > intelligent comment on what those tradeoffs might be and how they
> > would scale.
> >
> > Thanks
> > John-Michael Fischer

From xy2102 at columbia.edu Fri Mar 12 09:33:58 2010
From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN)
Date: Fri, 12 Mar 2010 10:33:58 -0500
Subject: [petsc-users] A global vector reading from the binary file.
In-Reply-To: 
References: <20100310114323.9261vm3eo0gsc48o@cubmail.cc.columbia.edu>
Message-ID: <20100312103358.gzy9wuo1fogw080s@cubmail.cc.columbia.edu>

Dear Barry,

I was trying to read a vector from a binary file and have it be global
on each processor, but with the following routine:

ierr = DACreate2d(PETSC_COMM_WORLD,DA_NONPERIODIC,DA_STENCIL_BOX,
(info1.mx-1-parameters->abandonNumber),
(info1.my-1-parameters->abandonNumber), PETSC_DECIDE, PETSC_DECIDE, 4,
1, PETSC_NULL, PETSC_NULL, &da2_4);CHKERRQ(ierr);
ierr = DACreateGlobalVector(da2_4,&FIELD);CHKERRQ(ierr);
sprintf(fileName,"solution");
ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName,FILE_MODE_READ,&viewer);CHKERRQ(ierr);
ierr = VecLoadIntoVector(viewer,FIELD);CHKERRQ(ierr);
ierr = DAVecGetArray(da2_4,FIELD,&field);CHKERRQ(ierr);

I expect field[][] on each processor to be the same at each grid point,
i.e., for np=2,

at the first processor, field[][] =

 0  1  2  3  4  5
 6  7  8  9 10 11
12 13 14 15 16 17
18 19 20 21 22 23
24 25 26 27 28 29

and at the second processor, field[][] =

 0  1  2  3  4  5
 6  7  8  9 10 11
12 13 14 15 16 17
18 19 20 21 22 23
24 25 26 27 28 29

However, this routine gives me at the first processor, field[][] =
(xs=0,x=3,ys=0,y=5)

 0  1  2  3  4  5
 3  4  5  6  7  8
 6  7  8  9 10 11
 9 10 11 12 13 14
12 13 14  X  X  X

and at the second processor, field[][] =
(xs=3,x=3,ys=0,y=5)

 X  X  X 15 16 17
15 16 17 18 19 20
18 19 20 21 22 23
21 22 23 24 25 26
24 25 26 27 28 29

I know in each
subdomain, the values are right, but how could I have this array be
exactly the same as a 'global' one?

Thanks very much!

Rebecca

From bsmith at mcs.anl.gov Fri Mar 12 10:55:42 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Fri, 12 Mar 2010 10:55:42 -0600
Subject: [petsc-users] A global vector reading from the binary file.
In-Reply-To: <20100312103358.gzy9wuo1fogw080s@cubmail.cc.columbia.edu>
References: <20100310114323.9261vm3eo0gsc48o@cubmail.cc.columbia.edu>
 <20100312103358.gzy9wuo1fogw080s@cubmail.cc.columbia.edu>
Message-ID: <7882C4E8-57F9-465C-AF58-F3B8C95239CC@mcs.anl.gov>

DACreateGlobalVector() gives a vector that is "spread out" across the
processors. It is never going to contain the "entire vector" on each
process; that is not what it is for. If you want to take a parallel DA
global vector and have each process get the whole vector, see
http://www.mcs.anl.gov/petsc/petsc-2/documentation/faq.html#mpi-vec-to-seq-vec
and note in particular that you first need to use
DAGlobalToNaturalBegin/End().
Barry On Mar 12, 2010, at 9:33 AM, (Rebecca) Xuefei YUAN wrote: > Dear Barry, > > I was trying to read from a binary file and make the vector be > global at each processor, but if the following routine applies, > > ierr = DACreate2d(PETSC_COMM_WORLD,DA_NONPERIODIC,DA_STENCIL_BOX, > (info1.mx-1-parameters->abandonNumber), (info1.my-1-parameters- > >abandonNumber), PETSC_DECIDE, PETSC_DECIDE, 4, 1, PETSC_NULL, > PETSC_NULL, &da2_4);CHKERRQ(ierr); > ierr = DACreateGlobalVector(da2_4,&FIELD);CHKERRQ(ierr); > sprintf(fileName,"solution"); > ierr = > PetscViewerBinaryOpen > (PETSC_COMM_WORLD,fileName,FILE_MODE_READ,&viewer);CHKERRQ(ierr); > ierr = VecLoadIntoVector(viewer,FIELD);CHKERRQ(ierr); > ierr = DAVecGetArray(da2_4,FIELD, &field);CHKERRQ(ierr); > > I expect to let field[][] in each processor are the same at each > grid point, i.e., for np=2, > > at the first processor, field[][] = > > 0 1 2 3 4 5 > 6 7 8 9 10 11 > 12 13 14 15 16 17 > 18 19 20 21 22 23 > 24 25 26 27 28 29 > > and at the second processor, field[][] = > > 0 1 2 3 4 5 > 6 7 8 9 10 11 > 12 13 14 15 16 17 > 18 19 20 21 22 23 > 24 25 26 27 28 29 > > > However, this routine gives me at the first processor, field[][] = > (xs=0,x=3,ys=0,y=5) > > 0 1 2 3 4 5 > 3 4 5 6 7 8 > 6 7 8 9 10 11 > 9 10 11 12 13 14 > 12 13 14 X X X > > and at the second processor, field[][] = > (xs=3,x=3,ys=0,y=5) > > X X X 15 16 17 > 15 16 17 18 19 20 > 18 19 20 21 22 23 > 21 22 23 24 25 26 > 24 25 26 27 28 29 > > I know in each subdomain, the values are right, but how could I have > this array be exact the same as a 'global' one? > > Thanks very much! > > Rebecca > From xy2102 at columbia.edu Fri Mar 12 11:04:59 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Fri, 12 Mar 2010 12:04:59 -0500 Subject: [petsc-users] A global vector reading from the binary file. 
In-Reply-To: <7882C4E8-57F9-465C-AF58-F3B8C95239CC@mcs.anl.gov> References: <20100310114323.9261vm3eo0gsc48o@cubmail.cc.columbia.edu> <20100312103358.gzy9wuo1fogw080s@cubmail.cc.columbia.edu> <7882C4E8-57F9-465C-AF58-F3B8C95239CC@mcs.anl.gov> Message-ID: <20100312120459.3r5pr2f20wwsgc8g@cubmail.cc.columbia.edu> Dear Barry, Thanks so much for your kind reply. I will look into this. Cheers, Rebecca Quoting Barry Smith : > > DACreateGlobalVector() gives a vector that is "spread-out" across > the processors. It is never going to contain the "entire vector" on > each process. That is not what it is for. If you want to take parallel > DA global vector and get each process to have the whole vector see > http://www.mcs.anl.gov/petsc/petsc-2/documentation/faq.html#mpi-vec-to-seq-vec > note in particular you first need to use the > DAGlobalToNaturalBegin/End(). > > Barry > > On Mar 12, 2010, at 9:33 AM, (Rebecca) Xuefei YUAN wrote: > >> Dear Barry, >> >> I was trying to read from a binary file and make the vector be >> global at each processor, but if the following routine applies, >> >> ierr = DACreate2d(PETSC_COMM_WORLD,DA_NONPERIODIC,DA_STENCIL_BOX, >> (info1.mx-1-parameters->abandonNumber), >> (info1.my-1-parameters->abandonNumber), PETSC_DECIDE, PETSC_DECIDE, >> 4, 1, PETSC_NULL, PETSC_NULL, &da2_4);CHKERRQ(ierr); >> ierr = DACreateGlobalVector(da2_4,&FIELD);CHKERRQ(ierr); >> sprintf(fileName,"solution"); >> ierr = >> PetscViewerBinaryOpen(PETSC_COMM_WORLD,fileName,FILE_MODE_READ,&viewer);CHKERRQ(ierr); >> ierr = VecLoadIntoVector(viewer,FIELD);CHKERRQ(ierr); >> ierr = DAVecGetArray(da2_4,FIELD, &field);CHKERRQ(ierr); >> >> I expect to let field[][] in each processor are the same at each >> grid point, i.e., for np=2, >> >> at the first processor, field[][] = >> >> 0 1 2 3 4 5 >> 6 7 8 9 10 11 >> 12 13 14 15 16 17 >> 18 19 20 21 22 23 >> 24 25 26 27 28 29 >> >> and at the second processor, field[][] = >> >> 0 1 2 3 4 5 >> 6 7 8 9 10 11 >> 12 13 14 15 16 17 >> 18 19 20 
21 22 23
>> 24 25 26 27 28 29
>>
>>
>> However, this routine gives me at the first processor, field[][] =
>> (xs=0,x=3,ys=0,y=5)
>>
>> 0 1 2 3 4 5
>> 3 4 5 6 7 8
>> 6 7 8 9 10 11
>> 9 10 11 12 13 14
>> 12 13 14 X X X
>>
>> and at the second processor, field[][] =
>> (xs=3,x=3,ys=0,y=5)
>>
>> X X X 15 16 17
>> 15 16 17 18 19 20
>> 18 19 20 21 22 23
>> 21 22 23 24 25 26
>> 24 25 26 27 28 29
>>
>> I know in each subdomain, the values are right, but how could I
>> have this array be exact the same as a 'global' one?
>>
>> Thanks very much!
>>
>> Rebecca

-- 
(Rebecca) Xuefei YUAN
Department of Applied Physics and Applied Mathematics
Columbia University
Tel: 917-399-8032
www.columbia.edu/~xy2102

From dave.mayhem23 at gmail.com Fri Mar 12 11:49:55 2010
From: dave.mayhem23 at gmail.com (Dave May)
Date: Fri, 12 Mar 2010 18:49:55 +0100
Subject: [petsc-users] VecLoadIntoVector
Message-ID: <956373f1003120949i26bab36dm7602a012ae2b95e5@mail.gmail.com>

Hello,

When I use VecLoadIntoVector(), I noticed that the contents of the
*.info file are inserted into the options list. In practice, I only
ever see the variable -vecload_block_size added to the .info file if
the vector has a block size which is not one.

If I use VecLoadIntoVector() to load two vectors when one has a block
size set (and thus -vecload_block_size appears in the info file) and
the other doesn't (-vecload_block_size is not present in the info
file), an error will occur during VecLoad_Binary().

I got around this problem by calling

ierr = PetscOptionsClearValue("-vecload_block_size"); CHKERRQ(ierr);

after my call to VecLoadIntoVector(), but I'm not super happy with this
as I am potentially clobbering a command-line option.

Is there a cleaner way to deal with this problem?

Maybe .info should always contain -vecload_block_size (even if it is 1)
so that different vectors with different block sizes can be loaded.
Cheers,
Dave

From bsmith at mcs.anl.gov Fri Mar 12 12:48:19 2010
From: bsmith at mcs.anl.gov (Barry Smith)
Date: Fri, 12 Mar 2010 12:48:19 -0600
Subject: [petsc-users] VecLoadIntoVector
In-Reply-To: <956373f1003120949i26bab36dm7602a012ae2b95e5@mail.gmail.com>
References: <956373f1003120949i26bab36dm7602a012ae2b95e5@mail.gmail.com>
Message-ID: <86EBA889-AA24-4852-B9B0-0F5B1AD35C44@mcs.anl.gov>

You can use -viewer_binary_skip_info and/or
-viewer_binary_skip_options and/or PetscViewerBinarySkipInfo() and/or
PetscViewerBinarySetSkipOptions().

Barry

On Mar 12, 2010, at 11:49 AM, Dave May wrote:

> Hello,
> When I use VecLoadIntoVector(), I noticed that the contents of the
> *.info file is inserted into the options list.
> In practice, I only ever see the variable -vecload_block_size added to
> the .info file if the vector has a block size which is not one.
>
> If I use VecLoadIntoVector() to load two vectors when one has a block
> size set (and thus -vecload_block_size appears in the info file)
> and the other doesn't (-vecload_block_size is not present in the info
> file), an error will occur during VecLoad_Binary().
>
> I got around this problem by calling
> ierr = PetscOptionsClearValue("-vecload_block_size"); CHKERRQ(ierr);
> after my call to VecLoadIntoVector(), but I'm not super happy with
> this as I am potentially clobbering a command line option.
>
> Is there a cleaner way to deal with this problem?
>
> Maybe .info should always contain -vecload_block_size (even it is 1)
> so that different vectors with different block sizes can be loaded.
> > Cheers, > Dave From jed at 59A2.org Sat Mar 13 08:31:58 2010 From: jed at 59A2.org (Jed Brown) Date: Sat, 13 Mar 2010 15:31:58 +0100 Subject: [petsc-users] VecLoadIntoVector In-Reply-To: <86EBA889-AA24-4852-B9B0-0F5B1AD35C44@mcs.anl.gov> References: <956373f1003120949i26bab36dm7602a012ae2b95e5@mail.gmail.com> <86EBA889-AA24-4852-B9B0-0F5B1AD35C44@mcs.anl.gov> Message-ID: <87zl2c1ee9.fsf@59A2.org> On Fri, 12 Mar 2010 12:48:19 -0600, Barry Smith wrote: > > You can use -viewer_binary_skip_info and/or - > viewer_binary_skip_options and/or PetscViewerBinarySkipInfo() and/or > PetscViewerBinarySetSkipOptions(). This just lets one circumvent *.info, but the application would be obliged to learn about block sizes through some other channel. The problem here is that PetscOptionsInsertFile dumps those options (unprefixed) into the global namespace, and that is (unless I'm mistaken) the only way for the block size to be delivered to the vector. I think this is actually difficult to resolve in a completely satisfactory way, but perhaps I'm missing something. Jed From torres.pedrozpk at gmail.com Sat Mar 13 10:56:11 2010 From: torres.pedrozpk at gmail.com (Pedro Torres) Date: Sat, 13 Mar 2010 13:56:11 -0300 Subject: [petsc-users] Understanding "indices" in ISLocalToGlobalMappingGetInfo Message-ID: Hi, I'm trying to use ISLocalToGlobalMappingGetInfo, so I made a few tests with it. 
I run my test-program with two process and I create a Mapping with the following vertices list: Proc 1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 90 91 92 93 94 95 96 100 101 102 103 104 110 111 112 113 121 Proc 0 77 79 87 88 89 95 96 97 98 99 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 I call ISLocalToGlobalMappingGetInfo and I get the following information: *Process 0:* nproc = 2 procs [0 1] numprocs = [13 13] indices = [0 1 2 3 5 6 10 11 17 18 19 20 28; 0 1 2 3 5 6 10 11 17 18 19 20 28] *Process 1:* nproc = 2 procs [1 0] numprocs = [13 13] indices = [77 79 87 88 94 95 99 100 101 102 103 104 105; 77 79 87 88 94 95 99 100 101 102 103 104 105] It seems that the indices in process 1 are correct, but not in the process 0. Are these results correct? Thanks a lot!. Pedro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jed at 59A2.org Sat Mar 13 11:11:58 2010 From: jed at 59A2.org (Jed Brown) Date: Sat, 13 Mar 2010 18:11:58 +0100 Subject: [petsc-users] Understanding "indices" in ISLocalToGlobalMappingGetInfo In-Reply-To: References: Message-ID: <87tysk16zl.fsf@59A2.org> On Sat, 13 Mar 2010 13:56:11 -0300, Pedro Torres wrote: > I call ISLocalToGlobalMappingGetInfo and I get the following information: > > *Process 0:* > > nproc = 2 procs [0 1] numprocs = [13 13] > indices = [0 1 2 3 5 6 10 11 17 18 19 20 28; > 0 1 2 3 5 6 10 11 17 18 19 20 28] > > *Process 1:* > nproc = 2 procs [1 0] numprocs = [13 13] > indices = [77 79 87 88 94 95 99 100 101 102 103 104 105; > 77 79 87 88 94 95 99 100 101 102 103 104 105] > It seems that the indices in process 1 are correct, but not in the process > 0. Are these results correct? The indices are with respect to the local numbering. So rank 0's last shared entry is (global index) 121 which has local index 28. Maybe it would be clearer if the docs read local indices of nodes shared with neighbor (sorted by global numbering) instead of the present indices of local nodes shared with neighbor (sorted by global numbering) Can you think of a better way to word it? 
Jed From torres.pedrozpk at gmail.com Sat Mar 13 11:38:56 2010 From: torres.pedrozpk at gmail.com (Pedro Torres) Date: Sat, 13 Mar 2010 14:38:56 -0300 Subject: [petsc-users] Understanding "indices" in ISLocalToGlobalMappingGetInfo In-Reply-To: <87tysk16zl.fsf@59A2.org> References: <87tysk16zl.fsf@59A2.org> Message-ID: 2010/3/13 Jed Brown > On Sat, 13 Mar 2010 13:56:11 -0300, Pedro Torres < > torres.pedrozpk at gmail.com> wrote: > > I call ISLocalToGlobalMappingGetInfo and I get the following information: > > > > *Process 0:* > > > > nproc = 2 procs [0 1] numprocs = [13 13] > > indices = [0 1 2 3 5 6 10 11 17 18 19 20 28; > > 0 1 2 3 5 6 10 11 17 18 19 20 28] > > > > *Process 1:* > > nproc = 2 procs [1 0] numprocs = [13 13] > > indices = [77 79 87 88 94 95 99 100 101 102 103 104 105; > > 77 79 87 88 94 95 99 100 101 102 103 104 105] > > It seems that the indices in process 1 are correct, but not in the > process > > 0. Are these results correct? > > The indices are with respect to the local numbering. So rank 0's last > shared entry is (global index) 121 which has local index 28. Great, now it's totally clear. > Maybe it > would be clearer if the docs read > > local indices of nodes shared with neighbor (sorted by global numbering) > > instead of the present > > indices of local nodes shared with neighbor (sorted by global numbering) > > Can you think of a better way to word it? > I'm not a native English speaker, but another choice (not necessarily better) could be * *indices of nodes (in local numbering) shared with neighbor (sorted by global numbering) Thanks a lot! Regards Pedro -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jed at 59a2.org Sat Mar 13 11:48:19 2010 From: jed at 59a2.org (Jed Brown) Date: Sat, 13 Mar 2010 18:48:19 +0100 Subject: [petsc-users] Understanding "indices" in ISLocalToGlobalMappingGetInfo In-Reply-To: References: <87tysk16zl.fsf@59A2.org> Message-ID: <87r5no15b0.fsf@59A2.org> On Sat, 13 Mar 2010 14:38:56 -0300, Pedro Torres wrote: > I'm not a native English speaker, but another choice (not > necessarily better) > could be > * *indices of nodes (in local numbering) shared with neighbor (sorted by > global numbering) Thanks, I've updated the man page in petsc-dev. Jed From bsmith at mcs.anl.gov Sat Mar 13 21:09:29 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sat, 13 Mar 2010 21:09:29 -0600 Subject: [petsc-users] VecLoadIntoVector In-Reply-To: <87zl2c1ee9.fsf@59A2.org> References: <956373f1003120949i26bab36dm7602a012ae2b95e5@mail.gmail.com> <86EBA889-AA24-4852-B9B0-0F5B1AD35C44@mcs.anl.gov> <87zl2c1ee9.fsf@59A2.org> Message-ID: On Mar 13, 2010, at 8:31 AM, Jed Brown wrote: > On Fri, 12 Mar 2010 12:48:19 -0600, Barry Smith > wrote: >> >> You can use -viewer_binary_skip_info and/or - >> viewer_binary_skip_options and/or PetscViewerBinarySkipInfo() and/or >> PetscViewerBinarySetSkipOptions(). > > This just lets one circumvent *.info, but the application would be > obliged to learn about block sizes through some other channel. The > problem here is that PetscOptionsInsertFile dumps those options > (unprefixed) into the global namespace, and that is (unless I'm > mistaken) the only way for the block size to be delivered to the > vector. > I think this is actually difficult to resolve in a completely > satisfactory way, but perhaps I'm missing something. You are not missing something. You are right there is no "completely satisfactory way" currently. Part of this is because I hesitate to add any additional data to the binary file for fear it will break its 15 years of portability. 
Yes, it is ironic that I never care about PETSc source code portability over time, but one can literally load today any vector or matrix that was saved 15 years ago. (And sadly we do, some of our test matrices are that old :-). We might want to think of an extensible way to add this additional information. Barry > > Jed From jed at 59A2.org Sun Mar 14 09:48:27 2010 From: jed at 59A2.org (Jed Brown) Date: Sun, 14 Mar 2010 15:48:27 +0100 Subject: [petsc-users] VecLoadIntoVector In-Reply-To: References: <956373f1003120949i26bab36dm7602a012ae2b95e5@mail.gmail.com> <86EBA889-AA24-4852-B9B0-0F5B1AD35C44@mcs.anl.gov> <87zl2c1ee9.fsf@59A2.org> Message-ID: <87d3z70xj8.fsf@59A2.org> On Sat, 13 Mar 2010 21:09:29 -0600, Barry Smith wrote: > Part of this is because I hesitate to add any additional data to > the binary file for fear it will break its 15 years of portability. > Yes, it is ironic that I never care about PETSc source code > portability over time, but one can literally load today any vector or > matrix that was saved 15 years ago. Agreed, this type of compatibility is way more important than API compatibility. > We might want to think of an extensible way to add this additional > information. As a long-term solution, I don't think extra metadata should be placed into the binary files. I think we actually desire more semantic information than can practically be placed in these files, because a lot of this information is actually relational. So I think the right thing is to point the viewer at the file with all the relational and semantic information, and have the XLoadIntoX set itself up according to this metadata and then read the binary file. This proposal is for a completely new viewer implementation, although it may still use low-level functions from current viewers. (We had a brief discussion about this on -dev recently.) 
Jed From bsmith at mcs.anl.gov Sun Mar 14 15:21:41 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Sun, 14 Mar 2010 15:21:41 -0500 Subject: [petsc-users] VecLoadIntoVector In-Reply-To: <87d3z70xj8.fsf@59A2.org> References: <956373f1003120949i26bab36dm7602a012ae2b95e5@mail.gmail.com> <86EBA889-AA24-4852-B9B0-0F5B1AD35C44@mcs.anl.gov> <87zl2c1ee9.fsf@59A2.org> <87d3z70xj8.fsf@59A2.org> Message-ID: <5CB8425E-3850-4A03-849A-C08B73382F86@mcs.anl.gov> On Mar 14, 2010, at 9:48 AM, Jed Brown wrote: > On Sat, 13 Mar 2010 21:09:29 -0600, Barry Smith > wrote: >> Part of this is because I hesitate to add any additional data to >> the binary file for fear it will break its 15 years of portability. >> Yes, it is ironic that I never care about PETSc source code >> portability over time, but one can literally load today any vector or >> matrix that was saved 15 years ago. > > Agreed, this type of compatibility is way more important than API > compatibility. > >> We might want to think of an extensible way to add this additional >> information. > > As a long-term solution, I don't think extra metadata should be placed > into the binary files. I think we actually desire more semantic > information than can practically be placed in these files, because a > lot > of this information is actually relational. So I think the right > thing > is to point the viewer at the file with all the relational and > semantic > information, and have the XLoadIntoX set itself up according to this > metadata and then read the binary file. This proposal is for a > completely new viewer implementation, although it may still use > low-level functions from current viewers. (We had a brief discussion > about this on -dev recently.) I've actually been planning this for a few months. 
As soon as we get the release out the door I'm going to have Shri completely rip-out the current VecLoad, VecLoadIntoVector, MatLoad (and the non-existent MatLoadIntoMatrix) and replace them with only a VecLoad() and MatLoad() that use as much information about the passed in Vec and Matrix and then "complete" the information. For example, if you want to load a Vec with a predetermined parallel layout, then you create a VECMPI, set its local sizes and then call VecLoad() (so it essentially behaves like VecLoadIntoVector(); if you want PETSc to determine the parallel layout then you do not set the local sizes before calling the VecLoad(). Barry > > Jed From burckhardt at itis.ethz.ch Mon Mar 15 05:58:36 2010 From: burckhardt at itis.ethz.ch (burckhardt at itis.ethz.ch) Date: Mon, 15 Mar 2010 11:58:36 +0100 Subject: [petsc-users] -log_summary Message-ID: <20100315115836.191636l6sr6blf4s@email.ee.ethz.ch> Dear all, I have problems with the display of -log_summary on the Cray XT5. Only the lines Event Count Time (sec) Flops/sec --- Global --- --- Stage --- Total Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s ------------------------------------------------------------------------------------------------------------------------ --- Event Stage 0: Main Stage are printed but nothing follows. Do you have an idea why? Best regards, Kathrin From bsmith at mcs.anl.gov Mon Mar 15 13:01:04 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 15 Mar 2010 13:01:04 -0500 Subject: [petsc-users] -log_summary In-Reply-To: <20100315115836.191636l6sr6blf4s@email.ee.ethz.ch> References: <20100315115836.191636l6sr6blf4s@email.ee.ethz.ch> Message-ID: <301FE28F-3694-4920-ABE5-F5294E7D6E24@mcs.anl.gov> The Cray software may not be properly flushing all the standard out data before the process ends. Try using -log_summary afilename and see if the file afilename is complete with the log information. 
Barry On Mar 15, 2010, at 5:58 AM, burckhardt at itis.ethz.ch wrote: > Dear all, > > > I have problems with the display of -log_summary on the Cray XT5. > Only the lines > > > Event Count Time (sec) Flops/ > sec --- Global --- --- Stage --- Total > Max Ratio Max Ratio Max Ratio Mess Avg > len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s > ------------------------------------------------------------------------------------------------------------------------ > > --- Event Stage 0: Main Stage > > > are printed but nothing follows. Do you have an idea why? > > Best regards, > Kathrin > > > From burckhardt at itis.ethz.ch Tue Mar 16 05:51:42 2010 From: burckhardt at itis.ethz.ch (burckhardt at itis.ethz.ch) Date: Tue, 16 Mar 2010 11:51:42 +0100 Subject: [petsc-users] -log_summary In-Reply-To: <301FE28F-3694-4920-ABE5-F5294E7D6E24@mcs.anl.gov> References: <20100315115836.191636l6sr6blf4s@email.ee.ethz.ch> <301FE28F-3694-4920-ABE5-F5294E7D6E24@mcs.anl.gov> Message-ID: <20100316115142.520426dnao6xwvy6@email.ee.ethz.ch> Dear Barry, the Cray software seems to be innocent, as -log_summary afilename didn't have an effect. However, adding a PetscLogBegin() after PetscInitialize(...) solved the problem. Thank you anyway! Kathrin Quoting "Barry Smith" : > > The Cray software may not be properly flushing all the standard > out data before the process ends. Try using -log_summary afilename > and see if the file afilename is complete with the log information. > > Barry > > On Mar 15, 2010, at 5:58 AM, burckhardt at itis.ethz.ch wrote: > >> Dear all, >> >> >> I have problems with the display of -log_summary on the Cray XT5. 
>> Only the lines >> >> >> Event Count Time (sec) Flops/sec >> --- Global --- --- Stage --- Total >> Max Ratio Max Ratio Max Ratio Mess Avg >> len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s >> ------------------------------------------------------------------------------------------------------------------------ >> >> --- Event Stage 0: Main Stage >> >> >> are printed but nothing follows. Do you have an idea why? >> >> Best regards, >> Kathrin >> >> >> > > From bsmith at mcs.anl.gov Tue Mar 16 08:32:56 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 16 Mar 2010 08:32:56 -0500 Subject: [petsc-users] -log_summary In-Reply-To: <20100316115142.520426dnao6xwvy6@email.ee.ethz.ch> References: <20100315115836.191636l6sr6blf4s@email.ee.ethz.ch> <301FE28F-3694-4920-ABE5-F5294E7D6E24@mcs.anl.gov> <20100316115142.520426dnao6xwvy6@email.ee.ethz.ch> Message-ID: <048A3EC9-782F-4D83-9608-2E6165BF4D05@mcs.anl.gov> Hmm, you shouldn't need to use a PetscLogBegin() after PetscInitialize(). When -log_summary is used PetscLogBegin() is automatically called by PetscInitialize(). I cannot explain the behavior you are seeing. Barry On Mar 16, 2010, at 5:51 AM, burckhardt at itis.ethz.ch wrote: > Dear Barry, > > the Cray software seems to be innocent as -log_summary afilename > did'nt have an effect. However, the introduction of a > PetscLogBegin() after PetscInitialize(...) solved the problem. > > Thank you anyway! > Kathrin > > > Quoting "Barry Smith" : > >> >> The Cray software may not be properly flushing all the standard >> out data before the process ends. Try using -log_summary afilename >> and see if the file afilename is complete with the log information. >> >> Barry >> >> On Mar 15, 2010, at 5:58 AM, burckhardt at itis.ethz.ch wrote: >> >>> Dear all, >>> >>> >>> I have problems with the display of -log_summary on the Cray XT5. 
>>> Only the lines >>> >>> >>> Event Count Time (sec) Flops/ >>> sec --- Global --- --- Stage --- Total >>> Max Ratio Max Ratio Max Ratio Mess Avg >>> len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s >>> ------------------------------------------------------------------------------------------------------------------------ >>> >>> --- Event Stage 0: Main Stage >>> >>> >>> are printed but nothing follows. Do you have an idea why? >>> >>> Best regards, >>> Kathrin >>> >>> >>> >> >> > > > From burckhardt at itis.ethz.ch Tue Mar 16 09:18:56 2010 From: burckhardt at itis.ethz.ch (burckhardt at itis.ethz.ch) Date: Tue, 16 Mar 2010 15:18:56 +0100 Subject: [petsc-users] -log_summary In-Reply-To: <048A3EC9-782F-4D83-9608-2E6165BF4D05@mcs.anl.gov> References: <20100315115836.191636l6sr6blf4s@email.ee.ethz.ch> <301FE28F-3694-4920-ABE5-F5294E7D6E24@mcs.anl.gov> <20100316115142.520426dnao6xwvy6@email.ee.ethz.ch> <048A3EC9-782F-4D83-9608-2E6165BF4D05@mcs.anl.gov> Message-ID: <20100316151856.68605veod2lfz7f4@email.ee.ethz.ch> Hi Barry, > Hmm, you shouldn't need to use a PetscLogBegin() after > PetscInitialize(). When -log_summary is used PetscLogBegin() is > automatically called by PetscInitialize(). I cannot explain the > behavior you are seeing. Is the fact that the PETSc options are read from a file after PetscInitialize() a possible explanation? From jed at 59A2.org Tue Mar 16 09:27:25 2010 From: jed at 59A2.org (Jed Brown) Date: Tue, 16 Mar 2010 15:27:25 +0100 Subject: [petsc-users] -log_summary In-Reply-To: <048A3EC9-782F-4D83-9608-2E6165BF4D05@mcs.anl.gov> References: <20100315115836.191636l6sr6blf4s@email.ee.ethz.ch> <301FE28F-3694-4920-ABE5-F5294E7D6E24@mcs.anl.gov> <20100316115142.520426dnao6xwvy6@email.ee.ethz.ch> <048A3EC9-782F-4D83-9608-2E6165BF4D05@mcs.anl.gov> Message-ID: <878w9sxrxu.fsf@59A2.org> On Tue, 16 Mar 2010 08:32:56 -0500, Barry Smith wrote: > > Hmm, you shouldn't need to use a PetscLogBegin() after > PetscInitialize(). 
When -log_summary is used PetscLogBegin() is > automatically called by PetscInitialize(). I cannot explain the > behavior you are seeing. Just a guess, but perhaps -log_summary was not a command-line parameter and was instead set with e.g. PetscOptionsInsertFile? Jed From balay at mcs.anl.gov Tue Mar 16 10:41:57 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Tue, 16 Mar 2010 10:41:57 -0500 (CDT) Subject: [petsc-users] -log_summary In-Reply-To: <20100316151856.68605veod2lfz7f4@email.ee.ethz.ch> References: <20100315115836.191636l6sr6blf4s@email.ee.ethz.ch> <301FE28F-3694-4920-ABE5-F5294E7D6E24@mcs.anl.gov> <20100316115142.520426dnao6xwvy6@email.ee.ethz.ch> <048A3EC9-782F-4D83-9608-2E6165BF4D05@mcs.anl.gov> <20100316151856.68605veod2lfz7f4@email.ee.ethz.ch> Message-ID: On Tue, 16 Mar 2010, burckhardt at itis.ethz.ch wrote: > Hi Barry, > > > Hmm, you shouldn't need to use a PetscLogBegin() after PetscInitialize(). > > When -log_summary is used PetscLogBegin() is automatically called by > > PetscInitialize(). I cannot explain the behavior you are seeing. > > Is the fact that the PETSc options are read from a file after PetscInitialize() > a possible explanation? Yes, that's the reason. -log_summary is processed by PetscInitialize(), so if options are read in after PetscInitialize(), you need to call PetscLogBegin() explicitly. 
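In code, the ordering described here looks roughly like the following (a sketch only, against the PETSc C API of that era, as a fragment inside main(); "my.opts" is a hypothetical options file, error checking is omitted, and the exact PetscOptionsInsertFile signature should be checked against your PETSc version):

```c
/* Sketch: enabling logging when -log_summary lives in an options file
 * that is only read after PetscInitialize() has already run. */
PetscInitialize(&argc, &argv, PETSC_NULL, help);  /* -log_summary not seen yet  */
PetscOptionsInsertFile(PETSC_COMM_WORLD,
                       "my.opts", PETSC_TRUE);    /* option arrives too late... */
PetscLogBegin();                                  /* ...so start logging by hand */
/* ... set up and solve ... */
PetscFinalize();  /* -log_summary is in the options database by now,
                     so the summary is printed here as usual */
```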
Satish From bsmith at mcs.anl.gov Tue Mar 16 10:44:30 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Tue, 16 Mar 2010 10:44:30 -0500 Subject: [petsc-users] -log_summary In-Reply-To: <20100316151856.68605veod2lfz7f4@email.ee.ethz.ch> References: <20100315115836.191636l6sr6blf4s@email.ee.ethz.ch> <301FE28F-3694-4920-ABE5-F5294E7D6E24@mcs.anl.gov> <20100316115142.520426dnao6xwvy6@email.ee.ethz.ch> <048A3EC9-782F-4D83-9608-2E6165BF4D05@mcs.anl.gov> <20100316151856.68605veod2lfz7f4@email.ee.ethz.ch> Message-ID: On Mar 16, 2010, at 9:18 AM, burckhardt at itis.ethz.ch wrote: > Hi Barry, > >> Hmm, you shouldn't need to use a PetscLogBegin() after >> PetscInitialize(). When -log_summary is used PetscLogBegin() is >> automatically called by PetscInitialize(). I cannot explain the >> behavior you are seeing. > > I the fact that the Petsc options are read from a file after > PetscInitialize() a possible explanation? > Yes. And your fix of calling PetscLogBegin() after PetscInitialize() is the correct response. Barry > From xy2102 at columbia.edu Thu Mar 18 10:53:27 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Thu, 18 Mar 2010 11:53:27 -0400 Subject: [petsc-users] difference between solns for np=1 and np>1. Message-ID: <20100318115327.1pyjxg2dws440o0k@cubmail.cc.columbia.edu> Dear all, I have a piece of code working right for np=1, but when I use np>1(for example, np=2), the solution is different from the one with np=1. I checked the residual function(f_np1_itsi vs f_np2_itsi) and Jacobian matrix(J_np1_itsi,J_np2_itsi), it turns out that for different initial guesses, f_np1_its0 = f_np2_its0; J_np1_its0 = J_np2_its0; although two Jacobians are the same, they have different nonzero structures. After 1 nonlinear iterations, the solution is not the same. I do not understand that if I have F(u^{m}) and J(u^{m}) the same, how could I get different \delta u^{m} in np=1 and np=2 situations? Thanks a lot! 
Rebecca -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From jed at 59A2.org Thu Mar 18 11:17:29 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 18 Mar 2010 17:17:29 +0100 Subject: [petsc-users] difference between solns for np=1 and np>1. In-Reply-To: <20100318115327.1pyjxg2dws440o0k@cubmail.cc.columbia.edu> References: <20100318115327.1pyjxg2dws440o0k@cubmail.cc.columbia.edu> Message-ID: <87zl25vc2u.fsf@59A2.org> On Thu, 18 Mar 2010 11:53:27 -0400, "(Rebecca) Xuefei YUAN" wrote: > Dear all, > > I have a piece of code working right for np=1, but when I use np>1(for > example, np=2), the solution is different from the one with np=1. > > I checked the residual function(f_np1_itsi vs f_np2_itsi) and Jacobian > matrix(J_np1_itsi,J_np2_itsi), it turns out that for different initial > guesses, > > f_np1_its0 = f_np2_its0; > J_np1_its0 = J_np2_its0; > > although two Jacobians are the same, they have different nonzero structures. The format is different. > After 1 nonlinear iterations, the solution is not the same. > > I do not understand that if I have F(u^{m}) and J(u^{m}) the same, how > could I get different \delta u^{m} in np=1 and np=2 situations? How much different? Is it much different with -pc_type jacobi or -pc_type none? http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#differentiterations Jed From xy2102 at columbia.edu Thu Mar 18 12:50:32 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Thu, 18 Mar 2010 13:50:32 -0400 Subject: [petsc-users] difference between solns for np=1 and np>1. 
In-Reply-To: <20100318122230.fhs0r1ndtwo0ooc8@cubmail.cc.columbia.edu> References: <20100318115327.1pyjxg2dws440o0k@cubmail.cc.columbia.edu> <87zl25vc2u.fsf@59A2.org> <20100318122230.fhs0r1ndtwo0ooc8@cubmail.cc.columbia.edu> Message-ID: <20100318135032.kktx6nz480g8kc4c@cubmail.cc.columbia.edu> Dear Jed, I excluded the bug from CreateNullSpace(), but still have different convergence history for np=1 and np=2, both with -pc_type none, and -pc_type jacobi. The convergence history for np=1, -pc_type none is 0 SNES Function norm 3.277654936380e+02 Linear solve converged due to CONVERGED_RTOL iterations 1 1 SNES Function norm 1.010694930474e+01 Linear solve converged due to CONVERGED_RTOL iterations 9 2 SNES Function norm 1.456202001578e+00 Linear solve converged due to CONVERGED_RTOL iterations 23 3 SNES Function norm 6.670544108392e-02 Linear solve converged due to CONVERGED_RTOL iterations 28 4 SNES Function norm 1.924506428876e-04 Linear solve did not converge due to DIVERGED_ITS iterations 10000 5 SNES Function norm 3.554534723246e-05 Linear solve did not converge due to DIVERGED_ITS iterations 10000 6 SNES Function norm 3.554534511905e-05 Linear solve did not converge due to DIVERGED_ITS iterations 10000 7 SNES Function norm 3.554534511895e-05 Nonlinear solve converged due to CONVERGED_PNORM_RELATIVE And for np=2, -pc_type none is 0 SNES Function norm 3.277654936380e+02 Linear solve converged due to CONVERGED_RTOL iterations 2 1 SNES Function norm 4.297188468856e+01 Linear solve converged due to CONVERGED_RTOL iterations 11 2 SNES Function norm 3.808424726613e+01 Linear solve converged due to CONVERGED_RTOL iterations 1 3 SNES Function norm 1.970792409144e+01 Linear solve converged due to CONVERGED_RTOL iterations 1 4 SNES Function norm 1.229134159860e+01 Linear solve converged due to CONVERGED_RTOL iterations 4 5 SNES Function norm 9.585300715665e+00 Linear solve converged due to CONVERGED_RTOL iterations 4 6 SNES Function norm 9.289161375266e+00 Linear solve 
converged due to CONVERGED_RTOL iterations 3 7 SNES Function norm 9.289161375266e+00 Linear solve converged due to CONVERGED_RTOL iterations 3 8 SNES Function norm 9.289161375266e+00 Linear solve converged due to CONVERGED_RTOL iterations 3 9 SNES Function norm 9.289161375267e+00 Linear solve converged due to CONVERGED_RTOL iterations 3 10 SNES Function norm 9.289161375267e+00 Linear solve converged due to CONVERGED_RTOL iterations 3 11 SNES Function norm 9.289161375267e+00 Linear solve converged due to CONVERGED_RTOL iterations 3 12 SNES Function norm 9.289161375267e+00 Linear solve converged due to CONVERGED_RTOL iterations 3 13 SNES Function norm 9.289161375268e+00 Linear solve converged due to CONVERGED_RTOL iterations 3 14 SNES Function norm 9.289161375268e+00 Linear solve converged due to CONVERGED_RTOL iterations 3 15 SNES Function norm 9.289161375268e+00 Linear solve converged due to CONVERGED_RTOL iterations 3 Nonlinear solve did not converge due to DIVERGED_LS_FAILURE I did take a look at the FAQ, but np=2 gives me quite different (divergent) results. Any other possible bugs? Or what would be a better way to debug this situation? Thanks very much! Rebecca Quoting "(Rebecca) Xuefei YUAN" : > Dear Jed, > > Thanks very much! > > The -pc_type is none. I get a possible bug in CreateNullSpace(), the > null vector is not right for np>1. > > Let me get back more later. > > Thanks! > > Rebecca > > > > Quoting Jed Brown : > >> On Thu, 18 Mar 2010 11:53:27 -0400, "(Rebecca) Xuefei YUAN" >> wrote: >>> Dear all, >>> >>> I have a piece of code working right for np=1, but when I use np>1(for >>> example, np=2), the solution is different from the one with np=1. 
>>> >>> I checked the residual function(f_np1_itsi vs f_np2_itsi) and Jacobian >>> matrix(J_np1_itsi,J_np2_itsi), it turns out that for different initial >>> guesses, >>> >>> f_np1_its0 = f_np2_its0; >>> J_np1_its0 = J_np2_its0; >>> >>> although two Jacobians are the same, they have different nonzero >>> structures. >> >> The format is different. >> >>> After 1 nonlinear iterations, the solution is not the same. >>> >>> I do not understand that if I have F(u^{m}) and J(u^{m}) the same, how >>> could I get different \delta u^{m} in np=1 and np=2 situations? >> >> How much different? Is it much different with -pc_type jacobi or >> -pc_type none? >> >> http://www.mcs.anl.gov/petsc/petsc-as/documentation/faq.html#differentiterations >> >> Jed >> >> > > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From jed at 59A2.org Thu Mar 18 13:03:24 2010 From: jed at 59A2.org (Jed Brown) Date: Thu, 18 Mar 2010 19:03:24 +0100 Subject: [petsc-users] difference between solns for np=1 and np>1. In-Reply-To: <20100318135032.kktx6nz480g8kc4c@cubmail.cc.columbia.edu> References: <20100318115327.1pyjxg2dws440o0k@cubmail.cc.columbia.edu> <87zl25vc2u.fsf@59A2.org> <20100318122230.fhs0r1ndtwo0ooc8@cubmail.cc.columbia.edu> <20100318135032.kktx6nz480g8kc4c@cubmail.cc.columbia.edu> Message-ID: <87r5nhv76b.fsf@59A2.org> On Thu, 18 Mar 2010 13:50:32 -0400, "(Rebecca) Xuefei YUAN" wrote: > Dear Jed, > > I excluded the bug from CreateNullSpace(), but still have different > convergence history for np=1 and np=2, both with -pc_type none, and > -pc_type jacobi. 
> > The convergence history for np=1, -pc_type none is > > > 0 SNES Function norm 3.277654936380e+02 > Linear solve converged due to CONVERGED_RTOL iterations 1 > 1 SNES Function norm 1.010694930474e+01 > Linear solve converged due to CONVERGED_RTOL iterations 9 > 2 SNES Function norm 1.456202001578e+00 > Linear solve converged due to CONVERGED_RTOL iterations 23 > 3 SNES Function norm 6.670544108392e-02 > Linear solve converged due to CONVERGED_RTOL iterations 28 > 4 SNES Function norm 1.924506428876e-04 > Linear solve did not converge due to DIVERGED_ITS iterations 10000 > 5 SNES Function norm 3.554534723246e-05 > Linear solve did not converge due to DIVERGED_ITS iterations 10000 > 6 SNES Function norm 3.554534511905e-05 > Linear solve did not converge due to DIVERGED_ITS iterations 10000 > 7 SNES Function norm 3.554534511895e-05 > Nonlinear solve converged due to CONVERGED_PNORM_RELATIVE It looks like this has stagnated. You said you have checked that the matrices are the same, what did you do to confirm this? How did you check that the null spaces are the same? What do the unpreconditioned residuals look like (e.g. -ksp_type fgmres or -ksp_type lgmres -ksp_right_pc)? If you are working in unpreconditioned residuals, then how are you implementing boundary conditions? Jed From xy2102 at columbia.edu Thu Mar 18 13:18:03 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Thu, 18 Mar 2010 14:18:03 -0400 Subject: [petsc-users] difference between solns for np=1 and np>1. In-Reply-To: <87r5nhv76b.fsf@59A2.org> References: <20100318115327.1pyjxg2dws440o0k@cubmail.cc.columbia.edu> <87zl25vc2u.fsf@59A2.org> <20100318122230.fhs0r1ndtwo0ooc8@cubmail.cc.columbia.edu> <20100318135032.kktx6nz480g8kc4c@cubmail.cc.columbia.edu> <87r5nhv76b.fsf@59A2.org> Message-ID: <20100318141803.9c9h9ipkqoccg8w8@cubmail.cc.columbia.edu> Dear Jed, I reran the code adding one option -snes_mf, then np=2 gives the same result as in np=1. So I think the bug is in the Jacobian matrix. 
Here are steps I used to check on the Jacobian matrix: 1) After final assemble the Jacobian, output the matrix to Matlab files and compare the .m files for both np=1 and np=2: ierr = MatAssemblyBegin(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);//book15page37 ierr = MatAssemblyEnd(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); PetscViewer viewer; char fileName[128]; PetscInt p; ierr = MPI_Comm_size(MPI_COMM_WORLD, &p);CHKERRQ(ierr); PetscInt its; ierr = SNESGetIterationNumber(snes,&its);CHKERRQ(ierr); sprintf(fileName, "twvggt_matrix_tx%i_ty%i_p%i_its%i.m",info1.mx, info1.my,p,its); ierr = PetscViewerASCIIOpen(PETSC_COMM_WORLD,fileName,&viewer);CHKERRQ(ierr); ierr = PetscViewerSetFormat(viewer,PETSC_VIEWER_ASCII_MATLAB);CHKERRQ(ierr); ierr = MatView (jac, viewer); CHKERRQ (ierr); PetscViewerDestroy(viewer); 2) in such a way, after each nonlinear iteration, there is a file for the corresponding Jacobian, I found these J_np1 and J_np2 have different nonzero structures, for example, for a (7X6+1,7X6+1) matrix, standard five-pt scheme with stencil width = 1, (the other 1 after 42 is a scalar in DMComposite) J_np1 has % Size = 43 43 % Nonzeros = 367 zzz = zeros(367,3); and J_np2 has % Size = 43 43 % Nonzeros = 576 zzz = zeros(576,3); I checked the different nonzero structure in J_np1 and J_np2, the different spots in J_np2 are all zeros. 3) The null space in this case is a single vector has first 7X6 elements being a scale, and the last element being zero. I confirmed by comparing v = null(Mat_0) in Matlab with VecView in C, they are the same. Will this be enough to confirm the J_np1, J_np2 and the null vector? Thanks very much! Rebecca Quoting Jed Brown : > On Thu, 18 Mar 2010 13:50:32 -0400, "(Rebecca) Xuefei YUAN" > wrote: >> Dear Jed, >> >> I excluded the bug from CreateNullSpace(), but still have different >> convergence history for np=1 and np=2, both with -pc_type none, and >> -pc_type jacobi. 
>> >> The convergence history for np=1, -pc_type none is >> >> >> 0 SNES Function norm 3.277654936380e+02 >> Linear solve converged due to CONVERGED_RTOL iterations 1 >> 1 SNES Function norm 1.010694930474e+01 >> Linear solve converged due to CONVERGED_RTOL iterations 9 >> 2 SNES Function norm 1.456202001578e+00 >> Linear solve converged due to CONVERGED_RTOL iterations 23 >> 3 SNES Function norm 6.670544108392e-02 >> Linear solve converged due to CONVERGED_RTOL iterations 28 >> 4 SNES Function norm 1.924506428876e-04 >> Linear solve did not converge due to DIVERGED_ITS iterations 10000 >> 5 SNES Function norm 3.554534723246e-05 >> Linear solve did not converge due to DIVERGED_ITS iterations 10000 >> 6 SNES Function norm 3.554534511905e-05 >> Linear solve did not converge due to DIVERGED_ITS iterations 10000 >> 7 SNES Function norm 3.554534511895e-05 >> Nonlinear solve converged due to CONVERGED_PNORM_RELATIVE > > It looks like this has stagnated. You said you have checked that the > matrices are the same, what did you do to confirm this? How did you > check that the null spaces are the same? What do the unpreconditioned > residuals look like (e.g. -ksp_type fgmres or -ksp_type lgmres > -ksp_right_pc)? If you are working in unpreconditioned residuals, then > how are you implementing boundary conditions? > > Jed > > -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From xy2102 at columbia.edu Thu Mar 18 13:26:24 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Thu, 18 Mar 2010 14:26:24 -0400 Subject: [petsc-users] difference between solns for np=1 and np>1. 
In-Reply-To: <20100318141803.9c9h9ipkqoccg8w8@cubmail.cc.columbia.edu> References: <20100318115327.1pyjxg2dws440o0k@cubmail.cc.columbia.edu> <87zl25vc2u.fsf@59A2.org> <20100318122230.fhs0r1ndtwo0ooc8@cubmail.cc.columbia.edu> <20100318135032.kktx6nz480g8kc4c@cubmail.cc.columbia.edu> <87r5nhv76b.fsf@59A2.org> <20100318141803.9c9h9ipkqoccg8w8@cubmail.cc.columbia.edu> Message-ID: <20100318142624.z5wx0grbok4gwwk0@cubmail.cc.columbia.edu>

Dear Jed,

If I check the Jacobian with -snes_type test -snes_test_display, np1 has good ratios:

Norm of matrix ratio 9.88921e-09 difference 3.32649e-06

but np2 has bad ratios:

Norm of matrix ratio 1.40616 difference 472.998

I am looking at the hand-coded minus finite-difference entries and checking where they differ. Let me get back to you later.

BTW, I did look at the tidxm[], tidxn[] and tvv[] in MatSetValues()

ierr = MatSetValues(jac, 1, tidxm, nonzero, tidxn, tvv, INSERT_VALUES);CHKERRQ(ierr);

in gdb for np1 and np2; they are all the same before calling

ierr = MatAssemblyBegin(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);//book15page37
ierr = MatAssemblyEnd(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

but why are they quite different under -snes_type test?

Thanks so much for your kind help!

Rebecca

Quoting "(Rebecca) Xuefei YUAN" : > Dear Jed, > > I reran the code adding one option -snes_mf, then np=2 gives the same > result as in np=1. > > So I think the bug is in the Jacobian matrix.
> > Here are steps I used to check on the Jacobian matrix: > > 1) After final assemble the Jacobian, output the matrix to Matlab files > and compare the .m files for both np=1 and np=2: > > ierr = MatAssemblyBegin(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);//book15page37 > ierr = MatAssemblyEnd(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr); > > PetscViewer viewer; > char fileName[128]; > PetscInt p; > ierr = MPI_Comm_size(MPI_COMM_WORLD, &p);CHKERRQ(ierr); > PetscInt its; > ierr = SNESGetIterationNumber(snes,&its);CHKERRQ(ierr); > sprintf(fileName, "twvggt_matrix_tx%i_ty%i_p%i_its%i.m",info1.mx, > info1.my,p,its); > ierr = > PetscViewerASCIIOpen(PETSC_COMM_WORLD,fileName,&viewer);CHKERRQ(ierr); > ierr = PetscViewerSetFormat(viewer,PETSC_VIEWER_ASCII_MATLAB);CHKERRQ(ierr); > ierr = MatView (jac, viewer); CHKERRQ (ierr); > PetscViewerDestroy(viewer); > > 2) in such a way, after each nonlinear iteration, there is a file for > the corresponding Jacobian, I found these J_np1 and J_np2 have > different nonzero structures, for example, for a (7X6+1,7X6+1) matrix, > standard five-pt scheme with stencil width = 1, (the other 1 after 42 > is a scalar in DMComposite) > J_np1 has > % Size = 43 43 > % Nonzeros = 367 > zzz = zeros(367,3); > > and J_np2 has > % Size = 43 43 > % Nonzeros = 576 > zzz = zeros(576,3); > > I checked the different nonzero structure in J_np1 and J_np2, the > different spots in J_np2 are all zeros. > > 3) The null space in this case is a single vector has first 7X6 > elements being a scale, and the last element being zero. I confirmed by > comparing v = null(Mat_0) in Matlab with VecView in C, they are the > same. > > Will this be enough to confirm the J_np1, J_np2 and the null vector? > > Thanks very much! 
> > Rebecca > > > > Quoting Jed Brown : > >> On Thu, 18 Mar 2010 13:50:32 -0400, "(Rebecca) Xuefei YUAN" >> wrote: >>> Dear Jed, >>> >>> I excluded the bug from CreateNullSpace(), but still have different >>> convergence history for np=1 and np=2, both with -pc_type none, and >>> -pc_type jacobi. >>> >>> The convergence history for np=1, -pc_type none is >>> >>> >>> 0 SNES Function norm 3.277654936380e+02 >>> Linear solve converged due to CONVERGED_RTOL iterations 1 >>> 1 SNES Function norm 1.010694930474e+01 >>> Linear solve converged due to CONVERGED_RTOL iterations 9 >>> 2 SNES Function norm 1.456202001578e+00 >>> Linear solve converged due to CONVERGED_RTOL iterations 23 >>> 3 SNES Function norm 6.670544108392e-02 >>> Linear solve converged due to CONVERGED_RTOL iterations 28 >>> 4 SNES Function norm 1.924506428876e-04 >>> Linear solve did not converge due to DIVERGED_ITS iterations 10000 >>> 5 SNES Function norm 3.554534723246e-05 >>> Linear solve did not converge due to DIVERGED_ITS iterations 10000 >>> 6 SNES Function norm 3.554534511905e-05 >>> Linear solve did not converge due to DIVERGED_ITS iterations 10000 >>> 7 SNES Function norm 3.554534511895e-05 >>> Nonlinear solve converged due to CONVERGED_PNORM_RELATIVE >> >> It looks like this has stagnated. You said you have checked that the >> matrices are the same, what did you do to confirm this? How did you >> check that the null spaces are the same? What do the unpreconditioned >> residuals look like (e.g. -ksp_type fgmres or -ksp_type lgmres >> -ksp_right_pc)? If you are working in unpreconditioned residuals, then >> how are you implementing boundary conditions? 
>> >> Jed >> >> > > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From xy2102 at columbia.edu Thu Mar 18 14:00:36 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Thu, 18 Mar 2010 15:00:36 -0400 Subject: [petsc-users] -snes_type test vs -snes_mf Message-ID: <20100318150036.xb5pabjyjo8c4c88@cubmail.cc.columbia.edu> Dear all, I ran a code for np=1 and np=2. If hand coded Jacobian is provided, np=1 and np=2 gives me different convergence and solutions, so I want to know what is wrong. If option -snes_mf is used, np=1 and np=2 gives me the same convergence and close solutions. I thought there might be a bug in FormJacobian(). If option -snes_type test is used, np=1 and np=2 gives different ratio and differences for finite difference Jacobian(fd) and hand coded Jacobian(hc): J_np1_hc = J_np2_hc, but J_np1_fd != J_np2_fd. so I thought there might be a bug in FormFunction(). I am not sure which one should be my focus? Any suggestions? Thanks a lot! Rebecca -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From bsmith at mcs.anl.gov Thu Mar 18 14:49:08 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 18 Mar 2010 14:49:08 -0500 Subject: [petsc-users] -snes_type test vs -snes_mf In-Reply-To: <20100318150036.xb5pabjyjo8c4c88@cubmail.cc.columbia.edu> References: <20100318150036.xb5pabjyjo8c4c88@cubmail.cc.columbia.edu> Message-ID: <9FBB5100-66FA-4348-B0C1-53477EFB42A7@mcs.anl.gov> I would focus on the Jacobian. On Mar 18, 2010, at 2:00 PM, (Rebecca) Xuefei YUAN wrote: > Dear all, > > I ran a code for np=1 and np=2. 
> > If hand coded Jacobian is provided, np=1 and np=2 gives me different > convergence and solutions, so I want to know what is wrong. > > If option -snes_mf is used, np=1 and np=2 gives me the same > convergence and close solutions. I thought there might be a bug in > FormJacobian(). > > If option -snes_type test is used, np=1 and np=2 gives different > ratio and differences for finite difference Jacobian(fd) and hand > coded Jacobian(hc): J_np1_hc = J_np2_hc, but J_np1_fd != J_np2_fd. > so I thought there might be a bug in FormFunction(). > > I am not sure which one should be my focus? > > Any suggestions? > > Thanks a lot! > > Rebecca > > > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 >

From xy2102 at columbia.edu Thu Mar 18 14:57:34 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Thu, 18 Mar 2010 15:57:34 -0400 Subject: [petsc-users] -snes_type test vs -snes_mf In-Reply-To: <9FBB5100-66FA-4348-B0C1-53477EFB42A7@mcs.anl.gov> References: <20100318150036.xb5pabjyjo8c4c88@cubmail.cc.columbia.edu> <9FBB5100-66FA-4348-B0C1-53477EFB42A7@mcs.anl.gov> Message-ID: <20100318155734.vbjhy3xpo0o4wo4o@cubmail.cc.columbia.edu>

Dear Barry,

Thanks very much!

I am now pretty sure the Jacobian was wrong: I assumed the Jacobian on multiple processors has the same structure as on a single processor, which made my indexing in MatSetValues() wrong. I am working on fixing it.

Thanks a lot!

Rebecca

Quoting Barry Smith : > > I would focus on the Jacobian. > > > On Mar 18, 2010, at 2:00 PM, (Rebecca) Xuefei YUAN wrote: > >> Dear all, >> >> I ran a code for np=1 and np=2. >> >> If hand coded Jacobian is provided, np=1 and np=2 gives me >> different convergence and solutions, so I want to know what is wrong. >> >> If option -snes_mf is used, np=1 and np=2 gives me the same >> convergence and close solutions.
I thought there might be a bug in >> FormJacobian(). >> >> If option -snes_type test is used, np=1 and np=2 gives different >> ratio and differences for finite difference Jacobian(fd) and hand >> coded Jacobian(hc): J_np1_hc = J_np2_hc, but J_np1_fd != J_np2_fd. >> so I thought there might be a bug in FormFunction(). >> >> I am not sure which one should be my focus? >> >> Any suggestions? >> >> Thanks a lot! >> >> Rebecca >> >> >> >> >> -- >> (Rebecca) Xuefei YUAN >> Department of Applied Physics and Applied Mathematics >> Columbia University >> Tel:917-399-8032 >> www.columbia.edu/~xy2102 >> -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From henc.bouwmeester at gmail.com Thu Mar 18 20:51:39 2010 From: henc.bouwmeester at gmail.com (Henc Bouwmeester) Date: Thu, 18 Mar 2010 19:51:39 -0600 Subject: [petsc-users] Multivector Object Message-ID: <123815011003181851s2321e432oded08f2ddb3c8366@mail.gmail.com> Hello all, In section 4.2 of the manual, it is stated that the only option for solving linear systems having the same preconditioner is to call KSPSolve() multiple times. However, when dealing with such a system, one can take advantage of DGEMM when updating the vectors as a 'multivector'. Has anyone attempted to create a multivector object in PETSc? If so, what is the interface and where can I find the code? If not, can anyone speak to the difficulties? Thank you, Henc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Thu Mar 18 21:15:54 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 18 Mar 2010 21:15:54 -0500 Subject: [petsc-users] Multivector Object In-Reply-To: <123815011003181851s2321e432oded08f2ddb3c8366@mail.gmail.com> References: <123815011003181851s2321e432oded08f2ddb3c8366@mail.gmail.com> Message-ID: <1628B1CA-B62F-4576-85F4-DD43ECE70ED8@mcs.anl.gov>

Henc,

PETSc has no concept of "multivectors" nor an infrastructure for solving linear systems with multiple right hand sides. Though there are important subclasses of problems where multiple right hand sides do come up, we have chosen not to invest the resources needed to make PETSc general enough to handle them cleanly and efficiently. We have had a small number of requests for such support; it seems the vast majority of users have never expressed any need for it.

To truly support "multivectors" I would replace the Vec object in PETSc with an object that represents one or more vectors and propagate this change throughout PETSc; that is, I would not have both a Vec version of the solvers and a multivector version, I would have only a multivector version. The difficulties in truly doing this right are 1) multiple right hand side Krylov methods are actually pretty tricky to code robustly, since one must properly deal with the various multivectors in the Krylov space becoming linearly dependent; thus one needs to "remove" one of the vectors in the multivector for later iterations, and then later remove another, etc. 2) to get the best performance it is likely one wants to interlace the vectors in the multivector. Thus suddenly iterating with one fewer vector is not as simple as discarding a pointer; one must un-interlace the multivector to remove that one vector.

In an ideal world with unlimited time and money, we would rewrite PETSc around a "multivector", but frankly we have so many other cool things that we want to do that this is unlikely to happen.
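[Editorial note: the interlacing difficulty described above can be sketched in a few lines of plain C. This is an illustration only, not PETSc code; the buffer layout and the drop_vector function are hypothetical.]

```c
/* Hypothetical interlaced multivector: nvec vectors of length n stored as
   v0[0] v1[0] ... v0[1] v1[1] ...
   Removing vector `drop` (say it became linearly dependent in a block
   Krylov iteration) cannot be done by discarding a pointer: the
   surviving entries must be repacked in place. */
void drop_vector(double *buf, int nvec, int n, int drop)
{
    int i, j, k = 0;
    for (i = 0; i < n; i++)          /* loop over entry index */
        for (j = 0; j < nvec; j++)   /* loop over vector index */
            if (j != drop)
                buf[k++] = buf[i * nvec + j];
    /* buf now holds nvec-1 interlaced vectors in its first (nvec-1)*n slots */
}
```

For example, with x = (1,2,3,4) and y = (10,20,30,40) interlaced as {1,10,2,20,3,30,4,40}, dropping y repacks the buffer to begin {1,2,3,4}. With a separate-arrays layout the same operation really is just removing a pointer, which is the performance/complexity trade-off described above.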
Barry

On Mar 18, 2010, at 8:51 PM, Henc Bouwmeester wrote:

> Hello all, > > In section 4.2 of the manual, it is stated that the only option for > solving linear systems having the same preconditioner is to call > KSPSolve() multiple times. However, when dealing with such a > system, one can take advantage of DGEMM when updating the vectors as > a 'multivector'. Has anyone attempted to create a multivector > object in PETSc? If so, what is the interface and where can I find > the code? If not, can anyone speak to the difficulties? > > Thank you, > Henc >

From xy2102 at columbia.edu Fri Mar 19 15:24:10 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Fri, 19 Mar 2010 16:24:10 -0400 Subject: [petsc-users] Assemble a Jacobian for a DMComposite object. Message-ID: <20100319162410.uydwcn9aoccoo4kk@cubmail.cc.columbia.edu>

Dear all,

I am working on assembling a Jacobian for a DMComposite object.

In FormJacobian(), I am not sure how to use MatSetValuesStencil() in this case, so I use MatSetValues() to assemble this Jacobian row by row.

MatSetValues(Mat mat,PetscInt m,const PetscInt idxm[],PetscInt n,const PetscInt idxn[],const PetscScalar v[],InsertMode addv)

To specify the row and column global indices idxm[] and idxn[], I need processor (rank=i) to receive (info.xm*info.ym) from each processor (rank<i), and send info.xm*info.ym to each processor (rank>i). Is there an existing call for that?

Thanks a lot!

Rebecca

Message-ID: <22015595.177771269030539494.JavaMail.root@zimbra>

Sorry, MatSetValuesStencil() only "knows" about DA's, not more general DMs. In the arguments to MatSetValuesStencil() there is no way to indicate anything except the location in a single regular grid. We all need to think about what makes sense for an interface for composite grids.

I think we're working on this now. But it's not ready yet.

-Mike

----- Original Message ----- From: "(Rebecca) Xuefei YUAN" To: "PETSc users list" Sent: Friday, March 19, 2010 3:24:10 PM GMT -06:00 US/Canada Central Subject: [petsc-users] Assemble a Jacobian for a DMComposite object.
Dear all, I am working on assembling a Jacobian for a DMComposite object. In FormJacobian(), I am not sure how to use MatSetValuesStencil() in this case, so I use MatSetValues() to assemble this Jacobian row by row. MatSetValues(Mat mat,PetscInt m,const PetscInt idxm[],PetscInt n,const PetscInt idxn[],const PetscScalar v[],InsertMode addv) To specify the row and columns' global indices idxm[] and idxn[], I need processor (rank=i) to receive (info.xm*info.ym) from each processor (rank References: <22015595.177771269030539494.JavaMail.root@zimbra> Message-ID: <20100319163736.7oekd2d0f4cw0w4w@cubmail.cc.columbia.edu> Dear Mike, Thanks for the reply. I know that MatSetValuesStencil() is not ready for a Jacobian of DMComposite object, so I am using MatSetValues() to do it row by row. What I am working now is to get the global indices idxm[] and idxn[] for each processor, so I need some information transferred between processors. I think the following iterations would give me the necessary information for global row indices idxm[] if (rank > 0){ PetscInt receivedNumber[rank]; for (i=0;i: > Sorry, MatSetValuesStencil() only "knows" about DA's, not more > general DMs. In the arguments to MatSetValuesStencil() there is no way > to indicate anything except the location in a single regular grid. We > all need to think about what makes sense for an interface for > composite grids. > > I think we're working on this now. But it's not ready yet. > > -Mike > > ----- Original Message ----- > From: "(Rebecca) Xuefei YUAN" > To: "PETSc users list" > Sent: Friday, March 19, 2010 3:24:10 PM GMT -06:00 US/Canada Central > Subject: [petsc-users] Assemble a Jacobian for a DMComposite object. > > Dear all, > > I am working on assembling a Jacobian for a DMComposite object. > > In FormJacobian(), I am not sure how to use MatSetValuesStencil() in > this case, so I use MatSetValues() to assemble this Jacobian row by row. 
> > MatSetValues(Mat mat,PetscInt m,const PetscInt idxm[],PetscInt n,const > PetscInt idxn[],const PetscScalar v[],InsertMode addv) > > To specify the row and column global indices idxm[] and idxn[], I > need processor (rank=i) to receive (info.xm*info.ym) from each > processor (rank<i), and send info.xm*info.ym to each processor > (rank>i). > Is there an existing call for that? > > Thanks a lot! > > Rebecca > > > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > > > -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102

From balay at mcs.anl.gov Sat Mar 20 09:44:19 2010 From: balay at mcs.anl.gov (Satish Balay) Date: Sat, 20 Mar 2010 09:44:19 -0500 (CDT) Subject: [petsc-users] make error from the ex1.c In-Reply-To: <4BA49EF6.6000805@tudelft.nl> References: <4BA49EF6.6000805@tudelft.nl> Message-ID:

Can't flood the petsc-users mailing list with a huge configure.log. petsc-maint is more appropriate for such issues.

>>>>>>>>
asterix:/home/balay/download-pine>cat makefile.test
ALL: ex1

CFLAGS = ${PETSC_CC_INCLUDES}
FFLAGS = ${PETSC_FC_INCLUDES}

include ${PETSC_DIR}/conf/variables

ex1: ex1.o
	${CLINKER} -o ex1 ex1.o ${PETSC_LIB}
<<<<<<<<

This makefile is incorrect for petsc-dev. Hence you get errors..
>>>>>>> 11:00 AM utabak at dutw689 ~/thesis/C++/PetscSlepcTests/linSystems $ make -f makefile.test /home/utabak/progsAndLibs/petsc-dev/linux-gnu-c-debug/bin/mpicc -I/home/utabak/progsAndLibs/petsc-dev/linux-gnu-c-debug/include -I/home/utabak/progsAndLibs/petsc-dev/include -I/home/utabak/progsAndLibs/petsc-dev/linux-gnu-c-debug/include -c -o ex1.o ex1.c In file included from /home/utabak/progsAndLibs/petsc-dev/include/petscis.h:7, from /home/utabak/progsAndLibs/petsc-dev/include/petscvec.h:9, from /home/utabak/progsAndLibs/petsc-dev/include/petscmat.h:6, from /home/utabak/progsAndLibs/petsc-dev/include/petscpc.h:6, from /home/utabak/progsAndLibs/petsc-dev/include/petscksp.h:6, from ex1.c:21: /home/utabak/progsAndLibs/petsc-dev/include/petscsys.h:22:2: error: #error "PETSc configured with --with-clanguage=c++ and NOT --with-c-support - it can be used only with a C++ compiler" <<<<<<<<< Try using the attached makefile. [check src/ksp/ksp/examples/tutorials/makefile] Satish On Sat, 20 Mar 2010, Umut Tabak wrote: > Dear all, > > I changed my system to Debian lately and installed Petsc-dev. However I am > running into some problems on the very test for 'ex1.c'. > > The make file is attached along with configure log and the error messages that > I got in the .dat file. 
> > Apart from this typing > > 10:58 AM utabak at dutw689 ~/progsAndLibs/petsc-dev $ make getlinklibs > getincludedirs getpetscflags > > -Wl,-rpath,/home/utabak/progsAndLibs/petsc-dev/linux-gnu-c-debug/lib > -Wl,-rpath,/home/utabak/progsAndLibs/petsc-dev/linux-gnu-c-debug/lib > -L/home/utabak/progsAndLibs/petsc-dev/linux-gnu-c-debug/lib -lpetsc > -Wl,-rpath,/home/utabak/progsAndLibs/petsc-dev/linux-gnu-c-debug/lib > -L/home/utabak/progsAndLibs/petsc-dev/linux-gnu-c-debug/lib -lcmumps -ldmumps > -lsmumps -lzmumps -lmumps_common -lpord -lparmetis -lmetis -lsuperlu_4.0 > -lscalapack -lblacs -lspooles -lumfpack -lamd -lflapack -lfblas -lnsl -lrt > -L/home/utabak/progsAndLibs/petsc-dev/linux-gnu-c-debug/lib > -L/usr/lib/gcc/x86_64-linux-gnu/4.3.2 -ldl -lmpich -lpthread -lrt -lgcc_s > -lmpichf90 -lgfortran -lm -L/usr/lib/gcc/x86_64-linux-gnu -lm -lmpichcxx > -lstdc++ -lmpichcxx -lstdc++ -ldl -lmpich -lpthread -lrt -lgcc_s -ldl > -I/home/utabak/progsAndLibs/petsc-dev/linux-gnu-c-debug/include > -I/home/utabak/progsAndLibs/petsc-dev/include > -I/home/utabak/progsAndLibs/petsc-dev/linux-gnu-c-debug/include > > Are there some changes in the link structure, petsc libraries are linked with > one flag only, before they were linked separately I suppose. > > Could someone help me on this? > > Best, > Umut > > > -------------- next part -------------- CFLAGS = FFLAGS = CPPFLAGS = FPPFLAGS = CLEANFILES = include ${PETSC_DIR}/conf/variables include ${PETSC_DIR}/conf/rules ex1: ex1.o chkopts -${CLINKER} -o ex1 ex1.o ${PETSC_LIB} ${RM} ex1.o From charlesreid1 at gmail.com Tue Mar 23 19:37:00 2010 From: charlesreid1 at gmail.com (charles reid) Date: Tue, 23 Mar 2010 18:37:00 -0600 Subject: [petsc-users] Linking error with C++ code: undefined symbols Message-ID: (Please let me know if I can give any additional information that would be helpful for this problem.) 
I'm trying to use Petsc in an object-oriented C++ code, developing with g++ on Mac OS X 10.5, and I'm running into some problems in the linking stage. I've defined an object that uses Petsc (what I'm calling the GmresSolver class), and the object compiles just fine. However, when it comes time to compile the driver (Laplace.cc) and link to Petsc libraries, I see a bunch of "Undefined symbol" errors. In my object code that uses Petsc (GmresSolver.h), I have included the Petsc header file as: extern "C" { #include "petscksp.h" } In the driver (Laplace.cc), depending on how I include the Petsc header file, I get different errors. If I include it like I do in GmresSolver.h, extern "C" { #include "petsc.h" } I get a whole slew of header file syntax errors (see postscript of email). If I just include the header file, #include "petsc.h" then I get the undefined symbols problem (more below). My configure line for Petsc is Users/charles/pkg/petsc-2.3.3-p15/config/configure.py \ --prefix=$HOME/pkg/petsc-2.3.3-p15 \ --with-python \ --with-mpi=0 \ --with-debugging=1 \ PETSC_DIR=$HOME/pkg/petsc-2.3.3-p15 Here's my step-by-step to produce the error: 1. Compile all non-Petsc object code 2. Compile object code that uses Petsc using this command: g++ -c -Wall -I. -I/Users/charles/pkg/petsc-2.3.3-p15 -I/Users/charles/pkg/petsc-2.3.3-p15/bmake/darwin9.5.0-c-opt -I/Users/charles/pkg/petsc-2.3.3-p15/include ./GmresSolver. (as mentioned, this works fine.) 3. 
Compile the driver, "Laplace.cc", and link it to Petsc's libraries: g++ \ -I/Users/charles/pkg/petsc-2.3.3-p15/ \ -I/Users/charles/pkg/petsc-2.3.3-p15/include \ -I/Users/charles/pkg/petsc-2.3.3-p15/include/mpiuni \ -I/Users/charles/pkg/petsc-2.3.3-p15/include/petsc \ -DPETSC_STATIC_INLINE="" \ Laplace.cc \ -L/Users/charles/pkg/petsc-2.3.3-p15 \ -L/Users/charles/pkg/petsc-2.3.3-p15/lib/darwin9.5.0-c-opt \ -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc \ BoundaryConditionFactory.o BoundaryCondition.o Field.o FileIO.o GmresSolver.o JacobiSolver.o Timer.o TimerFactory.o (Note: I don't know why I need -DPETSC_STATIC_INLINE="", but I do, otherwise I see a bunch of errors like "petsc-2.3.3-p15/include/petscviewer.h:117: error: ?PETSC_STATIC_INLINE? does not name a type" - anyone know what that's all about?) This last compiler command gives the undefined symbols errors: Undefined symbols: "PetscOptionsGetReal(char const*, char const*, double*, PetscTruth*)", referenced from: PetscOptionsGetReal(char const*, double*, PetscTruth*)in ccPG7mg3.o "_Petsc_MPI_Abort", referenced from: _PetscMaxSum_Local in libpetsc.a(pinit.o) _PetscADMax_Local in libpetsc.a(pinit.o) _PetscADMin_Local in libpetsc.a(pinit.o) _PetscSynchronizedFlush in libpetsc.a(mprint.o) _PetscSynchronizedFlush in libpetsc.a(mprint.o) _PetscOptionsCheckInitial_Private in libpetsc.a(init.o) _PetscLogPrintSummary in libpetsc.a(plog.o) _PetscLogPrintSummary in libpetsc.a(plog.o) _PetscError in libpetsc.a(err.o) _PetscMallocDumpLog in libpetsc.a(mtr.o) _PetscSequentialPhaseBegin_Private in libpetsc.a(mpiu.o) _PetscSequentialPhaseEnd_Private in libpetsc.a(mpiu.o) _PetscSignalHandler_Private in libpetsc.a(signal.o) _PetscSignalHandler_Private in libpetsc.a(signal.o) _PetscDefaultSignalHandler in libpetsc.a(signal.o) _PetscMPIAbortErrorHandler in libpetsc.a(errstop.o) _PetscDefaultFPTrap in libpetsc.a(fp.o) "_Petsc_MPI_Comm_dup", referenced from: _PetscFinalize in libpetsc.a(pinit.o) 
_PetscSequentialPhaseBegin in libpetsc.a(mpiu.o) _PetscCommDuplicate in libpetsc.a(tagm.o) "_Petsc_MPI_Init", referenced from: _PetscInitialize in libpetsc.a(pinit.o) "PetscOptionsGetTruth(char const*, char const*, PetscTruth*, PetscTruth*)", referenced from: PetscOptionsGetTruth(char const*, PetscTruth*, PetscTruth*)in ccPG7mg3.o "PetscInitialize(int*, char***, char const*, char const*)", referenced from: PetscInitialize(int*, char***)in ccPG7mg3.o _main in ccPG7mg3.o "_MPIUNI_TMP", referenced from: _MPIUNI_TMP$non_lazy_ptr in ccPG7mg3.o _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(pinit.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(mprint.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(init.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(options.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(plog.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(mpinit.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(err.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(mtr.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(mpiu.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(verboseinfo.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(adebug.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(binv.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(filev.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(eventLog.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(view.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(pdisplay.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(tagm.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(mpiuopen.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(draw.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(sysio.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(pbarrier.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(dupl.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(fretrieve.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(send.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(dscatter.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(petscvu.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(axis.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(random.o) _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(drawv.o) _MPIUNI_TMP$non_lazy_ptr in 
libpetsc.a(lg.o) "PetscOptionsGetScalar(char const*, char const*, double*, PetscTruth*)", referenced from: PetscOptionsGetScalar(char const*, double*, PetscTruth*)in ccPG7mg3.o "_Petsc_MPI_Keyval_create", referenced from: _PetscViewerASCIIGetStdout in libpetsc.a(vcreatea.o) _PetscViewerASCIIGetStderr in libpetsc.a(vcreatea.o) _PetscViewerASCIIOpen in libpetsc.a(vcreatea.o) _PetscSequentialPhaseBegin in libpetsc.a(mpiu.o) _PETSC_VIEWER_BINARY_ in libpetsc.a(binv.o) _PetscViewerDestroy_ASCII in libpetsc.a(filev.o) _PetscCommGetNewTag in libpetsc.a(tagm.o) _PetscCommGetNewTag in libpetsc.a(tagm.o) _PetscCommGetNewTag in libpetsc.a(tagm.o) _PetscCommDuplicate in libpetsc.a(tagm.o) _PetscCommDuplicate in libpetsc.a(tagm.o) _PetscCommDuplicate in libpetsc.a(tagm.o) _PetscCommDestroy in libpetsc.a(tagm.o) _PetscCommDestroy in libpetsc.a(tagm.o) _PetscCommDestroy in libpetsc.a(tagm.o) _PetscSharedTmp in libpetsc.a(fretrieve.o) _PetscSharedWorkingDirectory in libpetsc.a(fretrieve.o) _PETSC_VIEWER_SOCKET_ in libpetsc.a(send.o) _PETSC_VIEWER_DRAW_ in libpetsc.a(drawv.o) "_Petsc_MPI_Attr_delete", referenced from: _PetscSequentialPhaseEnd in libpetsc.a(mpiu.o) _PetscCommDestroy in libpetsc.a(tagm.o) "_Petsc_MPI_Attr_get", referenced from: _PetscViewerASCIIGetStdout in libpetsc.a(vcreatea.o) _PetscViewerASCIIGetStderr in libpetsc.a(vcreatea.o) _PetscViewerASCIIOpen in libpetsc.a(vcreatea.o) _PetscViewerASCIIOpen in libpetsc.a(vcreatea.o) _PetscSequentialPhaseEnd in libpetsc.a(mpiu.o) _PETSC_VIEWER_BINARY_ in libpetsc.a(binv.o) _PetscViewerDestroy_ASCII in libpetsc.a(filev.o) _PetscCommGetNewTag in libpetsc.a(tagm.o) _PetscCommGetNewTag in libpetsc.a(tagm.o) _PetscCommDuplicate in libpetsc.a(tagm.o) _PetscCommDuplicate in libpetsc.a(tagm.o) _PetscCommDuplicate in libpetsc.a(tagm.o) _PetscCommDuplicate in libpetsc.a(tagm.o) _PetscCommDuplicate in libpetsc.a(tagm.o) _PetscCommDestroy in libpetsc.a(tagm.o) _PetscCommDestroy in libpetsc.a(tagm.o) _PetscCommDestroy in 
libpetsc.a(tagm.o)
  _PetscSharedTmp in libpetsc.a(fretrieve.o)
  _PetscSharedWorkingDirectory in libpetsc.a(fretrieve.o)
  _PETSC_VIEWER_SOCKET_ in libpetsc.a(send.o)
  _PETSC_VIEWER_DRAW_ in libpetsc.a(drawv.o)
"_Petsc_MPI_Attr_put", referenced from:
  _PetscViewerASCIIGetStdout in libpetsc.a(vcreatea.o)
  _PetscViewerASCIIGetStderr in libpetsc.a(vcreatea.o)
  _PetscViewerASCIIOpen in libpetsc.a(vcreatea.o)
  _PetscSequentialPhaseBegin in libpetsc.a(mpiu.o)
  _PETSC_VIEWER_BINARY_ in libpetsc.a(binv.o)
  _PetscViewerDestroy_ASCII in libpetsc.a(filev.o)
  _PetscCommDuplicate in libpetsc.a(tagm.o)
  _PetscSharedTmp in libpetsc.a(fretrieve.o)
  _PetscSharedWorkingDirectory in libpetsc.a(fretrieve.o)
  _PETSC_VIEWER_SOCKET_ in libpetsc.a(send.o)
  _PETSC_VIEWER_DRAW_ in libpetsc.a(drawv.o)
"PetscOptionsGetString(char const*, char const*, char*, unsigned long, PetscTruth*)", referenced from:
  PetscOptionsGetString(char const*, char*, unsigned long, PetscTruth*) in ccPG7mg3.o
"_Petsc_MPI_Finalize", referenced from:
  _PetscFinalize in libpetsc.a(pinit.o)
  _Petsc_MPI_DebuggerOnError in libpetsc.a(init.o)
  _PetscAttachDebuggerErrorHandler in libpetsc.a(adebug.o)
"PetscOptionsGetRealArray(char const*, char const*, double*, int*, PetscTruth*)", referenced from:
  PetscOptionsGetRealArray(char const*, double*, int*, PetscTruth*) in ccPG7mg3.o
"PetscOptionsGetInt(char const*, char const*, int*, PetscTruth*)", referenced from:
  PetscOptionsGetInt(char const*, int*, PetscTruth*) in ccPG7mg3.o
"PetscViewerCreate(int, _p_PetscViewer**)", referenced from:
  PetscViewerCreate(_p_PetscViewer**) in ccPG7mg3.o
"_Petsc_MPI_Comm_free", referenced from:
  _PetscFinalize in libpetsc.a(pinit.o)
  _PetscSequentialPhaseEnd in libpetsc.a(mpiu.o)
  _PetscCommDestroy in libpetsc.a(tagm.o)
"PetscFinalize()", referenced from:
  _main in ccPG7mg3.o
"PetscOptionsHasName(char const*, char const*, PetscTruth*)", referenced from:
  PetscOptionsHasName(char const*, PetscTruth*) in ccPG7mg3.o
"PetscOptionsGetStringArray(char const*, char const*, char**, int*, PetscTruth*)", referenced from:
  PetscOptionsGetStringArray(char const*, char**, int*, PetscTruth*) in ccPG7mg3.o
"_MPIUNI_Memcpy", referenced from:
  _PetscMaxSum in libpetsc.a(pinit.o)
  _PetscFinalize in libpetsc.a(pinit.o)
  _PetscGlobalMax in libpetsc.a(pinit.o)
  _PetscGlobalMin in libpetsc.a(pinit.o)
  _PetscGlobalSum in libpetsc.a(pinit.o)
  _PetscLogPrintSummary in libpetsc.a(plog.o)
  _PetscLogPrintDetailed in
libpetsc.a(plog.o)
  _PetscIntView in libpetsc.a(err.o)
  _PetscRealView in libpetsc.a(err.o)
  _PetscScalarView in libpetsc.a(err.o)
  _PetscSharedTmp in libpetsc.a(fretrieve.o)
  _PetscSharedWorkingDirectory in libpetsc.a(fretrieve.o)
"_Petsc_MPI_Initialized", referenced from:
  _PetscInitialize in libpetsc.a(pinit.o)
"PetscOptionsGetIntArray(char const*, char const*, int*, int*, PetscTruth*)", referenced from:
  PetscOptionsGetIntArray(char const*, int*, int*, PetscTruth*) in ccPG7mg3.o
"PetscSequentialPhaseBegin(int, int)", referenced from:
  PetscSequentialPhaseBegin(int) in ccPG7mg3.o
  PetscSequentialPhaseBegin() in ccPG7mg3.o
"PetscSequentialPhaseEnd(int, int)", referenced from:
  PetscSequentialPhaseEnd(int) in ccPG7mg3.o
  PetscSequentialPhaseEnd() in ccPG7mg3.o
ld: symbol(s) not found
collect2: ld returned 1 exit status

Is this a problem with my libpetsc.a? Or is this problem because another library is broken or not being linked to? Any insight into this problem would be greatly appreciated. After several hours of trying to figure this out I feel like I'm lost at sea.

Charles

Postscript:

/Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h: In function 'PetscErrorCode PetscViewerCreate(_p_PetscViewer**)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:117: error: declaration of C function 'PetscErrorCode PetscViewerCreate(_p_PetscViewer**)' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:116: error: previous declaration 'PetscErrorCode PetscViewerCreate(MPI_Comm, _p_PetscViewer**)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h: In function 'PetscErrorCode PetscViewerCreate(_p_PetscViewer**)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:117: error: invalid conversion from 'int' to '_p_PetscViewer**'
/Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:117: error: too many arguments to function 'PetscErrorCode PetscViewerCreate(_p_PetscViewer**)'
/Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:117: error: at this point in file
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsHasName(const char*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:10: error: declaration of C function 'PetscErrorCode PetscOptionsHasName(const char*, PetscTruth*)' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:9: error: previous declaration 'PetscErrorCode PetscOptionsHasName(const char*, const char*, PetscTruth*)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsHasName(const char*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:10: error: cannot convert 'const char*' to 'PetscTruth*' for argument '2' to 'PetscErrorCode PetscOptionsHasName(const char*, PetscTruth*)'
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetInt(const char*, PetscInt*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:12: error: declaration of C function 'PetscErrorCode PetscOptionsGetInt(const char*, PetscInt*, PetscTruth*)' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:11: error: previous declaration 'PetscErrorCode PetscOptionsGetInt(const char*, const char*, PetscInt*, PetscTruth*)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetInt(const char*, PetscInt*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:12: error: cannot convert 'const char*' to 'PetscInt*' for argument '2' to 'PetscErrorCode PetscOptionsGetInt(const char*, PetscInt*, PetscTruth*)'
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetTruth(const char*, PetscTruth*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:14: error: declaration of C function 'PetscErrorCode PetscOptionsGetTruth(const char*, PetscTruth*, PetscTruth*)' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:13: error: previous declaration 'PetscErrorCode PetscOptionsGetTruth(const char*, const char*, PetscTruth*, PetscTruth*)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetTruth(const char*, PetscTruth*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:14: error: cannot convert 'const char*' to 'PetscTruth*' for argument '2' to 'PetscErrorCode PetscOptionsGetTruth(const char*, PetscTruth*, PetscTruth*)'
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetReal(const char*, PetscReal*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:16: error: declaration of C function 'PetscErrorCode PetscOptionsGetReal(const char*, PetscReal*, PetscTruth*)' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:15: error: previous declaration 'PetscErrorCode PetscOptionsGetReal(const char*, const char*, PetscReal*, PetscTruth*)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetReal(const char*, PetscReal*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:16: error: cannot convert 'const char*' to 'PetscReal*' for argument '2' to 'PetscErrorCode PetscOptionsGetReal(const char*, PetscReal*, PetscTruth*)'
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetScalar(const char*, PetscScalar*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:18: error: declaration of C function 'PetscErrorCode PetscOptionsGetScalar(const char*, PetscScalar*, PetscTruth*)' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:17: error: previous declaration 'PetscErrorCode PetscOptionsGetScalar(const char*, const char*, PetscScalar*, PetscTruth*)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetScalar(const char*, PetscScalar*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:18: error: cannot convert 'const char*' to 'PetscScalar*' for argument '2' to 'PetscErrorCode PetscOptionsGetScalar(const char*, PetscScalar*, PetscTruth*)'
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetIntArray(const char*, PetscInt*, PetscInt*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:20: error: declaration of C function 'PetscErrorCode PetscOptionsGetIntArray(const char*, PetscInt*, PetscInt*, PetscTruth*)' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:19: error: previous declaration 'PetscErrorCode PetscOptionsGetIntArray(const char*, const char*, PetscInt*, PetscInt*, PetscTruth*)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetIntArray(const char*, PetscInt*, PetscInt*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:20: error: cannot convert 'const char*' to 'PetscInt*' for argument '2' to 'PetscErrorCode PetscOptionsGetIntArray(const char*, PetscInt*, PetscInt*, PetscTruth*)'
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetRealArray(const char*, PetscReal*, PetscInt*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:22: error: declaration of C function 'PetscErrorCode PetscOptionsGetRealArray(const char*, PetscReal*, PetscInt*, PetscTruth*)' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:21: error: previous declaration 'PetscErrorCode PetscOptionsGetRealArray(const char*, const char*, PetscReal*, PetscInt*, PetscTruth*)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetRealArray(const char*, PetscReal*, PetscInt*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:22: error: cannot convert 'const char*' to 'PetscReal*' for argument '2' to 'PetscErrorCode PetscOptionsGetRealArray(const char*, PetscReal*, PetscInt*, PetscTruth*)'
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetString(const char*, char*, size_t, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:24: error: declaration of C function 'PetscErrorCode PetscOptionsGetString(const char*, char*, size_t, PetscTruth*)' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:23: error: previous declaration 'PetscErrorCode PetscOptionsGetString(const char*, const char*, char*, size_t, PetscTruth*)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetString(const char*, char*, size_t, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:24: error: invalid conversion from 'const char*' to 'char*'
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:24: error: invalid conversion from 'char*' to 'size_t'
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:24: error: invalid conversion from 'size_t' to 'PetscTruth*'
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:24: error: too many arguments to function 'PetscErrorCode PetscOptionsGetString(const char*, char*, size_t, PetscTruth*)'
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:24: error: at this point in file
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetStringArray(const char*, char**, PetscInt*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:26: error: declaration of C function 'PetscErrorCode PetscOptionsGetStringArray(const char*, char**, PetscInt*, PetscTruth*)' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:25: error: previous declaration 'PetscErrorCode PetscOptionsGetStringArray(const char*, const char*, char**, PetscInt*, PetscTruth*)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h: In function 'PetscErrorCode PetscOptionsGetStringArray(const char*, char**, PetscInt*, PetscTruth*)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:26: error: cannot convert 'const char*' to 'char**' for argument '2' to 'PetscErrorCode PetscOptionsGetStringArray(const char*, char**, PetscInt*, PetscTruth*)'
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h: In function 'PetscErrorCode PetscInitialize(int*, char***)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1160: error: declaration of C function 'PetscErrorCode PetscInitialize(int*, char***)' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1159: error: previous declaration 'PetscErrorCode PetscInitialize(int*, char***, const char*, const char*)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h: In function 'PetscErrorCode PetscInitialize(int*, char***)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1160: error: too many arguments to function 'PetscErrorCode PetscInitialize(int*, char***)'
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1160: error: at this point in file
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h: In function 'PetscErrorCode PetscSequentialPhaseBegin(MPI_Comm)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1347: error: declaration of C function 'PetscErrorCode PetscSequentialPhaseBegin(MPI_Comm)' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1346: error: previous declaration 'PetscErrorCode PetscSequentialPhaseBegin(MPI_Comm, PetscMPIInt)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h: In function 'PetscErrorCode PetscSequentialPhaseBegin(MPI_Comm)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1347: error: too many arguments to function 'PetscErrorCode PetscSequentialPhaseBegin(MPI_Comm)'
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1347: error: at this point in file
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h: In function 'PetscErrorCode PetscSequentialPhaseBegin()':
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1348: error: declaration of C function 'PetscErrorCode PetscSequentialPhaseBegin()' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1347: error: previous declaration 'PetscErrorCode PetscSequentialPhaseBegin(MPI_Comm)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h: In function 'PetscErrorCode PetscSequentialPhaseBegin(MPI_Comm)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1348: error: too many arguments to function 'PetscErrorCode PetscSequentialPhaseBegin()'
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1348: error: at this point in file
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h: In function 'PetscErrorCode PetscSequentialPhaseBegin(MPI_Comm)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1350: error: declaration of C function 'PetscErrorCode PetscSequentialPhaseEnd(MPI_Comm)' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1349: error: previous declaration 'PetscErrorCode PetscSequentialPhaseEnd(MPI_Comm, PetscMPIInt)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h: In function 'PetscErrorCode PetscSequentialPhaseEnd(MPI_Comm)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1350: error: too many arguments to function 'PetscErrorCode PetscSequentialPhaseEnd(MPI_Comm)'
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1350: error: at this point in file
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h: In function 'PetscErrorCode PetscSequentialPhaseEnd()':
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1351: error: declaration of C function 'PetscErrorCode PetscSequentialPhaseEnd()' conflicts with
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1350: error: previous declaration 'PetscErrorCode PetscSequentialPhaseEnd(MPI_Comm)' here
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h: In function 'PetscErrorCode PetscSequentialPhaseEnd(MPI_Comm)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1351: error: too many arguments to function 'PetscErrorCode PetscSequentialPhaseEnd()'
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1351: error: at this point in file
Laplace.cc: In function 'int main(int, char**)':
/Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1160: error: too many arguments to function 'PetscErrorCode PetscInitialize(int*, char***)'
Laplace.cc:206: error: at this point in file

This last error doesn't even make sense, as it conforms to the usage specified here ( http://www.mcs.anl.gov/petsc/petsc-2/snapshots/petsc-dev/docs/manualpages/Sys/PetscInitialize.html ).
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From balay at mcs.anl.gov  Tue Mar 23 20:12:05 2010
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 23 Mar 2010 20:12:05 -0500 (CDT)
Subject: [petsc-users] Linking error with C++ code: undefined symbols
In-Reply-To: 
References: 
Message-ID: 

For one, we recommend using the latest version, i.e. petsc-3.0.0 [or the
upcoming petsc-3.1 - via petsc-dev] - not this old version 2.3.3.

Secondly - if you want to use PETSc from C++ - the recommended way is to
build it with the configure option '--with-clanguage=cxx', and then [after
making sure the examples work] use "petsc.h" without the 'extern "C"' etc.
in your code.

If the examples compile fine - but not your code - the issue is usually the
makefile [make it as close to a PETSc makefile as possible - for e.g.
src/ksp/ksp/examples/tutorials/makefile].

Satish

On Tue, 23 Mar 2010, charles reid wrote:

> (Please let me know if I can give any additional information that would be
> helpful for this problem.)
>
>
> I'm trying to use Petsc in an object-oriented C++ code, developing with g++
> on Mac OS X 10.5, and I'm running into some problems in the linking stage.
> I've defined an object that uses Petsc (what I'm calling the GmresSolver
> class), and the object compiles just fine. However, when it comes time to
> compile the driver (Laplace.cc) and link to Petsc libraries, I see a bunch
> of "Undefined symbol" errors.
>
> In my object code that uses Petsc (GmresSolver.h), I have included the Petsc
> header file as:
> extern "C" {
> #include "petscksp.h"
> }
>
> In the driver (Laplace.cc), depending on how I include the Petsc header
> file, I get different errors. If I include it like I do in GmresSolver.h,
> extern "C" {
> #include "petsc.h"
> }
>
> I get a whole slew of header file syntax errors (see postscript of email).
> If I just include the header file,
> #include "petsc.h"
>
> then I get the undefined symbols problem (more below).
>
>
> My configure line for Petsc is
> Users/charles/pkg/petsc-2.3.3-p15/config/configure.py \
> --prefix=$HOME/pkg/petsc-2.3.3-p15 \
> --with-python \
> --with-mpi=0 \
> --with-debugging=1 \
> PETSC_DIR=$HOME/pkg/petsc-2.3.3-p15
>
>
> Here's my step-by-step to produce the error:
>
> 1. Compile all non-Petsc object code
>
> 2. Compile object code that uses Petsc using this command:
>
> g++ -c -Wall -I. -I/Users/charles/pkg/petsc-2.3.3-p15
> -I/Users/charles/pkg/petsc-2.3.3-p15/bmake/darwin9.5.0-c-opt
> -I/Users/charles/pkg/petsc-2.3.3-p15/include ./GmresSolver.
>
> (as mentioned, this works fine.)
>
> 3. Compile the driver, "Laplace.cc", and link it to Petsc's libraries:
>
> g++ \
> -I/Users/charles/pkg/petsc-2.3.3-p15/ \
> -I/Users/charles/pkg/petsc-2.3.3-p15/include \
> -I/Users/charles/pkg/petsc-2.3.3-p15/include/mpiuni \
> -I/Users/charles/pkg/petsc-2.3.3-p15/include/petsc \
> -DPETSC_STATIC_INLINE="" \
> Laplace.cc \
> -L/Users/charles/pkg/petsc-2.3.3-p15 \
> -L/Users/charles/pkg/petsc-2.3.3-p15/lib/darwin9.5.0-c-opt \
> -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc \
> BoundaryConditionFactory.o BoundaryCondition.o Field.o FileIO.o
> GmresSolver.o JacobiSolver.o Timer.o TimerFactory.o
>
> (Note: I don't know why I need -DPETSC_STATIC_INLINE="", but I do, otherwise
> I see a bunch of errors like "petsc-2.3.3-p15/include/petscviewer.h:117:
> error: 'PETSC_STATIC_INLINE' does not name a type" - anyone know what that's
> all about?)
> > This last compiler command gives the undefined symbols errors: > > Undefined symbols: > "PetscOptionsGetReal(char const*, char const*, double*, PetscTruth*)", > referenced from: > PetscOptionsGetReal(char const*, double*, PetscTruth*)in ccPG7mg3.o > "_Petsc_MPI_Abort", referenced from: > _PetscMaxSum_Local in libpetsc.a(pinit.o) > _PetscADMax_Local in libpetsc.a(pinit.o) > _PetscADMin_Local in libpetsc.a(pinit.o) > _PetscSynchronizedFlush in libpetsc.a(mprint.o) > _PetscSynchronizedFlush in libpetsc.a(mprint.o) > _PetscOptionsCheckInitial_Private in libpetsc.a(init.o) > _PetscLogPrintSummary in libpetsc.a(plog.o) > _PetscLogPrintSummary in libpetsc.a(plog.o) > _PetscError in libpetsc.a(err.o) > _PetscMallocDumpLog in libpetsc.a(mtr.o) > _PetscSequentialPhaseBegin_Private in libpetsc.a(mpiu.o) > _PetscSequentialPhaseEnd_Private in libpetsc.a(mpiu.o) > _PetscSignalHandler_Private in libpetsc.a(signal.o) > _PetscSignalHandler_Private in libpetsc.a(signal.o) > _PetscDefaultSignalHandler in libpetsc.a(signal.o) > _PetscMPIAbortErrorHandler in libpetsc.a(errstop.o) > _PetscDefaultFPTrap in libpetsc.a(fp.o) > "_Petsc_MPI_Comm_dup", referenced from: > _PetscFinalize in libpetsc.a(pinit.o) > _PetscSequentialPhaseBegin in libpetsc.a(mpiu.o) > _PetscCommDuplicate in libpetsc.a(tagm.o) > "_Petsc_MPI_Init", referenced from: > _PetscInitialize in libpetsc.a(pinit.o) > "PetscOptionsGetTruth(char const*, char const*, PetscTruth*, > PetscTruth*)", referenced from: > PetscOptionsGetTruth(char const*, PetscTruth*, PetscTruth*)in > ccPG7mg3.o > "PetscInitialize(int*, char***, char const*, char const*)", referenced > from: > PetscInitialize(int*, char***)in ccPG7mg3.o > _main in ccPG7mg3.o > "_MPIUNI_TMP", referenced from: > _MPIUNI_TMP$non_lazy_ptr in ccPG7mg3.o > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(pinit.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(mprint.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(init.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(options.o) > 
_MPIUNI_TMP$non_lazy_ptr in libpetsc.a(plog.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(mpinit.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(err.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(mtr.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(mpiu.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(verboseinfo.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(adebug.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(binv.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(filev.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(eventLog.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(view.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(pdisplay.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(tagm.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(mpiuopen.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(draw.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(sysio.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(pbarrier.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(dupl.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(fretrieve.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(send.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(dscatter.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(petscvu.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(axis.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(random.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(drawv.o) > _MPIUNI_TMP$non_lazy_ptr in libpetsc.a(lg.o) > "PetscOptionsGetScalar(char const*, char const*, double*, PetscTruth*)", > referenced from: > PetscOptionsGetScalar(char const*, double*, PetscTruth*)in ccPG7mg3.o > "_Petsc_MPI_Keyval_create", referenced from: > _PetscViewerASCIIGetStdout in libpetsc.a(vcreatea.o) > _PetscViewerASCIIGetStderr in libpetsc.a(vcreatea.o) > _PetscViewerASCIIOpen in libpetsc.a(vcreatea.o) > _PetscSequentialPhaseBegin in libpetsc.a(mpiu.o) > _PETSC_VIEWER_BINARY_ in libpetsc.a(binv.o) > _PetscViewerDestroy_ASCII in libpetsc.a(filev.o) > _PetscCommGetNewTag in libpetsc.a(tagm.o) > _PetscCommGetNewTag in libpetsc.a(tagm.o) > _PetscCommGetNewTag in libpetsc.a(tagm.o) > _PetscCommDuplicate in 
libpetsc.a(tagm.o) > _PetscCommDuplicate in libpetsc.a(tagm.o) > _PetscCommDuplicate in libpetsc.a(tagm.o) > _PetscCommDestroy in libpetsc.a(tagm.o) > _PetscCommDestroy in libpetsc.a(tagm.o) > _PetscCommDestroy in libpetsc.a(tagm.o) > _PetscSharedTmp in libpetsc.a(fretrieve.o) > _PetscSharedWorkingDirectory in libpetsc.a(fretrieve.o) > _PETSC_VIEWER_SOCKET_ in libpetsc.a(send.o) > _PETSC_VIEWER_DRAW_ in libpetsc.a(drawv.o) > "_Petsc_MPI_Attr_delete", referenced from: > _PetscSequentialPhaseEnd in libpetsc.a(mpiu.o) > _PetscCommDestroy in libpetsc.a(tagm.o) > "_Petsc_MPI_Attr_get", referenced from: > _PetscViewerASCIIGetStdout in libpetsc.a(vcreatea.o) > _PetscViewerASCIIGetStderr in libpetsc.a(vcreatea.o) > _PetscViewerASCIIOpen in libpetsc.a(vcreatea.o) > _PetscViewerASCIIOpen in libpetsc.a(vcreatea.o) > _PetscSequentialPhaseEnd in libpetsc.a(mpiu.o) > _PETSC_VIEWER_BINARY_ in libpetsc.a(binv.o) > _PetscViewerDestroy_ASCII in libpetsc.a(filev.o) > _PetscCommGetNewTag in libpetsc.a(tagm.o) > _PetscCommGetNewTag in libpetsc.a(tagm.o) > _PetscCommDuplicate in libpetsc.a(tagm.o) > _PetscCommDuplicate in libpetsc.a(tagm.o) > _PetscCommDuplicate in libpetsc.a(tagm.o) > _PetscCommDuplicate in libpetsc.a(tagm.o) > _PetscCommDuplicate in libpetsc.a(tagm.o) > _PetscCommDestroy in libpetsc.a(tagm.o) > _PetscCommDestroy in libpetsc.a(tagm.o) > _PetscCommDestroy in libpetsc.a(tagm.o) > _PetscCommDestroy in libpetsc.a(tagm.o) > _PetscSharedTmp in libpetsc.a(fretrieve.o) > _PetscSharedWorkingDirectory in libpetsc.a(fretrieve.o) > _PETSC_VIEWER_SOCKET_ in libpetsc.a(send.o) > _PETSC_VIEWER_DRAW_ in libpetsc.a(drawv.o) > "_Petsc_MPI_Attr_put", referenced from: > _PetscViewerASCIIGetStdout in libpetsc.a(vcreatea.o) > _PetscViewerASCIIGetStderr in libpetsc.a(vcreatea.o) > _PetscViewerASCIIOpen in libpetsc.a(vcreatea.o) > _PetscViewerASCIIOpen in libpetsc.a(vcreatea.o) > _PetscSequentialPhaseBegin in libpetsc.a(mpiu.o) > _PETSC_VIEWER_BINARY_ in libpetsc.a(binv.o) > 
_PetscViewerDestroy_ASCII in libpetsc.a(filev.o) > _PetscCommDuplicate in libpetsc.a(tagm.o) > _PetscCommDuplicate in libpetsc.a(tagm.o) > _PetscCommDuplicate in libpetsc.a(tagm.o) > _PetscSharedTmp in libpetsc.a(fretrieve.o) > _PetscSharedWorkingDirectory in libpetsc.a(fretrieve.o) > _PETSC_VIEWER_SOCKET_ in libpetsc.a(send.o) > _PETSC_VIEWER_DRAW_ in libpetsc.a(drawv.o) > "PetscOptionsGetString(char const*, char const*, char*, unsigned long, > PetscTruth*)", referenced from: > PetscOptionsGetString(char const*, char*, unsigned long, > PetscTruth*)in ccPG7mg3.o > "_Petsc_MPI_Finalize", referenced from: > _PetscFinalize in libpetsc.a(pinit.o) > _Petsc_MPI_DebuggerOnError in libpetsc.a(init.o) > _PetscAttachDebuggerErrorHandler in libpetsc.a(adebug.o) > "PetscOptionsGetRealArray(char const*, char const*, double*, int*, > PetscTruth*)", referenced from: > PetscOptionsGetRealArray(char const*, double*, int*, PetscTruth*)in > ccPG7mg3.o > "PetscOptionsGetInt(char const*, char const*, int*, PetscTruth*)", > referenced from: > PetscOptionsGetInt(char const*, int*, PetscTruth*)in ccPG7mg3.o > "PetscViewerCreate(int, _p_PetscViewer**)", referenced from: > PetscViewerCreate(_p_PetscViewer**) in ccPG7mg3.o > "_Petsc_MPI_Comm_free", referenced from: > _PetscFinalize in libpetsc.a(pinit.o) > _PetscSequentialPhaseEnd in libpetsc.a(mpiu.o) > _PetscCommDestroy in libpetsc.a(tagm.o) > "PetscFinalize()", referenced from: > _main in ccPG7mg3.o > "PetscOptionsHasName(char const*, char const*, PetscTruth*)", referenced > from: > PetscOptionsHasName(char const*, PetscTruth*)in ccPG7mg3.o > "PetscOptionsGetStringArray(char const*, char const*, char**, int*, > PetscTruth*)", referenced from: > PetscOptionsGetStringArray(char const*, char**, int*, PetscTruth*)in > ccPG7mg3.o > "_MPIUNI_Memcpy", referenced from: > _PetscMaxSum in libpetsc.a(pinit.o) > _PetscFinalize in libpetsc.a(pinit.o) > _PetscGlobalMax in libpetsc.a(pinit.o) > _PetscGlobalMin in libpetsc.a(pinit.o) > _PetscGlobalSum in 
libpetsc.a(pinit.o)
> _PetscLogPrintSummary in libpetsc.a(plog.o)
>     [... repeated many times ...]
> _PetscLogPrintDetailed in libpetsc.a(plog.o)
>     [... repeated ...]
> _PetscIntView in libpetsc.a(err.o)
>     [... repeated ...]
> _PetscRealView in libpetsc.a(err.o)
>     [... repeated ...]
> _PetscScalarView in libpetsc.a(err.o)
>     [... repeated ...]
> _PetscSharedTmp in libpetsc.a(fretrieve.o)
> _PetscSharedWorkingDirectory in libpetsc.a(fretrieve.o)
> "_Petsc_MPI_Initialized", referenced from:
>     _PetscInitialize in libpetsc.a(pinit.o)
> "PetscOptionsGetIntArray(char const*, char const*, int*, int*,
> PetscTruth*)", referenced from:
>     PetscOptionsGetIntArray(char const*, int*, int*, PetscTruth*) in
> ccPG7mg3.o
> "PetscSequentialPhaseBegin(int, int)", referenced from:
>     PetscSequentialPhaseBegin(int) in ccPG7mg3.o
>     PetscSequentialPhaseBegin() in ccPG7mg3.o
> "PetscSequentialPhaseEnd(int, int)", referenced from:
>     PetscSequentialPhaseEnd(int) in ccPG7mg3.o
>     PetscSequentialPhaseEnd() in ccPG7mg3.o
> ld: symbol(s) not found
> collect2: ld returned 1 exit status
>
> Is this a problem with my libpetsc.a? Or is this a problem because another
> library is broken or not being linked? Any insight into this problem
> would be greatly appreciated. After several hours of trying to figure this
> out, I feel like I'm lost at sea.
>
> Charles
>
> Postscript:
>
> /Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h: In function
> 'PetscErrorCode PetscViewerCreate(_p_PetscViewer**)':
> /Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:117: error:
> declaration of C function 'PetscErrorCode
> PetscViewerCreate(_p_PetscViewer**)' conflicts with
> /Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:116: error:
> previous declaration 'PetscErrorCode PetscViewerCreate(MPI_Comm,
> _p_PetscViewer**)'
here
> /Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:117: error: invalid
> conversion from 'int' to '_p_PetscViewer**'
> /Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:117: error: too
> many arguments to function 'PetscErrorCode
> PetscViewerCreate(_p_PetscViewer**)'
> /Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:10: error:
> declaration of C function 'PetscErrorCode PetscOptionsHasName(const char*,
> PetscTruth*)' conflicts with
> /Users/charles/pkg/petsc-2.3.3-p15/include/petscoptions.h:9: error: previous
> declaration 'PetscErrorCode PetscOptionsHasName(const char*, const char*,
> PetscTruth*)' here
> [... analogous declaration-conflict and argument-conversion errors for
> PetscOptionsGetInt, PetscOptionsGetTruth, PetscOptionsGetReal,
> PetscOptionsGetScalar, PetscOptionsGetIntArray, PetscOptionsGetRealArray,
> PetscOptionsGetString, and PetscOptionsGetStringArray ...]
> /Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1160: error: declaration
> of C function 'PetscErrorCode PetscInitialize(int*, char***)' conflicts with
> /Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1159: error: previous
> declaration 'PetscErrorCode PetscInitialize(int*, char***, const char*,
> const char*)' here
> [... analogous declaration-conflict errors for PetscSequentialPhaseBegin
> and PetscSequentialPhaseEnd ...]
> /Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1351: error: too many
> arguments to function 'PetscErrorCode PetscSequentialPhaseEnd()'
> /Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1351: error: at this
> point in file
> Laplace.cc: In function 'int main(int, char**)':
> /Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1160: error: too many
> arguments to function 'PetscErrorCode PetscInitialize(int*, char***)'
> Laplace.cc:206: error: at this point in file
>
> This last error doesn't even make sense, as the call conforms to the usage
> specified here (
> http://www.mcs.anl.gov/petsc/petsc-2/snapshots/petsc-dev/docs/manualpages/Sys/PetscInitialize.html
> ).
>

From charlesreid1 at gmail.com  Wed Mar 24 03:02:54 2010
From: charlesreid1 at gmail.com (charles reid)
Date: Wed, 24 Mar 2010 02:02:54 -0600
Subject: [petsc-users] Linking error with C++ code: undefined symbols
In-Reply-To:
References:
Message-ID:

Hi Satish -

Thanks for your response. I think this helped me realize that I was making
things harder than they needed to be. I've managed to get this working, and
here are a few notes:

1. It was not clear to me why the PETSc-provided makefiles worked, because
the compilation of the C code into ".o" files is buried deep inside several
makefiles (e.g. $PETSC_DIR/conf/rules) included by the original example
makefile. So I was trying to compile each of my ".cc" files into ".o" files
manually and then link. In fact, all I had to do was let PETSc take care of
compiling each ".cc" file into a ".o" file; I only had to supply the line
that links the libraries into the driver.

2. The only major change I had to make to the example makefiles to get the
driver to compile and link was to add ${PETSC_CC_INCLUDES} after
${CLINKER}, like this:

Laplace: Laplace.cc $(OBJ_FILES)
	-${CLINKER} ${PETSC_CC_INCLUDES} Laplace.cc -o bin.x $(OBJ_FILES) ${PETSC_KSP_LIB}
	${RM} $(OBJ_FILES)

Otherwise there were some error messages related to header files.

Thanks again for your help.
Charles

2010/3/23 Satish Balay

> For one, we recommend using the latest version, i.e. petsc-3.0.0 [or the
> upcoming petsc-3.1, via petsc-dev] - not this old version 2.3.3.
>
> Secondly - if you want to use PETSc from C++, the recommended way is
> to build it with the configure option '--with-clanguage=cxx'.
>
> And then [after making sure the examples work] use "petsc.h" without
> the "extern C" etc. in your code.
>
> If the examples compile fine but your code does not, the issue is
> usually the makefile [make it as close to a PETSc makefile as possible
> - e.g. src/ksp/ksp/examples/tutorials/makefile].
>
> Satish
>
> On Tue, 23 Mar 2010, charles reid wrote:
>
> > (Please let me know if I can give any additional information that would
> > be helpful for this problem.)
> >
> > I'm trying to use Petsc in an object-oriented C++ code, developing with
> > g++ on Mac OS X 10.5, and I'm running into some problems in the linking
> > stage. I've defined an object that uses Petsc (what I'm calling the
> > GmresSolver class), and the object compiles just fine. However, when it
> > comes time to compile the driver (Laplace.cc) and link to the Petsc
> > libraries, I see a bunch of "Undefined symbol" errors.
> >
> > In my object code that uses Petsc (GmresSolver.h), I have included the
> > Petsc header file as:
> > extern "C" {
> > #include "petscksp.h"
> > }
> >
> > In the driver (Laplace.cc), depending on how I include the Petsc header
> > file, I get different errors. If I include it like I do in GmresSolver.h,
> > extern "C" {
> > #include "petsc.h"
> > }
> >
> > I get a whole slew of header file syntax errors (see postscript of
> > email). If I just include the header file,
> > #include "petsc.h"
> >
> > then I get the undefined symbols problem (more below).
> >
> >
> > My configure line for Petsc is
> > /Users/charles/pkg/petsc-2.3.3-p15/config/configure.py \
> > --prefix=$HOME/pkg/petsc-2.3.3-p15 \
> > --with-python \
> > --with-mpi=0 \
> > --with-debugging=1 \
> > PETSC_DIR=$HOME/pkg/petsc-2.3.3-p15
> >
> > Here's my step-by-step to produce the error:
> >
> > 1. Compile all non-Petsc object code
> >
> > 2. Compile object code that uses Petsc using this command:
> >
> > g++ -c -Wall -I. -I/Users/charles/pkg/petsc-2.3.3-p15
> > -I/Users/charles/pkg/petsc-2.3.3-p15/bmake/darwin9.5.0-c-opt
> > -I/Users/charles/pkg/petsc-2.3.3-p15/include ./GmresSolver.
> >
> > (As mentioned, this works fine.)
> >
> > 3. Compile the driver, "Laplace.cc", and link it to Petsc's libraries:
> >
> > g++ \
> > -I/Users/charles/pkg/petsc-2.3.3-p15/ \
> > -I/Users/charles/pkg/petsc-2.3.3-p15/include \
> > -I/Users/charles/pkg/petsc-2.3.3-p15/include/mpiuni \
> > -I/Users/charles/pkg/petsc-2.3.3-p15/include/petsc \
> > -DPETSC_STATIC_INLINE="" \
> > Laplace.cc \
> > -L/Users/charles/pkg/petsc-2.3.3-p15 \
> > -L/Users/charles/pkg/petsc-2.3.3-p15/lib/darwin9.5.0-c-opt \
> > -lpetscts -lpetscsnes -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc \
> > BoundaryConditionFactory.o BoundaryCondition.o Field.o FileIO.o
> > GmresSolver.o JacobiSolver.o Timer.o TimerFactory.o
> >
> > (Note: I don't know why I need -DPETSC_STATIC_INLINE="", but I do;
> > otherwise I see a bunch of errors like
> > "petsc-2.3.3-p15/include/petscviewer.h:117: error:
> > 'PETSC_STATIC_INLINE' does not name a type" - anyone know what that's
> > all about?)
> >
> >
> > This last compiler command gives the undefined symbols errors:
> >
> > Undefined symbols:
> >   "PetscOptionsGetReal(char const*, char const*, double*, PetscTruth*)",
> >   referenced from:
> >       PetscOptionsGetReal(char const*, double*, PetscTruth*) in ccPG7mg3.o
> >   "_Petsc_MPI_Abort", referenced from:
> >       _PetscMaxSum_Local in libpetsc.a(pinit.o)
> >       _PetscADMax_Local in libpetsc.a(pinit.o)
> >       [... further references throughout libpetsc.a ...]
> >   "_Petsc_MPI_Comm_dup", referenced from:
> >       _PetscFinalize in libpetsc.a(pinit.o)
> >       _PetscSequentialPhaseBegin in libpetsc.a(mpiu.o)
> >       _PetscCommDuplicate in libpetsc.a(tagm.o)
> >   "_Petsc_MPI_Init", referenced from:
> >       _PetscInitialize in libpetsc.a(pinit.o)
> >   "PetscOptionsGetTruth(char const*, char const*, PetscTruth*,
> >   PetscTruth*)", referenced from:
> >       PetscOptionsGetTruth(char const*, PetscTruth*, PetscTruth*) in ccPG7mg3.o
> >   "PetscInitialize(int*, char***, char const*, char const*)", referenced from:
> >       PetscInitialize(int*, char***) in ccPG7mg3.o
> >       _main in ccPG7mg3.o
> >   "_MPIUNI_TMP", referenced from:
> >       _MPIUNI_TMP$non_lazy_ptr in ccPG7mg3.o
> >       [... _MPIUNI_TMP$non_lazy_ptr in nearly every object file in
> >       libpetsc.a ...]
> >   "PetscOptionsGetScalar(char const*, char const*, double*, PetscTruth*)",
> >   referenced from:
> >       PetscOptionsGetScalar(char const*, double*, PetscTruth*) in ccPG7mg3.o
> >   "_Petsc_MPI_Keyval_create", referenced from:
> >       _PetscViewerASCIIGetStdout in libpetsc.a(vcreatea.o)
> >       [... further viewer, tagm.o, fretrieve.o, and mpiu.o references ...]
> >   "_Petsc_MPI_Attr_delete", referenced from:
> >       _PetscSequentialPhaseEnd in libpetsc.a(mpiu.o)
> >       _PetscCommDestroy in libpetsc.a(tagm.o)
> >   "_Petsc_MPI_Attr_get", referenced from:
> >       _PetscViewerASCIIGetStdout in libpetsc.a(vcreatea.o)
> >       [... further viewer, tagm.o, fretrieve.o, send.o, and drawv.o
> >       references ...]
> >   "_Petsc_MPI_Attr_put", referenced from:
> >       _PetscViewerASCIIGetStdout in libpetsc.a(vcreatea.o)
> >       [... further viewer, tagm.o, fretrieve.o, send.o, and drawv.o
> >       references ...]
> >   "PetscOptionsGetString(char const*, char const*, char*, unsigned long,
> >   PetscTruth*)", referenced from:
> >       PetscOptionsGetString(char const*, char*, unsigned long,
> >       PetscTruth*) in ccPG7mg3.o
> >   "_Petsc_MPI_Finalize", referenced from:
> >       _PetscFinalize in libpetsc.a(pinit.o)
> >       _Petsc_MPI_DebuggerOnError in libpetsc.a(init.o)
> >       _PetscAttachDebuggerErrorHandler in libpetsc.a(adebug.o)
> >   "PetscOptionsGetRealArray(char const*, char const*, double*, int*,
> >   PetscTruth*)", referenced from:
> >       PetscOptionsGetRealArray(char const*, double*, int*, PetscTruth*) in
> >       ccPG7mg3.o
> >   "PetscOptionsGetInt(char const*, char const*, int*, PetscTruth*)",
> >   referenced from:
> >       PetscOptionsGetInt(char const*, int*, PetscTruth*) in ccPG7mg3.o
> >   "PetscViewerCreate(int, _p_PetscViewer**)", referenced from:
> >       PetscViewerCreate(_p_PetscViewer**) in ccPG7mg3.o
> >   "_Petsc_MPI_Comm_free", referenced from:
> >       _PetscFinalize in libpetsc.a(pinit.o)
> >       _PetscSequentialPhaseEnd in libpetsc.a(mpiu.o)
> >       _PetscCommDestroy in libpetsc.a(tagm.o)
> >   "PetscFinalize()", referenced from:
> >       _main in ccPG7mg3.o
> >   "PetscOptionsHasName(char const*, char const*, PetscTruth*)",
> >   referenced from:
> >       PetscOptionsHasName(char const*, PetscTruth*) in ccPG7mg3.o
> >   "PetscOptionsGetStringArray(char const*, char const*, char**, int*,
> >   PetscTruth*)", referenced from:
> >       PetscOptionsGetStringArray(char const*, char**, int*, PetscTruth*)
> >       in ccPG7mg3.o
> >   "_MPIUNI_Memcpy", referenced from:
> >       _PetscMaxSum in libpetsc.a(pinit.o)
> >       _PetscFinalize in libpetsc.a(pinit.o)
> >       _PetscLogPrintSummary in libpetsc.a(plog.o)
> >       [... repeated many times, plus _PetscLogPrintDetailed, _PetscIntView,
> >       _PetscRealView, _PetscScalarView, and fretrieve.o references ...]
> >   "_Petsc_MPI_Initialized", referenced from:
> >       _PetscInitialize in libpetsc.a(pinit.o)
> >   "PetscOptionsGetIntArray(char const*, char const*, int*, int*,
> >   PetscTruth*)", referenced from:
> >       PetscOptionsGetIntArray(char const*, int*, int*, PetscTruth*) in
> >       ccPG7mg3.o
> >   "PetscSequentialPhaseBegin(int, int)", referenced from:
> >       PetscSequentialPhaseBegin(int) in ccPG7mg3.o
> >       PetscSequentialPhaseBegin() in ccPG7mg3.o
> >   "PetscSequentialPhaseEnd(int, int)", referenced from:
> >       PetscSequentialPhaseEnd(int) in ccPG7mg3.o
> >       PetscSequentialPhaseEnd() in ccPG7mg3.o
> > ld: symbol(s) not found
> > collect2: ld returned 1 exit status
> >
> > Is this a problem with my libpetsc.a? Or is this a problem because another
> > library is broken or not being linked? Any insight into this problem
> > would be greatly appreciated. After several hours of trying to figure this
> > out, I feel like I'm lost at sea.
> > Charles
> >
> > Postscript:
> >
> > /Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h: In function 'PetscErrorCode PetscViewerCreate(_p_PetscViewer**)':
> > /Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:117: error: declaration of C function 'PetscErrorCode PetscViewerCreate(_p_PetscViewer**)' conflicts with
> > /Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:116: error: previous declaration 'PetscErrorCode PetscViewerCreate(MPI_Comm, _p_PetscViewer**)' here
> > /Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:117: error: invalid conversion from 'int' to '_p_PetscViewer**'
> > /Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:117: error: too many arguments to function 'PetscErrorCode PetscViewerCreate(_p_PetscViewer**)'
> > /Users/charles/pkg/petsc-2.3.3-p15/include/petscviewer.h:117: error: at this point in file
> > [the same pattern of errors -- 'declaration of C function ... conflicts with previous declaration ... here', followed by 'cannot convert ... for argument 2', 'invalid conversion', or 'too many arguments' -- repeats in petscoptions.h for PetscOptionsHasName, PetscOptionsGetInt, PetscOptionsGetTruth, PetscOptionsGetReal, PetscOptionsGetScalar, PetscOptionsGetIntArray, PetscOptionsGetRealArray, PetscOptionsGetString, and PetscOptionsGetStringArray, and in petsc.h for PetscInitialize, PetscSequentialPhaseBegin, and PetscSequentialPhaseEnd]
> > Laplace.cc: In function 'int main(int, char**)':
> > /Users/charles/pkg/petsc-2.3.3-p15/include/petsc.h:1160: error: too many arguments to function 'PetscErrorCode PetscInitialize(int*, char***)'
> > Laplace.cc:206: error: at this point in file
> >
> > This last error doesn't even make sense, as it conforms to the usage
> > specified here (
> > http://www.mcs.anl.gov/petsc/petsc-2/snapshots/petsc-dev/docs/manualpages/Sys/PetscInitialize.html
> > ).

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From knepley at gmail.com Wed Mar 24 05:30:18 2010
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 24 Mar 2010 05:30:18 -0500
Subject: [petsc-users] Linking error with C++ code: undefined symbols
In-Reply-To: References: Message-ID:

On Wed, Mar 24, 2010 at 3:02 AM, charles reid wrote:
> Hi Satish -
>
> Thanks for your response. I think this helped me to realize that I was
> making things harder than they should be.
>
> I've managed to get this working, and here are a few notes:
>
> 1.
It was not clear to me why the PETSc-provided makefiles worked, because
> the compilation of the C code into a ".o" file is buried deep inside several
> makefiles (e.g. $PETSC_DIR/conf/rules) included by the original example
> makefile. So I was trying to compile each of my ".cc" files into ".o" files
> manually, then link. In fact, all I had to do was let PETSc take care of
> compiling each ".cc" file into a ".o" file; I only had to supply the line
> that links the libraries into the driver.
>
> 2. The only major change I had to make to the example makefiles to get the
> driver to compile and link was to add ${PETSC_CC_INCLUDES} after ${CLINKER},
> like this:
>
> Laplace: Laplace.cc $(OBJ_FILES)
> 	-${CLINKER} ${PETSC_CC_INCLUDES} Laplace.cc -o bin.x $(OBJ_FILES) ${PETSC_KSP_LIB}
> 	${RM} $(OBJ_FILES)

Alternatively:

Laplace: Laplace.o $(OBJ_FILES)
	-${CLINKER} -o bin.x Laplace.o $(OBJ_FILES) ${PETSC_KSP_LIB}
	${RM} $(OBJ_FILES)

   Matt

> Otherwise there were some error messages related to header files.
>
> Thanks again for your help.
>
> Charles

--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead. -- Norbert Wiener

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From charlesreid1 at gmail.com Wed Mar 24 13:12:14 2010
From: charlesreid1 at gmail.com (charles reid)
Date: Wed, 24 Mar 2010 12:12:14 -0600
Subject: [petsc-users] Linking error with C++ code: undefined symbols
In-Reply-To: References: Message-ID:

Hi Matt,

That alternative line gives me problems, but works if I add
${PETSC_CC_INCLUDES} to the compiler command.
If I don't, I see this stuff:

g++ -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wall -Wwrite-strings -Wno-strict-aliasing -g Laplace.cc -o bin.x BoundaryConditionFactory.o BoundaryCondition.o Field.o FileIO.o JacobiSolver.o Timer.o TimerFactory.o GmresSolver.o -L/Users/charles/pkg/petsc-3.0.0-p12/darwin9.5.0-c-opt/lib -L/Users/charles/pkg/petsc-3.0.0-p12/darwin9.5.0-c-opt/lib -lpetscksp -lpetscdm -lpetscmat -lpetscvec -lpetsc -L/usr/X11/lib -lX11 -llapack -lblas -L/Users/charles/pkg/petsc-3.0.0-p12/darwin9.5.0-c-opt/lib -lmpiuni -L/usr/lib/i686-apple-darwin9/4.0.1 -L/usr/lib/gcc/i686-apple-darwin9/4.0.1 -ldl -lgcc_s.10.5 -lSystem -lgfortranbegin -lgfortran -L/usr/local/libexec/gcc/i386-apple-darwin9.7.0/4.4.1 -L/usr/local/libexec/gcc/i386-apple-darwin9.7.0 -L/usr/local/lib/gcc/i386-apple-darwin9.7.0/4.4.1 -L/usr/local/lib/gcc/i386-apple-darwin9.7.0 -L/usr/local/lib -lstdc++ -lstdc++ -ldl -lgcc_s.10.5 -lSystem -ldl

In file included from /Users/charles/pkg/petsc-3.0.0-p12/include/petscis.h:7,
                 from /Users/charles/pkg/petsc-3.0.0-p12/include/petscvec.h:9,
                 from /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:6,
                 from /Users/charles/pkg/petsc-3.0.0-p12/include/petscpc.h:6,
                 from /Users/charles/pkg/petsc-3.0.0-p12/include/petscksp.h:6,
                 from GmresSolver.h:10,
                 from Laplace.cc:8:
/Users/charles/pkg/petsc-3.0.0-p12/include/petsc.h:13:23: error: petscconf.h: No such file or directory
/Users/charles/pkg/petsc-3.0.0-p12/include/petsc.h:14:22: error: petscfix.h: No such file or directory
/Users/charles/pkg/petsc-3.0.0-p12/include/private/petscimpl.h:8:21: error: petsc.h: No such file or directory
/Users/charles/pkg/petsc-3.0.0-p12/include/private/vecimpl.h:11:22: error: petscvec.h: No such file or directory
[followed by a long run of errors of the form "'PETSC_STATIC_INLINE' does not name a type" throughout petscviewer.h, petscoptions.h, petsc.h, petscis.h, petscvec.h, private/vecimpl.h, and petscmat.h, plus "'PETSC_IS_COLOR_VALUE_TYPE' does not name a type", "'ISColoringValue' has not been declared", and related errors in petscis.h]
does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:415: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:418: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:419: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:422: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:440: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:442: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:443: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:449: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:450: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:452: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:454: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:459: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:507: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:509: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:514: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:521: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:595: error: ?PETSC_STATIC_INLINE? does not name a type /Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:620: error: ?PETSC_STATIC_INLINE? 
does not name a type
/Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:622: error: 'PETSC_STATIC_INLINE' does not name a type
/Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:624: error: 'PETSC_STATIC_INLINE' does not name a type
/Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:945: error: expected constructor, destructor, or type conversion before 'void'
/Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:959: error: 'PETSC_STATIC_INLINE' does not name a type
/Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:961: error: 'PETSC_STATIC_INLINE' does not name a type
/Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:963: error: 'PETSC_STATIC_INLINE' does not name a type
/Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:966: error: 'PETSC_STATIC_INLINE' does not name a type
/Users/charles/pkg/petsc-3.0.0-p12/include/petscmat.h:1217: error: 'ISColoringValue' has not been declared
Laplace.cc: In function 'int main(int, char**)':
Laplace.cc:207: warning: unused variable 'gmres'
make: [Laplace] Error 1 (ignored)
/bin/rm -f BoundaryConditionFactory.o BoundaryCondition.o Field.o FileIO.o JacobiSolver.o Timer.o TimerFactory.o GmresSolver.o

On Wed, Mar 24, 2010 at 04:30, Matthew Knepley wrote:

> On Wed, Mar 24, 2010 at 3:02 AM, charles reid wrote:
>
>> Hi Satish -
>>
>> Thanks for your response. I think this helped me to realize that I was
>> making things harder than they should be.
>>
>> I've managed to get this working, and here are a few notes:
>>
>> 1. It was not clear to me why the PETSc-provided makefiles worked, because
>> the compilation of the C code into a ".o" file is buried deep inside several
>> makefiles (e.g. $PETSC_DIR/conf/rules) included in the original example
>> makefile. So I was trying to compile each of my ".cc" files into ".o"
>> files manually, then link. In fact, all I had to do was let PETSc take care
>> of compiling each ".cc" file into a ".o" file, and I only had to give the
>> line to link the libraries to the driver.
>>
>> 2. The only major change I had to make to the example makefiles to get the
>> driver to compile and link was to add ${PETSC_CC_INCLUDES} after ${CLINKER},
>> like this:
>>
>> Laplace: Laplace.cc $(OBJ_FILES)
>>         -${CLINKER} ${PETSC_CC_INCLUDES} Laplace.cc -o bin.x $(OBJ_FILES) ${PETSC_KSP_LIB}
>>         ${RM} $(OBJ_FILES)
>
> Alternatively
>
> Laplace: Laplace.o $(OBJ_FILES)
>         -${CLINKER} -o bin.x Laplace.o $(OBJ_FILES) ${PETSC_KSP_LIB}
>         ${RM} $(OBJ_FILES)
>
> Matt
>
>> Otherwise there were some error messages related to header files.
>>
>> Thanks again for your help.
>>
>> Charles
>
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener

From jed at 59A2.org Wed Mar 24 13:20:08 2010
From: jed at 59A2.org (Jed Brown)
Date: Wed, 24 Mar 2010 19:20:08 +0100
Subject: [petsc-users] Linking error with C++ code: undefined symbols
In-Reply-To:
References:
Message-ID: <87fx3psht3.fsf@59A2.org>

On Wed, 24 Mar 2010 12:12:14 -0600, charles reid wrote:
> Hi Matt,
>
> That alternative line gives me problems - but works if I add
> ${PETSC_CC_INCLUDES} to the compiler command. If I don't, I see this stuff:

That's because you didn't do what he said:

> > Laplace: Laplace.o $(OBJ_FILES)
> >         -${CLINKER} -o bin.x Laplace.o $(OBJ_FILES) ${PETSC_KSP_LIB}
            ^^^^^^^^^

That is Laplace.o, not Laplace.cc. There is no reason why one of your
source files has to be compiled differently from the others.
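Matt's version of the link rule can be rounded out into a complete makefile sketch. This is an illustration only: OBJ_FILES, bin.x, and the object names are Charles's own, and the include line uses the $PETSC_DIR/conf path mentioned in the thread for this PETSc version; check your installation for the exact file to include. Recipe lines must be indented with hard tabs.

```makefile
# Hypothetical minimal makefile for the Laplace driver.
# The included PETSc rules compile every .cc (including Laplace.cc)
# into a .o, so ${PETSC_CC_INCLUDES} is not needed on the link line,
# where nothing is compiled.

OBJ_FILES = BoundaryConditionFactory.o BoundaryCondition.o Field.o FileIO.o \
            JacobiSolver.o Timer.o TimerFactory.o GmresSolver.o

include ${PETSC_DIR}/conf/rules

Laplace: Laplace.o $(OBJ_FILES)
	-${CLINKER} -o bin.x Laplace.o $(OBJ_FILES) ${PETSC_KSP_LIB}
	${RM} Laplace.o $(OBJ_FILES)
```

With this layout, `make Laplace` lets the PETSc pattern rules produce each .o and the hand-written rule only links, which is exactly Jed's point about not compiling one source file differently from the others.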
Jed

From hari at iastate.edu Wed Mar 24 13:24:17 2010
From: hari at iastate.edu (Hari Krishna Kodali)
Date: Wed, 24 Mar 2010 13:24:17 -0500
Subject: [petsc-users] a direct solver with support for 64-bit integers
Message-ID: <4BAA58D1.3060604@iastate.edu>

Hi all,

I am looking for a parallel direct solver (as my problem is ill-conditioned and the PETSc-provided preconditioners don't help!) that can use 64-bit integers (due to the large problem size). I tried SUPERLU_DIST and MUMPS, and neither supports 64-bit integer indices. As my problem is not symmetric, I can't use one of the symmetric solvers.

Thanks,
-- Hari Krishna Kodali

From jed at 59A2.org Wed Mar 24 13:38:12 2010
From: jed at 59A2.org (Jed Brown)
Date: Wed, 24 Mar 2010 19:38:12 +0100
Subject: [petsc-users] a direct solver with support for 64-bit integers
In-Reply-To: <4BAA58D1.3060604@iastate.edu>
References: <4BAA58D1.3060604@iastate.edu>
Message-ID: <87eij9sgyz.fsf@59A2.org>

On Wed, 24 Mar 2010 13:24:17 -0500, Hari Krishna Kodali wrote:
> Hi all,
>
> I am looking for a parallel direct solver (as my problem is ill-conditioned
> and the PETSc-provided preconditioners don't help!) that can use 64-bit
> integers (due to the large problem size).

I don't have an answer to your question, but what problem are you solving that has over 2 billion degrees of freedom, yet for which a direct solver is remotely feasible? Or is this a subproblem of a much larger simulation?

If you explain the problem, perhaps someone can suggest a way to solve the system efficiently with iterative methods.
Jed

From hari at iastate.edu Wed Mar 24 13:41:03 2010
From: hari at iastate.edu (Hari Krishna Kodali)
Date: Wed, 24 Mar 2010 13:41:03 -0500
Subject: [petsc-users] a direct solver with support for 64-bit integers
In-Reply-To: <87eij9sgyz.fsf@59A2.org>
References: <4BAA58D1.3060604@iastate.edu> <87eij9sgyz.fsf@59A2.org>
Message-ID: <4BAA5CBF.3030205@iastate.edu>

On 03/24/2010 01:38 PM, Jed Brown wrote:
> On Wed, 24 Mar 2010 13:24:17 -0500, Hari Krishna Kodali wrote:
>> Hi all,
>>
>> I am looking for a parallel direct solver (as my problem is ill-conditioned
>> and the PETSc-provided preconditioners don't help!) that can use 64-bit
>> integers (due to the large problem size).
>
> I don't have an answer to your question, but what problem are you
> solving that has over 2 billion degrees of freedom, but for which a
> direct solver is remotely feasible? Or is this a subproblem of a much
> larger simulation?
>
> If you explain the problem, perhaps someone can suggest a way to solve
> the system efficiently with iterative methods.
>
> Jed

Thanks for the reply, Jed.

I have been using superlu_dist and ran into problems. My two-dimensional, structured-grid, quadrilateral-element finite element code creates a square matrix with 1,200,000 degrees of freedom (dof). superlu_dist works fine with this, but when I try a problem with 1,500,000 dof there are errors.

The dof count is well below 2 billion, but the number of nonzeros in the matrix may have exceeded 2 billion, which could be causing the problem.
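Hari's hypothesis is easy to sanity-check with 64-bit arithmetic. The sketch below is illustrative only: the average fill per row of the factor is an assumed number (fill for a 2D direct factorization grows with bandwidth, and superlu_dist does not report it in this form), but it shows how a factor can blow past the 32-bit index limit even though the matrix dimension is only 1.5 million.

```c
#include <stdint.h>

/* Count factor nonzeros in 64-bit arithmetic so the count itself
   cannot overflow. */
int64_t factor_nnz(int64_t dof, int64_t avg_fill_per_row)
{
    return dof * avg_fill_per_row;
}

/* What the same count becomes when stored in a 32-bit index, as a
   solver built with 32-bit integers would store it. */
int32_t as_32bit_index(int64_t nnz)
{
    return (int32_t)nnz;  /* wraps (implementation-defined) once nnz > 2^31 - 1 */
}
```

With an assumed average fill of 2000 nonzeros per row, factor_nnz(1500000, 2000) is 3,000,000,000, which exceeds INT32_MAX (2,147,483,647), so a 32-bit index wraps to a negative value. Note that building PETSc itself with --with-64-bit-indices does not help here: the external direct solver must also be built with 64-bit integers, which is the sticking point Jed and Barry discuss in the replies.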
Thanks
-- Hari Krishna Kodali

From jed at 59A2.org Wed Mar 24 14:20:32 2010
From: jed at 59A2.org (Jed Brown)
Date: Wed, 24 Mar 2010 20:20:32 +0100
Subject: [petsc-users] a direct solver with support for 64-bit integers
In-Reply-To: <4BAA5CBF.3030205@iastate.edu>
References: <4BAA58D1.3060604@iastate.edu> <87eij9sgyz.fsf@59A2.org> <4BAA5CBF.3030205@iastate.edu>
Message-ID: <87d3ytsf0f.fsf@59A2.org>

On Wed, 24 Mar 2010 13:41:03 -0500, Hari Krishna Kodali wrote:
> I have been using superlu_dist and ran into problems. My two-dimensional,
> structured-grid, quadrilateral-element finite element code creates a square
> matrix with 1,200,000 degrees of freedom (dof). superlu_dist works fine
> with this, but when I try a problem with 1,500,000 dof there are errors.
>
> The dof count is well below 2 billion, but the number of nonzeros in the
> matrix may have exceeded 2 billion, which could be causing the problem.

Ah, I see. MUMPS and PaStiX both claim to support 64-bit integers, but I don't think --download-mumps or --download-pastix currently support this. In the case of MUMPS, it appears rather involved to get BLACS and SCALAPACK built (portably) with 64-bit integers since they were clearly not written with this capability in mind.

Jed

From xy2102 at columbia.edu Wed Mar 24 14:41:38 2010
From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN)
Date: Wed, 24 Mar 2010 15:41:38 -0400
Subject: [petsc-users] DAGlobalToLocalBegin()
Message-ID: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu>

Hi all,

I have an error from

ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD);CHKERRQ(ierr);

where

ierr = DMMGSetSNES(dmmg, FormFunction,0);CHKERRQ(ierr);

is used to set up the SNES.
So I check up the vector size of X, F, localFIELD where ierr = DAGetLocalVector(dafield,&localFIELD);CHKERRQ(ierr); PetscInt nlocalFIELD,nX,nF; ierr = VecGetSize(localFIELD,&nlocalFIELD);CHKERRQ(ierr); ierr = VecGetSize(X,&nX);CHKERRQ(ierr); ierr = VecGetSize(F,&nF);CHKERRQ(ierr); ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); CHKERRQ(ierr); ierr = DAGlobalToLocalEnd(dafield,X,INSERT_VALUES,localFIELD);CHKERRQ(ierr); ierr = DAVecGetArray(dafield,localFIELD,&localfield);CHKERRQ(ierr); (gdb) disp nX 1: nX = 120 (gdb) disp nF 2: nF = 120 (gdb) disp nlocalFIELD 3: nlocalFIELD = 100 (gdb) where #0 FormFunction (snes=0x8a47840, X=0x8a39c20, F=0x8a37570, dummg=0x8a2b810) at twgcqt2unffnictv.c:8382 #1 0x080fb28f in SNESComputeFunction (snes=0x8a47840, x=0x8a39c20, y=0x8a37570) at snes.c:1093 #2 0x08123d1b in SNESSolve_LS (snes=0x8a47840) at ls.c:159 #3 0x08103fac in SNESSolve (snes=0x8a47840, b=0x0, x=0x8a39c20) at snes.c:2242 #4 0x0811db31 in DMMGSolveSNES (dmmg=0x8a2b6b0, level=0) at damgsnes.c:510 #5 0x08117029 in DMMGSolve (dmmg=0x8a2b6b0) at damg.c:313 #6 0x08052ecc in Solve (dmmg=0x8a2b6b0) at twgcqt2unffnictv.c:679 #7 0x0804def5 in main (argc=Cannot access memory at address 0x0 ) at twgcqt2unffnictv.c:303 3: nlocalFIELD = 100 2: nF = 120 1: nX = 100 (gdb) where #0 FormFunction (snes=0x8a47840, X=0x8b39cc0, F=0x8adb2f0, dummg=0x8a2b810) at twgcqt2unffnictv.c:8382 #1 0x0860c63d in MatFDColoringApply_AIJ (J=0x8a6bbb0, coloring=0x8aa7b40, x1=0x8b12a60, flag=0xbfbf4404, sctx=0x8a47840) at fdmatrix.c:680 #2 0x0860abaf in MatFDColoringApply (J=0x8a6bbb0, coloring=0x8aa7b40, x1=0x8b12a60, flag=0xbfbf4404, sctx=0x8a47840) at fdmatrix.c:521 #3 0x08122e45 in SNESDefaultComputeJacobianColor (snes=0x8a47840, x1=0x8b12a60, J=0x8a47910, B=0x8a47914, flag=0xbfbf4404, ctx=0x8aa7b40) at snesj2.c:49 #4 0x0811c7cf in DMMGComputeJacobianWithFD (snes=0x8a47840, x1=0x8a39c20, J=0x8a47910, B=0x8a47914, flag=0xbfbf4404, ctx=0x8a2b810) at damgsnes.c:365 #5 0x0811a883 in 
DMMGComputeJacobian_Multigrid (snes=0x8a47840, X=0x8a39c20, J=0x8a47910, B=0x8a47914, flag=0xbfbf4404, ptr=0x8a2b6b0) at damgsnes.c:60
#6 0x080fc610 in SNESComputeJacobian (snes=0x8a47840, X=0x8a39c20, A=0x8a47910, B=0x8a47914, flg=0xbfbf4404) at snes.c:1188
#7 0x08124471 in SNESSolve_LS (snes=0x8a47840) at ls.c:189
#8 0x08103fac in SNESSolve (snes=0x8a47840, b=0x0, x=0x8a39c20) at snes.c:2242
#9 0x0811db31 in DMMGSolveSNES (dmmg=0x8a2b6b0, level=0) at damgsnes.c:510
#10 0x08117029 in DMMGSolve (dmmg=0x8a2b6b0) at damg.c:313
#11 0x08052ecc in Solve (dmmg=0x8a2b6b0) at twgcqt2unffnictv.c:679
#12 0x0804def5 in main (argc=Cannot access memory at address 0x1) at twgcqt2unffnictv.c:303

Why does nX change from 120 to 100? Is X a global vector or a local vector?

Thanks very much!

--
(Rebecca) Xuefei YUAN
Department of Applied Physics and Applied Mathematics
Columbia University
Tel: 917-399-8032
www.columbia.edu/~xy2102

From knepley at gmail.com Wed Mar 24 14:46:26 2010
From: knepley at gmail.com (Matthew Knepley)
Date: Wed, 24 Mar 2010 14:46:26 -0500
Subject: [petsc-users] DAGlobalToLocalBegin()
In-Reply-To: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu>
References: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu>
Message-ID:

It is really impossible to see what is happening in your code from this post. Is this FormFunction or FormFunctionLocal?

Matt

On Wed, Mar 24, 2010 at 2:41 PM, (Rebecca) Xuefei YUAN wrote:

> Hi all,
>
> I have an error from
>
> ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD);
> CHKERRQ(ierr);
>
> where
>
> ierr = DMMGSetSNES(dmmg, FormFunction,0);CHKERRQ(ierr); is used to set up
> the SNES.
> > So I check up the vector size of X, F, localFIELD where > ierr = DAGetLocalVector(dafield,&localFIELD);CHKERRQ(ierr); > PetscInt nlocalFIELD,nX,nF; > ierr = VecGetSize(localFIELD,&nlocalFIELD);CHKERRQ(ierr); > ierr = VecGetSize(X,&nX);CHKERRQ(ierr); > ierr = VecGetSize(F,&nF);CHKERRQ(ierr); > ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); > CHKERRQ(ierr); > ierr = > DAGlobalToLocalEnd(dafield,X,INSERT_VALUES,localFIELD);CHKERRQ(ierr); > ierr = DAVecGetArray(dafield,localFIELD,&localfield);CHKERRQ(ierr); > > > (gdb) disp nX > 1: nX = 120 > (gdb) disp nF > 2: nF = 120 > (gdb) disp nlocalFIELD > 3: nlocalFIELD = 100 > (gdb) where > #0 FormFunction (snes=0x8a47840, X=0x8a39c20, F=0x8a37570, > dummg=0x8a2b810) > at twgcqt2unffnictv.c:8382 > #1 0x080fb28f in SNESComputeFunction (snes=0x8a47840, x=0x8a39c20, > y=0x8a37570) at snes.c:1093 > #2 0x08123d1b in SNESSolve_LS (snes=0x8a47840) at ls.c:159 > #3 0x08103fac in SNESSolve (snes=0x8a47840, b=0x0, x=0x8a39c20) at > snes.c:2242 > #4 0x0811db31 in DMMGSolveSNES (dmmg=0x8a2b6b0, level=0) at damgsnes.c:510 > #5 0x08117029 in DMMGSolve (dmmg=0x8a2b6b0) at damg.c:313 > #6 0x08052ecc in Solve (dmmg=0x8a2b6b0) at twgcqt2unffnictv.c:679 > #7 0x0804def5 in main (argc=Cannot access memory at address 0x0 > ) at twgcqt2unffnictv.c:303 > > 3: nlocalFIELD = 100 > 2: nF = 120 > 1: nX = 100 > (gdb) where > #0 FormFunction (snes=0x8a47840, X=0x8b39cc0, F=0x8adb2f0, > dummg=0x8a2b810) > at twgcqt2unffnictv.c:8382 > #1 0x0860c63d in MatFDColoringApply_AIJ (J=0x8a6bbb0, coloring=0x8aa7b40, > x1=0x8b12a60, flag=0xbfbf4404, sctx=0x8a47840) at fdmatrix.c:680 > #2 0x0860abaf in MatFDColoringApply (J=0x8a6bbb0, coloring=0x8aa7b40, > x1=0x8b12a60, flag=0xbfbf4404, sctx=0x8a47840) at fdmatrix.c:521 > #3 0x08122e45 in SNESDefaultComputeJacobianColor (snes=0x8a47840, > x1=0x8b12a60, J=0x8a47910, B=0x8a47914, flag=0xbfbf4404, ctx=0x8aa7b40) > at snesj2.c:49 > #4 0x0811c7cf in DMMGComputeJacobianWithFD (snes=0x8a47840, 
x1=0x8a39c20, > J=0x8a47910, B=0x8a47914, flag=0xbfbf4404, ctx=0x8a2b810) at > damgsnes.c:365 > #5 0x0811a883 in DMMGComputeJacobian_Multigrid (snes=0x8a47840, > X=0x8a39c20, > J=0x8a47910, B=0x8a47914, flag=0xbfbf4404, ptr=0x8a2b6b0) at > damgsnes.c:60 > #6 0x080fc610 in SNESComputeJacobian (snes=0x8a47840, X=0x8a39c20, > A=0x8a47910, B=0x8a47914, flg=0xbfbf4404) at snes.c:1188 > #7 0x08124471 in SNESSolve_LS (snes=0x8a47840) at ls.c:189 > #8 0x08103fac in SNESSolve (snes=0x8a47840, b=0x0, x=0x8a39c20) at > snes.c:2242 > #9 0x0811db31 in DMMGSolveSNES (dmmg=0x8a2b6b0, level=0) at damgsnes.c:510 > #10 0x08117029 in DMMGSolve (dmmg=0x8a2b6b0) at damg.c:313 > #11 0x08052ecc in Solve (dmmg=0x8a2b6b0) at twgcqt2unffnictv.c:679 > #12 0x0804def5 in main (argc=Cannot access memory at address 0x1 > ) at twgcqt2unffnictv.c:303 > > Why nX changes from 120 to 100? Is X a global vector or a local vector? > > Thanks very much! > > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From jed at 59A2.org Wed Mar 24 14:55:56 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 24 Mar 2010 20:55:56 +0100 Subject: [petsc-users] DAGlobalToLocalBegin() In-Reply-To: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu> References: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu> Message-ID: <87aatxsddf.fsf@59A2.org> On Wed, 24 Mar 2010 15:41:38 -0400, "(Rebecca) Xuefei YUAN" wrote: > Hi,all, > > I have an error from > > ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); > CHKERRQ(ierr); > > where > > ierr = DMMGSetSNES(dmmg, FormFunction,0);CHKERRQ(ierr); is used for > set up the SNES. 
> > So I check up the vector size of X, F, localFIELD where > ierr = DAGetLocalVector(dafield,&localFIELD);CHKERRQ(ierr); > PetscInt nlocalFIELD,nX,nF; > ierr = VecGetSize(localFIELD,&nlocalFIELD);CHKERRQ(ierr); > ierr = VecGetSize(X,&nX);CHKERRQ(ierr); > ierr = VecGetSize(F,&nF);CHKERRQ(ierr); > ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); > CHKERRQ(ierr); > ierr = DAGlobalToLocalEnd(dafield,X,INSERT_VALUES,localFIELD);CHKERRQ(ierr); > ierr = DAVecGetArray(dafield,localFIELD,&localfield);CHKERRQ(ierr); > > > (gdb) disp nX > 1: nX = 120 > (gdb) disp nF > 2: nF = 120 > (gdb) disp nlocalFIELD > 3: nlocalFIELD = 100 Is this run in parallel? Note that the sizes of X and F are global, while localFIELD is serial. What error did you get? Matt, it's clearly FormFunction and not FormFunctionLocal because the function prototype has the SNES. Jed From bsmith at mcs.anl.gov Wed Mar 24 14:53:28 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 24 Mar 2010 14:53:28 -0500 Subject: [petsc-users] a direct solver with support for 64-bit integers In-Reply-To: <87d3ytsf0f.fsf@59A2.org> References: <4BAA58D1.3060604@iastate.edu> <87eij9sgyz.fsf@59A2.org> <4BAA5CBF.3030205@iastate.edu> <87d3ytsf0f.fsf@59A2.org> Message-ID: <80DCA21C-E8CD-4D7B-A5B4-EEF46BF8AD69@mcs.anl.gov> On Mar 24, 2010, at 2:20 PM, Jed Brown wrote: > On Wed, 24 Mar 2010 13:41:03 -0500, Hari Krishna Kodali > wrote: >> I have been using superlu_dist and ran into problems. My two >> dimensional, structured grid, quadrilateral elements, finite element >> code creates a square matrix of degrees of freedom (dof) >> 1,200,000. superlu_dist works fine with this, but when i try a >> problem >> of dof 1,500,000 there are errors. >> >> The dof is well below 2 billion but the number of nonzeroes in the >> matrix may have exceeded 2 billion which could be causing the >> problem. > > Ah, I see. 
> MUMPS and PaStiX both claim to support 64-bit integers, but
> I don't think --download-mumps or --download-pastix currently support
> this. In the case of MUMPS, it appears rather involved to get BLACS
> and SCALAPACK built (portably) with 64-bit integers since they were
> clearly not written with this capability in mind.

I investigated this and it is all crap stuff. Compile the Fortran with -i8 and other nonsense. This is becoming a serious hole in the direct-solver community, one that none of these packages has yet portably and properly overcome.

Barry

> Jed

From xy2102 at columbia.edu Wed Mar 24 14:57:30 2010
From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN)
Date: Wed, 24 Mar 2010 15:57:30 -0400
Subject: [petsc-users] DAGlobalToLocalBegin()
In-Reply-To:
References: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu>
Message-ID: <20100324155730.p6v5c40qsk88s8og@cubmail.cc.columbia.edu>

Dear Matt,

It is FormFunction(), not FormFunctionLocal(). I also checked ex25.c
http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/snes/examples/tutorials/ex25.c.html
by commenting out the DMMGSetSNESLocal() call:

ierr = DMMGSetSNES(dmmg,FormFunction,0);CHKERRQ(ierr);
// ierr = DMMGSetSNESLocal(dmmg,FormFunctionLocal,0,ad_FormFunctionLocal,0);CHKERRQ(ierr);

The same error happened in ex25:

Breakpoint 1, FormFunction (snes=0x8a45890, T=0x8a923e0, F=0x8a7a1f0, ptr=0x8985760) at ex25.c:122
122     ierr = DAGlobalToLocalBegin((DA)dmmg->dm,T,INSERT_VALUES,localT);CHKERRQ(ierr);
3: nF = 289
2: nT = 170
1: nlocalT = 170
(gdb) n

Program received signal SIGABRT, Aborted.
0xb7ee9410 in __kernel_vsyscall () (gdb) where #0 0xb7ee9410 in __kernel_vsyscall () #1 0xb7c69085 in raise () from /lib/tls/i686/cmov/libc.so.6 #2 0xb7c6aa01 in abort () from /lib/tls/i686/cmov/libc.so.6 #3 0x087306a1 in PetscAbortErrorHandler (line=1538, fun=0x88584bd "VecScatterBegin", file=0x8857c47 "vscat.c", dir=0x8857c4f "src/vec/vec/utils/", n=60, p=1, mess=0xbfbae664 "Vector wrong size 170 for scatter 153 (scatter forward and vector from != ctx from size)", ctx=0x0) at errabort.c:62 #4 0x086a5612 in PetscError (line=1538, func=0x88584bd "VecScatterBegin", file=0x8857c47 "vscat.c", dir=0x8857c4f "src/vec/vec/utils/", n=60, p=1, mess=0x88585f0 "Vector wrong size %D for scatter %D (scatter forward and vec tor from != ctx from size)") at err.c:482 #5 0x085b0650 in VecScatterBegin (inctx=0x89a0010, x=0x8a923e0, y=0x8a93c60, addv=INSERT_VALUES, mode=SCATTER_FORWARD) at vscat.c:1538 #6 0x081e62d0 in DAGlobalToLocalBegin (da=0x898ad80, g=0x8a923e0, mode=INSERT_VALUES, l=0x8a93c60) at dagtol.c:50 #7 0x0804c142 in FormFunction (snes=0x8a45890, T=0x8a923e0, F=0x8a7a1f0, ptr=0x8985760) at ex25.c:122 #8 0x085673c1 in MatFDColoringApply_AIJ (J=0x8a46b70, coloring=0x8a757c0, x1=0x8a8e560, flag=0xbfbaf354, sctx=0x8a45890) at fdmatrix.c:680 #9 0x08565933 in MatFDColoringApply (J=0x8a46b70, coloring=0x8a757c0, x1=0x8a8e560, flag=0xbfbaf354, sctx=0x8a45890) at fdmatrix.c:521 #10 0x0807dbc9 in SNESDefaultComputeJacobianColor (snes=0x8a45890, ---Type to continue, or q to quit--- x1=0x8a8e560, J=0x8a45960, B=0x8a45964, flag=0xbfbaf354, ctx=0x8a757c0) at snesj2.c:49 #11 0x08077553 in DMMGComputeJacobianWithFD (snes=0x8a45890, x1=0x8988eb0, J=0x8a45960, B=0x8a45964, flag=0xbfbaf354, ctx=0x8985760) at damgsnes.c:365 #12 0x08075607 in DMMGComputeJacobian_Multigrid (snes=0x8a45890, X=0x8988eb0, J=0x8a45960, B=0x8a45964, flag=0xbfbaf354, ptr=0x8984f40) at damgsnes.c:60 #13 0x08057394 in SNESComputeJacobian (snes=0x8a45890, X=0x8988eb0, A=0x8a45960, B=0x8a45964, flg=0xbfbaf354) at 
snes.c:1188 #14 0x0807f1f5 in SNESSolve_LS (snes=0x8a45890) at ls.c:189 #15 0x0805ed30 in SNESSolve (snes=0x8a45890, b=0x0, x=0x8988eb0) at snes.c:2242 #16 0x080788b5 in DMMGSolveSNES (dmmg=0x8984f40, level=2) at damgsnes.c:510 #17 0x08071dad in DMMGSolve (dmmg=0x8984f40) at damg.c:313 #18 0x0804ba07 in main (argc=Cannot access memory at address 0x1792 ) at ex25.c:84 (gdb) So I am not sure how this is wrong. Thanks a lot! Rebecca Quoting Matthew Knepley : > It really impossible to see what is happening in your code from this post. > Is this > FormFunction or FormFunctionLocal? > > Matt > > On Wed, Mar 24, 2010 at 2:41 PM, (Rebecca) Xuefei YUAN > wrote: > >> Hi,all, >> >> I have an error from >> >> ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); >> CHKERRQ(ierr); >> >> where >> >> ierr = DMMGSetSNES(dmmg, FormFunction,0);CHKERRQ(ierr); is used for set up >> the SNES. >> >> So I check up the vector size of X, F, localFIELD where >> ierr = DAGetLocalVector(dafield,&localFIELD);CHKERRQ(ierr); >> PetscInt nlocalFIELD,nX,nF; >> ierr = VecGetSize(localFIELD,&nlocalFIELD);CHKERRQ(ierr); >> ierr = VecGetSize(X,&nX);CHKERRQ(ierr); >> ierr = VecGetSize(F,&nF);CHKERRQ(ierr); >> ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); >> CHKERRQ(ierr); >> ierr = >> DAGlobalToLocalEnd(dafield,X,INSERT_VALUES,localFIELD);CHKERRQ(ierr); >> ierr = DAVecGetArray(dafield,localFIELD,&localfield);CHKERRQ(ierr); >> >> >> (gdb) disp nX >> 1: nX = 120 >> (gdb) disp nF >> 2: nF = 120 >> (gdb) disp nlocalFIELD >> 3: nlocalFIELD = 100 >> (gdb) where >> #0 FormFunction (snes=0x8a47840, X=0x8a39c20, F=0x8a37570, >> dummg=0x8a2b810) >> at twgcqt2unffnictv.c:8382 >> #1 0x080fb28f in SNESComputeFunction (snes=0x8a47840, x=0x8a39c20, >> y=0x8a37570) at snes.c:1093 >> #2 0x08123d1b in SNESSolve_LS (snes=0x8a47840) at ls.c:159 >> #3 0x08103fac in SNESSolve (snes=0x8a47840, b=0x0, x=0x8a39c20) at >> snes.c:2242 >> #4 0x0811db31 in DMMGSolveSNES (dmmg=0x8a2b6b0, level=0) at 
damgsnes.c:510 >> #5 0x08117029 in DMMGSolve (dmmg=0x8a2b6b0) at damg.c:313 >> #6 0x08052ecc in Solve (dmmg=0x8a2b6b0) at twgcqt2unffnictv.c:679 >> #7 0x0804def5 in main (argc=Cannot access memory at address 0x0 >> ) at twgcqt2unffnictv.c:303 >> >> 3: nlocalFIELD = 100 >> 2: nF = 120 >> 1: nX = 100 >> (gdb) where >> #0 FormFunction (snes=0x8a47840, X=0x8b39cc0, F=0x8adb2f0, >> dummg=0x8a2b810) >> at twgcqt2unffnictv.c:8382 >> #1 0x0860c63d in MatFDColoringApply_AIJ (J=0x8a6bbb0, coloring=0x8aa7b40, >> x1=0x8b12a60, flag=0xbfbf4404, sctx=0x8a47840) at fdmatrix.c:680 >> #2 0x0860abaf in MatFDColoringApply (J=0x8a6bbb0, coloring=0x8aa7b40, >> x1=0x8b12a60, flag=0xbfbf4404, sctx=0x8a47840) at fdmatrix.c:521 >> #3 0x08122e45 in SNESDefaultComputeJacobianColor (snes=0x8a47840, >> x1=0x8b12a60, J=0x8a47910, B=0x8a47914, flag=0xbfbf4404, ctx=0x8aa7b40) >> at snesj2.c:49 >> #4 0x0811c7cf in DMMGComputeJacobianWithFD (snes=0x8a47840, x1=0x8a39c20, >> J=0x8a47910, B=0x8a47914, flag=0xbfbf4404, ctx=0x8a2b810) at >> damgsnes.c:365 >> #5 0x0811a883 in DMMGComputeJacobian_Multigrid (snes=0x8a47840, >> X=0x8a39c20, >> J=0x8a47910, B=0x8a47914, flag=0xbfbf4404, ptr=0x8a2b6b0) at >> damgsnes.c:60 >> #6 0x080fc610 in SNESComputeJacobian (snes=0x8a47840, X=0x8a39c20, >> A=0x8a47910, B=0x8a47914, flg=0xbfbf4404) at snes.c:1188 >> #7 0x08124471 in SNESSolve_LS (snes=0x8a47840) at ls.c:189 >> #8 0x08103fac in SNESSolve (snes=0x8a47840, b=0x0, x=0x8a39c20) at >> snes.c:2242 >> #9 0x0811db31 in DMMGSolveSNES (dmmg=0x8a2b6b0, level=0) at damgsnes.c:510 >> #10 0x08117029 in DMMGSolve (dmmg=0x8a2b6b0) at damg.c:313 >> #11 0x08052ecc in Solve (dmmg=0x8a2b6b0) at twgcqt2unffnictv.c:679 >> #12 0x0804def5 in main (argc=Cannot access memory at address 0x1 >> ) at twgcqt2unffnictv.c:303 >> >> Why nX changes from 120 to 100? Is X a global vector or a local vector? >> >> Thanks very much! 
>> >> >> >> -- >> (Rebecca) Xuefei YUAN >> Department of Applied Physics and Applied Mathematics >> Columbia University >> Tel:917-399-8032 >> www.columbia.edu/~xy2102 >> >> > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. > -- Norbert Wiener > -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From xy2102 at columbia.edu Wed Mar 24 15:01:15 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Wed, 24 Mar 2010 16:01:15 -0400 Subject: [petsc-users] DAGlobalToLocalBegin() In-Reply-To: <87aatxsddf.fsf@59A2.org> References: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu> <87aatxsddf.fsf@59A2.org> Message-ID: <20100324160115.lfl4fu25wc0g8gk0@cubmail.cc.columbia.edu> Dear Jed and Matt, Yes, X and F are global and localFIELD is serial. I ran with np=2. The error I get is: ************************************************** 0 SNES Function norm 1.095445115010e+01 [1]PETSC ERROR: VecScatterBegin() line 1538 in src/vec/vec/utils/vscat.c Vector wrong size 100 for scatter 60 (scatter forward and vector from != ctx from size) [0]PETSC ERROR: VecScatterBegin() line 1538 in src/vec/vec/utils/vscat.c Vector wrong size 100 for scatter 60 (scatter forward and vector from != ctx from size) from gdb, I get: Program received signal SIGABRT, Aborted. 
[Switching to Thread 0xb7c396b0 (LWP 6301)] 0xb7f09410 in __kernel_vsyscall () (gdb) where #0 0xb7f09410 in __kernel_vsyscall () #1 0xb7c89085 in raise () from /lib/tls/i686/cmov/libc.so.6 #2 0xb7c8aa01 in abort () from /lib/tls/i686/cmov/libc.so.6 #3 0x087d591d in PetscAbortErrorHandler (line=1538, fun=0x88fe26d "VecScatterBegin", file=0x88fd9f7 "vscat.c", dir=0x88fd9ff "src/vec/vec/utils/", n=60, p=1, mess=0xbfd31124 "Vector wrong size 100 for scatter 60 (scatter forward and v ector from != ctx from size)", ctx=0x0) at errabort.c:62 #4 0x0874a88e in PetscError (line=1538, func=0x88fe26d "VecScatterBegin", file=0x88fd9f7 "vscat.c", dir=0x88fd9ff "src/vec/vec/utils/", n=60, p=1, mess=0x88fe3a0 "Vector wrong size %D for scatter %D (scatter forward and vec tor from != ctx from size)") at err.c:482 #5 0x086558cc in VecScatterBegin (inctx=0x8a393b0, x=0x8b39cc0, y=0x8b3b310, addv=INSERT_VALUES, mode=SCATTER_FORWARD) at vscat.c:1538 #6 0x0828b54c in DAGlobalToLocalBegin (da=0x8a2d360, g=0x8b39cc0, mode=INSERT_VALUES, l=0x8b3b310) at dagtol.c:50 #7 0x080f1b25 in FormFunction (snes=0x8a47840, X=0x8b39cc0, F=0x8adb2f0, dummg=0x8a2b810) at twgcqt2unffnictv.c:8382 #8 0x0860c63d in MatFDColoringApply_AIJ (J=0x8a6bbb0, coloring=0x8aa7b40, x1=0x8b12a60, flag=0xbfd32164, sctx=0x8a47840) at fdmatrix.c:680 #9 0x0860abaf in MatFDColoringApply (J=0x8a6bbb0, coloring=0x8aa7b40, x1=0x8b12a60, flag=0xbfd32164, sctx=0x8a47840) at fdmatrix.c:521 #10 0x08122e45 in SNESDefaultComputeJacobianColor (snes=0x8a47840, ---Type to continue, or q to quit--- x1=0x8b12a60, J=0x8a47910, B=0x8a47914, flag=0xbfd32164, ctx=0x8aa7b40) at snesj2.c:49 #11 0x0811c7cf in DMMGComputeJacobianWithFD (snes=0x8a47840, x1=0x8a39c20, J=0x8a47910, B=0x8a47914, flag=0xbfd32164, ctx=0x8a2b810) at damgsnes.c:365 #12 0x0811a883 in DMMGComputeJacobian_Multigrid (snes=0x8a47840, X=0x8a39c20, J=0x8a47910, B=0x8a47914, flag=0xbfd32164, ptr=0x8a2b6b0) at damgsnes.c:60 #13 0x080fc610 in SNESComputeJacobian (snes=0x8a47840, 
X=0x8a39c20, A=0x8a47910, B=0x8a47914, flg=0xbfd32164) at snes.c:1188 #14 0x08124471 in SNESSolve_LS (snes=0x8a47840) at ls.c:189 #15 0x08103fac in SNESSolve (snes=0x8a47840, b=0x0, x=0x8a39c20) at snes.c:2242 #16 0x0811db31 in DMMGSolveSNES (dmmg=0x8a2b6b0, level=0) at damgsnes.c:510 #17 0x08117029 in DMMGSolve (dmmg=0x8a2b6b0) at damg.c:313 #18 0x08052ecc in Solve (dmmg=0x8a2b6b0) at twgcqt2unffnictv.c:679 #19 0x0804def5 in main (argc=Cannot access memory at address 0x189d ) at twgcqt2unffnictv.c:303 Same things happened to ex25.c from http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/snes/examples/tutorials/ex25.c.html with ierr = DMMGSetSNES(dmmg,FormFunction,0);CHKERRQ(ierr); // ierr = DMMGSetSNESLocal(dmmg,FormFunctionLocal,0,ad_FormFunctionLocal,0);CHKERRQ(ierr); Thanks a lot! Rebecca Quoting Jed Brown : > On Wed, 24 Mar 2010 15:41:38 -0400, "(Rebecca) Xuefei YUAN" > wrote: >> Hi,all, >> >> I have an error from >> >> ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); >> CHKERRQ(ierr); >> >> where >> >> ierr = DMMGSetSNES(dmmg, FormFunction,0);CHKERRQ(ierr); is used for >> set up the SNES. >> >> So I check up the vector size of X, F, localFIELD where >> ierr = DAGetLocalVector(dafield,&localFIELD);CHKERRQ(ierr); >> PetscInt nlocalFIELD,nX,nF; >> ierr = VecGetSize(localFIELD,&nlocalFIELD);CHKERRQ(ierr); >> ierr = VecGetSize(X,&nX);CHKERRQ(ierr); >> ierr = VecGetSize(F,&nF);CHKERRQ(ierr); >> ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); >> CHKERRQ(ierr); >> ierr = >> DAGlobalToLocalEnd(dafield,X,INSERT_VALUES,localFIELD);CHKERRQ(ierr); >> ierr = DAVecGetArray(dafield,localFIELD,&localfield);CHKERRQ(ierr); >> >> >> (gdb) disp nX >> 1: nX = 120 >> (gdb) disp nF >> 2: nF = 120 >> (gdb) disp nlocalFIELD >> 3: nlocalFIELD = 100 > > Is this run in parallel? Note that the sizes of X and F are global, > while localFIELD is serial. What error did you get? 
> > > Matt, it's clearly FormFunction and not FormFunctionLocal because the > function prototype has the SNES. > > Jed > > -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From knepley at gmail.com Wed Mar 24 15:16:33 2010 From: knepley at gmail.com (Matthew Knepley) Date: Wed, 24 Mar 2010 15:16:33 -0500 Subject: [petsc-users] DAGlobalToLocalBegin() In-Reply-To: <20100324160115.lfl4fu25wc0g8gk0@cubmail.cc.columbia.edu> References: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu> <87aatxsddf.fsf@59A2.org> <20100324160115.lfl4fu25wc0g8gk0@cubmail.cc.columbia.edu> Message-ID: On Wed, Mar 24, 2010 at 3:01 PM, (Rebecca) Xuefei YUAN wrote: > Dear Jed and Matt, > This is a genuine bug. It is in DMMGComputeJacobianWithFD() and only occurs when IS_COLORING_GHOSTED is true. So, using IS_COLORING_GLOBAL would probably work here (can't look up the option right now). Barry, what is supposed to happen here? Clearly a local vector is being passed where a global vector is expected (at least part of the time). Matt > Yes, X and F are global and localFIELD is serial. I ran with np=2. > > The error I get is: > > > ************************************************** > 0 SNES Function norm 1.095445115010e+01 > [1]PETSC ERROR: VecScatterBegin() line 1538 in src/vec/vec/utils/vscat.c > Vector wrong size 100 for scatter 60 (scatter forward and vector from != ctx > from size) > [0]PETSC ERROR: VecScatterBegin() line 1538 in src/vec/vec/utils/vscat.c > Vector wrong size 100 for scatter 60 (scatter forward and vector from != ctx > from size) > > > from gdb, I get: > > > Program received signal SIGABRT, Aborted. 
> [Switching to Thread 0xb7c396b0 (LWP 6301)] > 0xb7f09410 in __kernel_vsyscall () > (gdb) where > #0 0xb7f09410 in __kernel_vsyscall () > #1 0xb7c89085 in raise () from /lib/tls/i686/cmov/libc.so.6 > #2 0xb7c8aa01 in abort () from /lib/tls/i686/cmov/libc.so.6 > #3 0x087d591d in PetscAbortErrorHandler (line=1538, > fun=0x88fe26d "VecScatterBegin", file=0x88fd9f7 "vscat.c", > dir=0x88fd9ff "src/vec/vec/utils/", n=60, p=1, > mess=0xbfd31124 "Vector wrong size 100 for scatter 60 (scatter forward > and v > ector from != ctx from size)", ctx=0x0) at errabort.c:62 > #4 0x0874a88e in PetscError (line=1538, func=0x88fe26d "VecScatterBegin", > file=0x88fd9f7 "vscat.c", dir=0x88fd9ff "src/vec/vec/utils/", n=60, p=1, > mess=0x88fe3a0 "Vector wrong size %D for scatter %D (scatter forward and > vec > tor from != ctx from size)") at err.c:482 > #5 0x086558cc in VecScatterBegin (inctx=0x8a393b0, x=0x8b39cc0, > y=0x8b3b310, > addv=INSERT_VALUES, mode=SCATTER_FORWARD) at vscat.c:1538 > #6 0x0828b54c in DAGlobalToLocalBegin (da=0x8a2d360, g=0x8b39cc0, > mode=INSERT_VALUES, l=0x8b3b310) at dagtol.c:50 > #7 0x080f1b25 in FormFunction (snes=0x8a47840, X=0x8b39cc0, F=0x8adb2f0, > dummg=0x8a2b810) at twgcqt2unffnictv.c:8382 > #8 0x0860c63d in MatFDColoringApply_AIJ (J=0x8a6bbb0, coloring=0x8aa7b40, > x1=0x8b12a60, flag=0xbfd32164, sctx=0x8a47840) at fdmatrix.c:680 > #9 0x0860abaf in MatFDColoringApply (J=0x8a6bbb0, coloring=0x8aa7b40, > x1=0x8b12a60, flag=0xbfd32164, sctx=0x8a47840) at fdmatrix.c:521 > #10 0x08122e45 in SNESDefaultComputeJacobianColor (snes=0x8a47840, > ---Type to continue, or q to quit--- > x1=0x8b12a60, J=0x8a47910, B=0x8a47914, flag=0xbfd32164, ctx=0x8aa7b40) > at snesj2.c:49 > #11 0x0811c7cf in DMMGComputeJacobianWithFD (snes=0x8a47840, x1=0x8a39c20, > J=0x8a47910, B=0x8a47914, flag=0xbfd32164, ctx=0x8a2b810) at > damgsnes.c:365 > #12 0x0811a883 in DMMGComputeJacobian_Multigrid (snes=0x8a47840, > X=0x8a39c20, > J=0x8a47910, B=0x8a47914, flag=0xbfd32164, 
ptr=0x8a2b6b0) at > damgsnes.c:60 > #13 0x080fc610 in SNESComputeJacobian (snes=0x8a47840, X=0x8a39c20, > A=0x8a47910, B=0x8a47914, flg=0xbfd32164) at snes.c:1188 > #14 0x08124471 in SNESSolve_LS (snes=0x8a47840) at ls.c:189 > #15 0x08103fac in SNESSolve (snes=0x8a47840, b=0x0, x=0x8a39c20) at > snes.c:2242 > #16 0x0811db31 in DMMGSolveSNES (dmmg=0x8a2b6b0, level=0) at damgsnes.c:510 > #17 0x08117029 in DMMGSolve (dmmg=0x8a2b6b0) at damg.c:313 > #18 0x08052ecc in Solve (dmmg=0x8a2b6b0) at twgcqt2unffnictv.c:679 > #19 0x0804def5 in main (argc=Cannot access memory at address 0x189d > ) at twgcqt2unffnictv.c:303 > > > Same things happened to ex25.c from > > > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/snes/examples/tutorials/ex25.c.html > > > with > > ierr = DMMGSetSNES(dmmg,FormFunction,0);CHKERRQ(ierr); > // ierr = > DMMGSetSNESLocal(dmmg,FormFunctionLocal,0,ad_FormFunctionLocal,0);CHKERRQ(ierr); > > > Thanks a lot! > > Rebecca > > > > > Quoting Jed Brown : > > On Wed, 24 Mar 2010 15:41:38 -0400, "(Rebecca) Xuefei YUAN" < >> xy2102 at columbia.edu> wrote: >> >>> Hi,all, >>> >>> I have an error from >>> >>> ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); >>> CHKERRQ(ierr); >>> >>> where >>> >>> ierr = DMMGSetSNES(dmmg, FormFunction,0);CHKERRQ(ierr); is used for >>> set up the SNES. 
>>> >>> So I check up the vector size of X, F, localFIELD where >>> ierr = DAGetLocalVector(dafield,&localFIELD);CHKERRQ(ierr); >>> PetscInt nlocalFIELD,nX,nF; >>> ierr = VecGetSize(localFIELD,&nlocalFIELD);CHKERRQ(ierr); >>> ierr = VecGetSize(X,&nX);CHKERRQ(ierr); >>> ierr = VecGetSize(F,&nF);CHKERRQ(ierr); >>> ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); >>> CHKERRQ(ierr); >>> ierr = >>> DAGlobalToLocalEnd(dafield,X,INSERT_VALUES,localFIELD);CHKERRQ(ierr); >>> ierr = >>> DAVecGetArray(dafield,localFIELD,&localfield);CHKERRQ(ierr); >>> >>> >>> (gdb) disp nX >>> 1: nX = 120 >>> (gdb) disp nF >>> 2: nF = 120 >>> (gdb) disp nlocalFIELD >>> 3: nlocalFIELD = 100 >>> >> >> Is this run in parallel? Note that the sizes of X and F are global, >> while localFIELD is serial. What error did you get? >> >> >> Matt, it's clearly FormFunction and not FormFunctionLocal because the >> function prototype has the SNES. >> >> Jed >> >> >> > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bsmith at mcs.anl.gov Wed Mar 24 15:34:24 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 24 Mar 2010 15:34:24 -0500 Subject: [petsc-users] DAGlobalToLocalBegin() In-Reply-To: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu> References: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu> Message-ID: <2FB88588-3A48-45E6-886C-245B6D609FBA@mcs.anl.gov> On Mar 24, 2010, at 2:41 PM, (Rebecca) Xuefei YUAN wrote: > Hi,all, > > I have an error from > > ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); > CHKERRQ(ierr); > > where > > ierr = DMMGSetSNES(dmmg, FormFunction,0);CHKERRQ(ierr); is used for > set up the SNES. > There is not enough information here to determine what is going on. Where is X coming from? Where is dafield coming from? Send the entire FormFunction(). Sending a few code fragments is almost always useless. Barry > So I check up the vector size of X, F, localFIELD where > ierr = DAGetLocalVector(dafield,&localFIELD);CHKERRQ(ierr); > PetscInt nlocalFIELD,nX,nF; > ierr = VecGetSize(localFIELD,&nlocalFIELD);CHKERRQ(ierr); > ierr = VecGetSize(X,&nX);CHKERRQ(ierr); > ierr = VecGetSize(F,&nF);CHKERRQ(ierr); > ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); > CHKERRQ(ierr); > ierr = > DAGlobalToLocalEnd(dafield,X,INSERT_VALUES,localFIELD);CHKERRQ(ierr); > ierr = DAVecGetArray(dafield,localFIELD,&localfield);CHKERRQ(ierr); > > > (gdb) disp nX > 1: nX = 120 > (gdb) disp nF > 2: nF = 120 > (gdb) disp nlocalFIELD > 3: nlocalFIELD = 100 > (gdb) where > #0 FormFunction (snes=0x8a47840, X=0x8a39c20, F=0x8a37570, > dummg=0x8a2b810) > at twgcqt2unffnictv.c:8382 > #1 0x080fb28f in SNESComputeFunction (snes=0x8a47840, x=0x8a39c20, > y=0x8a37570) at snes.c:1093 > #2 0x08123d1b in SNESSolve_LS (snes=0x8a47840) at ls.c:159 > #3 0x08103fac in SNESSolve (snes=0x8a47840, b=0x0, x=0x8a39c20) at > snes.c:2242 > #4 0x0811db31 in DMMGSolveSNES (dmmg=0x8a2b6b0, level=0) at > damgsnes.c:510 > #5 0x08117029 in DMMGSolve 
(dmmg=0x8a2b6b0) at damg.c:313 > #6 0x08052ecc in Solve (dmmg=0x8a2b6b0) at twgcqt2unffnictv.c:679 > #7 0x0804def5 in main (argc=Cannot access memory at address 0x0 > ) at twgcqt2unffnictv.c:303 > > 3: nlocalFIELD = 100 > 2: nF = 120 > 1: nX = 100 > (gdb) where > #0 FormFunction (snes=0x8a47840, X=0x8b39cc0, F=0x8adb2f0, > dummg=0x8a2b810) > at twgcqt2unffnictv.c:8382 > #1 0x0860c63d in MatFDColoringApply_AIJ (J=0x8a6bbb0, > coloring=0x8aa7b40, > x1=0x8b12a60, flag=0xbfbf4404, sctx=0x8a47840) at fdmatrix.c:680 > #2 0x0860abaf in MatFDColoringApply (J=0x8a6bbb0, coloring=0x8aa7b40, > x1=0x8b12a60, flag=0xbfbf4404, sctx=0x8a47840) at fdmatrix.c:521 > #3 0x08122e45 in SNESDefaultComputeJacobianColor (snes=0x8a47840, > x1=0x8b12a60, J=0x8a47910, B=0x8a47914, flag=0xbfbf4404, > ctx=0x8aa7b40) > at snesj2.c:49 > #4 0x0811c7cf in DMMGComputeJacobianWithFD (snes=0x8a47840, > x1=0x8a39c20, > J=0x8a47910, B=0x8a47914, flag=0xbfbf4404, ctx=0x8a2b810) at > damgsnes.c:365 > #5 0x0811a883 in DMMGComputeJacobian_Multigrid (snes=0x8a47840, > X=0x8a39c20, > J=0x8a47910, B=0x8a47914, flag=0xbfbf4404, ptr=0x8a2b6b0) at > damgsnes.c:60 > #6 0x080fc610 in SNESComputeJacobian (snes=0x8a47840, X=0x8a39c20, > A=0x8a47910, B=0x8a47914, flg=0xbfbf4404) at snes.c:1188 > #7 0x08124471 in SNESSolve_LS (snes=0x8a47840) at ls.c:189 > #8 0x08103fac in SNESSolve (snes=0x8a47840, b=0x0, x=0x8a39c20) at > snes.c:2242 > #9 0x0811db31 in DMMGSolveSNES (dmmg=0x8a2b6b0, level=0) at > damgsnes.c:510 > #10 0x08117029 in DMMGSolve (dmmg=0x8a2b6b0) at damg.c:313 > #11 0x08052ecc in Solve (dmmg=0x8a2b6b0) at twgcqt2unffnictv.c:679 > #12 0x0804def5 in main (argc=Cannot access memory at address 0x1 > ) at twgcqt2unffnictv.c:303 > > Why nX changes from 120 to 100? Is X a global vector or a local > vector? > > Thanks very much! 
> > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > From jed at 59A2.org Wed Mar 24 15:50:00 2010 From: jed at 59A2.org (Jed Brown) Date: Wed, 24 Mar 2010 21:50:00 +0100 Subject: [petsc-users] DAGlobalToLocalBegin() In-Reply-To: <2FB88588-3A48-45E6-886C-245B6D609FBA@mcs.anl.gov> References: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu> <2FB88588-3A48-45E6-886C-245B6D609FBA@mcs.anl.gov> Message-ID: <878w9hsavb.fsf@59A2.org> On Wed, 24 Mar 2010 15:34:24 -0500, Barry Smith wrote: > > On Mar 24, 2010, at 2:41 PM, (Rebecca) Xuefei YUAN wrote: > > > Hi,all, > > > > I have an error from > > > > ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); > > CHKERRQ(ierr); > > > > where > > > > ierr = DMMGSetSNES(dmmg, FormFunction,0);CHKERRQ(ierr); is used for > > set up the SNES. > > > > There is not enough information here to determine what is going > on. Where is X coming from? Where is dafield coming from? Apply the attached patch and run snes ex25 on two procs. The solution vector is a local vector on the second entry to FormFunction (within SNESDefaultComputeJacobianColor) when it would normally be a global one (the residual vector is correctly global in both cases). This vector is created in DMMGComputeJacobianWithFD (damgsnes.c:362). Note that MatFDColoringApply_AIJ calls the function in a different place than the local function (I think this is where all the logic is, FormFunction is called from fdmatrix.c:680, FormFunctionLocal is called from fdmatrix.c:588). Note that this all works fine with -dmmg_iscoloring_type global. Jed -------------- next part -------------- A non-text attachment was scrubbed... 
Name: iscoloring-ghosted.patch Type: text/x-patch Size: 633 bytes Desc: not available URL: From xy2102 at columbia.edu Wed Mar 24 16:12:23 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Wed, 24 Mar 2010 17:12:23 -0400 Subject: [petsc-users] DAGlobalToLocalBegin() In-Reply-To: References: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu> <87aatxsddf.fsf@59A2.org> <20100324160115.lfl4fu25wc0g8gk0@cubmail.cc.columbia.edu> Message-ID: <20100324171223.5jyl0id8g0w0gcks@cubmail.cc.columbia.edu> Dear Matt, Thanks a lot! Yes, when -dmmg_iscoloring_type global is applied, it works now. But what is the difference between IS_COLORING_GHOSTED and IS_COLORING_GLOBAL? Cheers, Rebecca Quoting Matthew Knepley : > On Wed, Mar 24, 2010 at 3:01 PM, (Rebecca) Xuefei YUAN > wrote: > >> Dear Jed and Matt, >> > > This is a genuine bug. It is in DMMGComputeJacobianWithFD() and only occurs > when > IS_COLORING_GHOSTED is true. So, using IS_COLORING_GLOBAL would probably > work here (can't look up the option right now). > > Barry, what is supposed to happen here? Clearly a local vector is being > passed where > a global vector is expected (at least part of the time). > > Matt > > >> Yes, X and F are global and localFIELD is serial. I ran with np=2. >> >> The error I get is: >> >> >> ************************************************** >> 0 SNES Function norm 1.095445115010e+01 >> [1]PETSC ERROR: VecScatterBegin() line 1538 in src/vec/vec/utils/vscat.c >> Vector wrong size 100 for scatter 60 (scatter forward and vector from != ctx >> from size) >> [0]PETSC ERROR: VecScatterBegin() line 1538 in src/vec/vec/utils/vscat.c >> Vector wrong size 100 for scatter 60 (scatter forward and vector from != ctx >> from size) >> >> >> from gdb, I get: >> >> >> Program received signal SIGABRT, Aborted. 
>> [Switching to Thread 0xb7c396b0 (LWP 6301)] >> 0xb7f09410 in __kernel_vsyscall () >> (gdb) where >> #0 0xb7f09410 in __kernel_vsyscall () >> #1 0xb7c89085 in raise () from /lib/tls/i686/cmov/libc.so.6 >> #2 0xb7c8aa01 in abort () from /lib/tls/i686/cmov/libc.so.6 >> #3 0x087d591d in PetscAbortErrorHandler (line=1538, >> fun=0x88fe26d "VecScatterBegin", file=0x88fd9f7 "vscat.c", >> dir=0x88fd9ff "src/vec/vec/utils/", n=60, p=1, >> mess=0xbfd31124 "Vector wrong size 100 for scatter 60 (scatter forward >> and v >> ector from != ctx from size)", ctx=0x0) at errabort.c:62 >> #4 0x0874a88e in PetscError (line=1538, func=0x88fe26d "VecScatterBegin", >> file=0x88fd9f7 "vscat.c", dir=0x88fd9ff "src/vec/vec/utils/", n=60, p=1, >> mess=0x88fe3a0 "Vector wrong size %D for scatter %D (scatter forward and >> vec >> tor from != ctx from size)") at err.c:482 >> #5 0x086558cc in VecScatterBegin (inctx=0x8a393b0, x=0x8b39cc0, >> y=0x8b3b310, >> addv=INSERT_VALUES, mode=SCATTER_FORWARD) at vscat.c:1538 >> #6 0x0828b54c in DAGlobalToLocalBegin (da=0x8a2d360, g=0x8b39cc0, >> mode=INSERT_VALUES, l=0x8b3b310) at dagtol.c:50 >> #7 0x080f1b25 in FormFunction (snes=0x8a47840, X=0x8b39cc0, F=0x8adb2f0, >> dummg=0x8a2b810) at twgcqt2unffnictv.c:8382 >> #8 0x0860c63d in MatFDColoringApply_AIJ (J=0x8a6bbb0, coloring=0x8aa7b40, >> x1=0x8b12a60, flag=0xbfd32164, sctx=0x8a47840) at fdmatrix.c:680 >> #9 0x0860abaf in MatFDColoringApply (J=0x8a6bbb0, coloring=0x8aa7b40, >> x1=0x8b12a60, flag=0xbfd32164, sctx=0x8a47840) at fdmatrix.c:521 >> #10 0x08122e45 in SNESDefaultComputeJacobianColor (snes=0x8a47840, >> ---Type to continue, or q to quit--- >> x1=0x8b12a60, J=0x8a47910, B=0x8a47914, flag=0xbfd32164, ctx=0x8aa7b40) >> at snesj2.c:49 >> #11 0x0811c7cf in DMMGComputeJacobianWithFD (snes=0x8a47840, x1=0x8a39c20, >> J=0x8a47910, B=0x8a47914, flag=0xbfd32164, ctx=0x8a2b810) at >> damgsnes.c:365 >> #12 0x0811a883 in DMMGComputeJacobian_Multigrid (snes=0x8a47840, >> X=0x8a39c20, >> J=0x8a47910, 
B=0x8a47914, flag=0xbfd32164, ptr=0x8a2b6b0) at >> damgsnes.c:60 >> #13 0x080fc610 in SNESComputeJacobian (snes=0x8a47840, X=0x8a39c20, >> A=0x8a47910, B=0x8a47914, flg=0xbfd32164) at snes.c:1188 >> #14 0x08124471 in SNESSolve_LS (snes=0x8a47840) at ls.c:189 >> #15 0x08103fac in SNESSolve (snes=0x8a47840, b=0x0, x=0x8a39c20) at >> snes.c:2242 >> #16 0x0811db31 in DMMGSolveSNES (dmmg=0x8a2b6b0, level=0) at damgsnes.c:510 >> #17 0x08117029 in DMMGSolve (dmmg=0x8a2b6b0) at damg.c:313 >> #18 0x08052ecc in Solve (dmmg=0x8a2b6b0) at twgcqt2unffnictv.c:679 >> #19 0x0804def5 in main (argc=Cannot access memory at address 0x189d >> ) at twgcqt2unffnictv.c:303 >> >> >> Same things happened to ex25.c from >> >> >> http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/snes/examples/tutorials/ex25.c.html >> >> >> with >> >> ierr = DMMGSetSNES(dmmg,FormFunction,0);CHKERRQ(ierr); >> // ierr = >> DMMGSetSNESLocal(dmmg,FormFunctionLocal,0,ad_FormFunctionLocal,0);CHKERRQ(ierr); >> >> >> Thanks a lot! >> >> Rebecca >> >> >> >> >> Quoting Jed Brown : >> >> On Wed, 24 Mar 2010 15:41:38 -0400, "(Rebecca) Xuefei YUAN" < >>> xy2102 at columbia.edu> wrote: >>> >>>> Hi,all, >>>> >>>> I have an error from >>>> >>>> ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); >>>> CHKERRQ(ierr); >>>> >>>> where >>>> >>>> ierr = DMMGSetSNES(dmmg, FormFunction,0);CHKERRQ(ierr); is used for >>>> set up the SNES. 
>>>> >>>> So I check up the vector size of X, F, localFIELD where >>>> ierr = DAGetLocalVector(dafield,&localFIELD);CHKERRQ(ierr); >>>> PetscInt nlocalFIELD,nX,nF; >>>> ierr = VecGetSize(localFIELD,&nlocalFIELD);CHKERRQ(ierr); >>>> ierr = VecGetSize(X,&nX);CHKERRQ(ierr); >>>> ierr = VecGetSize(F,&nF);CHKERRQ(ierr); >>>> ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); >>>> CHKERRQ(ierr); >>>> ierr = >>>> DAGlobalToLocalEnd(dafield,X,INSERT_VALUES,localFIELD);CHKERRQ(ierr); >>>> ierr = >>>> DAVecGetArray(dafield,localFIELD,&localfield);CHKERRQ(ierr); >>>> >>>> >>>> (gdb) disp nX >>>> 1: nX = 120 >>>> (gdb) disp nF >>>> 2: nF = 120 >>>> (gdb) disp nlocalFIELD >>>> 3: nlocalFIELD = 100 >>>> >>> >>> Is this run in parallel? Note that the sizes of X and F are global, >>> while localFIELD is serial. What error did you get? >>> >>> >>> Matt, it's clearly FormFunction and not FormFunctionLocal because the >>> function prototype has the SNES. >>> >>> Jed >>> >>> >>> >> >> >> -- >> (Rebecca) Xuefei YUAN >> Department of Applied Physics and Applied Mathematics >> Columbia University >> Tel:917-399-8032 >> www.columbia.edu/~xy2102 >> >> > > > -- > What most experimenters take for granted before they begin their experiments > is infinitely more interesting than any results to which their experiments > lead. 
> -- Norbert Wiener > -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From bsmith at mcs.anl.gov Wed Mar 24 16:41:03 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Wed, 24 Mar 2010 16:41:03 -0500 Subject: [petsc-users] DAGlobalToLocalBegin() In-Reply-To: <20100324171223.5jyl0id8g0w0gcks@cubmail.cc.columbia.edu> References: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu> <87aatxsddf.fsf@59A2.org> <20100324160115.lfl4fu25wc0g8gk0@cubmail.cc.columbia.edu> <20100324171223.5jyl0id8g0w0gcks@cubmail.cc.columbia.edu> Message-ID: <64444FB2-8811-4D3E-B002-5EADBCBF4EF0@mcs.anl.gov> Rebecca, I apologize, there is an error in our code. In DMMGCreate() is the line (including the comment) p[i]->isctype = IS_COLORING_GHOSTED; /* default to faster version, requires DMMGSetSNESLocal() */ This code is fundamentally wrong; the only place we check that DMMGSetSNESLocal() is used is in the comment!!! Thus the default value doesn't work when you use DMMGSetSNES(); it only works, as Matt points out, when you change the value from the default. I have fixed this in the petsc-dev source. I apologize for the run-around on this and that it took us so long to deal with it properly. Barry PETSc developers, it is also disgraceful that we don't have a single test case that detects this problem! On Mar 24, 2010, at 4:12 PM, (Rebecca) Xuefei YUAN wrote: > Dear Matt, > > Thanks a lot! > > Yes, when > > -dmmg_iscoloring_type global > > is applied, this is right now. But what is difference between > IS_COLORING_GHOSTED and IS_COLORING_GLOBAL? > > Cheers, > > Rebecca > > > Quoting Matthew Knepley : > >> On Wed, Mar 24, 2010 at 3:01 PM, (Rebecca) Xuefei YUAN >> wrote: >> >>> Dear Jed and Matt, >>> >> >> This is a genuine bug. It is in DMMGComputeJacobianWithFD() and only occurs >> when >> IS_COLORING_GHOSTED is true. 
So, using IS_COLORING_GLOBAL would >> probably >> work here (can't look up the option right now). >> >> Barry, what is supposed to happen here? Clearly a local vector is >> being >> passed where >> a global vector is expected (at least part of the time). >> >> Matt >> >> >>> Yes, X and F are global and localFIELD is serial. I ran with np=2. >>> >>> The error I get is: >>> >>> >>> ************************************************** >>> 0 SNES Function norm 1.095445115010e+01 >>> [1]PETSC ERROR: VecScatterBegin() line 1538 in src/vec/vec/utils/ >>> vscat.c >>> Vector wrong size 100 for scatter 60 (scatter forward and vector >>> from != ctx >>> from size) >>> [0]PETSC ERROR: VecScatterBegin() line 1538 in src/vec/vec/utils/ >>> vscat.c >>> Vector wrong size 100 for scatter 60 (scatter forward and vector >>> from != ctx >>> from size) >>> >>> >>> from gdb, I get: >>> >>> >>> Program received signal SIGABRT, Aborted. >>> [Switching to Thread 0xb7c396b0 (LWP 6301)] >>> 0xb7f09410 in __kernel_vsyscall () >>> (gdb) where >>> #0 0xb7f09410 in __kernel_vsyscall () >>> #1 0xb7c89085 in raise () from /lib/tls/i686/cmov/libc.so.6 >>> #2 0xb7c8aa01 in abort () from /lib/tls/i686/cmov/libc.so.6 >>> #3 0x087d591d in PetscAbortErrorHandler (line=1538, >>> fun=0x88fe26d "VecScatterBegin", file=0x88fd9f7 "vscat.c", >>> dir=0x88fd9ff "src/vec/vec/utils/", n=60, p=1, >>> mess=0xbfd31124 "Vector wrong size 100 for scatter 60 (scatter >>> forward >>> and v >>> ector from != ctx from size)", ctx=0x0) at errabort.c:62 >>> #4 0x0874a88e in PetscError (line=1538, func=0x88fe26d >>> "VecScatterBegin", >>> file=0x88fd9f7 "vscat.c", dir=0x88fd9ff "src/vec/vec/utils/", >>> n=60, p=1, >>> mess=0x88fe3a0 "Vector wrong size %D for scatter %D (scatter >>> forward and >>> vec >>> tor from != ctx from size)") at err.c:482 >>> #5 0x086558cc in VecScatterBegin (inctx=0x8a393b0, x=0x8b39cc0, >>> y=0x8b3b310, >>> addv=INSERT_VALUES, mode=SCATTER_FORWARD) at vscat.c:1538 >>> #6 0x0828b54c in 
DAGlobalToLocalBegin (da=0x8a2d360, g=0x8b39cc0, >>> mode=INSERT_VALUES, l=0x8b3b310) at dagtol.c:50 >>> #7 0x080f1b25 in FormFunction (snes=0x8a47840, X=0x8b39cc0, >>> F=0x8adb2f0, >>> dummg=0x8a2b810) at twgcqt2unffnictv.c:8382 >>> #8 0x0860c63d in MatFDColoringApply_AIJ (J=0x8a6bbb0, >>> coloring=0x8aa7b40, >>> x1=0x8b12a60, flag=0xbfd32164, sctx=0x8a47840) at fdmatrix.c:680 >>> #9 0x0860abaf in MatFDColoringApply (J=0x8a6bbb0, >>> coloring=0x8aa7b40, >>> x1=0x8b12a60, flag=0xbfd32164, sctx=0x8a47840) at fdmatrix.c:521 >>> #10 0x08122e45 in SNESDefaultComputeJacobianColor (snes=0x8a47840, >>> ---Type to continue, or q to quit--- >>> x1=0x8b12a60, J=0x8a47910, B=0x8a47914, flag=0xbfd32164, >>> ctx=0x8aa7b40) >>> at snesj2.c:49 >>> #11 0x0811c7cf in DMMGComputeJacobianWithFD (snes=0x8a47840, >>> x1=0x8a39c20, >>> J=0x8a47910, B=0x8a47914, flag=0xbfd32164, ctx=0x8a2b810) at >>> damgsnes.c:365 >>> #12 0x0811a883 in DMMGComputeJacobian_Multigrid (snes=0x8a47840, >>> X=0x8a39c20, >>> J=0x8a47910, B=0x8a47914, flag=0xbfd32164, ptr=0x8a2b6b0) at >>> damgsnes.c:60 >>> #13 0x080fc610 in SNESComputeJacobian (snes=0x8a47840, X=0x8a39c20, >>> A=0x8a47910, B=0x8a47914, flg=0xbfd32164) at snes.c:1188 >>> #14 0x08124471 in SNESSolve_LS (snes=0x8a47840) at ls.c:189 >>> #15 0x08103fac in SNESSolve (snes=0x8a47840, b=0x0, x=0x8a39c20) at >>> snes.c:2242 >>> #16 0x0811db31 in DMMGSolveSNES (dmmg=0x8a2b6b0, level=0) at >>> damgsnes.c:510 >>> #17 0x08117029 in DMMGSolve (dmmg=0x8a2b6b0) at damg.c:313 >>> #18 0x08052ecc in Solve (dmmg=0x8a2b6b0) at twgcqt2unffnictv.c:679 >>> #19 0x0804def5 in main (argc=Cannot access memory at address 0x189d >>> ) at twgcqt2unffnictv.c:303 >>> >>> >>> Same things happened to ex25.c from >>> >>> >>> http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/snes/examples/tutorials/ex25.c.html >>> >>> >>> with >>> >>> ierr = DMMGSetSNES(dmmg,FormFunction,0);CHKERRQ(ierr); >>> // ierr = >>> 
DMMGSetSNESLocal(dmmg,FormFunctionLocal,0,ad_FormFunctionLocal, >>> 0);CHKERRQ(ierr); >>> >>> >>> Thanks a lot! >>> >>> Rebecca >>> >>> >>> >>> >>> Quoting Jed Brown : >>> >>> On Wed, 24 Mar 2010 15:41:38 -0400, "(Rebecca) Xuefei YUAN" < >>>> xy2102 at columbia.edu> wrote: >>>> >>>>> Hi,all, >>>>> >>>>> I have an error from >>>>> >>>>> ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); >>>>> CHKERRQ(ierr); >>>>> >>>>> where >>>>> >>>>> ierr = DMMGSetSNES(dmmg, FormFunction,0);CHKERRQ(ierr); is used >>>>> for >>>>> set up the SNES. >>>>> >>>>> So I check up the vector size of X, F, localFIELD where >>>>> ierr = DAGetLocalVector(dafield,&localFIELD);CHKERRQ(ierr); >>>>> PetscInt nlocalFIELD,nX,nF; >>>>> ierr = VecGetSize(localFIELD,&nlocalFIELD);CHKERRQ(ierr); >>>>> ierr = VecGetSize(X,&nX);CHKERRQ(ierr); >>>>> ierr = VecGetSize(F,&nF);CHKERRQ(ierr); >>>>> ierr = >>>>> DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); >>>>> CHKERRQ(ierr); >>>>> ierr = >>>>> DAGlobalToLocalEnd >>>>> (dafield,X,INSERT_VALUES,localFIELD);CHKERRQ(ierr); >>>>> ierr = >>>>> DAVecGetArray(dafield,localFIELD,&localfield);CHKERRQ(ierr); >>>>> >>>>> >>>>> (gdb) disp nX >>>>> 1: nX = 120 >>>>> (gdb) disp nF >>>>> 2: nF = 120 >>>>> (gdb) disp nlocalFIELD >>>>> 3: nlocalFIELD = 100 >>>>> >>>> >>>> Is this run in parallel? Note that the sizes of X and F are >>>> global, >>>> while localFIELD is serial. What error did you get? >>>> >>>> >>>> Matt, it's clearly FormFunction and not FormFunctionLocal because >>>> the >>>> function prototype has the SNES. >>>> >>>> Jed >>>> >>>> >>>> >>> >>> >>> -- >>> (Rebecca) Xuefei YUAN >>> Department of Applied Physics and Applied Mathematics >>> Columbia University >>> Tel:917-399-8032 >>> www.columbia.edu/~xy2102 >>> >>> >> >> >> -- >> What most experimenters take for granted before they begin their >> experiments >> is infinitely more interesting than any results to which their >> experiments >> lead. 
>> -- Norbert Wiener >> > > > -- > (Rebecca) Xuefei YUAN > Department of Applied Physics and Applied Mathematics > Columbia University > Tel:917-399-8032 > www.columbia.edu/~xy2102 > From xy2102 at columbia.edu Wed Mar 24 19:50:36 2010 From: xy2102 at columbia.edu ((Rebecca) Xuefei YUAN) Date: Wed, 24 Mar 2010 20:50:36 -0400 Subject: [petsc-users] DAGlobalToLocalBegin() In-Reply-To: <64444FB2-8811-4D3E-B002-5EADBCBF4EF0@mcs.anl.gov> References: <20100324154138.hvxdds68mck84ooo@cubmail.cc.columbia.edu> <87aatxsddf.fsf@59A2.org> <20100324160115.lfl4fu25wc0g8gk0@cubmail.cc.columbia.edu> <20100324171223.5jyl0id8g0w0gcks@cubmail.cc.columbia.edu> <64444FB2-8811-4D3E-B002-5EADBCBF4EF0@mcs.anl.gov> Message-ID: <20100324205036.ir8trt87440ocwks@cubmail.cc.columbia.edu> Dear Barry, Thanks for pointing out this issue; my problem is solved with -dmmg_iscoloring_type global. I will add FormJacobian() to my code, so this won't be a problem for me later on. Have a good night! Rebecca Quoting Barry Smith : > > Rebecca, > > I apologize, there is an error in our code. In DMMGCreate() is the > line (including the comment) > > p[i]->isctype = IS_COLORING_GHOSTED; /* default to faster > version, requires DMMGSetSNESLocal() */ > > This code is fundamentally wrong, the only place we check that > DMMGSetSNESLocal() is used is in the comment!!! > > These the default value doesn't work when you use DMMGSetSNES(), it > only works as Matt points out when you change the value from the > default. > > I have fixed this in the petsc-dev source. > > I apologize for the run-around on this and that it took us so long > to deal with it properly. > > Barry > > PETSc developers, it is also disgraceful that we don't have a single > test case that detects this problem! > > . > On Mar 24, 2010, at 4:12 PM, (Rebecca) Xuefei YUAN wrote: > >> Dear Matt, >> >> Thanks a lot! >> >> Yes, when >> >> -dmmg_iscoloring_type global >> >> is applied, this is right now. 
But what is difference between >> IS_COLORING_GHOSTED and IS_COLORING_GLOBAL? >> >> Cheers, >> >> Rebecca >> >> >> Quoting Matthew Knepley : >> >>> On Wed, Mar 24, 2010 at 3:01 PM, (Rebecca) Xuefei YUAN >>> wrote: >>> >>>> Dear Jed and Matt, >>>> >>> >>> This is a genuine bug. It is in DMMGComputeJacobianWithFD() and only occurs >>> when >>> IS_COLORING_GHOSTED is true. So, using IS_COLORING_GLOBAL would probably >>> work here (can't look up the option right now). >>> >>> Barry, what is supposed to happen here? Clearly a local vector is being >>> passed where >>> a global vector is expected (at least part of the time). >>> >>> Matt >>> >>> >>>> Yes, X and F are global and localFIELD is serial. I ran with np=2. >>>> >>>> The error I get is: >>>> >>>> >>>> ************************************************** >>>> 0 SNES Function norm 1.095445115010e+01 >>>> [1]PETSC ERROR: VecScatterBegin() line 1538 in src/vec/vec/utils/vscat.c >>>> Vector wrong size 100 for scatter 60 (scatter forward and vector >>>> from != ctx >>>> from size) >>>> [0]PETSC ERROR: VecScatterBegin() line 1538 in src/vec/vec/utils/vscat.c >>>> Vector wrong size 100 for scatter 60 (scatter forward and vector >>>> from != ctx >>>> from size) >>>> >>>> >>>> from gdb, I get: >>>> >>>> >>>> Program received signal SIGABRT, Aborted. 
>>>> [Switching to Thread 0xb7c396b0 (LWP 6301)] >>>> 0xb7f09410 in __kernel_vsyscall () >>>> (gdb) where >>>> #0 0xb7f09410 in __kernel_vsyscall () >>>> #1 0xb7c89085 in raise () from /lib/tls/i686/cmov/libc.so.6 >>>> #2 0xb7c8aa01 in abort () from /lib/tls/i686/cmov/libc.so.6 >>>> #3 0x087d591d in PetscAbortErrorHandler (line=1538, >>>> fun=0x88fe26d "VecScatterBegin", file=0x88fd9f7 "vscat.c", >>>> dir=0x88fd9ff "src/vec/vec/utils/", n=60, p=1, >>>> mess=0xbfd31124 "Vector wrong size 100 for scatter 60 (scatter forward >>>> and v >>>> ector from != ctx from size)", ctx=0x0) at errabort.c:62 >>>> #4 0x0874a88e in PetscError (line=1538, func=0x88fe26d "VecScatterBegin", >>>> file=0x88fd9f7 "vscat.c", dir=0x88fd9ff "src/vec/vec/utils/", n=60, p=1, >>>> mess=0x88fe3a0 "Vector wrong size %D for scatter %D (scatter forward and >>>> vec >>>> tor from != ctx from size)") at err.c:482 >>>> #5 0x086558cc in VecScatterBegin (inctx=0x8a393b0, x=0x8b39cc0, >>>> y=0x8b3b310, >>>> addv=INSERT_VALUES, mode=SCATTER_FORWARD) at vscat.c:1538 >>>> #6 0x0828b54c in DAGlobalToLocalBegin (da=0x8a2d360, g=0x8b39cc0, >>>> mode=INSERT_VALUES, l=0x8b3b310) at dagtol.c:50 >>>> #7 0x080f1b25 in FormFunction (snes=0x8a47840, X=0x8b39cc0, F=0x8adb2f0, >>>> dummg=0x8a2b810) at twgcqt2unffnictv.c:8382 >>>> #8 0x0860c63d in MatFDColoringApply_AIJ (J=0x8a6bbb0, coloring=0x8aa7b40, >>>> x1=0x8b12a60, flag=0xbfd32164, sctx=0x8a47840) at fdmatrix.c:680 >>>> #9 0x0860abaf in MatFDColoringApply (J=0x8a6bbb0, coloring=0x8aa7b40, >>>> x1=0x8b12a60, flag=0xbfd32164, sctx=0x8a47840) at fdmatrix.c:521 >>>> #10 0x08122e45 in SNESDefaultComputeJacobianColor (snes=0x8a47840, >>>> ---Type to continue, or q to quit--- >>>> x1=0x8b12a60, J=0x8a47910, B=0x8a47914, flag=0xbfd32164, ctx=0x8aa7b40) >>>> at snesj2.c:49 >>>> #11 0x0811c7cf in DMMGComputeJacobianWithFD (snes=0x8a47840, x1=0x8a39c20, >>>> J=0x8a47910, B=0x8a47914, flag=0xbfd32164, ctx=0x8a2b810) at >>>> damgsnes.c:365 >>>> #12 0x0811a883 in 
DMMGComputeJacobian_Multigrid (snes=0x8a47840, >>>> X=0x8a39c20, >>>> J=0x8a47910, B=0x8a47914, flag=0xbfd32164, ptr=0x8a2b6b0) at >>>> damgsnes.c:60 >>>> #13 0x080fc610 in SNESComputeJacobian (snes=0x8a47840, X=0x8a39c20, >>>> A=0x8a47910, B=0x8a47914, flg=0xbfd32164) at snes.c:1188 >>>> #14 0x08124471 in SNESSolve_LS (snes=0x8a47840) at ls.c:189 >>>> #15 0x08103fac in SNESSolve (snes=0x8a47840, b=0x0, x=0x8a39c20) at >>>> snes.c:2242 >>>> #16 0x0811db31 in DMMGSolveSNES (dmmg=0x8a2b6b0, level=0) at >>>> damgsnes.c:510 >>>> #17 0x08117029 in DMMGSolve (dmmg=0x8a2b6b0) at damg.c:313 >>>> #18 0x08052ecc in Solve (dmmg=0x8a2b6b0) at twgcqt2unffnictv.c:679 >>>> #19 0x0804def5 in main (argc=Cannot access memory at address 0x189d >>>> ) at twgcqt2unffnictv.c:303 >>>> >>>> >>>> Same things happened to ex25.c from >>>> >>>> >>>> http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/snes/examples/tutorials/ex25.c.html >>>> >>>> >>>> with >>>> >>>> ierr = DMMGSetSNES(dmmg,FormFunction,0);CHKERRQ(ierr); >>>> // ierr = >>>> DMMGSetSNESLocal(dmmg,FormFunctionLocal,0,ad_FormFunctionLocal,0);CHKERRQ(ierr); >>>> >>>> >>>> Thanks a lot! >>>> >>>> Rebecca >>>> >>>> >>>> >>>> >>>> Quoting Jed Brown : >>>> >>>> On Wed, 24 Mar 2010 15:41:38 -0400, "(Rebecca) Xuefei YUAN" < >>>>> xy2102 at columbia.edu> wrote: >>>>> >>>>>> Hi,all, >>>>>> >>>>>> I have an error from >>>>>> >>>>>> ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); >>>>>> CHKERRQ(ierr); >>>>>> >>>>>> where >>>>>> >>>>>> ierr = DMMGSetSNES(dmmg, FormFunction,0);CHKERRQ(ierr); is used for >>>>>> set up the SNES. 
>>>>>> >>>>>> So I check up the vector size of X, F, localFIELD where >>>>>> ierr = DAGetLocalVector(dafield,&localFIELD);CHKERRQ(ierr); >>>>>> PetscInt nlocalFIELD,nX,nF; >>>>>> ierr = VecGetSize(localFIELD,&nlocalFIELD);CHKERRQ(ierr); >>>>>> ierr = VecGetSize(X,&nX);CHKERRQ(ierr); >>>>>> ierr = VecGetSize(F,&nF);CHKERRQ(ierr); >>>>>> ierr = DAGlobalToLocalBegin(dafield,X,INSERT_VALUES,localFIELD); >>>>>> CHKERRQ(ierr); >>>>>> ierr = >>>>>> DAGlobalToLocalEnd(dafield,X,INSERT_VALUES,localFIELD);CHKERRQ(ierr); >>>>>> ierr = >>>>>> DAVecGetArray(dafield,localFIELD,&localfield);CHKERRQ(ierr); >>>>>> >>>>>> >>>>>> (gdb) disp nX >>>>>> 1: nX = 120 >>>>>> (gdb) disp nF >>>>>> 2: nF = 120 >>>>>> (gdb) disp nlocalFIELD >>>>>> 3: nlocalFIELD = 100 >>>>>> >>>>> >>>>> Is this run in parallel? Note that the sizes of X and F are global, >>>>> while localFIELD is serial. What error did you get? >>>>> >>>>> >>>>> Matt, it's clearly FormFunction and not FormFunctionLocal because the >>>>> function prototype has the SNES. >>>>> >>>>> Jed >>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> (Rebecca) Xuefei YUAN >>>> Department of Applied Physics and Applied Mathematics >>>> Columbia University >>>> Tel:917-399-8032 >>>> www.columbia.edu/~xy2102 >>>> >>>> >>> >>> >>> -- >>> What most experimenters take for granted before they begin their >>> experiments >>> is infinitely more interesting than any results to which their experiments >>> lead. 
>>> -- Norbert Wiener >>> >> >> >> -- >> (Rebecca) Xuefei YUAN >> Department of Applied Physics and Applied Mathematics >> Columbia University >> Tel:917-399-8032 >> www.columbia.edu/~xy2102 >> -- (Rebecca) Xuefei YUAN Department of Applied Physics and Applied Mathematics Columbia University Tel:917-399-8032 www.columbia.edu/~xy2102 From bsmith at mcs.anl.gov Thu Mar 25 20:38:10 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Thu, 25 Mar 2010 20:38:10 -0500 Subject: [petsc-users] release of PETSc 3.1 Message-ID: <152C7B5B-A481-46D4-AA02-98D4E6EC48C2@mcs.anl.gov> PETSc users, We are pleased to announce the release of PETSc 3.1; we urge all users of PETSc 3.0 and earlier versions to upgrade in the near future. We (Hong Zhang and Shrirang Shri) have provided new triangular solve routines for all the sparse matrix data structures, which should result in a good performance improvement with many of the solvers. As always, the changes/additions to the PETSc API can be found at http://www.mcs.anl.gov/petsc/petsc-as/documentation/changes/31.html Please report all installation and usage problems to petsc-maint at mcs.anl.gov Thanks for your support, Barry From fischej at umich.edu Fri Mar 26 10:18:08 2010 From: fischej at umich.edu (John-Michael Fischer) Date: Fri, 26 Mar 2010 11:18:08 -0400 Subject: [petsc-users] inserting into sparse matricies Message-ID: I have a vector of values I want to insert into a sparse matrix as a row. My vector contains non-zero values at their proper positions in the row and zeros elsewhere. If I MatSetValues into a Mat set up in a sparse format, will PETSc automatically see the zeros and not insert them while working out the proper indices of the non-zero elements? Is there another calling sequence that lets me do this without calling MatSetValue for each non-zero entry, since the non-zero entries are not contiguous in the row?
Alternatively, I was thinking I could assemble a vector with the indices of the non-zero elements, then use that in the call to MatSetValues with a row vector of non-zero values only - but I wanted to ask first, since this seems like a common usage pattern for PETSc sparse Mats. Thanks, John-Michael From jed at 59A2.org Fri Mar 26 10:28:26 2010 From: jed at 59A2.org (Jed Brown) Date: Fri, 26 Mar 2010 16:28:26 +0100 Subject: [petsc-users] inserting into sparse matricies In-Reply-To: References: Message-ID: <877hozqezp.fsf@59A2.org> On Fri, 26 Mar 2010 11:18:08 -0400, John-Michael Fischer wrote: > I have a vector of values I want to insert into a sparse matrix as a > row. My vector contains non-zero values at their proper positions in > the row and zeros elsewhere. How did you get this vector with all the explicit zeros? There is an option MAT_IGNORE_ZERO_ENTRIES, but I don't think you want to use it here. > Alternatively, I was thinking I could assemble a vector with the > indices of the non-zero elements, then use that in the call to > MatSetValues with a row vector of non-zero values only - but I wanted > to ask first since this seems like a common usage pattern for PETSc > sparse Mats. Usually you only compute the nonzero entries. Jed From knepley at gmail.com Fri Mar 26 10:25:34 2010 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 26 Mar 2010 09:25:34 -0600 Subject: [petsc-users] inserting into sparse matricies In-Reply-To: References: Message-ID: On Fri, Mar 26, 2010 at 9:18 AM, John-Michael Fischer wrote: > I have a vector of values I want to insert into a sparse matrix as a row. > My vector contains non-zero values at their proper positions in the row and > zeros elsewhere. If I MatSetValues into a Mat set up in a sparse format, will > PETSc automatically see the zeros and not insert them while working out the > proper indices of the non-zero elements?
> > Is there another calling sequence that lets me do this without calling > MatSetValue for each non-zero entry, since the non-zero entries are not > contiguous in the row? > > Alternatively, I was thinking I could assemble a vector with the indices > of the non-zero elements, then use that in the call to MatSetValues with a > row vector of non-zero values only - but I wanted to ask first since this > seems like a common usage pattern for PETSc sparse Mats. > This is not very common: since Mats are usually very sparse, looking at every value would be incredibly time-consuming. I recommend compressing the row down to the nonzeros, and then calling MatSetValues(). There is a flag you can set, MAT_IGNORE_ZERO_ENTRIES, that will ignore zeros, but this would not be my first choice. Matt > Thanks, > John-Michael -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From fischej at umich.edu Fri Mar 26 11:09:41 2010 From: fischej at umich.edu (John-Michael Fischer) Date: Fri, 26 Mar 2010 12:09:41 -0400 Subject: [petsc-users] inserting into sparse matricies In-Reply-To: <877hozqezp.fsf@59A2.org> References: <877hozqezp.fsf@59A2.org> Message-ID: Yeah, I inherited an old code written by a very well-intentioned physicist - hehe, and I think you know what kind of code that can produce. He rolled his own Parallel Vector and Matrix classes, yet used PETSc for the core matrix solver -- so there's all this horrible business at each step going from his ParallelSparse class to a PETSc one, running the solver, then moving all the data back.... ahhhhhhh. Anyway, I'm replacing his custom classes with proper PETSc objects so we can move this code out of the 20th century and run it on some bigger machines.
I'm trying to go in chunks and not optimize everything at once, so I thought it would be nice if I could just insert this vector which one of his functions produces. (Sorry for the long answer to the short question.) Barring a magic-wand petsc_function, I think I'll just find the indices of the non-zero elements and MatSetValues those even if it's inefficient -- I'll come back later and deconstruct his 'row-building' function into something more sane. Thanks! John-Michael On Mar 26, 2010, at 11:28 AM, Jed Brown wrote: > On Fri, 26 Mar 2010 11:18:08 -0400, John-Michael Fischer wrote: >> I have a vector of values I want to insert into a sparse matrix as a >> row. My vector contains non-zero values at their proper positions in >> the row and zeros elsewhere. > > How did you get this vector with all the explicit zeros? > > There is an option MAT_IGNORE_ZERO_ENTRIES, but I don't think you want > to use it here. > >> Alternatively, I was thinking I could assemble a vector with the >> indices of the non-zero elements, then use that in the call to >> MatSetValues with a row vector of non-zero values only - but I wanted >> to ask first since this seems like a common usage pattern for PETSc >> sparse Mats. > > Usually you only compute the nonzero entries. > > Jed > > From torres.pedrozpk at gmail.com Fri Mar 26 19:44:17 2010 From: torres.pedrozpk at gmail.com (Pedro Torres) Date: Fri, 26 Mar 2010 21:44:17 -0300 Subject: [petsc-users] How expensive are these functions? Message-ID: Hi, I'm evaluating how to handle different mappings and orderings, and I would like to get some sense of how expensive it will be to put them in a loop. Is it possible to order these functions by computational cost? AOApplicationToPetsc() AOPetscToApplication() ISGlobalToLocalMappingApply() ISLocalToGlobalMappingApply() Thank you very much, Pedro -------------- next part -------------- An HTML attachment was scrubbed...
URL: From knepley at gmail.com Fri Mar 26 19:55:03 2010 From: knepley at gmail.com (Matthew Knepley) Date: Fri, 26 Mar 2010 18:55:03 -0600 Subject: [petsc-users] How expensive are these functions? In-Reply-To: References: Message-ID: On Fri, Mar 26, 2010 at 6:44 PM, Pedro Torres wrote: > Hi, > > I'm evaluating how to handle differents mapping and ordering, and I would > like to have some concept of how expensive will be to put it in a loop. Is > it possible to order these functions according the computational cost??. > > AOApplicationToPetsc() > AOPetscToApplication() > > ISGlobalToLocalMappingApply() > ISLocalToGlobalMappingApply() > > These are all just direct array lookups into an index translation table. The AOs have a somewhat high memory cost in parallel, but the performance should be nearly identical for all of these. Matt > > Thanks you very > > > Pedro > > > -- What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead. -- Norbert Wiener -------------- next part -------------- An HTML attachment was scrubbed... URL: From torres.pedrozpk at gmail.com Fri Mar 26 20:09:16 2010 From: torres.pedrozpk at gmail.com (Pedro Torres) Date: Fri, 26 Mar 2010 22:09:16 -0300 Subject: [petsc-users] How expensive are these functions? In-Reply-To: References: Message-ID: Thank you, Matt. Regards Pedro 2010/3/26 Matthew Knepley > On Fri, Mar 26, 2010 at 6:44 PM, Pedro Torres wrote: > >> Hi, >> >> I'm evaluating how to handle differents mapping and ordering, and I would >> like to have some concept of how expensive will be to put it in a loop. Is >> it possible to order these functions according the computational cost??. >> >> AOApplicationToPetsc() >> AOPetscToApplication() >> >> ISGlobalToLocalMappingApply() >> >> ISLocalToGlobalMappingApply() >> >> > These are all just direct array lookups to an index translation table.
The > AO have a somewhat high memory > cost in parallel, but the performance should be near identical for all of > these. > > Matt > > >> >> Thanks you very >> >> >> Pedro >> >> >> > > > -- > What most experimenters take for granted before they begin their > experiments is infinitely more interesting than any results to which their > experiments lead. > -- Norbert Wiener > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdiso at ustc.edu Fri Mar 26 21:08:29 2010 From: gdiso at ustc.edu (Gong Ding) Date: Sat, 27 Mar 2010 10:08:29 +0800 Subject: [petsc-users] Improve of win32fe Message-ID: <16707556C1DA42C9A9C4EBD701838841@cogendaeda> Dear PETSc developers, For a long time I have been using win32fe as an excellent tool to port my code to Windows. However, I found that it slows down compilation too much. Yesterday I investigated the code and found that my_cygwin_conv_to_full_win32_path is the bottleneck. For each Unix-path-to-Windows-path conversion, a cygwin command 'cygpath -aw PATH' has to be executed. My code has many include paths and source files; as a result, compiling each .cc file involves ~20 calls of cygpath, and converting the paths of the .o files when linking takes more than 30 s. I'd like to replace the cygpath executions with the cygwin function cygwin_conv_path. Of course, this function only exists on a cygwin system, so I used cygwin-gcc to compile win32fe (with only some small changes). A new problem then appeared: the new win32fe cannot pick up the environment variables loaded in cygwin.bat. I then used a batch file to wrap win32fe.exe and set the environment variables for cl/icl in the batch file. Now everything is OK. I tested it with my code; the compile time is greatly reduced, from 2 h to 52 min. I'd like to share this method, but there are some license problems. First, I don't even know the license of win32fe. Second, since the new version of win32fe depends on cygwin (cygwin1.dll), it must be GPL.
I wonder if I can receive a notice that I can release it under the GPL. BTW, there seems to be a small bug at compilerfe.cpp lines 340-341: linkarg.push_front(outfile); OutputFlag = --linkarg.begin(); I think it should be OutputFlag = linkarg.begin(); Sincerely Gong Ding From dominik at itis.ethz.ch Sat Mar 27 04:31:58 2010 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Sat, 27 Mar 2010 10:31:58 +0100 Subject: [petsc-users] petsc freezes Message-ID: <4BADD08E.7040107@itis.ethz.ch> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi, After a long period with no problems, suddenly my program freezes with -np>1, right after: int main(int argc, char** argv) { int myid=0, np=1; PetscErrorCode ierr; ierr = PetscInitialize(&argc, &argv, (char *)0, (char *)0);CHKERRQ(ierr); // this is never reached // ... } This was with PETSc 3.0.0-p9. I have just downloaded and compiled the latest version (3.1-p0) to find out that "make ... test" freezes as well: > make PETSC_DIR=/home/domel/pack/petsc-3.1-p0 PETSC_ARCH=linux-gnu-c-release test Running test examples to verify correct installation C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 MPI process [FREEZE] I am using 64-bit Ubuntu 9.10. How can I investigate what is happening?
Regards, Dominik -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) iEYEARECAAYFAkut0IQACgkQ/EBMh9bUuzI98gCg5p2LA87ob4MI2xc22BqcEpmd FUgAnRWi2ZrRfc750GgnCfzltSPNdDm4 =QedY -----END PGP SIGNATURE----- From jed at 59A2.org Sat Mar 27 09:50:54 2010 From: jed at 59A2.org (Jed Brown) Date: Sat, 27 Mar 2010 15:50:54 +0100 Subject: [petsc-users] petsc freezes In-Reply-To: <4BADD08E.7040107@itis.ethz.ch> References: <4BADD08E.7040107@itis.ethz.ch> Message-ID: <8739zlrf75.fsf@59A2.org> On Sat, 27 Mar 2010 10:31:58 +0100, Dominik Szczerba wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Hi, > > After a long period with no problems, suddenly my program freezes with > - -np>1, right after: > > int main(int argc, char** argv) > { > int myid=0, np=1; > PetscErrorCode ierr; > ierr = PetscInitialize(&argc, &argv, (char *)0, (char *)0);CHKERRQ(ierr); > // this is never reached > // ... > } > > This was with Petsc 3.0.0-p9. I have just downloaded and compiled the > latest version (3.1-p0) to find out that "make ... test" freezes as well: > > > make PETSC_DIR=/home/domel/pack/petsc-3.1-p0 > PETSC_ARCH=linux-gnu-c-release test > Running test examples to verify correct installation > C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 > MPI process > > [FREEZE] > > I am using 64bit Ubuntu 9.10. How can I investigate what is happening? I suspect a problem with your MPI configuration. Can you run a simple MPI program in parallel? To investigate, run in a debugger or attach a debugger after it has hung. mpiexec -n 2 ./app -start_in_debugger OR mpiexec -n 2 xterm -e gdb --args ./app OR mpiexec -n 2 ./app & gdb -pid `pgrep app` What does the stack trace look like? 
Jed From dominik at itis.ethz.ch Mon Mar 29 09:16:09 2010 From: dominik at itis.ethz.ch (Dominik Szczerba) Date: Mon, 29 Mar 2010 16:16:09 +0200 Subject: [petsc-users] petsc freezes In-Reply-To: <8739zlrf75.fsf@59A2.org> References: <4BADD08E.7040107@itis.ethz.ch> <8739zlrf75.fsf@59A2.org> Message-ID: <4BB0B629.9040004@itis.ethz.ch> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hi Jed, Indeed, I could not run anything in parallel at all. Quite desperate, I had rebooted my PC, and all works again. Must have been some pending system update or hanging sockets etc. Thanks, Dominik Jed Brown wrote: > On Sat, 27 Mar 2010 10:31:58 +0100, Dominik Szczerba wrote: >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> Hi, >> >> After a long period with no problems, suddenly my program freezes with >> - -np>1, right after: >> >> int main(int argc, char** argv) >> { >> int myid=0, np=1; >> PetscErrorCode ierr; >> ierr = PetscInitialize(&argc, &argv, (char *)0, (char *)0);CHKERRQ(ierr); >> // this is never reached >> // ... >> } >> >> This was with Petsc 3.0.0-p9. I have just downloaded and compiled the >> latest version (3.1-p0) to find out that "make ... test" freezes as well: >> >>> make PETSC_DIR=/home/domel/pack/petsc-3.1-p0 >> PETSC_ARCH=linux-gnu-c-release test >> Running test examples to verify correct installation >> C/C++ example src/snes/examples/tutorials/ex19 run successfully with 1 >> MPI process >> >> [FREEZE] >> >> I am using 64bit Ubuntu 9.10. How can I investigate what is happening? > > I suspect a problem with your MPI configuration. Can you run a simple > MPI program in parallel? To investigate, run in a debugger or attach a > debugger after it has hung. > > mpiexec -n 2 ./app -start_in_debugger > > OR > > mpiexec -n 2 xterm -e gdb --args ./app > > OR > > mpiexec -n 2 ./app & > gdb -pid `pgrep app` > > What does the stack trace look like? 
> Jed > -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (GNU/Linux) iEYEARECAAYFAkuwtiMACgkQ/EBMh9bUuzJ60ACgvtHCruaU1Ol7K7hgeKWym/Dg Kg4AnjFfjtBs6r+DGHcNOmPDMYqKb13Y =j1pZ -----END PGP SIGNATURE----- From vyan2000 at gmail.com Mon Mar 29 11:17:10 2010 From: vyan2000 at gmail.com (Ryan Yan) Date: Mon, 29 Mar 2010 12:17:10 -0400 Subject: [petsc-users] Snes ex22.c Message-ID: Dear All, Could someone help to answer a question about snes/ex22.c? I ran the program twice with two different option sets. In the first run, I used: mpirun -np 1 ./ex22 -da_grid_x 10 -snes_monitor This invokes all the matrix_free_options at line 96. It shows that on all grid levels we have aggressive convergence. http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/snes/examples/tutorials/ex22.c.html 0 SNES Function norm 6.285393610547e-01 1 SNES Function norm 3.743759728398e-06 2 SNES Function norm 2.855651342153e-11 0 SNES Function norm 4.708310005567e-01 1 SNES Function norm 2.696769540300e-06 2 SNES Function norm 9.009613143190e-12 0 SNES Function norm 3.456250981375e-01 1 SNES Function norm 3.224179904192e-07 2 SNES Function norm 8.831908707996e-13 0 SNES Function norm 2.565116065949e-01 1 SNES Function norm 2.781184363175e-08 2 SNES Function norm 1.880008919009e-14 0 SNES Function norm 1.957654311750e-01 1 SNES Function norm 4.039546692288e-07 2 SNES Function norm 3.668272652964e-13 In the second run, I used the matrix-based method, with the following options mpirun -np 1 ./ex22 -da_grid_x 10 -use_matrix_based -ksp_type fgmres -snes_monitor (I use fgmres so that I can compare the result with the first run.) No significant convergence can be obtained in this run.
0 SNES Function norm 6.285393610547e-01 1 SNES Function norm 1.646083626629e-01 0 SNES Function norm 6.321514185793e-01 0 SNES Function norm 1.046493792681e+00 0 SNES Function norm 1.906667654680e+00 0 SNES Function norm 3.721097900524e+00 I have checked the ksp_view and snes_view for both cases. The only difference I can see is that the first run uses a matrix-free method for the MatVec in the GMRES solver (inside the multigrid preconditioner) for the error equation, while the second run uses a concrete matrix obtained by finite differences for that MatVec. A deeper check using -ksp_monitor shows that in the second run the KSP residual stagnates within each inner Newton loop. I guess my question is: Is this behavior normal, and how should I understand it? Or is there some other difference between the two runs that I did not see, apart from how the MatVec is performed? Thank you very much, Yan -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsmith at mcs.anl.gov Mon Mar 29 17:48:16 2010 From: bsmith at mcs.anl.gov (Barry Smith) Date: Mon, 29 Mar 2010 17:48:16 -0500 Subject: [petsc-users] Snes ex22.c In-Reply-To: References: Message-ID: Ryan, This is because the "explicit Jacobian" computed in ex22.c is not correct. Run with -snes_type test -dmmg_nlevels 1 This is because the code cannot determine the nonzero structure of the Jacobian and (also) hence a coloring of the Jacobian. If you run with -dmcomposite_dense_jacobian it will converge the same way (it treats the Jacobian as dense and hence gets the Jacobian computation correct). See the source code for DMComposite in src/dm/da/utils/pack.c The routine DMCompositeSetCoupling() provides support for the user code to indicate any additional coupling between the different parts of the Jacobian.
tutorials/multiphysics/mp.c shows an example of using this routine. The DMComposite stuff in PETSc is pretty rough. I will add a note to ex22.c explaining why it doesn't work with matrix based. Barry On Mar 29, 2010, at 11:17 AM, Ryan Yan wrote: > Dear All, > Could someone help to answer a question about snes/ex22.c? > I runed the program twice with two different option sets. > > In the first run, I used: > mpirun -np 1 ./ex22 -da_grid_x 10 -snes_monitor > > this will invoke all the matrix_free_options at line 96. It is > shown that on all grid levels we have a aggressive convergence. > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/snes/examples/tutorials/ex22.c.html > 0 SNES Function norm 6.285393610547e-01 > 1 SNES Function norm 3.743759728398e-06 > 2 SNES Function norm 2.855651342153e-11 > 0 SNES Function norm 4.708310005567e-01 > 1 SNES Function norm 2.696769540300e-06 > 2 SNES Function norm 9.009613143190e-12 > 0 SNES Function norm 3.456250981375e-01 > 1 SNES Function norm 3.224179904192e-07 > 2 SNES Function norm 8.831908707996e-13 > 0 SNES Function norm 2.565116065949e-01 > 1 SNES Function norm 2.781184363175e-08 > 2 SNES Function norm 1.880008919009e-14 > 0 SNES Function norm 1.957654311750e-01 > 1 SNES Function norm 4.039546692288e-07 > 2 SNES Function norm 3.668272652964e-13 > > > In the second run, I use the matrix based method, with the following > options > mpirun -np 1 ./ex22 -da_grid_x 10 -use_matrix_based -ksp_type fgmres > -snes_monitor > (The reason why I use fgmres is because I want to compare the result > with the first run.) > No significant convergence can be obtained at this run. > 0 SNES Function norm 6.285393610547e-01 > 1 SNES Function norm 1.646083626629e-01 > 0 SNES Function norm 6.321514185793e-01 > 0 SNES Function norm 1.046493792681e+00 > 0 SNES Function norm 1.906667654680e+00 > 0 SNES Function norm 3.721097900524e+00 > > > I have checked the ksp_view and snes_view for both cases. 
The only > difference seems to me is that in the first run it is using matrix > free method to achieve MatVec for the GMRES method(inside the > Multigrid preconditioner ) for the error equation, and in the second > run it is using a concrete matrix obtained from finite difference to > achieve MatVec for the GMRES method(inside the Multigrid > preconditioner ) for the error equation. > > An deeper check using -ksp_monitor will show that for the second > run, within each newton inner loop the ksp residual stagnates. > > I guess my question is: Is this behavior normal and how to > understand this? > Or is there any other difference that I did not see in those two > different test runs, except the way they use to achieve the MatVec? > > Thank you very much, > > Yan -------------- next part -------------- An HTML attachment was scrubbed... URL: From vyan2000 at gmail.com Mon Mar 29 18:49:26 2010 From: vyan2000 at gmail.com (Ryan Yan) Date: Mon, 29 Mar 2010 19:49:26 -0400 Subject: [petsc-users] Snes ex22.c In-Reply-To: References: Message-ID: Dear Barry, Thank you very much for the clarification and pointers to other utilities for problems with coupling. And indeed, -dmcomposite_dense_jacobian makes the code converge aggressively. Cheers, Yan On Mon, Mar 29, 2010 at 6:48 PM, Barry Smith wrote: > > Ryan, > > This is because the "explicit Jacobian" computed in ex22.c is not > correct. Run with -snes_type test -dmmg_nlevels 1 > > This is because the code cannot determine the nonzero structure of the > Jacobian and (also) hence a coloring of the Jacobian. > > If you run with -dmcomposite_dense_jacobian it will converge the same > way (it treats the Jacobian as dense hence does get the Jacobian computation > correct). See the source code for DMComposite in src/dm/da/utils/pack.c > > The routine DMCompositeSetCoupling() provides support for the user > code to indicate any additional coupling between the different parts of the > Jacobian.
tutorials/multiphysics/mp.c shows an example of using this > routine. > > The DMComposite stuff in PETSc is pretty rough. > > I will add a note to ex22.c explaining why it doesn't work with matrix > based. > > Barry > > > On Mar 29, 2010, at 11:17 AM, Ryan Yan wrote: > > Dear All, > Could someone help to answer a question about snes/ex22.c? > I runed the program twice with two different option sets. > > In the first run, I used: > mpirun -np 1 ./ex22 -da_grid_x 10 -snes_monitor > > this will invoke all the matrix_free_options at line 96. It is shown that > on all grid levels we have a aggressive convergence. > > http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/snes/examples/tutorials/ex22.c.html > 0 SNES Function norm 6.285393610547e-01 > 1 SNES Function norm 3.743759728398e-06 > 2 SNES Function norm 2.855651342153e-11 > 0 SNES Function norm 4.708310005567e-01 > 1 SNES Function norm 2.696769540300e-06 > 2 SNES Function norm 9.009613143190e-12 > 0 SNES Function norm 3.456250981375e-01 > 1 SNES Function norm 3.224179904192e-07 > 2 SNES Function norm 8.831908707996e-13 > 0 SNES Function norm 2.565116065949e-01 > 1 SNES Function norm 2.781184363175e-08 > 2 SNES Function norm 1.880008919009e-14 > 0 SNES Function norm 1.957654311750e-01 > 1 SNES Function norm 4.039546692288e-07 > 2 SNES Function norm 3.668272652964e-13 > > > In the second run, I use the matrix based method, with the following > options > mpirun -np 1 ./ex22 -da_grid_x 10 -use_matrix_based -ksp_type fgmres > -snes_monitor > (The reason why I use fgmres is because I want to compare the result with > the first run.) > No significant convergence can be obtained at this run. > 0 SNES Function norm 6.285393610547e-01 > 1 SNES Function norm 1.646083626629e-01 > 0 SNES Function norm 6.321514185793e-01 > 0 SNES Function norm 1.046493792681e+00 > 0 SNES Function norm 1.906667654680e+00 > 0 SNES Function norm 3.721097900524e+00 > > > I have checked the ksp_view and snes_view for both cases. 
> The only difference I can see is that in the first run it is using a
> matrix-free method to apply MatVec for the GMRES method (inside the
> Multigrid preconditioner) for the error equation, and in the second run it
> is using a concrete matrix obtained from finite differences to apply MatVec
> for the GMRES method (inside the Multigrid preconditioner) for the error
> equation.
>
> A deeper check using -ksp_monitor shows that for the second run,
> within each Newton inner loop the KSP residual stagnates.
>
> I guess my question is: is this behavior normal, and how should I
> understand it?
> Or is there any other difference that I did not see in those two different
> test runs, except the way they apply the MatVec?
>
> Thank you very much,
>
> Yan
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From torres.pedrozpk at gmail.com  Tue Mar 30 12:49:14 2010
From: torres.pedrozpk at gmail.com (Pedro Torres)
Date: Tue, 30 Mar 2010 14:49:14 -0300
Subject: [petsc-users] Configuring PETSc with Intel compiler.
Message-ID:

Hello,

I'm trying to compile PETSc with the Intel compiler and get the error
below. I can compile C/C++ source code without problems.

What am I doing wrong?
Thanks a lot

./config/configure.py PETSC_ARCH=linux-gnu-c++-nodebug-parmetis CXX=icc
FC=ifort --with-clanguage=C++ --download-parmetis=1
--download-parmetis=/home/ptorres/soft/ParMetis-dev-p3.tar.gz
--with-blas-lapack-dir=/opt/intel/mkl/10.2.4.032 --with-debugging=0
CXXOPTFLAGS='-O3 -xN -tpp7 -ipo'
===============================================================================
             Configuring PETSc to compile on your system
===============================================================================
TESTING: checkCxxCompiler from
config.setCompilers(config/BuildSystem/config/setCompilers.py:585)
*******************************************************************************
         UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for
details):
-------------------------------------------------------------------------------
C++ compiler you provided with -CXX=icc does not work
*******************************************************************************

--
Pedro Torres
GESAR/UERJ
Rua Fonseca Teles 121, São Cristóvão
Rio de Janeiro - Brasil
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: configure.log
Type: application/octet-stream
Size: 70606 bytes
Desc: not available
URL:

From balay at mcs.anl.gov  Tue Mar 30 12:50:11 2010
From: balay at mcs.anl.gov (Satish Balay)
Date: Tue, 30 Mar 2010 12:50:11 -0500 (CDT)
Subject: [petsc-users] Configuring PETSc with Intel compiler.
In-Reply-To:
References:
Message-ID:

You should use CXX=icpc

Satish

On Tue, 30 Mar 2010, Pedro Torres wrote:

> Hello,
>
> I'm trying to compile PETSc with the Intel compiler and get the error
> below. I can compile C/C++ source code without problems.
>
> What am I doing wrong?
> Thanks a lot
>
>
> ./config/configure.py PETSC_ARCH=linux-gnu-c++-nodebug-parmetis CXX=icc
> FC=ifort --with-clanguage=C++ --download-parmetis=1
> --download-parmetis=/home/ptorres/soft/ParMetis-dev-p3.tar.gz
> --with-blas-lapack-dir=/opt/intel/mkl/10.2.4.032 --with-debugging=0
> CXXOPTFLAGS='-O3 -xN -tpp7 -ipo'
> ===============================================================================
>             Configuring PETSc to compile on your system
> ===============================================================================
> TESTING: checkCxxCompiler from
> config.setCompilers(config/BuildSystem/config/setCompilers.py:585)
> *******************************************************************************
>          UNABLE to CONFIGURE with GIVEN OPTIONS (see configure.log for
> details):
> -------------------------------------------------------------------------------
> C++ compiler you provided with -CXX=icc does not work
> *******************************************************************************
>
>
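[Editor's note] Satish's fix amounts to changing one token in the configure
line: icc is Intel's C compiler driver, while icpc is the C++ driver that
--with-clanguage=C++ requires. A sketch of the corrected invocation, reusing
the options from Pedro's message verbatim (the ParMetis tarball and MKL paths
are specific to his machine and will differ elsewhere; the redundant
--download-parmetis=1 from the original command is dropped since the tarball
form is also given):

```shell
# Corrected configure line: only CXX changes, from icc to icpc.
# All paths below come from the original message and are machine-specific.
./config/configure.py PETSC_ARCH=linux-gnu-c++-nodebug-parmetis \
    CXX=icpc FC=ifort --with-clanguage=C++ \
    --download-parmetis=/home/ptorres/soft/ParMetis-dev-p3.tar.gz \
    --with-blas-lapack-dir=/opt/intel/mkl/10.2.4.032 \
    --with-debugging=0 CXXOPTFLAGS='-O3 -xN -tpp7 -ipo'
```

The same applies to C builds with MPI wrappers: the compiler given to
configure must match the language selected with --with-clanguage.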